...instead of passing its fields piecemeal.
PiperOrigin-RevId: 339345899
In VFS1's overlayfs, files use the device and inode number of the lower layer
inode if one exists, and the upper layer inode otherwise. The former behavior
is inefficient (requiring lower layer lookups even if the file exists and is
otherwise wholly determined by the upper layer), and somewhat dangerous if the
lower layer is also observable (since both the overlay and lower layer file
will have the same device and inode numbers and thus appear to be the same
file, despite being behaviorally different). VFS2 overlayfs imitates Linux
overlayfs (in its default configuration) instead; it always uses the inode
number from the originating layer, but synthesizes a unique device number for
directories and another device number for non-directory files that have not
been copied-up.
As it turns out, the latter is insufficient (in VFS2, and possibly Linux as
well), because a given layer may include files with different device numbers.
If two distinct files on such a layer have device number X and Y respectively,
but share inode number Z, then the overlay will map both files to some private
device number X' and inode number Z, potentially confusing applications. Fix
this by assigning synthetic device numbers based on the lower layer's device
number, rather than the lower layer's vfs.Filesystem.
PiperOrigin-RevId: 339300341
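As an illustration of the fix, here is a minimal sketch (with hypothetical names, not the actual VFS2 overlay code) of allocating synthetic device minors keyed by the lower layer's device number rather than by its vfs.Filesystem:

// Hypothetical sketch: derive a synthetic device number per lower-layer
// device number, so two files on the same layer with equal inode numbers
// but different device numbers do not collide in the overlay.
package overlayexample

import "sync"

type layerDevNumber struct {
    major uint32
    minor uint32
}

type filesystem struct {
    mu sync.Mutex
    // lowerDevMinors maps each lower-layer device number to a synthetic
    // minor device number private to the overlay.
    lowerDevMinors map[layerDevNumber]uint32
    nextDevMinor   uint32
}

// getLowerDevMinor returns the synthetic minor device number used for files
// originating from the lower layer identified by (major, minor).
func (fs *filesystem) getLowerDevMinor(major, minor uint32) uint32 {
    fs.mu.Lock()
    defer fs.mu.Unlock()
    if fs.lowerDevMinors == nil {
        fs.lowerDevMinors = make(map[layerDevNumber]uint32)
    }
    key := layerDevNumber{major, minor}
    m, ok := fs.lowerDevMinors[key]
    if !ok {
        m = fs.nextDevMinor
        fs.nextDevMinor++
        fs.lowerDevMinors[key] = m
    }
    return m
}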
Updates #3921
PiperOrigin-RevId: 339195417
PiperOrigin-RevId: 339166854
Also change verity test to use a context with an active task. This is
required to delete/rename the file in the underlying file system.
PiperOrigin-RevId: 339146445
Much like the VFS2 gofer client, kernfs now caches dentries as well. The size
of the LRU cache is configurable via mount options.
The reference semantics are adopted from the gofer client dentry.
Only sysfs and procfs use this LRU cache. The rest of the kernfs users (devpts,
fusefs, host, pipefs, sockfs) still use the no-cache approach.
PiperOrigin-RevId: 339139835
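The caching scheme can be pictured with the following simplified sketch; the types and the mount-option plumbing are illustrative, not kernfs's real API:

// Illustrative sketch only: a fixed-capacity LRU of dentries that hold no
// remaining references. Entries evicted from the LRU are destroyed.
package kernfsexample

import "container/list"

type dentry struct{ name string }

type dentryCache struct {
    maxSize uint64 // configurable via a mount option in the real code
    lru     *list.List
    index   map[*dentry]*list.Element
}

func newDentryCache(maxSize uint64) *dentryCache {
    return &dentryCache{
        maxSize: maxSize,
        lru:     list.New(),
        index:   make(map[*dentry]*list.Element),
    }
}

// cache records d as unreferenced and evicts the least recently used dentry
// if the cache is over capacity.
func (c *dentryCache) cache(d *dentry, destroy func(*dentry)) {
    if e, ok := c.index[d]; ok {
        c.lru.MoveToFront(e)
        return
    }
    c.index[d] = c.lru.PushFront(d)
    if uint64(c.lru.Len()) > c.maxSize {
        victim := c.lru.Remove(c.lru.Back()).(*dentry)
        delete(c.index, victim)
        destroy(victim)
    }
}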
Control messages collected when peeking into a socket were being leaked.
PiperOrigin-RevId: 339114961
PiperOrigin-RevId: 338847417
Updates #1486.
PiperOrigin-RevId: 338832085
Fixes #4427, #4428
PiperOrigin-RevId: 338805047
PiperOrigin-RevId: 338798433
PiperOrigin-RevId: 338784921
Inode number consistency checks are now skipped in save/restore tests for
reasons described in greatest detail in StatTest.StateDoesntChangeAfterRename.
They pass in VFS1 due to the bug described in new test case
SimpleStatTest.DifferentFilesHaveDifferentDeviceInodeNumberPairs.
Fixes #1663
PiperOrigin-RevId: 338776148
PiperOrigin-RevId: 338756277
PiperOrigin-RevId: 338728070
- When encodeState.resolve() determines that the resolved reflect.Value is
contained by a previously-resolved object, set wire.Ref.Type to the
containing object's type (existing.obj.Type()) rather than the contained
value's type (obj.Type()).
- When encodeState.resolve() determines that the resolved reflect.Value
contains a previously-resolved object, handle cases where the new object
contains *multiple* previously-resolved objects. (This may cause
previously-allocated object IDs to become unused; to facilitate this, change
encodeState.pending to a map, and change the wire format to prefix each
object with its object ID.)
- Add encodeState.encodedStructs to avoid redundant encoding of structs, since
deduplication of objects via encodeState.resolve() doesn't work for objects
instantiated by StateSave() and passed to SaveValue() (i.e. fields tagged
`state:".(whatever)"`).
- Make unexported array fields deserializable via slices that refer to them by
casting away their unexportedness in decodeState.decodeObject().
Updates #1663
PiperOrigin-RevId: 338727687
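A condensed, hypothetical sketch of the pending-map idea from the second bullet: because deduplication can retire already-allocated IDs, pending objects live in a map and each encoded object carries its own ID, so holes in the ID space are harmless to the decoder. Names here are illustrative and not the state package's real wire format:

package stateexample

type objectID uint64

type encoder struct {
    nextID  objectID
    pending map[objectID][]byte // encoded-but-unwritten objects, keyed by ID
}

// add assigns the next ID to an encoded object and records it as pending.
func (e *encoder) add(data []byte) objectID {
    if e.pending == nil {
        e.pending = make(map[objectID][]byte)
    }
    e.nextID++
    e.pending[e.nextID] = data
    return e.nextID
}

// discard drops an object whose ID turned out to be redundant, e.g. because
// it was found to be contained in a previously-resolved object. With a map
// (rather than a dense slice) this simply leaves a hole in the ID space.
func (e *encoder) discard(id objectID) {
    delete(e.pending, id)
}

// flush writes every remaining object prefixed with its own ID, so the
// decoder never needs the IDs to be dense or sequential.
func (e *encoder) flush(write func(id objectID, data []byte)) {
    for id, data := range e.pending {
        write(id, data)
    }
}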
Wait an additional RetransmitTimer duration after the last probe before
transitioning to Failed. The previous implementation transitioned to Failed
immediately after sending the last probe, which was erroneous behavior.
PiperOrigin-RevId: 338723794
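The corrected timing can be sketched roughly as follows (hypothetical neighbor-entry fields, not the real NUD state machine): the entry becomes Failed only after waiting a full RetransmitTimer past the final probe:

// Hypothetical sketch of the corrected behavior: the entry only becomes
// Failed if no confirmation arrives within retransmitTimer of the *last*
// probe, instead of immediately after sending it.
package nudexample

import "time"

type entry struct {
    retransmitTimer time.Duration
    maxProbes       int
    sendProbe       func()
    confirmed       <-chan struct{} // closed when reachability is confirmed
    failed          func()
}

func (e *entry) probeLoop() {
    for i := 0; i < e.maxProbes; i++ {
        e.sendProbe()
        select {
        case <-e.confirmed:
            return // resolution succeeded
        case <-time.After(e.retransmitTimer):
            // No reply yet; send the next probe, or fall through after
            // the final one.
        }
    }
    // Only now, a full retransmitTimer after the final probe, transition
    // to Failed.
    e.failed()
}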
Read is implemented by PRead, with offset obtained from Seek.
PiperOrigin-RevId: 338718587
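A minimal sketch of that pattern with hypothetical types: Read performs a PRead at the offset maintained by Seek and then advances it:

// Illustrative sketch only: an FD whose Read is implemented in terms of
// PRead at the current offset, with the offset maintained by Seek.
package fdexample

import "sync"

type fileDescription struct {
    mu  sync.Mutex
    off int64
    // pread reads into dst at the given offset without using or moving
    // the file offset.
    pread func(dst []byte, offset int64) (int, error)
}

// Seek sets the file offset (SEEK_SET only, for brevity).
func (fd *fileDescription) Seek(offset int64) int64 {
    fd.mu.Lock()
    defer fd.mu.Unlock()
    fd.off = offset
    return fd.off
}

// Read reads at the current offset via pread and then advances the offset.
func (fd *fileDescription) Read(dst []byte) (int, error) {
    fd.mu.Lock()
    defer fd.mu.Unlock()
    n, err := fd.pread(dst, fd.off)
    fd.off += int64(n)
    return n, err
}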
Added the following fields in kernfs.InodeAttr:
- blockSize
- atime
- mtime
- ctime
Also resolved all TODOs for #1193.
Fixes #1193
PiperOrigin-RevId: 338714527
Drain the notification channel after the first accept. If the first accept
never blocked, its notification is still sitting in the channel, so the second
accept would wait on the channel and return immediately because of the stale
notification, even though there is no connection in the accept queue yet.
PiperOrigin-RevId: 338710062
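The drain is essentially a non-blocking receive; a small sketch, assuming a notification channel of capacity 1 (names are illustrative):

// Illustrative sketch: discard a stale wake-up left in a notification
// channel (capacity 1) so the next accept does not return prematurely.
package acceptexample

func drainNotification(notifyCh chan struct{}) {
    select {
    case <-notifyCh:
        // A notification from the previous accept was still pending; drop it.
    default:
        // Channel already empty; nothing to do.
    }
}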
Even if the child dentry has been cached, we should still check whether
the file and the corresponding Merkle tree file exist as expected. This
ensures that we can detect deletion/renaming of files that have been
previously enabled and opened.
Also make all verification failures return EIO. This helps to test
verification failures.
PiperOrigin-RevId: 338709055
The SO_ACCEPTCONN option is used only on getsockopt(). When this option is
specified, getsockopt() indicates whether socket listening is enabled for
the socket. A value of zero indicates that socket listening is disabled;
non-zero that it is enabled.
PiperOrigin-RevId: 338703206
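For reference, this is how the option is typically queried on Linux from Go via golang.org/x/sys/unix (a standalone example, not gVisor-internal code):

// Check whether a socket is listening by reading SO_ACCEPTCONN.
package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func isListening(fd int) (bool, error) {
    v, err := unix.GetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_ACCEPTCONN)
    if err != nil {
        return false, err
    }
    return v != 0, nil // zero: not listening; non-zero: listening
}

func main() {
    fd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
    if err != nil {
        panic(err)
    }
    defer unix.Close(fd)

    before, _ := isListening(fd)
    if err := unix.Listen(fd, 1); err != nil {
        panic(err)
    }
    after, _ := isListening(fd)
    fmt.Println(before, after) // expected: false true
}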
Previously, the NIC local address used when completing link resolution
was held in the neighbor entry. A neighbor is not identified by any
NIC local address so remove it.
PiperOrigin-RevId: 338699695
Earlier the count was dropped only after calling e.deliverAccepted. This led to
an issue where there were no connections in SYN-RCVD state for the listening
endpoint, yet e.synRcvdCount was not zero, because it was only reduced after
handleSynSegment returned, which in turn happened only after deliverAccepted
returned.
This issue is seen when the Nth SYN for a listen backlog of size N (the SYN
that would fill the backlog) is occasionally dropped. It happens when the new
SYN arrives after the previously completed endpoint has been delivered to the
accept queue but synRcvdCount has not yet been decremented, because the
goroutine running handleSynSegment has not yet completed.
PiperOrigin-RevId: 338690646
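The ordering fix can be sketched as follows (hypothetical names, not the real TCP endpoint code): decrement the SYN-RCVD count as soon as the handshake completes, before the possibly slow hand-off to the accept queue:

// Illustrative sketch only: decrement the SYN-RCVD counter before the
// potentially slow delivery to the accept queue, not after it returns.
package listenexample

import "sync/atomic"

type listener struct {
    synRcvdCount int64
    backlog      int64
    deliver      func(ep interface{}) // hands a completed endpoint to the accept queue
}

// handleCompletedHandshake is called when a passive connection finishes its
// handshake.
func (l *listener) handleCompletedHandshake(ep interface{}) {
    // Drop the count first: the connection is no longer in SYN-RCVD.
    atomic.AddInt64(&l.synRcvdCount, -1)
    // Delivery may take a while; a new SYN arriving in that window sees
    // the correct count and is not spuriously dropped.
    l.deliver(ep)
}

// canHandleNewSyn reports whether another SYN fits in the backlog.
func (l *listener) canHandleNewSyn() bool {
    return atomic.LoadInt64(&l.synRcvdCount) < l.backlog
}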
Our current reference leak checker uses finalizers to verify whether an object
has reached zero references before it is garbage collected. There are multiple
problems with this mechanism, so a rewrite is in order.
With finalizers, there is no way to guarantee that a finalizer will run before
the program exits. When an unreachable object with a finalizer is garbage
collected, its finalizer will be added to a queue and run asynchronously. The
best we can do is run garbage collection upon sandbox exit to make sure that
all finalizers are enqueued.
Furthermore, if there is a chain of finalized objects, e.g. A points to B
points to C, garbage collection needs to run multiple times before all of the
finalizers are enqueued. The first GC run will register the finalizer for A but
not free it. It takes another GC run to free A, at which point B's finalizer
can be registered. As a result, we need to run GC as many times as the length
of the longest such chain to have a somewhat reliable leak checker.
Finally, a cyclical chain of structs pointing to one another will never be
garbage collected if a finalizer is set. This is a well-known issue with Go
finalizers (https://github.com/golang/go/issues/7358). Using leak checking on
filesystem objects that produce cycles will not work and even result in memory
leaks.
The new leak checker stores reference counted objects in a global map when
leak check is enabled and removes them once they are destroyed. At sandbox
exit, any remaining objects in the map are considered leaked. This provides
a deterministic way of detecting leaks without relying on the complexities of
finalizers and garbage collection.
This approach has several benefits over the former, including:
- Always detects leaks of objects that should be destroyed very close to
sandbox exit. The old checker very rarely detected these leaks, because it
relied on garbage collection to be run in a short window of time.
- Panics if we forgot to enable leak check on a ref-counted object (we will try
to remove it from the map when it is destroyed, but it will never have been
added).
- Can store extra logging information in the map values without adding to the
size of the ref count struct itself. With the size of just an int64, the ref
count object remains compact, meaning frequent operations like IncRef/DecRef
are more cache-efficient.
- Can aggregate leak results in a single report after the sandbox exits.
Instead of having warnings littered in the log, which were
non-deterministically triggered by garbage collection, we can print all
warning messages at once. Note that this could also be a limitation--the
sandbox must exit properly for leaks to be detected.
Some basic benchmarking indicates that this change does not significantly
affect performance when leak checking is enabled, which is understandable
since registering/unregistering is only done once for each filesystem object.
Updates #1486.
PiperOrigin-RevId: 338685972
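A condensed sketch of the scheme (simplified; the real refs package records more metadata per object): register on creation when leak checking is enabled, unregister on destruction, and report whatever is still registered at sandbox exit:

// Illustrative sketch of map-based leak checking; not the real refs API.
package leakexample

import (
    "fmt"
    "sync"
)

type CheckedObject interface {
    // LeakMessage returns a message to log if the object leaks.
    LeakMessage() string
}

var (
    mu   sync.Mutex
    live = map[CheckedObject]struct{}{}
)

// Register is called when a reference-counted object is created with leak
// checking enabled.
func Register(obj CheckedObject) {
    mu.Lock()
    defer mu.Unlock()
    live[obj] = struct{}{}
}

// Unregister is called when the object's reference count reaches zero. It
// panics if the object was never registered, catching objects for which we
// forgot to enable leak checking.
func Unregister(obj CheckedObject) {
    mu.Lock()
    defer mu.Unlock()
    if _, ok := live[obj]; !ok {
        panic(fmt.Sprintf("object %v was not registered for leak checking", obj))
    }
    delete(live, obj)
}

// DoLeakReport is called at sandbox exit and prints one aggregated report.
func DoLeakReport() {
    mu.Lock()
    defer mu.Unlock()
    for obj := range live {
        fmt.Println("LEAK:", obj.LeakMessage())
    }
}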
Previously a link endpoint was passed to
stack.LinkAddressResolver.LinkAddressRequest. With this change,
implementations that want a route for the link address request may
find one through the stack. Other implementations that want to send
a packet without a route may continue to do so using the network
interface directly.
Test: - arp_test.TestLinkAddressRequest
- ipv6.TestLinkAddressRequest
PiperOrigin-RevId: 338577474
Also enforce the minimum MTU for IPv4 and IPv6, and discard packets if the
minimum is not met.
PiperOrigin-RevId: 338404225
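A hedged sketch of such a check; the constants follow the usual conventions (576 bytes from RFC 791 for IPv4, 1280 bytes from RFC 8200 for IPv6) and the names are illustrative, not gVisor's:

// Illustrative sketch: reject a link MTU below the protocol's minimum.
package mtuexample

import "errors"

const (
    // ipv4MinimumMTU follows RFC 791's 576-byte minimum reassembly buffer
    // size, the value commonly used as IPv4's minimum MTU.
    ipv4MinimumMTU = 576
    // ipv6MinimumMTU is the minimum link MTU required by RFC 8200.
    ipv6MinimumMTU = 1280
)

var errMTUTooSmall = errors.New("link MTU below protocol minimum")

func checkMTU(isIPv6 bool, linkMTU uint32) error {
    min := uint32(ipv4MinimumMTU)
    if isIPv6 {
        min = ipv6MinimumMTU
    }
    if linkMTU < min {
        return errMTUTooSmall
    }
    return nil
}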
Reported-by: syzbot+c0e175d2b10708314eb3@syzkaller.appspotmail.com
PiperOrigin-RevId: 338386575