|
Fixes #4427, #4428
PiperOrigin-RevId: 338805047
|
|
PiperOrigin-RevId: 338798433
|
|
Also let the GitHub bug reviver detect both in TODOs.
PiperOrigin-RevId: 338785089
|
|
PiperOrigin-RevId: 338784921
|
|
PiperOrigin-RevId: 338780793
|
|
Inode number consistency checks are now skipped in save/restore tests for
reasons described in greatest detail in StatTest.StateDoesntChangeAfterRename.
They pass in VFS1 due to the bug described in the new test case
SimpleStatTest.DifferentFilesHaveDifferentDeviceInodeNumberPairs.
Fixes #1663
PiperOrigin-RevId: 338776148
|
|
Handle "Resource temporarily unavailable" EAGAIN errors with a select
call before calling recvmsg.
Also rename the similar helper from "RecvMsgTimeout" to "RecvTimeout",
because it calls "recv".
PiperOrigin-RevId: 338761695
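A minimal Go sketch of the wait-then-receive pattern, using golang.org/x/sys/unix; the real helper lives in the C++ syscall test library, so recvTimeout and its signature here are purely illustrative:
```go
package example

import (
	"errors"
	"time"

	"golang.org/x/sys/unix"
)

// recvTimeout waits for fd to become readable before receiving, so that a
// non-blocking socket does not surface a spurious EAGAIN.
func recvTimeout(fd int, buf []byte, timeout time.Duration) (int, error) {
	pfds := []unix.PollFd{{Fd: int32(fd), Events: unix.POLLIN}}
	n, err := unix.Poll(pfds, int(timeout.Milliseconds()))
	if err != nil {
		return 0, err
	}
	if n == 0 {
		return 0, errors.New("timed out waiting for data")
	}
	// The socket is readable now; the receive should not return EAGAIN.
	rn, _, err := unix.Recvfrom(fd, buf, 0)
	return rn, err
}
```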
|
|
This caused test flakes.
PiperOrigin-RevId: 338758723
|
|
PiperOrigin-RevId: 338756277
|
|
There are too many dependencies.
PiperOrigin-RevId: 338746264
|
|
PiperOrigin-RevId: 338739277
|
|
PiperOrigin-RevId: 338728070
|
|
- When encodeState.resolve() determines that the resolved reflect.Value is
contained by a previously-resolved object, set wire.Ref.Type to the
containing object's type (existing.obj.Type()) rather than the contained
value's type (obj.Type()).
- When encodeState.resolve() determines that the resolved reflect.Value
contains a previously-resolved object, handle cases where the new object
contains *multiple* previously-resolved objects. (This may cause
previously-allocated object IDs to become unused; to facilitate this, change
encodeState.pending to a map, and change the wire format to prefix each
object with its object ID.)
- Add encodeState.encodedStructs to avoid redundant encoding of structs, since
deduplication of objects via encodeState.resolve() doesn't work for objects
instantiated by StateSave() and passed to SaveValue() (i.e. fields tagged
`state:".(whatever)"`).
- Make unexported array fields deserializable via slices that refer to them by
casting away their unexportedness in decodeState.decodeObject().
Updates #1663
PiperOrigin-RevId: 338727687
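A rough sketch of the pending-map and object-ID-prefix idea from the second item; objectID, pendingObject, and this encodeState are simplified stand-ins rather than the real pkg/state types:
```go
package example

import (
	"encoding/binary"
	"io"
)

type objectID uint64

type pendingObject struct {
	encoded []byte
}

type encodeState struct {
	// pending is a map rather than a slice so that an ID allocated for an
	// object later found to be contained in another object can simply be
	// dropped, leaving a gap in the ID space.
	pending map[objectID]*pendingObject
}

// flush writes each pending object prefixed with its ID, so the decoder does
// not require a dense, ordered ID space.
func (es *encodeState) flush(w io.Writer) error {
	for id, obj := range es.pending {
		var hdr [8]byte
		binary.LittleEndian.PutUint64(hdr[:], uint64(id))
		if _, err := w.Write(hdr[:]); err != nil {
			return err
		}
		if _, err := w.Write(obj.encoded); err != nil {
			return err
		}
	}
	return nil
}
```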
|
|
Wait an additional RetransmitTimer duration after the last probe before
transitioning to Failed. The previous implementation transitioned to Failed
immediately after sending the last probe, which is incorrect.
PiperOrigin-RevId: 338723794
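A hedged sketch of the corrected timing, with illustrative state and field names rather than the actual neighborEntry internals:
```go
package example

import (
	"sync"
	"time"
)

type nudState int

const (
	probe nudState = iota
	failed
)

type neighborEntry struct {
	mu              sync.Mutex
	state           nudState
	retransmitTimer time.Duration
	timer           *time.Timer
}

func (e *neighborEntry) sendLastProbe(send func()) {
	send()
	// Previously the entry moved to Failed right here. Instead, give the
	// peer one full retransmit interval to answer the final probe.
	e.timer = time.AfterFunc(e.retransmitTimer, func() {
		e.mu.Lock()
		defer e.mu.Unlock()
		e.state = failed
	})
}
```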
|
|
Read is implemented by PRead, with offset obtained from Seek.
PiperOrigin-RevId: 338718587
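A minimal illustration of the same layering in plain io terms, not the actual vfs code:
```go
package example

import (
	"io"
	"sync"
)

type offsetReader struct {
	mu  sync.Mutex
	ra  io.ReaderAt
	off int64
}

// Read takes the current offset (what Seek would report), performs a
// positional read there, and advances the offset by the bytes consumed.
func (r *offsetReader) Read(p []byte) (int, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	n, err := r.ra.ReadAt(p, r.off)
	r.off += int64(n)
	return n, err
}
```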
|
|
Added the following fields in kernfs.InodeAttr:
- blockSize
- atime
- mtime
- ctime
Also resolved all TODOs for #1193.
Fixes #1193
PiperOrigin-RevId: 338714527
|
|
Drain the notification channel after the first accept: if the first accept
never blocked, its notification is still sitting in the channel, which causes
the second accept to fail because it waits on the channel and returns
immediately on the stale notification even though there is no connection in
the accept queue yet.
PiperOrigin-RevId: 338710062
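A small sketch of the drain, where notifyCh stands in for the test's notification channel:
```go
package example

func drainNotifications(notifyCh <-chan struct{}) {
	for {
		select {
		case <-notifyCh:
			// Discard a stale wakeup left over from an accept that never
			// had to block.
		default:
			return
		}
	}
}
```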
|
|
Even if the child dentry has been cached, we should still check whether
the file and the corresponding Merkle tree file exist as expected. This
ensures that we can detect deletion/renaming of files that have been
previously enabled and opened.
Also make all verification failures return EIO. This makes it easier to test
verification failures.
PiperOrigin-RevId: 338709055
|
|
Also updated a test which only fails with VFS1.
PiperOrigin-RevId: 338704940
|
|
The SO_ACCEPTCONN option is used only on getsockopt(). When this option is
specified, getsockopt() indicates whether socket listening is enabled for
the socket. A value of zero indicates that socket listening is disabled;
non-zero that it is enabled.
PiperOrigin-RevId: 338703206
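An illustrative query of this option via golang.org/x/sys/unix (not the sentry's implementation):
```go
package example

import "golang.org/x/sys/unix"

// isListening reports whether the socket behind fd has listening enabled,
// using the zero/non-zero convention described above.
func isListening(fd int) (bool, error) {
	v, err := unix.GetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_ACCEPTCONN)
	if err != nil {
		return false, err
	}
	return v != 0, nil
}
```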
|
|
Previously, the NIC local address used when completing link resolution
was held in the neighbor entry. A neighbor is not identified by any
NIC local address, so remove it.
PiperOrigin-RevId: 338699695
|
|
Earlier the count was dropped only after calling e.deliverAccepted. This led to
an issue where there were no connections in SYN-RCVD state for the listening
endpoint but e.synRcvdCount was not zero, because it was reduced only when
handleSynSegment returned, which happened only after deliverAccepted returned.
As a result, the Nth SYN for a listen backlog of size N (the one that would
fill the backlog) was occasionally dropped. This happened when the new SYN
arrived after the previously completed endpoint had been delivered to the
accept queue but before synRcvdCount was decremented, because the goroutine
running handleSynSegment had not yet completed.
PiperOrigin-RevId: 338690646
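A simplified sketch of the ordering fix; the listener type and its fields are stand-ins for the TCP endpoint internals:
```go
package example

import "sync"

type listener struct {
	mu           sync.Mutex
	synRcvdCount int
}

func (l *listener) handleHandshakeComplete(deliverAccepted func()) {
	// Free the backlog slot first...
	l.mu.Lock()
	l.synRcvdCount--
	l.mu.Unlock()
	// ...then deliver to the accept queue; a SYN arriving in this window is
	// no longer dropped against a stale count.
	deliverAccepted()
}
```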
|
|
Our current reference leak checker uses finalizers to verify whether an object
has reached zero references before it is garbage collected. There are multiple
problems with this mechanism, so a rewrite is in order.
With finalizers, there is no way to guarantee that a finalizer will run before
the program exits. When an unreachable object with a finalizer is garbage
collected, its finalizer will be added to a queue and run asynchronously. The
best we can do is run garbage collection upon sandbox exit to make sure that
all finalizers are enqueued.
Furthermore, if there is a chain of finalized objects, e.g. A points to B
points to C, garbage collection needs to run multiple times before all of the
finalizers are enqueued. The first GC run will register the finalizer for A but
not free it. It takes another GC run to free A, at which point B's finalizer
can be registered. As a result, we need to run GC as many times as the length
of the longest such chain to have a somewhat reliable leak checker.
Finally, a cyclical chain of structs pointing to one another will never be
garbage collected if a finalizer is set. This is a well-known issue with Go
finalizers (https://github.com/golang/go/issues/7358). Using leak checking on
filesystem objects that produce cycles does not work and can even result in
memory leaks.
The new leak checker stores reference-counted objects in a global map while
leak checking is enabled and removes them once they are destroyed. At sandbox
exit, any objects remaining in the map are considered leaked. This provides
a deterministic way of detecting leaks without relying on the complexities of
finalizers and garbage collection.
This approach has several benefits over the former, including:
- Always detects leaks of objects that should be destroyed very close to
sandbox exit. The old checker very rarely detected these leaks, because it
relied on garbage collection being run within a short window of time.
- Panics if we forget to enable leak checking on a ref-counted object (we will
try to remove it from the map when it is destroyed, but it will never have
been added).
- Can store extra logging information in the map values without adding to the
size of the ref count struct itself. With the size of just an int64, the ref
count object remains compact, meaning frequent operations like IncRef/DecRef
are more cache-efficient.
- Can aggregate leak results in a single report after the sandbox exits.
Instead of having warnings littered in the log, which were
non-deterministically triggered by garbage collection, we can print all
warning messages at once. Note that this could also be a limitation--the
sandbox must exit properly for leaks to be detected.
Some basic benchmarking indicates that this change does not significantly
affect performance when leak checking is enabled, which is understandable
since registering/unregistering is only done once for each filesystem object.
Updates #1486.
PiperOrigin-RevId: 338685972
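A minimal sketch of the map-based checker described above; the registry shown is illustrative, not the actual package:
```go
package example

import (
	"fmt"
	"sync"
)

var (
	liveMu  sync.Mutex
	live    = map[interface{}]string{} // object -> extra logging info
	enabled = true
)

// Register records a reference-counted object while leak checking is enabled.
func Register(obj interface{}, desc string) {
	if !enabled {
		return
	}
	liveMu.Lock()
	defer liveMu.Unlock()
	live[obj] = desc
}

// Unregister removes an object when it is destroyed; panicking here catches
// objects that never enabled leak checking.
func Unregister(obj interface{}) {
	if !enabled {
		return
	}
	liveMu.Lock()
	defer liveMu.Unlock()
	if _, ok := live[obj]; !ok {
		panic(fmt.Sprintf("object %v was never registered", obj))
	}
	delete(live, obj)
}

// DoLeakCheck runs at sandbox exit; anything still registered has leaked, and
// all warnings are aggregated into a single report.
func DoLeakCheck() {
	liveMu.Lock()
	defer liveMu.Unlock()
	if len(live) == 0 {
		return
	}
	msg := fmt.Sprintf("%d leaked objects:\n", len(live))
	for _, desc := range live {
		msg += "  " + desc + "\n"
	}
	fmt.Print(msg)
}
```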
|
|
Subcontainers are only configured when the container starts. However, because
start doesn't load the spec, flag annotations that may override flags were
not being applied to the configuration.
Updates #3494
PiperOrigin-RevId: 338610953
|
|
Also fixes the docker_image bazel rule and the website-server make target.
Fixes #3273
PiperOrigin-RevId: 338606668
|
|
Previously a link endpoint was passed to
stack.LinkAddressResolver.LinkAddressRequest. With this change,
implementations that want a route for the link address request may
find one through the stack. Other implementations that want to send
a packet without a route may continue to do so using the network
interface directly.
Test: - arp_test.TestLinkAddressRequest
- ipv6.TestLinkAddressRequest
PiperOrigin-RevId: 338577474
|
|
PiperOrigin-RevId: 338517024
|
|
Also enforce the minimum MTU for IPv4 and IPv6, and discard packets if the
minimum is not met.
PiperOrigin-RevId: 338404225
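A hedged sketch of the minimum-MTU check, using the well-known protocol minimums (576 bytes for IPv4 per RFC 791, 1280 bytes for IPv6 per RFC 8200); the wiring around it is illustrative:
```go
package example

import "fmt"

const (
	ipv4MinimumMTU = 576
	ipv6MinimumMTU = 1280
)

// checkMTU rejects a link whose MTU is below the protocol minimum, mirroring
// the discard-on-violation behavior described above.
func checkMTU(isIPv6 bool, linkMTU uint32) error {
	minMTU := uint32(ipv4MinimumMTU)
	if isIPv6 {
		minMTU = ipv6MinimumMTU
	}
	if linkMTU < minMTU {
		return fmt.Errorf("link MTU %d below protocol minimum %d: discard", linkMTU, minMTU)
	}
	return nil
}
```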
|
|
Reported-by: syzbot+c0e175d2b10708314eb3@syzkaller.appspotmail.com
PiperOrigin-RevId: 338386575
|
|
PiperOrigin-RevId: 338372736
|
|
PiperOrigin-RevId: 338321125
|
|
The python:test_subprocess test enumerates all possible file descriptors and
times out if the limit is too high.
Docker is known to set this limit to 1M by default, whereas on native Linux
the limit is usually between 1K and 32K.
PiperOrigin-RevId: 338197239
|
|
When the commit description contains "commit ", it was wrongly identified
as a commit hash. As a fix, this change only treats lines that begin with
"commit " as hashes, since the description is always indented by `git log`.
Copybara uses merge commits for external contributors, so not all
commits contain a Piper ID. Add `--first-parent` to `git log` so that it
only lists commits that contain a Piper ID.
PiperOrigin-RevId: 338183812
|
|
PiperOrigin-RevId: 338168977
|
|
bhaskerh@ fixed a bunch of the EADDRINUSE flakes in #3662 so we should
unexclude them.
I have also tested other flaky tests on this list and removed those that do
not flake anymore.
PiperOrigin-RevId: 338158545
|
|
PiperOrigin-RevId: 338156438
|
|
//pkg/tcpip/stack:stack_x_test_nogo
//pkg/tcpip/transport/raw:raw_nogo
PiperOrigin-RevId: 338153265
|
|
PiperOrigin-RevId: 338126491
|
|
PiperOrigin-RevId: 338072845
|
|
Reported-by: syzbot+7406eef8247cb5a20855@syzkaller.appspotmail.com
PiperOrigin-RevId: 337974474
|
|
Reported-by: syzbot+078580ce5dd6d607fcd8@syzkaller.appspotmail.com
Reported-by: syzbot+2096681f6891e7bf8aed@syzkaller.appspotmail.com
PiperOrigin-RevId: 337973519
|
|
PiperOrigin-RevId: 337971497
|
|
- Check the sticky bit in overlay.filesystem.UnlinkAt(). Fixes
StickyTest.StickyBitPermDenied.
- When configuring a VFS2 overlay in runsc, copy the lower layer's root
owner/group/mode to the upper layer's root (as in the VFS1 equivalent,
boot.addOverlay()). This makes the overlay root owned by UID/GID 65534 with
mode 0755 rather than owned by UID/GID 0 with mode 01777. Fixes
CreateTest.CreateFailsOnUnpermittedDir, which assumes that the test cannot
create files in /.
- MknodTest.UnimplementedTypesReturnError assumes that the creation of device
special files is not supported. However, while the VFS2 gofer client still
doesn't support device special files, VFS2 tmpfs does, and in the overlay
test dimension mknod() targets a tmpfs upper layer. The test initially has
all capabilities, including CAP_MKNOD, so its creation of these files
succeeds. Constrain these tests to VFS1.
- Rename overlay.nonDirectoryFD to overlay.regularFileFD and only use it for
regular files, using the original FD for pipes and device special files. This
is more consistent with Linux (which gets the original inode_operations, and
therefore file_operations, for these file types from ovl_fill_inode() =>
init_special_inode()) and fixes remaining mknod and pipe tests.
- Read/write 1KB at a time in PipeTest.Streaming, rather than 4 bytes. This
isn't strictly necessary, but it makes the test less obnoxiously slow on
ptrace.
Fixes #4407
PiperOrigin-RevId: 337971042
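The first item (the sticky-bit check in UnlinkAt) boils down to the usual rule, sketched here with simplified credential handling rather than the real overlay/auth types:
```go
package example

import "golang.org/x/sys/unix"

const modeSticky = 0o1000 // S_ISVTX

// checkStickyBit allows removal from a sticky directory only for a caller
// with CAP_FOWNER, the directory owner, or the file owner.
func checkStickyBit(callerUID, dirUID, fileUID uint32, dirMode uint32, capFowner bool) error {
	if dirMode&modeSticky == 0 {
		return nil
	}
	if capFowner || callerUID == dirUID || callerUID == fileUID {
		return nil
	}
	return unix.EPERM
}
```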
|
|
PiperOrigin-RevId: 337968219
|
|
Using the newer bazel rules necessitates a transition from proto1 to
proto2. In order to resolve the incompatibility between proto2 and
gogoproto, the cri runtimeoptions proto must be vendored.
Further, some of the semantics of bazel caching changed during the
transition. It is now necessary to:
- Ensure that :gopath depends only on pure library targets, as the
propagation of go_binary build attributes (pure, static) will
affect the generated files (though the content remains the same,
there are conflicts with respect to the gopath).
- Update bazel.mk to include the possibility of binaries in the
bazel-out directory, as it will now put runsc and others there.
This required some refinements to the mechanism of extracting
paths, since the existing regex resulted in false positives.
- Change nogo rules to prevent escape generation on binary targets.
For some reason, the newer version of bazel attempted to run the
nogo analysis on the binary targets, which fails because objdump
does not work on the final binary. This must be due to a change in
the semantics of aspects in bazel3.
PiperOrigin-RevId: 337958324
|
|
There were a few problems with cgroups:
- The cleanup loop was breaking too early.
- Parsing of /proc/[pid]/cgroups was skipping "name=systemd"
because "name=" was not being removed from the name.
- When no limits are specified, fillFromAncestor was not being
called, causing a failure to set cpuset.mems.
Updates #4536
PiperOrigin-RevId: 337947356
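A sketch of the parsing fix from the second item; the line handling is simplified:
```go
package example

import "strings"

// controllersFromLine extracts controller names from one line of
// /proc/[pid]/cgroup, e.g. "11:name=systemd:/user.slice".
func controllersFromLine(line string) []string {
	fields := strings.SplitN(line, ":", 3)
	if len(fields) != 3 {
		return nil
	}
	var ctrls []string
	for _, c := range strings.Split(fields[1], ",") {
		// Without this trim, "name=systemd" never matched "systemd" and the
		// hierarchy was skipped.
		ctrls = append(ctrls, strings.TrimPrefix(c, "name="))
	}
	return ctrls
}
```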
|
|
PiperOrigin-RevId: 337919424
|
|
Reported-by: syzbot+0268cc591c0f517a1de0@syzkaller.appspotmail.com
PiperOrigin-RevId: 337901664
|
|
This change does the following:
- Unlocks MemoryFile.mu while calling mincore (checkCommitted) because mincore
can take a really long time. Accordingly, it looks up the segment in the tree
again afterwards and handles any changes to the segment.
- MemoryFile.UpdateUsage() is now rate-limited to at most 100 Hz.
100 Hz = linux.CLOCKS_PER_SEC.
Co-authored-by: Jamie Liu <jamieliu@google.com>
PiperOrigin-RevId: 337865250
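A simplified sketch of the 100 Hz rate limit; the fields shown are stand-ins, and the real code also drops its mutex around the mincore call itself:
```go
package example

import (
	"sync"
	"time"
)

const minUpdateInterval = time.Second / 100 // 100 Hz = linux.CLOCKS_PER_SEC.

type memoryFile struct {
	mu         sync.Mutex
	lastUpdate time.Time
}

// UpdateUsage skips the expensive mincore-based scan if one ran within the
// last 1/100th of a second, and releases the lock while scanning.
func (mf *memoryFile) UpdateUsage(scan func() error) error {
	mf.mu.Lock()
	if time.Since(mf.lastUpdate) < minUpdateInterval {
		mf.mu.Unlock()
		return nil
	}
	mf.lastUpdate = time.Now()
	// Release the lock while scanning, since mincore can take a long time;
	// the real code re-looks-up segments afterwards to handle changes.
	mf.mu.Unlock()
	return scan()
}
```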
|