Wait an additional RetransmitTimer duration after the last probe before
transitioning to Failed. The previous implementation transitioned to Failed
immediately after sending the last probe, which was erroneous.
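A minimal sketch of the intended timing, with illustrative names standing in
for the neighbor-cache state machine (this is not the tcpip implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // probeUntilFailed sends maxProbes probes, waiting retransmitTimer between
    // them, and only reports failure a full retransmitTimer after the last probe.
    func probeUntilFailed(send func(), maxProbes int, retransmitTimer time.Duration, reachable <-chan struct{}) bool {
        for i := 0; i < maxProbes; i++ {
            send()
            // Wait a full retransmit interval after every probe, including the
            // last one. The previous behavior skipped this wait for the final
            // probe and transitioned to Failed immediately after sending it.
            select {
            case <-reachable:
                return true // confirmation arrived; entry becomes Reachable
            case <-time.After(retransmitTimer):
                // no reply within the interval; send the next probe, or fall
                // through if this was the last one
            }
        }
        return false // transition to Failed
    }

    func main() {
        reachable := make(chan struct{})
        ok := probeUntilFailed(func() { fmt.Println("probe sent") }, 3, 10*time.Millisecond, reachable)
        fmt.Println("reachable:", ok)
    }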
PiperOrigin-RevId: 338723794
Read is implemented by PRead, with offset obtained from Seek.
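A sketch of the pattern with stand-in types (not the real kernfs interfaces):
Read obtains the current offset from Seek, delegates to PRead, and then
advances the offset.

    package sketch

    import "io"

    type file struct {
        off   int64
        pread func(dst []byte, offset int64) (int, error)
    }

    // Seek maintains the file offset.
    func (f *file) Seek(offset int64, whence int) (int64, error) {
        switch whence {
        case io.SeekStart:
            f.off = offset
        case io.SeekCurrent:
            f.off += offset
        }
        return f.off, nil
    }

    // Read is implemented in terms of PRead at the offset reported by Seek.
    func (f *file) Read(dst []byte) (int, error) {
        off, _ := f.Seek(0, io.SeekCurrent) // current offset
        n, err := f.pread(dst, off)
        if n > 0 {
            f.Seek(int64(n), io.SeekCurrent) // advance past what was read
        }
        return n, err
    }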
PiperOrigin-RevId: 338718587
Added the following fields to kernfs.InodeAttr:
- blockSize
- atime
- mtime
- ctime
Also resolved all TODOs for #1193.
Fixes #1193
PiperOrigin-RevId: 338714527
Drain the notification channel after the first accept. If the first accept
never blocked, its notification is still sitting in the channel, which would
cause the second accept to fail: when it waits on the channel, it returns
immediately because of the stale notification, even though there is no
connection in the accept queue yet.
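The race is easiest to see in a stripped-down sketch (the endpoint and channel
here are illustrative, not the tcpip test code):

    package main

    import (
        "errors"
        "fmt"
    )

    var errWouldBlock = errors.New("operation would block")

    type endpoint struct {
        queue  []int
        notify chan struct{} // buffered, capacity 1
    }

    // deliver places a completed connection in the accept queue and posts a
    // notification (dropped if one is already pending).
    func (e *endpoint) deliver(c int) {
        e.queue = append(e.queue, c)
        select {
        case e.notify <- struct{}{}:
        default:
        }
    }

    // tryAccept is the non-blocking accept.
    func (e *endpoint) tryAccept() (int, error) {
        if len(e.queue) == 0 {
            return 0, errWouldBlock
        }
        c := e.queue[0]
        e.queue = e.queue[1:]
        return c, nil
    }

    func main() {
        e := &endpoint{notify: make(chan struct{}, 1)}
        e.deliver(1)

        c, _ := e.tryAccept() // first accept never blocks; its notification stays queued
        fmt.Println("first accept:", c)

        // The fix: drain the stale notification before the next blocking accept,
        // so it cannot make the second accept proceed with an empty queue.
        select {
        case <-e.notify:
        default:
        }

        go e.deliver(2)
        <-e.notify // without the drain this would return with nothing queued yet
        c, err := e.tryAccept()
        fmt.Println("second accept:", c, err)
    }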
PiperOrigin-RevId: 338710062
Even if the child dentry has been cached, we should still check whether the
file and the corresponding Merkle tree file exist as expected. This ensures
that we can detect deletion/renaming of files that have been previously
enabled and opened.
Also make all verification failures return EIO. This makes verification
failures easier to test.
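A hypothetical sketch of the revalidation step, with stand-in helpers and an
assumed Merkle-file prefix rather than the verity filesystem's real API:

    package sketch

    import (
        "os"
        "path/filepath"
        "syscall"
    )

    // merklePrefix is illustrative; the real prefix lives in the verity package.
    const merklePrefix = ".merkle.verity."

    // revalidate fails with EIO if the file or its Merkle tree file was deleted
    // or renamed after verity was enabled on it, even if the dentry was cached.
    func revalidate(dir, name string) error {
        if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
            return syscall.EIO
        }
        if _, err := os.Stat(filepath.Join(dir, merklePrefix+name)); err != nil {
            return syscall.EIO
        }
        return nil
    }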
PiperOrigin-RevId: 338709055
Also updated a test which only fails with VFS1.
PiperOrigin-RevId: 338704940
The SO_ACCEPTCONN option is used only on getsockopt(). When this option is
specified, getsockopt() indicates whether socket listening is enabled for
the socket. A value of zero indicates that socket listening is disabled;
non-zero that it is enabled.
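For reference, querying the option from Go on Linux with golang.org/x/sys/unix:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        fd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
        if err != nil {
            panic(err)
        }
        defer unix.Close(fd)

        // Not yet listening: SO_ACCEPTCONN reports zero.
        v, _ := unix.GetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_ACCEPTCONN)
        fmt.Println("listening before Listen:", v != 0)

        if err := unix.Bind(fd, &unix.SockaddrInet4{}); err != nil {
            panic(err)
        }
        if err := unix.Listen(fd, 1); err != nil {
            panic(err)
        }

        // After listen(2): SO_ACCEPTCONN reports non-zero.
        v, _ = unix.GetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_ACCEPTCONN)
        fmt.Println("listening after Listen:", v != 0)
    }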
PiperOrigin-RevId: 338703206
Previously, the NIC local address used when completing link resolution
was held in the neighbor entry. A neighbor is not identified by any
NIC local address, so remove it.
PiperOrigin-RevId: 338699695
Earlier, the count was dropped only after calling e.deliverAccepted. This led
to an issue where the listening endpoint had no connections in SYN-RCVD state,
yet e.synRcvdCount was not zero, because the count was only reduced once
handleSynSegment returned, which in turn happened only after deliverAccepted
returned.
This issue showed up as the Nth SYN for a listen backlog of size N (the SYN
that would fill the backlog) occasionally being dropped. It happened when the
new SYN arrived just as the previously completed endpoint had been delivered
to the accept queue but synRcvdCount had not yet been decremented, because the
goroutine running handleSynSegment had not yet finished.
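A stripped-down sketch of the ordering fix (illustrative types, not the tcpip
listener):

    package sketch

    import "sync"

    type listener struct {
        mu           sync.Mutex
        synRcvdCount int
        backlog      int
        accepted     chan int
    }

    // incSynRcvd reports whether a new incoming SYN may enter SYN-RCVD.
    func (l *listener) incSynRcvd() bool {
        l.mu.Lock()
        defer l.mu.Unlock()
        if l.synRcvdCount >= l.backlog {
            return false // backlog full, drop the SYN
        }
        l.synRcvdCount++
        return true
    }

    // handshakeDone is called when a SYN-RCVD connection completes. The count
    // is dropped first, before the (possibly slow) delivery to the accept
    // queue; previously it was dropped only after deliverAccepted returned,
    // leaving a window in which the backlog looked full.
    func (l *listener) handshakeDone(ep int) {
        l.mu.Lock()
        l.synRcvdCount--
        l.mu.Unlock()
        l.deliverAccepted(ep)
    }

    func (l *listener) deliverAccepted(ep int) {
        l.accepted <- ep // may block until the application calls accept
    }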
PiperOrigin-RevId: 338690646
Our current reference leak checker uses finalizers to verify whether an object
has reached zero references before it is garbage collected. There are multiple
problems with this mechanism, so a rewrite is in order.
With finalizers, there is no way to guarantee that a finalizer will run before
the program exits. When an unreachable object with a finalizer is garbage
collected, its finalizer will be added to a queue and run asynchronously. The
best we can do is run garbage collection upon sandbox exit to make sure that
all finalizers are enqueued.
Furthermore, if there is a chain of finalized objects, e.g. A points to B
points to C, garbage collection needs to run multiple times before all of the
finalizers are enqueued. The first GC run will register the finalizer for A but
not free it. It takes another GC run to free A, at which point B's finalizer
can be registered. As a result, we need to run GC as many times as the length
of the longest such chain to have a somewhat reliable leak checker.
Finally, a cyclical chain of structs pointing to one another will never be
garbage collected if a finalizer is set. This is a well-known issue with Go
finalizers (https://github.com/golang/go/issues/7358). Using leak checking on
filesystem objects that produce cycles will not work and can even result in
memory leaks.
The new leak checker stores reference-counted objects in a global map when
leak checking is enabled and removes them once they are destroyed. At sandbox
exit, any remaining objects in the map are considered leaked. This provides a
deterministic way of detecting leaks without relying on the complexities of
finalizers and garbage collection.
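A minimal sketch of the idea, with illustrative names rather than the actual
leak-checking package:

    package sketch

    import (
        "fmt"
        "sync"
    )

    type CheckedObject interface {
        // LeakMessage describes the object in the final leak report.
        LeakMessage() string
    }

    var (
        liveMu sync.Mutex
        live   = map[CheckedObject]struct{}{}
    )

    // Register is called when a reference-counted object is created.
    func Register(obj CheckedObject) {
        liveMu.Lock()
        defer liveMu.Unlock()
        live[obj] = struct{}{}
    }

    // Unregister is called when the object's reference count drops to zero.
    // Panicking here catches objects that were never registered.
    func Unregister(obj CheckedObject) {
        liveMu.Lock()
        defer liveMu.Unlock()
        if _, ok := live[obj]; !ok {
            panic(fmt.Sprintf("object %s was never registered", obj.LeakMessage()))
        }
        delete(live, obj)
    }

    // DoLeakCheck is called at sandbox exit and aggregates all leaks into a
    // single report instead of relying on finalizers and GC timing.
    func DoLeakCheck() {
        liveMu.Lock()
        defer liveMu.Unlock()
        for obj := range live {
            fmt.Printf("LEAK: %s\n", obj.LeakMessage())
        }
    }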
This approach has several benefits over the former, including:
- Always detects leaks of objects that should be destroyed very close to
  sandbox exit. The old checker very rarely caught these, because it relied on
  garbage collection running within a short window of time.
- Panics if we forget to enable leak checking on a ref-counted object (we will
  try to remove it from the map when it is destroyed, but it will never have
  been added).
- Can store extra logging information in the map values without adding to the
  size of the ref count struct itself. With a size of just an int64, the ref
  count object stays compact, so frequent operations like IncRef/DecRef remain
  cache-efficient.
- Can aggregate leak results in a single report after the sandbox exits.
  Instead of warnings littered throughout the log, triggered
  non-deterministically by garbage collection, all warning messages are
  printed at once. Note that this can also be a limitation: the sandbox must
  exit properly for leaks to be detected.
Some basic benchmarking indicates that this change does not significantly
affect performance when leak checking is enabled, which is understandable
since registering/unregistering is only done once for each filesystem object.
Updates #1486.
PiperOrigin-RevId: 338685972
Subcontainers are only configured when the container starts. However, because
start did not load the spec, flag annotations that may override flags were not
being applied to the configuration.
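A sketch of the intended behavior; the annotation prefix and helper below are
assumptions for illustration, not runsc's actual key format:

    package sketch

    import "strings"

    // flagAnnotationPrefix is assumed for illustration only.
    const flagAnnotationPrefix = "dev.gvisor.flag."

    // applyFlagAnnotations overrides configuration flags from OCI spec
    // annotations. It must run wherever the spec is loaded, including on start
    // for subcontainers, not only on create.
    func applyFlagAnnotations(annotations map[string]string, setFlag func(name, value string) error) error {
        for key, value := range annotations {
            if !strings.HasPrefix(key, flagAnnotationPrefix) {
                continue
            }
            name := strings.TrimPrefix(key, flagAnnotationPrefix)
            if err := setFlag(name, value); err != nil {
                return err
            }
        }
        return nil
    }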
Updates #3494
PiperOrigin-RevId: 338610953
Also fixes the docker_image bazel rule and the website-server make target.
Fixes #3273
PiperOrigin-RevId: 338606668
Previously, a link endpoint was passed to
stack.LinkAddressResolver.LinkAddressRequest. With this change,
implementations that want a route for the link address request may
find one through the stack. Other implementations that want to send
a packet without a route may continue to do so using the network
interface directly.
Test:
- arp_test.TestLinkAddressRequest
- ipv6.TestLinkAddressRequest
PiperOrigin-RevId: 338577474
PiperOrigin-RevId: 338517024
Also enforce the minimum MTU for IPv4 and IPv6, and discard packets if the
minimum is not met.
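An illustrative sketch of the check; 68 bytes is the IPv4 minimum link MTU
from RFC 791 and 1280 bytes the IPv6 minimum from RFC 8200, though the exact
constants and plumbing gVisor uses may differ:

    package main

    import "fmt"

    const (
        ipv4MinimumMTU = 68   // RFC 791: every link must carry 68-octet datagrams
        ipv6MinimumMTU = 1280 // RFC 8200 section 5
    )

    // usableMTU returns the MTU to use for a protocol, or false if the link MTU
    // is below the protocol minimum, in which case packets are discarded.
    func usableMTU(linkMTU, minMTU uint32) (uint32, bool) {
        if linkMTU < minMTU {
            return 0, false
        }
        return linkMTU, true
    }

    func main() {
        if _, ok := usableMTU(100, ipv6MinimumMTU); !ok {
            fmt.Println("link too small for IPv6: discarding")
        }
        if mtu, ok := usableMTU(1500, ipv4MinimumMTU); ok {
            fmt.Println("IPv4 MTU:", mtu)
        }
    }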
PiperOrigin-RevId: 338404225
Reported-by: syzbot+c0e175d2b10708314eb3@syzkaller.appspotmail.com
PiperOrigin-RevId: 338386575
PiperOrigin-RevId: 338372736
PiperOrigin-RevId: 338321125
The python:test_subprocess test enumerates all possible file descriptors and
fails by timeout if the limit is too high.
Docker is known to set this limit to 1M by default, whereas on native Linux it
is typically between 1K and 32K.
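For background, inspecting and lowering the soft RLIMIT_NOFILE from Go on
Linux looks roughly like this (this is context, not the change itself):

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        var rl unix.Rlimit
        if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &rl); err != nil {
            panic(err)
        }
        fmt.Printf("soft=%d hard=%d\n", rl.Cur, rl.Max)

        // Drop the soft limit to a typical native-Linux value; the hard limit
        // is left unchanged so it can be raised again if needed.
        rl.Cur = 4096
        if err := unix.Setrlimit(unix.RLIMIT_NOFILE, &rl); err != nil {
            panic(err)
        }
    }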
PiperOrigin-RevId: 338197239
When a commit description contains "commit ", it is wrongly identified as a
commit hash. As a fix, only lines that begin with "commit " are now treated as
commit hashes, since description lines are always indented by `git log`.
Copybara uses merge commits for external contributors, which means not all
commits contain a Piper ID. Add `--first-parent` to `git log` so that it only
lists commits that contain a Piper ID.
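A sketch of the resulting parsing rule (illustrative, not the actual release
tooling):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("git", "log", "--first-parent").Output()
        if err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            line := sc.Text()
            // A line that merely *contains* "commit " (e.g. inside an indented
            // description) must not be taken as a hash.
            if strings.HasPrefix(line, "commit ") {
                if f := strings.Fields(line); len(f) >= 2 {
                    fmt.Println("hash:", f[1])
                }
            }
        }
    }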
PiperOrigin-RevId: 338183812
PiperOrigin-RevId: 338168977
bhaskerh@ fixed a bunch of the EADDRINUSE flakes in #3662, so we should
unexclude them.
I have also tested other flaky tests on this list and removed those that do
not flake anymore.
PiperOrigin-RevId: 338158545
PiperOrigin-RevId: 338156438
//pkg/tcpip/stack:stack_x_test_nogo
//pkg/tcpip/transport/raw:raw_nogo
PiperOrigin-RevId: 338153265