|
PiperOrigin-RevId: 269921234
|
|
Note that the exact semantics for these signalfds differ slightly from Linux.
These signalfds are bound to the process at creation time: reads, polls, etc.
are all associated with signals directed at that task. In Linux, all signalfd
operations are associated with the calling task (current), regardless of where
the signalfd originated.
In practice this should not be an issue, given how signalfds are used. To fix
it, however, we would need to plumb the context through all the event APIs.
That gets complicated quickly, because the waiter APIs are all
netstack-specific and not generally exposed to the context. Probably not
worthwhile fixing immediately.
PiperOrigin-RevId: 269901749
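To make the usage pattern concrete, here is a minimal, hedged sketch of typical
signalfd use from Go (assumes golang.org/x/sys/unix; in a real program the
signal must also be blocked so it is delivered via the fd rather than a
handler). Creation and reads happen in the same task, which is why the semantic
difference above rarely matters in practice.

package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Build a mask containing SIGUSR1. Sigset_t is a bit array indexed by
	// (signal number - 1).
	var mask unix.Sigset_t
	sig := uint(unix.SIGUSR1)
	mask.Val[(sig-1)/64] |= 1 << ((sig - 1) % 64)

	// Create the signalfd; the same task that created it will read it.
	fd, err := unix.Signalfd(-1, &mask, unix.SFD_CLOEXEC|unix.SFD_NONBLOCK)
	if err != nil {
		fmt.Println("signalfd:", err)
		return
	}
	defer unix.Close(fd)

	// Reads return signalfd_siginfo records (128 bytes each); the first
	// 4 bytes are the signal number. With nothing pending this returns
	// EAGAIN, since the fd is non-blocking.
	buf := make([]byte, 128)
	if n, err := unix.Read(fd, buf); err == nil && n >= 4 {
		fmt.Println("got signal", binary.LittleEndian.Uint32(buf[:4]))
	} else {
		fmt.Println("no pending signal:", err)
	}
}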
|
|
The RELEASE file must be at the top-level for the signed
repository to work correctly.
PiperOrigin-RevId: 269897109
|
|
PiperOrigin-RevId: 269690988
|
|
PiperOrigin-RevId: 269658971
|
|
- Fix ARG syntax in Dockerfiles.
- Fix curl commands in Dockerfiles.
- Fix some paths in proctor binaries.
- Check error from Walk in search helper.
PiperOrigin-RevId: 269641686
|
|
PiperOrigin-RevId: 269631877
|
|
PiperOrigin-RevId: 269614517
|
|
PiperOrigin-RevId: 269468986
|
|
* Use multi-stage builds in Dockerfiles.
* Combine all proctor binaries into a single binary.
* Change the TestRunner interface to reduce code duplication.
PiperOrigin-RevId: 269462101
|
|
absl flags are more modern and we can easily depend on them directly.
The repo now successfully builds with --incompatible_load_cc_rules_from_bzl.
PiperOrigin-RevId: 269387081
|
|
- Sandbox logs are generated when running tests
- Kokoro uploads the sandbox logs
- Supports multiple parallel runs
- Revive script to install locally built runsc with docker
PiperOrigin-RevId: 269337274
|
|
ENOTDIR must be returned when a component used as a directory in the pathname
is not, in fact, a directory.
PiperOrigin-RevId: 269037893
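A quick illustration of the expected behavior (ordinary Go, nothing
gVisor-specific assumed): treating a regular file as a directory component must
fail with ENOTDIR, not ENOENT.

package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.CreateTemp("", "plainfile")
	if err != nil {
		panic(err)
	}
	f.Close()
	defer os.Remove(f.Name())

	// f.Name() is a regular file, so using it as a directory component in a
	// pathname must yield ENOTDIR.
	_, err = os.Open(f.Name() + "/child")
	fmt.Println(errors.Is(err, syscall.ENOTDIR)) // expected: true
}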
|
|
PiperOrigin-RevId: 268947847
|
|
PiperOrigin-RevId: 268845090
|
|
This also allows the tee(2) implementation to be enabled, since dup can now be
properly supported via WriteTo.
Note that this change necessitated some minor restructuring of the
fs.FileOperations splice methods. If the *fs.File is passed through directly,
then only public API methods are accessible, which will deadlock immediately
since the locking is already done by fs.Splice. Instead, we pass through an
abstract io.Reader or io.Writer, which elide locks and use the underlying
fs.FileOperations directly.
PiperOrigin-RevId: 268805207
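A minimal sketch of the pattern described above, with illustrative names (this
is not gVisor's actual internal API): the splice path holds the file locks
itself, so the peer is handed a plain io.Reader that calls the underlying file
operations directly instead of re-entering the locked public methods.

package splice

import "io"

// fileOps stands in for the already-locked file's FileOperations.
type fileOps interface {
	ReadAt(p []byte, off int64) (int, error)
}

// lockedReader adapts fileOps to io.Reader without going back through the
// public *fs.File methods, which would try to take the file lock a second
// time and deadlock.
type lockedReader struct {
	ops fileOps
	off int64
}

func (r *lockedReader) Read(p []byte) (int, error) {
	n, err := r.ops.ReadAt(p, r.off)
	r.off += int64(n)
	return n, err
}

// spliceTo copies up to limit bytes from an already-locked source to dst via
// the abstract reader, mirroring how WriteTo can back splice/tee.
func spliceTo(src fileOps, dst io.Writer, limit int64) (int64, error) {
	return io.CopyN(dst, &lockedReader{ops: src}, limit)
}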
|
|
They are no-ops, so the standard rule works fine.
PiperOrigin-RevId: 268776264
|
|
PiperOrigin-RevId: 268772451
|
|
PiperOrigin-RevId: 268757842
|
|
Currently it will not work with apt out of the box, as we
require the dists/ prefix, along with a distribution name.
This tweaks the overall structure to allow for the same URL
prefix to be used for all repositories, and enables multiple
architectures.
Fixes #852
PiperOrigin-RevId: 268756104
|
|
This fixes the RBE warning:
DEBUG: rbe_default not using checked in configs; Bazel version 0.29.1 was
picked/selected but no checked in config was found in map[...]
PiperOrigin-RevId: 268747944
|
|
PiperOrigin-RevId: 268614014
|
|
PiperOrigin-RevId: 268608466
|
|
PiperOrigin-RevId: 268604220
|
|
The credentials must be explicitly refreshed for pushing to
the repository on the Go branch.
PiperOrigin-RevId: 268589817
|
|
PiperOrigin-RevId: 268486127
|
|
PiperOrigin-RevId: 268397389
|
|
We need to include the `--stamp` flag in `tools/workspace_status.sh` for
the version to be picked up by the linker. Not sure why.
Also changes the VERSION string to STABLE_VERSION, which will cause the
program to be re-linked if the string changes.
Fixes #830
|
|
A recent Kokoro change pointed to go_tests.cfg (in line with the
other configurations), which unfortunately broke the presubmits.
This change also enabled the KVM tests, which were still using a
remote execution strategy.
This fixes both of these issues and allows presubmits to pass.
One additional test was caught by this change; it appears to have been broken
already, and it's unclear why it was not being caught before.
PiperOrigin-RevId: 268166291
|
|
Fix a bug where udp.(*endpoint).Disconnect [accessible in gVisor via
epsocket.(*SocketOperations).Connect with AF_UNSPEC] would leak a port
reservation if the socket/endpoint had an ephemeral port assigned to it.
glibc's getaddrinfo uses connect with AF_UNSPEC, causing each call of
getaddrinfo to leak a port. Call getaddrinfo too many times and you run out of
ports (shows up as connect returning EAGAIN and getaddrinfo returning
EAI_NONAME "Name or service not known").
PiperOrigin-RevId: 268071160
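A rough sketch of the shape of the fix, with hypothetical names (not netstack's
real types): disconnecting must hand any auto-assigned ephemeral port back to
the port manager, otherwise every connect-then-disconnect cycle permanently
consumes a port.

package udpsketch

// portReserver stands in for the stack's port manager.
type portReserver interface {
	ReleasePort(port uint16)
}

type endpoint struct {
	ports     portReserver
	localPort uint16
	ephemeral bool // localPort was auto-assigned rather than explicitly bound
}

// disconnect models udp.(*endpoint).Disconnect (connect with AF_UNSPEC). The
// bug was returning to the initial state without the ReleasePort call below,
// leaking one port reservation per getaddrinfo-style connect/disconnect.
func (e *endpoint) disconnect() {
	if e.ephemeral && e.localPort != 0 {
		e.ports.ReleasePort(e.localPort)
		e.localPort = 0
		e.ephemeral = false
	}
	// ... clear the remote address and return to the initial state ...
}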
|
|
This CL implements go_marshal, a code generation utility for
automatically serializing and deserializing ABI structs.
The go_marshal tool automatically generates implementations of the new
marshal interface. Unlike binary.Marshal/Unmarshal, the generated
interface implementations use no runtime reflection and translate to
a single memcpy for most structs. See go_marshal/README.md for
details.
PiperOrigin-RevId: 268065475
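For a sense of what the generated code looks like, here is a hand-written
approximation for a simple ABI struct (field layout illustrative; method names
are modeled on the marshal interface mentioned above): no reflection, just
direct byte stores, which for flat structs reduces to a memcpy.

package marshalsketch

import "encoding/binary"

// Timespec mirrors a flat ABI struct being serialized.
type Timespec struct {
	Sec  int64
	Nsec int64
}

// SizeBytes returns the wire size of the struct.
func (t *Timespec) SizeBytes() int { return 16 }

// MarshalBytes serializes the struct into dst without runtime reflection.
func (t *Timespec) MarshalBytes(dst []byte) {
	binary.LittleEndian.PutUint64(dst[0:8], uint64(t.Sec))
	binary.LittleEndian.PutUint64(dst[8:16], uint64(t.Nsec))
}

// UnmarshalBytes deserializes the struct from src.
func (t *Timespec) UnmarshalBytes(src []byte) {
	t.Sec = int64(binary.LittleEndian.Uint64(src[0:8]))
	t.Nsec = int64(binary.LittleEndian.Uint64(src[8:16]))
}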
|
|
PiperOrigin-RevId: 268062329
|
|
The IPv6 all-nodes multicast address will be joined on NIC enable, and the
appropriate IPv6 solicited-node multicast address will be joined when IPv6
addresses are added.
Tests: Test receiving packets destined to the IPv6 link-local all-nodes
multicast address and the IPv6 solicited-node address of an added IPv6 address.
PiperOrigin-RevId: 268047073
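For reference, the solicited-node multicast address is derived from the unicast
address by appending its low 24 bits to the ff02::1:ff00:0/104 prefix; a small
standalone Go illustration (not the netstack code):

package main

import (
	"fmt"
	"net"
)

// solicitedNode returns the solicited-node multicast address for addr.
func solicitedNode(addr net.IP) net.IP {
	sn := net.ParseIP("ff02::1:ff00:0").To16()
	copy(sn[13:], addr.To16()[13:]) // keep the low 24 bits of the unicast address
	return sn
}

func main() {
	fmt.Println(solicitedNode(net.ParseIP("fe80::1234:5678:9abc:def0")))
	// Output: ff02::1:ffbc:def0
}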
|
|
PiperOrigin-RevId: 267709597
|
|
Since each Endpoint has a distinct mapping of the packet window, the Go race
detector does not recognize accesses by connected Endpoints to be related. This
means that this change isn't necessary for the Go race detector to accept
accesses of flipcall.Endpoint.Data(), but it *is* necessary for it to accept
accesses to shared variables outside the scope of flipcall that are
synchronized by flipcall.Endpoint state; see updated test for an example.
RaceReleaseMerge is needed (instead of RaceRelease) because calls to
raceBecomeInactive() from *unrelated* Endpoints can occur in any order.
(DowngradableRWMutex.RUnlock() has a similar property: calls to RUnlock() on
the same DowngradableRWMutex from different goroutines can occur in any order.
Remove the TODO asking to explain this now that this is understood.)
PiperOrigin-RevId: 267705325
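An illustrative reconstruction of the annotation pair (names taken from the
description above; this is a sketch, not the exact flipcall source, and it only
builds under the race build tag, where the runtime race hooks exist):

//go:build race

package flipcallsketch

import (
	"runtime"
	"unsafe"
)

// raceBecomeActive is called by the Endpoint that becomes active: it
// "acquires" the writes released by whichever Endpoint was active before.
func raceBecomeActive(addr unsafe.Pointer) {
	runtime.RaceAcquire(addr)
}

// raceBecomeInactive is called by the Endpoint that becomes inactive.
// RaceReleaseMerge rather than RaceRelease: calls from *unrelated* Endpoints
// can occur in any order, so each release must merge with, not replace, the
// previous release sequence.
func raceBecomeInactive(addr unsafe.Pointer) {
	runtime.RaceReleaseMerge(addr)
}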
|
|
This has the following fixes:
* Packages are passed to the tools/make_repository.sh command.
* All matching tags are built for commits that have more than one.
* The binary path is generated by the build command.
* Output from signing the repository is suppressed.
* Allow a release author.
Change-Id: I2d08954ba76e35612f352be99d5bb99080f80892
|
|
See https://github.com/bazelbuild/bazel/issues/8743. This will be required in
Bazel 1.0.
Protobuf was updated in
https://github.com/protocolbuffers/protobuf/commit/bf0c69e1302fe9568fbe310cc54b37d20a9d16a3#diff-96239ee297e0a92ac6ff96a6bc434ef0.
GoogleTest was updated in
https://github.com/google/googletest/commit/6fd262ecf787d0dc2a91696fd4bf1d3ee1ebfa14.
gflags has not yet been updated, so the repo still won't build with
--incompatible_load_cc_rules_from_bzl.
Tested with buildifier -warnings=native-cc -lint=warn **/BUILD.
PiperOrigin-RevId: 267638515
|
|
This is done because the root container for CRI is the infrastructure (pause)
container and always gets a low oom_score_adj. We do this to ensure that only
the oom_score_adj of user containers is used to calculate the sandbox
oom_score_adj.
Implemented in runsc rather than the containerd shim as it's a bit cleaner to
implement here (in the shim it would require overwriting the oomScoreAdj and
re-writing out the config.json again). This processing is Kubernetes(CRI)
specific but we are currently only supporting CRI for multi-container support
anyway.
PiperOrigin-RevId: 267507706
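A simplified sketch of the aggregation described above (illustrative only;
whether the aggregate is a minimum or maximum is a policy detail, and a minimum
is assumed here): the pause container is skipped so its artificially low score
never feeds into the sandbox value.

package oomsketch

type container struct {
	isPause     bool // CRI infrastructure ("pause") container
	oomScoreAdj int
}

// sandboxOOMScoreAdj aggregates only the user containers' values.
func sandboxOOMScoreAdj(containers []container) (int, bool) {
	val, found := 0, false
	for _, c := range containers {
		if c.isPause {
			continue // never let the pause container's score decide
		}
		if !found || c.oomScoreAdj < val {
			val, found = c.oomScoreAdj, true
		}
	}
	return val, found
}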
|
|
PiperOrigin-RevId: 267498537
|
|
PiperOrigin-RevId: 267484802
|
|
TestNoDuplicates is racy, as it reads the /proc file system while the test is
running. Between the time the directory entries are read and the time each
entry is processed, something could change; in some cases the entry being
processed could have been deleted. In such cases we should not fail the test,
but just ignore the error and move on.
PiperOrigin-RevId: 267483094
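The tolerant-walk idea in miniature (plain Go, not the test's actual helper):
entries under /proc can vanish between listing and processing, so a "no longer
exists" error is skipped rather than propagated.

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	seen := 0
	err := filepath.WalkDir("/proc/self/task", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			if os.IsNotExist(err) {
				return nil // raced with an exiting task; ignore and move on
			}
			return err
		}
		seen++
		return nil
	})
	fmt.Println(seen, err)
}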
|
|
- Most AIO tests call io_setup(nr_events = 128); 128 * sizeof(struct io_event)
= 128 * 32 = 4096, i.e. one page. However, the actual size of the mapping
created by io_setup() is determined by:
(from fs/aio.c:ioctx_alloc())
/*
* We keep track of the number of available ringbuffer slots, to prevent
* overflow (reqs_available), and we also use percpu counters for this.
*
* So since up to half the slots might be on other cpu's percpu counters
* and unavailable, double nr_events so userspace sees what they
* expected: additionally, we move req_batch slots to/from percpu
* counters at a time, so make sure that isn't 0:
*/
nr_events = max(nr_events, num_possible_cpus() * 4);
nr_events *= 2;
(from fs/aio.c:aio_setup_ring())
/* Compensate for the ring buffer's head/tail overlap entry */
nr_events += 2; /* 1 is required, 2 for good luck */
size = sizeof(struct aio_ring);
size += sizeof(struct io_event) * nr_events;
nr_pages = PFN_UP(size);
When we mremap() only the first page of a multi-page AIO ring buffer
mapping, fs/aio.c:aio_ring_mremap() updates struct kioctx::mmap_base -
but struct kioctx::mmap_size is untouched, so sys_io_destroy() =>
kill_ioctx() vm_unmaps() the mremapped page, plus some number of pages
after it. Just get the actual size of the mapping from /proc/self/maps.
- Delete test case MremapOver; while it is correct that Linux will not
complain if you overwrite the AIO ring buffer with another mapping, it
won't actually work in the sense that AIO events will not be written to
the new mapping, because Linux stores the struct pages of the ring
buffer in struct kioctx::ring_pages and writes to those through kmap()
rather than using userspace addresses.
- Don't munmap() after mremap(MREMAP_FIXED) returns EFAULT; see new
comment in factored-out test case MremapExpansion.
PiperOrigin-RevId: 267482903
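The "ask /proc/self/maps" approach in miniature (standalone Go sketch, not the
test's C++ helper): given an address inside the ring, find the containing
mapping and use its real length instead of deriving it from nr_events.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"unsafe"
)

// mappingSize returns the length of the mapping in /proc/self/maps that
// contains addr.
func mappingSize(addr uintptr) (uintptr, error) {
	f, err := os.Open("/proc/self/maps")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line starts with "start-end perms offset dev inode [path]".
		rng := strings.SplitN(strings.Fields(sc.Text())[0], "-", 2)
		start, _ := strconv.ParseUint(rng[0], 16, 64)
		end, _ := strconv.ParseUint(rng[1], 16, 64)
		if uintptr(start) <= addr && addr < uintptr(end) {
			return uintptr(end - start), nil
		}
	}
	return 0, fmt.Errorf("no mapping contains %#x", addr)
}

func main() {
	// For the AIO test this would be the ring address returned by io_setup;
	// any mapped address works to demonstrate the lookup.
	buf := make([]byte, os.Getpagesize())
	size, err := mappingSize(uintptr(unsafe.Pointer(&buf[0])))
	fmt.Println(size, err)
}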
|
|
Allow non-unique group IDs in the bazel docker container in order to avoid
failures using host group IDs that are already present in the image.
Issue #801
PiperOrigin-RevId: 267306089
|
|
This should correct the continuous build.
|
|
PiperOrigin-RevId: 267280086
|
|
Depending on the kill order, some processes are reparented to the root
container, and the root container would not reap them in time, so some zombie
processes were still present when the test checked.
Fix it by running the second container inside a PID namespace.
PiperOrigin-RevId: 267278591
|
|
PiperOrigin-RevId: 267277711
|
|
There are a few cases addressed by this change:
- We no longer generate a RST in response to a RST packet.
- When we receive a RST we cleanup and release all reservations immediately as
the connection is now aborted.
- An ACK received by a listening socket generates a RST when SYN cookies are
not in use. The only reason an ACK should land at the listening socket is if
we are using SYN cookies; otherwise the goroutine for the handshake in
progress should have gotten the packet, and it should never have arrived at
the listening endpoint.
- Also fixes the error returned when a connection times out due to a keepalive
timer expiration, from ECONNRESET to ETIMEDOUT.
PiperOrigin-RevId: 267238427
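The listener-side rules above, condensed into a small decision sketch
(illustrative types and names, not netstack's internals):

package tcpsketch

type segmentFlags struct {
	syn, ack, rst bool
}

// onListenerSegment says whether a listening endpoint should answer an
// incoming segment with a RST.
//   - Never answer a RST with a RST.
//   - A bare ACK is only legitimate here when SYN cookies are in use; without
//     them, the handshake goroutine should have consumed it, so reply with RST.
func onListenerSegment(flags segmentFlags, synCookiesInUse bool) (sendRST bool) {
	switch {
	case flags.rst:
		return false
	case flags.ack && !flags.syn && !synCookiesInUse:
		return true
	default:
		return false
	}
}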
|
|
This also renames "subnet" to "addressRange" to avoid any more confusion with
an interface IP's subnet.
Lastly, this also removes the Stack.ContainsSubnet(..) API, since it isn't
used by anyone and the same information can be obtained from
Stack.NICAddressRanges().
PiperOrigin-RevId: 267229843
|
|
The simple test script has gotten out of control. Shard this script into
different pieces and attempt to impose order on overall test structure. This
change helps lay some of the foundations for future improvements.
* The runsc/test directories are moved into just test/.
* The runsc/test/testutil package is split into logical pieces.
* The scripts/ directory contains new top-level targets.
* Each test is now responsible for building targets it requires.
* The install functionality is moved into `runsc` itself for simplicity.
* The existing kokoro run_tests.sh file now just calls all (can be split).
After this change is merged, I will create multiple distinct workflows for
Kokoro, one for each of the scripts currently targeted by `run_tests.sh` today,
which should dramatically reduce the time-to-run for the Kokoro tests, and
provides a better foundation for further improvements to the infrastructure.
PiperOrigin-RevId: 267081397
|