Age | Commit message | Author |
|
PiperOrigin-RevId: 271442321
|
|
This change fixes compile errors:
pty.cc:1460:7: error: expected primary-expression before '.' token
...
PiperOrigin-RevId: 271033729
|
|
Closes #261
PiperOrigin-RevId: 270973347
|
|
The test is checking the wrong poll_fd for POLLHUP. The only
reason it passed until now is that it was also checking
for POLLIN, which was always true on the other fd from the
previous poll!
PiperOrigin-RevId: 270780401
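
For illustration only (not the repo's test code): a minimal sketch of polling
the fd that was actually hung up and checking POLLHUP on it, rather than
relying on POLLIN carried over from another fd. A plain pipe stands in for the
pty pair here.

  #include <poll.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fds[2];
    assert(pipe(fds) == 0);
    close(fds[1]);  // Hang up the write side; the read side should see POLLHUP.

    struct pollfd pfd = {};
    pfd.fd = fds[0];       // Poll the fd we expect to be hung up...
    pfd.events = POLLHUP;  // ...and look for POLLHUP specifically, not POLLIN.
    int n = poll(&pfd, 1, 1000 /* ms */);
    assert(n == 1 && (pfd.revents & POLLHUP));
    printf("POLLHUP observed on the correct fd\n");
    close(fds[0]);
    return 0;
  }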
|
|
Addresses a deadlock with the rolled back change:
https://github.com/google/gvisor/commit/b6a5b950d28e0b474fdad160b88bc15314cf9259
Creating a session from an orphaned process group was causing a lock to be
acquired twice by a single goroutine. This behavior is addressed, and a test
(OrphanRegression) has been added to pty.cc.
Implemented the following ioctls:
- TIOCSCTTY - set controlling TTY
- TIOCNOTTY - remove controlling tty, maybe signal some other processes
- TIOCGPGRP - get foreground process group. Also enables tcgetpgrp().
- TIOCSPGRP - set foreground process group. Also enables tcsetpgrp().
Next steps are to actually turn terminal-generated control characters (e.g. ^C)
into signals to the proper process groups, and to send SIGTTOU and SIGTTIN when
appropriate.
PiperOrigin-RevId: 270088599
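
For illustration only (not the pty.cc test code): a minimal sketch exercising
the four ioctls above on a pseudoterminal pair. The SIGHUP handling is needed
because TIOCNOTTY from a session leader signals the foreground process group.

  #include <fcntl.h>
  #include <signal.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    assert(master >= 0);
    assert(grantpt(master) == 0 && unlockpt(master) == 0);
    int replica = open(ptsname(master), O_RDWR | O_NOCTTY);
    assert(replica >= 0);

    pid_t child = fork();
    assert(child >= 0);
    if (child == 0) {
      setsid();  // New session; no controlling TTY yet.
      if (ioctl(replica, TIOCSCTTY, 0) != 0) _exit(1);  // Take the replica as controlling TTY.
      if (tcsetpgrp(replica, getpid()) != 0) _exit(2);  // TIOCSPGRP via tcsetpgrp().
      if (tcgetpgrp(replica) != getpid()) _exit(3);     // TIOCGPGRP via tcgetpgrp().
      signal(SIGHUP, SIG_IGN);    // TIOCNOTTY may signal the foreground group (us).
      ioctl(replica, TIOCNOTTY);  // Give up the controlling TTY again.
      _exit(0);
    }
    int status = 0;
    assert(waitpid(child, &status, 0) == child);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
  }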
|
|
Note that the exact semantics for these signalfds are slightly different from
Linux. These signalfds are bound to the process at creation time. Reads, polls,
etc. are all associated with signals directed at that task. In Linux, all
signalfd operations are associated with current, regardless of where the
signalfd originated.
In practice, this should not be an issue given how signalfds are used. To fix
this, however, we will need to plumb the context through all the event
APIs. This gets complicated really quickly, because the waiter APIs are all
netstack-specific, and not generally exposed to the context. Probably not
worthwhile fixing immediately.
PiperOrigin-RevId: 269901749
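
For illustration only: the common signalfd pattern, in which the task that
creates the signalfd is also the one reading from it, so the semantic
difference described above does not surface.

  #include <signal.h>
  #include <sys/signalfd.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGUSR1);
    // Block normal delivery so the signal is consumed via the fd instead.
    assert(sigprocmask(SIG_BLOCK, &mask, nullptr) == 0);

    int sfd = signalfd(-1, &mask, SFD_CLOEXEC);
    assert(sfd >= 0);

    kill(getpid(), SIGUSR1);  // Direct a signal at this task...

    struct signalfd_siginfo info = {};
    // ...and read it back from the same task.
    assert(read(sfd, &info, sizeof(info)) == static_cast<ssize_t>(sizeof(info)));
    printf("got signal %u\n", info.ssi_signo);
    close(sfd);
    return 0;
  }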
|
|
absl flags are more modern and we can easily depend on them directly.
The repo now successfully builds with --incompatible_load_cc_rules_from_bzl.
PiperOrigin-RevId: 269387081
|
|
ENOTDIR has to be returned when a component used as a directory in
pathname is not, in fact, a directory.
PiperOrigin-RevId: 269037893
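
For illustration only, with a made-up path under /tmp:

  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fd = open("/tmp/enotdir_file", O_CREAT | O_WRONLY, 0644);
    assert(fd >= 0);
    close(fd);

    // "/tmp/enotdir_file" is a regular file, so using it as a directory
    // component in the pathname must yield ENOTDIR.
    int ret = open("/tmp/enotdir_file/child", O_RDONLY);
    assert(ret == -1 && errno == ENOTDIR);
    printf("got ENOTDIR as expected\n");
    unlink("/tmp/enotdir_file");
    return 0;
  }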
|
|
This also allows the tee(2) implementation to be enabled, since dup can now be
properly supported via WriteTo.
Note that this change necessitated some minor restructuring with the
fs.FileOperations splice methods. If the *fs.File is passed through directly,
then only public API methods are accessible, which will deadlock immediately
since the locking is already done by fs.Splice. Instead, we pass through an
abstract io.Reader or io.Writer, which elide locks and use the underlying
fs.FileOperations directly.
PiperOrigin-RevId: 268805207
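
For illustration only: a minimal sketch of the now-enabled tee(2), which
duplicates pipe data into a second pipe without consuming it from the source.

  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE 1
  #endif
  #include <fcntl.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>
  #include <cstring>

  int main() {
    int src[2], dst[2];
    assert(pipe(src) == 0 && pipe(dst) == 0);

    const char msg[] = "hello";
    assert(write(src[1], msg, sizeof(msg)) == static_cast<ssize_t>(sizeof(msg)));

    // Duplicate the bytes into dst without removing them from src.
    ssize_t teed = tee(src[0], dst[1], sizeof(msg), 0);
    assert(teed == static_cast<ssize_t>(sizeof(msg)));

    char a[sizeof(msg)] = {}, b[sizeof(msg)] = {};
    assert(read(dst[0], a, sizeof(a)) == teed);  // The copy arrives here...
    assert(read(src[0], b, sizeof(b)) == teed);  // ...and the original is still readable.
    assert(memcmp(a, b, sizeof(msg)) == 0);
    printf("tee duplicated %zd bytes\n", teed);
    return 0;
  }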
|
|
A recent Kokoro change pointed to go_tests.cfg (in line with the
other configurations), which unfortunately broke the presubmits.
This change also enabled the KVM tests, which were still using a
remote execution strategy.
This fixes both of these issues and allows presubmits to pass.
One additional test was caught by this change, which seems to
have been broken already. It's unclear why it was not being caught before.
PiperOrigin-RevId: 268166291
|
|
See https://github.com/bazelbuild/bazel/issues/8743. This will be required in
Bazel 1.0.
Protobuf was updated in
https://github.com/protocolbuffers/protobuf/commit/bf0c69e1302fe9568fbe310cc54b37d20a9d16a3#diff-96239ee297e0a92ac6ff96a6bc434ef0.
GoogleTest was updated in
https://github.com/google/googletest/commit/6fd262ecf787d0dc2a91696fd4bf1d3ee1ebfa14.
gflags has not yet been updated, so the repo still won't build with
--incompatible_load_cc_rules_from_bzl.
Tested with buildifier -warnings=native-cc -lint=warn **/BUILD.
PiperOrigin-RevId: 267638515
|
|
TestNoDuplicates is racy as it tries to read the /proc file system
while the test is running. But it's possible that, between the time
the directory entries are read and the time each entry is processed,
something could change; in some cases the entry being processed could
have been deleted. In such cases we should not fail the test but just
ignore the error and move on.
PiperOrigin-RevId: 267483094
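
For illustration only: the tolerant pattern described above, applied to a
simple walk of /proc.

  #include <dirent.h>
  #include <errno.h>
  #include <sys/stat.h>
  #include <cstdio>
  #include <string>

  int main() {
    DIR* d = opendir("/proc");
    if (d == nullptr) return 1;
    while (struct dirent* ent = readdir(d)) {
      std::string path = std::string("/proc/") + ent->d_name;
      struct stat st;
      if (stat(path.c_str(), &st) != 0) {
        if (errno == ENOENT || errno == ESRCH) continue;  // Entry vanished; skip it.
        perror(path.c_str());                             // Anything else is a real failure.
        closedir(d);
        return 1;
      }
    }
    closedir(d);
    return 0;
  }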
|
|
- Most AIO tests call io_setup(nr_events = 128); 128 * sizeof(struct io_event) =
128 * 32 = 4096 bytes. However, the actual size of the mapping created by
io_setup() is determined by:
(from fs/aio.c:ioctx_alloc())
/*
* We keep track of the number of available ringbuffer slots, to prevent
* overflow (reqs_available), and we also use percpu counters for this.
*
* So since up to half the slots might be on other cpu's percpu counters
* and unavailable, double nr_events so userspace sees what they
* expected: additionally, we move req_batch slots to/from percpu
* counters at a time, so make sure that isn't 0:
*/
nr_events = max(nr_events, num_possible_cpus() * 4);
nr_events *= 2;
(from fs/aio.c:aio_setup_ring())
/* Compensate for the ring buffer's head/tail overlap entry */
nr_events += 2; /* 1 is required, 2 for good luck */
size = sizeof(struct aio_ring);
size += sizeof(struct io_event) * nr_events;
nr_pages = PFN_UP(size);
When we mremap() only the first page of a multi-page AIO ring buffer
mapping, fs/aio.c:aio_ring_mremap() updates struct kioctx::mmap_base -
but struct kioctx::mmap_size is untouched, so sys_io_destroy() =>
kill_ioctx() vm_unmaps() the mremapped page, plus some number of pages
after it. Just get the actual size of the mapping from /proc/self/maps.
- Delete test case MremapOver; while it is correct that Linux will not
complain if you overwrite the AIO ring buffer with another mapping, it
won't actually work in the sense that AIO events will not be written to
the new mapping, because Linux stores the struct pages of the ring
buffer in struct kioctx::ring_pages and writes to those through kmap()
rather than using userspace addresses.
- Don't munmap() after mremap(MREMAP_FIXED) returns EFAULT; see new
comment in factored-out test case MremapExpansion.
PiperOrigin-RevId: 267482903
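
For illustration only: a minimal sketch of the "get the actual size from
/proc/self/maps" approach, relying on the fact that the context id returned by
io_setup() is the userspace address of the ring.

  #include <linux/aio_abi.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <cinttypes>
  #include <cstdio>

  int main() {
    aio_context_t ctx = 0;
    if (syscall(SYS_io_setup, 128, &ctx) != 0) return 1;

    uint64_t ring_base = static_cast<uint64_t>(ctx);
    uint64_t size = 0;
    FILE* maps = fopen("/proc/self/maps", "r");
    if (maps == nullptr) return 1;
    char line[512];
    while (fgets(line, sizeof(line), maps) != nullptr) {
      uint64_t start = 0, end = 0;
      if (sscanf(line, "%" SCNx64 "-%" SCNx64, &start, &end) == 2 &&
          start == ring_base) {
        size = end - start;  // Actual extent of the ring mapping, in bytes.
        break;
      }
    }
    fclose(maps);
    printf("AIO ring at %#" PRIx64 " spans %" PRIu64 " bytes\n", ring_base, size);
    syscall(SYS_io_destroy, ctx);
    return 0;
  }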
|
|
PiperOrigin-RevId: 267280086
|
|
PiperOrigin-RevId: 266491264
|
|
PiperOrigin-RevId: 266491246
|
|
Ioctl was returning just the buffer size from epsocket.endpoint
and was not considering data in epsocket.SocketOperations that had
been read from the endpoint but not yet sent to the caller.
PiperOrigin-RevId: 266485461
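
For illustration only: the expectation being fixed is that FIONREAD reports
every byte a subsequent read() could return. A unix socketpair stands in for
the TCP endpoint here; in the bug, the missing bytes were the ones already
pulled into epsocket.SocketOperations.

  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fds[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);

    const char payload[16] = "0123456789abcde";
    assert(write(fds[1], payload, sizeof(payload)) ==
           static_cast<ssize_t>(sizeof(payload)));

    int avail = 0;
    assert(ioctl(fds[0], FIONREAD, &avail) == 0);
    printf("FIONREAD reports %d bytes (expected %zu)\n", avail, sizeof(payload));
    close(fds[0]);
    close(fds[1]);
    return 0;
  }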
|
|
This was accidentally introduced in 31f05d5d4f62c4cd4fe3b95b333d0130aae4b2c1.
Fixes #788.
PiperOrigin-RevId: 266462843
|
|
When abstract unix domain socket paths are displayed in
/proc/net/unix, Linux historically emitted null bytes as padding at
the end of the path. Newer versions of Linux (v4.9,
e7947ea770d0de434d38a0f823e660d3fd4bebb5) display these as '@'
characters.
Update the proc_net_unix test to handle both versions of the padding.
PiperOrigin-RevId: 266230200
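
For illustration only: a small matcher accepting either padding style; the
"@abstock" name is made up.

  #include <cassert>
  #include <string>

  // Returns true if a path field read from /proc/net/unix matches the expected
  // abstract path, accepting either '\0' (older kernels) or '@' (newer kernels)
  // as trailing padding.
  bool MatchesAbstractPath(const std::string& proc_field,
                           const std::string& expected) {
    if (proc_field.compare(0, expected.size(), expected) != 0) return false;
    for (size_t i = expected.size(); i < proc_field.size(); ++i) {
      if (proc_field[i] != '\0' && proc_field[i] != '@') return false;
    }
    return true;
  }

  int main() {
    assert(MatchesAbstractPath(std::string("@abstock\0\0", 10), "@abstock"));
    assert(MatchesAbstractPath("@abstock@@", "@abstock"));
    return 0;
  }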
|
|
PiperOrigin-RevId: 266229756
|
|
PiperOrigin-RevId: 266199211
|
|
Fix an uninitialized memory bug in the pwritev2 test.
PiperOrigin-RevId: 265772176
|
|
When the output file is in append mode, sendfile(2) should fail
with EINVAL and not EBADF.
Closes #721
PiperOrigin-RevId: 265718958
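
For illustration only, with made-up /tmp paths:

  #include <errno.h>
  #include <fcntl.h>
  #include <sys/sendfile.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int in_fd = open("/tmp/sendfile_in", O_CREAT | O_RDWR, 0644);
    int out_fd = open("/tmp/sendfile_out", O_CREAT | O_WRONLY | O_APPEND, 0644);
    assert(in_fd >= 0 && out_fd >= 0);
    assert(write(in_fd, "data", 4) == 4);
    assert(lseek(in_fd, 0, SEEK_SET) == 0);

    // The destination is open for writing, so EBADF would be wrong; the
    // append mode is what gets the call rejected.
    ssize_t ret = sendfile(out_fd, in_fd, nullptr, 4);
    assert(ret == -1 && errno == EINVAL);
    printf("sendfile to an O_APPEND fd failed with EINVAL as expected\n");

    close(in_fd);
    close(out_fd);
    unlink("/tmp/sendfile_in");
    unlink("/tmp/sendfile_out");
    return 0;
  }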
|
|
PiperOrigin-RevId: 265535438
|
|
The flake had the call to futex_unlock_pi() returning EINVAL with
FUTEX_OWNER_DIED set. In this case, userspace has to clean up stale
state. So instead of calling FUTEX_UNLOCK_PI outright, we'll use the
atomic compare_exchange as advised in the man page.
PiperOrigin-RevId: 265163920
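
For illustration only: the unlock pattern from the futex(2) man page that the
fix switches to. UnlockPI is a made-up helper name.

  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <atomic>

  void UnlockPI(std::atomic<int>* futex_word) {
    int expected = static_cast<int>(syscall(SYS_gettid));
    // Fast path: the word is exactly our TID (no FUTEX_WAITERS or
    // FUTEX_OWNER_DIED bits), so clearing it releases the lock in userspace.
    if (futex_word->compare_exchange_strong(expected, 0)) return;
    // Slow path: flag bits are set, so let the kernel hand the lock off and
    // clean up any stale owner state.
    syscall(SYS_futex, reinterpret_cast<int*>(futex_word), FUTEX_UNLOCK_PI, 0,
            nullptr, nullptr, 0);
  }

  int main() {
    // Uncontended case: we "own" the lock, so the fast path clears the word.
    std::atomic<int> word(static_cast<int>(syscall(SYS_gettid)));
    UnlockPI(&word);
    return word.load() == 0 ? 0 : 1;
  }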
|
|
The gunit macros are not safe to use in the child.
PiperOrigin-RevId: 264904348
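
For illustration only, the usual workaround: the child reports failures
through its exit status and the parent asserts on that.

  #include <sys/wait.h>
  #include <unistd.h>
  #include <cassert>

  int main() {
    pid_t pid = fork();
    assert(pid >= 0);
    if (pid == 0) {
      // In the child: no EXPECT_*/ASSERT_* macros, just plain checks.
      if (getppid() <= 0) _exit(1);
      _exit(0);
    }
    int status = 0;
    assert(waitpid(pid, &status, 0) == pid);
    assert(WIFEXITED(status) && WEXITSTATUS(status) == 0);
    return 0;
  }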
|
|
This fixes the issue of not being able to bind to a multicast or
broadcast address, or to send and receive data on it. The way to solve
this is to treat these addresses similar to the ANY address and register their
transport endpoint ID with the global stack's demuxer rather than the NIC's.
That way there is no need to require an endpoint with that multicast or
broadcast address. The stack's demuxer is in fact the only correct one to use,
because neither broadcast- nor multicast-bound sockets care which NIC a
packet was received on (for multicast a join is still needed to receive packets
on a NIC).
I also took the liberty of refactoring udp_test.go to consolidate a lot of
duplicate code and make it easier to create repetitive tests that test the same
feature for a variety of packet and socket types. For this purpose I created a
"flowType" that represents two things: 1) the type of packet being sent or
received and 2) the type of socket used for the test. E.g., a "multicastV4in6"
flow represents a V4-mapped multicast packet run through a V6-dual socket.
This allows writing significantly simpler tests. A nice example is testTTL().
PiperOrigin-RevId: 264766909
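
For illustration only: the user-visible part for the broadcast case, binding a
UDP socket to 255.255.255.255.

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    assert(fd >= 0);
    int one = 1;
    assert(setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &one, sizeof(one)) == 0);

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = 0;                               // Let the kernel pick a port.
    addr.sin_addr.s_addr = htonl(INADDR_BROADCAST);  // 255.255.255.255
    assert(bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0);
    printf("bound to the broadcast address\n");
    close(fd);
    return 0;
  }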
|
|
test/syscalls/linux/proc_net_tcp.cc:252: Failure
Value of: connect(client->get(), &addr, addrlen)
Expected: not -1 (success)
Actual: -1 (of type int), with errno PosixError(errno=4 Interrupted system call)
PiperOrigin-RevId: 264743815
|
|
PiperOrigin-RevId: 264494359
|
|
The test is long-running (175128 ms or so), which causes timeouts.
The test simply makes sure that private futexes can acquire
locks concurrently. Dropping the number of threads and increasing the
number of locks per thread tests the same concurrency concerns
but drops execution time to ~1411 ms.
PiperOrigin-RevId: 264476144
|
|
PiperOrigin-RevId: 264180125
|
|
Linux allows calling connect() with the ANY address and the zero port.
PiperOrigin-RevId: 263892534
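
For illustration only, the Linux behavior being matched (shown with a UDP
socket):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>

  int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    assert(fd >= 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(0);                  // The zero port...
    addr.sin_addr.s_addr = htonl(INADDR_ANY);  // ...and the ANY address.
    // Per the behavior described above, Linux accepts this connect().
    assert(connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0);
    close(fd);
    return 0;
  }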
|
|
PiperOrigin-RevId: 263666789
|
|
Before this change, SendMsg would copy all the data into a
new slice even if the underlying socket could only accept a
small amount of data. This is really inefficient with non-blocking
sockets and under high throughput, where large writes could get
ErrWouldBlock, or if there was, say, a timeout associated with the
sendmsg() syscall.
With this change we delay copying bytes in until they are needed and only
copy what can potentially be sent/held in the socket buffer, reducing
the need to repeatedly copy data over.
Also a minor fix to change state to FIN-WAIT-1 when shutdown(..., SHUT_WR) is
called instead of when we transmit the actual FIN. Otherwise the socket could
remain in the CONNECTED state even though the user has called shutdown() on
the socket.
Updates #627
PiperOrigin-RevId: 263430505
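
For illustration only: the user-visible half-close that the FIN-WAIT-1 fix is
about, shown with a unix socketpair for brevity (the state-machine change
itself is TCP-specific).

  #include <errno.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fds[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == 0);

    assert(shutdown(fds[0], SHUT_WR) == 0);

    // Further sends on the shut-down side fail right away...
    ssize_t ret = send(fds[0], "x", 1, MSG_NOSIGNAL);
    assert(ret == -1 && errno == EPIPE);

    // ...and the peer observes EOF.
    char buf[1];
    assert(read(fds[1], buf, sizeof(buf)) == 0);
    printf("half-close observed\n");
    close(fds[0]);
    close(fds[1]);
    return 0;
  }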
|
|
PiperOrigin-RevId: 263040624
|
|
Now if a process sends an unsupported netlink request,
an error is returned from the send system call.
The Linux kernel works differently in this case: it returns errors in
an nlmsgerr netlink message.
Reported-by: syzbot+571d99510c6f935202da@syzkaller.appspotmail.com
PiperOrigin-RevId: 262690453
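
For illustration only: the Linux convention described above, using an
arbitrary unsupported message type. The send itself succeeds and the failure
comes back as an NLMSG_ERROR message carrying a struct nlmsgerr.

  #include <linux/netlink.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    assert(fd >= 0);

    nlmsghdr req = {};
    req.nlmsg_len = NLMSG_LENGTH(0);
    req.nlmsg_type = 1234;  // Hypothetical unsupported request type.
    req.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
    req.nlmsg_seq = 1;

    sockaddr_nl kernel = {};
    kernel.nl_family = AF_NETLINK;
    // The send system call itself succeeds...
    assert(sendto(fd, &req, req.nlmsg_len, 0,
                  reinterpret_cast<sockaddr*>(&kernel), sizeof(kernel)) ==
           static_cast<ssize_t>(req.nlmsg_len));

    char buf[4096];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    assert(n > 0);
    nlmsghdr* resp = reinterpret_cast<nlmsghdr*>(buf);
    assert(resp->nlmsg_type == NLMSG_ERROR);  // ...and the error arrives as a message.
    nlmsgerr* err = static_cast<nlmsgerr*>(NLMSG_DATA(resp));
    printf("kernel reported error %d via nlmsgerr\n", err->error);
    close(fd);
    return 0;
  }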
|
|
This can happen because endpoint.Close() closes the accept channel first and
then drains/resets any accepted but not delivered connections. But there can be
connections that are connected but not delivered to the channel because the
channel was full, and closing the channel can cause these writes to fail with
a write to a closed channel.
The correct solution is to abort any connections in SYN-RCVD state and
drain/abort all completed connections before closing the accept channel.
PiperOrigin-RevId: 261951132
|
|
If there is an offset, the file must support pread/pwrite. See
fs/splice.c:do_splice.
PiperOrigin-RevId: 261944932
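
For illustration only: the positive case with a regular file (which supports
pread), at a made-up /tmp path; a file without pread/pwrite support would get
EINVAL per fs/splice.c:do_splice.

  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE 1
  #endif
  #include <fcntl.h>
  #include <sys/types.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fds[2];
    assert(pipe(fds) == 0);
    int file = open("/tmp/splice_src", O_CREAT | O_RDWR, 0644);
    assert(file >= 0);
    assert(write(file, "abcdef", 6) == 6);

    loff_t off = 2;  // Explicit offset: requires pread support on the input file.
    ssize_t n = splice(file, &off, fds[1], nullptr, 4, 0);
    assert(n == 4);
    char buf[4] = {};
    assert(read(fds[0], buf, sizeof(buf)) == 4);
    printf("spliced %zd bytes starting at offset 2: %.4s\n", n, buf);

    close(file);
    close(fds[0]);
    close(fds[1]);
    unlink("/tmp/splice_src");
    return 0;
  }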
|
|
(Don't worry, this is mostly tests.)
Implemented the following ioctls:
- TIOCSCTTY - set controlling TTY
- TIOCNOTTY - remove controlling tty, maybe signal some other processes
- TIOCGPGRP - get foreground process group. Also enables tcgetpgrp().
- TIOCSPGRP - set foreground process group. Also enables tcsetpgrp().
Next steps are to actually turn terminal-generated control characters (e.g. ^C)
into signals to the proper process groups, and to send SIGTTOU and SIGTTIN when
appropriate.
PiperOrigin-RevId: 261387276
|
|
PiperOrigin-RevId: 261373749
|
|
Export some readily-available fields for TCP_INFO and stub out the rest.
PiperOrigin-RevId: 261191548
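
For illustration only: reading TCP_INFO back; fields that are stubbed out
simply read as zero.

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    assert(fd >= 0);

    struct tcp_info info = {};
    socklen_t len = sizeof(info);
    assert(getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0);
    printf("state=%u rto=%u rtt=%u snd_cwnd=%u\n",
           static_cast<unsigned>(info.tcpi_state), info.tcpi_rto, info.tcpi_rtt,
           info.tcpi_snd_cwnd);
    close(fd);
    return 0;
  }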
|
|
Implements support for RTM_GETROUTE requests for netlink sockets.
Fixes #507
PiperOrigin-RevId: 261051045
|
|
The checksum was not being reset before being re-calculated and sent out.
This caused the sent checksum to always be `0x0800`.
Fixes #605.
PiperOrigin-RevId: 260965059
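
For illustration only: the general rule behind the fix, with made-up helpers
operating on a raw header buffer. The field must be zeroed before the checksum
is recomputed, otherwise the stale value is folded into the new sum.

  #include <cstddef>
  #include <cstdint>

  // Standard 16-bit ones'-complement Internet checksum over a buffer.
  uint16_t Checksum(const uint8_t* data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2) {
      sum += (static_cast<uint32_t>(data[i]) << 8) | data[i + 1];
    }
    if (len & 1) sum += static_cast<uint32_t>(data[len - 1]) << 8;
    while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
    return static_cast<uint16_t>(~sum);
  }

  void UpdateChecksum(uint8_t* hdr, size_t len, size_t checksum_offset) {
    hdr[checksum_offset] = 0;      // Reset the old checksum first...
    hdr[checksum_offset + 1] = 0;
    uint16_t sum = Checksum(hdr, len);
    hdr[checksum_offset] = static_cast<uint8_t>(sum >> 8);  // ...then store the new one.
    hdr[checksum_offset + 1] = static_cast<uint8_t>(sum & 0xff);
  }

  int main() {
    uint8_t hdr[20] = {0x45, 0x00, 0x00, 0x14};  // IPv4-style header, fields mostly zero.
    UpdateChecksum(hdr, sizeof(hdr), 10);        // IPv4 keeps its checksum at bytes 10-11.
    return 0;
  }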
|
|
This doesn't currently pass on gVisor.
While I'm here, fix a bug where connecting to the v6-mapped v4 address doesn't
work in gVisor.
PiperOrigin-RevId: 260923961
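
For illustration only: the drive-by fix's scenario, an AF_INET6 socket
connecting to the v4-mapped form of a v4 loopback listener (assumes the
default dual-stack configuration, i.e. IPV6_V6ONLY off).

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstdio>

  int main() {
    // v4 listener on 127.0.0.1 with a kernel-chosen port.
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    assert(listener >= 0);
    sockaddr_in v4 = {};
    v4.sin_family = AF_INET;
    v4.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    assert(bind(listener, reinterpret_cast<sockaddr*>(&v4), sizeof(v4)) == 0);
    socklen_t len = sizeof(v4);
    assert(getsockname(listener, reinterpret_cast<sockaddr*>(&v4), &len) == 0);
    assert(listen(listener, 1) == 0);

    // v6 socket connecting to ::ffff:127.0.0.1 at the same port.
    int client = socket(AF_INET6, SOCK_STREAM, 0);
    assert(client >= 0);
    sockaddr_in6 v6 = {};
    v6.sin6_family = AF_INET6;
    v6.sin6_port = v4.sin_port;
    assert(inet_pton(AF_INET6, "::ffff:127.0.0.1", &v6.sin6_addr) == 1);
    assert(connect(client, reinterpret_cast<sockaddr*>(&v6), sizeof(v6)) == 0);
    printf("connected to the v4-mapped address\n");

    close(client);
    close(listener);
    return 0;
  }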
|
|
This test flaked on my current CL. Linux makes no guarantee
that two inode numbers will be consecutive (overflows happen).
https://github.com/avagin/linux-task-diag/blob/master/fs/inode.c#L880
PiperOrigin-RevId: 260608240
|
|
Unfortunately, Linux's ip_tables.h header doesn't compile in C++ because it
implicitly converts from void* to struct xt_entry_target*. C allows this, but
C++ does not. So we have to re-implement many types ourselves.
Relevant code here:
https://github.com/torvalds/linux/blob/master/include/uapi/linux/netfilter_ipv4/ip_tables.h#L222
PiperOrigin-RevId: 260565570
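
For illustration only, with a stand-in type: the conversion the ip_tables.h
helpers rely on compiles in C but has to be spelled out in C++.

  #include <cstddef>

  struct xt_entry_target_like {  // Stand-in for struct xt_entry_target.
    unsigned short target_size;
  };

  xt_entry_target_like* GetTarget(void* entry, size_t target_offset) {
    unsigned char* base = static_cast<unsigned char*>(entry);
    // C would happily convert the void* result of the pointer arithmetic
    // implicitly; C++ requires the explicit cast below.
    return reinterpret_cast<xt_entry_target_like*>(base + target_offset);
  }

  int main() {
    unsigned char entry[32] = {};
    xt_entry_target_like* target = GetTarget(entry, 16);  // Hypothetical layout.
    target->target_size = 8;
    return target->target_size == 8 ? 0 : 1;
  }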
|
|
This fixes a bug introduced in cl/251934850 that caused
connect-accept-close-connect races to result in the second connect call
failing when it should have succeeded.
PiperOrigin-RevId: 259584525
|
|
PiperOrigin-RevId: 258996346
|
|
This tweaks the handling code for IP_MULTICAST_IF to ignore the InterfaceAddr
if a NICID is given.
PiperOrigin-RevId: 258982541
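
For illustration only: the two selectors IP_MULTICAST_IF accepts in a struct
ip_mreqn; with imr_ifindex set, the index wins and imr_address is ignored (the
interface name "lo" and the 10.0.0.1 address are just placeholders).

  #include <arpa/inet.h>
  #include <net/if.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cassert>

  int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    assert(fd >= 0);

    struct ip_mreqn req = {};
    req.imr_ifindex = if_nametoindex("lo");      // Select the interface by index...
    assert(req.imr_ifindex != 0);
    req.imr_address.s_addr = htonl(0x0a000001);  // ...so this address is ignored.
    assert(setsockopt(fd, IPPROTO_IP, IP_MULTICAST_IF, &req, sizeof(req)) == 0);
    close(fd);
    return 0;
  }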
|
|
PiperOrigin-RevId: 258859507
|