path: root/device
Age    Commit message    Author
2023-12-11  device: fix possible deadlock in close method  [master]  (Martin Basovnik)
There is a possible deadlock in `device.Close()` when you try to close
the device very soon after its start. The problem is that two different
methods acquire the same locks in different order:

1. device.Close()
   - device.ipcMutex.Lock()
   - device.state.Lock()

2. device.changeState(deviceState)
   - device.state.Lock()
   - device.ipcMutex.Lock()

Reproducer:

    func TestDevice_deadlock(t *testing.T) {
        d := randDevice(t)
        d.Close()
    }

Problem:

    $ go clean -testcache && go test -race -timeout 3s -run TestDevice_deadlock ./device | grep -A 10 sync.runtime_SemacquireMutex
    sync.runtime_SemacquireMutex(0xc000117d20?, 0x94?, 0x0?)
            /usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
    sync.(*Mutex).lockSlow(0xc000130518)
            /usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
    sync.(*Mutex).Lock(0xc000130518)
            /usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
    golang.zx2c4.com/wireguard/device.(*Device).Close(0xc000130500)
            /Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:373 +0xb6
    golang.zx2c4.com/wireguard/device.TestDevice_deadlock(0x0?)
            /Users/martin.basovnik/git/basovnik/wireguard-go/device/device_test.go:480 +0x2c
    testing.tRunner(0xc00014c000, 0x131d7b0)
    --
    sync.runtime_SemacquireMutex(0xc000130564?, 0x60?, 0xc000130548?)
            /usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
    sync.(*Mutex).lockSlow(0xc000130750)
            /usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
    sync.(*Mutex).Lock(0xc000130750)
            /usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
    sync.(*RWMutex).Lock(0xc000130750)
            /usr/local/opt/go/libexec/src/sync/rwmutex.go:147 +0x45
    golang.zx2c4.com/wireguard/device.(*Device).upLocked(0xc000130500)
            /Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:179 +0x72
    golang.zx2c4.com/wireguard/device.(*Device).changeState(0xc000130500, 0x1)

Signed-off-by: Martin Basovnik <martin.basovnik@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
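To make the lock-order inversion concrete, here is a minimal Go sketch (type and field names are assumptions, not the actual wireguard-go code or the actual fix): the deadlock disappears once every path acquires ipcMutex before state.

    package sketch

    import "sync"

    // Minimal model of the two code paths above.
    type Device struct {
        ipcMutex sync.RWMutex
        state    struct {
            sync.Mutex
            up bool
        }
    }

    // Close and changeState both take ipcMutex first, then state, so
    // the two paths can no longer block each other in opposite orders.
    func (d *Device) Close() {
        d.ipcMutex.Lock()
        defer d.ipcMutex.Unlock()
        d.state.Lock()
        defer d.state.Unlock()
        d.state.up = false
    }

    func (d *Device) changeState(up bool) {
        d.ipcMutex.Lock()
        defer d.ipcMutex.Unlock()
        d.state.Lock()
        defer d.state.Unlock()
        d.state.up = up
    }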
2023-12-11  device: do atomic 64-bit add outside of vector loop  (Jason A. Donenfeld)
Only bother updating the rxBytes counter once we've processed a whole vector, since additions are atomic. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
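A sketch of the pattern (function and counter names are hypothetical): byte counts are summed locally inside the loop and the shared counter is updated with a single atomic add per vector.

    package sketch

    import "sync/atomic"

    var rxBytes atomic.Uint64

    // receiveVector processes one vector of packets but touches the
    // shared counter only once, after the loop.
    func receiveVector(packets [][]byte) {
        var n uint64
        for _, pkt := range packets {
            n += uint64(len(pkt))
            // ... decrypt and hand off pkt ...
        }
        rxBytes.Add(n) // one atomic operation per vector
    }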
2023-12-11  device: reduce redundant per-packet overhead in RX path  (Jordan Whited)
Peer.RoutineSequentialReceiver() deals with packet vectors and does not need to perform timer and endpoint operations for every packet in a given vector. Changing these per-packet operations to per-vector improves throughput by as much as 10% in some environments. Signed-off-by: Jordan Whited <jordan@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-12-11  device: change Peer.endpoint locking to reduce contention  (Jordan Whited)
Access to Peer.endpoint was previously synchronized by Peer.RWMutex.
This has now moved to Peer.endpoint.Mutex.

Peer.SendBuffers() is now the sole caller of Endpoint.ClearSrc(), which
is signaled via a new bool, Peer.endpoint.clearSrcOnTx. Previous
callers of Endpoint.ClearSrc() now set this bool, primarily via
peer.markEndpointSrcForClearing(). Peer.SetEndpointFromPacket() clears
Peer.endpoint.clearSrcOnTx when an updated conn.Endpoint is stored.
This maintains the same event order as before, i.e. a conn.Endpoint
received after peer.endpoint.clearSrcOnTx is set, but before the next
Peer.SendBuffers() call, results in the latest conn.Endpoint source
being used for the next packet transmission.

These changes result in throughput improvements for single flow,
parallel (-P n) flow, and bidirectional (--bidir) flow iperf3 TCP/UDP
tests as measured on both Linux and Windows. Latency under load
improves especially for high throughput Linux scenarios. These
improvements are likely realized on all platforms to some degree, as
the changes are not platform-specific.

Co-authored-by: James Tucker <james@tailscale.com>
Signed-off-by: James Tucker <james@tailscale.com>
Signed-off-by: Jordan Whited <jordan@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
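A rough sketch of the locking shape described above (field and method names are assumptions, and error handling is omitted): callers mark the endpoint for clearing, and the clear itself happens only on the transmit path while the endpoint mutex is held.

    package sketch

    import "sync"

    type Endpoint interface {
        ClearSrc()
    }

    type Peer struct {
        endpoint struct {
            sync.Mutex
            val          Endpoint
            clearSrcOnTx bool
        }
    }

    // Former callers of Endpoint.ClearSrc() now only set a flag.
    func (p *Peer) markEndpointSrcForClearing() {
        p.endpoint.Lock()
        defer p.endpoint.Unlock()
        p.endpoint.clearSrcOnTx = true
    }

    // The transmit path is the single place the clear actually
    // happens, immediately before the buffers are sent.
    func (p *Peer) sendBuffers(send func(Endpoint) error) error {
        p.endpoint.Lock()
        ep := p.endpoint.val
        if p.endpoint.clearSrcOnTx && ep != nil {
            ep.ClearSrc()
            p.endpoint.clearSrcOnTx = false
        }
        p.endpoint.Unlock()
        return send(ep)
    }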
2023-10-22  device: ratchet up max segment size on android  (Jason A. Donenfeld)
GRO requires big allocations to be efficient. This isn't great, as there might be Android memory usage issues. So we should revisit this commit. But at least it gets things working again. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-10-10  device: move Queue{In,Out}boundElement Mutex to container type  (Jordan Whited)
Queue{In,Out}boundElement locking can contribute to significant overhead via sync.Mutex.lockSlow() in some environments. These types are passed throughout the device package as elements in a slice, so move the per-element Mutex to a container around the slice. Reviewed-by: Maisem Ali <maisem@tailscale.com> Signed-off-by: Jordan Whited <jordan@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
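A before/after sketch of the idea (type names are approximations of the real ones): the per-element Mutex moves into a container that wraps the whole slice, so a vector of packets costs one lock instead of one per element.

    package sketch

    import "sync"

    // Before: each queued element embedded its own Mutex.
    type queueOutboundElementOld struct {
        sync.Mutex
        buffer []byte
    }

    // After: elements are plain data, and a single Mutex on the
    // container guards the whole slice.
    type QueueOutboundElement struct {
        buffer []byte
    }

    type QueueOutboundElementsContainer struct {
        sync.Mutex
        elems []*QueueOutboundElement
    }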
2023-10-10  device: distribute crypto work as slice of elements  (Jordan Whited)
After reducing UDP stack traversal overhead via GSO and GRO,
runtime.chanrecv() began to account for a high percentage (20% in one
environment) of perf samples during a throughput benchmark. The
individual packet channel ops with the crypto goroutines were the
primary contributor to this overhead. Updating these channels to pass
vectors, which the device package already handles at its ends, reduced
this overhead substantially and improved throughput.

The iperf3 results below demonstrate the effect of this commit between
two Linux computers with i5-12400 CPUs. There is roughly ~13us of round
trip latency between them.

The first result is with UDP GSO and GRO, and with single element
channels.

    Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   3.15 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    Test Complete. Summary Results:
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   sender
    [  5]   0.00-10.04  sec  12.3 GBytes  10.6 Gbits/sec        receiver

The second result is with channels updated to pass a slice of elements.

    Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-10.00  sec  13.2 GBytes  11.3 Gbits/sec  182   3.15 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    Test Complete. Summary Results:
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  13.2 GBytes  11.3 Gbits/sec  182   sender
    [  5]   0.00-10.04  sec  13.2 GBytes  11.3 Gbits/sec        receiver

Reviewed-by: Adrian Dewhurst <adrian@tailscale.com>
Signed-off-by: Jordan Whited <jordan@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
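A sketch of the channel change (names are illustrative): the crypto workers receive whole vectors, so there is one channel operation per batch rather than one per packet.

    package sketch

    type QueueOutboundElement struct{ packet []byte }

    func encrypt(*QueueOutboundElement) {}

    // cryptoWorker performs one chanrecv per vector; the per-packet
    // work stays inside the inner loop.
    func cryptoWorker(in <-chan []*QueueOutboundElement) {
        for batch := range in {
            for _, elem := range batch {
                encrypt(elem)
            }
        }
    }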
2023-10-10  conn, device: use UDP GSO and GRO on Linux  (Jordan Whited)
StdNetBind probes for UDP GSO and GRO support at runtime. UDP GSO is
dependent on checksum offload support on the egress netdev. UDP GSO
will be disabled in the event sendmmsg() returns EIO, which is a strong
signal that the egress netdev does not support checksum offload.

The iperf3 results below demonstrate the effect of this commit between
two Linux computers with i5-12400 CPUs. There is roughly ~13us of round
trip latency between them.

The first result is from commit 052af4a without UDP GSO or GRO.

    Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-10.00  sec  9.85 GBytes  8.46 Gbits/sec  1139  3.01 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    Test Complete. Summary Results:
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  9.85 GBytes  8.46 Gbits/sec  1139  sender
    [  5]   0.00-10.04  sec  9.85 GBytes  8.42 Gbits/sec        receiver

The second result is with UDP GSO and GRO.

    Starting Test: protocol: TCP, 1 streams, 131072 byte blocks
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   3.15 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    Test Complete. Summary Results:
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  12.3 GBytes  10.6 Gbits/sec  232   sender
    [  5]   0.00-10.04  sec  12.3 GBytes  10.6 Gbits/sec        receiver

Reviewed-by: Adrian Dewhurst <adrian@tailscale.com>
Signed-off-by: Jordan Whited <jordan@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
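A sketch of the fallback described above (type and field names are assumptions, not the real conn.StdNetBind): a coalesced send that fails with EIO permanently disables GSO and falls back to segmented sends.

    package sketch

    import (
        "errors"
        "sync/atomic"

        "golang.org/x/sys/unix"
    )

    type bind struct {
        gsoSupported  atomic.Bool
        sendGSO       func(bufs [][]byte) error
        sendSegmented func(bufs [][]byte) error
    }

    // EIO from a GSO send is taken as a signal that the egress netdev
    // lacks checksum offload, so stop offering GSO from then on.
    func (b *bind) send(bufs [][]byte) error {
        if b.gsoSupported.Load() {
            err := b.sendGSO(bufs)
            if !errors.Is(err, unix.EIO) {
                return err
            }
            b.gsoSupported.Store(false)
        }
        return b.sendSegmented(bufs)
    }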
2023-06-27  device: wait for and lock ipc operations during close  (James Tucker)
If an IPC operation is in flight while close starts, it is possible for both processes to deadlock. Prevent this by taking the IPC lock at the start of close and for the duration. Signed-off-by: James Tucker <jftucker@gmail.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
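A minimal sketch of the scheme (names are hypothetical): IPC operations hold the lock shared, and Close takes it exclusively at the start and for the whole teardown.

    package sketch

    import (
        "errors"
        "sync"
    )

    var errClosed = errors.New("device is closed")

    type Device struct {
        ipcMutex sync.RWMutex
        closed   bool
    }

    // IPC operations hold the lock shared for their duration.
    func (d *Device) IpcSet(config string) error {
        d.ipcMutex.RLock()
        defer d.ipcMutex.RUnlock()
        if d.closed {
            return errClosed
        }
        // ... apply configuration ...
        return nil
    }

    // Close waits for in-flight IPC operations and prevents new ones
    // from racing with teardown.
    func (d *Device) Close() {
        d.ipcMutex.Lock()
        defer d.ipcMutex.Unlock()
        d.closed = true
        // ... tear down peers, queues, sockets ...
    }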
2023-03-23  conn: disable sticky sockets on Android  (Jason A. Donenfeld)
We can't have the netlink listener socket on Android, so it's not possible to support sticky sockets there. Plus, the complexity of the Android networking stack makes it a bit tricky anyway, so it's best to leave them disabled. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-03-13  global: buff -> buf  (Jason A. Donenfeld)
This always struck me as kind of weird and non-standard. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-03-10  conn: inch BatchSize toward being non-dynamic  (Jason A. Donenfeld)
There's not really a use at the moment for making this configurable, and once bind_windows.go behaves like bind_std.go, we'll be able to use constants everywhere. So begin that simplification now. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-03-10  conn, device, tun: implement vectorized I/O on Linux  (Jordan Whited)
Implement TCP offloading via TSO and GRO for the Linux tun.Device,
which is made possible by virtio extensions in the kernel's TUN driver.

Delete conn.LinuxSocketEndpoint in favor of a collapsed
conn.StdNetBind. conn.StdNetBind makes use of recvmmsg() and sendmmsg()
on Linux. All platforms now fall under conn.StdNetBind, except for
Windows, which remains in conn.WinRingBind, which still needs to be
adjusted to handle multiple packets.

Also refactor sticky sockets support so that it can eventually apply to
platforms other than just Linux. However, Linux remains the sole
platform that fully implements it for now.

Co-authored-by: James Tucker <james@tailscale.com>
Signed-off-by: James Tucker <james@tailscale.com>
Signed-off-by: Jordan Whited <jordan@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-03-10  conn, device, tun: implement vectorized I/O plumbing  (Jordan Whited)
Accept packet vectors for reading and writing in the tun.Device and conn.Bind interfaces, so that the internal plumbing between these interfaces now passes a vector of packets. Vectors move untouched between these interfaces, i.e. if 128 packets are received from conn.Bind.Read(), 128 packets are passed to tun.Device.Write(). There is no internal buffering. Currently, existing implementations are only adjusted to have vectors of length one. Subsequent patches will improve that. Also, as a related fixup, use the unix and windows packages rather than the syscall package when possible. Co-authored-by: James Tucker <james@tailscale.com> Signed-off-by: James Tucker <james@tailscale.com> Signed-off-by: Jordan Whited <jordan@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
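Roughly, the vectorized interface takes this shape (a sketch; the real tun.Device and conn.Bind signatures differ in detail):

    package sketch

    // Read fills each bufs[i] with one packet and records its length
    // in sizes[i], returning how many packets were produced; Write
    // consumes a vector the same way. With vectors of length one this
    // degenerates to the old single-packet behavior.
    type Device interface {
        Read(bufs [][]byte, sizes []int, offset int) (n int, err error)
        Write(bufs [][]byte, offset int) (int, error)
        BatchSize() int // largest vector length the implementation accepts
    }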
2023-02-16  device: uniformly check ECDH output for zeros  (Jason A. Donenfeld)
For some reason, this was omitted for response messages. Reported-by: z <dzm@unexpl0.red> Fixes: 8c34c4c ("First set of code review patches") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2023-02-07  global: bump copyright year  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-09-20  global: bump copyright year  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-09-04  all: use Go 1.19 and its atomic types  (Brad Fitzpatrick)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-07-04  conn, device, tun: set CLOEXEC on fds  (Brad Fitzpatrick)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-03-16  all: use any in place of interface{}  (Josh Bleecher Snyder)
Enabled by using Go 1.18. A bit less verbose. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2022-03-16  all: update to Go 1.18  (Josh Bleecher Snyder)
Bump go.mod and README. Switch to upstream net/netip. Use strings.Cut. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-12-09  global: apply gofumpt  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-29  device: handle peer post config on blank line  (Jason A. Donenfeld)
We missed a function exit point. This was exacerbated by e3134bf
("device: defer state machine transitions until configuration is
complete"), but the bug existed prior.

Minus provided the following useful reproducer script:

    #!/usr/bin/env bash
    set -eux

    make wireguard-go || exit 125

    ip netns del test-ns || true
    ip netns add test-ns
    ip link add test-kernel type wireguard
    wg set test-kernel listen-port 0 private-key <(echo "QMCfZcp1KU27kEkpcMCgASEjDnDZDYsfMLHPed7+538=") peer "eDPZJMdfnb8ZcA/VSUnLZvLB2k8HVH12ufCGa7Z7rHI=" allowed-ips 10.51.234.10/32
    ip link set test-kernel netns test-ns up
    ip -n test-ns addr add 10.51.234.1/24 dev test-kernel
    port=$(ip netns exec test-ns wg show test-kernel listen-port)

    ip link del test-go || true
    ./wireguard-go test-go
    wg set test-go private-key <(echo "WBM7qimR3vFk1QtWNfH+F4ggy/hmO+5hfIHKxxI4nF4=") peer "+nj9Dkqpl4phsHo2dQliGm5aEiWJJgBtYKbh7XjeNjg=" allowed-ips 0.0.0.0/0 endpoint 127.0.0.1:$port
    ip addr add 10.51.234.10/24 dev test-go
    ip link set test-go up

    ping -c2 -W1 10.51.234.1

Reported-by: minus <minus@mnus.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-23  device: reduce peer lock critical section in UAPI  (Josh Bleecher Snyder)
The deferred RUnlock calls weren't executing until all peers had been processed. Add an anonymous function so that each peer may be unlocked as soon as it is completed. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
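The pattern, as a sketch (helper names are hypothetical): a function literal scopes the deferred RUnlock to a single iteration.

    package sketch

    import "sync"

    type Peer struct {
        sync.RWMutex
        // ... peer fields ...
    }

    // dumpPeers releases each peer as soon as it has been emitted,
    // instead of holding every RLock until the loop finishes.
    func dumpPeers(peers []*Peer, emit func(*Peer)) {
        for _, peer := range peers {
            func() {
                peer.RLock()
                defer peer.RUnlock()
                emit(peer)
            }()
        }
    }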
2021-11-23  device: remove code using unsafe  (Josh Bleecher Snyder)
There is no performance impact.

    name                             old time/op  new time/op  delta
    TrieIPv4Peers100Addresses1000-8  78.6ns ± 1%  79.4ns ± 3%    ~     (p=0.604 n=10+9)
    TrieIPv4Peers10Addresses10-8     29.1ns ± 2%  28.8ns ± 1%  -1.12%  (p=0.014 n=10+9)
    TrieIPv6Peers100Addresses1000-8  78.9ns ± 1%  78.6ns ± 1%    ~     (p=0.492 n=10+10)
    TrieIPv6Peers10Addresses10-8     29.3ns ± 2%  28.6ns ± 2%  -2.16%  (p=0.000 n=10+10)

Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-23  global: use netip where possible now  (Jason A. Donenfeld)
There are more places where we'll need to add it later, when Go 1.18 comes out with support for it in the "net" package. Also, allowedips still uses slices internally, which might be suboptimal. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-16  device: only propagate roaming value before peer is referenced elsewhere  (Jason A. Donenfeld)
A peer.endpoint never becomes nil after being non-nil, so creation is the only time we actually need to set this. This prevents a race when the variable is used elsewhere, and allows us to avoid an expensive atomic. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-16  device: align 64-bit atomic member in Device  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-16  device: start peers before running handshake test  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-16  device: fix nil pointer dereference in uapi read  (David Anderson)
Signed-off-by: David Anderson <danderson@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-15  device: make new peers inherit broken mobile semantics  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-15  device: defer state machine transitions until configuration is complete  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-15  device: do not consume handshake messages if not running  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-10-28  device: timers: use pre-seeded per-thread unlocked fastrandn for jitter  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-10-28  device: timers: seed unsafe rng before use for jitter  (Jason A. Donenfeld)
Because the unsafe rng was never seeded, the jitter previously followed a fixed pattern, which didn't help when a fleet of computers all boot at once. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-10-12  global: remove old-style build tags  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-09-05  global: add new go 1.17 build comments  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-06-04  device: zero out allowedip node pointers when removing  (Jason A. Donenfeld)
This should make it a bit easier for the garbage collector. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-06-03  device: limit allowedip fuzzer to 4 times through  (Jason A. Donenfeld)
Trying this for every peer winds up being very slow and keeps the fuzzer from finishing in an acceptable time in CI, so reduce it to 4 iterations. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-06-03  device: simplify allowedips lookup signature  (Jason A. Donenfeld)
The inliner should handle this for us. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-06-03  device: remove nodes by peer in O(1) instead of O(n)  (Jason A. Donenfeld)
Now that we have parent pointers hooked up, we can simply go right to the node and remove it in place, rather than having to recursively walk the entire trie. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
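A simplified sketch of removal via parent pointers (the real trie node carries more state and handles bit-level details omitted here): the node is spliced out locally instead of being found by a recursive walk.

    package sketch

    type Peer struct{}

    type node struct {
        parent *node
        child  [2]*node
        peer   *Peer
    }

    // childIndex reports which slot of n.parent points at n.
    func (n *node) childIndex() int {
        if n.parent != nil && n.parent.child[1] == n {
            return 1
        }
        return 0
    }

    // remove splices n out in O(1), assuming n has at most one child
    // (a simplification of the real structure).
    func (n *node) remove() {
        replacement := n.child[0]
        if replacement == nil {
            replacement = n.child[1]
        }
        if n.parent != nil {
            n.parent.child[n.childIndex()] = replacement
        }
        if replacement != nil {
            replacement.parent = n.parent
        }
    }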
2021-06-03  device: remove recursion from insertion and connect parent pointers  (Jason A. Donenfeld)
This makes the insertion algorithm a bit more efficient, while also now taking on the additional task of connecting up parent pointers. This will be handy in the following commit. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-06-03  device: reduce size of trie struct  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-05-22  device: allow reducing queue constants on iOS  (Jason A. Donenfeld)
Heavier network extensions might require the wireguard-go component to use less RAM, so let users of this package reduce these queue sizes as needed. At some point we'll put this behind a configuration method of sorts, but for now, just expose the consts as vars. Requested-by: Josh Bleecher Snyder <josh@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
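Illustrative only (names and values are assumptions rather than the real constants): the queue sizes become package-level vars that an embedder can lower before creating a device.

    package sketch

    var (
        QueueOutboundSize  = 1024
        QueueInboundSize   = 1024
        QueueHandshakeSize = 1024
    )

An iOS network extension would assign smaller values to these before constructing the device.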
2021-05-20  tun: linux: account for interface removal from outside  (Jason A. Donenfeld)
On Linux we can run `ip link del wg0`, in which case the fd becomes stale, and we should exit. Since this is an intentional action, don't treat it as an error. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-05-18  device: optimize Peer.String even more  (Jason A. Donenfeld)
This reduces the allocation, branches, and amount of base64 encoding. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-05-14  device: optimize Peer.String  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-05-07  device: add ID to repeated routines  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-05-07  device: remove unusual ... in messages  (Jason A. Donenfeld)
We don't use "..." in any other present progressive messages except these. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-05-07  device: avoid verbose log line during ordinary shutdown sequence  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>