path: root/device
Age  Commit message  Author
2021-01-29  device: use new model queues for handshakes  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-29  device: simplify peer queue locking  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: reduce nesting when staging packet  (Jason A. Donenfeld)
Suggested-by: Josh Bleecher Snyder <josh@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  global: bump copyright  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: do not allow get to run while set runs  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: avoid hex allocations in IpcGet  (Jason A. Donenfeld)
benchmark               old ns/op     new ns/op     delta
BenchmarkUAPIGet-16     2872          2157          -24.90%

benchmark               old allocs    new allocs    delta
BenchmarkUAPIGet-16     30            18            -40.00%

benchmark               old bytes     new bytes     delta
BenchmarkUAPIGet-16     737           256           -65.26%

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: the psk is not a chapoly key  (Jason A. Donenfeld)
It's a separate type of key that gets hashed into the chain. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-27  device: get rid of nonce routine  (Jason A. Donenfeld)
This moves to a simple queue with no routine processing it, to reduce scheduler
pressure. This splits latency in half!

benchmark                 old ns/op     new ns/op     delta
BenchmarkThroughput-16    2394          2364          -1.25%
BenchmarkLatency-16       259652        120810        -53.47%

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-27  device: avoid deadlock when changing private key and removing self peers  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-27  device: use linked list for per-peer allowed-ip traversal  (Jason A. Donenfeld)
This makes the IpcGet method much faster. We also refactor the traversal API to
use a callback so that we don't need to allocate at all. To avoid allocations,
we do self-masking on insertion, which in turn means that splitting
intermediate nodes requires a copy of the bits.

benchmark               old ns/op     new ns/op     delta
BenchmarkUAPIGet-16     3243          2659          -18.01%

benchmark               old allocs    new allocs    delta
BenchmarkUAPIGet-16     35            30            -14.29%

benchmark               old bytes     new bytes     delta
BenchmarkUAPIGet-16     1218          737           -39.49%

This benchmark is good, though it's only for a pair of peers, each with only
one allowed IP. As this grows, the delta expands considerably.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-26  device: combine debug and info log levels into 'verbose'  (Jason A. Donenfeld)
There are very few cases, if any, in which a user only wants one of these levels, so combine it into a single level. While we're at it, reduce indirection on the loggers by using an empty function rather than a nil function pointer. It's not like we have retpolines anyway, and we were always calling through a function with a branch prior, so this seems like a net gain. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-26  device: change logging interface to use functions  (Josh Bleecher Snyder)
This commit overhauls wireguard-go's logging. The primary, motivating change is to use a function instead of a *log.Logger as the basic unit of logging. Using functions provides a lot more flexibility for people to bring their own logging system. It also introduces logging helper methods on Device. These reduce line noise at the call site. They also allow for log functions to be nil; when nil, instead of generating a log line and throwing it away, we don't bother generating it at all. This spares allocation and pointless work. This is a breaking change, although the fix required of clients is fairly straightforward. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
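As a rough sketch of the function-based shape described here (the names below are illustrative and may not match the exact wireguard-go identifiers):

package main

import "log"

// LogFunc is the basic unit of logging: any printf-style function works.
type LogFunc func(format string, args ...interface{})

// Logger bundles the log functions a device would use.
type Logger struct {
    Verbosef LogFunc
    Errorf   LogFunc
}

type Device struct {
    log *Logger
}

// verbosef is a helper in the spirit of the commit: when the function is
// nil, the log line is never formatted, so nothing is allocated for it.
func (d *Device) verbosef(format string, args ...interface{}) {
    if d.log != nil && d.log.Verbosef != nil {
        d.log.Verbosef(format, args...)
    }
}

func main() {
    d := &Device{log: &Logger{Verbosef: log.Printf}}
    d.verbosef("peer %s: handshake complete", "abcd1234")
}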
2021-01-26  device: fix shadowing of err in IpcHandle  (Josh Bleecher Snyder)
The declaration of err in nextByte, err := buffered.ReadByte shadows the declaration of err in op, err := buffered.ReadString('\n') above. As a result, the assignments to err in err = ipcErrorf(ipc.IpcErrorInvalid, "trailing character in UAPI get: %c", nextByte) and in err = device.IpcGetOperation(buffered.Writer) do not modify the correct err variable. Found by staticcheck. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
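A minimal standalone reproduction of this kind of shadowing (identifiers are illustrative, not the actual wireguard-go code):

package main

import (
    "bufio"
    "errors"
    "fmt"
    "strings"
)

func demo() error {
    buffered := bufio.NewReader(strings.NewReader("get=1\nX"))

    op, err := buffered.ReadString('\n')
    if op == "get=1\n" {
        // The := below declares a NEW err that shadows the outer one, so
        // the assignment inside this block never reaches the caller.
        // Declaring nextByte separately and using plain = would fix it.
        nextByte, err := buffered.ReadByte()
        if nextByte != '\n' {
            err = errors.New("trailing character in UAPI get")
        }
        _ = err // the shadowed err is silently dropped here
    }
    return err // always the outer err, never the one assigned above
}

func main() {
    fmt.Println(demo()) // prints <nil>, even though an error "occurred"
}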
2021-01-26  device: remove extra error arg  (Josh Bleecher Snyder)
Caught by go vet. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-26  device: reduce allocs in Device.IpcGetOperation  (Brad Fitzpatrick)
Plenty more to go, but a start:

name       old time/op    new time/op    delta
UAPIGet-4  6.37µs ± 2%    5.56µs ± 1%    -12.70%  (p=0.000 n=8+8)

name       old alloc/op   new alloc/op   delta
UAPIGet-4  1.98kB ± 0%    1.22kB ± 0%    -38.71%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
UAPIGet-4  42.0 ± 0%      35.0 ± 0%      -16.67%  (p=0.000 n=10+10)

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2021-01-26  device: add benchmark for UAPI Device.IpcGetOperation  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: allow pipelining UAPI requests  (Jason A. Donenfeld)
The original spec ends with \n\n especially for this reason. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
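A rough sketch of why that framing helps, assuming the documented key=value lines terminated by a blank line (readRequest is a hypothetical helper, not the real parser):

package main

import (
    "bufio"
    "fmt"
    "strings"
)

// readRequest pulls exactly one request off the stream: a run of key=value
// lines ended by an empty line, leaving any pipelined requests untouched.
func readRequest(r *bufio.Reader) ([]string, error) {
    var lines []string
    for {
        line, err := r.ReadString('\n')
        if err != nil {
            return nil, err
        }
        if line == "\n" { // blank line: end of this request
            return lines, nil
        }
        lines = append(lines, strings.TrimRight(line, "\n"))
    }
}

func main() {
    // Two pipelined requests on a single stream.
    stream := bufio.NewReader(strings.NewReader("get=1\n\nset=1\nfwmark=0\n\n"))
    for i := 0; i < 2; i++ {
        req, _ := readRequest(stream)
        fmt.Println(req)
    }
}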
2021-01-25  device: serialize access to IpcSetOperation  (Josh Bleecher Snyder)
Interleaved IpcSetOperations would spell trouble. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: simplify handling of IPC set endpoint  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: remove close processing fwmark  (Josh Bleecher Snyder)
Also, a behavior change: Stop treating a blank value as 0. It's not in the spec. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: remove unnecessary comment  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: introduce new IPC error message for unknown error  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: correct IPC error number for I/O errors  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: simplify IpcHandle error handling  (Josh Bleecher Snyder)
Unify the handling of unexpected UAPI errors. The comment that says "should never happen" is incorrect; this could happen due to I/O errors. Correct it. Change error message capitalization for consistency. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: split IpcSetOperation into parts  (Josh Bleecher Snyder)
The goal of this change is to make the structure of IpcSetOperation easier to follow. IpcSetOperation contains a small state machine: It starts by configuring the device, then shifts to configuring one peer at a time. Having the code all in one giant method obscured that structure. Split out the parts into helper functions and encapsulate the peer state. This makes the overall structure more apparent. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
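A compact sketch of that two-phase structure, with illustrative names rather than the real wireguard-go helpers:

package main

import (
    "fmt"
    "strings"
)

// ipcSet handles device-level keys until the first public_key line, then
// accumulates keys for one peer at a time, committing each peer as the next
// one begins (or at the end of the request).
func ipcSet(lines []string) {
    device := map[string]string{}
    var peer map[string]string

    commit := func() {
        if peer != nil {
            fmt.Println("peer configured:", peer)
        }
    }

    for _, line := range lines {
        k, v, _ := strings.Cut(line, "=")
        if k == "public_key" {
            commit() // finish the previous peer before starting a new one
            peer = map[string]string{k: v}
            continue
        }
        if peer != nil {
            peer[k] = v // peer-scoped key
        } else {
            device[k] = v // device-scoped key
        }
    }
    commit()
    fmt.Println("device configured:", device)
}

func main() {
    ipcSet([]string{
        "private_key=abc",
        "public_key=peer1",
        "allowed_ip=10.0.0.2/32",
    })
}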
2021-01-25  device: expand IPCError  (Josh Bleecher Snyder)
Expand IPCError to contain a wrapped error, and add a helper to make constructing such errors easier. Add a defer-based "log on returned error" to IpcSetOperation. This lets us simplify all of the error return paths. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
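A minimal sketch of a wrapped IPC error plus the defer-based logging, with illustrative names and codes:

package main

import (
    "errors"
    "fmt"
    "log"
)

// IPCError carries a status code and wraps the underlying cause.
type IPCError struct {
    code int64
    err  error
}

func (e IPCError) Error() string { return fmt.Sprintf("ipc error %d: %v", e.code, e.err) }
func (e IPCError) Unwrap() error { return e.err }

// ipcErrorf is the kind of helper the commit describes: it builds a wrapped
// IPCError from a format string.
func ipcErrorf(code int64, format string, args ...interface{}) error {
    return IPCError{code: code, err: fmt.Errorf(format, args...)}
}

// ipcSetOperation logs any returned error in exactly one place, so each
// early return can simply return the error.
func ipcSetOperation() (err error) {
    defer func() {
        if err != nil {
            log.Printf("UAPI error: %v", err)
        }
    }()
    return ipcErrorf(-22, "invalid key: %q", "bogus")
}

func main() {
    var ipcErr IPCError
    fmt.Println(errors.As(ipcSetOperation(), &ipcErr), ipcErr.code)
}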
2021-01-25  device: remove dead code  (Josh Bleecher Snyder)
If device.NewPeer returns a nil error, then the returned peer is always non-nil. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-25  device: return errors from ipc scanner  (Josh Bleecher Snyder)
The code as written will drop any read errors on the floor. Fix that. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: allow compiling with Go 1.15  (Jason A. Donenfeld)
Until we depend on Go 1.16 (which isn't released yet), alias our own variable to the private member of the net package. This will allow an easy find replace to make this go away when we eventually switch to 1.16. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-20  device: remove unused fields from DummyDatagram and DummyBind  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: remove unused trie test code  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: remove unnecessary zeroing  (Josh Bleecher Snyder)
Newly allocated objects are already zeroed. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: remove QueueInboundElement.dropped  (Josh Bleecher Snyder)
Now that we block when enqueueing to the decryption queue, there is only one case in which we "drop" an inbound element: when decryption fails. We can use a simple, obvious, sync-free sentinel for that, elem.packet == nil. Also, we can return the message buffer to the pool slightly later, which further simplifies the code. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
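A simplified sketch of the nil-packet sentinel (the real queue element carries far more state than this):

package main

import "fmt"

type queueInboundElement struct {
    packet []byte // nil marks an element whose decryption failed
}

func handle(elem *queueInboundElement) {
    if elem.packet == nil {
        fmt.Println("dropped: decryption failed")
        return
    }
    fmt.Printf("forwarding %d bytes\n", len(elem.packet))
}

func main() {
    handle(&queueInboundElement{packet: []byte{0x45, 0x00}})
    handle(&queueInboundElement{packet: nil})
}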
2021-01-20  device: remove QueueOutboundElement.dropped  (Josh Bleecher Snyder)
If we block when enqueuing encryption elements to the queue, then we never drop them. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: check returned errors from NewPeer in TestNoiseHandshake  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: remove selects from encrypt/decrypt/inbound/outbound enqueuing  (Josh Bleecher Snyder)
Block instead. Backpressure here is fine, probably preferable. This reduces code complexity. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
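The two enqueue styles, sketched side by side (the channel here stands in for the real encryption/decryption queues):

package main

import "fmt"

func main() {
    queue := make(chan int, 2)

    // Before: non-blocking enqueue that drops elements under pressure.
    select {
    case queue <- 1:
    default:
        fmt.Println("queue full, element dropped")
    }

    // After: just block. Backpressure propagates to the producer and the
    // drop-handling code disappears.
    queue <- 2

    fmt.Println(<-queue, <-queue)
}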
2021-01-20  device: put handshake buffer in pool in FlushPacketQueues  (Josh Bleecher Snyder)
This appears to have been an oversight. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-20  device: use channel close to shut down and drain decryption channel  (Josh Bleecher Snyder)
This is similar to commit e1fa1cc5560020e67d33aa7e74674853671cf0a0, but for the decryption channel. It is an alternative fix to f9f655567930a4cd78d40fa4ba0d58503335ae6a. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
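The close-to-shut-down pattern in isolation: closing the channel both signals shutdown and lets the consumer drain whatever is still queued.

package main

import (
    "fmt"
    "sync"
)

func main() {
    queue := make(chan int, 8)
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        for elem := range queue { // exits only once the queue is drained
            fmt.Println("decrypted", elem)
        }
    }()

    for i := 0; i < 3; i++ {
        queue <- i
    }
    close(queue) // shutdown signal; no separate stop channel needed
    wg.Wait()
}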
2021-01-08  device: receive: do not exit immediately on transient UDP receive errors  (Jason A. Donenfeld)
Some users report seeing lines like:

> Routine: receive incoming IPv4 - stopped

popping up unexpectedly. Let's sleep and try again before failing, and also log
the error, and perhaps we'll eventually understand this situation better in
future versions. Because we have to distinguish between the socket being closed
explicitly and whatever error this is, we bump the module to require Go 1.16.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
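A rough sketch of that retry behaviour (Go 1.16+ for net.ErrClosed); the real receive loop does considerably more than this:

package main

import (
    "errors"
    "log"
    "net"
    "time"
)

func receiveLoop(conn *net.UDPConn) {
    buf := make([]byte, 1500)
    for {
        n, _, err := conn.ReadFromUDP(buf)
        if err != nil {
            if errors.Is(err, net.ErrClosed) {
                return // deliberate shutdown: exit quietly
            }
            // Transient error: log it, back off briefly, and keep the
            // routine alive instead of exiting immediately.
            log.Printf("receive error: %v", err)
            time.Sleep(100 * time.Millisecond)
            continue
        }
        log.Printf("received %d bytes", n)
    }
}

func main() {
    conn, err := net.ListenUDP("udp", &net.UDPAddr{})
    if err != nil {
        log.Fatal(err)
    }
    go receiveLoop(conn)
    time.Sleep(50 * time.Millisecond)
    conn.Close() // receiveLoop sees net.ErrClosed and returns
    time.Sleep(50 * time.Millisecond)
}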
2021-01-07  device: receive: drain decryption queue before exiting RoutineDecryption  (Jason A. Donenfeld)
It's possible for RoutineSequentialReceiver to try to lock an elem after RoutineDecryption has exited. Before this meant we didn't then unlock the elem, so the whole program deadlocked. As well, it looks like the flush code (which is now potentially unnecessary?) wasn't properly dropping the buffers for the not-already-dropped case. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-07  device: add latency and throughput benchmarks  (Josh Bleecher Snyder)
These obviously don't perfectly capture real world performance, in which syscalls and network links have a significant impact. Nevertheless, they capture some of the internal performance factors, and they're easy and convenient to work with. Hat tip to Avery Pennarun for help designing the throughput benchmark. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-07  device: use LogLevelError for benchmarking  (Josh Bleecher Snyder)
This keeps the output minimal and focused on the benchmark results. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-07  device: make test infrastructure usable with benchmarks  (Josh Bleecher Snyder)
Switch from *testing.T to testing.TB. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-07  all: use ++ to increment  (Josh Bleecher Snyder)
Make the code slightly more idiomatic. No functional changes. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-07  device: remove unnecessary zeroing  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-07  device: call wg.Add outside the goroutine  (Josh Bleecher Snyder)
One of the first rules of WaitGroups is that you call wg.Add outside of a goroutine, not inside it. Fix this embarrassing mistake. This prevents an extremely rare race condition (2 per 100,000 runs) which could occur when attempting to start a new peer concurrently with shutting down a device. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
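The rule in miniature:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    // Racy: Wait below may observe a zero counter before the goroutine
    // has had a chance to call Add.
    //   go func() { wg.Add(1); defer wg.Done(); ... }()

    // Correct: the counter is incremented before the goroutine exists.
    wg.Add(1)
    go func() {
        defer wg.Done()
        fmt.Println("working")
    }()

    wg.Wait()
}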
2021-01-07  device: remove QueueInboundElement leak with stopped peers  (Josh Bleecher Snyder)
This is particularly problematic on mobile, where there is a fixed number of elements. If most of them leak, it'll impact performance; if all of them leak, the device will permanently deadlock. I have a test that detects element leaks, which is how I found this one. There are some remaining leaks that I have not yet tracked down, but this is the most prominent by far. I will commit the test when it passes reliably. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-01-07  device: simplify UAPI helper methods  (Josh Bleecher Snyder)
bufio is not required. strings.Builder is cheaper than bytes.Buffer for constructing strings. io.Writer is more flexible than io.StringWriter, and just as cheap (when used with io.WriteString). Run gofmt. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
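A small sketch of the io.Writer-based helper style (sendf is a hypothetical name):

package main

import (
    "fmt"
    "io"
    "strings"
)

// sendf writes to any io.Writer, so the same helper serves a
// strings.Builder in tests and a socket in production.
func sendf(w io.Writer, format string, args ...interface{}) {
    fmt.Fprintf(w, format, args...)
}

func main() {
    var sb strings.Builder // cheaper than bytes.Buffer for building strings
    sendf(&sb, "listen_port=%d\n", 51820)
    io.WriteString(&sb, "fwmark=0\n") // uses WriteString, no []byte copy
    fmt.Print(sb.String())
}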
2021-01-07  device: fix alignment of peer stats member  (Jason A. Donenfeld)
This was shifted by 2 bytes when making persistent keepalive into a u32. Fix it by placing it after the aligned region. Fixes: e739ff7 ("device: fix persistent_keepalive_interval data races") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
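A sketch of the alignment concern behind this fix: sync/atomic's 64-bit operations need 8-byte-aligned fields on 32-bit platforms, so atomically updated counters stay at the front of the struct rather than after smaller fields that could shift them. Field names are illustrative.

package main

import (
    "fmt"
    "sync/atomic"
    "unsafe"
)

type peerStats struct {
    // 64-bit atomics first, so they remain 8-byte aligned even on 386/arm.
    txBytes uint64
    rxBytes uint64

    // Smaller fields go after the aligned region.
    persistentKeepaliveInterval uint32
}

func main() {
    var s peerStats
    atomic.AddUint64(&s.txBytes, 1500)
    fmt.Println(unsafe.Offsetof(s.rxBytes)) // 8: still a multiple of 8
}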
2021-01-07  device: add UAPI helper methods  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>