2021-02-09  conn: use errors.Is for unwrapping  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
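For context, errors.Is matches a sentinel anywhere in a wrapped error chain, which is what makes it preferable to direct equality checks. A minimal sketch, not code from the conn package (the path and wrapping message are made up):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    func main() {
        _, err := os.Open("/path/that/does/not/exist") // illustrative path
        wrapped := fmt.Errorf("loading peer config: %w", err)

        // errors.Is inspects the whole wrapped chain, so the sentinel is
        // matched even through the fmt.Errorf %w wrapping; a plain ==
        // comparison would not find it.
        if errors.Is(wrapped, os.ErrNotExist) {
            fmt.Println("underlying cause: file does not exist")
        }
    }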
2021-02-09  device: handshake routine writes into encryption queue  (Jason A. Donenfeld)
Since RoutineHandshake calls peer.SendKeepalive(), it potentially is a writer into the encryption queue, so we need to bump the wg count. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: make RoutineReadFromTUN keep encryption queue alive  (Josh Bleecher Snyder)
RoutineReadFromTUN can trigger a call to SendStagedPackets. SendStagedPackets attempts to protect against sending on the encryption queue by checking peer.isRunning and device.isClosed. However, those are subject to TOCTOU bugs. If that happens, we get this:

    goroutine 1254 [running]:
    golang.zx2c4.com/wireguard/device.(*Peer).SendStagedPackets(0xc000798300)
            .../wireguard-go/device/send.go:321 +0x125
    golang.zx2c4.com/wireguard/device.(*Device).RoutineReadFromTUN(0xc000014780)
            .../wireguard-go/device/send.go:271 +0x21c
    created by golang.zx2c4.com/wireguard/device.NewDevice
            .../wireguard-go/device/device.go:315 +0x298

Fix this with a simple, big hammer: Keep the encryption queue alive as long as it might be written to.

Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
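The pattern that this fix and the previous one rely on is reference-counting the queue with a WaitGroup and only closing the channel once every potential writer has released its reference. A minimal sketch of the idea, with placeholder names (refQueue, elem) rather than the actual device types:

    package main

    import (
        "fmt"
        "sync"
    )

    // elem stands in for a queued packet element.
    type elem struct{ data []byte }

    // refQueue stays open for as long as any writer holds a reference.
    type refQueue struct {
        c  chan *elem
        wg sync.WaitGroup
    }

    func newRefQueue() *refQueue {
        q := &refQueue{c: make(chan *elem, 1024)}
        q.wg.Add(1) // reference held by the queue's creator
        go func() {
            q.wg.Wait() // all writers done: closing can no longer race with a send
            close(q.c)
        }()
        return q
    }

    // Each routine that might send on q.c (the TUN reader, the handshake
    // routine via a keepalive, ...) takes a reference before it starts and
    // releases it only after its last possible send.
    func (q *refQueue) addWriter()  { q.wg.Add(1) }
    func (q *refQueue) writerDone() { q.wg.Done() }

    func main() {
        q := newRefQueue()
        q.addWriter()
        go func() {
            q.c <- &elem{data: []byte{0x1}}
            q.writerDone()
        }()
        fmt.Println(len((<-q.c).data)) // consumer keeps reading until close
        q.writerDone()                 // creator releases its reference
        for range q.c {
        } // drain until the channel is closed
    }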
2021-02-09  conn: try harder to have v4 and v6 ports agree  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: only allocate peer queues once  (Josh Bleecher Snyder)
This serves two purposes. First, it makes repeatedly stopping then starting a peer cheaper. Second, it prevents a data race observed accessing the queues. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-09  device: clarify device.state.state docs (again)  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-09  device: run fewer iterations in TestUpDown  (Josh Bleecher Snyder)
The high iteration count was useful when TestUpDown was the nexus of new bugs to investigate. Now that it has stabilized, that's less valuable. And it slows down running the tests and crowds out other tests. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-09  device: run fewer trials in TestWaitPool when race detector enabled  (Josh Bleecher Snyder)
On a many-core machine with the race detector enabled, this test can take several minutes to complete. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-09  device: remove nil elem check in finalizers  (Josh Bleecher Snyder)
This is not necessary, and removing it speeds up detection of UAF bugs. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-09  device: rename unsafeRemovePeer to removePeerLocked  (Jason A. Donenfeld)
This matches the new naming scheme of upLocked and downLocked. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: remove deviceStateNew  (Jason A. Donenfeld)
It's never used and we won't have a use for it. Also, move to running stringer via go run, for those without GOPATHs. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: fix comment typo and shorten state.mu.Lock to state.Lock  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: fix typo in comment  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: fix alignment on 32-bit machines and test for it  (Jason A. Donenfeld)
The test previously checked the offset within a substruct, not the offset within the allocated struct, so this adds the two together. It then fixes an alignment crash on 32-bit machines. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
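A sketch of what the corrected check computes, using an invented struct layout rather than the real device types: the alignment that matters for 64-bit atomics is the field's offset from the start of the allocation, so the outer and inner offsets have to be added together.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Illustrative layout: a 64-bit counter nested inside a sub-struct.
    type stats struct {
        bytes uint64 // updated with sync/atomic; needs 8-byte alignment
    }

    type peer struct {
        _     uint32
        stats stats
    }

    func main() {
        // Offset within the allocated struct = outer offset + inner offset.
        // On common 32-bit platforms this sum can come out to 4, which is
        // exactly the misalignment the corrected test is meant to catch.
        off := unsafe.Offsetof(peer{}.stats) + unsafe.Offsetof(stats{}.bytes)
        fmt.Printf("offset %d, 8-byte aligned: %v\n", off, off%8 == 0)
    }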
2021-02-09  device: do not log on idempotent device state change  (Jason A. Donenfeld)
Part of being actually idempotent is that we shouldn't penalize code that takes advantage of this property with a log splat. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: do not attach finalizer to non-returned object  (Jason A. Donenfeld)
Before, the code attached a finalizer to an object that wasn't returned, resulting in immediate garbage collection. Instead return the actual pointer. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
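A sketch of the general rule the fix follows, with a made-up type and finalizer body: the finalizer must be attached to the pointer that is actually returned and retained, otherwise the object it is attached to is garbage immediately.

    package main

    import (
        "fmt"
        "runtime"
    )

    type buffer struct{ data [1500]byte }

    func newBuffer() *buffer {
        b := new(buffer)
        // Attach the finalizer to the very pointer we return. Attaching it to
        // some other freshly allocated object that nothing references would
        // let that object, and its finalizer, be collected right away.
        runtime.SetFinalizer(b, func(b *buffer) {
            fmt.Println("buffer reclaimed")
        })
        return b
    }

    func main() {
        b := newBuffer()
        _ = b
        b = nil
        runtime.GC() // the finalizer may run at some point after this
    }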
2021-02-09  device: lock elem in autodraining queue before freeing  (Jason A. Donenfeld)
Without this, we wind up freeing packets that the encryption/decryption queues still have, resulting in a UaF. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: remove listen port race in tests  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-09  device: generate test keys on the fly  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-08  device: remove mutex from Peer send/receive  (Josh Bleecher Snyder)
The immediate motivation for this change is an observed deadlock.

1. A goroutine calls peer.Stop. That calls peer.queue.Lock().
2. Another goroutine is in RoutineSequentialReceiver. It receives an elem from peer.queue.inbound.
3. The peer.Stop goroutine calls close(peer.queue.inbound), close(peer.queue.outbound), and peer.stopping.Wait(). It blocks waiting for RoutineSequentialReceiver and RoutineSequentialSender to exit.
4. The RoutineSequentialReceiver goroutine calls peer.SendStagedPackets(). SendStagedPackets attempts peer.queue.RLock(). That blocks forever because the peer.Stop goroutine holds a write lock on that mutex.

A background motivation for this change is that it can be expensive to have a mutex in the hot code path of RoutineSequential*.

The mutex was necessary to avoid attempting to send elems on a closed channel. This commit removes that danger by never closing the channel. Instead, we send a sentinel nil value on the channel to indicate to the receiver that it should exit. The only problem with this is that if the receiver exits, we could write an elem into the channel which would never get received. If it never gets received, it cannot get returned to the device pools. To work around this, we use a finalizer. When the channel can be GC'd, the finalizer drains any remaining elements from the channel and restores them to the device pool.

After that change, peer.queue.RWMutex no longer makes sense where it is. It is only used to prevent concurrent calls to Start and Stop. Move it to a more sensible location and make it a plain sync.Mutex.

Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
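A condensed sketch of the two mechanisms described above, using placeholder names (drainingQueue, elem) rather than the real device types: the receiver exits on a nil sentinel instead of a channel close, and a finalizer on the queue wrapper returns any stranded elements to the pool once the queue becomes unreachable.

    package main

    import (
        "runtime"
        "sync"
    )

    type elem struct{ buf []byte }

    var elemPool = sync.Pool{New: func() interface{} { return new(elem) }}

    type drainingQueue struct {
        c chan *elem
    }

    func newDrainingQueue() *drainingQueue {
        q := &drainingQueue{c: make(chan *elem, 128)}
        // A sender racing with a stopped receiver could strand elements in
        // the channel. When the queue wrapper becomes unreachable, drain
        // whatever is left back into the pool.
        runtime.SetFinalizer(q, func(q *drainingQueue) {
            for {
                select {
                case e := <-q.c:
                    if e != nil {
                        elemPool.Put(e)
                    }
                default:
                    return
                }
            }
        })
        return q
    }

    // stop asks the receive loop to exit. The channel is never closed, so a
    // late sender can never panic with "send on closed channel".
    func (q *drainingQueue) stop() { q.c <- nil }

    func receiveLoop(q *drainingQueue, wg *sync.WaitGroup) {
        defer wg.Done()
        for e := range q.c {
            if e == nil {
                return // nil sentinel: stop requested
            }
            // ... process e ...
            elemPool.Put(e)
        }
    }

    func main() {
        q := newDrainingQueue()
        var wg sync.WaitGroup
        wg.Add(1)
        go receiveLoop(q, &wg)

        q.c <- elemPool.Get().(*elem) // normal delivery
        q.stop()                      // sentinel instead of close(q.c)
        wg.Wait()
    }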
2021-02-08  device: create channels.go  (Josh Bleecher Snyder)
We have a bunch of stupid channel tricks, and I'm about to add more. Give them their own file. This commit is 100% code movement. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-08  device: print direction when ping transit fails  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-08  device: separate timersInit from timersStart  (Josh Bleecher Snyder)
timersInit sets up the timers. It need only be done once per peer. timersStart does the work to prepare the timers for a newly running peer. It needs to be done every time a peer starts. Separate the two and call them in the appropriate places. This prevents data races on the peer's timers fields when starting and stopping peers. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
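A rough sketch of the split, with invented field and parameter names: allocate the timers once, in a stopped state, and only (re)arm them each time the peer starts, so starting or stopping a peer never re-creates timer objects that another goroutine might be touching.

    package main

    import (
        "fmt"
        "time"
    )

    type peerTimers struct {
        keepalive         *time.Timer
        handshakeAttempts uint32
    }

    // timersInit runs once, at peer creation: allocate timers, leave them stopped.
    func (t *peerTimers) timersInit(onKeepalive func()) {
        t.keepalive = time.AfterFunc(time.Hour, onKeepalive)
        t.keepalive.Stop()
    }

    // timersStart runs every time the peer starts: reset counters, arm timers.
    func (t *peerTimers) timersStart(interval time.Duration) {
        t.handshakeAttempts = 0
        t.keepalive.Reset(interval)
    }

    func main() {
        var t peerTimers
        t.timersInit(func() { fmt.Println("keepalive due") })
        t.timersStart(50 * time.Millisecond)
        time.Sleep(100 * time.Millisecond) // let the timer fire once
    }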
2021-02-08  device: don't track device interface state in RoutineTUNEventReader  (Josh Bleecher Snyder)
We already track this state elsewhere. No need to duplicate. The cost of calling changeState is negligible. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-08  device: improve MTU change handling  (Josh Bleecher Snyder)
The old code silently accepted negative MTUs. It also set MTUs above the maximum. It also had hard-to-follow, deeply nested conditionals. Add more paranoid handling, and make the code more straight-line. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
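A straight-line sketch of the kind of validation described; the bounds here are placeholders, not the values used by the tun or device packages:

    package main

    import "fmt"

    // Illustrative bounds only.
    const (
        minMTU = 576
        maxMTU = 65535
    )

    // clampMTU rejects nonsensical values and clamps the rest into range,
    // keeping the logic straight-line instead of deeply nested.
    func clampMTU(requested int) (int, error) {
        if requested <= 0 {
            return 0, fmt.Errorf("invalid MTU %d", requested)
        }
        if requested < minMTU {
            return minMTU, nil
        }
        if requested > maxMTU {
            return maxMTU, nil
        }
        return requested, nil
    }

    func main() {
        for _, m := range []int{-1, 100, 1420, 70000} {
            mtu, err := clampMTU(m)
            fmt.Println(m, "->", mtu, err)
        }
    }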
2021-02-08  device: remove device.state.stopping from RoutineTUNEventReader  (Josh Bleecher Snyder)
The TUN event reader does three things: change the MTU, bring the device up, and bring the device down. Changing the MTU after the device is closed does no harm. Device up and device down don't make sense after the device is closed, but we can check that condition before proceeding with changeState. There's thus no reason to block device.Close on RoutineTUNEventReader exiting. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-08  device: overhaul device state management  (Josh Bleecher Snyder)
This commit simplifies device state management. It creates a single unified state variable and documents its semantics.

It also makes state changes more atomic. As an example of the sort of bug that occurred due to non-atomic state changes, the following sequence of events used to occur approximately every 2.5 million test runs:

* RoutineTUNEventReader received an EventDown event.
* It called device.Down, which called device.setUpDown.
* That set device.state.changing, but did not yet attempt to lock device.state.Mutex.
* Test completion called device.Close.
* device.Close locked device.state.Mutex.
* device.Close blocked on a call to device.state.stopping.Wait.
* device.setUpDown then attempted to lock device.state.Mutex and blocked.

Deadlock results. setUpDown cannot progress because device.state.Mutex is locked. Until setUpDown returns, RoutineTUNEventReader cannot call device.state.stopping.Done. Until device.state.stopping.Done gets called, device.state.stopping.Wait is blocked. As long as device.state.stopping.Wait is blocked, device.state.Mutex cannot be unlocked. This commit fixes that deadlock by holding device.state.mu when checking that the device is not closed.

Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
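A toy sketch of the unified-state idea with invented names: a single state word that is read atomically, with every transition checking the current state while holding the same mutex, so "is the device closed?" and "act on it" can no longer be separated by a racing close.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    type deviceState uint32

    const (
        stateDown deviceState = iota
        stateUp
        stateClosed
    )

    // device holds one unified state word. Reads are lock-free; transitions
    // hold mu so that checking the current state and acting on it is atomic.
    type device struct {
        mu    sync.Mutex
        state uint32 // a deviceState, stored as uint32 for sync/atomic
    }

    func (d *device) loadState() deviceState {
        return deviceState(atomic.LoadUint32(&d.state))
    }

    // changeState is idempotent (asking for the current state is a no-op,
    // not an error) and refuses any transition out of the closed state.
    func (d *device) changeState(want deviceState) {
        d.mu.Lock()
        defer d.mu.Unlock()
        cur := d.loadState()
        if cur == stateClosed || cur == want {
            return
        }
        atomic.StoreUint32(&d.state, uint32(want))
    }

    func main() {
        d := new(device)
        d.changeState(stateUp)
        d.changeState(stateClosed)
        d.changeState(stateUp)                    // ignored: already closed
        fmt.Println(d.loadState() == stateClosed) // true
    }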
2021-02-08  device: remove unnecessary zeroing in peer.SendKeepalive  (Josh Bleecher Snyder)
elem.packet is always already nil. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-08  device: remove device.state.stopping from RoutineHandshake  (Josh Bleecher Snyder)
It is no longer necessary. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-08  device: remove device.state.stopping from RoutineDecryption  (Josh Bleecher Snyder)
It is no longer necessary, as of 454de6f3e64abd2a7bf9201579cd92eea5280996 (device: use channel close to shut down and drain decryption channel). Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-04  main: add back version file  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-03  tai64n: add string representation for error messages  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-03  device: take peer handshake when reinitializing last sent handshake  (Jason A. Donenfeld)
This papers over other unrelated races, unfortunately. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-03  device: fix goroutine leak test  (Josh Bleecher Snyder)
The leak test had rare flakes. If a system goroutine started at just the wrong moment, you'd get a false positive. Instead of looping until the goroutines look good and then checking, exit completely as soon as the number of goroutines looks good. Also, check more frequently, in an attempt to complete faster. Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-03  device: add up/down stress test  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-03  device: pass cfg strings around in tests instead of reader  (Jason A. Donenfeld)
This makes it easier to tag things onto the end manually for quick hacks. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-03  device: benchmark the waitpool to compare it to the prior channels  (Jason A. Donenfeld)
Here is the old implementation:

    type WaitPool struct {
        c chan interface{}
    }

    func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
        p := &WaitPool{c: make(chan interface{}, max)}
        for i := uint32(0); i < max; i++ {
            p.c <- new()
        }
        return p
    }

    func (p *WaitPool) Get() interface{} {
        return <-p.c
    }

    func (p *WaitPool) Put(x interface{}) {
        p.c <- x
    }

It performs worse than the new one:

    name         old time/op  new time/op  delta
    WaitPool-16  16.4µs ± 5%  15.1µs ± 3%  -7.86%  (p=0.008 n=5+5)

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-02-03  device: test that we do not leak goroutines  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-03  device: tie encryption queue lifetime to the peers that write to it  (Josh Bleecher Snyder)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
2021-02-02  device: use a waiting sync.Pool instead of a channel  (Jason A. Donenfeld)
Channels are FIFO which means we have guaranteed cache misses. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
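A sketch of how a "waiting" pool can be layered on sync.Pool: sync.Pool tends to hand back recently returned items (better cache warmth than a FIFO channel), while a counter plus a condition variable blocks Get once max elements are outstanding. Illustrative only; the real implementation lives in the device package.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // WaitPool hands out at most max objects at a time; Get blocks when the
    // limit is reached and resumes when Put returns one.
    type WaitPool struct {
        pool  sync.Pool
        cond  *sync.Cond
        lock  sync.Mutex
        count uint32
        max   uint32
    }

    func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
        p := &WaitPool{pool: sync.Pool{New: new}, max: max}
        p.cond = sync.NewCond(&p.lock)
        return p
    }

    func (p *WaitPool) Get() interface{} {
        if p.max != 0 {
            p.lock.Lock()
            for atomic.LoadUint32(&p.count) >= p.max {
                p.cond.Wait()
            }
            atomic.AddUint32(&p.count, 1)
            p.lock.Unlock()
        }
        return p.pool.Get()
    }

    func (p *WaitPool) Put(x interface{}) {
        p.pool.Put(x)
        if p.max == 0 {
            return
        }
        atomic.AddUint32(&p.count, ^uint32(0)) // decrement
        p.cond.Signal()
    }

    func main() {
        p := NewWaitPool(2, func() interface{} { return new([1500]byte) })
        a := p.Get()
        b := p.Get()
        p.Put(a) // frees a slot; without this a third Get would block
        c := p.Get()
        _, _ = b, c
        fmt.Println("done")
    }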
2021-01-29  device: reduce number of append calls when padding  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-29  device: use int64 instead of atomic.Value for time stamp  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
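For illustration (field and method names invented), the idea is that a nanosecond timestamp fits in an int64, so it can be read and written with plain sync/atomic operations instead of boxing a time.Time in an atomic.Value:

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    type peer struct {
        lastSentNano int64 // Unix nanoseconds, accessed only via sync/atomic
    }

    func (p *peer) markSent() {
        atomic.StoreInt64(&p.lastSentNano, time.Now().UnixNano())
    }

    func (p *peer) sentWithin(d time.Duration) bool {
        last := atomic.LoadInt64(&p.lastSentNano)
        return last != 0 && time.Since(time.Unix(0, last)) <= d
    }

    func main() {
        var p peer
        p.markSent()
        fmt.Println(p.sentWithin(time.Second)) // true
    }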
2021-01-29  device: use new model queues for handshakes  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-29  device: simplify peer queue locking  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: reduce nesting when staging packet  (Jason A. Donenfeld)
Suggested-by: Josh Bleecher Snyder <josh@tailscale.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  global: bump copyright  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: remove version string  (Jason A. Donenfeld)
This is what modules are for, and Go binaries can introspect their own module information. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: do not allow get to run while set runs  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-01-28  device: avoid hex allocations in IpcGet  (Jason A. Donenfeld)

    benchmark            old ns/op  new ns/op  delta
    BenchmarkUAPIGet-16  2872       2157       -24.90%

    benchmark            old allocs  new allocs  delta
    BenchmarkUAPIGet-16  30          18          -40.00%

    benchmark            old bytes  new bytes  delta
    BenchmarkUAPIGet-16  737        256        -65.26%

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
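A sketch of the allocation-avoiding direction (helper names invented, not the actual uapi.go code): hex-encode keys into a small stack array and append straight into the response buffer, rather than calling hex.EncodeToString per field.

    package main

    import (
        "encoding/hex"
        "fmt"
    )

    type key [32]byte

    // appendHexKey appends "name=<hex>\n" to buf without any intermediate
    // string or slice allocation for the hex form.
    func appendHexKey(buf []byte, name string, k *key) []byte {
        var dst [64]byte // 2 hex characters per key byte
        hex.Encode(dst[:], k[:])
        buf = append(buf, name...)
        buf = append(buf, '=')
        buf = append(buf, dst[:]...)
        return append(buf, '\n')
    }

    func main() {
        var k key
        k[0], k[31] = 0xab, 0xcd
        line := appendHexKey(make([]byte, 0, 128), "public_key", &k)
        fmt.Print(string(line))
    }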
2021-01-28  device: the psk is not a chapoly key  (Jason A. Donenfeld)
It's a separate type of key that gets hashed into the chain. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>