|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Unify the handling of unexpected UAPI errors.
The comment that says "should never happen" is incorrect;
this could happen due to I/O errors. Correct it.
Change error message capitalization for consistency.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
The goal of this change is to make the structure
of IpcSetOperation easier to follow.
IpcSetOperation contains a small state machine:
It starts by configuring the device,
then shifts to configuring one peer at a time.
Having the code all in one giant method obscured that structure.
Split out the parts into helper functions and encapsulate the peer state.
This makes the overall structure more apparent.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
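A rough sketch of the state machine this describes; the helper and type names below are illustrative, not necessarily the real wireguard-go ones.
package device

import "strings"

type ipcSetPeer struct {
	publicKey string // identifies the peer currently being configured
}

func ipcSetOperation(lines []string) error {
	var peer *ipcSetPeer // nil while still configuring the device itself
	for _, line := range lines {
		key, value, _ := strings.Cut(line, "=")
		if key == "public_key" {
			// Shift from device configuration to per-peer configuration.
			peer = &ipcSetPeer{publicKey: value}
			continue
		}
		if peer == nil {
			if err := handleDeviceLine(key, value); err != nil {
				return err
			}
			continue
		}
		if err := handlePeerLine(peer, key, value); err != nil {
			return err
		}
	}
	return nil
}

func handleDeviceLine(key, value string) error              { return nil } // listen_port, fwmark, ...
func handlePeerLine(p *ipcSetPeer, key, value string) error { return nil } // endpoint, allowed_ip, ...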
|
|
Expand IPCError to contain a wrapped error,
and add a helper to make constructing such errors easier.
Add a defer-based "log on returned error" to IpcSetOperation.
This lets us simplify all of the error return paths.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
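A hedged sketch of the shape this takes; the names and signatures here are assumptions, not copied from the actual source.
package device

import (
	"fmt"
	"log"
)

// IPCError couples a UAPI status code with a wrapped underlying error.
type IPCError struct {
	code int64
	err  error
}

func (e IPCError) Error() string    { return fmt.Sprintf("IPC error %d: %v", e.code, e.err) }
func (e IPCError) Unwrap() error    { return e.err }
func (e IPCError) ErrorCode() int64 { return e.code }

// ipcErrorf is the constructor helper: a code plus a formatted, wrapped cause.
func ipcErrorf(code int64, format string, args ...interface{}) *IPCError {
	return &IPCError{code: code, err: fmt.Errorf(format, args...)}
}

// IpcSetOperation-style defer: log any returned error in one place,
// so the individual return paths stay simple.
func ipcSetOperation() (err error) {
	defer func() {
		if err != nil {
			log.Printf("UAPI set error: %v", err)
		}
	}()
	// ... device and peer configuration happens here ...
	return nil
}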
|
|
If device.NewPeer returns a nil error,
then the returned peer is always non-nil.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
The code as written will drop any read errors on the floor.
Fix that.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
In order to avoid even the flirtation with passing on these dependencies
to ordinary consumers of wireguard-go, this commit makes a new go.mod
that's entirely separate from the root one.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
The Go linker isn't smart enough to prevent gvisor from being pulled
into modules that use other parts of tun/, due to the types exposed. So,
we put this into its own standalone module.
We use this as an opportunity to introduce some example code as well.
I'm still not happy that this clutters not only this repo's go.sum but also
those of all the other projects that consume it, but it seems like making a
new module inside of this repo will lead to even greater confusion.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Until we depend on Go 1.16 (which isn't released yet), alias our own
variable to the private member of the net package. This will allow an
easy find-and-replace to make this go away when we eventually switch to
1.16.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
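A minimal sketch of one way this can look; whether it mirrors the commit's exact mechanism is an assumption, as is the specific unexported value being mirrored (net's "use of closed network connection" error, exported as net.ErrClosed in Go 1.16).
package conn

import (
	"errors"
	"strings"
)

// NetErrClosed stands in for net.ErrClosed until Go 1.16 is required.
var NetErrClosed = errors.New("use of closed network connection")

func isClosedConnError(err error) bool {
	// Compare by message, since the real sentinel is unexported pre-1.16.
	return err != nil && strings.Contains(err.Error(), NetErrClosed.Error())
}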
|
|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Minor style fix.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Type tcpip.Error does not implement the error interface.
I've filed https://github.com/google/gvisor/issues/5314
to fix this upstream.
Until that is fixed, use %v instead of %w,
to keep vet happy.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
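A self-contained illustration of the vet issue; tcpipErrLike is a hypothetical stand-in for gvisor's tcpip.Error, which (at the time) did not implement error.
package netstack

import "fmt"

// tcpipErrLike stands in for gvisor's tcpip.Error.
type tcpipErrLike struct{ msg string }

func (e *tcpipErrLike) String() string { return e.msg }

func wrapWriteError(terr *tcpipErrLike) error {
	// %w would make `go vet` complain, because terr is not an error;
	// %v formats it fine via its String method.
	return fmt.Errorf("write packet: %v", terr)
}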
|
|
Newly allocated objects are already zeroed.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Now that we block when enqueueing to the decryption queue,
there is only one case in which we "drop" an inbound element,
when decryption fails.
We can use a simple, obvious, sync-free sentinel for that, elem.packet == nil.
Also, we can return the message buffer to the pool slightly later,
which further simplifies the code.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
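A minimal sketch of the sentinel described above; the field and function names are assumed, not necessarily the real ones.
package device

type queueInboundElement struct {
	packet []byte
}

func decryptElem(elem *queueInboundElement) {
	if err := decrypt(elem); err != nil {
		// Decryption failed: nil out the packet. This is the only way an
		// inbound element is "dropped", so no extra synchronization is needed.
		elem.packet = nil
	}
}

func consumeElem(elem *queueInboundElement) {
	if elem.packet == nil {
		return // dropped during decryption
	}
	// ... deliver the packet, then return the message buffer to the pool ...
}

func decrypt(elem *queueInboundElement) error { return nil } // placeholder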
|
|
If we block when enqueuing encryption elements to the queue,
then we never drop them.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
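A small sketch of the change in queueing discipline, with assumed names: the blocking send never drops an element, whereas a select with a default case does when the queue is full.
package device

type queueOutboundElement struct{ packet []byte }

// Before (sketch): drop the element if the queue is full.
func enqueueOrDrop(q chan *queueOutboundElement, elem *queueOutboundElement) bool {
	select {
	case q <- elem:
		return true
	default:
		return false // dropped
	}
}

// After (sketch): block until there is room, applying backpressure upstream.
func enqueueBlocking(q chan *queueOutboundElement, elem *queueOutboundElement) {
	q <- elem
}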
|
|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Caught by 'go vet'.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Block instead. Backpressure here is fine, probably preferable.
This reduces code complexity.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
This appears to have been an oversight.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
This is similar to commit e1fa1cc5560020e67d33aa7e74674853671cf0a0,
but for the decryption channel.
It is an alternative fix to f9f655567930a4cd78d40fa4ba0d58503335ae6a.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
This allows people to initiate connections over WireGuard without any
underlying operating system support.
I'm not crazy about the trash it adds to go.sum, but the code this
actually adds to the binaries seems contained to the gvisor repo.
For the TCP/IP implementation, it uses gvisor. And it borrows some
internals from the Go standard library's resolver in order to bring Dial
and DialContext to tun_net, along with the LookupHost helper function.
This allows for things like HTTP2-over-TLS to work quite well:
package main

import (
	"io"
	"log"
	"net"
	"net/http"

	"golang.zx2c4.com/wireguard/device"
	"golang.zx2c4.com/wireguard/tun"
)

func main() {
	tun, tnet, err := tun.CreateNetTUN([]net.IP{net.ParseIP("192.168.4.29")}, []net.IP{net.ParseIP("8.8.8.8"), net.ParseIP("8.8.4.4")}, 1420)
	if err != nil {
		log.Panic(err)
	}
	dev := device.NewDevice(tun, &device.Logger{log.Default(), log.Default(), log.Default()})
	dev.IpcSet(`private_key=a8dac1d8a70a751f0f699fb14ba1cff7b79cf4fbd8f09f44c6e6a90d0369604f
public_key=25123c5dcd3328ff645e4f2a3fce0d754400d3887a0cb7c56f0267e20fbf3c5b
endpoint=163.172.161.0:12912
allowed_ip=0.0.0.0/0
`)
	dev.Up()
	client := http.Client{
		Transport: &http.Transport{
			DialContext: tnet.DialContext,
		},
	}
	resp, err := client.Get("https://www.zx2c4.com/ip")
	if err != nil {
		log.Panic(err)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Panic(err)
	}
	log.Println(string(body))
}
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Some users report seeing lines like:
> Routine: receive incoming IPv4 - stopped
Popping up unexpectedly. Let's sleep and try again before failing, and
also log the error, and perhaps we'll eventually understand this
situation better in future versions.
Because we have to distinguish between the socket being closed
explicitly and whatever error this is, we bump the module to require Go
1.16.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
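A hedged sketch of the retry idea; the sleep duration and logging are illustrative, but the net.ErrClosed check is the Go 1.16 feature the commit message refers to.
package conn

import (
	"errors"
	"log"
	"net"
	"time"
)

func receiveLoop(read func([]byte) (int, error), buf []byte) {
	for {
		n, err := read(buf)
		if err != nil {
			if errors.Is(err, net.ErrClosed) {
				return // socket closed on purpose; stop the routine
			}
			// Unexpected error: log it, sleep briefly, and try again
			// instead of tearing the receive routine down.
			log.Printf("receive error: %v", err)
			time.Sleep(time.Second)
			continue
		}
		_ = n // ... hand the packet off for processing ...
	}
}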
|
|
If Close is called after ReceiveIPvX, then ReceiveIPvX will block on an
invalid or potentially reused fd.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
It's possible for RoutineSequentialReceiver to try to lock an elem after
RoutineDecryption has exited. Previously, that meant the elem was never
unlocked, so the whole program deadlocked.
As well, it looks like the flush code (which is now potentially
unnecessary?) wasn't properly dropping the buffers for the
not-already-dropped case.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
These obviously don't perfectly capture real world performance,
in which syscalls and network links have a significant impact.
Nevertheless, they capture some of the internal performance factors,
and they're easy and convenient to work with.
Hat tip to Avery Pennarun for help designing the throughput benchmark.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
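A hedged sketch of what a throughput-style benchmark can look like, using an in-memory net.Pipe as a stand-in for the wireguard-go device pair the real benchmarks wire together.
package device_test

import (
	"io"
	"net"
	"testing"
)

func BenchmarkPipeThroughput(b *testing.B) {
	c1, c2 := net.Pipe()
	defer c1.Close()
	defer c2.Close()
	go io.Copy(io.Discard, c2) // drain the far side
	packet := make([]byte, 1000)
	b.SetBytes(int64(len(packet)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := c1.Write(packet); err != nil {
			b.Fatal(err)
		}
	}
}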
|
|
This keeps the output minimal and focused on the benchmark results.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Switch from *testing.T to testing.TB.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Since _32 and _64 aren't valid goarchs, they don't match _GOOS_GOARCH,
and so the existing tags wind up not being restricted to windows-only.
This fixes the problem by adding windows to the tags explicitly. We
could also fix it by calling the files _32_windows or _64_windows, but
that changes the convention with the other single-arch files.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
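A hedged illustration of the fix: a file whose name ends in _32.go gets no implicit GOOS/GOARCH constraint from its name, so the OS has to be named in the build tags alongside the architectures (the package name here is assumed).

// +build windows
// +build 386 arm

package setupapi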
|
|
Persnickety consumers can now do:
func init() {
	tun.WintunPool, _ = wintun.MakePool("Flurp")
	tun.WintunStaticRequestedGUID, _ = windows.GUIDFromString("{5ae2716f-0b3e-4dc4-a8b5-48eba11a6e16}")
}
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Make the code slightly more idiomatic. No functional changes.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
One of the first rules of WaitGroups is that you call wg.Add
outside of a goroutine, not inside it. Fix this embarrassing mistake.
This prevents an extremely rare race condition (2 per 100,000 runs)
which could occur when attempting to start a new peer
concurrently with shutting down a device.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
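The WaitGroup rule in question, in miniature (names illustrative).
package device

import "sync"

func startPeerRoutines(wg *sync.WaitGroup, n int) {
	for i := 0; i < n; i++ {
		wg.Add(1) // correct: Add happens before the goroutine starts
		go func() {
			defer wg.Done()
			// ... peer routine body ...
		}()
	}
	// Calling wg.Add(1) inside the goroutine instead would race with a
	// concurrent wg.Wait during device shutdown, exactly as described above.
}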
|
|
This is particularly problematic on mobile,
where there is a fixed number of elements.
If most of them leak, it'll impact performance;
if all of them leak, the device will permanently deadlock.
I have a test that detects element leaks, which is how I found this one.
There are some remaining leaks that I have not yet tracked down,
but this is the most prominent by far.
I will commit the test when it passes reliably.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
bufio is not required.
strings.Builder is cheaper than bytes.Buffer for constructing strings.
io.Writer is more flexible than io.StringWriter,
and just as cheap (when used with io.WriteString).
Run gofmt.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
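A small illustration of the points above; the function names are assumed.
package device

import (
	"io"
	"strings"
)

// Taking io.Writer keeps the function flexible; io.WriteString stays cheap
// because *strings.Builder implements io.StringWriter.
func writeLines(w io.Writer, lines []string) error {
	for _, line := range lines {
		if _, err := io.WriteString(w, line+"\n"); err != nil {
			return err
		}
	}
	return nil
}

func linesToString(lines []string) string {
	var sb strings.Builder // cheaper than bytes.Buffer when the goal is a string
	writeLines(&sb, lines) // Builder writes never fail, so the error is ignored
	return sb.String()
}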
|
|
This was shifted by 2 bytes when making persistent keepalive into a u32.
Fix it by placing it after the aligned region.
Fixes: e739ff7 ("device: fix persistent_keepalive_interval data races")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
SO_REUSEADDR does not make sense for unicast UDP sockets.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
People are actually hitting this condition, so make it uniform. Also,
change a printf into a println, to match the other conventions.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
|
|
Found by the race detector and existing tests.
To avoid introducing a lock into this hot path,
calculate and cache whether any peers exist.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
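A hedged sketch of the cached flag (field names assumed): keep an atomically readable "no peers" value, updated whenever the peer map changes under its lock, so the hot path never takes that lock.
package device

import (
	"sync"
	"sync/atomic"
)

type peerSet struct {
	sync.RWMutex
	m     map[string]struct{}
	empty int32 // 1 if m is empty; read atomically on hot paths
}

// updateEmpty must be called with the write lock held.
func (ps *peerSet) updateEmpty() {
	var v int32
	if len(ps.m) == 0 {
		v = 1
	}
	atomic.StoreInt32(&ps.empty, v)
}

func (ps *peerSet) isEmpty() bool { return atomic.LoadInt32(&ps.empty) == 1 }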
|
|
Access keypair.sendNonce atomically.
Eliminate one unnecessary initialization to zero.
Mutate handshake.lastSentHandshake with the mutex held.
Co-authored-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
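A minimal illustration of the first and third fixes; the types and field names are stand-ins, not the real ones.
package device

import (
	"sync"
	"sync/atomic"
	"time"
)

type keypairLike struct {
	sendNonce uint64 // always accessed atomically
}

func (k *keypairLike) nextNonce() uint64 {
	return atomic.AddUint64(&k.sendNonce, 1) - 1
}

type handshakeLike struct {
	mutex             sync.RWMutex
	lastSentHandshake time.Time
}

func (h *handshakeLike) markSent(now time.Time) {
	h.mutex.Lock()
	h.lastSentHandshake = now // mutate only with the mutex held
	h.mutex.Unlock()
}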
|
|
They're called elem in most places.
Rename a few local variables to make it consistent.
This makes it easier to grep the code for things like elem.Drop.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
This is a similar treatment to the handling of the encryption
channel found a few commits ago: Use the closing of the channel
to manage goroutine lifetime and shutdown.
It is considerably simpler because there is only a single writer.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
Co-authored-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|
|
The new test introduced in this commit used to deadlock about 1% of the time.
I believe that the deadlock occurs as follows:
* The test completes, calling device.Close.
* device.Close closes device.signals.stop.
* RoutineEncryption stops.
* The deferred function in RoutineEncryption drains device.queue.encryption.
* RoutineEncryption exits.
* A peer's RoutineNonce processes an element queued in peer.queue.nonce.
* RoutineNonce puts that element into the outbound and encryption queues.
RoutineSequentialSender reads that element from the outbound queue.
* It waits for that element to get Unlocked by RoutineEncryption.
* RoutineEncryption has already exited, so RoutineSequentialSender blocks forever.
* device.RemoveAllPeers calls peer.Stop on all peers.
* peer.Stop waits for peer.routines.stopping, which blocks forever.
Rather than attempt to add even more ordering to the already complex
centralized shutdown orchestration, this commit moves towards a
data-flow-oriented shutdown.
The device.queue.encryption gets closed when there will be no more writes to it.
All device.queue.encryption readers always read until the channel is closed and then exit.
We thus guarantee that any element that enters the encryption queue also exits it.
This removes the need for central control of the lifetime of RoutineEncryption,
removes the need to drain the encryption queue on shutdown, and simplifies RoutineEncryption.
This commit also fixes a data race. When RoutineSequentialSender
drains its queue on shutdown, it needs to lock the elem before operating on it,
just as the main body does.
The new test in this commit passed 50k iterations with the race detector enabled
and 150k iterations with the race detector disabled, with no failures.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
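A small sketch of the data-flow-oriented shutdown described above (names illustrative): the writer side closes the queue, and every reader drains it until the close, so anything that enters the queue also leaves it.
package device

// Called once there will be no more writes to the encryption queue.
func stopEncryptionQueue(q chan []byte) {
	close(q)
}

// RoutineEncryption-style reader: no central stop signal, no drain step.
func encryptionWorker(q chan []byte) {
	for packet := range q {
		_ = packet // ... encrypt and hand off downstream ...
	}
	// Queue closed and fully consumed: exit.
}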
|
|
Since we already have it packed into a uint64
in a known byte order, write it back out again
in the same byte order instead of copying byte by byte.
This should also generate more efficient code,
because the compiler can do a single uint64 write,
instead of eight bounds checks and eight byte writes.
Due to a missed optimization, it actually generates a mishmash
of smaller writes: 1 byte, 4 bytes, 2 bytes, 1 byte.
This is https://golang.org/issue/41663.
The code is still better than before, and will get better yet
once that compiler bug gets fixed.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
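An illustration of the change (function name assumed, little-endian order assumed here): emit the 8-byte counter with one fixed-endian store rather than eight separate byte writes.
package device

import "encoding/binary"

// putCounter writes counter into dst, which must be at least 8 bytes long.
func putCounter(dst []byte, counter uint64) {
	// Before (sketch): dst[0] = byte(counter); dst[1] = byte(counter >> 8); ...
	binary.LittleEndian.PutUint64(dst[:8], counter)
}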
|
|
This makes it easier to work with configs in tests.
It'll see heavier use over upcoming commits;
this commit only adds the infrastructure.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
|