|
In "queueing: preserve flow hash across packet scrubbing", we were
required to slightly increase the size of the receive replay counter to
something still fairly small, but an increase nonetheless. It turns out
that we can recoup some of the additional memory overhead by splitting
up the prior union type into two distinct types. Before, we used the
same "noise_counter" union for both sending and receiving, with sending
just using a simple atomic64_t, while receiving used the full replay
counter checker. This meant that most of the memory being allocated for
the sending counter was being wasted. Since the old "noise_counter" type
increased in size in the prior commit, now is a good time to split up
that union type into a distinct "noise_replay_counter" for receiving
and a boring atomic64_t for sending, each using neither more nor less
memory than required.
Also, since the replay counter is sometimes accessed without any
accompanying access to the bitmap, we can reduce cache misses by
hoisting the always-needed lock above the bitmap in the
struct layout. We also change a "noise_replay_counter" stack allocation
to kmalloc in a -DDEBUG selftest so that KASAN doesn't trigger a stack
frame warning.
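In sketch form, the type split looks roughly like this; the field names
and the bitmap size here are approximations of the real code, not quotes
from it:

#include <linux/atomic.h>
#include <linux/spinlock.h>

#define COUNTER_BITS_TOTAL 8192 /* illustrative size */

/* Before: one union served both directions, so the send side paid for
 * the whole replay-checking apparatus it never used. */
union noise_counter {
        struct {
                u64 counter;
                unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
                spinlock_t lock;
        } receive;
        atomic64_t counter; /* all the send side ever needed */
};

/* After: receiving gets its own type, with the always-needed lock
 * hoisted above the bitmap, while sending becomes a bare atomic64_t
 * in the keypair. */
struct noise_replay_counter {
        u64 counter;
        spinlock_t lock;
        unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
};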
All in all, removing a bit of abstraction in this commit makes the code
simpler and smaller, in addition to the motivating memory usage
recuperation. For example, passing around raw "noise_symmetric_key"
structs is something that really only makes sense within noise.c, in the
one place where the sending and receiving keys can safely be thought of
as the same type of object; subsequent to that, it's important that we
uniformly access these through keypair->{sending,receiving}, where their
distinct roles are always made explicit. So this patch allows us to draw
that distinction clearly as well.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Previously, we read the preshared key after dropping the handshake lock, which
isn't an actual crypto issue if it races, but it's still not quite
correct. So copy that part of the state into a temporary like we do with
the rest of the handshake state variables. Then we can release the lock,
operate on the temporary, and zero it out at the end of the function. In
performance tests, the impact of this was entirely unnoticeable, probably
because those bytes are coming from the same cacheline as other things
that are being copied out in the same manner.
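The pattern, as a sketch with assumed names rather than the exact code:

u8 psk[NOISE_SYMMETRIC_KEY_LEN];

down_read(&handshake->lock);
/* ... copy the other handshake state variables, as before ... */
memcpy(psk, handshake->preshared_key, NOISE_SYMMETRIC_KEY_LEN);
up_read(&handshake->lock);

/* ... derive keys from the temporary only ... */

memzero_explicit(psk, NOISE_SYMMETRIC_KEY_LEN); /* wipe at function end */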
Reported-by: Matt Dunwoodie <ncon@noconroy.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
We precompute the static-static ECDH during configuration time, in order
to save an expensive computation later when receiving network packets.
However, not all ECDH computations yield a contributory result.
Previously, we simply did not let those peers be added to the interface.
This created a strange inconsistency, though, since it was still
possible to add other anomalous points, such as a valid public key plus
a low-order point, and, just as with points that result in zeros, the
handshake would never complete. In
order to make the behavior more uniform and less surprising, simply
allow all peers to be added. Then, we'll error out later when doing the
crypto if there's an issue. This also adds more separation between the
crypto layer and the configuration layer.
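In sketch form, the crypto-layer check might look like the following,
where a failed precomputation is represented by an all-zero ss; the
names and shape here are assumptions, not the verbatim code:

static bool mix_precomputed_dh(u8 chaining_key[NOISE_HASH_LEN],
                               u8 key[NOISE_SYMMETRIC_KEY_LEN],
                               const u8 precomputed[NOISE_PUBLIC_KEY_LEN])
{
        static const u8 zero_point[NOISE_PUBLIC_KEY_LEN];

        if (!crypto_memneq(precomputed, zero_point, NOISE_PUBLIC_KEY_LEN))
                return false; /* non-contributory: error out here */
        /* ... otherwise mix ss into the key derivation as usual ... */
        return true;
}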
Discussed-with: Mathias Hall-Andersen <mathias@hall-andersen.dk>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Our static-static calculation returns a failure if the public key is of
low order. We check for this when peers are added, and don't allow them
to be added if they're low order, except in the case where we haven't
yet been given a private key. In that case, we would defer the removal
of the peer until we're given a private key, since at that point we're
doing new static-static calculations which incur failures we can act on.
This meant, however, that we wound up removing peers rather late in the
configuration flow.
Syzkaller points out that peer_remove calls flush_workqueue, which in
turn might wait for the sending of a handshake initiation to complete.
Since handshake initiation needs the static identity lock, holding the
static identity lock while calling peer_remove can result in a rare
deadlock. We have precisely this case in this situation of late-stage
peer removal based on an invalid public key. We can't drop the lock when
removing, because then incoming handshakes might interact with a bogus
static-static calculation.
The band-aid patch for this would be to break the peer removal into two
steps, as wg_peer_remove_all does, in order to solve the locking issue.
But there's actually a much more elegant way of fixing this:
If the static-static calculation succeeds with one private key, it
*must* succeed with all others, because all 32-byte strings map to valid
private keys, thanks to clamping. That means we can get rid of this
silly dance and the locking headaches of removing peers late in the
configuration flow, and instead just reject them early on, regardless of
whether the device has yet been assigned a private key. For the case
where the device doesn't yet have a private key, we safely use zeros,
just for the purpose of checking for low-order points via the output of
the calculation.
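A minimal sketch of that early check (the function name is
hypothetical):

static bool pubkey_is_contributory(const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
{
        static const u8 zero_private[NOISE_PUBLIC_KEY_LEN];
        u8 ss[NOISE_PUBLIC_KEY_LEN];

        /* Clamping maps every 32-byte string to a valid private key,
         * so checking with zeros is equivalent to checking with any
         * private key the device might later be given. */
        return curve25519(ss, zero_private, pubkey);
}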
The following PoC will trigger the deadlock:
ip link add wg0 type wireguard
ip addr add 10.0.0.1/24 dev wg0
ip link set wg0 up
ping -f 10.0.0.2 &
while true; do
    wg set wg0 private-key /dev/null peer AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= allowed-ips 10.0.0.0/24 endpoint 10.0.0.3:1234
    wg set wg0 private-key <(echo AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=)
done
[ 0.949105] ======================================================
[ 0.949550] WARNING: possible circular locking dependency detected
[ 0.950143] 5.5.0-debug+ #18 Not tainted
[ 0.950431] ------------------------------------------------------
[ 0.950959] wg/89 is trying to acquire lock:
[ 0.951252] ffff8880333e2128 ((wq_completion)wg-kex-wg0){+.+.}, at: flush_workqueue+0xe3/0x12f0
[ 0.951865]
[ 0.951865] but task is already holding lock:
[ 0.952280] ffff888032819bc0 (&wg->static_identity.lock){++++}, at: wg_set_device+0x95d/0xcc0
[ 0.953011]
[ 0.953011] which lock already depends on the new lock.
[ 0.953011]
[ 0.953651]
[ 0.953651] the existing dependency chain (in reverse order) is:
[ 0.954292]
[ 0.954292] -> #2 (&wg->static_identity.lock){++++}:
[ 0.954804] lock_acquire+0x127/0x350
[ 0.955133] down_read+0x83/0x410
[ 0.955428] wg_noise_handshake_create_initiation+0x97/0x700
[ 0.955885] wg_packet_send_handshake_initiation+0x13a/0x280
[ 0.956401] wg_packet_handshake_send_worker+0x10/0x20
[ 0.956841] process_one_work+0x806/0x1500
[ 0.957167] worker_thread+0x8c/0xcb0
[ 0.957549] kthread+0x2ee/0x3b0
[ 0.957792] ret_from_fork+0x24/0x30
[ 0.958234]
[ 0.958234] -> #1 ((work_completion)(&peer->transmit_handshake_work)){+.+.}:
[ 0.958808] lock_acquire+0x127/0x350
[ 0.959075] process_one_work+0x7ab/0x1500
[ 0.959369] worker_thread+0x8c/0xcb0
[ 0.959639] kthread+0x2ee/0x3b0
[ 0.959896] ret_from_fork+0x24/0x30
[ 0.960346]
[ 0.960346] -> #0 ((wq_completion)wg-kex-wg0){+.+.}:
[ 0.960945] check_prev_add+0x167/0x1e20
[ 0.961351] __lock_acquire+0x2012/0x3170
[ 0.961725] lock_acquire+0x127/0x350
[ 0.961990] flush_workqueue+0x106/0x12f0
[ 0.962280] peer_remove_after_dead+0x160/0x220
[ 0.962600] wg_set_device+0xa24/0xcc0
[ 0.962994] genl_rcv_msg+0x52f/0xe90
[ 0.963298] netlink_rcv_skb+0x111/0x320
[ 0.963618] genl_rcv+0x1f/0x30
[ 0.963853] netlink_unicast+0x3f6/0x610
[ 0.964245] netlink_sendmsg+0x700/0xb80
[ 0.964586] __sys_sendto+0x1dd/0x2c0
[ 0.964854] __x64_sys_sendto+0xd8/0x1b0
[ 0.965141] do_syscall_64+0x90/0xd9a
[ 0.965408] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 0.965769]
[ 0.965769] other info that might help us debug this:
[ 0.965769]
[ 0.966337] Chain exists of:
[ 0.966337] (wq_completion)wg-kex-wg0 --> (work_completion)(&peer->transmit_handshake_work) --> &wg->static_identity.lock
[ 0.966337]
[ 0.967417] Possible unsafe locking scenario:
[ 0.967417]
[ 0.967836] CPU0 CPU1
[ 0.968155] ---- ----
[ 0.968497] lock(&wg->static_identity.lock);
[ 0.968779] lock((work_completion)(&peer->transmit_handshake_work));
[ 0.969345] lock(&wg->static_identity.lock);
[ 0.969809] lock((wq_completion)wg-kex-wg0);
[ 0.970146]
[ 0.970146] *** DEADLOCK ***
[ 0.970146]
[ 0.970531] 5 locks held by wg/89:
[ 0.970908] #0: ffffffff827433c8 (cb_lock){++++}, at: genl_rcv+0x10/0x30
[ 0.971400] #1: ffffffff82743480 (genl_mutex){+.+.}, at: genl_rcv_msg+0x642/0xe90
[ 0.971924] #2: ffffffff827160c0 (rtnl_mutex){+.+.}, at: wg_set_device+0x9f/0xcc0
[ 0.972488] #3: ffff888032819de0 (&wg->device_update_lock){+.+.}, at: wg_set_device+0xb0/0xcc0
[ 0.973095] #4: ffff888032819bc0 (&wg->static_identity.lock){++++}, at: wg_set_device+0x95d/0xcc0
[ 0.973653]
[ 0.973653] stack backtrace:
[ 0.973932] CPU: 1 PID: 89 Comm: wg Not tainted 5.5.0-debug+ #18
[ 0.974476] Call Trace:
[ 0.974638] dump_stack+0x97/0xe0
[ 0.974869] check_noncircular+0x312/0x3e0
[ 0.975132] ? print_circular_bug+0x1f0/0x1f0
[ 0.975410] ? __kernel_text_address+0x9/0x30
[ 0.975727] ? unwind_get_return_address+0x51/0x90
[ 0.976024] check_prev_add+0x167/0x1e20
[ 0.976367] ? graph_lock+0x70/0x160
[ 0.976682] __lock_acquire+0x2012/0x3170
[ 0.976998] ? register_lock_class+0x1140/0x1140
[ 0.977323] lock_acquire+0x127/0x350
[ 0.977627] ? flush_workqueue+0xe3/0x12f0
[ 0.977890] flush_workqueue+0x106/0x12f0
[ 0.978147] ? flush_workqueue+0xe3/0x12f0
[ 0.978410] ? find_held_lock+0x2c/0x110
[ 0.978662] ? lock_downgrade+0x6e0/0x6e0
[ 0.978919] ? queue_rcu_work+0x60/0x60
[ 0.979166] ? netif_napi_del+0x151/0x3b0
[ 0.979501] ? peer_remove_after_dead+0x160/0x220
[ 0.979871] peer_remove_after_dead+0x160/0x220
[ 0.980232] wg_set_device+0xa24/0xcc0
[ 0.980516] ? deref_stack_reg+0x8e/0xc0
[ 0.980801] ? set_peer+0xe10/0xe10
[ 0.981040] ? __ww_mutex_check_waiters+0x150/0x150
[ 0.981430] ? __nla_validate_parse+0x163/0x270
[ 0.981719] ? genl_family_rcv_msg_attrs_parse+0x13f/0x310
[ 0.982078] genl_rcv_msg+0x52f/0xe90
[ 0.982348] ? genl_family_rcv_msg_attrs_parse+0x310/0x310
[ 0.982690] ? register_lock_class+0x1140/0x1140
[ 0.983049] netlink_rcv_skb+0x111/0x320
[ 0.983298] ? genl_family_rcv_msg_attrs_parse+0x310/0x310
[ 0.983645] ? netlink_ack+0x880/0x880
[ 0.983888] genl_rcv+0x1f/0x30
[ 0.984168] netlink_unicast+0x3f6/0x610
[ 0.984443] ? netlink_detachskb+0x60/0x60
[ 0.984729] ? find_held_lock+0x2c/0x110
[ 0.984976] netlink_sendmsg+0x700/0xb80
[ 0.985220] ? netlink_broadcast_filtered+0xa60/0xa60
[ 0.985533] __sys_sendto+0x1dd/0x2c0
[ 0.985763] ? __x64_sys_getpeername+0xb0/0xb0
[ 0.986039] ? sockfd_lookup_light+0x17/0x160
[ 0.986397] ? __sys_recvmsg+0x8c/0xf0
[ 0.986711] ? __sys_recvmsg_sock+0xd0/0xd0
[ 0.987018] __x64_sys_sendto+0xd8/0x1b0
[ 0.987283] ? lockdep_hardirqs_on+0x39b/0x5a0
[ 0.987666] do_syscall_64+0x90/0xd9a
[ 0.987903] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 0.988223] RIP: 0033:0x7fe77c12003e
[ 0.988508] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 4
[ 0.989666] RSP: 002b:00007fffada2ed58 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
[ 0.990137] RAX: ffffffffffffffda RBX: 00007fe77c159d48 RCX: 00007fe77c12003e
[ 0.990583] RDX: 0000000000000040 RSI: 000055fd1d38e020 RDI: 0000000000000004
[ 0.991091] RBP: 000055fd1d38e020 R08: 000055fd1cb63358 R09: 000000000000000c
[ 0.991568] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000002c
[ 0.992014] R13: 0000000000000004 R14: 000055fd1d38e020 R15: 0000000000000001
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Reported-by: Derrick Pallas <derrick@pallas.us>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Coarse ktime is broken until [1], which landed in 5.2, so on kernels
without the backport we use fallback code there.
The fallback code has also been improved significantly. It now only uses
slower clocks on kernels < 3.17, at the expense of some accuracy we're
not overly concerned about.
[1] https://lore.kernel.org/lkml/tip-e3ff9c3678b4d80e22d2557b68726174578eaf52@git.kernel.org/
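The gating has roughly this shape; the macro name and the fallback clock
choices below are illustrative assumptions, not the actual compat code:

#include <linux/version.h>
#include <linux/timekeeping.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 17, 0)
/* No boottime accessor yet: use a slower clock, trading away a bit of
 * accuracy we're not overly concerned about. */
#define compat_boottime() ktime_get_real()
#elif LINUX_VERSION_CODE < KERNEL_VERSION(5, 2, 0)
/* Coarse boottime is broken until [1]: use the precise variant. */
#define compat_boottime() ktime_get_boottime()
#else
#define compat_boottime() ktime_get_coarse_boottime()
#endif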
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
The hashtable allocations are quite large, and sometimes cause the device
allocation in the net framework to stall while it tries to find a contiguous region
that can fit the device struct:
[<0000000000000000>] __switch_to+0x94/0xb8
[<0000000000000000>] __alloc_pages_nodemask+0x764/0x7e8
[<0000000000000000>] kmalloc_order+0x20/0x40
[<0000000000000000>] __kmalloc+0x144/0x1a0
[<0000000000000000>] alloc_netdev_mqs+0x5c/0x368
[<0000000000000000>] rtnl_create_link+0x48/0x180
[<0000000000000000>] rtnl_newlink+0x410/0x708
[<0000000000000000>] rtnetlink_rcv_msg+0x190/0x1f8
[<0000000000000000>] netlink_rcv_skb+0x4c/0xf8
[<0000000000000000>] rtnetlink_rcv+0x30/0x40
[<0000000000000000>] netlink_unicast+0x18c/0x208
[<0000000000000000>] netlink_sendmsg+0x19c/0x348
[<0000000000000000>] sock_sendmsg+0x3c/0x58
[<0000000000000000>] ___sys_sendmsg+0x290/0x2b0
[<0000000000000000>] __sys_sendmsg+0x58/0xa0
[<0000000000000000>] SyS_sendmsg+0x10/0x20
[<0000000000000000>] el0_svc_naked+0x34/0x38
[<0000000000000000>] 0xffffffffffffffff
To fix the allocation stalls, decouple the hashtable allocations from the device
allocation and allocate the hashtables with kvmalloc's implicit __GFP_NORETRY
so that the allocations fall back to vmalloc with little resistance.
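Illustratively, the decoupled allocation then looks something like this
(the type and helper names are assumed):

static struct index_hashtable *index_hashtable_alloc(void)
{
        struct index_hashtable *table = kvzalloc(sizeof(*table), GFP_KERNEL);

        /* For large sizes, kvzalloc's kmalloc attempt carries an
         * implicit __GFP_NORETRY, so it gives up on contiguous pages
         * quickly and falls through to vmalloc instead of stalling. */
        if (table)
                hash_init(table->hashtable);
        return table;
}

The matching teardown goes through kvfree(), which handles both the
kmalloc and vmalloc outcomes.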
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
This mitigates unrelated sidechannel attacks whose authors think they
can turn WireGuard into a useful time oracle.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
The former was just a wrapper around the latter, and so upstream is now
removing it.
Also adjust the compat kludge to deal with this.
Reported-by: Alex Xu <alex@alxu.ca>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Suggested-by: Jann Horn <jann@thejh.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
This required a bit of pruning of our christmas trees.
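For reference, netdev's "reverse christmas tree" convention orders local
declarations from longest line to shortest, as in this invented example:

struct wg_device *wg = netdev_priv(dev);
struct sk_buff *skb;
int ret;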
Suggested-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
I understand why this must be done, though I'm not so happy about having
to do it. In some places, it puts us over 80 chars and we have to break
lines up in further ugly ways. And in general, I think this makes things
harder to read. Yet another thing we must do to please upstream.
Maybe this can be replaced in the future by some kind of automatic
module namespacing logic in the linker, or even combined with LTO and
aggressive symbol stripping.
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
The kernel has very specific rules correlating file type with comment
type, and also SPDX identifiers can't be merged with other comments.
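Concretely, a C source (.c) gets a C++-style identifier as its very
first line, while a header (.h) gets the C-style form, in both cases
with nothing else merged into the comment:

// SPDX-License-Identifier: GPL-2.0     (first line of a .c file)
/* SPDX-License-Identifier: GPL-2.0 */  (first line of a .h file)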
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Suggested-by: Sultan Alsawaf <sultanxda@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
This is the worst commit in the whole repo, making the code much less
readable, but so it goes with upstream maintainers.
We are now woefully wrapped at 80 columns.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Completely rework peer removal to ensure peers don't jump between
contexts and create races.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
And in general it's good to prefer dereferencing entry.peer from a
handshake object rather than a keypair object, when possible, since
keypairs could disappear before their underlying peer.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Blocks like:
if (node_placement(*trie, key, cidr, bits, &node, lock)) {
        node->peer = peer;
        return 0;
}
may result in a double read when adjusting the refcount, in the highly
unlikely case of LTO and an overly smart compiler.
While we're at it, replace rcu_assign_pointer(X, NULL); with
RCU_INIT_POINTER.
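The second change is safe because publishing a NULL pointer requires no
ordering guarantees; a representative line (the field here is
illustrative): instead of

rcu_assign_pointer(node->peer, NULL); /* pays for a write barrier */

we now use

RCU_INIT_POINTER(node->peer, NULL); /* NULL needs no publish ordering */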
Reported-by: Jann Horn <jann@thejh.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Suggested-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Generally if we're inaccurate by a few nanoseconds, it doesn't matter.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Since this is a network protocol, expirations need to be accounted for,
even across system suspend. On real systems, this isn't a problem, since
we're clearing all keys before suspend. But on Android, where we don't
do that, this is something of a problem. So, we switch to using boottime
instead of jiffies.
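A sketch of the resulting expiration check (names assumed; the exact
boottime accessor varies across kernel versions):

static inline bool birthdate_has_expired(u64 birthday_ns, u64 expiration_secs)
{
        /* Boottime keeps counting across suspend, unlike jiffies, so
         * keys still expire on schedule if the system slept. */
        return (s64)(birthday_ns + expiration_secs * NSEC_PER_SEC) <=
               (s64)ktime_get_boottime_ns();
}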
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Otherwise, get_random_bytes_wait gets called from
curve25519_generate_secret, and at the same time, a user might use the
wg(8) utility, which then wants to grab a read lock for what we're write
locking.
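In sketch form (shape assumed): generate the key before taking the write
lock, so the potentially blocking wait for randomness never happens with
the lock held:

u8 private_key[NOISE_PUBLIC_KEY_LEN];

curve25519_generate_secret(private_key); /* may sleep in get_random_bytes_wait */

down_write(&wg->static_identity.lock);
/* ... install the key and redo dependent precomputations ... */
up_write(&wg->static_identity.lock);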
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
At no intermediate point do we want the local private key to be out of
sync with the precomputed ss or the precomputed cookie hash.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Usually this is called from handshake_init, where locking doesn't matter
because nothing references it yet, but it's also called when changing
the device private key, so it's best to ensure we never process a
handshake with an ss precomputation that's part old and part new.
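In sketch form (lock and field names assumed):

down_write(&handshake->lock);
/* No reader can now observe an ss computed half with the old private
 * key and half with the new one. */
curve25519(handshake->precomputed_static_static,
           static_identity->static_private, handshake->remote_static);
up_write(&handshake->lock);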
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
It's good to have SPDX identifiers in all files as the Linux kernel
developers are working to add these identifiers to all files.
Update all files with the correct SPDX license identifier based on the license
text of the project or based on the license in the file itself. The SPDX
identifier is a legally binding shorthand, which can be used instead of the
full boilerplate text.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Modified-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
This gets us nanoseconds instead of microseconds, which is better, and
we can do this pretty much without freaking out existing userspace,
which doesn't actually make use of the nano/micro seconds field:
zx2c4@thinkpad ~ $ cat a.c
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
        puts(sizeof(struct timeval) == sizeof(struct timespec) ? "success" : "failure");
        return 0;
}
zx2c4@thinkpad ~ $ gcc a.c -m64 && ./a.out
success
zx2c4@thinkpad ~ $ gcc a.c -m32 && ./a.out
success
This doesn't solve the y2038 problem, but timespec64 isn't yet a thing in
userspace.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
These changes were suggested by checkpatch.pl, but actually cause big
problems depending on the options. Revert.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
This should only really be contended in extremely exceptional cases, so
changing from a mutex to a spinlock is likely fine.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
This removes our dependency on padata and moves to a different mode of
multiprocessing that is more efficient.
This began as Samuel Holland's GSoC project and was gradually
reworked/redesigned/rebased into this present commit, which is a
combination of his initial contribution and my subsequent rewriting and
redesigning.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|