Age | Commit message | Author |
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Reported-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
This will redirect to whichever archive kernel.org thinks is best.
Suggested-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Otherwise it's too easy to trigger cookie reply messages.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Otherwise, get_random_bytes_wait gets called from
curve25519_generate_secret, and at the same time, a user might use the
wg(8) utility, which then wants to take a read lock on what we're
write-locking.
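A minimal sketch of the resulting order of operations (struct and field
names are modeled on the WireGuard tree and should be treated as
illustrative, not as the literal patch):

#include <linux/rwsem.h>
#include <linux/string.h>

static void set_fresh_private_key(struct wireguard_device *wg)
{
	u8 key[32]; /* Curve25519 keys are 32 bytes */

	/* May sleep inside get_random_bytes_wait(), so do it before
	 * taking the lock that wg(8) readers contend on. */
	curve25519_generate_secret(key);

	down_write(&wg->static_identity.lock);
	memcpy(wg->static_identity.static_private, key, sizeof(key));
	up_write(&wg->static_identity.lock);
}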
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
We don't want the local private key to ever fall out of correspondence
with the precomputed ss or the precomputed cookie hash, at any
intermediate point.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Usually this is called from handshake_init, where locking doesn't
matter because nothing references it yet, but it's also called when
changing the device private key, so it's probably a good idea not to
process a handshake with an ss precomputation that's part old and part
new.
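A sketch of the locked precomputation (names modeled on the tree;
illustrative only):

static void precompute_static_static(struct wireguard_peer *peer)
{
	/* Key changes write-lock handshake.lock and handshake
	 * processing read-locks it, so no reader can observe an ss
	 * computed from a mixture of old and new keys. */
	down_write(&peer->handshake.lock);
	curve25519(peer->handshake.precomputed_static_static,
		   peer->device->static_identity.static_private,
		   peer->handshake.remote_static);
	up_write(&peer->handshake.lock);
}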
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Since these are the only consumers, there's no need for locking.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
In -rt kernels, spinlocks call schedule(), which means they can't be
taken with preemption disabled. Using the FPU disables preemption.
Hence, we can either restructure things to move the calls to
kernel_fpu_begin/end really close to the actual crypto routines, or we
can take the slower, lazier route of simply not using the FPU at all on
-rt kernels. This patch goes with the latter.
The reason we don't place the calls to kernel_fpu_begin/end close to
the crypto routines in the first place is that they're very expensive,
usually involving a call to XSAVE. So on sane kernels, we benefit from
only having to call them once.
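What the lazy approach amounts to, sketched (the exact -rt config
symbol varies between series; CONFIG_PREEMPT_RT_BASE here is an
assumption):

#include <asm/fpu/api.h>

static inline bool simd_get(void)
{
	bool have_simd = false;
#if defined(CONFIG_X86_64) && !defined(CONFIG_PREEMPT_RT_BASE)
	/* One expensive XSAVE covers many packets on normal kernels. */
	have_simd = irq_fpu_usable();
	if (have_simd)
		kernel_fpu_begin();
#endif
	return have_simd;
}

static inline void simd_put(bool was_on)
{
#if defined(CONFIG_X86_64) && !defined(CONFIG_PREEMPT_RT_BASE)
	if (was_on)
		kernel_fpu_end();
#endif
}

On -rt, these simply report that SIMD is unavailable, and the crypto
falls back to its scalar code paths.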
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
The new flow offloading feature does not, at the moment, set the dst.
We have a patch pending to fix this upstream, but in the meantime, work
around it here.
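Sketched, assuming the dst was being consulted for the MTU in the xmit
path (illustrative, not the literal patch):

#include <net/dst.h>

static unsigned int effective_mtu(const struct sk_buff *skb,
				  const struct net_device *dev)
{
	/* Flow offloading may hand us an skb with no dst attached, so
	 * fall back to the device MTU instead of dereferencing NULL. */
	return skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
}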
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
This reduces memory accesses and the total opaque size.
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Our previous heuristic wasn't good enough, since CopperheadOS
backported CANARY_MASK without backporting get_random_u32 (which
Qualcomm, by contrast, did backport), so now we just rename all
invocations of the function entirely.
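The shape of the rename, sketched (the wrapper name is ours): every
call site goes through the tree's own symbol, so no heuristic about
what a vendor kernel did or didn't backport is needed.

#include <linux/random.h>
#include <linux/version.h>

static inline u32 wg_get_random_u32(void)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
	return get_random_u32(); /* guaranteed present in mainline here */
#else
	return get_random_int(); /* present on older kernels, backport or not */
#endif
}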
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Qualcomm backported the get_random_u32 patch even though kernel.org
didn't, which is a problem. They also backported another patch to the
same place, one that defines a new macro, so use that as a
differentiator.
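Schematically, the compat guard becomes something like this
(QCOM_BACKPORT_MARKER is a placeholder; the message doesn't name the
actual macro):

#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 11, 0) && \
    !defined(QCOM_BACKPORT_MARKER)
/* Only supply our fallback when neither mainline nor the Qualcomm
 * backport already provides get_random_u32(). */
static inline u32 get_random_u32(void)
{
	return get_random_int();
}
#endif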
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
We do this after sending the keepalive anyway. This is something of a
regression, though: before, we'd cancel and then send, whereas now we
send and then cancel, which introduces a potential race. Hopefully that
isn't too big of a deal.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
We reorganize this so that it also happens when sending keepalives
themselves, which makes the state machine much more consistent, even if
this was already implied.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
It's already done implicitly by recent kernels, and it doesn't add much
here.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
On systems with a 100Hz tick, this causes handshakes to be retried
after slightly less than the full 5 seconds, which means they aren't
sent at all. This has the effect of handshakes entirely stopping their
retry cycle until the next data packet is sent.
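An illustration of the off-by-one-tick failure mode and one way around
it (simplified; not the literal patch):

#include <linux/jiffies.h>
#include <linux/timer.h>

#define REKEY_TIMEOUT (5 * HZ) /* 5 seconds, in ticks */

static void rearm_retransmit(struct wireguard_peer *peer)
{
	/* With HZ=100, the timer may fire one 10ms tick before the full
	 * 5 seconds have elapsed, so a strict elapsed-time check in the
	 * handler fails and nothing is retransmitted. A tick of slack
	 * when re-arming avoids the off-by-one. */
	mod_timer(&peer->timer_retransmit_handshake,
		  jiffies + REKEY_TIMEOUT + 1);
}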
Reported-by: Andrew He <andrewhe@mit.edu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
This avoids an O(n^2) traversal in favor of an O(n) one.
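Schematically, with hypothetical standalone types rather than the
actual data structures, the change swaps one full traversal per peer
for a single traversal with a per-node test:

#include <stdbool.h>
#include <stdlib.h>

struct node {
	struct node *next;
	bool stale;
};

/* O(n): visit each node once and decide there, instead of invoking a
 * full O(n) walk once per peer, which is O(n^2) overall. */
static void remove_stale(struct node **head)
{
	struct node **p = head;

	while (*p) {
		if ((*p)->stale) {
			struct node *dead = *p;
			*p = dead->next;
			free(dead);
		} else {
			p = &(*p)->next;
		}
	}
}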
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Commit 5e3532e ("allowedips: use native endian on lookup") did two
things: it changed the endianness of (struct allowedips_node).bits to
native endian, and it moved the CIDR masking to the output code path
(walk_by_peer).
Adjust print_node in src/selftest/allowedips.h to deal with these
changes.
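The general idea of masking on the output path, sketched (the helper is
ours, not the patch's):

#include <linux/string.h>
#include <linux/types.h>

/* Reduce a big-endian address to its CIDR prefix as it is emitted,
 * now that nodes no longer store the masked form. */
static void mask_to_cidr(u8 *dst, const u8 *src, unsigned int cidr,
			 unsigned int total_bytes)
{
	unsigned int full = cidr / 8, rem = cidr % 8;

	memset(dst, 0, total_bytes);
	memcpy(dst, src, full);
	if (rem)
		dst[full] = src[full] & (u8)(0xff << (8 - rem));
}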
Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
The 3.10 kernel from Red Hat puts head_frag in between headers_start
and headers_end. This is triggered by the latest i40e driver, and it
results in the packet being freed incorrectly, crashing the system. So,
this patch just ensures we don't zero any of the header bits.
The whole issue of zeroing header bits should probably be revisited
sometime soon.
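The hazardous pattern, schematically (an illustration of what gets
removed, not of the fix):

#include <linux/skbuff.h>
#include <linux/stddef.h>
#include <linux/string.h>

static void zero_header_bits(struct sk_buff *skb)
{
	/* This assumes only protocol header state lives between the two
	 * markers. On RHEL's 3.10 kernel, head_frag sits inside the
	 * span, and clearing it sends the skb's memory down the wrong
	 * free path, crashing the system. */
	memset(&skb->headers_start, 0,
	       offsetof(struct sk_buff, headers_end) -
	       offsetof(struct sk_buff, headers_start));
}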
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|