If we take two references, the namespace and the device are never freed
in the usual manner. We should thus only take a reference to another
namespace when it is a different namespace from our own.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
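
A minimal sketch of that rule, with hypothetical names (wg_device_sketch,
creating_net, transit_net are illustrative, not the actual WireGuard
structures): a reference is taken and dropped only when the other
namespace differs from the device's own.

    #include <net/net_namespace.h>  /* get_net(), put_net(), struct net */

    struct wg_device_sketch {
        struct net *creating_net;   /* namespace the device was created in */
        struct net *transit_net;    /* namespace packets are sent from */
    };

    /* Pin the transit namespace only when it differs from our own, so the
     * device's own namespace is not kept alive by the device inside it. */
    static void sketch_set_transit_net(struct wg_device_sketch *wg, struct net *net)
    {
        if (net != wg->creating_net)
            get_net(net);
        wg->transit_net = net;
    }

    static void sketch_clear_transit_net(struct wg_device_sketch *wg)
    {
        if (wg->transit_net && wg->transit_net != wg->creating_net)
            put_net(wg->transit_net);
        wg->transit_net = NULL;
    }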
One types:

    for (i = 0 ...

So one should also type:

    for_each_obj (obj ...

But the upstream kernel style guidelines are insane, and so we must
instead do:

    for_each_obj(obj ...
Ugly, but one must choose his battles wisely.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
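
For illustration only, here is a made-up for_each_obj macro and caller
written in that required style, showing how the call reads like an
ordinary for loop despite the missing space:

    struct obj {
        struct obj *next;
        int value;
    };

    /* Hypothetical iteration macro over a singly linked list of objs. */
    #define for_each_obj(obj, head) \
        for ((obj) = (head); (obj); (obj) = (obj)->next)

    static int sum_objs(struct obj *head)
    {
        struct obj *obj;
        int sum = 0;

        /* Upstream style: no space between the macro name and '('. */
        for_each_obj(obj, head)
            sum += obj->value;
        return sum;
    }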
This is faster, since it means adding a new peer is O(1) instead of
O(n). It's also safe to do because we're holding the device_update_lock
on both the ++ and the --.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
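
A sketch of the shape of this, with hypothetical names (device_sketch,
num_peers, SKETCH_MAX_PEERS): the count is a plain field bumped and
dropped under device_update_lock, so adding a peer is a constant-time
check instead of an O(n) list walk.

    #include <linux/errno.h>
    #include <linux/list.h>
    #include <linux/mutex.h>

    #define SKETCH_MAX_PEERS 1048576    /* illustrative limit */

    struct device_sketch {
        struct mutex device_update_lock;
        struct list_head peer_list;
        unsigned int num_peers;         /* guarded by device_update_lock */
    };

    static int sketch_peer_add(struct device_sketch *dev, struct list_head *peer_node)
    {
        int ret = 0;

        mutex_lock(&dev->device_update_lock);
        if (dev->num_peers >= SKETCH_MAX_PEERS) {
            ret = -ENOSPC;
            goto out;
        }
        ++dev->num_peers;               /* the ++ under the lock */
        list_add_tail(peer_node, &dev->peer_list);
    out:
        mutex_unlock(&dev->device_update_lock);
        return ret;
    }

    static void sketch_peer_remove(struct device_sketch *dev, struct list_head *peer_node)
    {
        mutex_lock(&dev->device_update_lock);
        list_del(peer_node);
        --dev->num_peers;               /* the -- under the same lock */
        mutex_unlock(&dev->device_update_lock);
    }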
Since the peer list is protected by the device_update_lock, and since
items are removed from the peer list before putting their final
reference, we don't actually need to take a reference when iterating.
This allows us to simplify the macro considerably.
Suggested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
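
A rough sketch of the simplified iteration (illustrative names, not the
real macro): with device_update_lock held, the walk is just
list_for_each_entry with no per-peer get/put.

    #include <linux/list.h>
    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    struct peer_sketch {
        struct list_head peer_list;     /* linkage in the device's peer list */
    };

    struct device_sketch {
        struct mutex device_update_lock;
        struct list_head peer_list;
    };

    /* Callers must hold device_update_lock.  Because peers are unlinked
     * from this list before their final reference is put, no reference
     * needs to be taken around each iteration. */
    #define sketch_for_each_peer(peer, dev) \
        list_for_each_entry(peer, &(dev)->peer_list, peer_list)

    static void sketch_walk_peers(struct device_sketch *dev)
    {
        struct peer_sketch *peer;

        lockdep_assert_held(&dev->device_update_lock);
        sketch_for_each_peer(peer, dev) {
            /* operate on peer without get/put */
        }
    }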
This misses lines that are longer than 128 bytes -- read returns
-EINVAL -- but the lines we care about are shorter than that anyway, so
we simply keep looping and skip the bad reads when this happens. Not ideal,
but probably the best we can do in pure bash.
And, importantly, it's much better than streaming this over a TCP
socket, which was an even uglier hack.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Upstream's a2d3f3e33853ef52e5f66b41c3e8ee5710aa3305 broke things in
strange and confusing ways.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Although I was secretly proud of having figured out these unholy hacks,
it turns out -no-reboot lets us do things in a platform-independent way.
Suggested-by: Jann Horn <jann@thejh.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Now that we have the watchdog, killing the watchdog process is another
way of shutting down.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Useful for IDEs.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Otherwise we might get a situation like this:
       CPU0                       CPU1
       ----                       ----
  lock(tstats lock);
                                  local_irq_disable();
                                  lock(queue lock);
                                  lock(tstats lock);
  <Interrupt>
  lock(queue lock);
CPU1 is waiting for CPU0 to release tstats lock. But CPU0, in the
interrupt handler, is waiting for CPU1 to release queue lock. The
solution is to disable interrupts on CPU0, so that this can't happen.
Note that this only affects 32-bit, since u64_stats_update_begin nops
out on native 64-bit platforms.
Reported-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
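
A sketch of the fix's shape (struct and helper names here are
illustrative): save and restore IRQ flags around the u64_stats writer
section so an interrupt can't nest inside it.

    #include <linux/irqflags.h>
    #include <linux/types.h>
    #include <linux/u64_stats_sync.h>

    struct stats_sketch {
        u64 rx_packets;
        u64 rx_bytes;
        struct u64_stats_sync syncp;
    };

    static void sketch_update_rx_stats(struct stats_sketch *tstats, unsigned int len)
    {
        unsigned long flags;

        /* On 64-bit, u64_stats_update_begin() is a no-op, so only 32-bit
         * actually needs the irq disabling; doing it unconditionally
         * keeps the sketch simple. */
        local_irq_save(flags);
        u64_stats_update_begin(&tstats->syncp);
        tstats->rx_packets++;
        tstats->rx_bytes += len;
        u64_stats_update_end(&tstats->syncp);
        local_irq_restore(flags);
    }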
Otherwise netfilter's ip_route_me_harder doesn't know how to reroute
this and we get a nasty loop.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>