path: root/src/hashtables.c
2019-03-25  peerlookup: rename from hashtables  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2019-02-03  hashtables: decouple hashtable allocations from the main device allocation  (Sultan Alsawaf)

The hashtable allocations are quite large, and cause the device allocation in the net framework to stall sometimes while it tries to find a contiguous region that can fit the device struct:

    [<0000000000000000>] __switch_to+0x94/0xb8
    [<0000000000000000>] __alloc_pages_nodemask+0x764/0x7e8
    [<0000000000000000>] kmalloc_order+0x20/0x40
    [<0000000000000000>] __kmalloc+0x144/0x1a0
    [<0000000000000000>] alloc_netdev_mqs+0x5c/0x368
    [<0000000000000000>] rtnl_create_link+0x48/0x180
    [<0000000000000000>] rtnl_newlink+0x410/0x708
    [<0000000000000000>] rtnetlink_rcv_msg+0x190/0x1f8
    [<0000000000000000>] netlink_rcv_skb+0x4c/0xf8
    [<0000000000000000>] rtnetlink_rcv+0x30/0x40
    [<0000000000000000>] netlink_unicast+0x18c/0x208
    [<0000000000000000>] netlink_sendmsg+0x19c/0x348
    [<0000000000000000>] sock_sendmsg+0x3c/0x58
    [<0000000000000000>] ___sys_sendmsg+0x290/0x2b0
    [<0000000000000000>] __sys_sendmsg+0x58/0xa0
    [<0000000000000000>] SyS_sendmsg+0x10/0x20
    [<0000000000000000>] el0_svc_naked+0x34/0x38
    [<0000000000000000>] 0xffffffffffffffff

To fix the allocation stalls, decouple the hashtable allocations from the device allocation and allocate the hashtables with kvmalloc's implicit __GFP_NORETRY so that the allocations fall back to vmalloc with little resistance.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
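A minimal sketch of the allocation pattern this commit describes, using illustrative type and function names rather than the exact ones in the tree: the large table is allocated on its own with kvzalloc(), which quietly falls back to vmalloc() when contiguous pages are scarce, instead of being embedded in the netdev private area.

    #include <linux/mm.h>        /* kvzalloc, kvfree */
    #include <linux/hashtable.h>
    #include <linux/siphash.h>
    #include <linux/random.h>
    #include <linux/spinlock.h>

    struct pubkey_hashtable {
            DECLARE_HASHTABLE(hashtable, 11); /* bucket order is an assumption */
            siphash_key_t key;
            spinlock_t lock;
    };

    /* Allocated separately from alloc_netdev_mqs(), so a failed contiguous
     * kmalloc falls back to vmalloc instead of stalling device creation. */
    static struct pubkey_hashtable *pubkey_hashtable_alloc(void)
    {
            struct pubkey_hashtable *table = kvzalloc(sizeof(*table), GFP_KERNEL);

            if (!table)
                    return NULL;
            get_random_bytes(&table->key, sizeof(table->key));
            hash_init(table->hashtable);
            spin_lock_init(&table->lock);
            return table;
    }

    static void pubkey_hashtable_free(struct pubkey_hashtable *table)
    {
            kvfree(table);
    }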
2019-01-07  global: update copyright  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-10-09  global: give if statements brackets and other cleanups  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-10-08  global: more nits  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-10-08  global: rename struct wireguard_ to struct wg_  (Jason A. Donenfeld)
This required a bit of pruning of our christmas trees.
Suggested-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-10-02  global: prefix all functions with wg_  (Jason A. Donenfeld)

I understand why this must be done, though I'm not so happy about having to do it. In some places, it puts us over 80 chars and we have to break lines up in further ugly ways. And in general, I think this makes things harder to read. Yet another thing we must do to please upstream.

Maybe this can be replaced in the future by some kind of automatic module namespacing logic in the linker, or even combined with LTO and aggressive symbol stripping.

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-09-20  global: put SPDX identifier on its own line  (Jason A. Donenfeld)

The kernel has very specific rules correlating file type with comment type, and also SPDX identifiers can't be merged with other comments.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-09-16  global: remove non-essential inline annotations  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-08-28  global: run through clang-format  (Jason A. Donenfeld)

This is the worst commit in the whole repo, making the code much less readable, but so it goes with upstream maintainers. We are now woefully wrapped at 80 columns.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-08-03  peer: ensure destruction doesn't race  (Jason A. Donenfeld)

Completely rework peer removal to ensure peers don't jump between contexts and create races.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-08-01  hashtables: document immediate zeroing semantics  (Jason A. Donenfeld)
Suggested-by: Jann Horn <jann@thejh.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-07-31  peer: simplify rcu reference counts  (Jason A. Donenfeld)

Use RCU reference counts only when we must, and otherwise use a more reasonably named function.

Reported-by: Jann Horn <jann@thejh.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2018-01-03  global: year bump  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-12-09  global: add SPDX tags to all files  (Greg Kroah-Hartman)

It's good to have SPDX identifiers in all files as the Linux kernel developers are working to add these identifiers to all files. Update all files with the correct SPDX license identifier based on the license text of the project or based on the license in the file itself. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Modified-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-10-31  global: style nits  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-10-31  global: infuriating kernel iterator style  (Jason A. Donenfeld)

One types:

    for (i = 0 ...

So one should also type:

    for_each_obj (obj ...

But the upstream kernel style guidelines are insane, and so we must instead do:

    for_each_obj(obj ...

Ugly, but one must choose his battles wisely.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-10-31  global: accept decent check_patch.pl suggestions  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-10-03  global: add space around variable declarations  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-08-08  hashtables: if we have an index match, don't search further ever  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-08-08  hashtables: allow up to 2^20 peers per interface  (Jason A. Donenfeld)

This allows for nearly 1 million peers per interface, which should be more than enough. If needed later, this number could easily be increased. We also increase the size of the hashtables to accommodate this upper bound. In the future, it might be smart to dynamically expand the hashtable instead of this hard-coded compromise value between small systems and large systems.

Ongoing work includes figuring out the most optimal scheme for these hashtables and for the insertion to mask their order from timing inference.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
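For concreteness, a hedged sketch of what such compile-time bounds might look like; the macro names and exact table sizes below are assumptions for illustration, not values quoted from this file:

    /* Illustrative constants only; the real sizes live in the source tree. */
    #define MAX_PEERS_PER_DEVICE   (1U << 20)  /* ~1 million peers per interface */
    #define PUBKEY_HASHTABLE_BITS  11          /* 2^11 buckets, fixed at build time */
    #define INDEX_HASHTABLE_BITS   13          /* 2^13 buckets, fixed at build time */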
2017-06-08  noise: fix race when replacing handshake  (Jason A. Donenfeld)

Replacing an entry that's already been replaced is something that could happen when processing handshake messages in parallel, when starting up multiple instances on the same machine.

Reported-by: Hubert Goisern <zweizweizwoelf@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-05-30  style: spaces after for loops  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-04-04  locking: always use _bh  (Jason A. Donenfeld)

All locks are potentially between user context and softirq, which means we need to take the _bh variant.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
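A minimal sketch of the rule this commit describes, with hypothetical names: data reachable from both process context (e.g. netlink configuration) and softirq context (packet processing) must be guarded with the bottom-half-disabling lock variants.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(table_lock);

    /* Called from process context; spin_lock_bh() also disables softirqs on
     * this CPU, so a softirq handler can't deadlock on the same lock. */
    static void update_table_from_user(void)
    {
            spin_lock_bh(&table_lock);
            /* ... mutate state shared with softirq handlers ... */
            spin_unlock_bh(&table_lock);
    }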
2017-03-19  hashtables: get_random_int is now more secure, so expose directly  (Jason A. Donenfeld)

On 4.11, get_random_u32 now either uses chacha or rdrand, rather than the horrible former MD5 construction, so we feel more comfortable exposing RNG output directly. On older kernels, we fall back to something a bit disgusting.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-02-13  compat: backport siphash & dst_cache from mainline  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2017-01-10  Update copyright  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-12-16  hashtables: use counter and int to ensure forward progress  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-12-16  siphash: update against upstream submission  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-12-12  hashtables: ensure we get 64-bits of randomness  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-12-11  global: move to consistent use of uN instead of uintN_t for kernel code  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-11-29  hashtable: use random number each time  (Jason A. Donenfeld)

Otherwise timing information might leak information about prior index entries. We also switch back to an explicit uint64_t because siphash needs something at least that size.

(This partially reverts 1550e9ba597946c88e3e7e3e8dcf33c13dd76e5b. Willy's suggestion was wrong.)

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-11-21  headers: cleanup notices  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-11-15  various: nits from willy  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-11-05  c89: the static keyword is okay in c99, but not in c89  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-09-29  Rework headers and includes  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-08-22  hashtables: use rdrand() instead of counter  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2016-08-02  c: specify static array size in function params  (Jason A. Donenfeld)

The C standard states:

    A declaration of a parameter as ``array of type'' shall be adjusted to ``qualified pointer to type'', where the type qualifiers (if any) are those specified within the [ and ] of the array type derivation. If the keyword static also appears within the [ and ] of the array type derivation, then for each call to the function, the value of the corresponding actual argument shall provide access to the first element of an array with at least as many elements as specified by the size expression.

By changing void func(int array[4]) to void func(int array[static 4]), we automatically get the compiler checking argument sizes for us, which is quite nice.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
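A small self-contained illustration of that effect; this example is not code from this repository and the function names are made up. Clang, and recent GCC with warnings enabled, can flag calls that pass an array with fewer elements than the declared minimum:

    #include <stdio.h>

    /* The `static 4` qualifier promises the callee at least 4 readable elements. */
    static void print_four(const int array[static 4])
    {
            for (int i = 0; i < 4; ++i)
                    printf("%d\n", array[i]);
    }

    int main(void)
    {
            int ok[4] = { 1, 2, 3, 4 };
            int too_small[2] = { 1, 2 };

            print_four(ok);              /* fine */
            /* print_four(too_small); */ /* would typically draw a compiler warning */
            (void)too_small;
            return 0;
    }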
2016-07-22  index hashtable: run random indices through siphash  (Jason A. Donenfeld)

If /dev/urandom is a NOBUS RNG backdoor, like the infamous Dual_EC_DRBG, then sending 4 bytes of raw RNG output over the wire directly might not be such a great idea. This mitigates that vulnerability by, at some point before the indices are generated, creating a random secret. Then, for each session index, we simply run SipHash24 on an incrementing counter.

This is probably overkill because /dev/urandom is probably not a backdoored RNG, and itself already uses several rounds of SHA-1 for mixing. If the kernel RNG is backdoored, there may very well be bigger problems at play. Four bytes is also not so many bytes.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
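A hedged sketch of that scheme, using illustrative names rather than the exact ones in this file: a secret SipHash key is generated once, and each wire-visible session index is derived by hashing an incrementing counter, so raw RNG output never appears on the wire.

    #include <linux/siphash.h>
    #include <linux/random.h>
    #include <linux/atomic.h>

    static siphash_key_t index_key;
    static atomic64_t index_counter = ATOMIC64_INIT(0);

    /* Generate the per-boot secret before any indices are handed out. */
    static void index_secret_init(void)
    {
            get_random_bytes(&index_key, sizeof(index_key));
    }

    /* Derive a 32-bit session index from the counter; only the SipHash
     * image of the counter is ever revealed, never the counter itself. */
    static u32 new_session_index(void)
    {
            u64 counter = atomic64_inc_return(&index_counter);

            return (u32)siphash_1u64(counter, &index_key);
    }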
2016-06-25  Initial commit  (Jason A. Donenfeld)
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>