| author | Jason A. Donenfeld <Jason@zx2c4.com> | 2017-09-19 02:56:21 +0200 |
|---|---|---|
| committer | Jason A. Donenfeld <Jason@zx2c4.com> | 2017-09-19 03:04:21 +0200 |
| commit | 049d7bd343aabf3dc2cb7fdb69c6666f2fb1d373 (patch) | |
| tree | f87983b26438c202e27ff1e652e75d65aa5a5971 /src/queueing.h | |
| parent | c81c0eeea5bdef2fa454c43039f5e920699b2adb (diff) | |
receive: use netif_receive_skb instead of netif_rx
netif_rx queues things up to a per-CPU backlog, whereas
netif_receive_skb immediately delivers the packet to the underlying
network device and mostly never fails. In the event that decrypting
packets is actually happening faster than the networking subsystem
receives them -- as with 65k packets with UDPv6 in `make test-qemu`
-- this backlog fills up and we wind up dropping some packets.
This is fine and not altogether terrible, but it does raise the
question of why we bothered spending CPU cycles decrypting those
packets if they were just going to be dropped anyway. So, moving from
netif_rx to netif_receive_skb means that whatever time netif_receive_skb
needs winds up slowing down the dequeuing of decrypted packets, which
in turn means the decryption receive queue fills up sooner, so that we
drop packets before decryption, rather than after, thus saving precious
CPU cycles.
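
A minimal sketch of the shape of this change, assuming a hypothetical
completion path for a decrypted skb (function and variable names here
are illustrative, not WireGuard's actual source):

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical completion handler for a freshly decrypted packet. */
static void deliver_decrypted_skb(struct sk_buff *skb,
				  struct net_device *dev)
{
	skb->dev = dev;

	/* Before: netif_rx(skb) enqueued the skb on a per-CPU backlog
	 * and returned immediately; if that backlog was full, the packet
	 * we had just spent cycles decrypting was silently dropped. */

	/* After: netif_receive_skb(skb) hands the packet up the stack
	 * synchronously, so its cost naturally throttles how fast the
	 * decryption queue is drained, moving any drops to before
	 * decryption rather than after it. */
	netif_receive_skb(skb);
}
```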
Potential downsides of this include not keeping the cache hot, or not
inundating the network subsystem with as many packets per second as
possible, but in preliminary benchmarks, no difference has yet been
observed.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Diffstat (limited to 'src/queueing.h')
0 files changed, 0 insertions, 0 deletions