author	Jason A. Donenfeld <Jason@zx2c4.com>	2020-05-04 18:35:12 -0600
committer	Jason A. Donenfeld <Jason@zx2c4.com>	2020-05-04 18:36:18 -0600
commit	2ab4726cc2808f1c9e8acac5120ae7be6a090ebb (patch)
tree	1c9f757d89456dcce86d2511acdf317fdd22b8ad /src/send.c
parent	ed03442d350e5af9dda7ad3c0dededf3b8b6b417 (diff)
send: cond_resched() when processing tx ringbuffers
Users with pathological hardware reported CPU stalls on CONFIG_PREEMPT_VOLUNTARY=y, because the ringbuffers would stay full, meaning these workers would never terminate. That turned out not to be okay on systems without forced preemption. This commit adds a cond_resched() to the bottom of each loop iteration, so that these workers don't hog the core.

We don't do this on encryption/decryption because the compat module here uses simd_relax, which already includes a call to schedule in preempt_enable.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Diffstat (limited to 'src/send.c')
-rw-r--r--	src/send.c	2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/src/send.c b/src/send.c
index 9290bb9..e6e89ad 100644
--- a/src/send.c
+++ b/src/send.c
@@ -284,6 +284,8 @@ void wg_packet_tx_worker(struct work_struct *work)
 		wg_noise_keypair_put(keypair, false);
 		wg_peer_put(peer);
+		if (need_resched())
+			cond_resched();
 	}
 }