author    | Ondrej Zajicek <santiago@crfreenet.org> | 2022-05-15 15:05:13 +0200
committer | Ondrej Zajicek <santiago@crfreenet.org> | 2022-05-15 15:05:37 +0200
commit    | ba2a076001be6e4140fbd2703cd134e5aa51450b (patch)
tree      | 03f42c0f8ede487b0f4752cadbd4044b43812b94 /nest
parent    | 207ac4853316ceffa6d56ff06f8904320d3bf45f (diff)
BGP: Improve tx performance during feed/flush
The prefix hash table in BGP used the same hash function as the rtable.
When a batch of routes is exported to BGP during feed/flush, they all
have similar hash values, so they crowd into a few slots of the BGP
prefix table (which is much smaller, around the size of the batch, and
uses the higher bits of the hash values), making it much slower due to
excessive collisions. Use a different hash function to avoid this.
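As an illustration (a minimal standalone sketch, not BIRD's actual hashing
code), reusing one hash value and indexing the smaller table by its higher
bits piles a whole batch into a few slots, while an independent mixing step
spreads it out:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical mixing step (not BIRD's actual function): multiply by an
     * odd constant and fold the high bits down, so the result no longer
     * shares structure with the original hash. */
    static uint32_t remix(uint32_t h)
    {
      h *= 2654435761u;          /* Knuth's multiplicative constant */
      return h ^ (h >> 16);
    }

    int main(void)
    {
      enum { SLOTS = 16 };       /* small table indexed by the top 4 bits */
      int same[SLOTS] = { 0 }, mixed[SLOTS] = { 0 };

      /* A "batch" of hash values sharing their high bits, as routes exported
       * together from one part of the rtable might. */
      for (uint32_t i = 0; i < 1024; i++)
      {
        uint32_t h = 0xabcd0000u | i;
        same[h >> 28]++;         /* reuse the rtable hash: all land in one slot */
        mixed[remix(h) >> 28]++; /* remix first: spread across the slots */
      }

      for (int s = 0; s < SLOTS; s++)
        printf("slot %2d: same-hash %4d  remixed %4d\n", s, same[s], mixed[s]);
      return 0;
    }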
Also, increase the batch size to fill 4k BGP packets and increase the minimum
BGP bucket and prefix hash sizes to avoid back-and-forth resizing during
flushes.
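For scale (a back-of-envelope sketch using assumed sizes, not values taken
from the BIRD source), one default-sized 4096-byte UPDATE holds on the order
of a thousand IPv4 prefixes, so a batch in the low thousands is enough to
fill whole packets:

    #include <stdio.h>

    int main(void)
    {
      const int msg_size   = 4096;   /* default BGP maximum message size */
      const int header     = 19;     /* marker (16) + length (2) + type (1) */
      const int fixed      = 2 + 2;  /* withdrawn-routes length + path-attribute length */
      const int attrs      = 64;     /* assumed path-attribute size for one bucket */
      const int nlri_bytes = 1 + 3;  /* one /24 prefix: length byte + 3 address bytes */

      printf("~%d /24 prefixes per 4k UPDATE\n",
             (msg_size - header - fixed - attrs) / nlri_bytes);
      return 0;
    }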
This leads to an order of magnitude faster flushes (on my test data).
Diffstat (limited to 'nest')
-rw-r--r-- | nest/rt-table.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/nest/rt-table.c b/nest/rt-table.c
index a10979e6..c3563171 100644
--- a/nest/rt-table.c
+++ b/nest/rt-table.c
@@ -2380,7 +2380,7 @@ static void rt_prune_table(rtable *tab)
 {
   struct fib_iterator *fit = &tab->prune_fit;
-  int limit = 512;
+  int limit = 2000;
   struct channel *c;
   node *n, *x;