Age | Commit message | Author |
|
|
Conflicts:
proto/bgp/attrs.c
proto/pipe/pipe.c
|
|
unnecessary syscalls"
This reverts commit 7f0e59820899c30a243c18556ce2e3fb72d6d221.
|
|
avoid unnecessary syscalls"
This reverts commit bea582cbb53e30dd32a5b6829c7443e0e5558d11.
|
|
This reverts commit 6cd37713781a3092f8166b2178fae35cbfec1e28.
|
|
Alignment of slabs should be at least sizeof(void *) to avoid unaligned
pointers in slab structures. FIXME: use a proper way to choose the
alignment for internal allocators.
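A minimal sketch of the rounding this implies (illustrative only, not
BIRD's allocator code): requested object sizes are rounded up to a
pointer-sized boundary so that pointers stored in slab-managed structures
stay naturally aligned.

    #include <stddef.h>

    /* Round a requested slab object size up to a multiple of the
     * pointer size, so embedded pointers remain naturally aligned. */
    static inline size_t
    slab_align_size(size_t size)
    {
      const size_t align = sizeof(void *);
      return (size + align - 1) & ~(align - 1);
    }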
|
|
Attach a prefix trie to IP/VPN/ROA tables. Use it for net_route() and
net_roa_check(). This leads to a 3-5x speedup for IPv4 and a 5-10x
speedup for IPv6 in these calls (a sketch of the underlying longest-prefix
walk follows the TODO list).
TODO:
- Rebuild the trie during rt_prune_table()
- Find a better way to avoid trie_add_prefix() in net_get() for existing tables
- Make it configurable (?)
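Illustrative only, not BIRD's actual trie code (all names here are
invented): a toy binary trie over IPv4 prefixes showing the longest-prefix
walk that a net_route()-style lookup performs.

    #include <stdint.h>
    #include <stdlib.h>

    struct trie_node {
      struct trie_node *child[2];
      int has_prefix;                /* a stored prefix ends at this node */
    };

    static struct trie_node *
    node_new(void)
    {
      struct trie_node *n = calloc(1, sizeof(*n));
      if (!n)
        abort();                     /* keep the sketch simple */
      return n;
    }

    /* Insert an IPv4 prefix (address in host byte order). */
    static void
    trie_add(struct trie_node *root, uint32_t addr, int plen)
    {
      struct trie_node *n = root;
      for (int i = 0; i < plen; i++)
      {
        int bit = (addr >> (31 - i)) & 1;
        if (!n->child[bit])
          n->child[bit] = node_new();
        n = n->child[bit];
      }
      n->has_prefix = 1;
    }

    /* Return the length of the longest stored prefix covering addr, or -1. */
    static int
    trie_longest_match(const struct trie_node *root, uint32_t addr)
    {
      int best = root->has_prefix ? 0 : -1;
      const struct trie_node *n = root;
      for (int i = 0; i < 32 && n; i++)
      {
        int bit = (addr >> (31 - i)) & 1;
        n = n->child[bit];
        if (n && n->has_prefix)
          best = i + 1;
      }
      return best;
    }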
|
|
Add an option to the socket interface for nonlocal binding, i.e. binding
to an IP address that is not present on any local interface. This
behaviour is enabled when the SKF_FREEBIND socket flag is set. On Linux
systems, it is implemented by the IP_FREEBIND socket option.
Minor changes done by committer.
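A minimal sketch of the underlying Linux mechanism (not BIRD's socket
wrapper; the helper name is invented): the option is set on the socket
before bind().

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Enable nonlocal binding on an IPv4 socket before bind(). */
    static int
    enable_freebind(int fd)
    {
      int one = 1;
      return setsockopt(fd, IPPROTO_IP, IP_FREEBIND, &one, sizeof(one));
    }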
|
|
This feature is intended mostly for checking that BIRD's allocation
strategies don't consume too much memory. There are some cases where
withdrawing routes in a specific order leads to memory fragmentation, and
this output should give the user at least a notion of how much memory is
actually used for data storage and how much memory is "just allocated"
or used for overhead.
Also raise the "system allocator overhead estimation" from 8 to 16
bytes; it is probably even more. I've found 16 as a local minimum in
best-case scenarios on the machines available to me. I couldn't find any
reasonable method to estimate this value when BIRD starts up.
This commit also fixes the inaccurate computation of memory overhead for
slabs, where the "system allocator overhead estimation" was improperly
added to the size of mmap-ed memory.
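A hypothetical illustration of the effective-vs-overhead split this output
reports; the struct, field names and the constant are invented, not BIRD's
accounting code.

    #include <stddef.h>

    /* Invented accounting sketch: split every allocation into payload
     * ("effective") bytes and estimated overhead bytes. */
    struct mem_stats {
      size_t effective;   /* bytes actually holding data */
      size_t overhead;    /* allocator metadata, padding, unused tails */
    };

    #define ALLOC_OVERHEAD_EST 16   /* estimated per-allocation system allocator overhead */

    static void
    account_alloc(struct mem_stats *s, size_t payload)
    {
      s->effective += payload;
      s->overhead += ALLOC_OVERHEAD_EST;
    }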
|
|
Introduce ipX_prefix_equal() and use it to simplify network matching code.
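Illustrative sketch of the idea (the real ipX_prefix_equal() presumably
covers both IPv4 and IPv6 and uses BIRD's own address types): compare only
the first pxlen bits of two addresses.

    #include <stdint.h>

    /* Compare the first pxlen bits of two IPv4 addresses (host order). */
    static inline int
    ip4_prefix_equal_sketch(uint32_t a, uint32_t b, unsigned pxlen)
    {
      uint32_t mask = pxlen ? ~0u << (32 - pxlen) : 0;
      return !((a ^ b) & mask);
    }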
|
|
We can also quite simply allocate bigger blocks. However, these blocks
need to be aligned to their size, which requires one mmap() of twice the
size and then two munmap()s returning the unaligned parts.
The user can specify -B <N> on startup, where <N> is the exponent of 2,
setting the block size to 2^N. On most systems, N is 12. If you know that
your configuration is going to eat gigabytes of RAM, you are almost forced
to raise the block size, as you may easily run into memory fragmentation
issues or have to raise your maximum mapping count, e.g.
"sysctl vm.max_map_count=(number)".
|
|
unnecessary syscalls
|
|
Use 16-way (4-bit) branching in the prefix trie instead of basic binary
branching. The change makes IPv4 prefix sets almost 3x faster, but with
more memory consumption and a much more complicated algorithm.
Together with a previous filter change, it makes IPv4 prefix sets
about 4.3x faster and slightly smaller (on my test data).
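Illustrative only (helper name invented): with 4-bit branching, each trie
level consumes one nibble of the address instead of a single bit.

    #include <stdint.h>

    /* Return the 4-bit branch index for the given trie depth (0..7)
     * of an IPv4 address in host byte order. */
    static inline unsigned
    trie_nibble(uint32_t addr, unsigned depth)
    {
      return (addr >> (28 - 4 * depth)) & 0xf;
    }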
|
|
unnecessary syscalls
|
|
Add min/max key length fields to the MAC algorithm description and
validate configured keys before they are used.
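A hypothetical sketch of the idea; the actual descriptor structure and
field names in BIRD differ.

    /* Invented descriptor: per-algorithm key length bounds used to
     * validate configured keys. */
    struct mac_desc_sketch {
      const char *name;
      unsigned min_key_length;
      unsigned max_key_length;
    };

    static int
    mac_key_length_ok(const struct mac_desc_sketch *d, unsigned key_length)
    {
      return (key_length >= d->min_key_length) && (key_length <= d->max_key_length);
    }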
|
|
Add support for specifying a password in hexadecimal format. The result
is the same whether a password is specified as a quoted string or as a
hex-encoded byte string; this just makes it more convenient to input
high-entropy byte strings as MAC keys.
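A generic sketch of the hex-to-bytes conversion such a feature needs; this
is not BIRD's config parser and the names are invented.

    #include <ctype.h>
    #include <stddef.h>

    static int
    hex_nibble(char c)
    {
      if (c >= '0' && c <= '9')
        return c - '0';
      c = tolower((unsigned char) c);
      if (c >= 'a' && c <= 'f')
        return c - 'a' + 10;
      return -1;
    }

    /* Decode a hex string into dst; returns the number of bytes
     * written, or -1 on malformed or oversized input. */
    static int
    hex_decode(const char *src, unsigned char *dst, size_t dst_len)
    {
      size_t n = 0;
      while (src[0] && src[1])
      {
        int hi = hex_nibble(src[0]), lo = hex_nibble(src[1]);
        if (hi < 0 || lo < 0 || n >= dst_len)
          return -1;
        dst[n++] = (unsigned char)((hi << 4) | lo);
        src += 2;
      }
      return src[0] ? -1 : (int) n;   /* reject odd-length input */
    }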
|
|
Import the blake2-kat.h header with test vector output from the Blake2
reference implementation, and add tests to mac_test.c to compare the
output of the BIRD MAC algorithm implementations with that reference
output.
Since the reference implementation only has test vectors for the full
output size, there are no tests for the smaller-sized output variants.
|
|
The Babel MAC authentication RFC recommends implementing Blake2s as one of
the supported algorithms. In order to do this, add the blake2b and
blake2s hash functions for MAC authentication. The hash function
implementations are the reference implementations from blake2.net.
The Blake2 algorithms allow specifying an arbitrary output size, and the
Babel MAC spec says to implement Blake2s with 128-bit output. To satisfy
this, we add two different variants of each of the algorithms: one using
the default size (256 bits for Blake2s, 512 bits for Blake2b), and one
using half the default output size.
Update to BIRD coding style done by committer.
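A sketch of the kind of call this enables, assuming the reference blake2.h
API from blake2.net is available; the wrapper name and constant are
invented.

    #include <stddef.h>
    #include <stdint.h>
    #include "blake2.h"   /* reference API from blake2.net (assumed available) */

    #define BLAKE2S128_SIZE 16   /* 128-bit output recommended by the Babel MAC spec */

    /* Compute a 128-bit keyed Blake2s digest over one message. */
    static int
    blake2s128_mac(uint8_t out[BLAKE2S128_SIZE],
                   const void *msg, size_t msg_len,
                   const void *key, size_t key_len)
    {
      return blake2s(out, BLAKE2S128_SIZE, msg, msg_len, key, key_len);
    }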
|
|
Simplify the code and fix an issue with getentropy() return value.
|
|
Add a wrapper function in sysdep to get random bytes, and the required
checks in configure.ac to select how to do it. The configure script tries,
in order, getrandom(), getentropy(), and reading from /dev/urandom.
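A hedged sketch of such a wrapper with runtime fallbacks (the real BIRD
wrapper selects one method at configure time; this assumes a reasonably
recent Linux/glibc and covers only two of the three sources).

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/random.h>
    #include <unistd.h>

    /* Fill buf with len random bytes, trying getrandom() first and
     * falling back to /dev/urandom; aborts if no source works. */
    static void
    random_bytes(void *buf, size_t len)
    {
      if (getrandom(buf, len, 0) == (ssize_t) len)
        return;

      int fd = open("/dev/urandom", O_RDONLY);
      if (fd >= 0)
      {
        ssize_t done = read(fd, buf, len);
        close(fd);
        if (done == (ssize_t) len)
          return;
      }

      abort();
    }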
|
|
underlying interface disappears
When an interface disappears, all of its neighbors are freed as well.
Seqno requests, however, were not decoupled from them, leading to strange
segfaults. This fix adds a proper seqno request list inside neighbors to
make sure that no pointer to a neighbor is kept after it is freed.
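An invented, simplified illustration of the ownership pattern (not the
actual Babel structures): seqno requests live on a per-neighbor list and
are detached when the neighbor is freed.

    /* Invented types: a request keeps a back pointer to its neighbor,
     * and the neighbor owns the list of its requests. */
    struct seqno_request_sketch {
      struct seqno_request_sketch *next;
      struct neighbor_sketch *nbr;      /* back pointer that must not dangle */
    };

    struct neighbor_sketch {
      struct seqno_request_sketch *requests;
    };

    /* Called before the neighbor is freed: detach every request so no
     * pointer to the neighbor survives. */
    static void
    neighbor_detach_requests(struct neighbor_sketch *n)
    {
      for (struct seqno_request_sketch *r = n->requests; r; r = r->next)
        r->nbr = NULL;                  /* or cancel/free the request here */
      n->requests = NULL;
    }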
|
|
For numeric operators, comma is used for disjunction in expressions like
"10, 20, 30..40". But for bitmask operators, comma is used for
conjunction in a way that does not really make much sense. Always use
explicit logical operators (&& and ||) to connect bitmask operators.
Thanks to Matt Corallo for the bug report.
|
|
Add support for setting or reading outgoing MPLS labels using filters.
Currently this supports adding one label per route for the first next hop.
Minor changes by committer.
|
|
RFC 8955 is pretty clear that 000 is false and 111 is true.
|
|
Implement the function flow_explicate_part() to convert flowspec numeric
expressions into a simple list of disjoint, sorted intervals. It could be
used in filters to build f_tree-based int-sets from them.
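A generic sketch of the target representation: merging a list of intervals
sorted by lower bound into disjoint ones (the flowspec numeric-operator
decoding that flow_explicate_part() also has to do is omitted; names are
invented).

    #include <stddef.h>
    #include <stdint.h>

    struct ival { uint64_t lo, hi; };   /* closed interval [lo, hi] */

    /* Merge intervals sorted by lo into disjoint ones, in place;
     * returns the new count. */
    static size_t
    merge_intervals(struct ival *v, size_t n)
    {
      if (!n)
        return 0;

      size_t out = 0;
      for (size_t i = 1; i < n; i++)
      {
        if (v[i].lo <= v[out].hi || (v[i].lo - v[out].hi) == 1)
        {
          if (v[i].hi > v[out].hi)
            v[out].hi = v[i].hi;        /* overlapping or adjacent: extend */
        }
        else
          v[++out] = v[i];              /* disjoint: start a new interval */
      }
      return out + 1;
    }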
|
|
The code in tm_format_real_time() mixed up two buffers and their sizes,
which could cause a crash in the MRT dumping code.
Thanks to Piotr Wydrych for the bug report.
|
|
From now on, there are no auxiliary pointers stored in the free slab nodes.
Storing them led to strange debugging problems when a use-after-free
happened in slab-allocated structures, especially if the structure's first
member is a next pointer.
This also reduces the memory needed by one pointer per allocated object.
On the other hand, we now rely on pages being aligned to a multiple of
their size, which is quite common anyway.
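A sketch of the property this relies on (the constant and name are
illustrative): with pages aligned to a multiple of their size, the page
head can be recovered from any object pointer by masking, so free nodes
need no stored pointers.

    #include <stdint.h>

    #define PAGE_SIZE_SKETCH 4096u   /* illustrative page size */

    /* Recover the head of the page an object lives in by masking,
     * which works because pages are aligned to their size. */
    static inline void *
    page_head_of(void *obj)
    {
      return (void *)((uintptr_t) obj & ~((uintptr_t) PAGE_SIZE_SKETCH - 1));
    }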
|
|
In general, events are code handling some condition; they are scheduled
when such a condition happens and executed independently of the I/O loop.
Work-events are a subgroup of events that are scheduled repeatedly until
some (often significant) amount of work is done (e.g. feeding routes to a
protocol). All scheduled events are executed during each I/O loop
iteration.
Separate work-events from regular events into a separate queue and
rate-limit their execution to a fixed number per I/O loop iteration.
That should prevent excess latency when many work-events are scheduled
at once (e.g. a simultaneous reload of many BGP sessions).
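An invented, simplified sketch of the rate limiting described above (types
and the limit are illustrative, not the actual BIRD event code): at most a
fixed number of work-events run per I/O loop iteration and the rest stay
queued.

    #define WORK_EVENTS_PER_LOOP 4   /* illustrative per-iteration budget */

    struct event_sketch {
      struct event_sketch *next;
      void (*hook)(void *data);
      void *data;
    };

    /* Run at most WORK_EVENTS_PER_LOOP queued work-events; returns
     * nonzero if events remain for the next I/O loop iteration. */
    static int
    run_work_events(struct event_sketch **queue)
    {
      int budget = WORK_EVENTS_PER_LOOP;
      while (*queue && budget--)
      {
        struct event_sketch *e = *queue;
        *queue = e->next;
        e->hook(e->data);
      }
      return *queue != NULL;
    }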
|