Age | Commit message | Author
|
All the memory is freed implicitly on exit; there is no need for
page-by-page unmapping.
|
The resource pool system is highly hierarchical, and keeping spare pages
in pools leads to unnecessarily complex memory management.
Loops have a flat hierarchy, at least for now, and it is therefore much
easier to keep track of pages, especially in cases of excessive virtual
memory fragmentation.
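A minimal sketch of the idea, assuming a hypothetical loop structure with its own flat free-page list; names like loop_get_page() and loop_free_page() are illustrative, not BIRD's actual API:

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical per-loop free-page list; real BIRD structures differ. */
    struct free_page { struct free_page *next; };

    struct loop {
      struct free_page *free_pages;   /* pages kept for reuse by this loop only */
    };

    static void *loop_get_page(struct loop *l)
    {
      if (l->free_pages) {
        struct free_page *fp = l->free_pages;
        l->free_pages = fp->next;
        return fp;
      }
      /* No spare page kept by the loop, ask the kernel for a fresh one. */
      void *p = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return (p == MAP_FAILED) ? NULL : p;
    }

    static void loop_free_page(struct loop *l, void *page)
    {
      /* Keep the page in the flat per-loop list instead of returning it
         to a hierarchical pool or unmapping it right away. */
      struct free_page *fp = page;
      fp->next = l->free_pages;
      l->free_pages = fp;
    }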
|
Also, all loops have their own basic resource pool for allocations, which
are auto-freed when the loop stops.
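A sketch of that lifetime pattern under assumed, invented names (a rough analogy only, not BIRD's resource/pool code): all allocations made through the loop's pool are released in one sweep when the loop stops.

    #include <stdlib.h>

    /* Hypothetical pool: a list of blocks freed together when the loop stops. */
    struct pool_block { struct pool_block *next; };
    struct pool { struct pool_block *blocks; };

    static void *pool_alloc(struct pool *p, size_t size)
    {
      struct pool_block *b = malloc(sizeof(struct pool_block) + size);
      if (!b) return NULL;
      b->next = p->blocks;
      p->blocks = b;
      return b + 1;                 /* user data follows the header */
    }

    static void pool_free_all(struct pool *p)
    {
      /* Called when the loop is stopping: every allocation goes away at once. */
      while (p->blocks) {
        struct pool_block *b = p->blocks;
        p->blocks = b->next;
        free(b);
      }
    }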
|
This feature is intended mostly for checking that BIRD's allocation
strategies don't consume too much memory. There are some cases where
withdrawing routes in a specific order leads to memory fragmentation, and
this output should give the user at least a notion of how much memory is
actually used for data storage and how much memory is "just allocated"
or used for overhead.
Also raising the "system allocator overhead estimation" from 8 to 16
bytes; it is probably even more. I've found 16 to be a local minimum in
the best scenarios among the machines I could test on. I couldn't find
any reasonable method to estimate this value when BIRD starts up.
This commit also fixes an inaccurate computation of memory overhead for
slabs, where the "system allocator overhead estimation" was improperly
added to the size of mmap-ed memory.
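A sketch of the accounting idea; the names (memstat, account_alloc, ALLOC_OVERHEAD) and the split into "effective" and "overhead" counters are purely illustrative, not BIRD's actual code, and the 16-byte constant is the estimate mentioned above, not a measured value:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed per-allocation bookkeeping cost of the system allocator. */
    #define ALLOC_OVERHEAD 16

    struct memstat {
      uint64_t effective;   /* bytes actually requested for data storage */
      uint64_t overhead;    /* padding, headers, free slab space, estimates */
    };

    static void account_alloc(struct memstat *ms, size_t requested,
                              size_t reserved, int from_malloc)
    {
      /* "reserved" is what we really took from the system (e.g. a whole
         mmap-ed slab); everything beyond the requested data is overhead. */
      ms->effective += requested;
      ms->overhead += reserved - requested;
      if (from_malloc)              /* mmap-ed slabs carry no malloc header */
        ms->overhead += ALLOC_OVERHEAD;
    }

    int main(void)
    {
      struct memstat ms = { 0, 0 };
      account_alloc(&ms, 1000, 4096, 0);  /* 1000 B of data in one mmap-ed 4 KiB page */
      account_alloc(&ms, 100, 128, 1);    /* a small malloc-ed block, rounded up */
      printf("effective: %llu B, overhead: %llu B\n",
             (unsigned long long) ms.effective,
             (unsigned long long) ms.overhead);
      return 0;
    }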
|
congestion
|
A lot of time was spent on locking when accessing the route attribute
cache. This overhead should now be reduced to a minimum.
|
On Linux, munmap() may fail with ENOMEM when virtual memory is too
fragmented. We work around this by simply keeping such blocks for future
use.
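A minimal sketch of that workaround, assuming a hypothetical free-block list (return_block() and the dead_block structure are invented for illustration):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical list of blocks that could not be unmapped. */
    struct dead_block { struct dead_block *next; size_t len; };
    static struct dead_block *kept_blocks;

    static void return_block(void *ptr, size_t len)
    {
      if (munmap(ptr, len) == 0)
        return;

      /* On Linux, munmap() may fail with ENOMEM when splitting a mapping
         in a heavily fragmented address space. Instead of giving up,
         remember the still-mapped block and hand it out again later. */
      if (errno == ENOMEM) {
        struct dead_block *db = ptr;
        db->len = len;
        db->next = kept_blocks;
        kept_blocks = db;
      }
    }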
|
This basically means that:
* there are some more levels of indirection and asynchronicity, mostly
in cleanup procedures, requiring correct lock ordering
* all the internal table operations (prune, next hop update) are done
without blocking the other parts of BIRD
* the protocols may get their own loops very soon
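A minimal sketch of the lock-ordering discipline mentioned above, using plain pthreads and invented names (domain_lock(), ORDER_THE_BIRD and friends are illustrative, not BIRD's own locking primitives); locks may only be taken in ascending order, which rules out cycles between loops:

    #include <assert.h>
    #include <pthread.h>

    enum lock_order { ORDER_THE_BIRD = 1, ORDER_TABLE = 2 };

    struct domain {
      pthread_mutex_t mutex;
      enum lock_order order;
      enum lock_order prev;      /* order held by this thread before locking */
    };

    static _Thread_local enum lock_order last_locked;  /* per-thread check */

    static void domain_lock(struct domain *d)
    {
      assert(d->order > last_locked);   /* enforce the ordering */
      pthread_mutex_lock(&d->mutex);
      d->prev = last_locked;
      last_locked = d->order;
    }

    static void domain_unlock(struct domain *d)
    {
      /* Assumes strictly nested lock/unlock pairs. */
      last_locked = d->prev;
      pthread_mutex_unlock(&d->mutex);
    }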
|
To access the route attribute cache from multiple threads at once, we
have to lock the cache on writing. The route attribute data structures
are safe to read as long as nobody tampers with the cache itself.
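A sketch of the pattern with a hypothetical, simplified cache of immutable entries (invented names; BIRD's rta cache is considerably more involved): writers serialize on a mutex, readers walk the published list without any lock.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    /* Entries are immutable once published, so readers need no lock. */
    struct attr_entry {
      struct attr_entry *next;
      unsigned hash;
      /* ...attribute data would follow... */
    };

    static _Atomic(struct attr_entry *) cache_head;
    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Readers: traverse without taking the lock. */
    static struct attr_entry *attr_lookup(unsigned hash)
    {
      for (struct attr_entry *e =
             atomic_load_explicit(&cache_head, memory_order_acquire);
           e; e = e->next)
        if (e->hash == hash)
          return e;
      return NULL;
    }

    /* Writers: serialize insertions, publish with a release store. */
    static struct attr_entry *attr_insert(unsigned hash)
    {
      pthread_mutex_lock(&cache_lock);
      struct attr_entry *e = calloc(1, sizeof(*e));
      if (!e) { pthread_mutex_unlock(&cache_lock); return NULL; }
      e->hash = hash;
      e->next = atomic_load_explicit(&cache_head, memory_order_relaxed);
      atomic_store_explicit(&cache_head, e, memory_order_release);
      pthread_mutex_unlock(&cache_lock);
      return e;
    }

This sketch only prepends entries; removing entries would additionally need some form of deferred freeing so that lock-free readers never follow a pointer into freed memory.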
|
This commit prevents use-after-free of routes belonging to protocols
which have already been destroyed, also delaying each protocol's
shutdown until all of its routes have finally been propagated through
all the pipes down to the appropriate exports.
The use-after-free was somewhat hypothetical yet theoretically possible
in rare conditions: one BGP protocol originates a lot of routes, and the
user deletes that protocol by reconfiguring at the same time as a next
hop update is requested, causing rte_better() to be called on a
not-yet-pruned network prefix while the owning protocol has already been
freed.
In a parallel execution environment, this would become an inter-thread
use-after-free, causing possible heisenbugs or other nasty problems.
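One way to picture the fix is a per-source reference counter that keeps the owner alive until its last route has been flushed; the sketch below uses invented names (route_src, src_put, etc.) and is not BIRD's actual code:

    #include <stdatomic.h>
    #include <stdlib.h>

    /* Hypothetical route source: one reference per route still present
       somewhere, plus one reference held by the protocol itself. */
    struct route_src {
      atomic_uint uc;
    };

    static struct route_src *src_new(void)
    {
      struct route_src *s = malloc(sizeof(*s));
      if (!s) return NULL;
      atomic_init(&s->uc, 1);            /* the protocol's own reference */
      return s;
    }

    static void src_put(struct route_src *s)
    {
      /* Freed only after the last route is flushed AND the protocol has
         dropped its own reference, whichever happens later. */
      if (atomic_fetch_sub_explicit(&s->uc, 1, memory_order_acq_rel) == 1)
        free(s);
    }

    static void src_route_added(struct route_src *s)
    {
      atomic_fetch_add_explicit(&s->uc, 1, memory_order_relaxed);
    }

    static void src_route_flushed(struct route_src *s)    { src_put(s); }
    static void src_protocol_stopped(struct route_src *s) { src_put(s); }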
|
The corked procedure gets a callback when uncorked. This is supported by
the table maintenance routines and also by BGP.
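A simplified, single-threaded sketch of the cork mechanism with invented names (cork_check(), cork_release()); it is an illustration of the callback idea, not BIRD's implementation:

    #include <stddef.h>

    /* A corked procedure registers itself and is called back on uncork. */
    struct cork_waiter {
      struct cork_waiter *next;
      void (*uncorked)(struct cork_waiter *w);   /* callback to resume work */
    };

    struct cork {
      int corked;                   /* nonzero while e.g. the table is overloaded */
      struct cork_waiter *waiters;
    };

    /* Returns 1 if the caller has been corked and must wait for the callback. */
    static int cork_check(struct cork *c, struct cork_waiter *w)
    {
      if (!c->corked)
        return 0;
      w->next = c->waiters;
      c->waiters = w;
      return 1;
    }

    static void cork_release(struct cork *c)
    {
      c->corked = 0;
      while (c->waiters) {
        struct cork_waiter *w = c->waiters;
        c->waiters = w->next;
        w->uncorked(w);             /* resume the corked procedure */
      }
    }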
|
The former rt_event is dropped in favour of separate table events.
This allows for selective corking of NHU and prune.
|
There is a simple universal IO loop, taking care of events, timers and
sockets. Primarily, one instance of a protocol should use exactly one IO
loop to do all its work, as is now done in BFD.
Contrary to previous versions, the loop is now launched and cleaned up by
the nest/proto.c code, allowing a protocol to request its own loop simply
by setting the loop's lock order in its config higher than the_bird.
It is neither supported nor checked whether a protocol changes the
requested lock order on reconfigure; no protocol should do that at all.
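A stripped-down illustration of such an IO loop handling events, timers and sockets, built on poll() with invented structures and callbacks (BIRD's real loop is considerably richer):

    #include <poll.h>

    /* Hypothetical single-loop skeleton: run pending events, fire due
       timers, then poll sockets until the next timer is due. */
    struct loop_state {
      int (*run_events)(void);           /* returns nonzero if more events queued */
      long long (*next_timer_ms)(void);  /* -1 when no timer is armed */
      void (*run_timers)(void);
      struct pollfd *fds;
      int nfds;
      void (*socket_ready)(int idx, short revents);
    };

    static void loop_run(struct loop_state *ls, volatile int *stop)
    {
      while (!*stop) {
        int more = ls->run_events();
        ls->run_timers();

        long long t = ls->next_timer_ms();
        int timeout = more ? 0 : (t < 0 ? -1 : (int) t);

        int n = poll(ls->fds, ls->nfds, timeout);
        if (n <= 0)
          continue;

        for (int i = 0; i < ls->nfds; i++)
          if (ls->fds[i].revents)
            ls->socket_ready(i, ls->fds[i].revents);
      }
    }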
|
In previous versions, every thread used its own time structures,
effectively leading to a different notion of time in every thread and to
strange logging messages.
The time processing code now uses global atomic variables to keep the
current time available for fast concurrent reading and safe updates.
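A sketch of that pattern with C11 atomics (invented names; BIRD keeps its own time types): a single writer updates the global clock, any thread may read it without locking.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <time.h>

    static _Atomic int64_t current_time_ns;

    static void update_current_time(void)
    {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      int64_t now = (int64_t) ts.tv_sec * 1000000000 + ts.tv_nsec;
      atomic_store_explicit(&current_time_ns, now, memory_order_release);
    }

    static int64_t read_current_time(void)
    {
      /* Safe from any thread, no locking needed. */
      return atomic_load_explicit(&current_time_ns, memory_order_acquire);
    }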
|
On most current hardware, reading the monotonic clock is fast enough to
fetch and write it for each debug message.
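For reference, the cost in question is roughly one clock_gettime(CLOCK_MONOTONIC) call per message, as in this sketch (debug_msg() is an invented stand-in, not BIRD's logging routine):

    #include <stdio.h>
    #include <time.h>

    /* Timestamp every debug line directly from the monotonic clock. */
    static void debug_msg(const char *msg)
    {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      fprintf(stderr, "%lld.%06ld %s\n",
              (long long) ts.tv_sec, ts.tv_nsec / 1000, msg);
    }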
|
thread-local variables
|