From b935dc85c3fca51de8e131d6aa2047f8a0404f0c Mon Sep 17 00:00:00 2001
From: rofl0r
Date: Mon, 17 Dec 2018 00:23:09 +0000
Subject: simplify codebase by using one thread/conn, instead of preforked procs

the existing codebase used an elaborate and complex approach for its
parallelism: 5 different config file options, namely

- MaxClients
- MinSpareServers
- MaxSpareServers
- StartServers
- MaxRequestsPerChild

were used to steer how (and how many) parallel processes tinyproxy
would spin up at start, how many processes needed to be idle at any
point, etc.

it seems all preforked processes would listen on the server port and
compete with each other over which one would get assigned a new
incoming connection.

since some data needs to be shared across those processes, a
half-baked "shared memory" implementation was provided for this
purpose. that implementation used files in the filesystem, and since
it carried a big FIXME comment, the author was well aware of how
hackish that approach was.

this entire complexity is now removed. the main thread enters a loop
which polls on the listening fds, then spins up a new thread per
connection, until the maximum number of connections (MaxClients) is
hit. this is the only one of the 5 config options left after this
cleanup. (a rough sketch of such an accept loop is appended after the
patch below.) since threads share the same address space, the code
necessary for shared memory access has been removed. this means that
the other 4 config options mentioned above will now produce a parse
error when encountered.

currently each thread uses a hardcoded default of 256KB for its
thread stack size, which is quite lavish and should be sufficient
even for the worst C libraries, but people may want to tweak this
value down to the bare minimum, so we may provide a new config option
for this purpose in the future. i suspect that on heavily optimized C
libraries such as musl, a stack size of 8-16 KB per thread could be
sufficient.

since the existing list implementation in vector.c did not provide a
way to remove a single item from an existing list, i added my own
list implementation from my libulz library, which offers this
functionality, rather than bolting an ad-hoc, and perhaps buggy,
implementation onto the vector_t list code. the sblist code is
contained in an 80 line C file, is as simple as it can get, offers
good performance, and has proven bug-free through years of use in
other projects. (a usage sketch also follows after the patch.)
---
 docs/man5/tinyproxy.conf.txt.in | 28 +---------------------------
 1 file changed, 1 insertion(+), 27 deletions(-)

(limited to 'docs/man5/tinyproxy.conf.txt.in')

diff --git a/docs/man5/tinyproxy.conf.txt.in b/docs/man5/tinyproxy.conf.txt.in
index b3b94ec..afd3b6b 100644
--- a/docs/man5/tinyproxy.conf.txt.in
+++ b/docs/man5/tinyproxy.conf.txt.in
@@ -176,37 +176,11 @@ The possible keywords and their descriptions are as follows:
 
 *MaxClients*::
 
-    Tinyproxy creates one child process for each connected client.
+    Tinyproxy creates one thread for each connected client.
     This options specifies the absolute highest number processes that
     will be created. With other words, only MaxClients clients can be
     connected to Tinyproxy simultaneously.
 
-*MinSpareServers*::
-*MaxSpareServers*::
-
-    Tinyproxy always keeps a certain number of idle child processes
-    so that it can handle new incoming client requests quickly.
-    `MinSpareServer` and `MaxSpareServers` control the lower and upper
-    limits for the number of spare processes. I.e. when the number of
-    spare servers drops below `MinSpareServers` then Tinyproxy will
-    start forking new spare processes in the background and when the
-    number of spare processes exceeds `MaxSpareServers` then Tinyproxy
-    will kill off extra processes.
-
-*StartServers*::
-
-    The number of servers to start initially. This should usually be
-    set to a value between MinSpareServers and MaxSpareServers.
-
-*MaxRequestsPerChild*::
-
-    This limits the number of connections that a child process
-    will handle before it is killed. The default value is `0`
-    which disables this feature. This option is meant as an
-    emergency measure in the case of problems with memory leakage.
-    In that case, setting `MaxRequestsPerChild` to a value of e.g.
-    1000, or 10000 can be useful.
-
 *Allow*::
 *Deny*::
 
-- 
cgit v1.2.3
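
To make the new model concrete, here is a minimal sketch of the kind of
accept loop described in the commit message above: the main thread polls
the listening fds and hands each accepted connection to a detached thread,
refusing new clients once a MaxClients-style limit is reached, with the
thread stack size set explicitly. This is not tinyproxy's actual code;
handle_connection(), MAXCLIENTS, STACKSIZE, the port number and the minimal
error handling are illustrative assumptions.

/* illustrative sketch only, not tinyproxy's implementation */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAXCLIENTS 100           /* plays the role of the MaxClients option */
#define STACKSIZE  (256 * 1024)  /* the hardcoded per-thread stack mentioned above */

static atomic_int nclients;      /* number of active connection threads */

struct conn { int fd; };

static void *handle_connection(void *arg)
{
    struct conn *c = arg;
    /* ... proxy traffic on c->fd here ... */
    close(c->fd);
    free(c);
    atomic_fetch_sub(&nclients, 1);
    return NULL;
}

/* main thread: poll all listening fds, one detached thread per connection */
static void accept_loop(const int *listen_fds, size_t nfds)
{
    struct pollfd *pfds = calloc(nfds, sizeof *pfds);
    if (!pfds) return;
    for (size_t i = 0; i < nfds; i++) {
        pfds[i].fd = listen_fds[i];
        pfds[i].events = POLLIN;
    }

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_attr_setstacksize(&attr, STACKSIZE);

    for (;;) {
        if (poll(pfds, nfds, -1) <= 0)
            continue;
        for (size_t i = 0; i < nfds; i++) {
            if (!(pfds[i].revents & POLLIN))
                continue;
            int fd = accept(pfds[i].fd, NULL, NULL);
            if (fd < 0)
                continue;
            if (atomic_load(&nclients) >= MAXCLIENTS) {
                close(fd);           /* over the limit: drop the connection */
                continue;
            }
            struct conn *c = malloc(sizeof *c);
            if (!c) { close(fd); continue; }
            c->fd = fd;
            atomic_fetch_add(&nclients, 1);
            pthread_t t;
            if (pthread_create(&t, &attr, handle_connection, c) != 0) {
                atomic_fetch_sub(&nclients, 1);
                close(fd);
                free(c);
            }
        }
    }
}

int main(void)
{
    /* bind one listening socket; error checking omitted for brevity */
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(8888) };
    bind(listenfd, (struct sockaddr *)&addr, sizeof addr);
    listen(listenfd, 128);
    accept_loop(&listenfd, 1);
    return 0;
}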
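
The commit also mentions replacing vector.c with the sblist list from
libulz for lists that need single-item removal. As a rough idea of how
such a list might be used; the sblist API is not shown in this patch, so
sblist_new/sblist_add/sblist_get/sblist_getsize/sblist_delete/sblist_free
as used here are assumptions based on libulz and may differ from the
actual header.

/* hedged usage sketch; assumes libulz's sblist.h with roughly this API */
#include <stdio.h>
#include "sblist.h"

int main(void)
{
    /* a growable list of int: item size, then allocation block size */
    sblist *fds = sblist_new(sizeof(int), 16);

    for (int fd = 0; fd < 5; fd++)
        sblist_add(fds, &fd);        /* items are copied into the list */

    /* remove a single item by index - the operation vector.c lacked */
    sblist_delete(fds, 2);

    for (size_t i = 0; i < sblist_getsize(fds); i++)
        printf("%d\n", *(int *)sblist_get(fds, i));

    sblist_free(fds);
    return 0;
}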