author | Klemens Schölhorn <klemens.schoelhorn@advantest.com> | 2022-05-17 18:06:48 +0200 |
---|---|---|
committer | Klemens Schölhorn <klemens.schoelhorn@advantest.com> | 2023-06-02 12:02:15 +0200 |
commit | 9fddb6e99fb981cd31eb8187dd2f47746b3625d8 (patch) | |
tree | dc2f13662bb8b8b15c7da4a42984a40c36bbbfa8 /sites | |
parent | 63fd01aa39b0d7cba587380a61b38590004c755d (diff) | |
Allow limiting the number of concurrent prefetch requests
Currently, prefetch sends the requests for all chunks of the file in one
shot. With the default chunk size of 32k, this can result in many
thousands of outstanding requests for large files. Some servers, such as
Serv-U 15.2.3.742, seem to drop requests after a certain number, which
causes the file download to hang indefinitely (or until the server
closes the connection).
Fix this issue by letting the user specify a limit on the number of
concurrent requests. This is similar to OpenSSH's sftp client, which
limits the number of concurrent requests to 64 by default.
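As a usage sketch only: the snippet below shows how such a limit would be passed through ``SFTPClient.get``, assuming the ``max_concurrent_prefetch_requests`` keyword argument named in the changelog entry further down. The host, credentials, and file paths are placeholders, not taken from this commit.

```python
import paramiko

# Hypothetical host, credentials, and paths, purely for illustration.
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("sftp.example.com", username="alice", password="secret")

sftp = client.open_sftp()
# Cap the number of outstanding prefetch read requests at 64,
# mirroring the default limit used by OpenSSH's sftp client.
sftp.get(
    "/remote/large-file.bin",
    "large-file.bin",
    max_concurrent_prefetch_requests=64,
)
client.close()
```

Leaving the argument unset would keep the previous behaviour of issuing every chunk request at once.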
Diffstat (limited to 'sites')
-rw-r--r-- | sites/www/changelog.rst | 6 |
1 file changed, 6 insertions, 0 deletions
diff --git a/sites/www/changelog.rst b/sites/www/changelog.rst
index c18890eb..9b903258 100644
--- a/sites/www/changelog.rst
+++ b/sites/www/changelog.rst
@@ -2,6 +2,12 @@
 Changelog
 =========
 
+- :feature:`2058` (solves :issue:`1587` and possibly others) Add an explicit
+  ``max_concurrent_prefetch_requests`` argument to `paramiko.client.SSHClient.get`
+  and `paramiko.client.SSHClient.getfo`, allowing users to limit the number
+  of concurrent requests used during prefetch. Patch by ``@kschoelhorn``, with
+  a test by ``@bwinston-sdp``.
+
 - :release:`3.2.0 <2023-05-25>`
 - :bug:`- major` Fixed a very sneaky bug found at the apparently
   rarely-traveled intersection of ``RSA-SHA2`` keys, certificates, SSH agents,
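For completeness, a second sketch of the lower-level path: it assumes the file handle's ``prefetch`` method gained a matching ``max_concurrent_requests`` parameter as part of this change. That parameter name, along with the connection details, is an assumption for illustration and is not taken from the diff above.

```python
import paramiko

# Hypothetical connection details, purely for illustration.
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("sftp.example.com", username="alice")

sftp = client.open_sftp()
with sftp.open("/remote/large-file.bin", "rb") as remote_file:
    # Prefetch with at most 64 outstanding read requests instead of
    # requesting every 32k chunk of the file in one shot.
    remote_file.prefetch(max_concurrent_requests=64)
    data = remote_file.read()
client.close()
```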