[geeks] Network Slowness

der Mouse mouse at Rodents.Montreal.QC.CA
Thu May 24 12:03:34 CDT 2007


>>> OpenSSH SFTP using Blowfish ciphers.
>> SSH has a massive overhead in processing,
> Massive enough to slow a 1000Mbps connection to below 50Mbps?  No
> way.

Look into it before you make pronouncements like that. :)

Yes, it can be that bad.  I've lost track of what the endpoint machines
here were, but if they have comparatively weak CPUs, then it's possible
they can't shove more than 50 megabits a second through Blowfish no
matter what.  (Unless they're oddball architectures, the Blowfish
implementation is probably not at fault - while it certainly is one of
the suspects, OpenSSH usually has tolerably good crypto code.)
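To see how a weak CPU caps throughput, here's a back-of-envelope
sketch.  The numbers are hypothetical - a rough per-byte cycle cost
for software Blowfish and an assumed endpoint clock speed - not
measurements of the machines in question:

```python
# Rough cipher-limited throughput estimate.
# Assumed (hypothetical) numbers: software Blowfish costing about
# 20 CPU cycles per byte, on a 500 MHz endpoint devoting its whole
# CPU to encryption.  Real machines do less, since SSH also spends
# cycles on the MAC, data copies, and syscalls.

CYCLES_PER_BYTE = 20          # assumed Blowfish cost per byte
CPU_HZ = 500e6                # assumed 500 MHz endpoint CPU

bytes_per_sec = CPU_HZ / CYCLES_PER_BYTE
megabits_per_sec = bytes_per_sec * 8 / 1e6
print(f"cipher-limited throughput <= {megabits_per_sec:.0f} Mbit/s")
```

Even under these generous assumptions the cipher alone caps out around
200 Mbit/s; the rest of SSH's per-byte work and any protocol stalls
only pull that number down, so 50 Mbit/s on a slower CPU is plausible.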

Furthermore, using sftp can be relevant.  Despite the name, sftp is not
a file transfer protocol; it is a remote filesystem access protocol
(which can of course be (ab?)used to do file transfers).  I think it
permits having multiple outstanding requests, therefore permitting
pipelining of reads and writes, but even if I'm right, it's entirely
possible that the implementation in use doesn't actually take advantage
of that, in which case every block of data read or written will incur a
round-trip latency penalty.  Depending on how large those blocks are
and how much processing is involved, this can make a substantial
difference.  (I think with OpenSSH, for example, sftp is implemented as
a separate process, probably incurring two more kernel/user crossings,
and a context switch, in each direction at each end.)
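The round-trip penalty is easy to quantify.  With one outstanding
request at a time (stop-and-wait), goodput can't exceed block size
divided by round-trip time, no matter how fast the link is.  The block
size and RTT below are hypothetical illustrations, not measurements:

```python
# Latency cap on a stop-and-wait transfer: if the sftp client issues
# one read at a time, each block costs one network round trip, so
# goodput <= block_size / RTT regardless of link speed.
# Assumed (hypothetical) numbers follow.

BLOCK_SIZE = 32 * 1024        # assumed 32 KiB request size
RTT = 0.005                   # assumed 5 ms round-trip time

bytes_per_sec = BLOCK_SIZE / RTT
megabits_per_sec = bytes_per_sec * 8 / 1e6
print(f"latency-limited goodput ~ {megabits_per_sec:.1f} Mbit/s")

# Pipelining N outstanding requests raises the cap to
# N * BLOCK_SIZE / RTT, until the link or CPU saturates instead.
for outstanding in (1, 4, 16):
    mbps = outstanding * BLOCK_SIZE / RTT * 8 / 1e6
    print(f"{outstanding:2d} outstanding -> {mbps:6.1f} Mbit/s cap")
```

Note that with these particular (assumed) numbers, 32 KiB blocks over
a 5 ms round trip land at roughly 52 Mbit/s - right in the range being
complained about - which is why an unpipelined sftp implementation is
a prime suspect.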

All of these will affect throughput.  Without measuring, I can't be
sure whether any of them are large enough to matter, but I certainly
find it plausible that they are - and that they add up to enough to
pull it down to 50 megabits end-to-end goodput.

/~\ The ASCII				der Mouse
\ / Ribbon Campaign
 X  Against HTML	       mouse at rodents.montreal.qc.ca
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


