Fix limited Internet speeds caused by inappropriate socket buffering

Don't set SO_RCVBUF/SO_SNDBUF to fixed values; doing so disables Windows'
TCP buffer autotuning.

Patch modeled after a patch suggestion from Daniel Havey <dhavey@gmail.com>
in https://cygwin.com/ml/cygwin-patches/2017-q1/msg00010.html:

At Windows we love what you are doing with Cygwin.  However, we have
been getting reports from our hardware vendors that iperf is slow on
Windows.  Iperf is of course compiled against cygwin1.dll, and we
believe we have traced the problem to the function fdsock in net.cc,
where SO_RCVBUF and SO_SNDBUF are being set manually.  The comments
indicate that the idea was to increase the buffer size, but this code
must have been written long ago, because Windows has used autotuning
for a very long time now.  Please do not set SO_RCVBUF or SO_SNDBUF
manually, as doing so limits Internet speed.
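
To make the issue concrete, here is a minimal standalone sketch (not
the Cygwin code itself; the socket setup is made up for illustration)
of the pattern in question: once a fixed value is pushed into
SO_RCVBUF/SO_SNDBUF, Windows stops autotuning that socket's buffers.

/* Illustrative sketch only (not the actual fdsock() code); build with
   -lws2_32.  Pinning SO_RCVBUF/SO_SNDBUF to a fixed value is the
   pattern the patch removes, because it turns off autotuning. */
#include <winsock2.h>
#include <stdio.h>

static void
show_rcvbuf (SOCKET s, const char *when)
{
  int rcv = 0, len = sizeof rcv;
  if (!getsockopt (s, SOL_SOCKET, SO_RCVBUF, (char *) &rcv, &len))
    printf ("SO_RCVBUF %s: %d bytes\n", when, rcv);
}

int
main ()
{
  WSADATA wsa;
  WSAStartup (MAKEWORD (2, 2), &wsa);
  SOCKET s = socket (AF_INET, SOCK_STREAM, 0);

  show_rcvbuf (s, "before");           /* default; autotuning in effect */

  int fixed = 212992;                  /* the old hard-coded Cygwin value */
  setsockopt (s, SOL_SOCKET, SO_RCVBUF, (char *) &fixed, sizeof fixed);
  setsockopt (s, SOL_SOCKET, SO_SNDBUF, (char *) &fixed, sizeof fixed);

  show_rcvbuf (s, "after pinning");    /* fixed; autotuning disabled,
                                          capping throughput at roughly
                                          bufsize * 8 / RTT bits/s */
  closesocket (s);
  WSACleanup ();
  return 0;
}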

I am providing a patch, a simple test case (STC), and my cygcheck -svr
output.  I hope we can fix this.  Please let me know if I can help further.

Simple Test Case:
I have a script that pings the server 4 times and then runs iperf
against it for 10 seconds; the server here is debit.k-net.fr.

With patch:
$ bash buffer_test.sh 178.250.209.22
usage: bash buffer_test.sh <iperf server name>

Pinging 178.250.209.22 with 32 bytes of data:
Reply from 178.250.209.22: bytes=32 time=167ms TTL=34
Reply from 178.250.209.22: bytes=32 time=173ms TTL=34
Reply from 178.250.209.22: bytes=32 time=173ms TTL=34
Reply from 178.250.209.22: bytes=32 time=169ms TTL=34

Ping statistics for 178.250.209.22:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 167ms, Maximum = 173ms, Average = 170ms
------------------------------------------------------------
Client connecting to 178.250.209.22, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 10.137.196.108 port 58512 connected with 178.250.209.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   768 KBytes  6.29 Mbits/sec
[  3]  1.0- 2.0 sec  9.25 MBytes  77.6 Mbits/sec
[  3]  2.0- 3.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  3.0- 4.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  4.0- 5.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  5.0- 6.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  6.0- 7.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  7.0- 8.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  8.0- 9.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  9.0-10.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  0.0-10.0 sec   154 MBytes   129 Mbits/sec

Without patch:
dahavey@DMH-DESKTOP ~
$ bash buffer_test.sh 178.250.209.22

Pinging 178.250.209.22 with 32 bytes of data:
Reply from 178.250.209.22: bytes=32 time=168ms TTL=34
Reply from 178.250.209.22: bytes=32 time=167ms TTL=34
Reply from 178.250.209.22: bytes=32 time=170ms TTL=34
Reply from 178.250.209.22: bytes=32 time=169ms TTL=34

Ping statistics for 178.250.209.22:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 167ms, Maximum = 170ms, Average = 168ms
------------------------------------------------------------
Client connecting to 178.250.209.22, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.137.196.108 port 58443 connected with 178.250.209.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   512 KBytes  4.19 Mbits/sec
[  3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  3.0- 4.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  4.0- 5.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  6.0- 7.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  9.0-10.0 sec  1.50 MBytes  12.6 Mbits/sec
[  3]  0.0-10.1 sec  14.1 MBytes  11.7 Mbits/sec

The output shows that the RTT from my machine to the iperf server is
similar in both cases (about 170ms), but with the patch the throughput
averages 129 Mbps, while without the patch it only averages 11.7 Mbps.
If we calculate the maximum throughput using Bandwidth = Queue/RTT, we
get (212992 * 8)/0.170 = 10.0231 Mbps.  This is just about what iperf
shows without the patch; since the buffer size is set to 212992 bytes,
I believe the fixed buffer size is limiting the throughput.  With the
patch there is no buffer limitation (autotuning is active) and the
full potential bandwidth of the link can be reached.
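
For illustration, the same arithmetic as a small standalone program
(the constants are taken from the runs above): roughly 3.2 MB of
window would be needed to sustain 151 Mbps at a 170 ms RTT, far more
than the 212992-byte (208 KB) fixed buffer.

#include <stdio.h>

int
main ()
{
  const double rtt = 0.170;            /* seconds, from the pings above */
  const double fixed_buf = 212992.0;   /* bytes, the old hard-coded size */
  const double target_bps = 151e6;     /* per-second rate seen with the patch */

  /* Throughput ceiling imposed by a fixed buffer: buffer * 8 / RTT. */
  printf ("ceiling with fixed buffer: %.2f Mbit/s\n",
          fixed_buf * 8 / rtt / 1e6);  /* ~10.02 Mbit/s */

  /* Window needed to sustain the autotuned rate: rate * RTT / 8. */
  printf ("window needed for 151 Mbit/s: %.2f MB\n",
          target_bps * rtt / 8 / 1e6); /* ~3.21 MB, far above 208 KB */
  return 0;
}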

If you want to duplicate the STC you will have to find an iperf
server with a large enough RTT from your machine (I found an extreme
case) and try a few times.  I get varying results depending on
Internet traffic, but without the patch the throughput never exceeds
the limit caused by the fixed buffering.

Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Corinna Vinschen  2017-02-03 21:46:01 +01:00
commit 609d2b22af, parent 06e7f0074c
1 changed file with 11 additions and 3 deletions

@@ -517,8 +517,6 @@ cygwin_getprotobynumber (int number)
 bool
 fdsock (cygheap_fdmanip& fd, const device *dev, SOCKET soc)
 {
-  int size;
-
   fd = build_fh_dev (*dev);
   if (!fd.isopen ())
     return false;
@@ -607,6 +605,13 @@ fdsock (cygheap_fdmanip& fd, const device *dev, SOCKET soc)
      Mbits/sec with a 65535 send buffer.  We want this to be a multiple
      of 1k, but since 64k breaks WSADuplicateSocket we use 63Kb.
 
+     NOTE 4. Tests with iperf uncover a problem in setting the SO_RCVBUF
+     and SO_SNDBUF sizes.  Windows is using autotuning since Windows Vista.
+     Manually setting SO_RCVBUF/SO_SNDBUF disables autotuning and leads to
+     inferior send/recv performance in scenarios with larger RTTs, as is
+     basically standard when accessing the internet.  For a discussion,
+     see https://cygwin.com/ml/cygwin-patches/2017-q1/msg00010.html.
+
      (*) Maximum normal TCP window size.  Coincidence?  */
 #ifdef __x86_64__
   ((fhandler_socket *) fd)->rmem () = 212992;
@@ -615,6 +620,9 @@ fdsock (cygheap_fdmanip& fd, const device *dev, SOCKET soc)
   ((fhandler_socket *) fd)->rmem () = 64512;
   ((fhandler_socket *) fd)->wmem () = 64512;
 #endif
+
+#if 0 /* See NOTE 4 above. */
+  int size;
   if (::setsockopt (soc, SOL_SOCKET, SO_RCVBUF,
                     (char *) &((fhandler_socket *) fd)->rmem (), sizeof (int)))
     {
@@ -633,7 +641,7 @@ fdsock (cygheap_fdmanip& fd, const device *dev, SOCKET soc)
           (size = sizeof (int), &size)))
         system_printf ("getsockopt(SO_SNDBUF) failed, %u", WSAGetLastError ());
     }
-
+#endif
   /* A unique ID is necessary to recognize fhandler entries which are
      duplicated by dup(2) or fork(2).  This is used in BSD flock calls
      to identify the descriptor. */
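
The change only stops Cygwin from setting the buffer sizes implicitly;
an application that deliberately wants a fixed buffer can still request
one itself, at the cost of disabling autotuning for that socket.  A
minimal sketch (illustrative only, plain POSIX calls as used from a
Cygwin program):

#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <unistd.h>

int
main ()
{
  int s = socket (AF_INET, SOCK_STREAM, 0);
  int size = 212992;            /* deliberate, application-chosen size */

  /* An explicit request overrides the (autotuned) default for this socket. */
  if (setsockopt (s, SOL_SOCKET, SO_RCVBUF, &size, sizeof size))
    perror ("setsockopt(SO_RCVBUF)");
  if (setsockopt (s, SOL_SOCKET, SO_SNDBUF, &size, sizeof size))
    perror ("setsockopt(SO_SNDBUF)");

  close (s);
  return 0;
}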