Increasing the maximum throughput of TCP/IP connections in Linux


I'm testing a Node.js app that serves media files stored in memory. The media files are approximately 2 MB-5 MB each in size. I'm trying to figure out the best way to max out the available Ethernet channel (1 Gbps or 10 Gbps).
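
For reference, a minimal sketch of the kind of server I mean (the file paths and port here are placeholders, not my real setup): everything is read into Buffers at startup and served straight from memory.

    const http = require('http');
    const fs = require('fs');

    // Placeholder file set: the real app preloads its own media files.
    // Each file is read into a Buffer once, at startup.
    const files = {
      '/clip1.mp4': fs.readFileSync('media/clip1.mp4'),
      '/clip2.mp4': fs.readFileSync('media/clip2.mp4'),
    };

    http.createServer((req, res) => {
      const buf = files[req.url];
      if (!buf) {
        res.writeHead(404);
        return res.end();
      }
      res.writeHead(200, {
        'Content-Type': 'video/mp4',
        'Content-Length': buf.length,
      });
      res.end(buf); // hand the whole in-memory buffer to the socket at once
    }).listen(8080);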

I'm testing in a VM (VirtualBox) running Ubuntu 16.04.1 LTS. For testing I'm using my own Node.js script that makes multiple outgoing requests to the server and logs the average bitrate to the console. The test script runs for 1 minute, and there is a configurable parameter N that indicates how many simultaneous downloads it can have at once.
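
A simplified sketch of that test script (the server address is a placeholder; N and the duration are the configurable parts):

    const http = require('http');

    const N = 300;                                     // simultaneous downloads
    const URL = 'http://192.168.56.10:8080/clip1.mp4'; // placeholder address
    const DURATION_MS = 60 * 1000;                     // run for 1 minute

    // Don't let the default agent cap concurrency below N.
    http.globalAgent.maxSockets = Infinity;

    let bytes = 0;
    const start = Date.now();

    function download() {
      http.get(URL, (res) => {
        res.on('data', (chunk) => { bytes += chunk.length; });
        res.on('end', () => {
          // keep the concurrency level at N until the minute is up
          if (Date.now() - start < DURATION_MS) download();
        });
      }).on('error', () => download());
    }

    for (let i = 0; i < N; i++) download();

    setInterval(() => {
      const secs = (Date.now() - start) / 1000;
      console.log('average: ' + (bytes / (1024 * 1024) / secs).toFixed(1) + ' MB/s');
    }, 5000);

    setTimeout(() => process.exit(0), DURATION_MS + 1000); // stop after the run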

What I noticed is that if I increase the number of simultaneous downloads, the average throughput goes down considerably. For example:

  • If I run the test with N = 30 (30 simultaneous downloads), I get 125 MB/s overall; in this case each of these requests is served at 3-5 MB/s.
  • Now, if I run with N = 300, the overall bitrate is 90 MB/s, or about 30% lower.
  • If I run with N = 600, the overall bitrate is 80 MB/s.

Any idea why it doesn't seem to scale to a higher number of simultaneous downloads? A higher number of simultaneous connections doesn't have any CPU impact, and if I run the same test script against nginx serving the same files from SSD, I get identical numbers. The CPU load on the Ubuntu Node.js server doesn't go higher than 20%. If I run the benchmark locally on the Ubuntu machine, I get 1800 MB/s of throughput.

I understand that benchmarking a virtual Ethernet card is fairly pointless, yet I can max out the download rate with 30 simultaneous connections, while with 300 simultaneous connections the overall throughput goes down by 30%, whereas I'd expect it to go down by no more than 5%.

What can be done to increase the overall bitrate with many simultaneous downloads?
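
For example, would it help to write each response in backpressure-aware chunks instead of a single res.end(buf)? A rough sketch of what I mean (the 64 KB chunk size is an arbitrary assumption), in case it is relevant to an answer:

    // Sketch: stream an in-memory Buffer in chunks, pausing when the socket
    // signals backpressure. CHUNK is an arbitrary choice; res and buf come
    // from the request handler shown above.
    const CHUNK = 64 * 1024;

    function sendBuffer(res, buf) {
      let offset = 0;
      function writeMore() {
        while (offset < buf.length) {
          const end = Math.min(offset + CHUNK, buf.length);
          const ok = res.write(buf.slice(offset, end));
          offset = end;
          if (!ok) {
            // socket buffer is full; resume once it drains
            res.once('drain', writeMore);
            return;
          }
        }
        res.end();
      }
      writeMore();
    }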

P.S. There is an interesting post about the maximum number of TCP/IP connections.

