[Solved-4 Solutions] How to Increase the maximum number of TCP/IP connections in Linux - Linux Tutorial



Problem:

How to increase the maximum number of TCP/IP connections in Linux?

Solution 1:

The maximum number of connections is impacted by certain limits on both the client and server sides.

Client side: Increase the ephemeral port range and decrease tcp_fin_timeout.

To find out the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout
  • The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. fin_timeout defines the minimum time these sockets will stay in the TIME_WAIT state. Usual system defaults are:
    • net.ipv4.ip_local_port_range = 32768 61000
    • net.ipv4.tcp_fin_timeout = 60
  • Basically, this means the system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second.
  • You can increase availability by decreasing tcp_fin_timeout. If we do both (widen the port range and lower the timeout), we should see over 1500 outbound connections per second more readily.

To change the values:

sysctl net.ipv4.ip_local_port_range="15000 61000"
sysctl net.ipv4.tcp_fin_timeout=30
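
These sysctl commands only change the running kernel. To persist the values across reboots, one common approach (a sketch; the file name 90-tcp-tuning.conf is just an example) is to drop them into /etc/sysctl.d/ and reload:

cat <<'EOF' > /etc/sysctl.d/90-tcp-tuning.conf
net.ipv4.ip_local_port_range = 15000 61000
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl --system    # re-read all sysctl configuration files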

The above should not be interpreted as the only factors affecting the system's capability to make outbound connections per second; rather, these factors affect the system's ability to handle concurrent connections in a sustainable manner over long periods of activity.

Default sysctl values on a typical Linux box for tcp_tw_recycle and tcp_tw_reuse are:

net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=0

These defaults don't allow a connection to be re-used from a socket in the wait state, and force sockets to last the complete TIME_WAIT cycle. To change that:

sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1 
  • This allows fast cycling of sockets in the TIME_WAIT state and re-using them.
  • Before making this change, make sure it does not conflict with the protocols used by the applications that need these sockets. Note that tcp_tw_recycle has been removed from Linux kernels 4.12 and later, so only tcp_tw_reuse applies there.
  • On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket.
  • If you are sure of your server application's capability, bump it up from the default of 128 to something like 1024.
  • Now you can take advantage of this increase by setting the listen backlog variable in your application's listen call to an equal or higher integer.
sysctl net.core.somaxconn=1024
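
As a rough way to verify the queue sizing (a sketch; port 8080 and the ss filter are only examples), ss reports the configured backlog of a listening socket in the Send-Q column:

sysctl net.core.somaxconn            # current kernel cap on the accept queue
ss -ltn 'sport = :8080'              # Send-Q shows the effective backlog for that listener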

The txqueuelen parameter of the Ethernet cards also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

ifconfig eth0 txqueuelen 5000
echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local

Similarly bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.

sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048

Now remember to start both your client-side and server-side applications after increasing the FD ulimits in the shell.
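
A minimal sketch of raising the file-descriptor limit, assuming the applications are started from this shell and that pam_limits is enabled for the persistent variant (the user name and values are placeholders):

ulimit -n 65535          # raise the limit for the current shell, then start the application from it

# Persistent per-user limits usually go in /etc/security/limits.conf:
#   appuser  soft  nofile  65535
#   appuser  hard  nofile  65535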

Solution 2:

There are a couple of variables that set the max number of connections. Most likely, you are running out of file descriptors first; check with ulimit -n. After that, there are settings in /proc, but those default to the tens of thousands. A single TCP connection ought to be able to use all of the bandwidth between the two parties; if it doesn't:

  • Check that your TCP window setting is large enough. Linux defaults are good for everything except a really fast Internet link (hundreds of Mbps) or fast satellite links; work out your bandwidth*delay product to be sure.
  • Check for packet loss using ping with large packets (ping -s 1472 ...).
  • Check for rate limiting. On Linux, this is configured with tc.
  • Confirm that the bandwidth you think exists actually exists, using e.g. iperf.
  • Confirm that the protocol is sane.
  • Check how many connections you are actually using (try netstat or lsof; see the sketch after this list). If that number is substantial, you might:
  • Have a lot of bandwidth, e.g. 100 Mbps+. In this case, you may actually want to raise ulimit -n. Still, ~1000 connections (the default on my system) is quite a few.
  • Have network problems that are slowing down your connections (e.g., packet loss).
  • Have something else slowing you down, e.g., I/O bandwidth, especially if you are seeking. Have you checked iostat -x?
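
A few one-liners for the checks above (a sketch; the PID 1234 is a placeholder, and each ss count includes one header line):

ss -tan state established | wc -l    # roughly the number of established TCP connections
ss -tan state time-wait | wc -l      # sockets lingering in TIME_WAIT
ls /proc/1234/fd | wc -l             # open file descriptors of one process
ulimit -n                            # per-process file-descriptor limit in this shell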

Additionally, if you are using a consumer-grade NAT router (Linksys, Netgear, DLink, etc.), beware that you may exceed its abilities with thousands of connections.

Solution 3:

To determine the OS connection-tracking limit, cat nf_conntrack_max. For example:

cat /proc/sys/net/netfilter/nf_conntrack_max

Use the following script to count the number of TCP connections to a given range of TCP ports (1-65535 by default).

#!/bin/bash
# Count TCP connections whose local port falls within a given range (default 1-65535).
OS=$(uname)

# Pick the awk binary that works on this platform
case "$OS" in
    'SunOS')
            AWK=/usr/bin/nawk
            ;;
    'Linux')
            AWK=/bin/awk
            ;;
    'AIX')
            AWK=/usr/bin/awk
            ;;
esac

# Count sockets in TIME_WAIT or ESTABLISHED, ignoring loopback traffic
netstat -an | $AWK -v start=1 -v end=65535 ' $NF ~ /TIME_WAIT|ESTABLISHED/ && $4 !~ /127\.0\.0\.1/ {
    if ($1 ~ /\./)
            {sip=$1}    # Solaris-style output: the address is in the first field
    else {sip=$4}       # Linux-style output: the local address is in the fourth field

    if ( sip ~ /:/ )
            {d=2}       # colon-separated address
    else {d=5}          # dot-separated address

    split( sip, a, /:|\./ )

    if ( a[d] >= start && a[d] <= end ) {
            ++connections;
            }
    }
    END {print connections}'
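
Related to the nf_conntrack_max check above, you can compare the number of currently tracked connections with the limit and, if needed, raise it (262144 is just an example value):

cat /proc/sys/net/netfilter/nf_conntrack_count   # connections currently tracked
cat /proc/sys/net/netfilter/nf_conntrack_max     # tracking limit
sysctl net.netfilter.nf_conntrack_max=262144     # raise the limit if the count gets close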

Solution 4:

From the server side:

  • Check that the load balancer (if you have one) works correctly.
  • Turn slow TCP timeouts into fast 503 responses; if the load balancer works correctly, it should pick a working resource to serve instead of hanging on a slow one.

If you are running a Node server, you can use toobusy from npm. Implementation:

var toobusy = require('toobusy');
// Middleware that sheds load with a 503 when the event loop is lagging
app.use(function(req, res, next) {
  if (toobusy()) res.send(503, "I'm busy right now, sorry.");
  else next();
});

From the client side:

  • Group calls in batches to reduce the traffic and the total number of requests between client and server.
  • Handle unnecessary duplicate requests by building a mid-layer cache.
