Category:   OS (Other)  >   TCP/IP Stack Implementation
Vendors:   FreeBSD, Microsoft, NetBSD, OpenBSD, [Multiple Authors/Vendors]
Microsoft Windows 2000, Linux 2.4, NetBSD, FreeBSD, and OpenBSD May Let Remote Users Affect TCP Performance
SecurityTracker Alert ID:  1001993
SecurityTracker URL:
CVE Reference:   GENERIC-MAP-NOMATCH
Date:  Jul 13 2001
Impact:   Denial of service via network

Description:   A vulnerability was reported in several operating systems that allows remote users to cause the host to transmit a significantly higher number of packets and consume increased CPU resources to do so, creating a potential denial of service condition.

In the TCP protocol, the initiating host specifies a Maximum Segment Size (MSS) value to request that the remote host send an amount of TCP data no larger than the MSS value in any single IP packet. The purpose is presumably to indicate the largest TCP segment that, if fragmented during transmission, can be reassembled by the initiating host.

The reported minimums are listed below for several operating systems:

NetBSD = 32,
FreeBSD = 64,
OpenBSD = 64,
Linux 2.4 = 88,
Win2000 = 88.

The lower the value, the more overhead is required by the remote host, both in number of packets transmitted (with large headers and a small payload) and in CPU resources consumed to participate in the TCP protocol.
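The packet and byte overhead described above follows directly from the segment count. A minimal sketch (assuming 40 bytes of headers per segment, i.e. a 20-byte IP header plus a 20-byte TCP header with no options):

```python
def tcp_overhead(data_len, mss, header_len=40):
    """Packets and total bytes on the wire to send data_len bytes of
    TCP payload when each segment carries at most mss bytes of data
    (header_len = 20-byte IP header + 20-byte TCP header, no options)."""
    packets = -(-data_len // mss)           # ceiling division
    total = data_len + packets * header_len
    return packets, total

# With a full-sized MSS one segment suffices; with a tiny MSS the
# header overhead dominates the bytes on the wire.
print(tcp_overhead(1436, 1436))   # (1, 1476)
print(tcp_overhead(1436, 1))      # (1436, 58876)
```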

Impact:   A remote user can set a low MSS value and then request a large amount of data. This will cause the target host to transmit many more packets (and far more TCP/IP header overhead) than it normally would, and may cause it to consume additional CPU resources in doing so.
Solution:   No solution was available at the time of this entry.
Cause:   Configuration error
Underlying OS:  Linux (Any), UNIX (FreeBSD), UNIX (NetBSD), UNIX (OpenBSD), Windows (2000)

Message History:   None.

 Source Message Contents

Subject:  Small TCP packets == very large overhead == DoS?

On a LAN far far away, a rogue packet was heading towards a
server, ready to start up a new storm ...

[I was going to start this by saying "years ago" but well...
that might be showing my age ;)]

Anyway, down to business.

If any of you have tested what happens to the ability of a box to
perform well when it has a small MTU, you will know that setting the
MTU to (say) 56 on a diskless thing is a VERY VERY bad idea when NFS
read/write packets are generally 8k in size.  Do not try it on an NFS
thing unless you plan to reboot it, ok?  Last time I did this was
when I worked out you could fragment packets inside the TCP header,
and that lesson was enough for me ;-)

Following on from this, it occurs to me that the problem with the
above can possibly be reproduced with TCP.  How ?  That thing called
"maximum segment size".  The problem?  Well, the first is that there
does not appear to be a minimum.  The second is that it is negotiated
by the caller, not the callee.  Did I hear someone say "oh dear"?

What's this mean?  Well, if I connect to and set
my MSS to 143 (say), they need to send me 11 packets for every one
they would normally send me (with an MSS of 1436).  Total output
for them is 1876 bytes - a ~27% increase.  However, that's not the
real problem.  My experience is that hosts, especially PCs, have
a lot of trouble handling *LOTS* of interrupts.  To send 2k out
via the network, it's no longer 2 packets but 15+ - a significant
increase in the workload.

A quick table (based on 20-byte IP & 20-byte TCP headers):
datalen    mss     packets     total bytes    %increase
1436       1436       1           1476             0%
1436       1024       2           1516             3%
1436        768       2           1516             3%
1436        512       3           1556             5%
1436        256       6           1676            13%
1436        128      12           1916            30%
1436         64      23           2356            60%
1436         32      45           3236           119%
1436         28      52           3516           138%  (MTU = 68)
1436         16      90           5036           241%
1436          8     180           8636           485%
1436          1    1436          58876          3889%
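The table can be regenerated with a few lines of Python (a sketch assuming exactly 40 bytes of headers per segment and ceiling division for the packet count):

```python
# Regenerate the overhead table: ceil(datalen/mss) segments, each
# carrying 40 bytes of headers (20-byte IP + 20-byte TCP, no options).
DATALEN, HDR = 1436, 40
BASE = DATALEN + HDR            # bytes for a single full-sized segment

for mss in (1436, 1024, 768, 512, 256, 128, 64, 32, 28, 16, 8, 1):
    pkts = -(-DATALEN // mss)   # ceiling division
    total = DATALEN + pkts * HDR
    pct = 100 * (total - BASE) / BASE
    print(f"{DATALEN:7d} {mss:6d} {pkts:8d} {total:12d} {pct:10.0f}%")
```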

For Solaris, you can enforce a more sane minimum MSS than the
install default (1) with ndd:
ndd -set /dev/tcp tcp_mss_min 128

HP-UX 11.* is in the same basket as Solaris.

*BSD have varying minimums well above 1 - NetBSD at 32, FreeBSD at 64.
(OpenBSD's comment on this says 32 but the code says 64 - mmm, qwality!)

Linux 2.4 is 88

I can't see anything in the registry or MSDN which says what it
is for Windows.  By experimentation, Win2000 appears to be 88,
NT 4 appears to be 1.

Do I need to mention any other OS ? :*)

Nothing else besides Solaris seems to have anything close to a
reasonable manner in which to tune the minimum value.

What's most surprising is that there does not appear to be a documented
minimum, just as there is no "minimum MTU" size for IP.  If there is,
please correct me.

About the only bonus to this is that there does not appear to be an
easy way to affect the MSS sent in the initial SYN packet.

Oh, so how's this a potential denial of service attack?  Generally,
network efficiency comes through sending lots of large packets...but
don't tell ATM folks that, of course :-)  Does it work?  *shrug* It
is not easy to test...the only testing I could do (with NetBSD) was
to use the TCP_MAXSEG setsockopt BUT this only affects the sending
MSS (now what use is that? :-), but in testing, changing it from
the default 1460 to 1 caused the number of packets to go from 9 to
2260 to write 1436 bytes of data to discard.  To send 100 * 1436
bytes from the NetBSD box to Solaris 8 took 60 seconds (MSS of 1)
vs. ~1 second with an MSS of 1460.  Of even more significance, one
connection like this made almost no difference after the first run,
but running a second saw kernel CPU jump to 30% on an SS20/712 (I
suspect some serious TCP tuning is happening dynamically).  The
sending host was likewise afflicted with a significant CPU usage
penalty if more than one was running.  There were some very surprising things
happening too - with just one session active, ~170-200pps were
seen with netstat on Solaris, but with the second, it was between
1750 and 1850pps.  Can you say "ACK storm" ?  Oh, and for fun you
can enable TCP timestamping just to make those headers bigger and
run the system a bit harder whilst processing packets!
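The TCP_MAXSEG experiment above can be reproduced from Python over loopback. This is only a sketch: semantics vary by OS (as noted, on NetBSD the option only caps the *sending* MSS), and the read-back value below assumes Linux behaviour, where setting the option before connect() clamps the effective MSS:

```python
import socket

# Listener so the client has something local to connect to.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# Set TCP_MAXSEG before connect(); the handshake then uses the
# clamped value instead of the large loopback default.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 512)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

mss = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print("effective MSS:", mss)    # expected to be at most 512 on Linux

conn.close(); cli.close(); srv.close()
```

Note that modern Linux rejects TCP_MAXSEG values below its own floor with EINVAL, which is exactly the kind of minimum this post argues for.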

Oh, I haven't investigated the impact of ICMP PMTU discovery, but
from my reading of at least the BSD source code, the MTU for the
route will be ignored if it is less than the default MSS when
sending out the TCP SYN with the MSS option.  That aside, it will
still impact current connections and would appear to be a way to
force the _current_ MSS below that set at connect time.  On BSD,
it will not accept PMTU updates if the MTU is less than 296; on
Solaris 8 and Linux 2.4 it just needs to be above 68 (hmmm, that
allows you to get an effective MSS of less than 88 :).  mmm, source code.

So, what are the defences?  Quite clearly the host operating system
needs to set a much more sane minimum MSS than 1.  Given there is
no minimum MTU for IP - well, maybe "68" - it's hard to derive
what it should be.  Anything below 40 should just be banned (that's
the point at which you're transmitting 50% data, 50% headers).
Most of the defaults, above, are chosen because they fit in well
with the OS's internal network buffering (some use a default MSS of
512 rather than 536 for similar reasons).  But above that, what
do you choose?  80 for a 25/75 split, or something higher still?
Whatever the choice and however it is calculated, it is not enough
to just enforce it when the MSS option is received.  It also needs
to be enforced when the MTU parameter is checked in ICMP "need
frag" messages.
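That enforcement can be sketched as a pair of hypothetical hook functions; the names and the 128-byte floor are assumptions here, echoing the Solaris tcp_mss_min example above:

```python
MSS_FLOOR = 128          # assumed floor, like Solaris's tcp_mss_min tunable
MIN_IPV4_HEADERS = 40    # 20-byte IP header + 20-byte TCP header

def clamp_mss_option(peer_mss: int) -> int:
    """Apply the floor when the MSS option arrives in a SYN."""
    return max(peer_mss, MSS_FLOOR)

def clamp_pmtu_update(icmp_mtu: int) -> int:
    """Apply the same floor when an ICMP "need frag" message tries to
    lower the effective MSS of an established connection."""
    return max(icmp_mtu - MIN_IPV4_HEADERS, MSS_FLOOR)
```

The point is that both code paths share one floor: clamping only the SYN option still leaves the PMTU path as a way to force the current MSS below it.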


p.s. I guess if I was one of those corporate types who get paid to
do security stuff I'd write this up as a white paper but like this
is the 'net man!

p.p.s.  So far as I know, nobody has covered this topic, from this
angle, before or if they have, I'm ultralame for not being out on
a saturday night when I could have been.

