MTU
- Subject: MTU
- From: Grzegorz at Janoszka.pl (Grzegorz Janoszka)
- Date: Fri, 22 Jul 2016 22:10:05 +0200
- In-reply-to: <[email protected]>
- References: <CAPkb-7CRMNUK6Av-BBzUgMLh2YW0XfxDs4x4P4rJg6ookVNP=A@mail.gmail.com> <CAPkb-7B3geM2iydgmidq+gFh4oJvzbCj_eznQSvLRhi3LGgvCA@mail.gmail.com> <CAPkb-7C78wAAYrdfgZB65W=tJRumUb3qbVGVE1u28V7DTP82Ww@mail.gmail.com> <CAPkb-7A1RsiN=tv6YuggARbUx2u8EDOUyHc__BJ6iB=UFWxs4Q@mail.gmail.com> <CAPkb-7AQTQnuv9baaO=yEG0e+s0O1ppopafsCNoP8bSuCMysxg@mail.gmail.com> <CAPkb-7AXhoCrVnWv+pzbJGPW86mvzGmGuitqscqQ7AJFrriDRg@mail.gmail.com> <CAPkb-7BaGqybRDLRd01E65bmzoc-1ebnuYsEtYsSU8VLu5dp3A@mail.gmail.com> <CAPkb-7BAjMO2sdY=87J6S6yd8oecyr1gyJa_jqaQUy7=XU0oAw@mail.gmail.com> <CAPkb-7A5TgL+dL8D48sccHSCVcHWEah9sv3hZT-7j=kN4aKnYQ@mail.gmail.com> <CAPkb-7DtxSt=9vgBviZBSaxN57q=wkDJw+6m=RBmf01s9GtVXw@mail.gmail.com> <CAPkb-7AOoQ4=7cCuqwN96u-q5cBsr6q1aTnQ8sxW83LNqcjb2g@mail.gmail.com> <CAP-guGUcE_wrbZdZX7rO2nN1GGREuqVrWfJ37CqY7MRLBZTTSw@mail.gmail.com> <[email protected]> <[email protected]>
On 2016-07-22 20:20, Phil Rosenthal wrote:
>> On Jul 22, 2016, at 1:37 PM, Grzegorz Janoszka <Grzegorz at Janoszka.pl> wrote:
>> What I noticed a few years ago was that BGP convergence time was faster with a higher MTU.
>> A full BGP table load took half the time at MTU 9192 compared to 1500.
>> Of course, BGP has to be allowed to use the higher MTU.
>>
>> Has anyone else observed something similar?
>
> I have read about others experiencing this, and did some testing a few months back. My experience was that for low-latency links there was a measurable but not huge difference. For high-latency links, with Juniper anyway, the difference was negligible, because the TCP window size is hard-coded at something small (16384?), so that ends up being the limit more than the TCP slow-start issues that a larger MTU helps with.
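
To put that window-size point in rough numbers: with a fixed window, TCP throughput is capped at roughly window / RTT no matter how large the MSS is. A minimal Python sketch, assuming the 16384-byte window Phil mentions and some example latencies:

# With a fixed TCP window, throughput is bounded by window / RTT,
# independent of MSS. Window size taken from Phil's figure above;
# RTT values are just illustrative.
WINDOW = 16384  # bytes

for rtt_ms in (1, 10, 100):
    rtt = rtt_ms / 1000.0
    ceiling = WINDOW / rtt          # bytes per second
    print(f"RTT {rtt_ms:>3} ms -> max ~{ceiling * 8 / 1e6:6.2f} Mbit/s")

At 100 ms that works out to roughly 1.3 Mbit/s, so on long paths the window, not the MSS, is what limits the transfer.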
I tested a Cisco CRS-1 (or maybe already upgraded to a CRS-3) against a
Juniper MX480 or MX960 over a link with about 10 ms of latency. It was an
iBGP session carrying internal routes plus the full BGP table (both ways).
I think the bottleneck was the CPU on the CRS side, and maxing out the MSS
helped a lot. I recall later doing Juniper-to-Juniper tests, and indeed the
gain was not as big, but it was still visible.
The Juniper command 'show system connections' showed an MSS of around 9 kB;
I haven't checked the TCP window size.
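
For a rough sense of why a larger MSS eases the per-packet load on the control-plane CPU, here is an illustrative Python sketch. The table size and bytes-per-prefix figures are placeholders rather than measurements, and the MSS values only approximate what a 1500 versus ~9192 MTU would allow:

# Illustrative only: a larger MSS cuts the number of TCP segments (and
# hence per-packet work on the receiving CPU) for a full-table transfer.
# Prefix count and bytes-per-prefix below are rough placeholders.
PREFIXES = 600_000          # assumed full IPv4 table size, circa 2016
BYTES_PER_PREFIX = 50       # assumed average BGP UPDATE payload per prefix
total_bytes = PREFIXES * BYTES_PER_PREFIX

for mss in (1460, 9150):    # approximate MSS for 1500 vs ~9192 MTU
    segments = -(-total_bytes // mss)   # ceiling division
    print(f"MSS {mss:>5} -> ~{segments:,} segments for ~{total_bytes/1e6:.0f} MB")

Under those assumptions the jumbo MSS moves the same data in roughly a sixth of the packets, which is consistent with the CPU-bound box benefiting the most.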
--
Grzegorz Janoszka
- References:
- MTU
- From: baldur.norddahl at gmail.com (Baldur Norddahl)
- MTU
- From: bill at herrin.us (William Herrin)
- MTU
- From: Grzegorz at Janoszka.pl (Grzegorz Janoszka)
- MTU
- From: pr at isprime.com (Phil Rosenthal)