[j-nsp] SRX IPSEC performance
ashish verma
ashish.scit at gmail.com
Mon Sep 17 01:14:31 EDT 2012
Great, that helps a lot!
I will try testing the 11.4 code as well.
On Mon, Sep 17, 2012 at 8:52 AM, Mike Devlin <gossamer at meeksnet.ca> wrote:
> Unfortunately, no; forcing a tunnel's traffic onto a specific SPU just isn't possible.
>
> The hashing of tunnels to SPCs also changes depending on the number of
> SPCs and the slots they are located in. The code version also plays a
> factor: there was a more recent 11.4 release in May that did a round-robin
> distribution of the tunnels instead of the hashing. It was designed to
> take into account "what if an SPC failed?"
>
> I was running the SPC in combo mode, since it was a 3600, and my company
> didn't want to pay the additional fee to have it flipped into dedicated
> mode. On 5800s you just need 2 SPCs (3 SPUs for flow, 1 for control) to
> achieve dedicated mode; the 3600 needs a license.
>
> We were, however, configured in such a fashion that the combo-mode SPC had
> nothing landing on it.
>
> The reth0 interface was not configured with VLAN tagging, but had 2 IPs
> assigned to the reth0.0 interface in the same /28 IP space.
> In the IKE config, where you specify the remote peer address (pretty sure
> it's the gateway config, not logged into a box at the moment to verify),
> there is a hidden knob called local-address, which allowed us to specify
> which of the 2 IPs assigned to reth0.0 that association would use. A rough
> sketch of what we had is below.
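>
> Something along these lines (the addresses, gateway names, and peer IPs
> here are placeholders, and local-address was hidden on our 10.4 code, so
> verify it on yours):
>
>   set interfaces reth0 unit 0 family inet address 192.0.2.1/28
>   set interfaces reth0 unit 0 family inet address 192.0.2.2/28
>   set security ike gateway gw-peer-a address 198.51.100.10
>   set security ike gateway gw-peer-a external-interface reth0.0
>   set security ike gateway gw-peer-a local-address 192.0.2.1
>   set security ike gateway gw-peer-b address 198.51.100.20
>   set security ike gateway gw-peer-b external-interface reth0.0
>   set security ike gateway gw-peer-b local-address 192.0.2.2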
>
> I don't remember exactly what I used for an MTU, but I did do up all my
> math so that I could minimize any fragmentation at any stage, since it
> will obviously reduce performance and throughput. I think it was 1450 on
> the reth interface; then I subtracted the IPSEC headers and all the other
> headers, and set the st0 MTU to that value. The rough arithmetic is below.
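>
> To illustrate the math (the exact overhead depends on your proposal; the
> numbers below assume tunnel-mode ESP with AES-CBC and HMAC-SHA1-96, so
> treat them as a sketch, not gospel):
>
>   20 bytes   outer IP header
>    8 bytes   ESP header (SPI + sequence number)
>   16 bytes   AES-CBC IV
>   up to 17   padding + pad-length + next-header
>   12 bytes   HMAC-SHA1-96 ICV
>
> That is roughly 73 bytes worst case, so with a 1450-byte reth MTU you
> would end up with something like:
>
>   set interfaces st0 unit 0 family inet mtu 1376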
>
> The process was a painful learning experience, and was sadly done with
> production traffic. It took weeks of troubleshooting with A-TAC.
>
>
> On Sat, Sep 15, 2012 at 11:10 PM, ashish verma <ashish.scit at gmail.com> wrote:
>
>> Hi Mike, Devin,
>>
>> Thanks for your replies.
>>
>> Mike, do you have the CP running in dedicated mode? What packet size did
>> you use for testing?
>>
>> kmd is quite useful for identifying which SPC will be used for a
>> specific tunnel. Is there a way we can force an IPsec tunnel to terminate
>> on a particular SPC to load-balance better?
>>
>> Thanks again.
>>
>>
>> On Sun, Sep 16, 2012 at 12:49 PM, Mike Devlin <gossamer at meeksnet.ca> wrote:
>>
>>> So I have personally achieved 1.6G of throughput per SPC on an SRX3600
>>> on the 10.4R9.2 code line.
>>>
>>> I was required to push 3.5G from a single source, which required the use
>>> of a hidden command in what I remember being the gateway config.
>>>
>>> I also had to pop out to the shell and use "kmd -T ip1:ip2".
>>>
>>> The IPs required here are those of the IKE association. In the end, we
>>> needed 2 IPs on both sides to split the traffic across 3 SPCs, and it
>>> required substantial planning to get these numbers. An example invocation
>>> is below.
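>>>
>>> For reference, from the shell (start shell) it looks something like this
>>> (the two addresses are placeholders for the local and remote IKE
>>> endpoints of the association you want to trace):
>>>
>>>   % kmd -T 192.0.2.1:198.51.100.10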
>>>
>>> Going to 12 code, which I never got to test, I had an elaborate plan to
>>> attempt equal-cost load balancing across multiple IPsec VPNs on 5800s,
>>> but I was unfortunately laid off before I got to work out the finer
>>> details of it.
>>>
>>>
>>>
>>>
>>> On Fri, Sep 14, 2012 at 8:49 AM, Devin Kennedy <
>>> devinkennedy415 at hotmail.com> wrote:
>>>
>>>> Hi Ashish:
>>>>
>>>> I recently tested the SRX3400 for IPsec tunnel setup rates and was able
>>>> to set up 3600 tunnels using the IxVPN testing tool. I only sent traffic
>>>> across the tunnels for 1 minute, but the testing was successful. We were
>>>> running 4x SPC and 2x NPC in our configuration. We were using one GE WAN
>>>> interface as well. Our primary purpose was just to test the number of
>>>> IPsec tunnels that we needed for a future implementation.
>>>>
>>>>
>>>> Devin
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: juniper-nsp-bounces at puck.nether.net
>>>> [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of ashish verma
>>>> Sent: Thursday, September 13, 2012 5:35 PM
>>>> To: juniper-nsp
>>>> Subject: [j-nsp] SRX IPSEC performance
>>>>
>>>> Hi All,
>>>>
>>>> Has anyone here done IPsec performance tests on the SRX3k, and can you
>>>> share your results?
>>>> Juniper claims that with 1400-byte packets, 2 SPCs, and 1 NPC, VPN
>>>> throughput is 3 Gbps. How much have you achieved?
>>>>
>>>> Ashish
>>>> _______________________________________________
>>>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>>>
>>>
>>>
>>
>