OK, so, we've got a problem we've had to deal with more and more. We noticed 3-4 years ago (possibly sooner) that on connections with throughput above roughly 40-50 megabits, pushing data across an IPsec tunnel gets us no more than about 30-40 megabits. Our present setup is hub and spoke, with the hubs being FortiGate 1500Ds presently running 5.4.8 or 5.6.6, plus a 'brand new' 1000D running 5.6.6 (this hasn't entered production yet, so it makes a good test bed for me), spread around the country. Our spokes are nearly all FortiGate 40Cs or 60Ds (mostly the latter), though we're now rolling out 60Es, and I've got some limited testing from a 100E as a spoke router as well.

We've seen this pretty much regardless of latency, and nearly regardless of how far above the aforementioned threshold a circuit is (i.e. whether it's 50M, 500M or 1 gig, we see 30-40 megabits). iperf runs UDP at pretty much full speed; it's only TCP that's impacted, and even when I play with window sizes or thread counts we don't see much better speed (or dramatically worse).
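For reference, the testing has been along these lines (iperf3 syntax; 10.0.0.1 is just a placeholder for whatever host sits on the far side of the tunnel):

    # UDP fills the circuit without much trouble
    iperf3 -c 10.0.0.1 -u -b 200M -t 30

    # TCP tops out around 30-40 megabits no matter how it's tuned
    iperf3 -c 10.0.0.1 -t 30
    iperf3 -c 10.0.0.1 -t 30 -w 4M    # larger window
    iperf3 -c 10.0.0.1 -t 30 -P 8     # multiple parallel streams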
We've got a ticket open with Fortinet (again) and are working with them, but I'm wondering if anyone here has seen this problem and tackled it, or has circuits at these speeds with tunnels running faster.
We've tried with and without NPU/ASIC offloading, we've played with MTU/MSS settings, and we've run the gamut from our AES256/SHA256 DH group 19 standard all the way down to the dregs of DES with DH group 1, with no change. I've even tried playing with NAT-T. We see this on both IKEv1 and IKEv2. We've tried NPU VDOM links as well as standard VDOM links between our concentrator VDOMs and the tunnel-endpoint VDOMs on the hub routers.
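In case anyone wants to compare notes, these are roughly the knobs we've been toggling on the FortiGate side (the tunnel name, policy ID and MSS values here are just illustrative, not our production config):

    # per-tunnel NPU offload
    config vpn ipsec phase1-interface
        edit "spoke-tunnel"
            set npu-offload disable
        next
    end

    # per-policy ASIC offload and MSS clamping
    config firewall policy
        edit 10
            set auto-asic-offload disable
            set tcp-mss-sender 1350
            set tcp-mss-receiver 1350
        next
    end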
As a comparison, a customer replaced one of our FG models with their SonicWall, and tying back to our 1500D they see noticeably better (though still somewhat hampered) speeds: 55 megs on a 75-meg circuit. When we replace these with a VeloCloud unit, we get full speed entirely across the network (I realize it's not exactly apples to apples).
I haven't verified it with my own testing, as I'm not in a position to, but another one of our engineers says he was able to get a 60D to push 300+ megabits when directly connected to another 60D, and the 60Es were doing 500-ish.
Hi all,
Essentially we have an IPsec tunnel to a partner who needs to run file transfers across it. Both phases of the tunnel are up and operational, and most of their file transfers work just fine.
The issue only presents itself when our partner attempts to run these transfers as a batch job: if their transfer is over 10KB, the session simply times out.
In addition, I'm seeing the following in the sniffer output:
1335.450446 74.113.137.109 -> 209.95.252.127: icmp: 74.113.137.109 unreachable - need to frag (mtu 1400)
Have any of you ever run across this kind of message in the sniffer? At a glance I can tell the FG is passing along a 'fragmentation needed' complaint from a device in the path that can't get these packets through a 1400-byte MTU, but I'm pretty baffled past that.
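I'm planning to confirm the path MTU with DF-set pings sized against that 1400-byte limit, something along these lines (Linux ping syntax; I haven't actually run these yet):

    # 1372 bytes of payload + 28 bytes of ICMP/IP headers = 1400, so this should just fit
    ping -M do -s 1372 209.95.252.127

    # anything larger should trip the same "need to frag" response if the path MTU really is 1400
    ping -M do -s 1400 209.95.252.127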
Any advice or thoughts would be appreciated.
Thanks!