Palo Alto would do what you want. A PA-410 or PA-440 would probably do for your ships. They're not rated for harsh conditions at all, but they're about as robust as you'll find for basic network gear. If you get a PA for the home office as well, you can use their SD-WAN for connecting everything.
For switching… how many ports do you need on each ship? I'm using UniFi industrial switches in our manufacturing plants. They stand up to the Texas summers in a highly alkaline environment. They're only ten ports, though (8 PoE).
In the same rack as the router we usually have two 48P PoE switches in a stacking configuration. Depending on the scale of the setup, we often have the same type of switches elsewhere, trunked in via 10gig fiber. On rare occasions we have a bunch of extra fibers, for which we use an Aruba 3810 with only SFP ports.
We also have 100gig in use, but that's only for a few closed-off networks between the servers, with a dedicated Mellanox switch. While those do connect to the 10gig network via a breakout cable, the 100gig bandwidth is only needed internally in the cluster.
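If you want to sanity-check uplink sizing for a stack like that, the napkin math is simple. Here's a minimal sketch in Python, assuming gigabit access ports and a single 10G trunk (the function and numbers are illustrative, not from any vendor tool):

```python
# Napkin math for access-to-uplink oversubscription.
# Assumptions (adjust for your gear): gigabit access ports on the
# 48P PoE switches, one 10G fiber trunk back to the core.

def oversubscription(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Worst-case ratio of total access capacity to uplink capacity."""
    return (ports * port_gbps) / uplink_gbps

ratio = oversubscription(ports=2 * 48, port_gbps=1.0, uplink_gbps=10.0)
print(f"2x48P gigabit stack over one 10G trunk: {ratio:.1f}:1")  # 9.6:1
```

At roughly 9.6:1 that's comfortably within the usual access-layer range for user traffic, which is also why the 100gig stays server-side: only the cluster ever needs that kind of bandwidth.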
+1 for Palo Alto.
Quite expensive, but I've had nothing but good experiences with their hardware.