Fun fact: you can connect more than 2 systems in FC without a switch. Just hop along from one TX to the next RX and build a ring. (It's called FC-AL, "Arbitrated Loop".) Works with fibre if you break the duplex cables apart to make simplex connections.
Obviously the entire thing fails if any node is powered off or has any kind of problem, but it's quite funny to do ;)
The nice thing about FDDI was that it could tolerate one node failure, because it had two rings. (Each node has two fiber-optic cables to both of its neighbors.) If a node fails, its neighbors just send traffic the other direction.
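For illustration, here's a toy model (made up for this comment, not real FDDI MAC behaviour, and the still_connected helper is invented for the sketch): once the neighbors wrap the primary ring onto the secondary one, the topology behaves like a cycle in which every surviving station can still reach both of its neighbors, so one failure leaves everything connected but a second one can split the ring.

    # Toy model only: treat the wrapped dual ring as a cycle where every
    # surviving station can still reach both of its immediate neighbors.
    def still_connected(n_stations, failed):
        """True if all surviving stations can still reach each other."""
        alive = [s for s in range(n_stations) if s not in failed]
        if not alive:
            return True
        seen, queue = {alive[0]}, [alive[0]]
        while queue:
            s = queue.pop()
            for nbr in ((s - 1) % n_stations, (s + 1) % n_stations):
                if nbr not in failed and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        return seen == set(alive)

    print(still_connected(8, failed={3}))     # True: the wrap heals the ring
    print(still_connected(8, failed={3, 6}))  # False: two failures split it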
> Obviously the entire thing fails if any node is powered off or has any kind of problem, but it's quite funny to do
Indeed, it was funny enough to have been a very common network topology back in the day. I've never had to investigate a token ring network failure, but I can pretty much guarantee it would have been a frustrating exercise.
I was going to say that I thought token ring MAUs handled disconnected stations but then I read this (including the "setup aid" with an included battery) and now I'm not sure any more:
A few years ago a few friends and I found a pile of old PCI Express Fibre Channel cards from some manufacturer I can no longer remember. With 2.5 gigabits per second being faster than one and everyone being on an undergrad budget, we soon started wondering whether they could be repurposed as network interface cards in our ad hoc server colo, built in an old abandoned storage room that didn't even exist in the original dorm floor plans.
A bit of googling turned up that it was possible and supported by the manufacturer - if you ran Linux 2.6 and were prepared to compile the kernel drivers yourself. We quickly ditched the project when we found out that the old driver made newer kernels panic, and that setting up link aggregation on normal gigabit Ethernet cards wasn't that much of a hassle anyway.
I worked with FC for a while, and IP over FC was always a wonky kinda situation. It really made no sense that, if you had a big expensive FC fabric, you'd funnel your IP traffic through it.
It wasn't that much of a lift for most organizations to just have an IP network for IP-type traffic, and leave their high-performance FC fabric alone to do its thing.
The few situations where it was done were when someone had some rando traditional IP systems that you'd connect just because, but those were corner cases: kinda forgotten bits of equipment that you were just waiting to replace or time out.
FCoE (Fibre Channel over Ethernet) was similar. Not only was it an amalgamation of arguably incompatible standards, but it was also an unusual use case. If you needed data security, reliability, etc., you could go FC. If you needed a cheap data protocol you could go iSCSI.
FCoE was (and may still be) pushed heavily by Cisco. It was neat, being able to have your Cisco UCS devices talk to both Fibre Channel NetApp and Ethernet all on the same link.
It worked fairly well once it was up and running and you had set the QoS correctly so that FC traffic won out over Ethernet, because Fibre Channel does NOT tolerate drops very well.
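Roughly sketching that QoS point (purely illustrative - real converged switches do this with 802.1Qbb priority flow control, and this ConvergedPort class is invented for the sketch): the FCoE class gets treated as lossless, so when the shared buffer fills the FC sender is paused rather than having frames dropped, while best-effort Ethernet frames are simply dropped and left for TCP to retransmit.

    # Illustrative sketch only -- real gear uses 802.1Qbb PFC, not this logic.
    from collections import deque

    class ConvergedPort:
        """One converged link: a lossless FCoE class and a droppable
        best-effort Ethernet class sharing a small buffer."""

        def __init__(self, buffer_slots=4):
            self.fc = deque()            # no-drop class
            self.eth = deque()           # best-effort class
            self.buffer_slots = buffer_slots
            self.fc_sender_paused = False

        def enqueue(self, frame, is_fc):
            used = len(self.fc) + len(self.eth)
            if is_fc:
                if used >= self.buffer_slots:
                    self.fc_sender_paused = True   # pause instead of dropping
                    return "paused"
                self.fc.append(frame)
                return "queued"
            if used >= self.buffer_slots:
                return "dropped"                   # TCP will retransmit
            self.eth.append(frame)
            return "queued"

        def transmit_one(self):
            # FC wins the link whenever it has traffic queued.
            if self.fc:
                self.fc_sender_paused = False      # buffer space freed again
                return self.fc.popleft()
            return self.eth.popleft() if self.eth else None

    port = ConvergedPort(buffer_slots=2)
    print(port.enqueue("fc-frame-1", is_fc=True))    # queued
    print(port.enqueue("eth-frame-1", is_fc=False))  # queued
    print(port.enqueue("eth-frame-2", is_fc=False))  # dropped - buffer full
    print(port.enqueue("fc-frame-2", is_fc=True))    # paused - never dropped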
It was a cool idea in theory. One cable for everything! And then you realize that you still have to worry about power, KVM (or a dedicated IPMI), and any other potential add-ons. Emulex worked on getting their chips in as onboard NICs, which would have been neat for that, but honestly the performance and lack of upgradability couldn't compete with add-on cards.
When the 8Gbps FC network used one color and the old routers only managed a couple of 1Gbps links on the other available colors, IPoFC seemed like a pretty good idea, especially when NetApp would only SnapMirror over IP.
I certainly hope someone has turned that off by now.
I managed to get IP over FireWire working once but came to the same realization: there is never a time when it makes sense to deploy it. The solution was just unreliable enough that even if it was a bit faster, you'd lose more hair trying to keep it working than it was worth.
I have used 8Gbit FC transceivers in my homelab as a replacement for multimode 10Gbit transceivers. They work quite well; no problems with the ones I've still got running.
I used whichever were cheapest to buy on eBay, which currently were Mellanox ConnectX-3 and HP-branded QLogic cards. Both work without a problem in my homelab.
I do have one server with a built-in Intel card that is more annoying, so I'd steer away from those in the future. The Intel card did not take all transceivers; it only wanted Cisco-branded ones (of the ones I had available - I'm gonna guess Intel ones would be fine too).
Here we have a network card which normally pretends to be a disk, except this time it's pretending to be a scanner pretending to be a network card.
Brilliant.