- Fri 23 November 2018
- misc
One of the things I encountered at $DAYJOB is that my predecessors seemed oddly preoccupied with speccing Cat6 cable rather than Cat5e, for all applications.
Why?
For anything up to and including 1000BASE-T (gigabit Ethernet), Cat6 is no better than Cat5e. It does not extend the maximum run length beyond 100 meters; it just costs more.
Maybe they were trying to be future-proof for customers who they thought might upgrade to 10G-on-copper handoffs? We could give them the benefit of the doubt and assume that was the case. But Cat6 is not really suitable for 10GBASE-T. Every previous Ethernet-over-UTP standard supports 100m runs; with 10GBASE-T, Cat6 only goes 55m. It is super easy to run out of distance in a datacenter if you can only go 55 meters including the patch cables.
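To make that concrete, here is a back-of-the-envelope check. The specific lengths are made up for illustration, but they show how quickly patch cables eat into a 55m budget that would be perfectly comfortable at 100m:

```python
# Hypothetical channel-length check for 10GBASE-T over Cat6 (55 m limit).
# All lengths below are illustrative, not taken from any real layout.
PATCH_A = 5      # patch cable at the switch end, meters
HORIZONTAL = 48  # structured run between racks, meters
PATCH_B = 5      # patch cable at the server end, meters

total = PATCH_A + HORIZONTAL + PATCH_B
print(f"channel length: {total} m")                      # 58 m
print("fits Cat6  @ 10GBASE-T (55 m)? ", total <= 55)    # False
print("fits Cat6a @ 10GBASE-T (100 m)?", total <= 100)   # True
```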
If you want to go 100m with 10GBASE-T, you can use Cat6a. Reasonable people may disagree about whether this is a good direction to go (for datacenter work, I fall into the "fiber for all outside-rack applications" camp). If you want to go 100m with 1000BASE-T, Cat5e is entirely sufficient.
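If you want a quick sanity check when speccing a run, a small lookup table captures everything above. This is a minimal sketch; the table simply encodes the 100m and 55m figures already discussed, and the function name is my own:

```python
# Maximum channel length (meters) for common UTP Ethernet combinations.
# Values encode the distances discussed above; treat as a sketch.
MAX_RUN_M = {
    ("cat5e", "1000base-t"): 100,
    ("cat6",  "1000base-t"): 100,  # no better than Cat5e at gigabit
    ("cat6",  "10gbase-t"):   55,  # the datacenter gotcha
    ("cat6a", "10gbase-t"):  100,
}

def run_is_ok(category: str, speed: str, length_m: float) -> bool:
    """Return True if a run of length_m fits within the supported distance."""
    limit = MAX_RUN_M.get((category.lower(), speed.lower()))
    if limit is None:
        raise ValueError(f"no entry for {category} at {speed}")
    return length_m <= limit

# A 70 m run, including patch cables:
print(run_is_ok("cat6", "10gbase-t", 70))   # False: only 55 m supported
print(run_is_ok("cat6a", "10gbase-t", 70))  # True
```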
My best guess is that my predecessors had fallen victim to Nigel Tufnel Syndrome ("these go to eleven"). Don't be that guy, unless of course you're speccing ultra-thin cables or have found a situation where the Cat6 cable is actually less expensive than the Cat5e cable (in which case... you win!).