Re: OT: switches and repeaters

Stephen wrote:
On Sat, 14 Aug 2010 13:21:58 +0100, Jim <jim@xxxxxxx> wrote:

Stephen wrote:
On Fri, 13 Aug 2010 16:00:25 +0100, Jim <jim@xxxxxxx> wrote:

The Natural Philosopher wrote:
Gordon Henderson wrote:
In article <MPG.26cee938b13d8ef0989686@xxxxxxxxxxxxxxxxxxxx>,
Mugwump <mugwump_is@xxxxxxxxxxx> wrote:
In article <8cit68FhgdU5@xxxxxxxxxxxxxxxxxx>, rde42@xxxxxxxxxxx says...
On Thu, 12 Aug 2010 11:35:31 +0000, Gordon Henderson wrote:

In article <8cgjloFhgdU4@xxxxxxxxxxxxxxxxxx>, Bob Eager <rde42@xxxxxxxxxxx> wrote:
On Wed, 11 Aug 2010 21:38:42 +0000, Gordon Henderson wrote:
OK, dug the manual out.

The H version is the one with the integral four-port switch (the H, rather inaccurately, stands for Hub).
It is quite accurate: the routers contain a 4-port hub, not a switch.
FWIW: I have confirmed that the 4-port thingy inside the P-660H-D1 devices
I have is acting as a switch, and therefore separating traffic between
ports and not a hub which would allow all ports to see all traffic.

A switch is a hub anyway.
It's just a switched hub as opposed to a repeating hub.

Since almost no one makes repeating-hub chipsets anymore, I would be surprised if such a thing exists at all on any current kit.
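For what it's worth, the distinction being argued over can be sketched in a few lines. This is only an illustrative toy model (my own names and structure, not any particular chipset's behaviour): a repeating hub floods every frame out of every other port, while a learning switch remembers which port each source MAC arrived on and sends known unicast destinations to a single port, flooding only unknown ones.

```python
# Toy model of the hub-vs-switch distinction: a repeating hub floods
# every frame to all other ports; a learning switch builds a MAC->port
# table from source addresses and forwards known unicasts to one port.

def hub_forward(in_port, ports):
    """A repeating hub: every frame goes out of every other port."""
    return [p for p in ports if p != in_port]

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                 # MAC address -> port, learned on receive

    def forward(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port   # learn where the sender lives
        if dst_mac in self.table:       # known destination: one port only
            return [self.table[dst_mac]]
        return hub_forward(in_port, self.ports)  # unknown: flood like a hub

sw = LearningSwitch([1, 2, 3, 4])
print(sw.forward(1, "aa", "bb"))   # "bb" unknown -> floods [2, 3, 4]
print(sw.forward(2, "bb", "aa"))   # "aa" was learned on port 1 -> [1]
```

The practical upshot is the one Gordon tested above: on a switch, traffic between two ports is not visible on the others once the source addresses have been learned.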
I've always understood that the main constraint on designing Cat 5 networks was the 205-metre maximum collision domain for Fast Ethernet (2 cable runs with a repeater). Since no-one uses repeaters any more, why do we still design networks with 100m maximum cable runs?
Nope - it is the other way around.

Cat 5 was designed for cabling in buildings (TIA / EIA?), and that
started out before 10 Base-T was a standard.

those standards specified 90m for the fixed wiring, and 10m for
patching into a wiring closet device / desktop device.

Ethernet just adopted what was already in wide use, then drove the
need for better cable types as data rates increased.
The first EIA/TIA-568 standard was published in July 1991.

I am pretty certain the standard codified existing practice, so it goes
back further than that.

I don't remember much standardisation for data cabling in the 1980s. Nearly everything was point-to-point (or to multipoint!) - there wasn't any "structured cabling". Maximum speed depended on length and cable type. If anything, I think the TIA standard was following general telecomms practice for voice in having everything concentrated through floor and building distribution points, but with shorter horizontal runs to meet the data limits.

I don't think it referenced particular standards for data transmission, but it did specify coax and UTP (up to 16 MHz performance limits), and the maximum 90m cable run, even for fibre. Ethernet wasn't the only LAN in use at the time, but surely it was the collision limits that determined this figure. Why else would they have chosen it? Why specify a limit at all?

OK - digging out an old book (Gigabit Ethernet by Rich Seifert):

In 1991 Ethernet standards for twisted pair stopped at 10 Base-T
speeds, and the collision domain for 10 Base-whatever networks extends
to up to 4 km (depending on the number of hops, segment types and so on).

100 Base-T standards first got approved in 1995, with proprietary stuff
in 1992 or 1993.
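Those limits can be sanity-checked with some rough slot-time arithmetic (my own figures, not from the book: a 512-bit slot time and roughly 2e8 m/s signal speed in copper; a real budget also charges repeater and PHY delays against the slot, which is why the usable diameter ends up well below these bounds - around 205m for Fast Ethernet, a few km for 10 Mb/s):

```python
# Rough collision-domain arithmetic for shared (half-duplex) Ethernet.
# A station must still be transmitting when a collision from the far end
# of the network gets back to it, so the round-trip delay must fit inside
# the 512-bit slot time. Figures here are nominal and ignore device delays.

SLOT_BITS = 512          # slot time = minimum frame size, in bit times
NVP_SPEED = 2.0e8        # approx. signal speed in twisted pair, m/s

def max_one_way_cable_m(bit_rate_hz):
    """Upper bound on one-way cable length if the whole slot time were
    spent on round-trip propagation (no repeater or PHY delays)."""
    slot_s = SLOT_BITS / bit_rate_hz
    return slot_s * NVP_SPEED / 2

print(max_one_way_cable_m(10e6))   # 5120.0 m - plenty of room for ~4 km nets
print(max_one_way_cable_m(100e6))  # 512.0 m - device delays eat most of it
```

At 100 Mb/s the raw bound is only ~512m, and once repeater and transceiver delays are subtracted you are left with roughly two 100m runs plus a repeater - which is where the 205m figure mentioned earlier comes from.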
We may be stuck with it now, as everything is manufactured to a spec. to meet the standard, but it seems the reasoning behind that standard is obsolete.

True - (old saying about God not managing to build the world in 7 days
if he had had an installed base).

Don't forget architects have been laying out new building designs to
conform to this limit for a while now, so inertia is part of the problem too.

It isn't just collision-domain timing that limits UTP Ethernet reach -
crosstalk between the pairs and other signal issues get much worse as
cables get longer.

Yes, but the cable was designed to meet the 100m overall segment length. It could have been designed for longer distances, though would probably have been more expensive for the same speed.

The compromises involved in the EIA/TIA standard seem to be costing us now. The designers intended the four pairs to be shared between voice and data, but how many sites actually do that? The result is cabling with 50% or 75% redundancy - that's a lot of copper using up duct space and costing labour to terminate. A two-pair cable would have reduced costs significantly and met at least 99% of needs. It might even have been more practical to install 2-pair STP for better performance.

Higher-speed services such as Gigabit Ethernet have a shorter distance limit. The alternative is to install more expensive cabling: Cat 7/7a seems to cost 10 times as much as Cat 5e, so no-one will flood-wire with that.

Since switches have become so cheap, a more cost-effective cabling design would have fewer horizontal runs to offices (but with higher-speed cables, where length permits) and more short-run distribution within them. I suspect this happens a lot in practice, with a resulting tangle of wires and equipment in places they weren't designed to fit.

In short, the structured cabling schemes that were laid out 20 years ago are not as helpful to modern network designers as the authors might have hoped.