Re: pci-e x 1 and pci-e x 16

In article <XpWdnc_cMs8HVO3ZRVnyqQ@xxxxxxxxx>, "Squibbly" <junk@xxxxxxxx> wrote:

"nos1eep" <*> wrote in message
Man-wai Chang <toylet.toylet@xxxxxxxxx>wrote :

|johns wrote:
|> This pci-e x1 seems to be a lost cause. There is pretty
|> much nothing for it. I just built up a new game box
|I am expecting a PCI-e x1 sound card. If they have no plan for that, I
|see no value for the x1 slots.

There are a number of PCI-e cards available.
Unfortunately there are no sound cards.

Did I ask for a link where I could get cards? I'm sure that's easy. I'm just
asking what's the difference between those two PCI-e slots.

I'll try and give you a little history and perspective.

In the past, we had the PCI bus. The PCI bus is shared, and
both onboard chips and the PCI slots, are all connected in
parallel. This doesn't do wonders for the signal quality,
but fortunately, the PCI bus standards have done a wonderful
job of making it all work. People are hardly aware of
issues with making PCI bus cards work, and that is a good
thing.

Southbridge ----+----+----+----+----+    <-- 133MB/sec total
                |    |    |    |    |        bus traffic shared
               PCI  PCI  PCI  PCI  PCI       by all the devices
               slot slot slot slot slot

One of the advantages of PCI interfaces, is they are
dead easy to make. That is why so many companies have
been able to make products. There aren't a lot
of designers with PCI Express experience, and there
aren't (currently) enough incentives to make companies
put out more PCI Express cards.

PCI Express is a different concept. The interface is point to
point. One plugin device cannot interfere with the signal
quality of another device. From an engineering perspective,
that is the ideal situation, so no need for thousands of
hours of simulation test cases, to make the new interface
work.

The PCI Express bus is serial and operates at a high speed.
The speed is so high in fact, that it is approaching the
limits of how fast cheap silicon interfaces can run.

PCI Express interfaces are referred to as "lanes". A lane
runs at 250MB/sec, and there is a separate path for TX and
RX. In other words, it can run full duplex, while the PCI
bus runs simplex, either transmitting or receiving at
any one time.
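
As a quick sanity check on those numbers, per-link bandwidth just scales
with the lane count (figures here are the first-generation 250MB/sec per
lane, per direction, quoted above):

```python
# PCI Express 1.x bandwidth per link width, at 250 MB/s per lane
# in each direction (TX and RX are separate paths, so full duplex).
LANE_MB_PER_SEC = 250

def link_bandwidth(lanes):
    """Return (per-direction, aggregate full-duplex) MB/s for a link."""
    per_direction = lanes * LANE_MB_PER_SEC
    return per_direction, per_direction * 2

for width in (1, 4, 8, 16):
    one_way, both_ways = link_bandwidth(width)
    print(f"x{width:<2} -> {one_way} MB/s each way, {both_ways} MB/s full duplex")
```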

Each direction of a PCI Express lane uses just two wires. The
wires are a differential pair, carrying opposite logic levels:
when one wire has logic "0" on
it, the other one has logic "1". The differential interface
helps the thing to run at high speeds. There is a pair of
wires for TX and a pair of wires for RX. (You can think of
it as being almost like Ethernet wiring, and there are
packets flowing on there.)
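
To picture those opposite logic levels, here is a toy sketch of how each
bit appears on the two wires of a pair (a deliberate simplification; the
real link also layers symbol encoding on top of this):

```python
def to_differential(bits):
    """Map each logical bit to (D+, D-) wire levels: a complementary pair."""
    return [(b, 1 - b) for b in bits]

pairs = to_differential([1, 0, 1, 1, 0])
# Each tuple holds opposite levels; the receiver recovers the bit from
# the *difference* between the wires, which is what makes the link
# robust at high speed.
for plus, minus in pairs:
    assert plus != minus
print(pairs)
```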

The lanes are capacitively coupled. If you look next to the
video card slot, you'll see pairs of tiny chip capacitors
installed in series with a lane. That makes it easy to count
the lanes connected to a PCI Express slot, by just noting
the tiny chip capacitors.
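
So if you are counting capacitors to work out slot wiring, the arithmetic
is trivial, assuming one pair of series coupling caps per lane as described
above (`lanes_from_caps` is just an illustrative name):

```python
# Hypothetical helper: estimate lane count from the AC-coupling
# capacitors visible next to a PCI Express slot. Assumes two series
# capacitors (one differential pair's worth) per coupled lane.
def lanes_from_caps(cap_count):
    if cap_count % 2:
        raise ValueError("expected capacitors in pairs")
    return cap_count // 2

print(lanes_from_caps(32))  # 32 caps next to a video slot -> 16 lanes
```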

So, to connect a PCI Express x1 slot, there are two wires
for TX, and two wires for RX. This cuts down the number of
signal pins significantly from PCI. The PCI Express connector
still has to carry power to the card, and there are more
pins on the connector for power, than there are for signals.

Now that the low level details are out of the way, what
does the interconnect look like ? (Note - on some chipsets,
like Nforce4, the Northbridge and Southbridge are squashed
inside the same chip.)

+-------------+
|             |<----->   \
|             |<----->    \
|             |<----->     \
| Northbridge |   *         \______ 16 lanes =
|             |   *         /       4GB/sec to
|             |   *        /        the x16 slot,
|             |<----->    /         good for a
+-------------+                     video card
       |    Hub bus 1GB/sec or
       |    HT bus at faster speed
+-------------+
|             |<-----> PCI Express x1 slot 250MB/sec
|             |
|             |<-----> PCI Express x1 slot 250MB/sec
| Southbridge |
|             |<-----> PCI Express x1 slot 250MB/sec
|             |
|             |<-----> Onboard Raid Controller etc
+-------------+

Notice how, with PCI Express, each x1 slot gets its own private
bandwidth. The private bandwidth is higher than the old PCI
bus could offer in total.
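
The contrast with shared PCI is easy to put in numbers (classic
133MB/sec PCI bus split among active devices, versus a private
250MB/sec for every x1 slot):

```python
# Shared 32-bit/33MHz PCI: 133 MB/s divided among all active devices.
# PCI Express: each x1 slot gets a private 250 MB/s per direction.
PCI_BUS = 133
PCIE_X1 = 250

def worst_case_per_device(active_devices):
    """Bandwidth per device when all contend on the shared PCI bus."""
    return PCI_BUS / active_devices

for n in (1, 3, 5):
    print(f"{n} busy PCI devices -> ~{worst_case_per_device(n):.0f} MB/s each; "
          f"each PCIe x1 slot still gets {PCIE_X1} MB/s")
```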

Another thing to note, is there is nothing special about the
video card slot. For example, an Areca RAID controller, with
x8 PCI Express interface on it, has been run in a video card
slot on an A8N-SLI Deluxe. That means, if you wanted, you could
actually plug a PCI Express x1 ethernet card into a video card
slot.

The video card slot is longer, and the video card slot has
a hell of a lot of bandwidth to offer. AGP 8X gave
2100MB/sec bandwidth in a single direction, while PCI
Express x16 gives 4GB/sec in each direction (as there are 16 TX
pairs and 16 RX pairs of wires).
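
Putting those video-slot numbers side by side (AGP 8X is one direction
at a time; PCI Express x16 carries 4GB/sec each way over its 16 TX and
16 RX pairs):

```python
AGP_8X = 2100                  # MB/s, single direction at a time
PCIE_X16_ONE_WAY = 16 * 250    # 4000 MB/s per direction

print(f"AGP 8X peak:        {AGP_8X} MB/s (one direction at a time)")
print(f"PCIe x16 per dir:   {PCIE_X16_ONE_WAY} MB/s")
print(f"PCIe x16 aggregate: {2 * PCIE_X16_ONE_WAY} MB/s full duplex")
```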

So the only thing different about the video card slot, is
it has 16 times as many wires connected to it, as does the
PCI Express x1 slot. The bandwidth is private and just
for the video card.

To be able to insert a disk controller card into the
video card slot, the BIOS has to support it. There are
probably some BIOSes out there, that still don't like having
a non-video card plugged into the video card slot. But,
AFAIK, there is no architectural reason that other
cards can't be plugged in there. The Areca RAID card
proves it.

Note that, on boards like A8N32, where there are two PCI
Express x16 slots, in fact there is not sufficient
bandwidth on the Athlon64 processor, to actually handle
both PCI Express x16 slots at the same time. That means,
on average, each slot can only pump x8 lanes worth of traffic
before the Athlon64 interface saturates. While that doesn't
make a dual x16 motherboard useless, it does help explain
why there is not a lot of performance difference.

There are other "cheats" in the industry as well. On
some motherboards, there is a x4 PCI Express slot (slot
is longer than a x1 slot), but the motherboard maker
only wired x2 lanes to the x4 connector. So it is possible
for a manufacturer to "cheat" on the connector bandwidth,
by not wiring all the lanes. This typically happens if
there aren't enough lanes on the chipset. Most customers
will never notice the difference :-)

Hope that helps with your confusion.