Re: 54 Processors?

edgould@xxxxxxxxxxxx (Ed Gould) writes:
> Now I know I am out of date on this but somewhere in the mists of
> time, I could swear that IBM came out saying that anything above 18 (???
> this is a number I am not sure of) was not good, in fact it was bad as
> the interprocessor costs were more overhead than they were
> worth. They cited some physics law (IIRC).
> Did IBM rethink the "law" or are they just throwing 54 processors out
> hoping no one will order it?
> My memory is cloudy but I seem to recall these statements around the
> time of the 168MP.

a big problem was the strong memory consistency model and the cache
invalidation model. two-processor smp 370 cache machines ran each
processor at .9 times the cycle of a single-processor machine ... to
allow for cross-cache invalidation protocol chatter (any actual
invalidates would slow the machine down even further). this resulted
in the standard description of two-processor 370 hardware as 1.8 times
(aka 2*.9) a uniprocessor .... actual cross-cache invalidation
overhead and additional smp operating system overhead might bring
actual thruput down to 1.5 times a uniprocessor machine.

we actually had a 16-way 370/158 design on the drawing boards (with
some cache consistency sleight of hand) that never shipped ... minor
posting reference: Code density and performance?

3081 was supposed to be a native two-processor machine ... there was
originally never going to be a single-processor version of the 3081.
eventually a single processor 3083 was produced (in large part because
TPF didn't have smp software support and a lot of TPF installations
were saturating their machines ... some TPF installations had used
vm370 on 3081 with a pair of virtual machines ... each running a TPF
guest). the 3083 processor was rated at something like 1.15 times the
hardware thruput of one 3081 processor (because they could eliminate
the slow-down for cross-cache chatter).

a 4-way 3084 was much worse ... because each cache had to listen for
chatter from three other processors ... rather than just one other.

this was the time-frame when vm370 and mvs kernels went thru
restructuring to align kernel dynamic and static data on cache-line
boundaries and multiples of cache-line allocations (minimizing a lot
of cross-cache invalidation thrashing). supposedly this restructuring
got something over a five percent increase in total system thruput.

later machines went to things like using a cache cycle time that was
much faster than the rest of the processor (for handling all the
cross-cache chatter) and/or using more complex memory consistency
operations ... to relax the cross-cache protocol chatter bottleneck.

around 1990, SCI (scalable coherent interface) defined a
memory consistency model that supported 64 memory "ports".

Convex produced the Exemplar using 64 two-processor boards where the
two processors on the same board shared the same L2 cache ... and the
common L2 cache interfaced to the SCI memory access port. This
provided for a shared-memory 128-processor (HP RISC) configuration.

in the same time-frame, both DG and Sequent produced a four-processor
board (using intel processors) with shared L2 cache ... with 64 boards
in an SCI memory system ... supporting a shared-memory 256-processor
(intel) configuration. Sequent was subsequently bought by IBM.

part of SCI was a dual-simplex fiber-optic asynchronous interface
.... rather than a single, shared synchronous bus .... SCI defined bus
operation with essentially asynchronous (almost message-like)
operations being performed (somewhat compensating for latency and
thruput compared to a single, shared synchronous bus).

SCI had a definition for asynchronous memory bus operation. SCI also
had a definition for I/O bus operation ... doing things like SCSI
operations.

IBM 9333 from Hursley had done something similar with serial copper
.... effectively encapsulating scsi synchronous bus operations in
asynchronous message operations. the Fibre Channel standard (FCS,
started in the late 80s) also defined something similar for I/O
protocols.

we had wanted 9333 to evolve into FCS-compatible infrastructure

but the 9333 stuff instead evolved into SSA.

ibm mainframe eventually adopted a form of FCS as FICON.

SCI, FCS, and 9333 ... were all looking at pairs of dual-simplex,
unidirectional serial transmission using asynchronous message flows
partially as latency compensation (not requiring end-to-end
synchronous operation).
a few recent postings mentioning 9333/ssa:
something like a CTC on a PC
IBM's mini computers--lack thereof
IBM's mini computers--lack thereof

a few recent postings mentioning SCI:
shared memory programming on distributed memory model?
Device and channel
Device and channel
something like a CTC on a PC
Performance and Capacity Planning

Anne & Lynn Wheeler
