Re: DMV systems?



as400 wrote:
> Well, thanks for this information..I really appreciate it...
>
> And lastly, can Solaris (UNIX) be run on a Mainframe or not? Because
> you said:
>
> " would say that most of the systems were mainframe based (IBM and
> Unisys) and non-Unix based OS's:"
>
> Please advise.

what do you mean by mainframe?

i think sun talks about running on mainframe-class machines.

original sun workstations were 68k before they produced risc sparc ...
minor reference to 68020/68030 machines
http://www.obsolyte.com/sun380/

now sparc and i86 machines are supported ... this has minor reference to
both platforms
http://www.sun.com/software/security/securitycert/

i think that unisys logo'ed sequent's machine for a time ... as another
"mainframe class" machine.

this minor reference has unisys starting to logo sequent machines even
before the sequent NUMA-Q machines
http://lists.freebsd.org/pipermail/freebsd-smp/2003-July/000247.html

in the 80s we participated in both the fcs and sci standards activities. the
fcs standard was somewhat an outgrowth of LLNL's work with a non-blocking
copper-wire switch, remapped to 1gbit/sec fiber optics. SCI was some
work out of SLAC to take fiber optics and use them for an asynchronous bus.

there had been this fiber-optic technology that had been kicking around
pok since the 70s that was having a hard time getting out. part of it
appeared to be around the battle with the communication division over who
"owned" stuff that crossed the wall surrounding the glass house. my wife
was in the middle of this since she had done a stint in POK in charge of
loosely-coupled architecture ... where she produced the peer-coupled shared
data architecture
http://www.garlic.com/~lynn/subtopic.html#shareddata

... which didn't see much uptake until parallel sysplex ... except for
some work here and there ... like IMS hot-standby.

in any case, the jurisdictional resolution was supposedly that CPD's
terminal controller paradigm (sna) owned anything that crossed the
boundary of the glass house walls.

one of the austin interconnect engineers took the pok fiber technology
... and tweaked it here and there ... getting about ten percent higher
thruput (220mbits/sec rather than 200mbits/sec for escon) and used
optical drivers that were at least an order of magnitude less expensive
(than escon). this was announced on rs/6000 as sla (serial link adapter).

he then wanted to go on and do an 800 mbit version of sla. we convinced
him to move to working on FCS ... where he became editor of the FCS
standards document. part of this was that rs/6000 was much more into a
market segment that highly prized interoperability ... and it was
difficult to have a lot of interoperability with a proprietary
interconnect. one of the issues was that FCS was basically a fully
asynchronous, full-duplex operation. later there were some horrible
battles when some POK channel engineers became involved with FCS and
tried to do some unnatural acts like layering the mainframe half-duplex
synchronous channel paradigm on an underlying infrastructure that is
asynchronous full-duplex (or dual simplex, as i periodically refer to it)
... which i believe may now be referred to as ficon.
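
as a rough illustration of why layering a half-duplex synchronous
discipline on a dual-simplex link hurts, here is a minimal sketch (my own
back-of-the-envelope, with made-up timing numbers, not anything from the
actual FCS or ficon work) comparing one-exchange-at-a-time operation
against streaming frames on independent send/receive directions:

/* illustrative only: FRAME_US and LINK_RTT_US are assumed numbers */
#include <stdio.h>

#define FRAME_US       10.0   /* assumed time to put one frame on the fiber     */
#define LINK_RTT_US   100.0   /* assumed round trip for an end-to-end handshake */
#define NFRAMES      1000

/* half-duplex synchronous: only one exchange outstanding; every frame
   waits for the far end's response before the next one starts */
static double half_duplex_sync(void)
{
    return NFRAMES * (FRAME_US + LINK_RTT_US);
}

/* dual-simplex (asynchronous full-duplex): frames stream back-to-back on
   the outbound fiber while responses flow back on the inbound fiber */
static double dual_simplex(void)
{
    return LINK_RTT_US + NFRAMES * FRAME_US;
}

int main(void)
{
    printf("half-duplex synchronous: %.0f usecs\n", half_duplex_sync());
    printf("dual-simplex streaming:  %.0f usecs\n", dual_simplex());
    return 0;
}

with those assumed numbers the synchronous discipline comes out roughly an
order of magnitude slower ... the point being just that per-operation round
trips dominate once handshake latency is large relative to frame time.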

so in parallel with all this was the "SLAC" effort to use similar
fiber-optic technology for asynchronous bus operation rather than
asynchronous link/io operation. sci reference:
http://www.scizzl.com/

one of the reasons that we wound up producing ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

that was supposed to use fcs for scale-up ... minor reference
http://www.garlic.com/~lynn/95.html#13

was that the RIOS chips didn't support cache coherency ... and a major
SCI effort was an asynchronous memory bus implementation.

for a little more digression ... i've periodically asserted that much of
the 801/risc/romp/rios genre
http://www.garlic.com/~lynn/subtopic.html#801

was attempting to drastically simplify hardware after the disastrous
experience of future systems
http://www.garlic.com/~lynn/subtopic.html#futuresys

the other characteristic was that it seemed that the 801/risc had a
scalded-cat reaction to the enormous multiprocessor cache consistency
overhead exacted by the strong memory consistency paradigm of the
highend mainframe 370s. not only did 801/risc go to the opposite extreme
of future systems ... but also to the opposite extreme from the highend 370s
with regard to cache consistency. as a result, it was essentially
impossible to build a scale-up multiprocessor system with SCI and rios
chips. essentially, the only scale-up fall-back was purely
loosely-coupled (aka cluster) operation with high-speed (i/o) interconnect.
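
to make the cache consistency point concrete, here is a minimal sketch
(my own illustration, not anything from the rios effort; the
flush/invalidate calls are no-op stand-ins for operations non-coherent
hardware would require) of what shared-memory software has to do when the
hardware doesn't keep the caches consistent:

#include <stdio.h>

static volatile int shared_flag = 0;   /* hypothetically shared between two processors */

/* stand-ins for the explicit cache management a non-coherent machine
   would need around every shared reference (assumed, not real primitives) */
static void cache_flush(volatile void *addr)      { (void)addr; }
static void cache_invalidate(volatile void *addr) { (void)addr; }

/* producer side: store, then explicitly push the line out to memory */
static void producer(void)
{
    shared_flag = 1;
    cache_flush(&shared_flag);
}

/* consumer side: explicitly discard any stale cached copy, then load */
static int consumer(void)
{
    cache_invalidate(&shared_flag);
    return shared_flag;
}

int main(void)
{
    producer();
    printf("consumer sees shared_flag = %d\n", consumer());
    return 0;
}

doing that around every shared access is impractical for general
shared-memory scale-up ... which is why the fall-back was loosely-coupled
clusters passing messages over the (i/o) interconnect instead of sharing
memory.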

convex built a 128-way exemplar with dual-processor boards using HP/RISC
chips. the SCI asynchronous memory operation standard allows for 64 memory
ports. convex had dual-processor shared-cache boards ... and 64 such boards
allowed for a maximum 128-way configuration.

both sequent and data general did something similar using 64-port SCI
... except they used intel quad-processor boards (four processors
sharing a cache, which then used the 64-port SCI to interface to memory).
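
the arithmetic behind those configurations (just restating the numbers
above, 64 ports times processors per board, nothing more):

#include <stdio.h>

int main(void)
{
    const int sci_memory_ports = 64;   /* memory ports allowed by the SCI standard */

    /* convex exemplar: dual-processor shared-cache HP/RISC boards */
    printf("convex:  %d boards x 2 = %d processors\n",
           sci_memory_ports, sci_memory_ports * 2);

    /* sequent numa-q / data general: quad-processor intel boards */
    printf("sequent: %d boards x 4 = %d processors\n",
           sci_memory_ports, sci_memory_ports * 4);
    return 0;
}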

this was the 256-processor sequent numa-q machine that IBM inherited
when they bought sequent. it was also the machine that unisys was
logo'ing as a mainframe-class machine (but running a unix-derived
operating system).

in this time-frame a separate, somewhat parallel effort was started to
produce power/pc (as opposed to "power", which was the marketing name for
the rios chipset) ... which would have support for cache consistency
(and some number of other differences from the original
801/risc/romp/rios/power).

misc. past fcs, sci, fiber, postings
http://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
http://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
http://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
http://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
http://www.garlic.com/~lynn/2002j.html#78 Future interconnects
http://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
http://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
http://www.garlic.com/~lynn/2003j.html#65 Cost of Message Passing ?
http://www.garlic.com/~lynn/2003p.html#1 An entirely new proprietary hardware strategy
http://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
http://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
http://www.garlic.com/~lynn/2005.html#38 something like a CTC on a PC
http://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
http://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
http://www.garlic.com/~lynn/2005e.html#12 Device and channel
http://www.garlic.com/~lynn/2005h.html#7 IBM 360 channel assignments
http://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
http://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005m.html#55 54 Processors?
http://www.garlic.com/~lynn/2005n.html#6 Cache coherency protocols: Write-update versus write-invalidate
http://www.garlic.com/~lynn/2005n.html#37 What was new&important in computer architecture 10 years ago ?
http://www.garlic.com/~lynn/2005r.html#43 Numa-Q Information
