Re: The Lisp Curse

On Jul 4, 2:13 pm, Alex McDonald <b...@xxxxxxxxxxx> wrote:
On Jul 4, 6:21 pm, Tarkin <tarkin...@xxxxxxxxx> wrote:

On Jul 4, 10:02 am, Fritz Wuehler

<fr...@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
If you're still operating an ancient computing machine, then yes, it could.  If
a platform has lived that long, then yes, it could.  What computing platform
has lived that long?

System Z is the upward compatible system that exists today based on the
System 360 from 1964.

Roughly the same span of time as the Filet-O-Fish....

x86 is only 3 decades old. I think it's the longest-lived platform, at
least without changes to its instruction set.

Not even close. Object code from 1964 will run on today's IBM machines. Much
source from those days will still compile and most of it will assemble. If
it doesn't, you can almost always use an older version of a compiler or
assembler and get anything written since then to run.

And, oddly enough, critics of the WinTel architecture usually use
backwards compatibility as a harsh criticism....

I'm not aware of any computing platform pre-dating microprocessors that
hasn't died.

Now you are. It's the most widely used commercial data processing platform
in the world, virtually all the Fortune 1000 companies have one or more of
them running their enterprise workload. Until the mid 1980s it was *the*
computer system the world ran on. It still is, but now it has company.

Even IBMs platforms have died or been changed substantially over time.  Have
they managed to keep their instruction set the same for 50 years?  By
"died", I mean that they are no longer in production, not that all produced
machines are no longer functional.  Once the platform has "died" the code
requires a complete rewrite for a new platform.  Recompiling code for HLLs
on the other hand is frequently a nonevent.  Yes, some code requires serious
rework.  It depends.

Yes, they have kept the same instruction set for 50 years, of course it is
now much bigger and has many more features, but none of the old design was
removed. For example it still has 24 bit addressing, but it also has 64 bit
(and 31 bit) addressing. It now has extra floating point registers for IEEE
in addition to the original IBM floating point regs. All the instructions
defined in the 1960s-era manuals still work as stated.

Does my 6502 code still work?  Yes, if I pull out and power up a collectible
6502 based machine assuming the machine didn't fail upon power-up.  Are
6502s still used as the primary processor in a computing platform?  Not as
far as I'm aware.  Are 6502s still in production?  I don't know.  I know
that fast 6502 variants were produced until a decade or so ago for I/O
coprocessors.  So, the 6502 microprocessor and its codebase is effectively
dead as a computing platform.  If I want my 6502 source code to work on x86,
I have to rewrite, recode, or port it.  Assembly code "dies".  It's too
dependent on the platform.

Again, not in all cases! One of the most important cases from a commercial
and technology viewpoint is IBM System Z. It is still being developed and
sold, hardware is being updated and sold. It's IBM's flagship and it's
amazing how little people who don't work on it know about it, considering
how important it is.

I'll restate one of your statements:
"It's IBM's flagship and it's amazing how little
people who don't work on it know about it, considering how important
it is."

snip dialogue about GCC, MVS, etc....

I meant the claim of not being able to write system code on an OS coded in
assembly, admittedly DOS is not run on a mainframe...  System code isn't
written in C, but could be, for DOS anyway.  My personal (stalled,
in-progress) OS for x86 is in C except for specialized x86 instructions.

I did not mean that at all. What I said is you cannot write system code in
any language but assembler on the mainframe. That's just the way it
is. That's not a theoretical argument and I only said it concerning the
platform I know about. I did not think it applied to other platforms but I
do know it's easier to write system code in C on UNIX because the OS is
written in C with C interface and needs libc to do basic stuff. It's not an
exact comparison because you can write system code in other languages
(assembly) on *NIX but it's harder because you have to create mappings.  In

Not on LINUX. Kernel-level system calls use a register-based argument
system. Programming in assembler on Linux is a breeze, for me.
I'm guessing you're thinking of BSD and derivatives, which almost
require the use of C-isms and stack frames and such.
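To make the register-based convention concrete, here is a minimal sketch in C, assuming x86-64 Linux and GCC-style inline assembly (the function name `raw_write` is mine, not from the thread):

```c
#include <stddef.h>

/* Minimal sketch of a raw Linux system call with no libc involvement,
 * assuming the x86-64 ABI: syscall number in rax, arguments in
 * rdi/rsi/rdx, kernel entered via the `syscall` instruction, which
 * clobbers rcx and r11. */
static long raw_write(long fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)                    /* return value comes back in rax */
                      : "a"(1L),                     /* __NR_write == 1 on x86-64 */
                        "D"(fd), "S"(buf), "d"(len)  /* rdi, rsi, rdx */
                      : "rcx", "r11", "memory");
    return ret;
}
```

Calling `raw_write(1, "hi\n", 3)` writes to stdout and returns the byte count, exactly as a hand-written assembler version would; nothing here passes through libc.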

the IBM environment it is simply impossible, not *only* because of the
mappings and interface, but because of other issues that simply don't exist
on other architectures. No other language available on IBM systems can
support the requirements.

This is a deliberate architectural design decision.

Evidence please.

Evidence to the contrary, please.

IMO, one made
to enforce vendor lock-in, requiring hefty licenses to make or sell
third-party products, and keep the end-user dependent upon IBM.
Profitable? Yes. Important? Perhaps to IBM's shareholders, and
the shareholders of the companies that depend on IBM to run
their 'enterprise'.

Since Andy is trying to cut in with his new-found wikipedia knowledge, I am
including PL/X variants when I say "assembler" because that's what they
are. They aren't available outside of IBM (PL/X was, and I used it, but it
is no longer available) so all system code written by vendors is written in
assembler proper. It always has been.

1) used to exit nested code
This is unnecessary: restructure the code, or use status flags, or set up a
fall-through, or use different flow control.

That's intellectual dishonesty. At the end of the day it's still a goto. If
you can tolerate extra branches in your loop structures or have to define
more switches to test just so you can say you avoided a goto, what have you
really accomplished?

You omit code restructuring, which in most cases would obviate the
need for goto; that's a shade of dishonesty, eh?
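For what it's worth, the two positions can be put side by side in a few lines of C (a sketch; the function names and the search problem are mine, chosen only to show the pattern):

```c
#include <stddef.h>

/* The "goto to exit nested loops" pattern under discussion: find the
 * first pair of elements summing to target, bailing out of both loops
 * with a single goto. */
static int find_pair_goto(const int *a, size_t n, int target,
                          size_t *out_i, size_t *out_j)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] + a[j] == target) {
                *out_i = i;
                *out_j = j;
                goto found;
            }
    return 0;            /* not found */
found:
    return 1;
}

/* One common restructuring: the nested loops live in their own
 * function, so an early return replaces the goto with no extra
 * flags or branches. */
static int find_pair_return(const int *a, size_t n, int target,
                            size_t *out_i, size_t *out_j)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] + a[j] == target) {
                *out_i = i;
                *out_j = j;
                return 1;
            }
    return 0;
}
```

Both versions do the same work in the same order; whether the second is an honest improvement or just a relabeled goto is exactly the argument above.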

You didn't seem to know why a market for Ada programmers still exists, i.e.,
"...  so there has to be some market ..."  Maintaining Gov't software is one
likely reason, e.g., 100-year code lifetime...

Oh, no, I understand why the market exists. I'm not sure why you thought I
meant that.

The reason MVS is so nice for assembler developers is the OS is written in
assembler and all the system interface is in assembler.

I hope you are aware that some detractors of C consider it a glorified assembler.

There are many OS projects for x86 written in assembler.  Usually, the
assembler-based OSes for x86 don't seem to become as complete or as
professional as OSes coded in C.  They get the basics down and then stall.
That could be due to complexity or that could be due to a lack of (skilled)
x86 assembly programmers.  I can only speculate.  E.g., here is one example
of an OS in C that was written from scratch:

Ok but it doesn't matter. I'm talking about the most widely used,
longest-lived influential OS of all time, it's not a toy OS, it's in
production in tens of thousands of companies worldwide. It's more
professional than any UNIX or LINUX or Windows will ever live to be. It has
complete, professional documentation sets, real error messages, real
recovery, real resource management. It's the highest quality OS and
programming environment you will never see. But you can see the doc if you
want, it's online.

That's a boatload of opinion you have there. Is there any scientific
work being done with MVS/zSystem? When DARPA farmed out development of
ARPANet, where were zSys/MVS? And, of course, what about one of
Forth's earliest applications, Radio Telescope Astronomy? What did
Lorenz discover his strange attractor on? Can zSys/MVS sequence
the human, or, for that matter, _any_ genome?

I opine that you have a tremendous amount of worship for what
is essentially an overgrown Data Processing System, a kind of
Incredible Hulk of a spreadsheet/database/timesharing system.

Spreadsheets? Do you even know what you're talking about?

I mismatched metaphors there, perhaps.
An overgrown spreadsheet.
I am curious as to why you decline to opine on any appearances of
MVS / zSys at places / times of notable innovation and
paradigm-shifting discovery.

Quarterly reports of sales of the McFlurry do not interest me.
Please, enlighten me with something tremendous- control
systems at CERN, digital imaging of the surface of Mars,
adaptive learning networks....anything of real value to humanity?

Your bank account. Boring, yes, but of real value if you have one. Or
work on nuclear power simulations. Boring, yes, but of real value if
you happen to own one. Or large scale weather simulations. Is there
weather where you are? One of the real strengths of an IBM mainframe
system is the amount of data it can move around. Cray systems were
often fed IO by IBM mainframes; they were the only processors that
could keep up.

Towards the end there, you expose what I consider a mainframe's
strength to be: I/O. So, it seems there is something that we can agree on.
Nuclear simulations? A quick search turns up US D.o.E. / ANL, so
I'll eat crow on that one.

But let's look at performance:
( )
"A 1740-group consistent P1 homogeneous twelve-isotope problem with 27
broad groups requires about 6.5 minutes of CPU time on an IBM 370/195.
The same problem requires approximately 30% less time on the CDC 7600
and approximately 50% less time on the RS6000 and the SS20 SUN


You want to model atomic particles and nuclear forces? Ok, use
an IBM solution. You want to manipulate those forces...check
out what CERN is using....

Yes, there is weather where I am. Any source for statistics on
accurate predictions? I'm prepared to be pleasantly surprised, but my gut
tells me they wouldn't be much better than the Farmer's Almanac.

You can do that without libc.  Novices just don't understand how.  And, it's
not as flexible or guaranteed to be portable.  Yes, there are some trivial
language design mistakes with regard to memory allocation without libc,

UNIX delegated application memory management to libc. It's a chickenshit
"solution". How can you do malloc and free without libc? AFAIK you
can't. You can move the program break, but it doesn't free memory after you
move it repeatedly, at least according to what I read. The more I learn about
UNIX and LINUX the more I realize I was right to avoid learning them all
these years; they're true crap.

Subjective. Though your pet may be able to track the sales of smartphones;
a lot of them are running Android, which is Linux with some chrome....

If I am wrong, please tell me how I can dynamically allocate and free
storage for structures in assembly, or even discrete variables. I heard you
have to write your own memory manager in UNIX/LINUX if you don't use libc.

Yes. A trivial programming exercise. So, tell yourself how you
allocate and free storage structures in assembly: you write your own,
using the s/brk system call, easily done in Linux; other *nixes, YMMV.

<snipped stuff about headers>


That's a whole boatload of opinion there.

It's my experience one tends to get what one gives.
Don't be too proud of this technological terror that IBM
has constructed; the power to issue my bank statement
is insignificant next to the power of smashing hadrons