Re: NadaNet 3.1 Released
- From: mdj <mdj.mdj@xxxxxxxxx>
- Date: Tue, 1 Jun 2010 23:05:57 -0700 (PDT)
On Jun 2, 2:27 pm, "Michael J. Mahon" <mjma...@xxxxxxx> wrote:
I just noticed that a MacBook sells for twice what a comparably
configured PC notebook sells for--amazing! (I'm sure that some will
point out some wonderful luxury details of the MBP that are not in
the generic laptop, but none of those are meaningful to me if I'm
trying to write a paper, prepare a presentation, edit a photo, or
browse the web.)
"Apples to Apples" the difference seems to be about 40% at the lower
end, swells a little in the mid range, then drops to very close at the
high end. Of course, each of Apple's offerings is a premium product in
its market segment, and if you make the comparison point, say, "15 inch
laptop", you can certainly beat their pricing by 50% or more.
I have used Sony (high end), HP (mid-range), and Toshiba (mid-range)
laptops over the years. The only difference I see between a 13" MBP
and a Toshiba 15" is that the MBP has an aluminum case (nice) and a
smaller screen (OK). Resolutions are comparable, both 4GB, the MBP
is 250GB while the Toshiba is 320GB, both have lots of ports and
networking, both have a 2.4GHz Core 2 Duo. Any difference in graphics
is irrelevant to me. The MBP is $1200 and the Toshiba is $475 at Fry's.
Right. I was going off RRP, so I suspect I'd find higher differences
if I shopped around. The proportional differences may not be the same
over here either, since the shipping volumes for all players are
smaller (22m vs 300m is quite a difference).
If I drop to a MacBook, losing the aluminum case and the backlit
keyboard (which I don't care about at all), I get closer to a 2:1 price ratio.
Mine often follows me to bed, so I find the backlit keyboard to be not
entirely useless.
I hasten to point out that I don't see any justification (for me)
for anything "higher end" than the Toshiba, whether PC or Mac, so I
may be blind to features that "high-enders" consider wonderful. ;-)
I do like big disks, but I find that $80 will get me a 500GB notebook
SATA/300 drive at Fry's, so I don't expect to pay much for it. ;-)
I see what you mean. As a long time Unix user, I found the Mac
compelling as a laptop since it offered me the best of both worlds -
a high degree of out-of-box Unix compatibility + MS Office and Adobe.
Prior to 2008 I never bothered owning a laptop, preferring my
computing 'tethered', but the 4-5 year lifespan made the cost equation
pretty compelling, even when considering the Mac price difference.
$1500 over 4 years is cheap enough for me, and the machine will
probably have resale value to boot.
I'm a bit overdue for a new desktop system (my main machine is still
a 3GHz Pentium 4 ;-), and I lean toward building a quad core machine
in the $700 range, which I expect will hold me for another 5 years.
Likewise. Mine's a little newer (a dual core AMD) but it'll probably
get the rebuild treatment as I want to consolidate desktop computing
to a single host - DVI and USB over ethernet means I can have a 'head'
at my desk, and at the TV for media, and bury the box in the cupboard
with the network kit.
Given how many people use computers for just the small list of
utilitarian tasks you mention, it will be interesting to see how
successful the iPad is at capturing that market. It certainly closes
the cost gap, and the "closed + less flexible" perspective can also be
a "simpler + more reliable" perspective. My mother for instance barely
touches her laptop, but is literally drooling over the iPad. Her
eyesight is failing her and the "pinch zoom scroll" touch interface
for reading books is a *huge* drawcard.
I was reading my Kindle on a train recently, and a woman sitting next
to me asked about *exactly* that feature! Enlarged type clearly meets
a real user need.
My prediction is that iPad-alike devices will destroy the market for
cheap generic laptops over time, and if Apple capture even 1/4 of that
market, they'll be a lot better positioned than they are now.
Well, my laptop is my "desktop replacement" for months at a time,
including video encoding, so I doubt that an iPad will be more than
an "access device" to more computing horsepower. Still, I'm looking
forward to the next version. (One of "Mahon's Laws" is: Never get
version 1.x or version n.0 of anything. ;-) (I should really credit
Pope: "Be not the first by which the new is tried/nor yet the last
to lay the old aside.")
:-) My setup is similar, except I travel less often at the moment. I
suspect a next-generation iPad will breed a few more ARM cores and make
encoding bearable, then power them all off afterward.
It's fun to solve problems like this, but mainly because it *isn't*
easy! As a result, it's probably not the best way for a NadaNet
programmer to get their feet wet. ;-)

In my university days I tutored concurrent programming. We had the
students build a virtual 'widget factory' in which conveyors and
processing points were represented as message-oriented queues and
threads. It was a fun, contrived example, and students quickly
discovered that doing a 'fancy' real-time display of the factory
presented a more difficult problem than the simulation itself ;-)

The Message Server displays an animated histogram of the number of
messages in each of the first 23 queues. This provides a nice
graphical display of the state of the computation (though the queues
are identified only by their hex number, so you have to remember
what each means ;-).
The RATRACE demo, in which 75 messages are passed around to random
recipients until they have each been passed 255 times, puts on quite
a show of randomly bouncing queue depths, until they all drain out at
the end.
If concurrency is implemented by message passing, then queue depth
makes a bottleneck quite obvious.
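That effect is easy to demonstrate. A toy Python model of RATRACE-style random message passing follows; the queue count, message count, and the deliberately "slow" queue are invented for illustration and are not NadaNet parameters:

```python
import random
from collections import deque

# Toy model of RATRACE-style message passing: messages bounce between
# queues until each has been forwarded HOPS times. One queue services
# messages at half speed, so its depth visibly balloons -- exactly the
# bottleneck signature that queue-depth displays make obvious.
NUM_QUEUES = 8
HOPS = 50          # forwards per message before it drains out
SLOW_QUEUE = 3     # this queue only gets serviced every other tick

queues = [deque() for _ in range(NUM_QUEUES)]
for _ in range(20):                       # seed 20 messages, 0 hops so far
    queues[random.randrange(NUM_QUEUES)].append(0)

tick = 0
while any(queues):
    tick += 1
    for i, q in enumerate(queues):
        if not q:
            continue
        if i == SLOW_QUEUE and tick % 2:  # slow server skips odd ticks
            continue
        hops = q.popleft()
        if hops + 1 < HOPS:               # forward to a random recipient
            queues[random.randrange(NUM_QUEUES)].append(hops + 1)
    if tick % 25 == 0:                    # crude text "histogram" of depths
        print(f"t={tick:4d} " + " ".join(f"{len(q):3d}" for q in queues))

print(f"all queues drained after {tick} ticks")
```

Watching the depth column for the slow queue grow while the others stay shallow is the message-passing equivalent of spotting a hot lock.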
Absolutely. It's my favourite way of implementing parallelism for
exactly that reason, and it's proven itself over time to be scalable
as long as you find the right intercept point for the queue :-)
It certainly makes it clear where faster or more servers are useful.
Yes, and in professional development being able to scale out
horizontally at the queue points is a boon; boxes are cheaper than
programmer time.
Sending an assembly or compilation to another machine raises the issue
of a common file system. Although my File Server provides stateless
access, it does not support OPEN..(READ | WRITE)..CLOSE file access,
nor does it support sector- or block-level access. It handles the
subset of ProDOS commands that do not require keeping state between
operations, for which the atomicity of command execution is sufficient
to ensure consistency.

As a result, it is well suited to supporting programs written to use
BLOADs and BSAVEs for disk I/O (which are quite sufficient), but it does
not support programs that depend on the existing DOS or ProDOS command
set beyond the stateless ones.

If the text editor (B)loads the file into memory and (B)saves it back
to disk, then it could easily be modified to do so using remote File
Server(s), which could then host the assembler(s) and/or compiler(s)
on its/their local file systems. It would be a File Server that also
was an Assembler Server, for example.

Right. I was thinking more of an esoteric approach, where the text
editor would send 'delta' information to the message server, which an
assembler agent then subscribed to. In this way, the assembler agent
could incrementally 'assemble' each delta against the already accrued
symbol table, and send messages back which the editor could then use
to provide edit-time notification of errors/warnings.

Note that a 'delta' could be a single line, a copy/paste chunk, or an
entire file that's just been loaded. A simple command could switch you
to the monitor, where your assembled work is quickly shipped to the
correct locations and your source shipped 'elsewhere' on the NadaNet
for quick recall when you're done testing.

Wow! A fully incremental assembler would be quite interesting...
The 'challenge' is in keeping and then storing assembly metadata
(symbols *and* their references) in a structure suitable for fast
lookup, such that a delete/insert's ramifications can be quickly
calculated. I admit, though, this is an ambitious undertaking and
there are a few different ways to skin the cat ...
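One plausible shape for that metadata can be sketched in a few lines of Python; the class, its methods, and the line numbers are hypothetical, not an existing tool:

```python
from collections import defaultdict

# Hypothetical metadata store for an incremental assembler: each symbol
# maps to the line that defines it, plus a reverse index of the lines
# that reference it. When a delta moves or removes a definition, only
# the referencing lines need re-assembly -- not the whole source.
class SymbolIndex:
    def __init__(self):
        self.defined_at = {}                   # symbol -> defining line
        self.referenced_at = defaultdict(set)  # symbol -> {referencing lines}

    def define(self, symbol, line):
        self.defined_at[symbol] = line

    def reference(self, symbol, line):
        self.referenced_at[symbol].add(line)

    def dirty_lines(self, symbol):
        """Lines whose generated code may change if `symbol` moves."""
        return sorted(self.referenced_at.get(symbol, ()))

idx = SymbolIndex()
idx.define("LOOP", 10)
idx.reference("LOOP", 14)   # e.g. a BNE LOOP
idx.reference("LOOP", 22)   # e.g. a JMP LOOP

# An insertion before line 10 moves LOOP; only lines 14 and 22 need
# their operands patched against the updated symbol table.
print(idx.dirty_lines("LOOP"))   # -> [14, 22]
```

The reverse index is what makes a delete/insert's ramifications cheap to compute: it turns "scan the whole source for uses" into a set lookup.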
Because of the need to keep the cumulative location counter, it would
be necessary not only to move the code in memory to handle insertions
and deletions, but also to "run" the location counter, fixing up all
the addresses that follow.
:-) Sounds fun, doesn't it? Keeping the message passing between the
edit box and the build box lightweight enough to maintain full
interactivity is the other big challenge, but once working I suspect
it would be a very useful tool. I can't see a way to do it without a
dedicated message server, but that's not necessarily a problem.
I'm not sure what communication *back* from the assembler you wanted,
besides errors and the ability to run the code, but I don't see much
chance of communication being a bottleneck. The incremental assembler
will have *lots* to do for each message received.
I meant more ensuring each message is "byte-sized" enough to avoid
lagging the interactivity of the text editor appreciably; that'd be a
show-stopper for me as a 'user'.
Many years ago, I was involved in a fully incremental Algol compiler,
developed as a classroom exercise. The data structures required to
handle all textual changes interactively were quite elaborate. Consider
the effect of editing a BEGIN..END pair to alter static scoping!
That must've been fun! An assembler is much easier to do, since
scoping isn't really a problem; just changing label references
(forwards) and branch labels is enough to get something useful.
There's still the effect of removing a global label and thus
merging two local label scopes, at least momentarily.
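That momentary merge is easy to model. Here's a sketch assuming Merlin-style ':n' local labels scoped between global labels; the label syntax and the `scopes` helper are illustrative, not any particular assembler's API:

```python
# Toy model of Merlin-style local-label scoping: local labels (":n")
# need only be unique between two global labels. Deleting a global
# label merges its scope into the previous one, which can create
# duplicate local labels that an incremental assembler must detect.
def scopes(lines):
    """Partition source labels into local-label scopes keyed by global label."""
    result, current = {}, "(top)"
    result[current] = []
    for label in lines:
        if label.startswith(":"):
            result[current].append(label)
        else:                       # a global label opens a new scope
            current = label
            result[current] = []
    return result

src = ["FOO", ":1", ":2", "BAR", ":1"]
print(scopes(src))      # {'(top)': [], 'FOO': [':1', ':2'], 'BAR': [':1']}

del src[3]              # delete the global label BAR ...
merged = scopes(src)
print(merged["FOO"])    # [':1', ':2', ':1'] -- a duplicate ':1' appears!
```

The duplicate ':1' is exactly the transient inconsistency described above: harmless if the user is mid-edit, an error if assembled as-is.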
This brings up one of the major issues with a "fully" incremental
anything: frequently a desired change between two consistent states
is made in multiple edits. In such cases, it is fruitless to try to
make sense of the partially edited source. One should always wait to
"do the assembly" when one expects to have a consistent source (lest
one be buried in spurious error messages ;-).
The main trick is not to bother the user while they're busy :-)
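A common way to implement "don't bother the user" is a quiet-period debounce: only assemble once the editor has been idle for a while. A minimal sketch, with the class and threshold purely illustrative:

```python
import time

# Minimal debounce sketch: only kick off a (re)assembly after the
# editor has been quiet for `quiet` seconds, so rapid keystrokes in an
# inconsistent intermediate state never trigger a build (and never
# bury the user in spurious error messages).
class Debouncer:
    def __init__(self, quiet=0.25):
        self.quiet = quiet       # idle seconds before source is "settled"
        self.last_edit = None

    def edit(self, now=None):
        """Record a keystroke/delta arrival."""
        self.last_edit = time.monotonic() if now is None else now

    def should_assemble(self, now=None):
        if self.last_edit is None:
            return False
        now = time.monotonic() if now is None else now
        return now - self.last_edit >= self.quiet

d = Debouncer()
d.edit(now=0.0)
print(d.should_assemble(now=0.1))   # user still typing: don't assemble
print(d.should_assemble(now=0.5))   # source has settled: go ahead
```

The explicit `now` parameter is just to make the sketch testable; a real agent would poll the clock on each message-server tick.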
Of course, adding macros to the mix will make it a lot more complicated.
Right--if they are pure textual macros. If they are "structural"
macros, then they actually allow factoring out some of the work.
Frankly, I think I'd still lean toward the brute-force approach of
just re-assembling at each "consistent" point--as declared by the
user. (Kind of like the way we say "assemble" now. ;-)
Well, a compromise between the two could be the assembler machine
'guessing' when the consistent points are likely and pre-empting the
user, but any of this would be a productivity improvement IMO.
Many developers of interactive tools simply take the brute-force
approach afforded by large memories and fast processors, and just
recompile the whole unit after each edit! It's much trickier to
try to do the least amount of work. ;-)
This is why I still love the Apple II. Even with 1MB of RAM and 4MHz
of speed in my IIe, it does not permit me to 'cheat' ;-)
I love that, too. Resource-constrained programming doesn't allow for
much sloppiness, or "merely pretty" abstractions that take their toll
in instructions executed.
Especially when you consider most high level languages are "merely
pretty" abstractions ;-)
Exactly. Block locks, anyone?

A Print Server could be similarly implemented, assuming that it was
only necessary to print disk files. (Of course, many Apple II apps
are not structured to place a disk file between the app and its printer,
so that could also be an issue for existing apps.)

You raise an interesting point--I have one //e that is configured for
general use: editing, software development, etc., with a CFFA. I have
so far not set up another capacious machine as a slave to run larger
apps, like assembly and compilation. This sounds like it could be fun!

It will require modifying an editor to do its file I/O using the File
Server, but that shouldn't be hard if it is reasonably structured...

I don't think modifying Merlin 8 to do this would be that tricky...

The more generic solution, providing patches to ProDOS and DOS to access
files stored on a remote machine, raises several issues (multi-machine
state, file and file system sharing, etc.) that I have chosen not to
address. Perhaps someone more comfortable with the internals of these
OSs would like to try for something more general?

I would like to see this happen. I do worry, though, given the
synchronous nature of I/O on an Apple II, that such a feature might
quickly become too powerful to be useful, if you know what I mean...
The fact that any change to any file causes updates to the global file
system data structures leads to high-traffic points of contention.
You'd probably need to decompose large read/write calls at the server
end into smaller requests to maintain a degree of (illusory)
concurrency, and use the message server to 'proxy' interaction
requests. Perhaps with locally buffered writes handled by an interrupt
handler you could maintain a degree of performance, but it's one hell
of a complex project.
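The decomposition step itself is straightforward; a sketch, where the chunk size, addresses, and function name are assumptions for illustration rather than anything NadaNet defines:

```python
# Sketch of decomposing one large transfer into fixed-size requests so
# a single-tasking server can interleave other clients between chunks.
# CHUNK and chunk_requests are illustrative, not NadaNet API.
CHUNK = 256  # bytes per request -- small enough to keep latency bounded

def chunk_requests(addr, data, chunk=CHUNK):
    """Split one big write into (address, payload) request tuples."""
    return [(addr + off, data[off:off + chunk])
            for off in range(0, len(data), chunk)]

payload = bytes(1000)
reqs = chunk_requests(0x2000, payload)
print(len(reqs))                   # 4 requests
print([len(p) for _, p in reqs])   # [256, 256, 256, 232]
```

Between any two of those requests the server is free to service another client, which is where the illusion of concurrency comes from.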
I generally prefer doing the 10% of the work that gets 90% of the
benefit--like the current File Server. ;-)
In my career, I live and die by the 80/20 rule. I consider 90/10 a
'bonus' for both sides when it happens ;-)
Performance or functionality problems that reveal themselves further
down the road, in the context of a real problem, can then be addressed
with real data to drive the design.
Which is why "premature optimisation is the root of all evil" ;-)