Re: 32-bit vs. 64-bit x86 Speed
- From: haberg@xxxxxxxxxx (Hans Aberg)
- Date: 27 Apr 2007 11:30:05 -0400
johnl@xxxxxxxx (John Levine) wrote:
[In 32 bit mode, an Athlon 64 is basically a Pentium. He's saying that
these chips run legacy 32 bit code faster than 32 bit chips do, which is
not all that surprising if they can use the wide data paths for things
like code fetches. -John]
Some stuff I recalled after my post:
It was mentioned in the Apple Developers video "State of the Union"
that compiling for 64 bits is not always faster. Another source said
that compiling 32-bit code for 64-bit machines did not grow the code
much, ints remaining 32-bit, which John Levine says is atypical for
64-bit models [private communication]; perhaps Apple has decided to
avoid compatibility problems between 32- and 64-bit code.
The techniques mentioned in that video to gain performance on their
personal computers were, in hardware:
H1. More parallel CPUs, since pushing CPU frequency no longer gains as much.
H2. More RAM.
H3. More powerful graphics cards.
As for hard drives, the upcoming Mac OS X, version 10.5, announced for
release in October, contains experimental ZFS support:
This will admit, for all practical purposes, unlimited secondary
memory as hard drives come along. Though (in reply to a question) the
current Mac 64-bit models have a 16 GB primary memory (RAM) limit, the
video mentioned above discussed how performance of certain software
could be sped up by using 8 GB. So I would think that when new CPU
models come forth from Intel, there will be a much higher RAM limit.
Intel announced a chip with 32 CPUs which can be shut down to conserve
energy, so many CPUs seem to be the push on the personal computer side
now (the maximum for Macs is 8).
As for software, H1 calls for better concurrent computer languages,
since traditional threading by hand is tricky. Apple has introduced
some technique for it, which unfortunately I do not recall exactly
:-); some hierarchical model (as in environments) perhaps. (From the
POSIX/UNIX standardization list, I recall that one decided to give
C/C++ semantics in order to be able to properly handle atomic code,
since optimization can otherwise disturb it.) As for H3, graphics
cards are now more like computers and can handle more advanced
programming; I am not sure how this will affect computer languages.
Perhaps just more high-level commands to generate graphical effects.
One technique mentioned in the video above, though, was not to store
intermediate graphical results, but to recompute them on the fly as
needed.