Re: Code density and performance?
- From: nmm1@xxxxxxxxxxxxx (Nick Maclaren)
- Date: 2 Aug 2005 14:19:09 GMT
In article <3l9cu4F11ao21U1@xxxxxxxxxxxxxx>,
Jan Vorbrüggen <jvorbrueggen-not@xxxxxxxxxxx> writes:
|> > But what matters most is to reduce the number of such transfers,
|> > not the cost per byte transferred;
We are all agreed there!
|> Of course,
|> large page sizes are only a small aspect of that - likely having "small"
|> page sizes but a very large pagein cluster factor would be even better,
|> because that would allow more reasonable working set and in-memory
|> page list (both modified and free) management later on.
Yes. This is what IBM did with MVS, after a brief and unsatisfactory
experience with traditional paging. Paging was effectively used just
to adjust the working set by dropping unused pages, and tasks were
swapped in (almost) their entirety.
My current proposal is more radical, and is to drop the paging
requirement altogether, so that TLB misses could be abolished.
One doesn't get much architectural simplification by reducing them,
but one gets a lot by abolishing them! Naturally, this would need
some changes to the system and linker design.