Re: Code density and performance?
- From: Anne & Lynn Wheeler <lynn@xxxxxxxxxx>
- Date: Tue, 02 Aug 2005 08:56:26 -0600
nmm1@xxxxxxxxxxxxx (Nick Maclaren) writes:
> Yes. This is what IBM did with MVS, after a brief and
> unsatisfactory experience with traditional paging. Paging was
> effectively used just to adjust the working set by dropping unused
> pages, and tasks were swapped in (almost) their entirety.
> My current proposal is more radical, and is to drop the paging
> requirement altogether, so that TLB misses could be abolished. One
> doesn't get much architectural simplification by reducing them, but
> one gets a lot by abolishing them! Naturally, this would need some
> changes to the system and linker design.
MVS had two separate issues .... one was the disk transfer bottleneck
issue ... which was addressed by *big pages* ... see previous
post (with numerous big page references)
http://www.garlic.com/~lynn/2005n.html#17 Code density and performance
it was referred to as swapping ... to distinguish it from 4k-at-a-time
paging .... but it was logically demand paging ... in 40k blocks
.... and the 40k blocks were not necessarily contiguous virtual memory,
having been composed from the virtual pages referenced during the
task's previous stay in real storage.
the other issue ... was that their page replacement algorithm had
numerous deficiencies ... and their page i/o pathlength was something
like 20 times that of vm370 .... so their page-at-a-time operation was
decidedly less efficient.
a simple example ... during the period when virtual memory support was
being added to mvt (for os/vs2 svs ... the precursor to mvs), i made a
number of visits to pok to talk about virtual memory and page
replacement algorithms. they had done some simulation and decided
(despite strong objections to the contrary) that it was more efficient
to select a non-changed page for replacement (before selecting a
changed page) ... since they could avoid
writing a non-changed page out to disk (since the previous copy was
still good). this had a strong bias to replacing code pages before
data pages. it turned out that it also had a strong bias to replacing
shared, high-use library code pages before replacing private,
lower-use data pages. this implementation extended well into the mvs
time-frame ... finally getting redone about the time of the changes
for big-pages. in any case, the various inefficiencies in the native
mvs paging implementation (at least prior to the introduction of big
page support) drastically degraded mvs thruput anytime it started
doing any substantial paging at all.
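the bias described above can be illustrated with a toy simulation (my
own construction, not the actual svs/mvs code ... all the parameters
here are made up for illustration): "code" pages stay clean, "data"
pages are dirtied on reference, and the clean-first policy evicts any
clean page before any dirty one ... so high-use shared code pages get
pushed out before low-use private data pages, driving the fault count
way up even though it avoids write i/o:

```python
import random

random.seed(1)

N_FRAMES = 8  # toy real-storage size

# reference string: code pages 0-3 are hot and clean,
# data pages 100-115 are cold and get dirtied when touched
refs = []
for _ in range(2000):
    if random.random() < 0.7:
        refs.append(("code", random.randrange(4)))
    else:
        refs.append(("data", 100 + random.randrange(16)))

def simulate(clean_first):
    frames = {}           # page -> (dirty, last_use_time)
    faults = writes = 0
    for t, key in enumerate(refs):
        kind, _page = key
        if key in frames:
            dirty, _ = frames[key]
            frames[key] = (dirty or kind == "data", t)
            continue
        faults += 1
        if len(frames) >= N_FRAMES:
            pool = list(frames)
            if clean_first:
                # prefer a non-changed page, to avoid the write-out
                clean = [k for k, (d, _) in frames.items() if not d]
                if clean:
                    pool = clean
            victim = min(pool, key=lambda k: frames[k][1])  # LRU in pool
            if frames[victim][0]:
                writes += 1   # changed page must be written to disk
            del frames[victim]
        frames[key] = (kind == "data", t)
    return faults, writes

print("clean-first LRU (faults, writes):", simulate(True))
print("plain LRU       (faults, writes):", simulate(False))
```

with this workload the clean-first variant does far fewer page writes
but takes several times as many page faults ... the hot, shared, clean
code pages keep getting selected as victims.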
one of the other characteristics of big-pages ... was that it
eliminated the concept of home location on disk for a virtual
page. when a big page was fetched into memory ... the related disk
location was immediately discarded. when components of big page were
selected for replacement ... they always had to be (re)written back to
disk (whether they had been changed during the most recent stay in
storage or not). the whole process was designed to optimize disk arm
usage/motion. it used a sort of moving-cursor algorithm that tracked
the current active location/locality of the disk arm. the
recommendation was to make available space for big-page allocation
typically ten times larger than expected usage. this tended to keep
the area in front of the
moving cursor nearly empty ... so new writes would require the minimum
arm motion ... and the densest allocation of pages (that might page
fault and require fetching) was just behind the current moving cursor.
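a toy model of that moving-cursor idea (again my own sketch, with
made-up sizes ... a 1000-slot paging area at roughly 10x the expected
100 resident big pages): writes land in the first free slot at or
after the cursor, a fetch discards the disk copy (no home location),
and total arm travel is compared against scattering writes into random
free slots:

```python
import random

SLOTS = 1000    # paging area ~10x expected usage
TARGET = 100    # expected number of big pages out on disk
STEPS = 4000

def total_seek(moving_cursor):
    random.seed(2)
    disk = [False] * SLOTS   # slot occupied?
    live = []                # slots currently holding a big page
    arm = cursor = seek = 0

    def move_to(slot):
        nonlocal arm, seek
        seek += abs(slot - arm)   # arm travel for this access
        arm = slot

    def write():
        nonlocal cursor
        if moving_cursor:
            # next free slot at/after the cursor (area ahead is near-empty)
            while disk[cursor % SLOTS]:
                cursor += 1
            slot = cursor % SLOTS
            cursor += 1
        else:
            slot = random.choice([i for i in range(SLOTS) if not disk[i]])
        disk[slot] = True
        live.append(slot)
        move_to(slot)

    def fetch():
        slot = live.pop(random.randrange(len(live)))
        disk[slot] = False   # disk location discarded on fetch
        move_to(slot)

    for _ in range(TARGET):       # warm up to steady state
        write()
    for _ in range(STEPS):        # alternate write-out and fetch
        write()
        fetch()
    return seek / (TARGET + 2 * STEPS)   # average seek per access

print("moving cursor, avg seek:", total_seek(True))
print("random slots,  avg seek:", total_seek(False))
```

the densest population of live pages ends up just behind the cursor
(they were written recently), so both the writes and the likely
fetches stay close to the arm's current position.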
big pages tended to increase the number of bytes transferred ... more
writes since there was no conservation of non-changed replaced pages
using their previous disk location ... and fetches always brought the
full 40k in one blast ... even if a 4k-at-a-time strategy might have
only brought in 24k-32k bytes. the issue was that it traded away real
storage utilization optimization and bytes-transferred optimization
in exchange for disk arm access optimization (drastically decreasing
the number of arm accesses per page transferred).
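the arithmetic of that trade-off, with an assumed (illustrative) task
that actually touches 7 of the 10 pages in a 40k big page:

```python
PAGE = 4 * 1024        # 4k page
BIG = 10 * PAGE        # 40k big page

touched = 7            # assumed: pages the task actually references

# 4k-at-a-time demand paging: one arm access per touched page
accesses_4k = touched
bytes_4k = touched * PAGE

# big-page fetch: one arm access, full 40k in one blast
accesses_big = 1
bytes_big = BIG

print("extra bytes moved by big page:", bytes_big - bytes_4k)
print("arm-access reduction factor:  ", accesses_4k / accesses_big)
```

so here the big page moves an extra 12k bytes (and every replaced page
is rewritten, changed or not, since there is no home copy), but cuts
the arm accesses for the fetch from 7 to 1.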
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/