Re: A real package manager in action



In article <wfE*ONfbr@xxxxxxxxxxxxxxxxxxxxxxxxxxx>,
Theo Markettos <theom+news@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Keith Hopper <kh@xxxxxxxxxxxxx> wrote:
May I offer a few questions which come to mind, based on my experience
using some of the available linux offerings of this genre?
[snip]
There needs to be a great deal of thought about the exact difference
between a module and a shared library. In general, a shared library behaves
from the run-time point of view rather like a group of modules - except for
the use of 'system space' as opposed to 'user space'. Is this difference
sufficiently significant to merit independent treatment?

I'm not sure I quite understand... packaging is about saying which packages
depend on which others, like the dependency graphs at:
http://compsoc.dur.ac.uk/~psn/debiangraph/
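To make the idea concrete, here is a rough sketch (in Python, with package
names invented purely for illustration) of what such a dependency graph
looks like to a package manager, and how an install order falls out of it:

    deps = {
        "netsurf":    ["sharedclib", "tinct"],
        "tinct":      ["sharedclib"],
        "sharedclib": [],
    }

    def install_order(deps):
        """Order packages so that dependencies are installed first."""
        order, seen = [], set()
        def visit(pkg):
            if pkg in seen:
                return
            seen.add(pkg)
            for dep in deps.get(pkg, []):
                visit(dep)
            order.append(pkg)
        for pkg in deps:
            visit(pkg)
        return order

    print(install_order(deps))   # ['sharedclib', 'tinct', 'netsurf']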

Forgive me if I am wrong, but while it might be possible to run
different versions of library code at the same time, I believe that this is
not possible with RISC OS modules, as they may need to share common state.
This impinges on the version problem in a big way, I believe.

I don't see how a package (a bunch of files with instructions about where to
put them) necessarily cares about what's inside them. Package management is
about getting the files onto and off your system, not really about what
happens afterwards.

I believe package management is about permitting multiple versions of
'support' software to co-exist and co-execute - which, as I see it, does
impinge fairly heavily on the package manager and the 'installation
mechanism/system' which it models.
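As a rough sketch of what co-existence might mean on disc (the layout and
names here are my own invention, not a proposal), each library version
could be installed under a version-specific directory so that two versions
never overwrite one another:

    def install_path(library, version):
        """Map a (library, version) pair to its own directory.
        RISC OS uses '.' as the path separator, hence the substitution
        in the version string."""
        return f"!System.Libraries.{library}.{version.replace('.', '_')}"

    print(install_path("zlib", "1.2.3"))   # !System.Libraries.zlib.1_2_3
    print(install_path("zlib", "1.3.0"))   # !System.Libraries.zlib.1_3_0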

While upgrading something shared in common to fix bugs is a pretty
reasonable requirement, what happens to those bug fixes which actually do
change the behaviour? I am not, here, talking about enhancements, but about
the buggy behaviour which, when fixed, has changed. How do we reconcile
this with an application which - on the basis of live testing - accepted
the buggy behaviour as correct -- and now finds itself working differently?

You mean that program x.y is assuming the buggy behaviour of library a.b, so
when a.b is upgraded it'll break x.y? That's a bug in x.y, which should be
fixed by shipping a new version of x.y at the same time. I'm not sure it's
fair to freeze use of components because buggy applications might depend on
them... otherwise we'd never upgrade anything. Bugfixes in modules and
libraries should not change their APIs so correctly written programs will
not be affected by upgrades.

The APIs are not really important. What is important are the changes in
semantics which may occur solely as the result of a bug fix!
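The distinction can be sketched as two kinds of dependency declaration (the
helper and version strings below are illustrative only): a correctly
written program declares a compatible range, while a program that has come
to rely on specific - possibly buggy - behaviour effectively needs an
exact pin:

    def satisfies(installed, constraint):
        """Check an installed version against a dependency constraint."""
        kind, wanted = constraint
        if kind == "exact":          # relies on one specific behaviour
            return installed == wanted
        if kind == "compatible":     # any release with the same major number
            return installed.split(".")[0] == wanted.split(".")[0]
        raise ValueError(kind)

    print(satisfies("1.3", ("compatible", "1.2")))   # True  - API unchanged
    print(satisfies("1.3", ("exact", "1.2")))        # False - behaviour pinned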

Before design of the manager, therefore, we have to ask (and answer) - what
is the ineluctable minimum meta-structure which must be cast in concrete
for the package manager to be of use?

In a sense we have at least !Boot, whose structure is well controlled. I
don't think the moving things around problem is insurmountable... as a user
I would quite like the system to know where my files can be found so I can
access them without having to wade down my directory tree by hand.
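One way to picture that is a simple index kept by the package manager,
mapping each package to the files it installed, so the question 'where did
this end up?' can be answered without a manual search (the paths below are
illustrative):

    installed = {
        "sharedclib": ["!System.310.Modules.CLib"],
        "netsurf":    ["Apps.!NetSurf.!Run", "Apps.!NetSurf.!Sprites"],
    }

    def who_owns(leafname):
        """Find which package installed a file with the given leaf name."""
        return [(pkg, path)
                for pkg, paths in installed.items()
                for path in paths
                if path.split(".")[-1].lower() == leafname.lower()]

    print(who_owns("CLib"))   # [('sharedclib', '!System.310.Modules.CLib')]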


The boot structure is well controlled - although it does seem to be
getting more and more complicated as time goes by. Your point about wading
down by hand is well taken. However, in my experience of other packaging
systems on linux platforms, the multi-version problem can lead to a need to
manually revise some filing system links when things go wrong.

If I may digress to give an example - some linux applications check
actual file names to determine version information, some check the
'package' system, yet others rely on rpm versioning. All this can happen
on the one platform - with, occasionally, weird inconsistencies. For
example, I cannot run the latest release of some software because my linux
installations contain older versions of some libraries with which the
newer ones cannot co-exist - for whatever reason.

This is the kind of horror that I would like to see carefully designed
out of a good packaging system - but my suspicion is that if there is any
software which is not 'installed' by the package manager we will land up
with the problems I mention above.

There has been talk of different versions - stable/test/unstable -
etc. Again we have to ask these questions of all forms of dependency - how
are applications installed in one mode to interact with applications stored
in another - which happen to need different versions of, say, some
underlying library? This is one of the bugbears revealed by most linux
package managers which I have come across. Solving this problem in RISC OS
is not going to be easy - I'm not sure that any of the linux systems have a
really convincing answer - not one that I have come across, anyway!

Shared libraries aren't a problem: you can install multiple versions in
parallel. Debian handles the different versions of a library problem by
having packages labelled gtk-1.2 and gtk-2.0 so that you specify which
version you want. If you ask for 'gtk' manually you get the latest, though
software will probably specify the version it wants.
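In other words, roughly (using Theo's package names; the resolver itself is
just a sketch):

    available = ["gtk-1.2", "gtk-2.0"]

    def resolve(request):
        """A fully qualified name is taken as-is; a bare name gets the newest."""
        if request in available:
            return request
        candidates = [p for p in available if p.startswith(request + "-")]
        return max(candidates,
                   key=lambda p: [int(x) for x in p.split("-")[1].split(".")])

    print(resolve("gtk-1.2"))   # gtk-1.2
    print(resolve("gtk"))       # gtk-2.0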

See my comments earlier. There are problems when applications refer to
sub-version numbers under the same identity while expecting different
library versions - it does happen!

The same could be done for modules: packages called sharedclib-castle and
sharedclib-rol; applications could depend on sharedclib, which means either
will do. RISC OS won't support loading both modules at once, so I'm not sure
the package manager can do anything about running applications which specify
opposites... but it's not the package manager's job to run applications,
merely to install them.
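A sketch of that arrangement (again using Theo's names; the 'provides' and
'conflicts' machinery here is only an illustration of the idea, not any
existing tool):

    provides  = {"sharedclib-castle": "sharedclib",
                 "sharedclib-rol":    "sharedclib"}
    conflicts = {("sharedclib-castle", "sharedclib-rol")}

    def satisfied(dependency, installed):
        """A dependency on 'sharedclib' is met by either real package."""
        return any(p == dependency or provides.get(p) == dependency
                   for p in installed)

    def consistent(installed):
        """Both modules cannot be loaded at once, so refuse to install both."""
        return not any((a, b) in conflicts or (b, a) in conflicts
                       for a in installed for b in installed if a != b)

    print(satisfied("sharedclib", {"sharedclib-rol"}))           # True
    print(consistent({"sharedclib-rol", "sharedclib-castle"}))   # False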

No! Not merely! Its task is to install software in such a way that
there will be no execution-time inconsistencies when any combination of
installed packages is operating concurrently.

There will be critical points where a new application needs an updated
dependency not needed by some existing application. How is the user going
to be able to use both at the same time? We have to remember here that not
all updates just fix bugs transparently and/or add new functionality.
Sometimes updates even modify earlier behaviour - without necessarily being
backward compatible. I just don't know the answer here either.

You mean like something needing a Toolbox upgrade whilst an existing Toolbox
application is still running? Again that's an OS problem: today you can
quite happily install a Toolbox upgrade but have to reboot to make it take
effect. The package manager is no different. The same goes for updating
other shared libraries/modules... Today you can manually install them; a
package manager just does it automatically.
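The behaviour being described amounts to a staged upgrade - put the new
files in place, and if the old module is currently loaded, simply note that
a restart (or manual reload) is needed. A minimal sketch, with an invented
in-use check and an illustrative path:

    def upgrade(module, new_files, currently_loaded):
        """Stage the new files; flag a restart if the module is in use."""
        actions = ["copy " + f for f in new_files]
        if module in currently_loaded:
            actions.append("flag: " + module + " in use - restart required")
        return actions

    print(upgrade("Toolbox", ["!System.Modules.Toolbox"], {"Toolbox"}))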

Why on earth would I want to reboot my machine - perhaps acting as a
server - just because I have installed a new version of something? I
believe this should be done on the fly (look at the wonderful - old, I
admit - mechanism of the Burroughs 5700/6800 operating systems, which had
hardware support for just this scenario!)
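For comparison, this is how the 'no reboot' trick is commonly done on a
linux server - install the new version alongside the old one and atomically
repoint a link, so programs already running keep the copy they have open
while newly started ones pick up the replacement (paths invented for
illustration):

    import os

    def switch_current(versions_dir, new_version, link):
        """Atomically repoint 'current' at the newly installed version."""
        tmp = link + ".new"
        os.symlink(os.path.join(versions_dir, new_version), tmp)
        os.replace(tmp, link)    # rename(2) is atomic on POSIX

    # switch_current("/opt/mylib/versions", "1.3.0", "/opt/mylib/current")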

Just a few further thoughts

Keith

--
Sky Development