linux-lvm.redhat.com archive mirror
* [linux-lvm] Q: active/inactive/imported/exported group ?
@ 1999-11-22 11:28 Gerhard Fuernkranz
  1999-11-22 12:36 ` Heinz Mauelshagen
  0 siblings, 1 reply; 8+ messages in thread
From: Gerhard Fuernkranz @ 1999-11-22 11:28 UTC (permalink / raw)
  To: linux-lvm


Recently I've made my first steps with LVM and
I think I really like it. I have just a few questions:

I'm wondering what the actual difference is between
an inactive, an imported, and an exported group?

My impression was that exported groups are ignored
by "vgscan" and therefore cannot be activated later.
Is this really the only difference?

Furthermore I tried to move an inactive but *imported*
group to another host:

   on host A:
   vgchange -an <group>

   move disks to host B
   on host B:
   vgscan
   vgchange -ay <group>

... and it seemed to work.

Is there anything wrong with transporting disks
this way? Can there be any bad side effects
if I do *not* export on host A and then import on host B,
but only deactivate on host A and then scan and activate
the group on host B?

Is there a reason why I have to specify physical disks for
the "vgimport" command? Wouldn't it also be possible to
scan all disks for the group that is to be imported?

And one last question:
When does LVM assign minor device numbers to a volume?
Will the minor number of a volume remain constant as long
as the volume exists in the group or does LVM assign new
minor numbers each time a group or volume gets activated?


Thank you very much in advance,
Gerhard


* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-22 11:28 [linux-lvm] Q: active/inactive/imported/exported group ? Gerhard Fuernkranz
@ 1999-11-22 12:36 ` Heinz Mauelshagen
  1999-11-22 13:47   ` Gerhard Fuernkranz
  0 siblings, 1 reply; 8+ messages in thread
From: Heinz Mauelshagen @ 1999-11-22 12:36 UTC (permalink / raw)
  To: Gerhard Fuernkranz; +Cc: mge, linux-lvm

> 
> 
> Recently I've made my first steps with LVM and
> I think I really like it. I've just a few questions:
> 
> I'm wondering what the actual difference is between
> an inactive, an imported, and an exported group?

Inactive - LVM driver doesn't have table information about a VG
           ("vgchange -ay" activates VG(s))
Imported - Metadata on PV(s) is set up for activation
           ("vgscan" can find these VG(s) and can build
            /etc/lvmtab + /etc/lvmtab.d/*)
Exported - Metadata on PV(s) is set up to move PV(s) to a different system

> 
> My impression was that exported groups are ignored
> by "vgscan" and therefore cannot be activated later.
> Is this really the only difference?

Yes.

> 
> Furthermore I tried to move an inactive but *imported*
> group to another host:
> 
>    on host A:
>    vgchange -an <group>
> 
>    move disks to host B
>    on host B:
>    vgscan
>    vgchange -ay <group>
> 
> ... and it seemed to work.

Yes, it does.
But only if there is no VG with the same name already on that system.
Otherwise you will not be able to scan and activate any of these VGs,
because the VG name is not unique.

> 
> Is there anything wrong with transporting disks
> this way?

If you don't have a VG with the same name on the other system, it's o.k.
for now.
One of my TODO items for the future is still to have UUIDs
(Universally Unique Identifiers) for VGs and for system ownership of a VG.

Until this is implemented, you won't run into trouble.
So far it's a good way to replicate installations 8*)


> Can there be any bad side effects
> if I do *not* export on host A and then import on host B,
> but only deactivate on host A and then scan and activate
> the group on host B?

See above (VG name collision).

> 
> Is there a reason why I have to specify physical disks for
> the "vgimport" command? Wouldn't it also be possible to
> scan all disks for the group that is to be imported?

Yes, that's possible.
But that's an enhancement which still has to be implemented.

A simple sh+grep+sed workaround for today is to do a pvscan, figure out
all PVs belonging to the exported VG, parse the PV names out of that
output, and build a corresponding vgimport command line.
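That workaround might look like this (a sketch; the pvscan output format
below is assumed rather than verbatim, so adjust the grep/sed patterns to
whatever your pvscan actually prints):

```shell
# Two sample pvscan output lines for an exported VG "vg02" (format assumed):
pvscan_out='pvscan -- inactive PV "/dev/sdc1" is in EXPORTED VG "vg02"
pvscan -- inactive PV "/dev/sdd1" is in EXPORTED VG "vg02"'

# Parse the PV names out of the lines mentioning the exported VG:
pvs=$(printf '%s\n' "$pvscan_out" \
      | grep 'EXPORTED VG "vg02"' \
      | sed 's/^.*PV "//;s/".*$//')
echo $pvs
# prints: /dev/sdc1 /dev/sdd1

# With the real disks attached you would then run (as root):
#   vgimport vg02 $pvs
```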

> 
> And one last question:
> When does LVM assign minor device numbers to a volume?

At lvcreate time _and_ at vgscan time.

> Will the minor number of a volume remain constant as long
> as the volume exists in the group or does LVM assign new
> minor numbers each time a group or volume gets activated?

See my previous statement.

> 
> 
> Thank you very much in advance,
> Gerhard
> 

You're welcome.

Regards,
Heinz


--

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Systemmanagement CS-TS                           T-Nova
                                                 Entwicklungszentrum Darmstadt
Heinz Mauelshagen                                Otto-Roehm-Strasse 71c
Senior Systems Engineer                          Postfach 10 05 41
                                                 64205 Darmstadt
mge@ez-darmstadt.telekom.de                      Germany
                                                 +49 6151 886-425
                                                          FAX-386
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-


* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-22 12:36 ` Heinz Mauelshagen
@ 1999-11-22 13:47   ` Gerhard Fuernkranz
  1999-11-22 16:28     ` mauelsha
  0 siblings, 1 reply; 8+ messages in thread
From: Gerhard Fuernkranz @ 1999-11-22 13:47 UTC (permalink / raw)
  To: Heinz Mauelshagen; +Cc: mge, linux-lvm


Heinz,

thanks for the fast answer -
just a few more questions:

Heinz Mauelshagen wrote:
...
> > And one last question:
> > When does LVM assign minor device numbers to a volume?
> 
> At lvcreate time _and_ at vgscan time.

Assume I have a group g1 with volume v1 and a group g2 with
volume v2. Now I turn off the machine, remove g1's physical
disks, and power on the machine again. Does this mean that
vgscan may assign a *different* minor number to v2 after the
reboot?


Concerning my other question (imported/exported/active/inactive),
I should probably explain what I want to do:

I'd like to use LVM in a cluster with multihosted disks.
Each host *can* access all disks, but I'll guarantee (one way or
another) that only one host *does* access a disk/volume/volume group
at a time.

My idea was that all hosts see all groups on the shared disks,
but only *one* host activates a given volume group at a time, while
all other hosts don't activate the same group (but possibly
different ones). If I want to (logically) switch a volume group
to a different host, I deactivate it on the source host,
re-run "vgscan" on the target host, and then activate the group
there.

Do you think this strategy is OK?
I assume that LVM does not write anything to a physical disk
belonging to a deactivated group, correct?
Can I just re-run "vgscan" on a (deactivated) group at any time to
see all changes a different host has made to the group while it was
deactivated on the local host?

I did not want to export/import for two reasons:

- import does not search the disks for the group
- if a host with an activated group crashes, then it
  has no chance to export the group anyway.
  Nevertheless, in this case I'd like to access the group
  from a different host in the cluster.

For operation in a cluster I'd also like that only one host can
activate a group at a time (i.e. the host claims ownership, e.g. by
writing its host name to all disks of the group), so that no other
host can accidentally activate the same group at the same time.
But there should also be a "forced" activation method to override
this safety mechanism (in case the owner crashes).
IMHO something like this is NYI?


Thanks,
Gerhard


* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-22 13:47   ` Gerhard Fuernkranz
@ 1999-11-22 16:28     ` mauelsha
  1999-11-22 18:29       ` Gerhard Fuernkranz
  0 siblings, 1 reply; 8+ messages in thread
From: mauelsha @ 1999-11-22 16:28 UTC (permalink / raw)
  To: Gerhard Fuernkranz; +Cc: mge, linux-lvm

> 
> Heinz,
> 
> thanks for the fast answer -
> just a few more questions:
> 
> Heinz Mauelshagen wrote:
> ...
> > > And one last question:
> > > When does LVM assign minor device numbers to a volume?
> > 
> > At lvcreate time _and_ at vgscan time.
> 
> Assume I have a group g1 with volume v1 and a group g2 with
> volume v2. Now I turn off the machine, remove g1's physical
> disks, and power on the machine again. Does this mean that
> vgscan may assign a *different* minor number to v2 after the
> reboot?

Yes.
But that's only a local resource for sure.

> 
> 
> Concerning my other question (imported/exported/active/inactive),
> I should probably explain what I want to do:
> 
> I'd like to use LVM in a cluster with multihosted disks.
> Each host *can* access all disks, but I'll guarantee (one way or
> another) that only one host *does* access a disk/volume/volume group
> at a time.
> 
> My idea was that all hosts see all groups on the shared disks,
> but only *one* host activates a given volume group at a time, while
> all other hosts don't activate the same group (but possibly
> different ones). If I want to (logically) switch a volume group
> to a different host, I deactivate it on the source host,
> re-run "vgscan" on the target host, and then activate the group
> there.
> 
> Do you think this strategy is OK?

Yes.

> I assume that LVM does not write anything to a physical disk
> belonging to a deactivated group, correct?

Not till now.

> Can I just re-run "vgscan" on a (deactivated) group at any time to
> see all changes a different host has made to the group while it was
> deactivated on the local host?

Yes.

The read/write routines in the LVM library flush the buffer cache
contents of a physical volume to make sure they read current data from
the disk(s) rather than relying on cached data.
So if, as in your example, one host changes a VG, the other one will
read current VG data afterwards when doing a vgscan.


> 
> I did not want to export/import for two reasons:
> 
> - import does not search the disks for the group

Correct.
See my workaround proposal in my previous mail.

If you only have one exported VG you could import it e.g. with:

pvscan|grep EXPORTED|sed 's/^.*PV "//;s/".*$//'|xargs vgimport VGNAME

> - if a host with an activated group crashes, then it
>   has no chance to export the group anyway.
>   Nevertheless, in this case I'd like to access the group
>   from a different host in the cluster.
> 
> For operation in a cluster I'd also like that only one host can
> activate a group at a time (i.e. the host claims ownership, e.g. by
> writing its host name to all disks of the group), so that no other
> host can accidentally activate the same group at the same time.

I want to achieve this by keeping a UUID of the VG-owning system in the VGDA.

IMO we would need a cluster manager to take care of mastership of
shared resources.

> But there should also be a "forced" activation method to override
> this safety mechanism (in case the owner crashes).

But in this case a crashing host is not able to release the VG anyway.
So there must be a --force option to gain ownership of such a VG,
_and_ the sure knowledge that the crashed system is down _and_ stays
down. Otherwise there could be tricky races where the original 'owner'
system of our VG would try to regain ownership.

> IMHO something like this is NYI?

Not today 8*(

> 
> 
> Thanks,
> Gerhard
> 

Cheers,
Heinz




* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-22 16:28     ` mauelsha
@ 1999-11-22 18:29       ` Gerhard Fuernkranz
  1999-11-22 19:43         ` Michael Marxmeier
  1999-11-23 13:56         ` mauelsha
  0 siblings, 2 replies; 8+ messages in thread
From: Gerhard Fuernkranz @ 1999-11-22 18:29 UTC (permalink / raw)
  To: mauelsha; +Cc: mge, linux-lvm

mauelsha@u9etz.ez-darmstadt.telekom.de wrote:

> > Assume I have a group g1 with volume v1 and a group g2 with
> > volume v2. Now I turn off the machine, remove g1's physical
> > disks, and power on the machine again. Does this mean that
> > vgscan may assign a *different* minor number to v2 after the
> > reboot?
> 
> Yes.
> But that's only a local resource for sure.

That's what I was afraid of.
The problem is the following:
The NFS server identifies files by a file handle, which also contains
the major/minor number of the disk where the exported filesystem
resides. Assume there is an NFS server which exports a filesystem
residing on an LVM volume. There also exist NFS clients which have
hard-mounted this filesystem.
If the NFS server crashes and reboots, the clients should hang on NFS
calls while the server is down, but should resume normal operation once
the server is back (without unmounting and re-mounting the FS!). But
this only works if the volumes get the same minor device number after
the reboot, so that the old file handle still references the same file
on the same device.

So I think there's also a requirement for a "sticky" minor device
number for volumes (or at least for a specific subset of volumes).
VxVM has also recognized this and provides a method to assign minor
device numbers to volumes.
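To see whether a minor actually changed across a reboot, stat(1) from
GNU coreutils prints a device node's major/minor directly; /dev/null is
used below only so the example runs anywhere, and the LV path in the
comment is illustrative:

```shell
# Print a device node's major/minor numbers (in hex) with GNU stat.
# /dev/null is major 1, minor 3 on Linux; for an LV you would use e.g.
#   stat -c 'major=%t minor=%T' /dev/vg00/lvol1
stat -c 'major=%t minor=%T' /dev/null
# prints: major=1 minor=3
```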

Do you have an idea how I could integrate such a feature into LVM?
My first idea was the following:

- I have a table, e.g. /etc/lvm_minors, which contains
  {volume_name, minor_number} pairs for all sticky
  volumes

- Change vgscan to do the following:
     ...
     for (all volumes of the group) {
        ...
        if (entry for volume exists in /etc/lvm_minors) {
            if (minor number from lvm_minors already in use
                for a different volume)
            {
                fail, print an error message and skip this volume
            }
            else {
               assign the minor number from /etc/lvm_minors
               to the volume
            }
        }
        else {
            assign a free minor number to this volume
            similar as it currently does
        }
        ...
     }
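The table lookup at the heart of this pseudocode is easy to prototype in
shell. A sketch, using the /etc/lvm_minors format proposed above (a temp
file stands in for it so the example is self-contained):

```shell
# Proposed table format: one "<vg>/<lv> <minor>" pair per line.
tab=$(mktemp)
cat > "$tab" <<'EOF'
vg00/nfs_export 5
vg00/scratch 9
EOF

# Print the sticky minor for a volume; fail if the volume has no entry,
# so the fallback branch (assign any free minor) can kick in.
lookup_minor() {
    awk -v lv="$1" '$1 == lv { print $2; found = 1 } END { exit !found }' "$tab"
}

lookup_minor vg00/nfs_export          # prints: 5
lookup_minor vg00/other || echo none  # prints: none
```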

Any comments?


> I want to achieve this by keeping a UUID of the VG-owning system in the VGDA.

The question is: What is the "owning" system?
Is it the system which has imported the group?
Or the system which activated the group last?

> IMO we would need a cluster manager to take care of mastership of
> shared resources.

I currently don't want to use "real" shared resources which are
configured on more than one host at the same time.

I'll guarantee that only one host at a time will activate a specific
group. If a host crashes, I'll guarantee that the crashed host is
actually down and stays down (of course it may reboot, but it won't
re-activate the volume automatically after the reboot).

...
> But in this case a crashing host is not able to release the VG anyway.
> So there must be a --force option to gain ownership of such a VG,
> _and_ the sure knowledge that the crashed system is down _and_ stays
> down. Otherwise there could be tricky races where the original 'owner'
> system of our VG would try to regain ownership.

Currently I won't care very much about such race conditions, as I'll
only bring the group online on one host at a time automatically.
I'd rather see the ownership only as an additional safety feature, so
that the sysadmin cannot accidentally activate the same group on a
different host at the same time.

Gerhard


* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-22 18:29       ` Gerhard Fuernkranz
@ 1999-11-22 19:43         ` Michael Marxmeier
  1999-11-23 13:56         ` mauelsha
  1 sibling, 0 replies; 8+ messages in thread
From: Michael Marxmeier @ 1999-11-22 19:43 UTC (permalink / raw)
  To: gerhard.fuernkranz; +Cc: mauelsha, mge, linux-lvm

gerhard.fuernkranz@mchp.siemens.de wrote:

> So I think there's also a requirement for a "sticky" minor device
> number for volumes (or at least for a specific subset of volumes).
> VxVM has also recognized this and provides a method to assign minor
> device numbers to volumes.

Your approach with "sticky minors" might be a good idea until we
finally get additional options from bigger major/minor numbers.

> Do you have an idea how I could integrate such a feature into LVM?
> My first idea was the following:
> 
> - I have a table e.g. /etc/lvm_minors, which contains
>   {volume_name, minor_number} pairs for all sticky
>   volumes
> 
> - Change vgscan to do the following:
>      ...

This should be rather easy, as it's almost all done in user space.
Most of the functionality is in liblvm, which can probably be extended
without interfering with the kernel proper.

> > I want to achieve this by keeping a UUID of the VG-owning system in the VGDA.
> 
> The question is: What is the "owning" system?
> Is it the system which has imported the group?
> Or the system which activated the group last?

IMO this is close. If you imported the VG or activated it recently,
another system trying to access the VG had better get in contact with
you. The UUID could serve two purposes: avoiding the possible issues
caused by different SCSI configurations, and also serving as a lock
(more of a hint).

The VG could be marked as shared (accessible from different systems)
and LVM might call some external utility to let it resolve the
situation. If a HA framework exists, LVM could make use of it or the
admin could implement a simple rule in a shell script.
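Such an external utility could be as small as a pre-activation hook.
Everything below is hypothetical: the owner file merely stands in for the
proposed ownership record in the VGDA, and the check is only the "simple
rule in a shell script" idea sketched out:

```shell
# Hypothetical hook: refuse to activate vg02 while another node owns it.
owner_file=$(mktemp)
echo otherhost > "$owner_file"    # pretend another node recorded ownership

check_owner() {
    owner=$(cat "$owner_file" 2>/dev/null)
    if [ -n "$owner" ] && [ "$owner" != "$(hostname)" ]; then
        echo "vg02 is owned by '$owner'; refusing activation" >&2
        return 1                  # a --force path would be needed after fencing
    fi
}

check_owner || echo blocked       # prints: blocked
```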


Michael

--
Michael Marxmeier           Marxmeier Software AG
E-Mail: mike@msede.com      Besenbruchstrasse 9
Phone : +49 202 2431440     42285 Wuppertal, Germany
Fax   : +49 202 2431420     http://www.msede.com/


* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-22 18:29       ` Gerhard Fuernkranz
  1999-11-22 19:43         ` Michael Marxmeier
@ 1999-11-23 13:56         ` mauelsha
  1999-11-23 15:28           ` Gerhard Fuernkranz
  1 sibling, 1 reply; 8+ messages in thread
From: mauelsha @ 1999-11-23 13:56 UTC (permalink / raw)
  To: Gerhard Fuernkranz; +Cc: mge, linux-lvm

> mauelsha@u9etz.ez-darmstadt.telekom.de wrote:
> 
> > > Assume I have a group g1 with volume v1 and a group g2 with
> > > volume v2. Now I turn off the machine, remove g1's physical
> > > disks, and power on the machine again. Does this mean that
> > > vgscan may assign a *different* minor number to v2 after the
> > > reboot?
> > 
> > Yes.
> > But that's only a local resource for sure.
> 
> That's what I was afraid of.
> The problem is the following:
> The NFS server identifies files by a file handle, which also contains
> the major/minor number of the disk where the exported filesystem
> resides. Assume there is an NFS server which exports a filesystem
> residing on an LVM volume. There also exist NFS clients which have
> hard-mounted this filesystem.
> If the NFS server crashes and reboots, the clients should hang on NFS
> calls while the server is down, but should resume normal operation once
> the server is back (without unmounting and re-mounting the FS!). But
> this only works if the volumes get the same minor device number after
> the reboot, so that the old file handle still references the same file
> on the same device.
> 
> So I think there's also a requirement for a "sticky" minor device
> number for volumes (or at least for a specific subset of volumes).
> VxVM has also recognized this and provides a method to assign minor
> device numbers to volumes.

If you don't change your I/O configuration, vgscan will not produce
different LV minors anyway.

> 
> Do you have an idea how I could integrate such a feature into LVM?
> My first idea was the following:
> 
> - I have a table e.g. /etc/lvm_minors, which contains
>   {volume_name, minor_number} pairs for all sticky
>   volumes
> 
> - Change vgscan to do the following:
>      ...
>      for (all volumes of the group) {
>         ...
>         if (entry for volume exists in /etc/lvm_minors) {
>             if (minor number from lvm_minors already in use
>                 for a different volume)
>             {
>                 fail, print an error message and skip this volume
>             }
>             else {
>                assign the minor number from /etc/lvm_minors
>                to the volume
>             }
>         }
>         else {
>             assign a free minor number to this volume
>             similar as it currently does
>         }
>         ...
>      }
> 
> Any comments?

This is an option.

I'd rather let vgscan deal with 'sticky' minors based on existing
lvmtab entries and have it use only free minors for new VGs.

> 
> 
> > I want to achieve this by keeping a UUID of the VG-owning system in the VGDA.
> 
> The question is: What is the "owning" system?
> Is it the system which has imported the group?
> Or the system which activated the group last?
> 
> > IMO we would need a cluster manager to take care of mastership of
> > shared resources.
> 
> I currently don't want to use "real" shared resources which are
> configured on more than one host at the same time.
> 
> I'll guarantee that only one host at a time will activate a specific
> group. If a host crashes, I'll guarantee that the crashed host is
> actually down and stays down (of course it may reboot, but it won't
> re-activate the volume automatically after the reboot).
> 

If you guarantee that, you must already have a cluster manager... 8*)

> ...
> > But in this case a crashing host is not able to release the VG anyway.
> > So there must be a --force option to gain ownership of such a VG,
> > _and_ the sure knowledge that the crashed system is down _and_ stays
> > down. Otherwise there could be tricky races where the original 'owner'
> > system of our VG would try to regain ownership.
> 
> Currently I won't care very much about such race conditions, as I'll
> only bring the group online on one host at a time automatically.
> I'd rather see the ownership only as an additional safety feature, so
> that the sysadmin cannot accidentally activate the same group on a
> different host at the same time.
> 

O.k., UUIDs are future anyway.

Heinz




* Re: [linux-lvm] Q: active/inactive/imported/exported group ?
  1999-11-23 13:56         ` mauelsha
@ 1999-11-23 15:28           ` Gerhard Fuernkranz
  0 siblings, 0 replies; 8+ messages in thread
From: Gerhard Fuernkranz @ 1999-11-23 15:28 UTC (permalink / raw)
  To: mauelsha; +Cc: mge, linux-lvm

mauelsha@u9etz.ez-darmstadt.telekom.de wrote:

> If you don't change your I/O configuration, vgscan will not produce
> different LV minors anyway.

see below.

> > Any comments?
> 
> This is an option.
> 
> I'd rather let vgscan deal with 'sticky' minors based on existing
> lvmtab entries and have it use only free minors for new VGs.

The problem is again the cluster. In addition to shared volumes, I
may also have local volumes on each host. Each host will usually see
the same set of shared volumes, but the sets of each host's local
volumes may differ. And I also want a shared volume to get the same
minor number on every host in the cluster, so that NFS server failover
from one host to another works.

So if I add new (shared) volumes on host A, it is not guaranteed that
the new minor number host A assigns to the volume is not already in
use on host B. With my sticky volume table I'd use the following
procedure to create a new shared volume:

1. find a free minor number (manually)
2. enter it into the table on every host in the cluster (manually)
3. then create the volume: vgcreate on the 1st host / vgscan on all
   other hosts - vgcreate/vgscan will assign the number from the
   sticky table.

Of course my sticky table can also be integrated in lvmtab.
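Step 1 of that procedure can be scripted as well. A sketch, assuming the
in-use minors of all hosts have been collected into one file first (a
hypothetical path, filled here with example data):

```shell
# Minors already in use anywhere in the cluster, one per line:
used=$(mktemp)
printf '0\n1\n2\n5\n' > "$used"

# Lowest minor in 0..255 that no host uses yet (-x: whole-line match,
# -F: fixed strings, -v: invert, -f: patterns from file):
free_minor=$(seq 0 255 | grep -vxF -f "$used" | head -n 1)
echo "$free_minor"    # prints: 3
```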

A different approach could be:

- I've seen in the pvdata output that the volume descriptor
  already contains a major/minor number.

- So the desired minor number could reside in the volume descriptor
  on the disk, together with a sticky flag for the volume.

- Such a sticky volume could e.g. be created with
  "lvcreate ... --sticky=<minor> ..."
  (which only succeeds if the minor number is not already in use -
  either by a currently active volume or by another sticky volume).

- vgscan will
  1. go through the currently active volumes,
  2. then go through the sticky volumes and try to assign exactly
     the sticky minor number,
  3. then go through all other volumes and assign a free minor number.


> If you guarantee that, you must already have a cluster manager... 8*)

Yes.


Thanks,
Gerhard

