linux-kernel.vger.kernel.org archive mirror
* Re: what is necessary for directory hard links
       [not found] ` <6B8og-1iB-17@gated-at.bofh.it>
@ 2006-07-22 16:59   ` Bodo Eggert
  2006-07-22 18:13     ` Joshua Hudson
  2006-07-23  2:19     ` Horst H. von Brand
  0 siblings, 2 replies; 21+ messages in thread
From: Bodo Eggert @ 2006-07-22 16:59 UTC (permalink / raw)
  To: Horst H. von Brand, Joshua Hudson, Linux Kernel Mailing List

Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:

> Joshua Hudson <joshudson@gmail.com> wrote:
>> This patch is the sum total of all that I had to change in the kernel
>> VFS layer to support hard links to directories
> 
> Can't be done, as it creates the possibility of loops.

Don't do that then?

> The "only files can
> be hardlinked" idea makes garbage collection (== deleting of unreachable
> objects) simple: Just check the number of references.
> 
> Detecting unconnected subgraphs uses a /lot/ of memory; and much worse, you
> have to stop (almost) all filesystem activity while doing it.

In order to disconnect a directory, you'd have to empty it first, and after
emptying a directory, it won't be part of a loop. Maybe emptying is the
problem ...


This feature was implemented, and I assume it was removed for a reason.
Can somebody remember?
-- 
I thank GMX for sabotaging the use of my addresses by means of lies
spread via SPF.

http://david.woodhou.se/why-not-spf.html


* Re: what is necessary for directory hard links
  2006-07-22 16:59   ` what is necessary for directory hard links Bodo Eggert
@ 2006-07-22 18:13     ` Joshua Hudson
  2006-07-24  6:45       ` Nikita Danilov
  2006-07-23  2:19     ` Horst H. von Brand
  1 sibling, 1 reply; 21+ messages in thread
From: Joshua Hudson @ 2006-07-22 18:13 UTC (permalink / raw)
  To: linux-kernel

On 7/22/06, Bodo Eggert <7eggert@elstempel.de> wrote:
> Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
>
> > Joshua Hudson <joshudson@gmail.com> wrote:
> >> This patch is the sum total of all that I had to change in the kernel
> >> VFS layer to support hard links to directories
> >
> > Can't be done, as it creates the possibility of loops.
>
> Don't do that then?
Exactly.

> > Detecting unconnected subgraphs uses a /lot/ of memory; and much worse, you
> > have to stop (almost) all filesystem activity while doing it.
I know.

I just decided the price is worth it.

In my filesystem, any attempt to create a loop of hard links
is detected and cancelled. Unlinking a directory requires it
to be empty if the last link is being removed. "." and ".."
links are counted separately from real links, so that is easy.
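
A rough userspace sketch of that bookkeeping (the names and types are
illustrative, not the actual filesystem code):

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct dir_inode {
        unsigned int nlink;     /* real links only; "." and ".." are not counted here */
        size_t nentries;        /* entries in the directory, again excluding "." and ".." */
};

/* Removing a link to a directory is allowed unless it is the last link
 * and the directory still has entries in it. */
static bool may_unlink_dir(const struct dir_inode *d)
{
        if (d->nlink > 1)
                return true;            /* other parents still reference it */
        return d->nentries == 0;        /* last link: directory must be empty */
}

int main(void)
{
        struct dir_inode two_parents = { .nlink = 2, .nentries = 5 };
        struct dir_inode last_nonempty = { .nlink = 1, .nentries = 5 };

        assert(may_unlink_dir(&two_parents));
        assert(!may_unlink_dir(&last_nonempty));
        return 0;
}

The point is only that the last-link case degenerates to the ordinary
rmdir rule.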


* Re: what is necessary for directory hard links
  2006-07-22 16:59   ` what is necessary for directory hard links Bodo Eggert
  2006-07-22 18:13     ` Joshua Hudson
@ 2006-07-23  2:19     ` Horst H. von Brand
  2006-07-23 22:27       ` Vernon Mauery
  2006-07-25 23:43       ` Peter Chubb
  1 sibling, 2 replies; 21+ messages in thread
From: Horst H. von Brand @ 2006-07-23  2:19 UTC (permalink / raw)
  To: 7eggert; +Cc: Horst H. von Brand, Joshua Hudson, Linux Kernel Mailing List

Bodo Eggert <7eggert@elstempel.de> wrote:
> Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
> > Joshua Hudson <joshudson@gmail.com> wrote:
> >> This patch is the sum total of all that I had to change in the kernel
> >> VFS layer to support hard links to directories

> > Can't be done, as it creates the possibility of loops.

> Don't do that then?

Stop /everything/ to make sure no concurrent activity creates a loop, while
checking the current mkdir(2) doesn't create one?

> > The "only files can
> > be hardlinked" idea makes garbage collection (== deleting of unreachable
> > objects) simple: Just check the number of references.
> > 
> > Detecting unconnected subgraphs uses a /lot/ of memory; and much worse, you
> > have to stop (almost) all filesystem activity while doing it.

> In order to disconnect a directory, you'd have to empty it first, and after
> emptying a directory, it won't be part of a loop. Maybe emptying is the
> problem ...

What does "emptying a directory" mean if there might be loops?

> This feature was implemented,

Never in my memory of any Unix (and lookalike) system in real use (I've
seen a few).

>                               and I assume it was removed for a reason.
> Can somebody remember?

See my objections.
-- 
Dr. Horst H. von Brand                   User #22616 counter.li.org
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513


* Re: what is necessary for directory hard links
  2006-07-23  2:19     ` Horst H. von Brand
@ 2006-07-23 22:27       ` Vernon Mauery
  2006-07-25 23:43       ` Peter Chubb
  1 sibling, 0 replies; 21+ messages in thread
From: Vernon Mauery @ 2006-07-23 22:27 UTC (permalink / raw)
  To: Horst H. von Brand, lkml

On Saturday 22 July 2006 19:19, you wrote:
> Bodo Eggert <7eggert@elstempel.de> wrote:
> > Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
> > > Joshua Hudson <joshudson@gmail.com> wrote:
> > >> This patch is the sum total of all that I had to change in the kernel
> > >> VFS layer to support hard links to directories
> > >
> > > Can't be done, as it creates the possibility of loops.
> >
> > Don't do that then?
>
> Stop /everything/ to make sure no concurrent activity creates a loop, while
> checking the current mkdir(2) doesn't create one?

This doesn't seem that big an issue for people to be up in arms about.  You
wouldn't have to stop /everything/, would you?  Just take an in-kernel mutex
in vfs_mkdir.  It's not the most commonly used system call in the book --
meaning that serializing the checking and creation of new directories would
not really hamper your system /that/ much.
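
Something like the following, as a rough userspace sketch (pthreads standing
in for whatever in-kernel mutex would really be used; both helpers are
stand-ins rather than real VFS calls):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t dir_topology_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in: walk the tree and report whether linking src under dst would
 * close a cycle.  A real implementation would do the expensive search here. */
static bool would_create_loop(const char *src, const char *dst)
{
        (void)src; (void)dst;
        return false;
}

/* Stand-in for actually creating the new directory entry. */
static int do_create_link(const char *src, const char *dst)
{
        (void)src; (void)dst;
        return 0;
}

/* Only this path takes the lock; ordinary file operations never touch it. */
int link_directory(const char *src, const char *dst)
{
        int err;

        pthread_mutex_lock(&dir_topology_lock);
        err = would_create_loop(src, dst) ? -1 : do_create_link(src, dst);
        pthread_mutex_unlock(&dir_topology_lock);
        return err;
}

int main(void)
{
        return link_directory("somedir", "elsewhere/name");
}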

Personally, I don't think hard-linked directories are necessary or even that
interesting, but they certainly aren't impossible to do.  I suppose there
might be some specialty filesystem that would like to do hard-linked
directories, and I don't think the VFS core should make that difficult.

--Vernon

> > > The "only files can
> > > be hardlinked" idea makes garbage collection (== deleting of
> > > unreachable objects) simple: Just check the number of references.
> > >
> > > Detecting unconnected subgraphs uses a /lot/ of memory; and much worse,
> > > you have to stop (almost) all filesystem activity while doing it.
> >
> > In order to disconnect a directory, you'd have to empty it first, and
> > after emptying a directory, it won't be part of a loop. Maybe emptying is
> > the problem ...
>
> What does "emptying a directory" mean if there might be loops?
>
> > This feature was implemented,
>
> Never in my memory of any Unix (and lookalike) system in real use (I've
> seen a few).
>
> >                               and I assume it was removed for a reason.
> > Can somebody remember?
>
> See my objections.


* Re: what is necessary for directory hard links
  2006-07-22 18:13     ` Joshua Hudson
@ 2006-07-24  6:45       ` Nikita Danilov
  2006-07-24  7:25         ` Valdis.Kletnieks
  0 siblings, 1 reply; 21+ messages in thread
From: Nikita Danilov @ 2006-07-24  6:45 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: Linux Kernel Mailing List

Joshua Hudson writes:
 > On 7/22/06, Bodo Eggert <7eggert@elstempel.de> wrote:
 > > Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
 > >
 > > > Joshua Hudson <joshudson@gmail.com> wrote:
 > > >> This patch is the sum total of all that I had to change in the kernel
 > > >> VFS layer to support hard links to directories
 > > >
 > > > Can't be done, as it creates the possibility of loops.
 > >
 > > Don't do that then?
 > Exactly.
 > 
 > > > Detecting unconnected subgraphs uses a /lot/ of memory; and much worse, you
 > > > have to stop (almost) all filesystem activity while doing it.
 > I know.
 > 
 > I just decided the price is worth it.
 > 
 > In my filesystem, any attempt to create a loop of hard links
 > is detected and cancelled. 

Can you elaborate a bit on this exciting mechanism? Obviously an ability
to efficiently detect loops would be a break-through in a
reference-counted garbage collection, somehow missed for last 40
years. :-)

 >                            Unlinking a directory requires it
 > to be empty if the last link is being removed. "." and ".."
 > links are counted separately from real links, so that is easy.

Nikita.



* Re: what is necessary for directory hard links
  2006-07-24  6:45       ` Nikita Danilov
@ 2006-07-24  7:25         ` Valdis.Kletnieks
  2006-07-24 16:21           ` Joshua Hudson
  0 siblings, 1 reply; 21+ messages in thread
From: Valdis.Kletnieks @ 2006-07-24  7:25 UTC (permalink / raw)
  To: Nikita Danilov; +Cc: Joshua Hudson, Linux Kernel Mailing List


On Mon, 24 Jul 2006 10:45:45 +0400, Nikita Danilov said:
> Joshua Hudson writes:

>  > In my filesystem, any attempt to create a loop of hard links
>  > is detected and cancelled. 
> 
> Can you elaborate a bit on this exciting mechanism? Obviously an ability
> to efficiently detect loops would be a break-through in a
> reference-counted garbage collection, somehow missed for last 40

It's actually pretty trivial to do if it's a toy filesystem and all the
relevant inodes are in-memory already.  The hard-to-solve part is getting
around the (apparent) need to walk across essentially the entire tree
structure making sure that you aren't creating a loop.  This can get
rather performance piggy - even /home on my laptop has some 400K
inodes on it, and a 'find /home -type d' takes 28 seconds.  That's a *long*
time to lock and freeze a filesystem.

Where it gets *really* messy is that it isn't just mkdir that's the problem -
once you let there be more than one path from the fs root to a given directory,
it gets *really* hard to make sure that any given 'mv' command isn't going
to screw things up (is 'mv a/b/c/d ../../w/z/b' safe? How do you know, without
examining a *lot* of stuff under a/ and ../../w/?)




* Re: what is necessary for directory hard links
  2006-07-24  7:25         ` Valdis.Kletnieks
@ 2006-07-24 16:21           ` Joshua Hudson
  2006-07-24 17:55             ` Horst H. von Brand
  2006-07-24 18:22             ` Valdis.Kletnieks
  0 siblings, 2 replies; 21+ messages in thread
From: Joshua Hudson @ 2006-07-24 16:21 UTC (permalink / raw)
  To: linux-kernel

On 7/24/06, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:
> On Mon, 24 Jul 2006 10:45:45 +0400, Nikita Danilov said:
> > Joshua Hudson writes:
>
> >  > In my filesystem, any attempt to create a loop of hard links
> >  > is detected and cancelled.
> >
> > Can you elaborate a bit on this exciting mechanism? Obviously an ability
> > to efficiently detect loops would be a break-through in a
> > reference-counted garbage collection, somehow missed for last 40
>
> It's actually pretty trivial to do if it's a toy filesystem and all the
> relevant inodes are in-memory already.  The hard-to-solve part is getting
> around the (apparent) need to walk across essentially the entire tree
> structure making sure that you aren't creating a loop.  This can get
> rather performance piggy - even /home on my laptop has some 400K
> inodes on it, and a 'find /home -type d' takes 28 seconds.  That's a *long*
> time to lock and freeze a filesystem.

Actually, I walk from the source inode down to try to find the
target inode. If not found, this is not attempting to create a loop.
Should be obvious that the average case is much less than the
whole tree.
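
A rough sketch of that walk, with toy in-memory nodes standing in for the
real inodes (linking src into target is refused when target turns out to be
reachable from src):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct node {
        struct node **child;    /* subdirectories only */
        size_t nchild;
};

/* Depth-first search downward from src, looking for target. */
static bool reachable(const struct node *src, const struct node *target)
{
        if (src == target)
                return true;
        for (size_t i = 0; i < src->nchild; i++)
                if (reachable(src->child[i], target))
                        return true;
        return false;
}

int main(void)
{
        /* src already contains target somewhere below it... */
        struct node target = { NULL, 0 };
        struct node *kids[] = { &target };
        struct node src = { kids, 1 };

        /* ...so placing a new link to src inside target would close a loop. */
        printf("link would loop: %d\n", reachable(&src, &target));
        return 0;
}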

>
> Where it gets *really* messy is that it isn't just mkdir that's the problem -
> once you let there be more than one path from the fs root to a given directory,
> it gets *really* hard to make sure that any given 'mv' command isn't going
> to screw things up (is 'mv a/b/c/d ../../w/z/b' safe? How do you know, without
> examining a *lot* of stuff under a/ and ../../w/?)

mv /a/b/c/d ../../w/z/b is implemented as this in the filesystem:
ln /a/b/c/d ../../w/z/b && rm /a/b/c/d

So what it's going to do is try to find z under /a/b/c/d.

Oh, and Nikita's right about the NFS server stuff. Actually, I think
the current filesystem I use for this is totally incompatible with
NFS (cannot call d_splice_alias on directory dnodes) so that
doesn't concern me.


* Re: what is necessary for directory hard links
  2006-07-24 16:21           ` Joshua Hudson
@ 2006-07-24 17:55             ` Horst H. von Brand
  2006-07-24 18:22             ` Valdis.Kletnieks
  1 sibling, 0 replies; 21+ messages in thread
From: Horst H. von Brand @ 2006-07-24 17:55 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: linux-kernel

Joshua Hudson <joshudson@gmail.com> wrote:
> On 7/24/06, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:
> > On Mon, 24 Jul 2006 10:45:45 +0400, Nikita Danilov said:
> > > Joshua Hudson writes:
> > >  > In my filesystem, any attempt to create a loop of hard links
> > >  > is detected and cancelled.
> > >
> > > Can you elaborate a bit on this exciting mechanism? Obviously an ability
> > > to efficiently detect loops would be a break-through in a
> > > reference-counted garbage collection, somehow missed for last 40

> > It's actually pretty trivial to do if it's a toy filesystem and all the
> > relevant inodes are in-memory already.  The hard-to-solve part is getting
> > around the (apparent) need to walk across essentially the entire tree
> > structure making sure that you aren't creating a loop.  This can get
> > rather performance piggy - even /home on my laptop has some 400K
> > inodes on it, and a 'find /home -type d' takes 28 seconds.  That's a *long*
> > time to lock and freeze a filesystem.

> Actually, I walk from the source inode down to try to find the
> target inode. If not found, this is not attempting to create a loop.
> Should be obvious that the average case is much less than the
> whole tree.

You have to consider the worst case... and /that/ one is very bad. 

> > Where it gets *really* messy is that it isn't just mkdir that's the
> > problem - once you let there be more than one path from the fs root to
> > a given directory, it gets *really* hard to make sure that any given
> > 'mv' command isn't going to screw things up (is 'mv a/b/c/d
> > ../../w/z/b' safe? How do you know, without examining a *lot* of stuff
> > under a/ and ../../w/?)

> mv /a/b/c/d ../../w/z/b is implemented as this in the filesystem:
> ln /a/b/c/d ../../w/z/b && rm /a/b/c/d
> 
> So what it's going to do is try to find z under /a/b/c/d.

What if the ln(1) creates a loop that the rm(1) then breaks? I.e., what if
you move a directory nearer to the root?

Also note that with your idea '..' becomes ambiguous, as w might have
/lots/ of parents here.

> Oh, and Nakita's right about the NFS server stuff. Actually, I think
> the current filesystem I use for this is totally incompatible with
> NFS (cannot call d_splice_alias on directory dnodes) so that
> doesn't concern me.

Anything that isn't even NFS-capable is almost useless, IMVHO. Unless you
come up with a successor to NFS in tandem, that is.
-- 
Dr. Horst H. von Brand                   User #22616 counter.li.org
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513



* Re: what is necessary for directory hard links
  2006-07-24 16:21           ` Joshua Hudson
  2006-07-24 17:55             ` Horst H. von Brand
@ 2006-07-24 18:22             ` Valdis.Kletnieks
  2006-07-25  4:49               ` Joshua Hudson
  1 sibling, 1 reply; 21+ messages in thread
From: Valdis.Kletnieks @ 2006-07-24 18:22 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: linux-kernel


On Mon, 24 Jul 2006 09:21:04 PDT, Joshua Hudson said:
> On 7/24/06, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:

> Actually, I walk from the source inode down to try to find the
> target inode. If not found, this is not attempting to create a loop.

The problem is that the "target inode" may not be the one obviously causing the
loop - you may be trying to link a directory into a/b/c, while the loop
is caused by a link from a/b/c/d/e/f/g back up to someplace.

Consider:

mkdir_tree() {
        for i in 1 2 3; do
                for j in 1 2 3; do
                        for k in 1 2 3; do
                                mkdir -p "$1/a$i/b$j/c$k"
                        done
                done
        done
}

mkdir_tree x
mkdir_tree y
mkdir_tree z

ln x/a2/b3/c3/d1 y
ln x/a1/b3/c3/d2 y
ln y/a1/b1/c3/d1 z
ln y/a1/b1/c2/d2 z
ln y/a1/b2/c1/d3 z

Now - ln z/a3/b1/c3/d1 x - how many directories do you need to
examine to tell if this is a loop?  Is it still a loop if I rm z/d1 first?
or z/d2? or do I need to remove z/d1 through z/d3 to prevent a loop?
Can I leave z/d1 there but remove y/a1/b2/c3/d1 instead?
Does your answer change if somebody did 'ln y/a1/b2/c4 z/a1/b3/d7'?

How many places would you have to look if I didn't give you a cheat sheet
of the ln commands I had done?  Yes, you have to search *every* directory
under x, y, and z.  All 120 of them (three trees of 1 + 3 + 9 + 27 = 40
directories each).  And this is an artificially small directory tree.
Think about a /usr/src/ that has 4 or 5 linux-kernel trees in it, with
some 1,650 directories per tree...

> Should be obvious that the average case is much less than the
> whole tree.

"The average case" is the one where the feature isn't used.  When you
actually *use* it, you get "not average case" behavior - not a good sign.

> > to screw things up (is 'mv a/b/c/d ../../w/z/b' safe? How do you know, without
> > examining a *lot* of stuff under a/ and ../../w/?)
> 
> mv /a/b/c/d ../../w/z/b is implemented as this in the filesystem:
> ln /a/b/c/d ../../w/z/b && rm /a/b/c/d
> 
> So what it's going to do is try to find z under /a/b/c/d.

Even if that's sufficient (which it isn't), it's going to be painful to lock
the filesystem for 20 or 30 seconds while you walk everything to make sure
there's no problem.  




* Re: what is necessary for directory hard links
  2006-07-24 18:22             ` Valdis.Kletnieks
@ 2006-07-25  4:49               ` Joshua Hudson
  2006-07-25 12:43                 ` Valdis.Kletnieks
  2006-07-25 14:47                 ` Horst H. von Brand
  0 siblings, 2 replies; 21+ messages in thread
From: Joshua Hudson @ 2006-07-25  4:49 UTC (permalink / raw)
  To: linux-kernel

On 7/24/06, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:
> On Mon, 24 Jul 2006 09:21:04 PDT, Joshua Hudson said:
> > On 7/24/06, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:
>
> > Actually, I walk from the source inode down to try to find the
> > target inode. If not found, this is not attempting to create a loop.
>
> The problem is that the "target inode" may not be the one obviously causing the
> loop - you may be trying to link a directory into a/b/c, while the loop
> is caused by a link from a/b/c/d/e/f/g back up to someplace.
>
> Consider:
[case of 3^4 directories ]
> Yes, you have to search *every* directory under x, y, and z.
With a mesh graph like that one, you are right.  Rather like my
test cases to see if the  loop-detection algorithm is working.

> And this is an artificially small
> directory tree.  Think about a /usr/src/ that has 4 or 5 linux-kernel
> trees in it, with some 1,650 directories per tree...
Interesting case.

>
> > Should be obvious that the average case is much less than the
> > whole tree.
>
> "The average case" is the one where the feature isn't used.  When you
> actually *use* it, you get "not average case" behavior - not a good sign.
>
My use cases were generally about creating links at the twig level.
As I envisioned it, there would be two or more topical trees arranged
by different criteria, and they would bottom out at the topics, which
are directories that hold a number of documents.

A document might be a single file, a few files together, or a
directory that contains a few files that should always be together. I
suppose that an entire source tree could be considered one document,
and there lies the flaw.

This system is an attempt to replace a system of hard links to files
that didn't work because certain MS applications use rename and create
when saving files. The filesystem is intended to be
exported over smbfs. It is still very much a single-user scheme,
which is why the bad worst case didn't really concern me.

> >
> > mv /a/b/c/d ../../w/z/b is implemented as this in the filesystem:
> > ln /a/b/c/d ../../w/z/b && rm /a/b/c/d
> >
> > So what it's going to do is try to find z under /a/b/c/d.
>
> Even if that's sufficient (which it isn't), it's going to be painful to lock
> the filesystem for 20 or 30 seconds while you walk everything to make sure
> there's no problem.

Maybe someday I'll work out a system by which much less is locked.
Conceptually, all that is required to lock for the algorithm
to work is creating hard-links to directories and renaming directories
cross-directory.

> (which it isn't)
Counterexample? I should swear that any cycle created by rename must
pass through the new parent into the victim and back to the new
parent.


* Re: what is necessary for directory hard links
  2006-07-25  4:49               ` Joshua Hudson
@ 2006-07-25 12:43                 ` Valdis.Kletnieks
  2006-07-25 14:47                 ` Horst H. von Brand
  1 sibling, 0 replies; 21+ messages in thread
From: Valdis.Kletnieks @ 2006-07-25 12:43 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: linux-kernel


On Mon, 24 Jul 2006 21:49:01 PDT, Joshua Hudson said:
> On 7/24/06, Valdis.Kletnieks@vt.edu <Valdis.Kletnieks@vt.edu> wrote:

> [case of 3^4 directories ]
> > Yes, you have to search *every* directory under x, y, and z.
> With a mesh graph like that one, you are right.  Rather like my
> test cases to see if the  loop-detection algorithm is working.

And why bother allowing it, if a mesh like that isn't something you want
to support and do well?

> > And this is an artificially small
> > directory tree.  Think about a /usr/src/ that has 4 or 5 linux-kernel
> > trees in it, with some 1,650 directories per tree...
> Interesting case.

Not an "interesting case" - that's the real world.  You allow hard
links to directories, somebody on this list *is* going to hardlink their
directory of driver code into several kernel trees so they can regression
test compiles against multiple trees.

You don't want to deal with the results of allowing that, don't allow that. ;)

> > > So what it's going to do is try to find z under /a/b/c/d.
> >
> > Even if that's sufficient (which it isn't), it's going to be painful to lock
> > the filesystem for 20 or 30 seconds while you walk everything to make sure
> > there's no problem.
> 
> Maybe someday I'll work out a system by which much less is locked.
> Conceptually, all that is requred to lock for the algorithm
> to work is creating hard-links to directories and renaming directories
> cross-directory.

You need to lock a *lot* of stuff - otherwise 2 separate links can race and
form a loop, even though they are operating in very different parts of
the directory graph.

You can't even answer the question "Will a proposed link cause a loop?"
without locking the tree, since you have to search the whole tree.  An
apparently unrelated change in a part of the tree you've already searched
can cause a loop because what you started with isn't what you finished
with (see the unrelated thread about locking or copying the process list
so 'ps' doesn't race for the sort of issues involved).

> > (which it isn't)
> Counterexample? I should swear that any cycle created by rename must
> pass through the new parent into the victim and back to the new parent.

> > So what it's going to do is try to find z under /a/b/c/d.

I meant that 'z' may not be "under" a/b/c/d in the conventional sense -
the criterion is the much flabbier "must be reachable from" (which leads
to the "have to enumerate the whole frikking filesystem" issue).  d could
contain a link back "up" to e/f/g, which links back "down" to "h/i/j/k",
and h/i/j/k/l has a link back "up" to something that eventually leads to z.

And you still haven't explained what '..' means in this context.  I'm
sure programmers will be *thrilled* to have to deal with this:

mkdir a b c
mkdir b/a b/d a/a
ln b/a a/z
ln c/b b/a
# Now watch the fun with a context-sensitive '../a':
cd a/z; cd ../a <- succeeds, but leaves you elsewhere
cd b/a; cd ../a <- succeeds, and leaves you where you started
cd c/a; cd ../a <- fails

(To be fair, most programs that have issues with this are racy against
renames of ../a already - but this one can cause fun and games even on a
filesystem mounted r/o and apparently immutable...)



* Re: what is necessary for directory hard links
  2006-07-25  4:49               ` Joshua Hudson
  2006-07-25 12:43                 ` Valdis.Kletnieks
@ 2006-07-25 14:47                 ` Horst H. von Brand
  1 sibling, 0 replies; 21+ messages in thread
From: Horst H. von Brand @ 2006-07-25 14:47 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: linux-kernel

Joshua Hudson <joshudson@gmail.com> wrote:

[...]

> Maybe someday I'll work out a system by which much less is locked.
> Conceptually, all that is required to lock for the algorithm
> to work is creating hard-links to directories and renaming directories
> cross-directory.

Some 40 years of filesystem development without finding a solution to that
conundrum would make that quite unlikely, but you are certainly welcome to
try.

> > (which it isn't)
> Counterexample? I should swear that any cycle created by rename must
> pass through the new parent into the victim and back to the new
> parent.

Right. The problem is that the fan-out can be humongous. In the worst case
(which you /have/ to handle right!) you need to look over /all/ the
directories in the filesystem. Meanwhile, you can't allow any operation
that changes the graph. Have you run fsck(8) lately? That should give you
an order-of-magnitude estimate for the time it could take...

Yes, in the "avergage case" it will be much, much less, but that is little
comfort for people who get bitten by "not so average" cases.

Remember that one of the things that made Linus unhappy with Minix was
exactly that the filesystem was single-threaded. Today's requirements for
Linux filesystem performance are /much/ higher than those of a lone hobby
user...
-- 
Dr. Horst H. von Brand                   User #22616 counter.li.org
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513


* Re: what is necessary for directory hard links
       [not found] ` <6Cwov-5xl-5@gated-at.bofh.it>
@ 2006-07-25 21:28   ` Bodo Eggert
  2006-07-26  1:00     ` Horst H. von Brand
  0 siblings, 1 reply; 21+ messages in thread
From: Bodo Eggert @ 2006-07-25 21:28 UTC (permalink / raw)
  To: Horst H. von Brand, Joshua Hudson, linux-kernel

Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
> Joshua Hudson <joshudson@gmail.com> wrote:

> [...]
> 
>> Maybe someday I'll work out a system by which much less is locked.
>> Conceptually, all that is required to lock for the algorithm
>> to work is creating hard-links to directories and renaming directories
>> cross-directory.
> 
> Some 40 years of filesystem development without finding a solution to that
> conundrum would make that quite unlikely, but you are certainly welcome to
> try.

There is a simple solution against loops: No directory may contain a directory
with a lower inode number.

Of course this would interfere with normal operations, so you'll allocate all
normal inodes above e.g. 0x800000 and don't test between those inodes.

If you want to hardlink, you'll use a different (privileged) mkdir call
that will allocate a chosen low inode number. This is also required for
the parents of the hardlinked directories.

You can also use the generic solution: Allow root to shoot his feet, and
make sure the gun works correctly.
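
As a rough sketch of the check (the 0x800000 split is just the example
number from above; the function and variable names are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define LOW_INODE_LIMIT 0x800000UL      /* below this: may take part in dir hardlinks */

static bool is_low_inode(unsigned long ino)
{
        return ino < LOW_INODE_LIMIT;
}

/* A directory hard link parent -> child is only allowed when both inodes
 * are in the low, ordered range and the child's number is strictly greater,
 * so every such edge points "upward" in inode order and no cycle can close. */
static bool dir_hardlink_allowed(unsigned long parent_ino, unsigned long child_ino)
{
        return is_low_inode(parent_ino) &&
               is_low_inode(child_ino) &&
               child_ino > parent_ino;
}

int main(void)
{
        printf("%d\n", dir_hardlink_allowed(100, 10002));   /* 1: ordered, allowed */
        printf("%d\n", dir_hardlink_allowed(10002, 101));   /* 0: would break the order */
        return 0;
}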
-- 
I thank GMX for sabotaging the use of my addresses by means of lies
spread via SPF.

http://david.woodhou.se/why-not-spf.html


* Re: what is necessary for directory hard links
  2006-07-23  2:19     ` Horst H. von Brand
  2006-07-23 22:27       ` Vernon Mauery
@ 2006-07-25 23:43       ` Peter Chubb
  1 sibling, 0 replies; 21+ messages in thread
From: Peter Chubb @ 2006-07-25 23:43 UTC (permalink / raw)
  To: Horst H. von Brand; +Cc: 7eggert, Joshua Hudson, Linux Kernel Mailing List

>>>>> "Horst" == Horst H von Brand <vonbrand@inf.utfsm.cl> writes:

Horst> Bodo Eggert <7eggert@elstempel.de> wrote:

>> This feature was implemented,

Horst> Never in my memory of any Unix (and lookalike) system in real
Horst> use (I've seen a few).

Back in edition 5 UNIX, there were /etc/link and
/etc/unlink that could operate on directories, if and only if you were
root.  I used to use them to deliberately muck up file systems for
system administration courses...

--
Dr Peter Chubb  http://www.gelato.unsw.edu.au  peterc AT gelato.unsw.edu.au
http://www.ertos.nicta.com.au           ERTOS within National ICT Australia


* Re: what is necessary for directory hard links
  2006-07-25 21:28   ` Bodo Eggert
@ 2006-07-26  1:00     ` Horst H. von Brand
  2006-07-26  9:13       ` Bodo Eggert
  0 siblings, 1 reply; 21+ messages in thread
From: Horst H. von Brand @ 2006-07-26  1:00 UTC (permalink / raw)
  To: 7eggert; +Cc: Horst H. von Brand, Joshua Hudson, linux-kernel

Bodo Eggert <7eggert@elstempel.de> wrote:
> Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
> > Joshua Hudson <joshudson@gmail.com> wrote:
> 
> > [...]
> > 
> >> Maybe someday I'll work out a system by which much less is locked.
> >> Conceptually, all that is requred to lock for the algorithm
> >> to work is creating hard-links to directories and renaming directories
> >> cross-directory.
> > 
> > Some 40 years of filesystem development without finding a solution to that
> > conundrum would make that quite unlikely, but you are certainly welcome to
> > try.

> There is a simple solution against loops: No directory may contain a
> directory with a lower inode number.

This is a serious restriction...

> Of course this would interfere with normal operations, so you'll allocate all
> normal inodes above e.g. 0x800000 and don't test between those inodes.

And allow loops there? I don't see how that solves anything...

> If you want to hardlink, you'll use a different (privileged) mkdir call
> that will allocate a chosen low inode number. This is also required for
> the parents of the hardlinked directories.

Argh... even /more/ illogical restrictions!

> You can also use the generic solution: Allow root to shoot his feet, and
> make sure the gun works correctly.

;-)
-- 
Dr. Horst H. von Brand                   User #22616 counter.li.org
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513


* Re: what is necessary for directory hard links
  2006-07-26  1:00     ` Horst H. von Brand
@ 2006-07-26  9:13       ` Bodo Eggert
  0 siblings, 0 replies; 21+ messages in thread
From: Bodo Eggert @ 2006-07-26  9:13 UTC (permalink / raw)
  To: Horst H. von Brand; +Cc: 7eggert, Joshua Hudson, linux-kernel

On Tue, 25 Jul 2006, Horst H. von Brand wrote:
> Bodo Eggert <7eggert@elstempel.de> wrote:
> > Horst H. von Brand <vonbrand@inf.utfsm.cl> wrote:
> > > Joshua Hudson <joshudson@gmail.com> wrote:

> > > [...]
> > > 
> > >> Maybe someday I'll work out a system by which much less is locked.
> > >> Conceptually, all that is required to lock for the algorithm
> > >> to work is creating hard-links to directories and renaming directories
> > >> cross-directory.
> > > 
> > > Some 40 years of filesystem development without finding a solution to that
> > > conundrum would make that quite unlikely, but you are certainly welcome to
> > > try.
> 
> > There is a simple solution against loops: No directory may contain a
> > directory with a lower inode number.
> 
> This is a serious restriction...

That's why I relax it in the next paragraph, so you'll only have to deal 
with inode numbers iff you want to hardlink directories.

> > Of course this would interfere with normal operations, so you'll allocate all
> > normal inodes above e.g. 0x800000 and don't test between those inodes.
> 
> And allow loops there? I don't see how that solves anything...

No. Don't allow dir-hardlinks there, so no loops can be created there, and
normal directories can be created freely without risking loops.

> > If you want to hardlink, you'll use a different (privileged) mkdir call
> > that will allocate a chosen low inode number. This is also required for
> > the parents of the hardlinked directories.
> 
> Argh... even /more/ illogical restrictions!

If no (hardlinkable) directory may contain a directory with a lower inode
number, then all of its parents need to have low inode numbers as well.


A legal tree would e.g. look like this:

1-+-100
  | +-10001
  | `-10002
  +-101
  | +-10101
  | +-10002
  | `-10102
  `-10101

In order to create a loop, you'd e.g. need to link 101 into 10002. You 
can't, because they are ordered.

Directories above 0x800000 aren't ordered except being greater than low 
inodes, so you can create normal directories anywhere you desire, but you
obviously can't hardlink them.

-- 
It's redundant! It's redundant! 


* Re: what is necessary for directory hard links
  2006-07-21 21:57   ` Joshua Hudson
@ 2006-07-24  6:42     ` Nikita Danilov
  0 siblings, 0 replies; 21+ messages in thread
From: Nikita Danilov @ 2006-07-24  6:42 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: Linux Kernel Mailing List

Joshua Hudson writes:
 > Some people seem to think that I am proposing to do something.
 > Understand that I have done it for 2.6.17-rc4 and am currently
 > involved in bringing it forward to newer kernels.
 > 
 > >
 > > What is the parent of a hard linked directory?  What is the parent if
 > > the link in "the parent" is deleted?
 > 
 > The parent is any and all directories that contain a link to the
 > stated directory.  ".." points back along the path the referrer used
 > to reach the current directory (this behavior is already in kernel:
 > didn't have to lift a finger to get it).

That won't work for the kernel NFS server, which uses the ->get_parent()
method to obtain the parent directory.

Well... this is probably the smallest problem with hard-linked
directories, though.

Nikita.



* Re: what is necessary for directory hard links
  2006-07-21 20:28 ` Rob Sims
@ 2006-07-21 21:57   ` Joshua Hudson
  2006-07-24  6:42     ` Nikita Danilov
  0 siblings, 1 reply; 21+ messages in thread
From: Joshua Hudson @ 2006-07-21 21:57 UTC (permalink / raw)
  To: linux-kernel

Some people seem to think that I am proposing to do something.
Understand that I have done it for 2.6.17-rc4 and am currently
involved in bringing it forward to newer kernels.

>
> What is the parent of a hard linked directory?  What is the parent if
> the link in "the parent" is deleted?

The parent is any and all directories that contain a link to the
stated directory.  ".." points back along the path the referrer used
to reach the current directory (this behavior is already in the kernel:
I didn't have to lift a finger to get it).
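
A toy userspace model of that behavior: keep a stack of the components
actually walked, and let ".." simply pop it (this is only an illustration,
not the kernel's dcache code):

#include <stdio.h>
#include <string.h>

#define MAXDEPTH 64

struct walk {
        const char *stack[MAXDEPTH];    /* names of the directories walked through */
        int depth;
};

static void walk_component(struct walk *w, const char *name)
{
        if (strcmp(name, "..") == 0) {
                if (w->depth > 0)
                        w->depth--;             /* go back the way we came */
        } else if (strcmp(name, ".") != 0) {
                if (w->depth < MAXDEPTH)
                        w->stack[w->depth++] = name;
        }
}

int main(void)
{
        struct walk w = { .depth = 0 };

        /* Reach a directory that also has other parents... */
        walk_component(&w, "a");
        walk_component(&w, "z");        /* suppose a/z is also linked as b/a */
        walk_component(&w, "..");       /* ...".." still takes us back to "a", not "b" */

        printf("after a/z/..: depth=%d, at=%s\n",
               w.depth, w.depth ? w.stack[w.depth - 1] : "(root)");
        return 0;
}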


* Re: what is necessary for directory hard links
  2006-07-21  1:04 Joshua Hudson
  2006-07-21 18:49 ` Horst H. von Brand
@ 2006-07-21 20:28 ` Rob Sims
  2006-07-21 21:57   ` Joshua Hudson
  1 sibling, 1 reply; 21+ messages in thread
From: Rob Sims @ 2006-07-21 20:28 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: linux-kernel


On Thu, Jul 20, 2006 at 06:04:49PM -0700, Joshua Hudson wrote:
> This patch is the sum total of all that I had to change in the kernel
> VFS layer to support hard links to directories and a few worse things
> (hard links to things that are sometimes directories). Never mind
> about do_path_lookup. I call that from an ioctl (flink).
 
What is the parent of a hard linked directory?  What is the parent if
the link in "the parent" is deleted?
-- 
Rob
  A man is known by the company he organizes.
  		-- Ambrose Bierce



* Re: what is necessary for directory hard links
  2006-07-21  1:04 Joshua Hudson
@ 2006-07-21 18:49 ` Horst H. von Brand
  2006-07-21 20:28 ` Rob Sims
  1 sibling, 0 replies; 21+ messages in thread
From: Horst H. von Brand @ 2006-07-21 18:49 UTC (permalink / raw)
  To: Joshua Hudson; +Cc: Linux Kernel Mailing List

Joshua Hudson <joshudson@gmail.com> wrote:
> This patch is the sum total of all that I had to change in the kernel
> VFS layer to support hard links to directories

Can't be done, as it creates the possibility of loops. The "only files can
be hardlinked" idea makes garbage collection (== deleting of unreachable
objects) simple: Just check the number of references.

Detecting unconnected subgraphs uses a /lot/ of memory; and much worse, you
have to stop (almost) all filesystem activity while doing it.

Besides, the flow "root down through directories" gives a natural order in
which to go locking stuff when needed; if there can be loops, the system
could easily deadlock.
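
As a toy model of that ordering argument (purely illustrative, not the VFS
locking code): lock each directory only after its parent, root first.  That
only works while "parent" is unique and acyclic; with a loop the recursion
below has no well-defined bottom.

#include <pthread.h>
#include <stddef.h>

struct tdir {
        struct tdir *parent;            /* unique parent: only true in a tree */
        pthread_mutex_t lock;
};

/* Lock every directory from the root down to d (ancestors first). */
static void lock_path(struct tdir *d)
{
        if (d == NULL)
                return;
        lock_path(d->parent);
        pthread_mutex_lock(&d->lock);
}

/* Unlock in the reverse order (descendants first). */
static void unlock_path(struct tdir *d)
{
        if (d == NULL)
                return;
        pthread_mutex_unlock(&d->lock);
        unlock_path(d->parent);
}

int main(void)
{
        struct tdir root  = { NULL,  PTHREAD_MUTEX_INITIALIZER };
        struct tdir child = { &root, PTHREAD_MUTEX_INITIALIZER };

        lock_path(&child);
        unlock_path(&child);
        return 0;
}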
-- 
Dr. Horst H. von Brand                   User #22616 counter.li.org
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513



* what is necessary for directory hard links
@ 2006-07-21  1:04 Joshua Hudson
  2006-07-21 18:49 ` Horst H. von Brand
  2006-07-21 20:28 ` Rob Sims
  0 siblings, 2 replies; 21+ messages in thread
From: Joshua Hudson @ 2006-07-21  1:04 UTC (permalink / raw)
  To: linux-kernel

This patch is the sum total of all that I had to change in the kernel
VFS layer to support hard links to directories and a few worse things
(hard links to things that are sometimes directories). Never mind
about do_path_lookup. I call that from an ioctl (flink).

Note the missing Signed-off-by line. Not ready for mainline. Not adequately tested.
Also understand that a filesystem cannot just set the flags and expect
hard links to work for it. There will be races: the target directory
for any operation is locked, but the item is not.
Oh, and any user will have to implement d_revalidate.

If you say this shouldn't be done, maybe you're right, but I'm doing it
for myself anyway.
If you have constructive feedback I will listen. If you want this in the
kernel, you had better have a worthy user for it and be ready to hammer it.
Testing with light load on one filesystem on one architecture with one set
of configuration options doesn't qualify. I haven't even passed it through
the lock validator yet because the backing filesystem isn't ready.

---
diff -rup -X linux-2.6.18-rc2/Documentation/dontdiff
linux-2.6.18-rc2/fs/namei.c linux-2.6.18-rc2-kb0/fs/namei.c
--- linux-2.6.18-rc2/fs/namei.c	2006-07-17 18:53:13.000000000 -0700
+++ linux-2.6.18-rc2-kb0/fs/namei.c	2006-07-17 17:35:43.000000000 -0700
@@ -1069,7 +1069,7 @@ set_it:
 }

 /* Returns 0 and nd will be valid on success; Retuns error, otherwise. */
-static int fastcall do_path_lookup(int dfd, const char *name,
+int fastcall do_path_lookup(int dfd, const char *name,
 				unsigned int flags, struct nameidata *nd)
 {
 	int retval = 0;
@@ -1416,13 +1416,22 @@ static inline int lookup_flags(unsigned
 }

 /*
+ * Return true if assumptions in Documentation/filesystems/directory-locking
+ * are false [fs must set]. FS must have replacement locking in itself.
+ */
+static inline int fs_is_insane(struct inode *inode)
+{
+	return inode->i_sb->s_type->fs_flags & FS_OWN_LOCKS;
+}
+
+/*
  * p1 and p2 should be directories on the same fs.
  */
 struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
 {
 	struct dentry *p;

-	if (p1 == p2) {
+	if (p1->d_inode == p2->d_inode) {
 		mutex_lock_nested(&p1->d_inode->i_mutex, I_MUTEX_PARENT);
 		return NULL;
 	}
@@ -1453,7 +1462,7 @@ struct dentry *lock_rename(struct dentry
 void unlock_rename(struct dentry *p1, struct dentry *p2)
 {
 	mutex_unlock(&p1->d_inode->i_mutex);
-	if (p1 != p2) {
+	if (p1->d_inode != p2->d_inode) {
 		mutex_unlock(&p2->d_inode->i_mutex);
 		mutex_unlock(&p1->d_inode->i_sb->s_vfs_rename_mutex);
 	}
@@ -1967,7 +1976,8 @@ int vfs_rmdir(struct inode *dir, struct

 	DQUOT_INIT(dir);

-	mutex_lock(&dentry->d_inode->i_mutex);
+	if (!(unlikely(fs_is_insane(dentry->d_inode))))
+		mutex_lock(&dentry->d_inode->i_mutex);
 	dentry_unhash(dentry);
 	if (d_mountpoint(dentry))
 		error = -EBUSY;
@@ -1975,11 +1985,12 @@ int vfs_rmdir(struct inode *dir, struct
 		error = security_inode_rmdir(dir, dentry);
 		if (!error) {
 			error = dir->i_op->rmdir(dir, dentry);
-			if (!error)
+			if (!error && likely(dentry->d_inode->i_nlink == 0))
 				dentry->d_inode->i_flags |= S_DEAD;
 		}
 	}
-	mutex_unlock(&dentry->d_inode->i_mutex);
+	if (!(unlikely(fs_is_insane(dentry->d_inode))))
+		mutex_unlock(&dentry->d_inode->i_mutex);
 	if (!error) {
 		d_delete(dentry);
 	}
@@ -2046,7 +2057,8 @@ int vfs_unlink(struct inode *dir, struct

 	DQUOT_INIT(dir);

-	mutex_lock(&dentry->d_inode->i_mutex);
+	if (!(unlikely(fs_is_insane(dentry->d_inode))))
+		mutex_lock(&dentry->d_inode->i_mutex);
 	if (d_mountpoint(dentry))
 		error = -EBUSY;
 	else {
@@ -2054,7 +2066,8 @@ int vfs_unlink(struct inode *dir, struct
 		if (!error)
 			error = dir->i_op->unlink(dir, dentry);
 	}
-	mutex_unlock(&dentry->d_inode->i_mutex);
+	if (!(unlikely(fs_is_insane(dentry->d_inode))))
+		mutex_unlock(&dentry->d_inode->i_mutex);

 	/* We don't d_delete() NFS sillyrenamed files--they still exist. */
 	if (!error && !(dentry->d_flags & DCACHE_NFSFS_RENAMED)) {
@@ -2216,16 +2229,20 @@ int vfs_link(struct dentry *old_dentry,
 	if (!dir->i_op || !dir->i_op->link)
 		return -EPERM;
 	if (S_ISDIR(old_dentry->d_inode->i_mode))
-		return -EPERM;
+		if (!(old_dentry->d_inode->i_sb->s_type->fs_flags
+					& FS_ALLOW_HLINK_DIR))
+			return -EPERM;

 	error = security_inode_link(old_dentry, dir, new_dentry);
 	if (error)
 		return error;

-	mutex_lock(&old_dentry->d_inode->i_mutex);
+	if (!(unlikely(fs_is_insane(old_dentry->d_inode))))
+		mutex_lock(&old_dentry->d_inode->i_mutex);
 	DQUOT_INIT(dir);
 	error = dir->i_op->link(old_dentry, dir, new_dentry);
-	mutex_unlock(&old_dentry->d_inode->i_mutex);
+	if (!(unlikely(fs_is_insane(old_dentry->d_inode))))
+		mutex_unlock(&old_dentry->d_inode->i_mutex);
 	if (!error)
 		fsnotify_create(dir, new_dentry);
 	return error;
@@ -2343,7 +2360,8 @@ static int vfs_rename_dir(struct inode *

 	target = new_dentry->d_inode;
 	if (target) {
-		mutex_lock(&target->i_mutex);
+		if (!(unlikely(fs_is_insane(target))))
+			mutex_lock(&target->i_mutex);
 		dentry_unhash(new_dentry);
 	}
 	if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry))
@@ -2353,7 +2371,8 @@ static int vfs_rename_dir(struct inode *
 	if (target) {
 		if (!error)
 			target->i_flags |= S_DEAD;
-		mutex_unlock(&target->i_mutex);
+		if (!(unlikely(fs_is_insane(target))))
+			mutex_unlock(&target->i_mutex);
 		if (d_unhashed(new_dentry))
 			d_rehash(new_dentry);
 		dput(new_dentry);
@@ -2375,7 +2394,7 @@ static int vfs_rename_other(struct inode

 	dget(new_dentry);
 	target = new_dentry->d_inode;
-	if (target)
+	if (target && !(unlikely(fs_is_insane(target))))
 		mutex_lock(&target->i_mutex);
 	if (d_mountpoint(old_dentry)||d_mountpoint(new_dentry))
 		error = -EBUSY;
@@ -2386,7 +2405,7 @@ static int vfs_rename_other(struct inode
 		if (!(old_dir->i_sb->s_type->fs_flags & FS_ODD_RENAME))
 			d_move(old_dentry, new_dentry);
 	}
-	if (target)
+	if (target && !(unlikely(fs_is_insane(target))))
 		mutex_unlock(&target->i_mutex);
 	dput(new_dentry);
 	return error;
@@ -2713,6 +2732,7 @@ EXPORT_SYMBOL(__page_symlink);
 EXPORT_SYMBOL(page_symlink);
 EXPORT_SYMBOL(page_symlink_inode_operations);
 EXPORT_SYMBOL(path_lookup);
+EXPORT_SYMBOL(do_path_lookup);
 EXPORT_SYMBOL(path_release);
 EXPORT_SYMBOL(path_walk);
 EXPORT_SYMBOL(permission);
diff -rup -X linux-2.6.18-rc2/Documentation/dontdiff
linux-2.6.18-rc2/include/linux/fs.h
linux-2.6.18-rc2-kb0/include/linux/fs.h
--- linux-2.6.18-rc2/include/linux/fs.h	2006-07-17 18:53:25.000000000 -0700
+++ linux-2.6.18-rc2-kb0/include/linux/fs.h	2006-07-17 17:30:58.000000000 -0700
@@ -91,6 +91,8 @@ extern int dir_notify_enable;
 /* public flags for file_system_type */
 #define FS_REQUIRES_DEV 1
 #define FS_BINARY_MOUNTDATA 2
+#define FS_ALLOW_HLINK_DIR 4
+#define FS_OWN_LOCKS 8
 #define FS_REVAL_DOT	16384	/* Check the paths ".", ".." for staleness */
 #define FS_ODD_RENAME	32768	/* Temporary stuff; will go away as soon
 				  * as nfs_rename() will be cleaned up
Only in linux-2.6.18-rc2-kb0/include/linux: utsrelease.h

