linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* Re: NULL pointer dereference when access /proc/net
       [not found] <CAFt=RON+KYYf5yt9vM3TdOSn4zco+3XtFyi3VDRr1vbQUBPZ0g@mail.gmail.com>
@ 2021-04-25 16:50 ` Al Viro
  2021-04-25 17:04   ` haosdent
  0 siblings, 1 reply; 11+ messages in thread
From: Al Viro @ 2021-04-25 16:50 UTC (permalink / raw)
  To: haosdent; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

On Sun, Apr 25, 2021 at 11:22:15PM +0800, haosdent wrote:
> Hi, Alexander Viro and dear Linux Filesystems maintainers, recently we
> encounter a NULL pointer dereference Oops in our production.
> 
> We have attempted to analyze the core dump and compare it with source code
> in the past few weeks, currently still could not understand why
> `dentry->d_inode` become NULL while other fields look normal.

Not really - the crucial part is ->d_count == -128, i.e. it's already past
__dentry_kill().

> [19521409.514784] RIP: 0010:__atime_needs_update+0x5/0x190

Which tree is that?  __atime_needs_update() was introduced in
4.8 and disappeared in 4.18; anything of that age straight on mainline 
would have plenty of interesting problems.  If you have some patches
applied on top of that...  Depends on what those are, obviously.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-25 16:50 ` NULL pointer dereference when access /proc/net Al Viro
@ 2021-04-25 17:04   ` haosdent
  2021-04-25 17:14     ` haosdent
  2021-04-25 17:22     ` Al Viro
  0 siblings, 2 replies; 11+ messages in thread
From: haosdent @ 2021-04-25 17:04 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

Hi, Alexander, thanks a lot for your quick reply.

> Not really - the crucial part is ->d_count == -128, i.e. it's already past
> __dentry_kill().

Thanks a lot for the information; we will check this.

> Which tree is that?
> If you have some patches applied on top of that...

We use the Ubuntu kernel "4.15.0-42.45~16.04.1" from Launchpad directly,
without any modification; the corresponding mainline kernel should be
"4.15.18" according
to https://people.canonical.com/~kernel/info/kernel-version-map.html




-- 
Best Regards,
Haosdent Huang

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-25 17:04   ` haosdent
@ 2021-04-25 17:14     ` haosdent
  2021-04-25 17:22     ` Al Viro
  1 sibling, 0 replies; 11+ messages in thread
From: haosdent @ 2021-04-25 17:14 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

> Not really - the crucial part is ->d_count == -128, i.e. it's already past
> __dentry_kill().

Is it possible that the dentry was reclaimed due to memory pressure
while it was still reachable through the dentry cache?

Available memory was at 5% when this crash happened; not sure if this helps.
```
crash> kmem -i
                 PAGES        TOTAL      PERCENTAGE
    TOTAL MEM  32795194     125.1 GB         ----
         FREE  1870573       7.1 GB    5% of TOTAL MEM
         USED  30924621       118 GB   94% of TOTAL MEM
       SHARED  14145523        54 GB   43% of TOTAL MEM
      BUFFERS   112953     441.2 MB    0% of TOTAL MEM
       CACHED  14362325      54.8 GB   43% of TOTAL MEM
         SLAB   664531       2.5 GB    2% of TOTAL MEM

   TOTAL HUGE        0            0         ----
    HUGE FREE        0            0    0% of TOTAL HUGE

   TOTAL SWAP        0            0         ----
    SWAP USED        0            0    0% of TOTAL SWAP
    SWAP FREE        0            0    0% of TOTAL SWAP

 COMMIT LIMIT  16397597      62.6 GB         ----
    COMMITTED  27786060       106 GB  169% of TOTAL LIMIT
```




-- 
Best Regards,
Haosdent Huang

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-25 17:04   ` haosdent
  2021-04-25 17:14     ` haosdent
@ 2021-04-25 17:22     ` Al Viro
  2021-04-25 18:00       ` haosdent
  2021-04-26 17:16       ` haosdent
  1 sibling, 2 replies; 11+ messages in thread
From: Al Viro @ 2021-04-25 17:22 UTC (permalink / raw)
  To: haosdent; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

On Mon, Apr 26, 2021 at 01:04:46AM +0800, haosdent wrote:
> Hi, Alexander, thanks a lot for your quick reply.
> 
> > Not really - the crucial part is ->d_count == -128, i.e. it's already past
> > __dentry_kill().
> 
> Thanks a lot for your information, we would check this.
> 
> > Which tree is that?
> > If you have some patches applied on top of that...
> 
> We use Ubuntu Linux Kernel "4.15.0-42.45~16.04.1" from launchpad directly
> without any modification,  the mapping Linux Kernel should be
> "4.15.18" according
> to https://people.canonical.com/~kernel/info/kernel-version-map.html

Umm...  OK, I don't have the Ubuntu source at hand, but the thing to look into
would be
	* nd->flags contains LOOKUP_RCU
	* in the mainline from that period (i.e. back when __atime_needs_update()
used to exist) we had atime_needs_update_rcu() called in get_link() under those
conditions, with
static inline bool atime_needs_update_rcu(const struct path *path,
				          struct inode *inode)
{
	return __atime_needs_update(path, inode, true);
}
and __atime_needs_update() passing its last argument (rcu:true in this case) to
relatime_need_update() in
	if (!relatime_need_update(path, inode, now, rcu))
relatime_need_update() hitting
	update_ovl_inode_times(path->dentry, inode, rcu);
and update_ovl_inode_times() starting with
	if (rcu || likely(!(dentry->d_flags & DCACHE_OP_REAL)))
		return;
with subsequent accesses to ->d_inode.  Those obviously are *NOT* supposed
to be reached in rcu mode, due to that check.

Your oops looks like something similar to that call chain had been involved and
somehow had managed to get through to those ->d_inode uses.

Again, in RCU mode we really, really should not assume ->d_inode stable.  That's
why atime_needs_update() gets the inode as a separate argument and does *NOT* look
at path->dentry at all.  In kernels of the 4.8..4.18 period it did, but only in
non-RCU mode (which is the reason for the explicit rcu argument passed
through that callchain).

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-25 17:22     ` Al Viro
@ 2021-04-25 18:00       ` haosdent
  2021-04-25 18:15         ` haosdent
  2021-04-26 17:16       ` haosdent
  1 sibling, 1 reply; 11+ messages in thread
From: haosdent @ 2021-04-25 18:00 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

> In the kernels of 4.8..4.18 period there it used to do
> so, but only in non-RCU mode (which is the reason for explicit rcu argument passed
> through that callchain).

Yep, we saw the `inode` parameter passed to `__atime_needs_update` is already NULL:

```
bool __atime_needs_update(const struct path *path, struct inode *inode,
			  bool rcu)
{
	struct vfsmount *mnt = path->mnt;
	struct timespec now;

	if (inode->i_flags & S_NOATIME)	/* <=== Oops here: the `inode` parameter is NULL */
		return false;
```

```
    [exception RIP: __atime_needs_update+5]
    ...  RSI: 0000000000000000  <=== the second parameter of
         __atime_needs_update, "struct inode *inode", is NULL
```




-- 
Best Regards,
Haosdent Huang

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-25 18:00       ` haosdent
@ 2021-04-25 18:15         ` haosdent
  0 siblings, 0 replies; 11+ messages in thread
From: haosdent @ 2021-04-25 18:15 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

> in RCU mode we really, really should not assume ->d_inode stable.

Got it, but it looks like ->d_inode is NULL after we are already out of RCU mode.

In `lookup_fast` and `walk_component`

```
  dentry = __d_lookup_rcu(parent, &nd->last, &seq);
  ...
  *inode = d_backing_inode(dentry);
```

```
static int walk_component(struct nameidata *nd, int flags)
  ...
  err = lookup_fast(nd, &path, &inode, &seq);
  if (unlikely(err <= 0)) {
    ...
    path.dentry = lookup_slow(&nd->last, nd->path.dentry, nd->flags);
    ...
    seq = 0; /* we are already out of RCU mode */
    inode = d_backing_inode(path.dentry);
  }
```




-- 
Best Regards,
Haosdent Huang

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-25 17:22     ` Al Viro
  2021-04-25 18:00       ` haosdent
@ 2021-04-26 17:16       ` haosdent
  2021-04-26 17:30         ` Al Viro
  1 sibling, 1 reply; 11+ messages in thread
From: haosdent @ 2021-04-26 17:16 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

> really should not assume ->d_inode stable

Hi, Alexander, sorry to disturb you again. Today I tried to check what
`dentry->d_inode` and `nd->link_inode` look like when the `dentry` has
already been killed in `__dentry_kill`.

```
nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
```

It looks like `dentry->d_inode` can be NULL while `nd->link_inode`
always has a value. But this confuses me: `nd->link_inode` is taken
from `dentry->d_inode`, right?

For example, in `walk_component`, suppose we go into `lookup_slow`,

```
static int walk_component(struct nameidata *nd, int flags)
{
	...
	if (unlikely(err <= 0)) {
		...
		path.dentry = lookup_slow(&nd->last, nd->path.dentry, nd->flags);
		...
		inode = d_backing_inode(path.dentry);	/* <=== get `inode` from `dentry->d_inode` */
	}
	return step_into(nd, &path, flags, inode, seq);	/* <=== `inode` ends up in `nd->link_inode` */
}
```

then in `step_into` -> `pick_link`

```
static int pick_link(struct nameidata *nd, struct path *link,
		     struct inode *inode, unsigned seq)
{
	...
	nd->link_inode = inode;	/* <=== set `inode` to `nd->link_inode` */
}
```

So, about the mismatch between `nd->link_inode` and `dentry->d_inode`
in the output above: does it mean that in Thread 1 `walk_component`
got a `dentry` from `d_lookup`, while at the same time, in Thread 2,
`__dentry_kill` ran and set `dentry->d_inode` to NULL?

If such concurrent operations on `dentry->d_inode` can happen, how do
we ensure `nd->link_inode = inode` and `d_backing_inode` always run
before `__dentry_kill`? I still could not find the answer to this in
the dcache code; sorry for the stupid question.




-- 
Best Regards,
Haosdent Huang

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-26 17:16       ` haosdent
@ 2021-04-26 17:30         ` Al Viro
  2021-05-03 15:31           ` haosdent
  0 siblings, 1 reply; 11+ messages in thread
From: Al Viro @ 2021-04-26 17:30 UTC (permalink / raw)
  To: haosdent; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

On Tue, Apr 27, 2021 at 01:16:44AM +0800, haosdent wrote:
> > really should not assume ->d_inode stable
> 
> Hi, Alexander, sorry to disturb you again. Today I try to check what
> `dentry->d_inode` and `nd->link_inode` looks like when `dentry` is
> already been killed in `__dentry_kill`.
> 
> ```
> nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
> dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
> nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
> dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
> nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
> dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
> ```
> 
> It looks like `dentry->d_inode` could be NULL while `nd->link_inode`
> is always has value.
> But this make me confuse, by right `nd->link_inode` is get from
> `dentry->d_inode`, right?

It's sampled from there, yes.  And in RCU mode there's nothing to
prevent a previously positive dentry from getting negative and/or
killed.  ->link_inode (it's gone these days) used to go with
->seq, which had been sampled from dentry->d_seq before fetching
->d_inode and then verified to have ->d_seq remain unchanged.
That gives you "dentry used to have this inode at the time it
had this d_seq", and that's what gets used to validate the sucker
when we switch to non-RCU mode (look at legitimize_links()).

IOW, we know that
	* at some point during the pathwalk that sucker had this inode
	* the inode won't get freed until we drop out of RCU mode
	* if we need to go to non-RCU (and thus grab dentry references)
while we still need that inode, we will verify that nothing has happened
to that link (same ->d_seq, so it still refers to the same inode) and
grab dentry reference, making sure it won't go away or become negative
under us.  Or we'll fail (in case something _has_ happened to dentry)
and repeat the entire thing in non-RCU mode.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-04-26 17:30         ` Al Viro
@ 2021-05-03 15:31           ` haosdent
  2021-05-06 10:21             ` haosdent
  0 siblings, 1 reply; 11+ messages in thread
From: haosdent @ 2021-05-03 15:31 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

Hi, Alexander, thanks a lot for your detailed explanations. I took
another look at the code and the thread, and I realized there were some
incorrect statements in my previous mail.

>>   struct inode *d_inode = 0x0         <======= d_inode is NULL and cause Oops!

Actually it oopses at `inode->i_flags` directly, on the `inode`
parameter rather than `d_inode`, so the code never gets as far as
updating the access time.

```
bool __atime_needs_update(const struct path *path, struct inode *inode,
			  bool rcu)
{
	struct vfsmount *mnt = path->mnt;
	struct timespec now;

	if (inode->i_flags & S_NOATIME)	/* <=== Oops here, per "[19521409.372016] IP: __atime_needs_update+0x5/0x190" */
		return false;
```

But this looks impossible if we take a look at "walk_component > lookup_fast".

Let me first explain why it goes through "walk_component > lookup_fast"
instead of "walk_component > lookup_slow".

In "walk_component > step_into > pick_link", the code sets
`nameidata->stack->seq` to the `seq` passed in as a parameter.
If the code goes through "walk_component > lookup_slow", `seq` would
be 0, and then `nameidata->stack->seq` would be 0.
If the code goes through "walk_component > lookup_fast", `seq` would
be `dentry->d_seq.sequence`.
According to the contents of the attached files "nameidata.txt" and
"dentry.txt", `dentry->d_seq.sequence` is 4, and
`nameidata->stack->seq` is 4 as well.
So it looks like the code went through "walk_component > lookup_fast" and
"walk_component > step_into > pick_link".

The `inode` parameter of `__atime_needs_update` comes from
`nameidata->link_inode`, and in the attached file "nameidata.txt" we
can see that `nameidata->link_inode` is already NULL.
Because the code goes through "walk_component > lookup_fast" and
"walk_component > step_into > pick_link", the `inode` assigned to
`nameidata->link_inode` must come from `lookup_fast`.

So it looks like something went wrong in `lookup_fast`. Let me
continue and explain why this looks impossible.

In `walk_component`, `lookup_fast` has to return 1 (> 0), otherwise
the code falls back to `lookup_slow`.

```
err = lookup_fast(nd, &path, &inode, &seq);
if (unlikely(err <= 0)) {
  if (err < 0)
    return err;
  path.dentry = lookup_slow(&nd->last, nd->path.dentry,
        nd->flags);
}

return step_into
```

Since in our case the code appears to go through
"walk_component > lookup_fast" and "walk_component > step_into >
pick_link", this implies `lookup_fast` returned 1 in this Oops.

Because `lookup_fast` returned 1, the code must have gone through
the following path.

```
  if (nd->flags & LOOKUP_RCU) {
    ...

    *inode = d_backing_inode(dentry);
    negative = d_is_negative(dentry);

    ...
      ...
      if (negative)
        return -ENOENT;
      path->mnt = mnt;
      path->dentry = dentry;
      if (likely(__follow_mount_rcu(nd, path, inode, seqp)))
        return 1;
```

As we can see, if `*inode` is NULL, `lookup_fast` should return `-ENOENT`
because `if (negative)` would be true, which conflicts with
`lookup_fast` returning 1.

And `__d_clear_type_and_inode` always marks the dentry negative first
and only then sets d_inode to NULL.

```
static inline void __d_clear_type_and_inode(struct dentry *dentry)
{
	unsigned flags = READ_ONCE(dentry->d_flags);

	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);	/* set dentry negative first */
	WRITE_ONCE(dentry->d_flags, flags);
	/* memory barrier */
	dentry->d_inode = NULL;					/* then set d_inode to NULL */
}
```

So it looks like `inode` in `lookup_fast` should not be NULL if it
managed to skip the `if (negative)` check, even in the RCU case. Unless

```
/* in lookup_fast() */
  *inode = d_backing_inode(dentry);
  negative = d_is_negative(dentry);
```

is reordered to

```
/* in lookup_fast() */
  negative = d_is_negative(dentry);
  *inode = d_backing_inode(dentry);
```

when the CPU executes the code. But is this possible under RCU?

I diffed my local Ubuntu code against upstream tag v4.15.18; there are
no differences in `fs/namei.c`, `fs/dcache.c`, or `fs/proc`.
So possibly the problem can happen on upstream v4.15.18 as well. Sadly
my script still could not reproduce the issue on the server so far.
I would appreciate any insights so I can continue checking what's
wrong in this Oops. Thank you in advance!




-- 
Best Regards,
Haosdent Huang

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: NULL pointer dereference when access /proc/net
  2021-05-03 15:31           ` haosdent
@ 2021-05-06 10:21             ` haosdent
  0 siblings, 0 replies; 11+ messages in thread
From: haosdent @ 2021-05-06 10:21 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-kernel, linux-fsdevel, zhengyu.duan, Haosong Huang

Oh, Al, I saw you mentioned the reorder issue at
https://groups.google.com/g/syzkaller/c/0SW33jMcrXQ/m/lHJUsWHVBwAJ
with a follow-up patch at
https://gist.githubusercontent.com/dvyukov/67fe363d5ce2e2b06c71/raw/4d1b6c23f8dff7e0f8e2e3cab7e50208fddb0570/gistfile1.txt
However, it looks like it would not work in some previous versions, e.g.
https://github.com/torvalds/linux/blob/v4.16/fs/dcache.c#L496

Because we set `d_hash.pprev` to `NULL` in

```
void __d_drop(struct dentry *dentry)
{
  ___d_drop(dentry);
  dentry->d_hash.pprev = NULL;
}
```

then in `dentry_unlink_inode`, `raw_write_seqcount_begin` would be
skipped because the `if (hashed)` condition is false.

```
static void dentry_unlink_inode(struct dentry * dentry)
	__releases(dentry->d_lock)
	__releases(dentry->d_inode->i_lock)
{
	struct inode *inode = dentry->d_inode;
	bool hashed = !d_unhashed(dentry);

	if (hashed)
		raw_write_seqcount_begin(&dentry->d_seq);	/* <--- skipped: hashed is false */
	__d_clear_type_and_inode(dentry);
	hlist_del_init(&dentry->d_u.d_alias);
	if (hashed)
		raw_write_seqcount_end(&dentry->d_seq);		/* <--- skipped: hashed is false */
...
```

Should we backport
https://github.com/torvalds/linux/commit/4c0d7cd5c8416b1ef41534d19163cb07ffaa03ab
and https://github.com/torvalds/linux/commit/0632a9ac7bc0a32f8251a53b3925775f0a7c4da6
to previous versions?

On Mon, May 3, 2021 at 11:31 PM haosdent <haosdent@gmail.com> wrote:
>
> Hi, Alexander, thanks a lot for your detailed explanations. I took
> another look at the code and the thread, and I realized there were
> some incorrect statements in my previous email.
>
> >>   struct inode *d_inode = 0x0         <======= d_inode is NULL and cause Oops!
>
> Actually, it Oopses at `inode->i_flags` directly rather than at
> `d_inode`, so the code never gets as far as updating the access time.
>
> ```
> bool __atime_needs_update(const struct path *path, struct inode *inode,
>                           bool rcu)
> {
>     struct vfsmount *mnt = path->mnt;
>     struct timespec now;
> 
>     if (inode->i_flags & S_NOATIME)    <======= Oops here, according to
>                                        `[19521409.372016] IP: __atime_needs_update+0x5/0x190`
>         return false;
> ```
>
> But this looks impossible if we take a look at "walk_component > lookup_fast".
>
> Let me first explain why it goes through "walk_component > lookup_fast"
> instead of "walk_component > lookup_slow":
> 
> in "walk_component > step_into > pick_link", the code sets
> `nameidata->stack->seq` to `seq`, which comes from the passed-in
> parameters.
> If the code goes through "walk_component > lookup_slow", `seq` would
> be 0 and then `nameidata->stack->seq` would be 0.
> If the code goes through "walk_component > lookup_fast", `seq` would
> be `dentry->d_seq->sequence`.
> According to the contents of the attached files "nameidata.txt" and
> "dentry.txt", `dentry->d_seq->sequence` is 4, and
> `nameidata->stack->seq` is 4 as well.
> So it looks like the code went through "walk_component > lookup_fast"
> and "walk_component > step_into > pick_link".
>
> The `inode` parameter of `__atime_needs_update` comes from
> `nameidata->link_inode`. But in the attached file "nameidata.txt", we
> can see that `nameidata->link_inode` is already NULL.
> Because the code goes through "walk_component > lookup_fast" and
> "walk_component > step_into > pick_link", the `inode` assigned to
> `nameidata->link_inode` must come from `lookup_fast`.
>
> So it looks like something went wrong in `lookup_fast`. Let me
> continue and explain why even that looks impossible.
> 
> In `walk_component`, `lookup_fast` has to return 1 (> 0); otherwise
> it would fall back to `lookup_slow`.
>
> ```
> err = lookup_fast(nd, &path, &inode, &seq);
> if (unlikely(err <= 0)) {
>   if (err < 0)
>     return err;
>   path.dentry = lookup_slow(&nd->last, nd->path.dentry,
>         nd->flags);
> }
>
> return step_into(nd, &path, flags, inode, seq);
> ```
>
> Because in our case the code looks like it goes through
> "walk_component > lookup_fast" and "walk_component > step_into >
> pick_link", this implies `lookup_fast` returned 1 in this Oops.
>
> Because `lookup_fast` returned 1, it looks like the code went through
> the following path.
>
> ```
>   if (nd->flags & LOOKUP_RCU) {
>     ...
>
>     *inode = d_backing_inode(dentry);
>     negative = d_is_negative(dentry);
>
>     ...
>       ...
>       if (negative)
>         return -ENOENT;
>       path->mnt = mnt;
>       path->dentry = dentry;
>       if (likely(__follow_mount_rcu(nd, path, inode, seqp)))
>         return 1;
> ```
>
> As we can see, if `*inode` were NULL, it should return `-ENOENT`
> because `if (negative)` would be true, which conflicts with
> "`lookup_fast` returned 1".
>
> And `__d_clear_type_and_inode` always sets the dentry negative first
> and then sets `d_inode` to NULL.
>
> ```
> static inline void __d_clear_type_and_inode(struct dentry *dentry)
> {
>   unsigned flags = READ_ONCE(dentry->d_flags);
> 
>   flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);  // Set dentry to negative first.
>   WRITE_ONCE(dentry->d_flags, flags);
>   // memory barrier
>   dentry->d_inode = NULL;                           // Then set d_inode to NULL.
> }
> ```
>
> So it looks like `inode` in `lookup_fast` should not be NULL if it
> can skip the `if (negative)` check, even in the RCU case. Unless
>
> ```
> # in lookup_fast method
>   *inode = d_backing_inode(dentry);
>   negative = d_is_negative(dentry);
> ```
>
> is reordered to
>
> ```
> # in lookup_fast method
>   negative = d_is_negative(dentry);
>   *inode = d_backing_inode(dentry);
> ```
>
> when the CPU executes the code. But is this possible under RCU?
>
> I diffed my local Ubuntu kernel against upstream tag v4.15.18; there
> is no difference in `fs/namei.c`, `fs/dcache.c`, or `fs/proc`.
> So the problem may affect upstream v4.15.18 as well. Sadly, my script
> still has not reproduced the issue on the server so far.
> I would like to hear any insights from you, so that I can continue to
> check what is wrong in this Oops. Thank you in advance!
>
> On Tue, Apr 27, 2021 at 1:30 AM Al Viro <viro@zeniv.linux.org.uk> wrote:
> >
> > On Tue, Apr 27, 2021 at 01:16:44AM +0800, haosdent wrote:
> > > > really should not assume ->d_inode stable
> > >
> > > Hi, Alexander, sorry to disturb you again. Today I tried to check what
> > > `dentry->d_inode` and `nd->link_inode` look like when the `dentry` has
> > > already been killed in `__dentry_kill`.
> > >
> > > ```
> > > nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
> > > dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
> > > nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
> > > dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
> > > nd->last.name: net/sockstat, dentry->d_lockref.count: -128,
> > > dentry->d_inode: (nil), nd->link_inode: 0xffffffffab299966
> > > ```
> > >
> > > It looks like `dentry->d_inode` can be NULL while `nd->link_inode`
> > > always has a value.
> > > But this confuses me; `nd->link_inode` is taken from
> > > `dentry->d_inode`, right?
> >
> > It's sampled from there, yes.  And in RCU mode there's nothing to
> > prevent a previously positive dentry from getting negative and/or
> > killed.  ->link_inode (used to - it's gone these days) go with
> > ->seq, which had been sampled from dentry->d_seq before fetching
> > ->d_inode and then verified to have ->d_seq remain unchanged.
> > That gives you "dentry used to have this inode at the time it
> > had this d_seq", and that's what gets used to validate the sucker
> > when we switch to non-RCU mode (look at legitimize_links()).
> >
> > IOW, we know that
> >         * at some point during the pathwalk that sucker had this inode
> >         * the inode won't get freed until we drop out of RCU mode
> >         * if we need to go to non-RCU (and thus grab dentry references)
> > while we still need that inode, we will verify that nothing has happened
> > to that link (same ->d_seq, so it still refers to the same inode) and
> > grab dentry reference, making sure it won't go away or become negative
> > under us.  Or we'll fail (in case something _has_ happened to dentry)
> > and repeat the entire thing in non-RCU mode.
>
>
>
> --
> Best Regards,
> Haosdent Huang



-- 
Best Regards,
Haosdent Huang


* NULL pointer dereference when access /proc/net
@ 2021-04-25 15:47 haosdent
  0 siblings, 0 replies; 11+ messages in thread
From: haosdent @ 2021-04-25 15:47 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel, viro
  Cc: Haosong Huang, zhengyu.duan, 黄浩松

[-- Attachment #1: Type: text/plain, Size: 7902 bytes --]

Hi, Alexander Viro and dear Linux Filesystems maintainers, recently we
encountered a NULL pointer dereference Oops in our production.

We have attempted to analyze the core dump and compare it with the
source code over the past few weeks, but still cannot understand why
`dentry->d_inode` became NULL while the other fields look normal.

Here is the call stack trace of this Oops.

```
[19521409.363839] BUG: unable to handle kernel NULL pointer
dereference at 000000000000000c
[19521409.372016] IP: __atime_needs_update+0x5/0x190
[19521409.376757] PGD 80000020326ad067 P4D 80000020326ad067 PUD 200fd06067 PMD 0
[19521409.384025] Oops: 0000 [#1] SMP PTI
[19521409.387796] Modules linked in: veth ipt_MASQUERADE
nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xfrm_user
xfrm_algo xt_addrtype iptable_nat nf_nat_ipv4 nf_nat br_netfilter
bridge stp llc aufs overlay cpuid iptable_filter ip_tables cls_cgroup
sch_htb xt_multiport ufs qnx4 hfsplus hfs minix ntfs msdos jfs xfs
ip6table_filter ip6_tables nf_conntrack_ipv4 nf_defrag_ipv4 xt_tcpudp
xt_conntrack x_tables bonding nls_utf8 isofs ib_iser rdma_cm iw_cm
ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
toa(OE) nf_conntrack lp parport intel_rapl skx_edac
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass
intel_cstate intel_rapl_perf ipmi_ssif ipmi_si dcdbas mei_me mei
ipmi_devintf lpc_ich shpchp ipmi_msghandler acpi_power_meter mac_hid
autofs4 btrfs zstd_compress
[19521409.458627]  raid10 raid456 async_raid6_recov async_memcpy
async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0
multipath linear crct10dif_pclmul crc32_pclmul mgag200
ghash_clmulni_intel ttm pcbc drm_kms_helper aesni_intel syscopyarea
aes_x86_64 sysfillrect ixgbe igb sysimgblt crypto_simd fb_sys_fops dca
i2c_algo_bit glue_helper ptp megaraid_sas ahci drm cryptd mdio
pps_core libahci [last unloaded: ip_tables]
[19521409.496053] CPU: 46 PID: 10855 Comm: node-exporter Tainted: G
       OE    4.15.0-42-generic #46~16.04.1+4
[19521409.506851] Hardware name: Dell Inc. PowerEdge R740xd/08D89F,
BIOS 1.4.9 06/29/2018
[19521409.514784] RIP: 0010:__atime_needs_update+0x5/0x190
[19521409.520026] RSP: 0018:ffff9dee09c2fc48 EFLAGS: 00010202
[19521409.525528] RAX: ffff8a4281d01ec0 RBX: fefefefefefefeff RCX:
0000000000000040
[19521409.532942] RDX: 0000000000000001 RSI: 0000000000000000 RDI:
ffff9dee09c2fde8
[19521409.540354] RBP: ffff9dee09c2fca8 R08: ffff9dee09c2fbf4 R09:
ffff9dee09c2fd90
[19521409.547761] R10: ffff8a34397b4022 R11: 6b636f732f74656e R12:
2f2f2f2f2f2f2f2f
[19521409.555176] R13: 0000000000000000 R14: ffff8a34397b4026 R15:
ffff9dee09c2fde8
[19521409.562592] FS:  000000c000218090(0000)
GS:ffff8a3b401c0000(0000) knlGS:0000000000000000
[19521409.570976] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[19521409.577001] CR2: 000000000000000c CR3: 000000203ad22005 CR4:
00000000007606e0
[19521409.584415] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[19521409.592937] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400
[19521409.601419] PKRU: 55555554
[19521409.605464] Call Trace:
[19521409.609235]  ? link_path_walk+0x3e4/0x5a0
[19521409.614546]  ? path_init+0x177/0x2f0
[19521409.619423]  path_openat+0xe4/0x1770
[19521409.624282]  ? ttwu_do_wakeup+0x1e/0x140
[19521409.629465]  ? ttwu_do_activate+0x77/0x80
[19521409.634713]  ? try_to_wake_up+0x59/0x480
[19521409.639864]  do_filp_open+0x9b/0x110
[19521409.644638]  ? __check_object_size+0xaf/0x1b0
[19521409.650176]  ? path_get+0x27/0x30
[19521409.654652]  do_sys_open+0x1bb/0x2c0
[19521409.659372]  ? do_sys_open+0x1bb/0x2c0
[19521409.664254]  SyS_openat+0x14/0x20
[19521409.668677]  do_syscall_64+0x73/0x130
[19521409.673479]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[19521409.679602] RIP: 0033:0x4a5c9a
[19521409.683706] RSP: 002b:000000c000304ab0 EFLAGS: 00000202
ORIG_RAX: 0000000000000101
[19521409.692316] RAX: ffffffffffffffda RBX: 000000c00002f400 RCX:
00000000004a5c9a
[19521409.700483] RDX: 0000000000080000 RSI: 000000c00175a020 RDI:
ffffffffffffff9c
[19521409.708640] RBP: 000000c000304b28 R08: 0000000000000000 R09:
0000000000000000
[19521409.716806] R10: 0000000000000000 R11: 0000000000000202 R12:
ffffffffffffffff
[19521409.724962] R13: 0000000000000002 R14: 0000000000000001 R15:
0000000000000100
[19521409.733145] Code: 83 ec 08 0f 0d 8f 80 05 00 00 e8 87 ff ff ff
48 85 c0 74 10 48 89 c7 48 89 45 f8 e8 56 d4 ff ff 48 8b 45 f8 c9 c3
0f 1f 44 00 00 <f6> 46 0c 02 0f 85 9b 00 00 00 83 7e 04 ff 0f 84 91 00
00 00 83
[19521409.753799] RIP: __atime_needs_update+0x5/0x190 RSP: ffff9dee09c2fc48
[19521409.761228] CR2: 000000000000000c
```

In the core dump, we tried to figure out how this NULL pointer Oops
happened. It looks like when the program `node-exporter` tried to access
`/proc/net/sockstat` and `walk_component()` reached `/proc/net`, it got
a dentry whose `d_inode` is NULL while the other fields have data.

```
struct dentry {
  ...
  d_name = {
    {
      {
        hash = 2805607892,
        len = 3
      },
      hash_len = 15690509780
    },
    name = 0xffff8a4281d01ef8 "net"
  },
  struct inode *d_inode = 0x0         <======= d_inode is NULL and causes the Oops!
  -> NULL
  d_iname = "net\000:01:00.0\000\000sage_in_bytes\000B\212\377",
...
```

We extracted the nameidata from the crash dump as well; `link_inode` is
NULL, so it looks like either `lookup_slow` or `lookup_fast` returned a
dentry whose inode is NULL while the other fields look normal.

```
struct nameidata {
  last = {
    {
      {
        hash = 2805607892,
        len = 3
      },
      hash_len = 15690509780
    },
    name = 0xffff8a34397b4022 "net/sockstat"
  },
  struct filename *name = 0xffff8a34397b4000
  -> {
       name = 0xffff8a34397b401c "/proc/net/sockstat",
       uptr = 0xc00175a020 <Address 0xc00175a020 out of bounds>,
       aname = 0xffff8a3b2f3c9860,
       refcnt = 2,
       iname = 0xffff8a34397b401c "/proc/net/sockstat"
     }
  struct nameidata *saved = 0x0
  -> NULL
  struct inode *link_inode = 0x0      <======= link_inode is NULL as well!
  -> NULL
}
```

We tried to reproduce this issue at the beginning; however, it turns
out to be difficult. We keep running `while true; do cat
/proc/net/sockstat; done`, but have not reproduced it so far. In the
past year, we found only two similar crashes across thousands of
servers in our production.

Normally `link_inode` should always have a value, according to the
result of our tiny bpftrace program.

```
# /tmp/trace_walk_component.bt
kprobe:walk_component {
  $p=((struct nameidata*) arg0);
  printf("nameidata->last.name: %s, nameidata->link_inode: %p\n",
str($p->last.name), $p->link_inode);
}
```

```
# Output
nameidata->last.name: net/sockstat, nameidata->link_inode: 0xffffffffab299966
nameidata->last.name: net/sockstat, nameidata->link_inode: 0xffff9a4efe813ab8
nameidata->last.name: net/sockstat, nameidata->link_inode: 0xffffffffab299966
nameidata->last.name: net/sockstat, nameidata->link_inode: 0xffff9a4efe813ab8
nameidata->last.name: net/sockstat, nameidata->link_inode: 0xffffffffab299966
nameidata->last.name: net/sockstat, nameidata->link_inode: 0xffffffffab299966
```

We searched past kernel threads and could not find a similar crash,
but we did find a similar case in another user's blog:
https://utcc.utoronto.ca/~cks/space/blog/linux/Ubuntu1804OddKernelPanic
However, the user in that blog did not figure out the reason either,
although their crash stack is exactly the same as ours.

Is this a known bug that corrupts the dentry? Because we cannot
reproduce the issue so far, it is difficult to verify whether it is
fixed in mainline. So we are writing this email to ask for insights
from other Linux developers; any replies would be appreciated.

Thank you in advance.

The attached files contain the dentry and nameidata extracted from the
core dump; not sure whether they are helpful for checking this Oops.

-- 
Best Regards,
Haosdent Huang

[-- Attachment #2: dentry.txt --]
[-- Type: text/plain, Size: 15045 bytes --]

struct dentry {
  d_flags = 32904, 
  d_seq = {
    sequence = 4
  }, 
  d_hash = {
    next = 0x0, 
    pprev = 0x0
  }, 
  struct dentry *d_parent = 0xffff8a4b3b809500
  -> {
       d_flags = 2097152, 
       d_seq = {
         sequence = 2
       }, 
       d_hash = {
         next = 0x0, 
         pprev = 0x0
       }, 
       d_parent = 0xffff8a4b3b809500, 
       d_name = {
         {
           {
             hash = 0, 
             len = 1
           }, 
           hash_len = 4294967296
         }, 
         name = 0xffff8a4b3b809538 "/"
       }, 
       d_inode = 0xffff8a4b3b818598, 
       d_iname = "/\000d-usb-pod.ko\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000", 
       d_lockref = {
         {
           lock_count = 3208340570112, 
           {
             lock = {
               {
                 rlock = {
                   raw_lock = {
                     val = {
                       counter = 0
                     }
                   }
                 }
               }
             }, 
             count = 747
           }
         }
       }, 
       d_op = 0x0, 
       d_sb = 0xffff8a3b3f80e800, 
       d_time = 0, 
       d_fsdata = 0x0, 
       {
         d_lru = {
           next = 0xffff8a4b3b809580, 
           prev = 0xffff8a4b3b809580
         }, 
         d_wait = 0xffff8a4b3b809580
       }, 
       d_child = {
         next = 0xffff8a4b3b809590, 
         prev = 0xffff8a4b3b809590
       }, 
       d_subdirs = {
         next = 0xffff8a42a2734a50, 
         prev = 0xffff8a4b3b808bd0
       }, 
       d_u = {
         d_alias = {
           next = 0x0, 
           pprev = 0xffff8a4b3b8186d8
         }, 
         d_in_lookup_hash = {
           next = 0x0, 
           pprev = 0xffff8a4b3b8186d8
         }, 
         d_rcu = {
           next = 0x0, 
           func = 0xffff8a4b3b8186d8
         }
       }
     }
  d_name = {
    {
      {
        hash = 2805607892, 
        len = 3
      }, 
      hash_len = 15690509780
    }, 
    name = 0xffff8a4281d01ef8 "net"
  }, 
  struct inode *d_inode = 0x0
  -> NULL
  d_iname = "net\000:01:00.0\000\000sage_in_bytes\000B\212\377", 
  d_lockref = {
    {
      lock_count = 18446743523953737728, 
      {
        lock = {
          {
            rlock = {
              raw_lock = {
                val = {
                  counter = 0
                }
              }
            }
          }
        }, 
        count = -128
      }
    }
  }, 
  const struct dentry_operations *d_op = 0xffffffff82441ec0
  -> <simple_dentry_operations> {
       d_revalidate = 0x0, 
       d_weak_revalidate = 0x0, 
       d_hash = 0x0, 
       d_compare = 0x0, 
       d_delete = 0xffffffff818a2cf0 <always_delete_dentry>, 
       d_init = 0x0, 
       d_release = 0x0, 
       d_prune = 0x0, 
       d_iput = 0x0, 
       d_dname = 0x0, 
       d_automount = 0x0, 
       d_manage = 0x0, 
       d_real = 0x0
     }
  struct super_block *d_sb = 0xffff8a3b3f80e800
  -> {
       s_list = {
         next = 0xffff8a3b3aecf000, 
         prev = 0xffff8a3b3f80d000
       }, 
       s_dev = 4, 
       s_blocksize_bits = 10 '\n', 
       s_blocksize = 1024, 
       s_maxbytes = 2147483647, 
       s_type = 0xffffffff82b3bc80, 
       s_op = 0xffffffff82446100, 
       dq_op = 0x0, 
       s_qcop = 0x0, 
       s_export_op = 0x0, 
       s_flags = 1614809098, 
       s_iflags = 22, 
       s_magic = 40864, 
       s_root = 0xffff8a4b3b809500, 
       s_umount = {
         count = {
           counter = 0
         }, 
         wait_list = {
           next = 0xffff8a3b3f80e878, 
           prev = 0xffff8a3b3f80e878
         }, 
         wait_lock = {
           raw_lock = {
             val = {
               counter = 0
             }
           }
         }, 
         osq = {
           tail = {
             counter = 0
           }
         }, 
         owner = 0x1
       }, 
       s_count = 1, 
       s_active = {
         counter = 17
       }, 
       s_security = 0x0, 
       s_xattr = 0x0, 
       s_cop = 0x0, 
       s_anon = {
         first = 0x0
       }, 
       s_mounts = {
         next = 0xffff8a4b3be4aa70, 
         prev = 0xffff8a4b25348370
       }, 
       s_bdev = 0x0, 
       s_bdi = 0xffffffff82aff940, 
       s_mtd = 0x0, 
       s_instances = {
         next = 0x0, 
         pprev = 0xffff8a3b232370e8
       }, 
       s_quota_types = 0, 
       s_dquot = {
         flags = 0, 
         dqio_sem = {
           count = {
             counter = 0
           }, 
           wait_list = {
             next = 0xffff8a3b3f80e910, 
             prev = 0xffff8a3b3f80e910
           }, 
           wait_lock = {
             raw_lock = {
               val = {
                 counter = 0
               }
             }
           }, 
           osq = {
             tail = {
               counter = 0
             }
           }, 
           owner = 0x0
         }, 
         files = {0x0, 0x0, 0x0}, 
         info = {{
             dqi_format = 0x0, 
             dqi_fmt_id = 0, 
             dqi_dirty_list = {
               next = 0x0, 
               prev = 0x0
             }, 
             dqi_flags = 0, 
             dqi_bgrace = 0, 
             dqi_igrace = 0, 
             dqi_max_spc_limit = 0, 
             dqi_max_ino_limit = 0, 
             dqi_priv = 0x0
           }, {
             dqi_format = 0x0, 
             dqi_fmt_id = 0, 
             dqi_dirty_list = {
               next = 0x0, 
               prev = 0x0
             }, 
             dqi_flags = 0, 
             dqi_bgrace = 0, 
             dqi_igrace = 0, 
             dqi_max_spc_limit = 0, 
             dqi_max_ino_limit = 0, 
             dqi_priv = 0x0
           }, {
             dqi_format = 0x0, 
             dqi_fmt_id = 0, 
             dqi_dirty_list = {
               next = 0x0, 
               prev = 0x0
             }, 
             dqi_flags = 0, 
             dqi_bgrace = 0, 
             dqi_igrace = 0, 
             dqi_max_spc_limit = 0, 
             dqi_max_ino_limit = 0, 
             dqi_priv = 0x0
           }}, 
         ops = {0x0, 0x0, 0x0}
       }, 
       s_writers = {
         frozen = 0, 
         wait_unfrozen = {
           lock = {
             {
               rlock = {
                 raw_lock = {
                   val = {
                     counter = 0
                   }
                 }
               }
             }
           }, 
           head = {
             next = 0xffff8a3b3f80ea48, 
             prev = 0xffff8a3b3f80ea48
           }
         }, 
         rw_sem = {{
             rss = {
               gp_state = 0, 
               gp_count = 0, 
               gp_wait = {
                 lock = {
                   {
                     rlock = {
                       raw_lock = {
                         val = {
                           counter = 0
                         }
                       }
                     }
                   }
                 }, 
                 head = {
                   next = 0xffff8a3b3f80ea68, 
                   prev = 0xffff8a3b3f80ea68
                 }
               }, 
               cb_state = 0, 
               cb_head = {
                 next = 0x0, 
                 func = 0x0
               }, 
               gp_type = RCU_SCHED_SYNC
             }, 
             read_count = 0x2896c, 
             rw_sem = {
               count = {
                 counter = 0
               }, 
               wait_list = {
                 next = 0xffff8a3b3f80eaa8, 
                 prev = 0xffff8a3b3f80eaa8
               }, 
               wait_lock = {
                 raw_lock = {
                   val = {
                     counter = 0
                   }
                 }
               }, 
               osq = {
                 tail = {
                   counter = 0
                 }
               }, 
               owner = 0x0
             }, 
             writer = {
               task = 0x0
             }, 
             readers_block = 0
           }, {
             rss = {
               gp_state = 0, 
               gp_count = 0, 
               gp_wait = {
                 lock = {
                   {
                     rlock = {
                       raw_lock = {
                         val = {
                           counter = 0
                         }
                       }
                     }
                   }
                 }, 
                 head = {
                   next = 0xffff8a3b3f80eae8, 
                   prev = 0xffff8a3b3f80eae8
                 }
               }, 
               cb_state = 0, 
               cb_head = {
                 next = 0x0, 
                 func = 0x0
               }, 
               gp_type = RCU_SCHED_SYNC
             }, 
             read_count = 0x289a8, 
             rw_sem = {
               count = {
                 counter = 0
               }, 
               wait_list = {
                 next = 0xffff8a3b3f80eb28, 
                 prev = 0xffff8a3b3f80eb28
               }, 
               wait_lock = {
                 raw_lock = {
                   val = {
                     counter = 0
                   }
                 }
               }, 
               osq = {
                 tail = {
                   counter = 0
                 }
               }, 
               owner = 0x0
             }, 
             writer = {
               task = 0x0
             }, 
             readers_block = 0
           }, {
             rss = {
               gp_state = 0, 
               gp_count = 0, 
               gp_wait = {
                 lock = {
                   {
                     rlock = {
                       raw_lock = {
                         val = {
                           counter = 0
                         }
                       }
                     }
                   }
                 }, 
                 head = {
                   next = 0xffff8a3b3f80eb68, 
                   prev = 0xffff8a3b3f80eb68
                 }
               }, 
               cb_state = 0, 
               cb_head = {
                 next = 0x0, 
                 func = 0x0
               }, 
               gp_type = RCU_SCHED_SYNC
             }, 
             read_count = 0x289ac, 
             rw_sem = {
               count = {
                 counter = 0
               }, 
               wait_list = {
                 next = 0xffff8a3b3f80eba8, 
                 prev = 0xffff8a3b3f80eba8
               }, 
               wait_lock = {
                 raw_lock = {
                   val = {
                     counter = 0
                   }
                 }
               }, 
               osq = {
                 tail = {
                   counter = 0
                 }
               }, 
               owner = 0x0
             }, 
             writer = {
               task = 0x0
             }, 
             readers_block = 0
           }}
       }, 
       s_id = "proc\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000", 
       s_uuid = {
         b = "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"
       }, 
       s_fs_info = 0xffffffff82a59c60, 
       s_max_links = 0, 
       s_mode = 0, 
       s_time_gran = 1, 
       s_vfs_rename_mutex = {
         owner = {
           counter = 0
         }, 
         wait_lock = {
           {
             rlock = {
               raw_lock = {
                 val = {
                   counter = 0
                 }
               }
             }
           }
         }, 
         osq = {
           tail = {
             counter = 0
           }
         }, 
         wait_list = {
           next = 0xffff8a3b3f80ec30, 
           prev = 0xffff8a3b3f80ec30
         }
       }, 
       s_subtype = 0x0, 
       s_d_op = 0x0, 
       cleancache_poolid = -1, 
       s_shrink = {
         count_objects = 0xffffffff81878e80 <perf_trace_consume_skb>, 
         scan_objects = 0xffffffff8187a6d0 <super_cache_scan>, 
         seeks = 2, 
         batch = 1024, 
         flags = 3, 
         list = {
           next = 0xffff8a3b3aecf480, 
           prev = 0xffff8a3b3f80d480
         }, 
         nr_deferred = 0xffff8a3b3f8016f0
       }, 
       s_remove_count = {
         counter = 0
       }, 
       s_readonly_remount = 0, 
       s_dio_done_wq = 0x0, 
       s_pins = {
         first = 0x0
       }, 
       s_user_ns = 0xffffffff82a52f00, 
       s_dentry_lru = {
         node = 0xffff8a4b3be2ec80, 
         list = {
           next = 0xffff8a3b3f80d508, 
           prev = 0xffff8a3b3f80ed08
         }
       }, 
       s_inode_lru = {
         node = 0xffff8a4b3be2e300, 
         list = {
           next = 0xffff8a3b3f80ecc8, 
           prev = 0xffff8a3b3aecf4c8
         }
       }, 
       rcu = {
         next = 0x0, 
         func = 0x0
       }, 
       destroy_work = {
         data = {
           counter = 0
         }, 
         entry = {
           next = 0x0, 
           prev = 0x0
         }, 
         func = 0x0
       }, 
       s_sync_lock = {
         owner = {
           counter = 0
         }, 
         wait_lock = {
           {
             rlock = {
               raw_lock = {
                 val = {
                   counter = 0
                 }
               }
             }
           }
         }, 
         osq = {
           tail = {
             counter = 0
           }
         }, 
         wait_list = {
           next = 0xffff8a3b3f80ed58, 
           prev = 0xffff8a3b3f80ed58
         }
       }, 
       s_stack_depth = 2, 
       s_inode_list_lock = {
         {
           rlock = {
             raw_lock = {
               val = {
                 counter = 0
               }
             }
           }
         }
       }, 
       s_inodes = {
         next = 0xffff8a3b3710f0f8, 
         prev = 0xffff8a3b3714a148
       }, 
       s_inode_wblist_lock = {
         {
           rlock = {
             raw_lock = {
               val = {
                 counter = 0
               }
             }
           }
         }
       }, 
       s_inodes_wb = {
         next = 0xffff8a3b3f80eda0, 
         prev = 0xffff8a3b3f80eda0
       }
     }
  d_time = 18446614616713760768, 
  void *d_fsdata = 0x0
  -> NULL
  {
    d_lru = {
      next = 0xffff8a4281d01f40, 
      prev = 0xffff8a4281d01f40
    }, 
    d_wait = 0xffff8a4281d01f40
  }, 
  d_child = {
    next = 0xffff8a42996b94d0, 
    prev = 0xffff8a4b3b8095a0
  }, 
  d_subdirs = {
    next = 0xffff8a4281d01f60, 
    prev = 0xffff8a4281d01f60
  }, 
  d_u = {
    d_alias = {
      next = 0xffff8a4281d000b0, 
      pprev = 0xffffffff81891070 <__d_free>
    }, 
    d_in_lookup_hash = {
      next = 0xffff8a4281d000b0, 
      pprev = 0xffffffff81891070 <__d_free>
    }, 
    d_rcu = {
      next = 0xffff8a4281d000b0, 
      func = 0xffffffff81891070 <__d_free>
    }
  }
}

[-- Attachment #3: nameidata.txt --]
[-- Type: text/plain, Size: 6615 bytes --]

struct nameidata {
  path = {
    mnt = 0xffff8a4b3452e2a0, 
    dentry = 0xffff8a4b3b809500
  }, 
  last = {
    {
      {
        hash = 2805607892, 
        len = 3
      }, 
      hash_len = 15690509780
    }, 
    name = 0xffff8a34397b4022 "net/sockstat"
  }, 
  root = {
    mnt = 0xffff8a4b33b522a0, 
    dentry = 0xffff8a4b34be75c0
  }, 
  struct inode *inode = 0xffff8a4b3b818598
  -> {
       i_mode = 16749, 
       i_opflags = 3, 
       i_uid = {
         val = 0
       }, 
       i_gid = {
         val = 0
       }, 
       i_flags = 0, 
       i_acl = 0xffffffffffffffff, 
       i_default_acl = 0xffffffffffffffff, 
       i_op = 0xffffffff82446200, 
       i_sb = 0xffff8a3b3f80e800, 
       i_mapping = 0xffff8a4b3b818710, 
       i_security = 0x0, 
       i_ino = 1, 
       {
         i_nlink = 8, 
         __i_nlink = 8
       }, 
       i_rdev = 0, 
       i_size = 0, 
       i_atime = {
         tv_sec = 1598869309, 
         tv_nsec = 284000000
       }, 
       i_mtime = {
         tv_sec = 1598869309, 
         tv_nsec = 284000000
       }, 
       i_ctime = {
         tv_sec = 1598869309, 
         tv_nsec = 284000000
       }, 
       i_lock = {
         {
           rlock = {
             raw_lock = {
               val = {
                 counter = 0
               }
             }
           }
         }
       }, 
       i_bytes = 0, 
       i_blkbits = 10, 
       i_write_hint = WRITE_LIFE_NOT_SET, 
       i_blocks = 0, 
       i_state = 0, 
       i_rwsem = {
         count = {
           counter = 0
         }, 
         wait_list = {
           next = 0xffff8a4b3b818648, 
           prev = 0xffff8a4b3b818648
         }, 
         wait_lock = {
           raw_lock = {
             val = {
               counter = 0
             }
           }
         }, 
         osq = {
           tail = {
             counter = 0
           }
         }, 
         owner = 0x1
       }, 
       dirtied_when = 0, 
       dirtied_time_when = 0, 
       i_hash = {
         next = 0x0, 
         pprev = 0x0
       }, 
       i_io_list = {
         next = 0xffff8a4b3b818688, 
         prev = 0xffff8a4b3b818688
       }, 
       i_wb = 0x0, 
       i_wb_frn_winner = 0, 
       i_wb_frn_avg_time = 0, 
       i_wb_frn_history = 0, 
       i_lru = {
         next = 0xffff8a4b3b8186a8, 
         prev = 0xffff8a4b3b8186a8
       }, 
       i_sb_list = {
         next = 0xffff8a4b3b8186b8, 
         prev = 0xffff8a4b3b8186b8
       }, 
       i_wb_list = {
         next = 0xffff8a4b3b8186c8, 
         prev = 0xffff8a4b3b8186c8
       }, 
       {
         i_dentry = {
           first = 0xffff8a4b3b8095b0
         }, 
         i_rcu = {
           next = 0xffff8a4b3b8095b0, 
           func = 0x0
         }
       }, 
       i_version = 0, 
       i_count = {
         counter = 1
       }, 
       i_dio_count = {
         counter = 0
       }, 
       i_writecount = {
         counter = 0
       }, 
       i_readcount = {
         counter = 0
       }, 
       i_fop = 0xffffffff824462c0, 
       i_flctx = 0x0, 
       i_data = {
         host = 0xffff8a4b3b818598, 
         page_tree = {
           gfp_mask = 18350112, 
           rnode = 0x0
         }, 
         tree_lock = {
           {
             rlock = {
               raw_lock = {
                 val = {
                   counter = 0
                 }
               }
             }
           }
         }, 
         i_mmap_writable = {
           counter = 0
         }, 
         i_mmap = {
           rb_root = {
             rb_node = 0x0
           }, 
           rb_leftmost = 0x0
         }, 
         i_mmap_rwsem = {
           count = {
             counter = 0
           }, 
           wait_list = {
             next = 0xffff8a4b3b818748, 
             prev = 0xffff8a4b3b818748
           }, 
           wait_lock = {
             raw_lock = {
               val = {
                 counter = 0
               }
             }
           }, 
           osq = {
             tail = {
               counter = 0
             }
           }, 
           owner = 0x0
         }, 
         nrpages = 0, 
         nrexceptional = 0, 
         writeback_index = 0, 
         a_ops = 0xffffffff82441460, 
         flags = 0, 
         private_lock = {
           {
             rlock = {
               raw_lock = {
                 val = {
                   counter = 0
                 }
               }
             }
           }
         }, 
         gfp_mask = 21102794, 
         private_list = {
           next = 0xffff8a4b3b818798, 
           prev = 0xffff8a4b3b818798
         }, 
         private_data = 0x0, 
         wb_err = 0
       }, 
       i_devices = {
         next = 0xffff8a4b3b8187b8, 
         prev = 0xffff8a4b3b8187b8
       }, 
       {
         i_pipe = 0x2f97885e, 
         i_bdev = 0x2f97885e, 
         i_cdev = 0x2f97885e, 
         i_link = 0x2f97885e <Address 0x2f97885e out of bounds>, 
         i_dir_seq = 798459998
       }, 
       i_generation = 0, 
       i_fsnotify_mask = 0, 
       i_fsnotify_marks = 0x0, 
       i_crypt_info = 0x0, 
       i_private = 0x0
     }
  flags = 81, 
  seq = 2, 
  m_seq = 1323106, 
  last_type = 0, 
  depth = 1, 
  total_link_count = 1, 
  struct saved *stack = 0xffff9dee09c2fde8
  -> {
       link = {
         mnt = 0xffff8a4b3452e2a0, 
         dentry = 0xffff8a4281d01ec0
       }, 
       done = {
         fn = 0x0, 
         arg = 0xffffffff818703af <__check_object_size+175>
       }, 
       name = 0xffff8a3b2f3c9a88 "\240\"\265\063K\212\377\377\300u\276\064K\212\377\377", 
       seq = 4
     }
  internal = {{
      link = {
        mnt = 0xffff8a4b3452e2a0, 
        dentry = 0xffff8a4281d01ec0
      }, 
      done = {
        fn = 0x0, 
        arg = 0xffffffff818703af <__check_object_size+175>
      }, 
      name = 0xffff8a3b2f3c9a88 "\240\"\265\063K\212\377\377\300u\276\064K\212\377\377", 
      seq = 4
    }, {
      link = {
        mnt = 0xffffffff81881947 <path_get+39>, 
        dentry = 0x23c313ddc0523f00
      }, 
      done = {
        fn = 0xd, 
        arg = 0x0
      }, 
      name = 0x4000 <Address 0x4000 out of bounds>, 
      seq = 762128000
    }}, 
  struct filename *name = 0xffff8a34397b4000
  -> {
       name = 0xffff8a34397b401c "/proc/net/sockstat", 
       uptr = 0xc00175a020 <Address 0xc00175a020 out of bounds>, 
       aname = 0xffff8a3b2f3c9860, 
       refcnt = 2, 
       iname = 0xffff8a34397b401c "/proc/net/sockstat"
     }
  struct nameidata *saved = 0x0
  -> NULL
  struct inode *link_inode = 0x0
  -> NULL
  root_seq = 2, 
  dfd = -100
}
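As a quick sanity check on the dump above, the packed fields decode consistently. A small sketch, with values copied from the nameidata/inode dump and assuming mainline conventions (`hashlen_create()` from include/linux/stringhash.h, mode bits from include/uapi/linux/stat.h) — these may differ on a heavily patched tree:

```python
import stat

def hashlen_create(hash_val, length):
    # struct qstr packs the component length into the high 32 bits
    # and the name hash into the low 32 bits.
    return (length << 32) | (hash_val & 0xFFFFFFFF)

# nd->last: hash = 2805607892, len = 3 -> the "net" component of "net/sockstat"
assert hashlen_create(2805607892, 3) == 15690509780

# inode->i_mode = 16749 decodes to a world-readable directory,
# consistent with /proc/net being dr-xr-xr-x
print(oct(16749))            # 0o40555
print(stat.filemode(16749))  # dr-xr-xr-x
```

So the nameidata itself looks internally consistent: the walk was at the "net" component of "/proc/net/sockstat", and the inode it holds is a plausible /proc/net directory inode.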


end of thread, other threads:[~2021-05-06 10:21 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAFt=RON+KYYf5yt9vM3TdOSn4zco+3XtFyi3VDRr1vbQUBPZ0g@mail.gmail.com>
2021-04-25 16:50 ` NULL pointer dereference when access /proc/net Al Viro
2021-04-25 17:04   ` haosdent
2021-04-25 17:14     ` haosdent
2021-04-25 17:22     ` Al Viro
2021-04-25 18:00       ` haosdent
2021-04-25 18:15         ` haosdent
2021-04-26 17:16       ` haosdent
2021-04-26 17:30         ` Al Viro
2021-05-03 15:31           ` haosdent
2021-05-06 10:21             ` haosdent
2021-04-25 15:47 haosdent
