All of lore.kernel.org
* [Cluster-devel] BUG? racy access to i_diskflags
       [not found] <AANLkTi=+JtX-68=40B57K7cs9_F47Skhb7PfxJn9Dmor@mail.gmail.com>
@ 2010-08-17 10:40 ` Steven Whitehouse
  2022-06-15  8:43 ` Steven Whitehouse
  1 sibling, 0 replies; 2+ messages in thread
From: Steven Whitehouse @ 2010-08-17 10:40 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Tue, 2010-08-17 at 13:28 +0900, shin hong wrote:
> Hi. I am reporting a suspected race that I noticed while
> reading inode_go_lock() in gfs2/glops.c in Linux 2.6.35.
> 
> Since I do not have much background on GFS2, I am not certain
> whether the issue is serious or not. But please examine the issue
> and let me know your opinion.
> 
> It seems that inode_go_lock() accesses gfs2_inode's i_diskflags field
> without any lock held.
> 
> But since do_gfs2_set_flags() updates gfs2_inode's i_diskflags,
> concurrent execution with inode_go_lock() might result in
> race conditions.
> 
> Could you examine the issue please?
> 
> Sincerely
> Shin Hong

That looks ok to me. The access in inode_go_lock() occurs when the glock
on the inode has been acquired, but before any process (such as one
calling do_gfs2_set_flags(), for example) is able to access the inode.

The flags access in inode_go_lock() is there to ensure that if a node
crashes part way through truncating an inode, the truncated blocks are
not seen by any other process afterwards. It is required because it is
impossible to guarantee that a truncation will always fit inside a
single transaction.

Steve.

* [Cluster-devel] BUG? racy access to i_diskflags
       [not found] <AANLkTi=+JtX-68=40B57K7cs9_F47Skhb7PfxJn9Dmor@mail.gmail.com>
  2010-08-17 10:40 ` [Cluster-devel] BUG? racy access to i_diskflags Steven Whitehouse
@ 2022-06-15  8:43 ` Steven Whitehouse
  1 sibling, 0 replies; 2+ messages in thread
From: Steven Whitehouse @ 2022-06-15  8:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Tue, 2010-08-17 at 13:28 +0900, shin hong wrote:
> Hi. I am reporting a suspected race that I noticed while
> reading inode_go_lock() in gfs2/glops.c in Linux 2.6.35.
> 
> Since I do not have much background on GFS2, I am not certain
> whether the issue is serious or not. But please examine the issue
> and let me know your opinion.
> 
> It seems that inode_go_lock() accesses gfs2_inode's i_diskflags field
> without any lock held.
> 
> But since do_gfs2_set_flags() updates gfs2_inode's i_diskflags,
> concurrent execution with inode_go_lock() might result in
> race conditions.
> 
> Could you examine the issue please?
> 
> Sincerely
> Shin Hong

Yes, inode_go_lock() does examine those flags, but the layers above
that call should ensure that it is single threaded in effect. Setting
the flags requires that a glock be held, and inode_go_lock() is called
as part of glock acquisition, which is single threaded even if a shared
lock is requested, so it will have completed before
do_gfs2_set_flags() is called. Or perhaps I should say, it should have
completed before then, unless you have found a code path where that is
not the case?

Steve.
