* Coverity: read_one_block_group(): Concurrent data access violations
From: coverity-bot @ 2019-11-12  1:36 UTC
  To: Qu Wenruo
  Cc: David Sterba, Anand Jain, Johannes Thumshirn,
	Gustavo A. R. Silva, linux-next

Hello!

This is an experimental automated report about issues detected by Coverity
from a scan of next-20191108 as part of the linux-next weekly scan project:
https://scan.coverity.com/projects/linux-next-weekly-scan

You're getting this email because you were associated with the identified
lines of code (noted below) that were touched by recent commits:

593669fa8fd7 ("btrfs: block-group: Refactor btrfs_read_block_groups()")

Coverity reported the following:

*** CID 1487834:  Concurrent data access violations  (MISSING_LOCK)
/fs/btrfs/block-group.c: 1721 in read_one_block_group()
1715     		 *    truncate the old free space cache inode and
1716     		 *    setup a new one.
1717     		 * b) Setting 'dirty flag' makes sure that we flush
1718     		 *    the new space cache info onto disk.
1719     		 */
1720     		if (btrfs_test_opt(info, SPACE_CACHE))
vvv     CID 1487834:  Concurrent data access violations  (MISSING_LOCK)
vvv     Accessing "cache->disk_cache_state" without holding lock "btrfs_block_group_cache.lock". Elsewhere, "btrfs_block_group_cache.disk_cache_state" is accessed with "btrfs_block_group_cache.lock" held 12 out of 13 times (6 of these accesses strongly imply that it is necessary).
1721     			cache->disk_cache_state = BTRFS_DC_CLEAR;
1722     	}
1723     	read_extent_buffer(leaf, &bgi, btrfs_item_ptr_offset(leaf, slot),
1724     			   sizeof(bgi));
1725     	if (!mixed && ((cache->flags & BTRFS_BLOCK_GROUP_METADATA) &&
1726     	    (cache->flags & BTRFS_BLOCK_GROUP_DATA))) {

If this is a false positive, please let us know so we can mark it as
such, or teach the Coverity rules to be smarter. If not, please make
sure fixes get into linux-next. :) For patches fixing this, please
include these lines (but double-check the "Fixes" first):

Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1487834 ("Concurrent data access violations")
Fixes: 593669fa8fd7 ("btrfs: block-group: Refactor btrfs_read_block_groups()")


Thanks for your attention!

-- 
Coverity-bot


* Re: Coverity: read_one_block_group(): Concurrent data access violations
From: Qu WenRuo @ 2019-11-12  2:05 UTC
  To: coverity-bot
  Cc: David Sterba, Anand Jain, Johannes Thumshirn,
	Gustavo A. R. Silva, linux-next



On 2019/11/12 9:36 AM, coverity-bot wrote:
> [...]
> *** CID 1487834:  Concurrent data access violations  (MISSING_LOCK)
> /fs/btrfs/block-group.c: 1721 in read_one_block_group()
> 1715     		 *    truncate the old free space cache inode and
> 1716     		 *    setup a new one.
> 1717     		 * b) Setting 'dirty flag' makes sure that we flush
> 1718     		 *    the new space cache info onto disk.
> 1719     		 */
> 1720     		if (btrfs_test_opt(info, SPACE_CACHE))
> vvv     CID 1487834:  Concurrent data access violations  (MISSING_LOCK)
> vvv     Accessing "cache->disk_cache_state" without holding lock "btrfs_block_group_cache.lock". Elsewhere, "btrfs_block_group_cache.disk_cache_state" is accessed with "btrfs_block_group_cache.lock" held 12 out of 13 times (6 of these accesses strongly imply that it is necessary).

It's a false alert: read_one_block_group() is running in the mount
context, so nobody else can access the fs yet.

Of course we could hold the lock; it would always hit the uncontended
fast path, so there would be no performance change at all. But I'm not
sure what the proper way to do this in btrfs is.
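
For illustration only, a minimal sketch of what taking the lock would
look like (assuming cache->lock is the spinlock the checker is
referring to):

	if (btrfs_test_opt(info, SPACE_CACHE)) {
		/*
		 * Lock taken only to satisfy the checker; mount is
		 * still single-threaded at this point.
		 */
		spin_lock(&cache->lock);
		cache->disk_cache_state = BTRFS_DC_CLEAR;
		spin_unlock(&cache->lock);
	}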

David, any idea on this?

Thanks,
Qu

> [...]


* Re: Coverity: read_one_block_group(): Concurrent data access violations
From: Kees Cook @ 2019-11-12 22:39 UTC
  To: Qu WenRuo
  Cc: David Sterba, Anand Jain, Johannes Thumshirn,
	Gustavo A. R. Silva, linux-next

On Tue, Nov 12, 2019 at 02:05:40AM +0000, Qu WenRuo wrote:
> On 2019/11/12 9:36 AM, coverity-bot wrote:
> > [...]
> > *** CID 1487834:  Concurrent data access violations  (MISSING_LOCK)
> > /fs/btrfs/block-group.c: 1721 in read_one_block_group()
> > [...]
> 
> It's a false alert: read_one_block_group() is running in the mount
> context, so nobody else can access the fs yet.
> 
> Of course we could hold the lock; it would always hit the uncontended
> fast path, so there would be no performance change at all. But I'm not
> sure what the proper way to do this in btrfs is.

Okay, thanks for double-checking! Yeah, this looks like a hard one to
teach Coverity about... I'll add it to my notes! :)

-- 
Kees Cook


* Re: Coverity: read_one_block_group(): Concurrent data access violations
From: David Sterba @ 2019-11-13 12:54 UTC
  To: Qu WenRuo
  Cc: coverity-bot, David Sterba, Anand Jain, Johannes Thumshirn,
	Gustavo A. R. Silva, linux-next

On Tue, Nov 12, 2019 at 02:05:40AM +0000, Qu WenRuo wrote:
> On 2019/11/12 9:36 AM, coverity-bot wrote:
> > [...]
> > *** CID 1487834:  Concurrent data access violations  (MISSING_LOCK)
> > /fs/btrfs/block-group.c: 1721 in read_one_block_group()
> > [...]
> 
> It's a false alert: read_one_block_group() is running in the mount
> context, so nobody else can access the fs yet.
> 
> Of course we could hold the lock; it would always hit the uncontended
> fast path, so there would be no performance change at all. But I'm not
> sure what the proper way to do this in btrfs is.
> 
> David, any idea on this?

We'd have to add some annotation for the static checkers that would
let them know that the code section is running single-threaded. It
would still be desirable to catch concurrency issues in the rest of
the code, so e.g. completely disabling checks for a certain structure
would not be good.

The mount or unmount functions are probably the best examples where the
instrumentation would be useful and also not intrusive. In open_ctree it
would be from the beginning of the function until the call to
btrfs_init_workqueues.
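
As a purely hypothetical sketch (neither marker exists in the kernel
or in Coverity today; the names are made up for illustration), the
annotation could be a pair of no-op macros that static checkers are
taught to recognize:

	/*
	 * Hypothetical markers: they compile to nothing and only
	 * tell a static checker that the enclosed region runs
	 * single-threaded.
	 */
	#define begin_single_threaded()	do { } while (0)
	#define end_single_threaded()	do { } while (0)

In open_ctree that would roughly be:

	begin_single_threaded();
	/* ... all the early, still single-threaded setup ... */
	end_single_threaded();
	ret = btrfs_init_workqueues(...);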
