Linux-EROFS Archive on lore.kernel.org
* [bug report] staging: erofs: tidy up decompression frontend
@ 2019-11-14 19:10 Dan Carpenter
  2019-11-14 22:00 ` Matthew Wilcox
  0 siblings, 1 reply; 10+ messages in thread
From: Dan Carpenter @ 2019-11-14 19:10 UTC (permalink / raw)
  To: xiang; +Cc: linux-fsdevel, linux-erofs

Hello Gao Xiang,

The patch 97e86a858bc3: "staging: erofs: tidy up decompression
frontend" from Jul 31, 2019, leads to the following static checker
warning:

	fs/erofs/zdata.c:443 z_erofs_register_collection()
	error: double unlocked 'cl->lock' (orig line 439)

fs/erofs/zdata.c
   432          cl = z_erofs_primarycollection(pcl);
   433          cl->pageofs = map->m_la & ~PAGE_MASK;
   434  
   435          /*
   436           * lock all primary followed works before visible to others
   437           * and mutex_trylock *never* fails for a new pcluster.
   438           */
   439          mutex_trylock(&cl->lock);
                ^^^^^^^^^^^^^^^^^^^^^^^^
   440  
   441          err = erofs_register_workgroup(inode->i_sb, &pcl->obj, 0);
   442          if (err) {
   443                  mutex_unlock(&cl->lock);
                        ^^^^^^^^^^^^^^^^^^^^^^^
How can we unlock if we don't know that the trylock succeeded?

   444                  kmem_cache_free(pcluster_cachep, pcl);
   445                  return -EAGAIN;
   446          }
   447          /* used to check tail merging loop due to corrupted images */
   448          if (clt->owned_head == Z_EROFS_PCLUSTER_TAIL)
   449                  clt->tailpcl = pcl;
   450          clt->owned_head = &pcl->next;
   451          clt->pcl = pcl;
   452          clt->cl = cl;
   453          return 0;

regards,
dan carpenter

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-11-14 19:10 [bug report] staging: erofs: tidy up decompression frontend Dan Carpenter
@ 2019-11-14 22:00 ` Matthew Wilcox
  2019-11-15  0:45   ` Gao Xiang via Linux-erofs
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2019-11-14 22:00 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-fsdevel, xiang, linux-erofs

On Thu, Nov 14, 2019 at 10:10:03PM +0300, Dan Carpenter wrote:
> 	fs/erofs/zdata.c:443 z_erofs_register_collection()
> 	error: double unlocked 'cl->lock' (orig line 439)
> 
> fs/erofs/zdata.c
>    432          cl = z_erofs_primarycollection(pcl);
>    433          cl->pageofs = map->m_la & ~PAGE_MASK;
>    434  
>    435          /*
>    436           * lock all primary followed works before visible to others
>    437           * and mutex_trylock *never* fails for a new pcluster.
>    438           */
>    439          mutex_trylock(&cl->lock);
>                 ^^^^^^^^^^^^^^^^^^^^^^^^
>    440  
>    441          err = erofs_register_workgroup(inode->i_sb, &pcl->obj, 0);
>    442          if (err) {
>    443                  mutex_unlock(&cl->lock);
>                         ^^^^^^^^^^^^^^^^^^^^^^^
> How can we unlock if we don't know that the trylock succeeded?

The comment says it'll always succeed.  That said, this is an uncommon
pattern -- usually we just mutex_lock().  If there's a good reason to use
mutex_trylock() instead, then I'd prefer it to be guarded with a BUG_ON.
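
As a hedged userspace sketch of the guarded pattern suggested here (pthreads standing in for the kernel mutex API, assert() standing in for BUG_ON), the trylock's result is checked rather than discarded:

```c
#include <assert.h>
#include <pthread.h>

/* Take the lock of a brand-new, not-yet-published object.  A trylock
 * on a freshly initialized, uncontended mutex cannot fail, and the
 * assert (standing in for BUG_ON) makes that assumption explicit
 * instead of silently discarding the return value. */
static void lock_new_object(pthread_mutex_t *lock)
{
	int ret = pthread_mutex_trylock(lock);

	assert(ret == 0);	/* "never fails for a new object" */
}
```

With the check in place, an unexpected contender shows up as a loud failure rather than as a critical section entered without the lock held.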



* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-11-14 22:00 ` Matthew Wilcox
@ 2019-11-15  0:45   ` Gao Xiang via Linux-erofs
  2019-11-15  0:45     ` Gao Xiang
  0 siblings, 1 reply; 10+ messages in thread
From: Gao Xiang via Linux-erofs @ 2019-11-15  0:45 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, xiang, linux-erofs, Dan Carpenter

Hi Dan and Matthew,

On Thu, Nov 14, 2019 at 02:00:15PM -0800, Matthew Wilcox wrote:
> On Thu, Nov 14, 2019 at 10:10:03PM +0300, Dan Carpenter wrote:
> > 	fs/erofs/zdata.c:443 z_erofs_register_collection()
> > 	error: double unlocked 'cl->lock' (orig line 439)
> > 
> > fs/erofs/zdata.c
> >    432          cl = z_erofs_primarycollection(pcl);
> >    433          cl->pageofs = map->m_la & ~PAGE_MASK;
> >    434  
> >    435          /*
> >    436           * lock all primary followed works before visible to others
> >    437           * and mutex_trylock *never* fails for a new pcluster.
> >    438           */
> >    439          mutex_trylock(&cl->lock);
> >                 ^^^^^^^^^^^^^^^^^^^^^^^^
> >    440  
> >    441          err = erofs_register_workgroup(inode->i_sb, &pcl->obj, 0);
> >    442          if (err) {
> >    443                  mutex_unlock(&cl->lock);
> >                         ^^^^^^^^^^^^^^^^^^^^^^^
> > How can we unlock if we don't know that the trylock succeeded?
> 
> The comment says it'll always succeed.  That said, this is an uncommon
> pattern -- usually we just mutex_lock().  If there's a good reason to use
> mutex_trylock() instead, then I'd prefer it to be guarded with a BUG_ON.
>

I think there is no actual problem here; if I am wrong, please kindly point it out.
The selected code snippet is too short. The current full code is:

static struct z_erofs_collection *clregister(struct z_erofs_collector *clt,
					     struct inode *inode,
					     struct erofs_map_blocks *map)
{
	struct z_erofs_pcluster *pcl;
	struct z_erofs_collection *cl;
	int err;

	/* no available workgroup, let's allocate one */
	pcl = kmem_cache_alloc(pcluster_cachep, GFP_NOFS);
	if (!pcl)
		return ERR_PTR(-ENOMEM);

^ Note that this is a new object, so the lock is guaranteed to have been left
unlocked by the last free (and it is first initialized in init_once).

	z_erofs_pcluster_init_always(pcl);

	pcl->obj.index = map->m_pa >> PAGE_SHIFT;

	pcl->length = (map->m_llen << Z_EROFS_PCLUSTER_LENGTH_BIT) |
		(map->m_flags & EROFS_MAP_FULL_MAPPED ?
			Z_EROFS_PCLUSTER_FULL_LENGTH : 0);

	if (map->m_flags & EROFS_MAP_ZIPPED)
		pcl->algorithmformat = Z_EROFS_COMPRESSION_LZ4;
	else
		pcl->algorithmformat = Z_EROFS_COMPRESSION_SHIFTED;

	pcl->clusterbits = EROFS_I(inode)->z_physical_clusterbits[0];
	pcl->clusterbits -= PAGE_SHIFT;

	/* new pclusters should be claimed as type 1, primary and followed */
	pcl->next = clt->owned_head;
	clt->mode = COLLECT_PRIMARY_FOLLOWED;

	cl = z_erofs_primarycollection(pcl);
	cl->pageofs = map->m_la & ~PAGE_MASK;

	/*
	 * lock all primary followed works before visible to others
	 * and mutex_trylock *never* fails for a new pcluster.
	 */
	mutex_trylock(&cl->lock);

^ This was once simply guarded by a BUG_ON, but checkpatch.pl raised a warning;
I can use DBG_BUGON here instead.

	err = erofs_register_workgroup(inode->i_sb, &pcl->obj, 0);
	if (err) {
		mutex_unlock(&cl->lock);

^ As a convention, unlock before freeing, matching the note at the allocation above.

		kmem_cache_free(pcluster_cachep, pcl);
		return ERR_PTR(-EAGAIN);
	}
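
The pattern above, reduced to a hedged userspace sketch (all names here are made up; pthreads stand in for the kernel mutex API, malloc for the slab cache): lock the brand-new object before publishing it, and on a registration failure unlock before freeing, so every freed object has its lock released.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct object {
	pthread_mutex_t lock;
	int registered;
};

/* Hypothetical stand-in for erofs_register_workgroup(): fails on demand. */
static int register_object(struct object *obj, int fail)
{
	if (fail)
		return -1;
	obj->registered = 1;
	return 0;
}

/* Allocate a new object, lock it BEFORE making it visible to others,
 * and on failure unlock before freeing (the convention described
 * above: objects are always freed with the lock released). */
static struct object *create_object(int fail)
{
	struct object *obj = malloc(sizeof(*obj));

	if (!obj)
		return NULL;
	pthread_mutex_init(&obj->lock, NULL);
	obj->registered = 0;

	/* never fails on a brand-new, uncontended mutex */
	assert(pthread_mutex_trylock(&obj->lock) == 0);

	if (register_object(obj, fail)) {
		pthread_mutex_unlock(&obj->lock);
		pthread_mutex_destroy(&obj->lock);
		free(obj);
		return NULL;
	}
	return obj;	/* returned still locked, like the kernel code */
}
```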

Thanks,
Gao Xiang




* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-08-27 10:42     ` Dan Carpenter
@ 2019-08-27 10:46       ` Gao Xiang
  0 siblings, 0 replies; 10+ messages in thread
From: Gao Xiang @ 2019-08-27 10:46 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: xiang, linux-erofs

On Tue, Aug 27, 2019 at 01:42:40PM +0300, Dan Carpenter wrote:
> It turns out that my Smatch cross function DB was slightly out of date
> after the move from staging/ to fs/.  Once I rebuilt the DB then the
> warning went away.
> 
> Anyway, thanks for taking the time to look at this.

That's ok, thanks for taking the time to report it as well :-)

Cheers,
Gao Xiang

> 
> regards,
> dan carpenter
> 


* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-08-27  9:53   ` Dan Carpenter
  2019-08-27 10:05     ` Gao Xiang
@ 2019-08-27 10:42     ` Dan Carpenter
  2019-08-27 10:46       ` Gao Xiang
  1 sibling, 1 reply; 10+ messages in thread
From: Dan Carpenter @ 2019-08-27 10:42 UTC (permalink / raw)
  To: Gao Xiang; +Cc: xiang, linux-erofs

It turns out that my Smatch cross function DB was slightly out of date
after the move from staging/ to fs/.  Once I rebuilt the DB then the
warning went away.

Anyway, thanks for taking the time to look at this.

regards,
dan carpenter



* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-08-27  9:53   ` Dan Carpenter
@ 2019-08-27 10:05     ` Gao Xiang
  2019-08-27 10:42     ` Dan Carpenter
  1 sibling, 0 replies; 10+ messages in thread
From: Gao Xiang @ 2019-08-27 10:05 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: xiang, linux-erofs

On Tue, Aug 27, 2019 at 12:53:47PM +0300, Dan Carpenter wrote:
> On Tue, Aug 27, 2019 at 05:36:29PM +0800, Gao Xiang wrote:
> > Hi Dan,
> > 
> > Thanks for your report.
> > 
> > On Tue, Aug 27, 2019 at 12:03:55PM +0300, Dan Carpenter wrote:
> > > Hello Gao Xiang,
> > > 
> > > This is a semi-automatic email about new static checker warnings.
> > > 
> > > The patch 97e86a858bc3: "staging: erofs: tidy up decompression
> > > frontend" from Jul 31, 2019, leads to the following Smatch complaint:
> > > 
> > >     fs/erofs/zdata.c:670 z_erofs_do_read_page()
> > >     error: we previously assumed 'clt->cl' could be null (see line 596)
> > > 
> > > fs/erofs/zdata.c
> > >    595			/* didn't get a valid collection previously (very rare) */
> > >    596			if (!clt->cl)
> > >                             ^^^^^^^^
> > > New NULL check.
> > > 
> > >    597				goto restart_now;
> > >    598			goto hitted;
> > >    599		}
> > >    600	
> > >    601		/* go ahead the next map_blocks */
> > >    602		debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
> > >    603	
> > >    604		if (z_erofs_collector_end(clt))
> > >    605			fe->backmost = false;
> > >    606	
> > >    607		map->m_la = offset + cur;
> > >    608		map->m_llen = 0;
> > >    609		err = z_erofs_map_blocks_iter(inode, map, 0);
> > >    610		if (unlikely(err))
> > >    611			goto err_out;
> > >    612	
> > >    613	restart_now:
> > >    614		if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
> > >    615			goto hitted;
> > >    616	
> > >    617		err = z_erofs_collector_begin(clt, inode, map);
> > 
> > At a glance, clt->cl is initialized on all successful paths in
> > z_erofs_collector_begin, and all failures fall back to err_out...
> > I have no idea what is wrong here...
> > 
> > Could you share a detailed path from Smatch for the NULL dereference?
> > 
> 
> Ah.  Sorry for that.  It's a false positive.  I will investigate and
> fix Smatch.

Yeah... I was a little confused, since this patch mostly renames things and the
main logic has been unchanged for months. For this case there are two paths:

 1) hit line 614 --> goto hitted --> hit line 633 --> goto next_part (line 670 is skipped);
 2) hit line 617 --> enter z_erofs_collector_begin -->
    all successful paths assign clt->cl, so clt->cl != NULL...
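
The control flow described above, where the NULL check's target is always repopulated before the later dereference, can be reduced to a hedged toy sketch (hypothetical names; not the real erofs code) of why a path-insensitive checker may flag it:

```c
#include <assert.h>
#include <stddef.h>

struct ctx { int *cl; };

/* Mirrors z_erofs_collector_begin(): every successful path assigns
 * c->cl, every failure returns early. */
static int begin(struct ctx *c, int *storage)
{
	c->cl = storage;
	return 0;
}

/* Toy version of the z_erofs_do_read_page() shape: the "!c->cl" test
 * jumps to a label after which begin() reassigns c->cl on success, so
 * the final dereference can never see NULL -- but an analyzer that
 * merges paths sees only "checked for NULL, dereferenced later". */
static int read_one(struct ctx *c, int mapped)
{
	static int cell;

	if (!c->cl)
		goto restart_now;
	goto hitted;

restart_now:
	if (!mapped)
		goto hitted;
	if (begin(c, &cell))
		return -1;
hitted:
	if (!mapped)
		return 0;	/* path 1: skips the dereference entirely */
	return *c->cl;		/* path 2: c->cl is non-NULL on every path here */
}
```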

Though z_erofs_do_read_page is currently somewhat complicated (mostly due to
historical fixes kept backport-friendly), I will simplify this function in a
later version (and with care to avoid introducing new bugs) :-)

Thanks,
Gao Xiang

> 
> regards,
> dan carpenter
> 


* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-08-27  9:36 ` Gao Xiang
@ 2019-08-27  9:53   ` Dan Carpenter
  2019-08-27 10:05     ` Gao Xiang
  2019-08-27 10:42     ` Dan Carpenter
  0 siblings, 2 replies; 10+ messages in thread
From: Dan Carpenter @ 2019-08-27  9:53 UTC (permalink / raw)
  To: Gao Xiang; +Cc: xiang, linux-erofs

On Tue, Aug 27, 2019 at 05:36:29PM +0800, Gao Xiang wrote:
> Hi Dan,
> 
> Thanks for your report.
> 
> On Tue, Aug 27, 2019 at 12:03:55PM +0300, Dan Carpenter wrote:
> > Hello Gao Xiang,
> > 
> > This is a semi-automatic email about new static checker warnings.
> > 
> > The patch 97e86a858bc3: "staging: erofs: tidy up decompression
> > frontend" from Jul 31, 2019, leads to the following Smatch complaint:
> > 
> >     fs/erofs/zdata.c:670 z_erofs_do_read_page()
> >     error: we previously assumed 'clt->cl' could be null (see line 596)
> > 
> > fs/erofs/zdata.c
> >    595			/* didn't get a valid collection previously (very rare) */
> >    596			if (!clt->cl)
> >                             ^^^^^^^^
> > New NULL check.
> > 
> >    597				goto restart_now;
> >    598			goto hitted;
> >    599		}
> >    600	
> >    601		/* go ahead the next map_blocks */
> >    602		debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
> >    603	
> >    604		if (z_erofs_collector_end(clt))
> >    605			fe->backmost = false;
> >    606	
> >    607		map->m_la = offset + cur;
> >    608		map->m_llen = 0;
> >    609		err = z_erofs_map_blocks_iter(inode, map, 0);
> >    610		if (unlikely(err))
> >    611			goto err_out;
> >    612	
> >    613	restart_now:
> >    614		if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
> >    615			goto hitted;
> >    616	
> >    617		err = z_erofs_collector_begin(clt, inode, map);
> 
> At a glance, clt->cl is initialized on all successful paths in
> z_erofs_collector_begin, and all failures fall back to err_out...
> I have no idea what is wrong here...
> 
> Could you share a detailed path from Smatch for the NULL dereference?
> 

Ah.  Sorry for that.  It's a false positive.  I will investigate and
fix Smatch.

regards,
dan carpenter



* Re: [bug report] staging: erofs: tidy up decompression frontend
  2019-08-27  9:03 Dan Carpenter
@ 2019-08-27  9:36 ` Gao Xiang
  2019-08-27  9:53   ` Dan Carpenter
  0 siblings, 1 reply; 10+ messages in thread
From: Gao Xiang @ 2019-08-27  9:36 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: xiang, linux-erofs

Hi Dan,

Thanks for your report.

On Tue, Aug 27, 2019 at 12:03:55PM +0300, Dan Carpenter wrote:
> Hello Gao Xiang,
> 
> This is a semi-automatic email about new static checker warnings.
> 
> The patch 97e86a858bc3: "staging: erofs: tidy up decompression
> frontend" from Jul 31, 2019, leads to the following Smatch complaint:
> 
>     fs/erofs/zdata.c:670 z_erofs_do_read_page()
>     error: we previously assumed 'clt->cl' could be null (see line 596)
> 
> fs/erofs/zdata.c
>    595			/* didn't get a valid collection previously (very rare) */
>    596			if (!clt->cl)
>                             ^^^^^^^^
> New NULL check.
> 
>    597				goto restart_now;
>    598			goto hitted;
>    599		}
>    600	
>    601		/* go ahead the next map_blocks */
>    602		debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
>    603	
>    604		if (z_erofs_collector_end(clt))
>    605			fe->backmost = false;
>    606	
>    607		map->m_la = offset + cur;
>    608		map->m_llen = 0;
>    609		err = z_erofs_map_blocks_iter(inode, map, 0);
>    610		if (unlikely(err))
>    611			goto err_out;
>    612	
>    613	restart_now:
>    614		if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
>    615			goto hitted;
>    616	
>    617		err = z_erofs_collector_begin(clt, inode, map);

At a glance, clt->cl is initialized on all successful paths in
z_erofs_collector_begin, and all failures fall back to err_out...
I have no idea what is wrong here...

Could you share a detailed path from Smatch for the NULL dereference?

Thanks,
Gao Xiang

>    618		if (unlikely(err))
>    619			goto err_out;
>    620	
>    621		/* preload all compressed pages (maybe downgrade role if necessary) */
>    622		if (should_alloc_managed_pages(fe, sbi->cache_strategy, map->m_la))
>    623			cache_strategy = DELAYEDALLOC;
>    624		else
>    625			cache_strategy = DONTALLOC;
>    626	
>    627		preload_compressed_pages(clt, MNGD_MAPPING(sbi),
>    628					 cache_strategy, pagepool);
>    629	
>    630		tight &= (clt->mode >= COLLECT_PRIMARY_HOOKED);
>    631	hitted:
>    632		cur = end - min_t(unsigned int, offset + end - map->m_la, end);
>    633		if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED))) {
>    634			zero_user_segment(page, cur, end);
>    635			goto next_part;
>    636		}
>    637	
>    638		/* let's derive page type */
>    639		page_type = cur ? Z_EROFS_VLE_PAGE_TYPE_HEAD :
>    640			(!spiltted ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
>    641				(tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
>    642					Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
>    643	
>    644		if (cur)
>    645			tight &= (clt->mode >= COLLECT_PRIMARY_FOLLOWED);
>    646	
>    647	retry:
>    648		err = z_erofs_attach_page(clt, page, page_type);
>    649		/* should allocate an additional staging page for pagevec */
>    650		if (err == -EAGAIN) {
>    651			struct page *const newpage =
>    652				__stagingpage_alloc(pagepool, GFP_NOFS);
>    653	
>    654			err = z_erofs_attach_page(clt, newpage,
>    655						  Z_EROFS_PAGE_TYPE_EXCLUSIVE);
>    656			if (likely(!err))
>    657				goto retry;
>    658		}
>    659	
>    660		if (unlikely(err))
>    661			goto err_out;
>    662	
>    663		index = page->index - (map->m_la >> PAGE_SHIFT);
>    664	
>    665		z_erofs_onlinepage_fixup(page, index, true);
>    666	
>    667		/* bump up the number of spiltted parts of a page */
>    668		++spiltted;
>    669		/* also update nr_pages */
>    670		clt->cl->nr_pages = max_t(pgoff_t, clt->cl->nr_pages, index + 1);
>                 ^^^^^^^^^^^^^^^^^                  ^^^^^^^^^^^^^^^^^
> Unchecked dereferences.
> 
>    671	next_part:
>    672		/* can be used for verification */
> 
> regards,
> dan carpenter


* [bug report] staging: erofs: tidy up decompression frontend
@ 2019-08-27  9:03 Dan Carpenter
  2019-08-27  9:36 ` Gao Xiang
  0 siblings, 1 reply; 10+ messages in thread
From: Dan Carpenter @ 2019-08-27  9:03 UTC (permalink / raw)
  To: xiang; +Cc: linux-erofs

Hello Gao Xiang,

This is a semi-automatic email about new static checker warnings.

The patch 97e86a858bc3: "staging: erofs: tidy up decompression
frontend" from Jul 31, 2019, leads to the following Smatch complaint:

    fs/erofs/zdata.c:670 z_erofs_do_read_page()
    error: we previously assumed 'clt->cl' could be null (see line 596)

fs/erofs/zdata.c
   595			/* didn't get a valid collection previously (very rare) */
   596			if (!clt->cl)
                            ^^^^^^^^
New NULL check.

   597				goto restart_now;
   598			goto hitted;
   599		}
   600	
   601		/* go ahead the next map_blocks */
   602		debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
   603	
   604		if (z_erofs_collector_end(clt))
   605			fe->backmost = false;
   606	
   607		map->m_la = offset + cur;
   608		map->m_llen = 0;
   609		err = z_erofs_map_blocks_iter(inode, map, 0);
   610		if (unlikely(err))
   611			goto err_out;
   612	
   613	restart_now:
   614		if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
   615			goto hitted;
   616	
   617		err = z_erofs_collector_begin(clt, inode, map);
   618		if (unlikely(err))
   619			goto err_out;
   620	
   621		/* preload all compressed pages (maybe downgrade role if necessary) */
   622		if (should_alloc_managed_pages(fe, sbi->cache_strategy, map->m_la))
   623			cache_strategy = DELAYEDALLOC;
   624		else
   625			cache_strategy = DONTALLOC;
   626	
   627		preload_compressed_pages(clt, MNGD_MAPPING(sbi),
   628					 cache_strategy, pagepool);
   629	
   630		tight &= (clt->mode >= COLLECT_PRIMARY_HOOKED);
   631	hitted:
   632		cur = end - min_t(unsigned int, offset + end - map->m_la, end);
   633		if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED))) {
   634			zero_user_segment(page, cur, end);
   635			goto next_part;
   636		}
   637	
   638		/* let's derive page type */
   639		page_type = cur ? Z_EROFS_VLE_PAGE_TYPE_HEAD :
   640			(!spiltted ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
   641				(tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
   642					Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
   643	
   644		if (cur)
   645			tight &= (clt->mode >= COLLECT_PRIMARY_FOLLOWED);
   646	
   647	retry:
   648		err = z_erofs_attach_page(clt, page, page_type);
   649		/* should allocate an additional staging page for pagevec */
   650		if (err == -EAGAIN) {
   651			struct page *const newpage =
   652				__stagingpage_alloc(pagepool, GFP_NOFS);
   653	
   654			err = z_erofs_attach_page(clt, newpage,
   655						  Z_EROFS_PAGE_TYPE_EXCLUSIVE);
   656			if (likely(!err))
   657				goto retry;
   658		}
   659	
   660		if (unlikely(err))
   661			goto err_out;
   662	
   663		index = page->index - (map->m_la >> PAGE_SHIFT);
   664	
   665		z_erofs_onlinepage_fixup(page, index, true);
   666	
   667		/* bump up the number of spiltted parts of a page */
   668		++spiltted;
   669		/* also update nr_pages */
   670		clt->cl->nr_pages = max_t(pgoff_t, clt->cl->nr_pages, index + 1);
                ^^^^^^^^^^^^^^^^^                  ^^^^^^^^^^^^^^^^^
Unchecked dereferences.

   671	next_part:
   672		/* can be used for verification */

regards,
dan carpenter

