linux-kernel.vger.kernel.org archive mirror
From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
	Chris Mason <clm@fb.com>, Josef Bacik <josef@toxicpanda.com>,
	David Sterba <dsterba@suse.com>,
	linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH][next] btrfs: Fix multiple out-of-bounds warnings
Date: Fri, 2 Jul 2021 13:17:47 +0200	[thread overview]
Message-ID: <20210702111747.GF2610@twin.jikos.cz> (raw)
In-Reply-To: <ba89916a-f141-2962-2526-89bd43e75a42@gmx.com>

On Fri, Jul 02, 2021 at 06:20:33PM +0800, Qu Wenruo wrote:
> 
> 
> > On 2021/7/2 9:06 AM, Gustavo A. R. Silva wrote:
> > Fix the following out-of-bounds warnings by using a flexible-array
> > member *pages[] at the bottom of struct extent_buffer:
> >
> > fs/btrfs/disk-io.c:225:34: warning: array subscript 1 is above array bounds of ‘struct page *[1]’ [-Warray-bounds]
> 
> The involved code looks like:
> 
> static void csum_tree_block(struct extent_buffer *buf, u8 *result)
> {
>          struct btrfs_fs_info *fs_info = buf->fs_info;
>          const int num_pages = fs_info->nodesize >> PAGE_SHIFT;
> 	...
>          for (i = 1; i < num_pages; i++) {
>                  kaddr = page_address(buf->pages[i]);
>                  crypto_shash_update(shash, kaddr, PAGE_SIZE);
>          }
> 
> In the PowerPC case, the page size is 64K and the btrfs nodesize is at
> most 64K, thus num_pages will be either 0 or 1.
> 
> In that case, the loop body is never entered, so it's not actually
> possible to go beyond the array boundary.
> 
> To me, the real problem is that we have no way to tell the compiler
> that fs_info->nodesize is guaranteed to be no larger than 64K.
> 
> 
> Using a flexible array can silence the warning, but it merely masks the
> problem, as the compiler then has no idea how large the array can
> really be.

Agreed, that's the problem: we'd be replacing compile-time static
information about the array size with dynamic information.
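To make the tradeoff concrete, here is a reduced sketch (hypothetical
struct names, not the real struct extent_buffer): with a fixed-size
member the array bound is compile-time knowledge that -Warray-bounds can
check against, while a flexible array member carries no such bound.

```c
#include <stddef.h>

#define SKETCH_INLINE_PAGES 16	/* stand-in for the real page count */

/* Fixed-size member: the bound (16) is known at compile time, which is
 * exactly what -Warray-bounds uses to flag eb->pages[16] and beyond. */
struct fixed_eb {
	unsigned long len;
	void *pages[SKETCH_INLINE_PAGES];
};

/* Flexible array member: the object's real size is only known at
 * allocation time, so the compiler can no longer bound the subscript. */
struct flex_eb {
	unsigned long len;
	void *pages[];		/* contributes no storage to sizeof */
};
```

Note that sizeof(struct flex_eb) ends where the flexible member begins;
all knowledge of the array's extent has moved to runtime allocation.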

> David still has the final say on how to fix it, but I'm really
> wondering whether there is any way to give the compiler a hint about
> the possible value range of things like fs_info->nodesize.

We can add some macros that are also page-size dependent and evaluate to
a constant, which can in turn be used to optimize the loop down to a
single call of the loop body.
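Such a page-size-dependent constant could look roughly like this
(hypothetical names; in the kernel, PAGE_SIZE comes from the
architecture config rather than a local define):

```c
/* Assumed values for this sketch: a 64K-page config (e.g. ppc64) and
 * the btrfs maximum nodesize of 64K. */
#define SKETCH_PAGE_SIZE	(64 * 1024)
#define SKETCH_MAX_NODESIZE	(64 * 1024)

/* Compile-time upper bound on pages per tree block, never below 1. */
#define SKETCH_MAX_TREE_BLOCK_PAGES \
	(SKETCH_MAX_NODESIZE / SKETCH_PAGE_SIZE ? \
	 SKETCH_MAX_NODESIZE / SKETCH_PAGE_SIZE : 1)
```

On a 64K-page configuration this evaluates to 1, so a loop bounded by it
can be reduced by the compiler to a single execution of the body, and
the bound stays consistent with the pages[1] array size.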

Looking at csum_tree_block, we should really use the num_extent_pages
helper, which does the same thing but handles the case where
nodesize >> PAGE_SHIFT is zero (and returns 1).
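The idea behind that helper can be sketched as follows (a standalone
approximation with the page shift passed in as a parameter; the real
helper lives in fs/btrfs/ and its exact body may differ by kernel
version):

```c
/* Pages covered by a tree block of the given length: len >> page_shift,
 * but never less than 1, covering the subpage case where the whole
 * block fits inside a single page (e.g. 16K nodesize on 64K pages). */
static unsigned long sketch_num_extent_pages(unsigned long len,
					     unsigned int page_shift)
{
	unsigned long n = len >> page_shift;

	return n ? n : 1;
}
```

With this shape, the "0 pages" result that made the original num_pages
computation confusing simply cannot occur.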


Thread overview: 3+ messages
2021-07-02  1:06 [PATCH][next] btrfs: Fix multiple out-of-bounds warnings Gustavo A. R. Silva
2021-07-02 10:20 ` Qu Wenruo
2021-07-02 11:17   ` David Sterba [this message]
