linux-block.vger.kernel.org archive mirror
From: Ming Lei <tom.leiming@gmail.com>
To: Bart Van Assche <bvanassche@acm.org>
Cc: Ming Lei <ming.lei@redhat.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block <linux-block@vger.kernel.org>,
	stable <stable@vger.kernel.org>, Mark Ray <mark.ray@hpe.com>,
	Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [PATCH V2] blk-mq: avoid sysfs buffer overflow by too many CPU cores
Date: Fri, 16 Aug 2019 12:17:31 +0800	[thread overview]
Message-ID: <CACVXFVNZJswn_zu_K+N2ooLbq1qqrkbknW0Km6R-mHm_nzc=xA@mail.gmail.com> (raw)
In-Reply-To: <effdfa46-880f-2d05-19be-8af4f451b8f4@acm.org>

On Fri, Aug 16, 2019 at 11:42 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 8/15/19 7:54 PM, Ming Lei wrote:
> > It is reported that a sysfs buffer overflow can be triggered when there
> > are too many CPU cores (more than 841 with a 4K PAGE_SIZE) while showing
> > CPUs in blk_mq_hw_sysfs_cpus_show().
> >
> > So use cpumap_print_to_pagebuf() to print the info and fix the potential
> > buffer overflow issue.
> >
> > Cc: stable@vger.kernel.org
> > Cc: Mark Ray <mark.ray@hpe.com>
> > Cc: Greg KH <gregkh@linuxfoundation.org>
> > Fixes: 676141e48af7 ("blk-mq: don't dump CPU -> hw queue map on driver load")
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >   block/blk-mq-sysfs.c | 15 +--------------
> >   1 file changed, 1 insertion(+), 14 deletions(-)
> >
> > diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
> > index d6e1a9bd7131..4d0d32377ba3 100644
> > --- a/block/blk-mq-sysfs.c
> > +++ b/block/blk-mq-sysfs.c
> > @@ -166,20 +166,7 @@ static ssize_t blk_mq_hw_sysfs_nr_reserved_tags_show(struct blk_mq_hw_ctx *hctx,
> >
> >   static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
> >   {
> > -     unsigned int i, first = 1;
> > -     ssize_t ret = 0;
> > -
> > -     for_each_cpu(i, hctx->cpumask) {
> > -             if (first)
> > -                     ret += sprintf(ret + page, "%u", i);
> > -             else
> > -                     ret += sprintf(ret + page, ", %u", i);
> > -
> > -             first = 0;
> > -     }
> > -
> > -     ret += sprintf(ret + page, "\n");
> > -     return ret;
> > +     return cpumap_print_to_pagebuf(true, page, hctx->cpumask);
> >   }
> >
> >   static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_nr_tags = {
>
> Although this patch looks fine to me, shouldn't this attribute be
> documented under Documentation/ABI/?

That is a separate problem, not closely related to this buffer-overflow issue.

I suggest fixing the buffer overflow first, since it can be triggered from userspace.


Thanks,
Ming Lei


Thread overview: 6+ messages
2019-08-16  2:54 [PATCH V2] blk-mq: avoid sysfs buffer overflow by too many CPU cores Ming Lei
2019-08-16  3:39 ` Bart Van Assche
2019-08-16  4:17   ` Ming Lei [this message]
2019-08-16  7:09     ` Greg KH
2019-08-16  7:09 ` Greg KH
2019-08-19  6:12 ` Hannes Reinecke
