From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Gaurav Singh <gaurav1086@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Martin KaFai Lau <kafai@fb.com>, Song Liu <songliubraving@fb.com>,
	Yonghong Song <yhs@fb.com>, Andrii Nakryiko <andriin@fb.com>,
	KP Singh <kpsingh@chromium.org>,
	netdev@vger.kernel.org (open list:XDP (eXpress Data Path)),
	bpf@vger.kernel.org (open list:XDP (eXpress Data Path)),
	linux-kernel@vger.kernel.org (open list)
Subject: Re: [PATCH] bpf: alloc_record_per_cpu Add null check after malloc
Date: Tue, 9 Jun 2020 14:23:15 +0200	[thread overview]
Message-ID: <20200609142315.4d131599@carbon> (raw)
In-Reply-To: <20200609120804.10569-1-gaurav1086@gmail.com>

On Tue,  9 Jun 2020 08:08:03 -0400
Gaurav Singh <gaurav1086@gmail.com> wrote:

> The memset call is made right after malloc call. To fix this, add the null check right after malloc and then do memset.

Did you read the section about how long lines in the patch description should be?

> Signed-off-by: Gaurav Singh <gaurav1086@gmail.com>
> ---
>  samples/bpf/xdp_rxq_info_user.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> diff --git a/samples/bpf/xdp_rxq_info_user.c b/samples/bpf/xdp_rxq_info_user.c
> index 4fe47502ebed..490b07b7df78 100644
> --- a/samples/bpf/xdp_rxq_info_user.c
> +++ b/samples/bpf/xdp_rxq_info_user.c
> @@ -202,11 +202,11 @@ static struct datarec *alloc_record_per_cpu(void)
>  	size = sizeof(struct datarec) * nr_cpus;
>  	array = malloc(size);
> -	memset(array, 0, size);
>  	if (!array) {
>  		fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus);
>  		exit(EXIT_FAIL_MEM);
>  	}
> +	memset(array, 0, size);
>  	return array;
>  }

Looking at the code, this bug happens in more places. Please fix up all of the locations.
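
For reference, here is a minimal sketch of what each of those helpers could look like after the fix. It reuses the identifiers from the sample quoted above (struct datarec, nr_cpus, EXIT_FAIL_MEM) and assumes nr_cpus comes from libbpf's bpf_num_possible_cpus(); as one option it swaps the malloc()+memset() pair for calloc(), which is not what your patch does, but it also keeps the buffer untouched until after the NULL check:

  static struct datarec *alloc_record_per_cpu(void)
  {
  	unsigned int nr_cpus = bpf_num_possible_cpus(); /* assumed, via libbpf */
  	struct datarec *array;

  	/* calloc() zero-initializes the allocation, so no separate
  	 * memset() is needed, and nothing writes to the buffer
  	 * before the NULL check below.
  	 */
  	array = calloc(nr_cpus, sizeof(struct datarec));
  	if (!array) {
  		fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus);
  		exit(EXIT_FAIL_MEM);
  	}
  	return array;
  }

Whether you switch to calloc() or simply move the memset() after the check, as your patch does, is up to you; the important part is that the pointer is checked before anything uses it.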

I think this fix should go through the "bpf" tree.
Please read:

Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 5+ messages
2020-06-09 12:08 [PATCH] bpf: alloc_record_per_cpu Add null check after malloc Gaurav Singh
2020-06-09 12:23 ` Jesper Dangaard Brouer [this message]
  -- strict thread matches above, loose matches on Subject: below --
2020-06-09 11:38 Gaurav Singh
2020-06-09 11:38 ` [PATCH] bpf: alloc_record_per_cpu Add null check after malloc Gaurav Singh
2020-06-09 11:55   ` Greg KH
     [not found] <CAFAFadDVe1Au2eJ8ho_cK1riwf9FDaGck3o+VEcKpqRgO5qXdA@mail.gmail.com>
2020-06-09  6:50 ` Jesper Dangaard Brouer
