From: Puranjay Mohan <puranjay12@gmail.com>
To: Mark Rutland <mark.rutland@arm.com>
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	martin.lau@linux.dev, song@kernel.org, catalin.marinas@arm.com,
	bpf@vger.kernel.org, kpsingh@kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH bpf-next v3 3/3] bpf, arm64: use bpf_jit_binary_pack_alloc
Date: Thu, 22 Jun 2023 10:47:08 +0200	[thread overview]
Message-ID: <CANk7y0jtm9yYobPLsMEHAem+R-wKjVOLWo=EeU-bojYks9tetQ@mail.gmail.com> (raw)
In-Reply-To: <ZJQE9PIjxuA3Q8Sm@FVFF77S0Q05N>

Hi Mark,

On Thu, Jun 22, 2023 at 10:23 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Wed, Jun 21, 2023 at 10:57:20PM +0200, Puranjay Mohan wrote:
> > On Wed, Jun 21, 2023 at 5:31 PM Mark Rutland <mark.rutland@arm.com> wrote:
> > > On Mon, Jun 19, 2023 at 10:01:21AM +0000, Puranjay Mohan wrote:
> > > > @@ -1562,34 +1610,39 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> > > >
> > > >       /* 3. Extra pass to validate JITed code. */
> > > >       if (validate_ctx(&ctx)) {
> > > > -             bpf_jit_binary_free(header);
> > > >               prog = orig_prog;
> > > > -             goto out_off;
> > > > +             goto out_free_hdr;
> > > >       }
> > > >
> > > >       /* And we're done. */
> > > >       if (bpf_jit_enable > 1)
> > > >               bpf_jit_dump(prog->len, prog_size, 2, ctx.image);
> > > >
> > > > -     bpf_flush_icache(header, ctx.image + ctx.idx);
> > > > +     bpf_flush_icache(ro_header, ctx.ro_image + ctx.idx);
> > >
> > > I think this is too early; we haven't copied the instructions into the
> > > ro_header yet, so that still contains stale instructions.
> > >
> > > IIUC the whole point of this is to pack multiple programs into shared ROX
> > > pages, and so there can be an executable mapping of the RO page at this point,
> > > and the CPU can fetch stale instructions through that.
> > >
> > > Note that *regardless* of whether there is an executable mapping at this point
> > > (and even if no executable mapping exists until after the copy), we at least
> > > need a data cache clean to the PoU *after* the copy (so fetches don't get a
> > > stale value from the PoU), and the I-cache maintenance has to happen on the VA
> > > the instructions will be executed from (or VIPT I-caches can still contain stale
> > > instructions).
> >
> > Thanks for catching this; it is a big miss on my side.
> >
> > I was able to reproduce the boot issue from the other thread on my
> > Raspberry Pi. I think it is caused by my incorrect I-cache handling.
> >
> > As you rightly pointed out: we need to do bpf_flush_icache() after
> > copying the instructions to the ro_header, or the CPU can run
> > incorrect instructions.
> >
> > When I move the call to bpf_flush_icache() after
> > bpf_jit_binary_pack_finalize() (which does the copy to ro_header), the
> > boot issue is fixed. Would this change be enough to make this work, or
> > would I need to do more with the data cache as well to catch other
> > edge cases?
>
> AFAICT, bpf_flush_icache() calls flush_icache_range(). Despite its name,
> flush_icache_range() does d-cache maintenance, i-cache maintenance, and context
> synchronization (i.e. it does everything necessary).
>
> As long as you call that with the VAs the code will be executed from, that
> should be sufficient, and you don't need to do any other work.

Thanks for explaining this. With that in mind, I think the following
should work:

bpf_jit_binary_pack_finalize() copies the instructions from rw_header
to ro_header. After the copy, calling bpf_flush_icache(ro_header,
ctx.ro_image + ctx.idx) performs the cache maintenance on the VAs in
ro_header, which is where the code will be executed from.
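
To make sure I get the ordering right in v4, here is a rough sketch of
the tail of bpf_int_jit_compile() as I plan to change it (error paths
abridged; the names ro_header/header, ctx.ro_image and the goto labels
are the ones from v3 and are only illustrative):

	/* 3. Extra pass to validate JITed code (runs on the RW image). */
	if (validate_ctx(&ctx)) {
		prog = orig_prog;
		goto out_free_hdr;
	}

	/* And we're done. */
	if (bpf_jit_enable > 1)
		bpf_jit_dump(prog->len, prog_size, 2, ctx.image);

	/* Copy the instructions from the RW buffer into the ROX allocation. */
	if (bpf_jit_binary_pack_finalize(prog, ro_header, header)) {
		prog = orig_prog;
		goto out_off;
	}

	/*
	 * The final instructions are now at the VAs they will be executed
	 * from, so do the maintenance here: flush_icache_range() (called
	 * via bpf_flush_icache()) cleans the D-cache to the PoU,
	 * invalidates the I-cache for those VAs and synchronizes the
	 * context.
	 */
	bpf_flush_icache(ro_header, ctx.ro_image + ctx.idx);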

I will send the v4 patchset with this change.

Thanks,
Puranjay

Thread overview: 12+ messages
2023-06-19 10:01 [PATCH bpf-next v3 0/3] bpf, arm64: use BPF prog pack allocator in BPF JIT Puranjay Mohan
2023-06-19 10:01 ` [PATCH bpf-next v3 1/3] bpf: make bpf_prog_pack allocator portable Puranjay Mohan
2023-06-19 10:01 ` [PATCH bpf-next v3 2/3] arm64: patching: Add aarch64_insn_copy() Puranjay Mohan
2023-06-19 10:01 ` [PATCH bpf-next v3 3/3] bpf, arm64: use bpf_jit_binary_pack_alloc Puranjay Mohan
2023-06-20 23:24   ` Song Liu
2023-06-21 15:31   ` Mark Rutland
2023-06-21 16:24     ` Alexei Starovoitov
2023-06-21 20:57     ` Puranjay Mohan
2023-06-22  8:23       ` Mark Rutland
2023-06-22  8:47         ` Puranjay Mohan [this message]
2023-06-22  9:36           ` Mark Rutland
2023-06-20 23:40 ` [PATCH bpf-next v3 0/3] bpf, arm64: use BPF prog pack allocator in BPF JIT patchwork-bot+netdevbpf
