From: Jiri Olsa <olsajiri@gmail.com>
To: Yonghong Song <yhs@meta.com>
Cc: Jiri Olsa <olsajiri@gmail.com>,
Alexei Starovoitov <alexei.starovoitov@gmail.com>,
Song Liu <song@kernel.org>, Hao Sun <sunhao.th@gmail.com>,
Peter Zijlstra <peterz@infradead.org>, bpf <bpf@vger.kernel.org>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
John Fastabend <john.fastabend@gmail.com>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@linux.dev>,
Yonghong Song <yhs@fb.com>, KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>,
David Miller <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
netdev <netdev@vger.kernel.org>,
Thorsten Leemhuis <regressions@leemhuis.info>
Subject: Re: BUG: unable to handle kernel paging request in bpf_dispatcher_xdp
Date: Fri, 9 Dec 2022 22:53:04 +0100 [thread overview]
Message-ID: <Y5OuQNmkoIvcV6IL@krava> (raw)
In-Reply-To: <5c9d77bf-75f5-954a-c691-39869bb22127@meta.com>
On Fri, Dec 09, 2022 at 12:31:06PM -0800, Yonghong Song wrote:
>
>
> On 12/9/22 7:20 AM, Jiri Olsa wrote:
> > On Fri, Dec 09, 2022 at 02:50:55PM +0100, Jiri Olsa wrote:
> > > On Fri, Dec 09, 2022 at 12:22:37PM +0100, Jiri Olsa wrote:
> > >
> > > SNIP
> > >
> > > > > > > > > >
> > > > > > > > > > I'm trying to understand the severity of the issues and
> > > > > > > > > > whether we need to revert that commit asap since the merge window
> > > > > > > > > > is about to start.
> > > > > > > > >
> > > > > > > > > Jiri, Peter,
> > > > > > > > >
> > > > > > > > > ping.
> > > > > > > > >
> > > > > > > > > cc-ing Thorsten, since he's tracking it now.
> > > > > > > > >
> > > > > > > > > The config has CONFIG_X86_KERNEL_IBT=y.
> > > > > > > > > Is it related?
> > > > > > > >
> > > > > > > > sorry for the late reply.. I still did not find the reason,
> > > > > > > > but I did not try with IBT yet, will test now
> > > > > > >
> > > > > > > no difference with IBT enabled, can't reproduce the issue
> > > > > > >
> > > > > >
> > > > > > ok, scratch that.. the reproducer got stuck on wifi init :-\
> > > > > >
> > > > > > after I fixed that I can now reproduce on my local config with
> > > > > > IBT enabled or disabled.. it's something else
> > > > >
> > > > > I'm getting the error also when reverting the static call change;
> > > > > looking for a good commit, bisecting
> > > > >
> > > > > I'm getting fail with:
> > > > > f0c4d9fc9cc9 (tag: v6.1-rc4) Linux 6.1-rc4
> > > > >
> > > > > v6.1-rc1 is ok
> > > >
> > > > so far I've narrowed it down to between rc1 and rc3.. bisect got me nowhere
> > > >
> > > > attaching some more logs
> > >
> > > looking at the code.. how do we ensure that code running through
> > > bpf_prog_run_xdp will not have the dispatcher image changed while
> > > it's being executed
> > >
> > > we use 'the other half' of the image when we add/remove programs,
> > > but could bpf_dispatcher_update race with bpf_prog_run_xdp like:
> > >
> > >
> > > cpu 0:                                cpu 1:
> > >
> > > bpf_prog_run_xdp
> > >   ...
> > >   bpf_dispatcher_xdp_func
> > >     start exec image at offset 0x0
> > >
> > >                                       bpf_dispatcher_update
> > >                                         update image at offset 0x800
> > >                                       bpf_dispatcher_update
> > >                                         update image at offset 0x0
> > >
> > >     still in image at offset 0x0
> > >
> > >
> > > that might explain why I wasn't able to trigger it on
> > > bare metal, just in qemu
> >
> > I tried the patch below and it fixes the issue for me and seems
> > to confirm the race above.. but I'm not sure it's the best fix
> >
> > jirka
> >
> >
> > ---
> > diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
> > index c19719f48ce0..6a2ced102fc7 100644
> > --- a/kernel/bpf/dispatcher.c
> > +++ b/kernel/bpf/dispatcher.c
> > @@ -124,6 +124,7 @@ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs)
> >  	}
> >
> >  	__BPF_DISPATCHER_UPDATE(d, new ?: (void *)&bpf_dispatcher_nop_func);
> > +	synchronize_rcu_tasks();
> >
> >  	if (new)
> >  		d->image_off = noff;
>
> This might work. In arch/x86/kernel/alternative.c, we have the following
> code and comments. For text_poke, synchronize_rcu_tasks() might be able
> to avoid a race between concurrent execution and update.
so my idea was that we need to ensure all the current callers of
bpf_dispatcher_xdp_func (which should hold the rcu read lock, based
on the comment in bpf_prog_run_xdp) are gone and new ones
execute the new image, so that the next call to bpf_dispatcher_update
is safe to overwrite the other half of the image
jirka
>
> /**
>  * text_poke_copy - Copy instructions into (an unused part of) RX memory
>  * @addr: address to modify
>  * @opcode: source of the copy
>  * @len: length to copy, could be more than 2x PAGE_SIZE
>  *
>  * Not safe against concurrent execution; useful for JITs to dump
>  * new code blocks into unused regions of RX memory. Can be used in
>  * conjunction with synchronize_rcu_tasks() to wait for existing
>  * execution to quiesce after having made sure no existing functions
>  * pointers are live.
>  */
> void *text_poke_copy(void *addr, const void *opcode, size_t len)
> {
> 	unsigned long start = (unsigned long)addr;
> 	size_t patched = 0;
>
> 	if (WARN_ON_ONCE(core_kernel_text(start)))
> 		return NULL;
>
> 	mutex_lock(&text_mutex);
> 	while (patched < len) {
> 		unsigned long ptr = start + patched;
> 		size_t s;
>
> 		s = min_t(size_t, PAGE_SIZE * 2 - offset_in_page(ptr), len - patched);
>
> 		__text_poke(text_poke_memcpy, (void *)ptr, opcode + patched, s);
> 		patched += s;
> 	}
> 	mutex_unlock(&text_mutex);
> 	return addr;
> }
Thread overview: 28+ messages
2022-12-06 3:28 BUG: unable to handle kernel paging request in bpf_dispatcher_xdp Hao Sun
2022-12-06 6:46 ` Hao Sun
2022-12-06 15:18 ` Jiri Olsa
2022-12-07 19:57 ` Alexei Starovoitov
2022-12-08 17:48 ` Alexei Starovoitov
2022-12-08 18:06 ` Jiri Olsa
2022-12-08 22:26 ` Jiri Olsa
2022-12-08 23:02 ` Jiri Olsa
2022-12-09 7:09 ` Jiri Olsa
2022-12-09 11:22 ` Jiri Olsa
2022-12-09 13:50 ` Jiri Olsa
2022-12-09 15:20 ` Jiri Olsa
2022-12-09 20:31 ` Yonghong Song
2022-12-09 21:53 ` Jiri Olsa [this message]
2022-12-09 22:41 ` Daniel Borkmann
2022-12-09 23:07 ` Jiri Olsa
2022-12-09 23:29 ` Jiri Olsa
2022-12-09 23:32 ` Daniel Borkmann
2022-12-09 23:34 ` Jakub Kicinski
2022-12-10 0:06 ` Jiri Olsa
2022-12-10 0:38 ` Paul E. McKenney
2022-12-10 13:05 ` Jiri Olsa
2022-12-10 1:12 ` Alexei Starovoitov
2022-12-10 13:11 ` Jiri Olsa
2022-12-12 15:04 ` Jiri Olsa
2022-12-13 2:26 ` Hao Sun
2022-12-08 8:44 ` BUG: unable to handle kernel paging request in bpf_dispatcher_xdp #forregzbot Thorsten Leemhuis
2022-12-19 9:59 ` Thorsten Leemhuis