From: "Paul E. McKenney" <paulmck@kernel.org>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Jakub Kicinski <kuba@kernel.org>,
	Andrii Nakryiko <andriin@fb.com>,
	linux-arch@vger.kernel.org, bpf <bpf@vger.kernel.org>,
	Networking <netdev@vger.kernel.org>,
	Alexei Starovoitov <ast@fb.com>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Kernel Team <kernel-team@fb.com>,
	Jonathan Lemon <jonathan.lemon@gmail.com>
Subject: Re: [PATCH bpf-next 1/6] bpf: implement BPF ring buffer and verifier support for it
Date: Thu, 14 May 2020 15:13:09 -0700
Message-ID: <20200514221309.GV2869@paulmck-ThinkPad-P72>
In-Reply-To: <CAEf4Bzbj-WvRkoGxkSFtK5_1JfQxthoFid398C97RM0ppBb0dA@mail.gmail.com>

On Thu, May 14, 2020 at 02:30:11PM -0700, Andrii Nakryiko wrote:
> On Thu, May 14, 2020 at 1:39 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > Jakub Kicinski <kuba@kernel.org> writes:
> >
> > > On Wed, 13 May 2020 12:25:27 -0700 Andrii Nakryiko wrote:
> > >> One interesting implementation bit that significantly simplifies (and thus
> > >> also speeds up) the implementation of both producers and consumers is how
> > >> the data area is mapped twice, contiguously back-to-back, in virtual
> > >> memory. This makes it possible to avoid any special measures for samples
> > >> that have to wrap around at the end of the circular buffer data area,
> > >> because the next page after the last data page is the first data page
> > >> again, and thus the sample still appears completely contiguous in virtual
> > >> memory. See the comment and a simple ASCII diagram showing this visually
> > >> in bpf_ringbuf_area_alloc().
> > >
> > > Out of curiosity - is this 100% okay to do in the kernel and user space
> > > these days? Is this bit part of the uAPI in case we need to back out of
> > > it?
> > >
> > > In the olden days virtually mapped/tagged caches could get confused
> > > seeing the same physical memory have two active virtual mappings, or
> > > at least that's what I've been told in school :)
> >
> > Yes, caching the same thing twice causes coherency problems.
> >
> > VIVT can be found in ARMv5, MIPS, NDS32 and Unicore32.
> >
> > > Checking with Paul - he says that could have been the case for Itanium
> > > and PA-RISC CPUs.
> >
> > Itanium: PIPT L1/L2.
> > PA-RISC: VIPT L1 and PIPT L2.

Thank you, Thomas!

> > Thanks,
> 
> Jakub, thanks for bringing this up.

Indeed!  I had completely forgotten about it.

> Thomas, Paul, what kind of problems are we talking about here? What
> are the possible problems in practice?

One CPU stores into one of the mappings, and then it (or some other CPU)
subsequently sees the old value via the other mapping, maybe for a short
time, or maybe indefinitely, depending.  This sort of thing can happen
when the same location in the two mappings maps to different locations in
the cache.  The store via one virtual address is then placed into one
location in the cache, but reads via the other virtual address refer to
some other location in the cache.

In the past, some systems have documented virtual address offsets that
are guaranteed to work, presumably because those offsets force the two
views of the same physical memory to share the same location in the cache.
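
To make that concrete, here is a toy model.  All numbers are hypothetical:
a 32 KB, 4-way, 64-byte-line VIPT cache has 128 sets, so the set index
uses virtual address bits above the 4 KB page offset, and two virtual
aliases of the same physical page can land in different sets:

#include <stdio.h>

/* Toy VIPT cache: 32 KB, 4-way, 64-byte lines => 128 sets.
 * The set index is bits [12:6] of the virtual address; bit 12
 * lies above the 4 KB page offset, hence the aliasing hazard.
 */
#define LINE_SHIFT 6
#define NSETS 128

static unsigned int cache_set(unsigned long vaddr)
{
        return (vaddr >> LINE_SHIFT) & (NSETS - 1);
}

int main(void)
{
        unsigned long va1 = 0x10000000;    /* hypothetical alias #1 */
        unsigned long va2 = va1 + 4096;    /* alias #2, 4 KB away   */
        unsigned long va3 = va1 + 8192;    /* alias #3, 8 KB away   */

        printf("va1 -> set %u\n", cache_set(va1));  /* set 0          */
        printf("va2 -> set %u\n", cache_set(va2));  /* set 64: stale? */
        printf("va3 -> set %u\n", cache_set(va3));  /* set 0 again    */
        return 0;
}

Note that aliases separated by a multiple of the way size (8 KB in this
toy model) index the same set, which is presumably what those documented
safe offsets were about.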

> So just for context, all the metadata (record header) that is
> written/read under lock and with smp_store_release/smp_load_acquire is
> written through the one set of page mappings (the first one). Only
> some of the sample payload might go into the second set of mapped pages.
> Does this mean that user-space might read some old payloads in such
> a case?

That could happen, depending on which CPU accessed what physical
memory using which virtual address.

> I could work around that in user-space by mmap()'ing the same range
> twice, one mapping after the other (the second mmap would use the
> MAP_FIXED flag, of course). So that's not a big deal.

That would work, assuming you mean to map double the size of memory
and then handle the wraparound case very very carefully.  ;-)

But you need only do that on VI*T systems, if that helps.
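
Something along these lines, say.  This is only a sketch with error
handling omitted; memfd_create() stands in for the actual ring-buffer fd,
and size must be a multiple of the page size:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/* Reserve 2 * size of address space, then overlay it with two
 * MAP_FIXED mappings of the same buffer, so a record that wraps at
 * the end of the buffer still reads as contiguous bytes.
 */
void *mirror_map(size_t size)
{
        int fd = memfd_create("mirror", 0);
        char *base;

        ftruncate(fd, size);

        base = mmap(NULL, 2 * size, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        mmap(base, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);
        mmap(base + size, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);
        close(fd);
        return base;
}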

> But on the kernel side it's a crucial property, because it allows BPF
> programs to work with the data under the assumption that all of it is
> linearly mapped. If we can't do that, the reserve() API is impossible to
> implement. So in that case, I'd rather enable the BPF ring buffer only
> on platforms that won't have these problems, instead of removing the
> reserve/commit API altogether.

You could flush the local CPU's cache before reading past the end,
but only if it is guaranteed that no other CPU is accessing that same
memory using the other mapping.  (Not convinced that this is feasible,
but who knows?)
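
For what it's worth, the sort of thing I have in mind would look roughly
like this.  A sketch only; whether flush_dcache_page() has the needed
semantics on a given VI*T architecture would need checking per arch:

#include <linux/highmem.h>

/* Flush the pages that a wrapped-around read is about to touch via
 * the second mapping, assuming flush_dcache_page() pushes out stale
 * aliases on the architecture in question.  'pages' would be the
 * ring buffer's page array.
 */
static void flush_wrapped_pages(struct page **pages, int first, int nr)
{
        int i;

        for (i = 0; i < nr; i++)
                flush_dcache_page(pages[first + i]);
}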

I see that linux-arch is copied, so do any of the affected architectures
object to being left out?

> Well, another way is to just "discard" the remaining space at the end
> if it's not sufficient for an entire record. That's doable; there will
> always be at least 8 bytes available for a record header, so it's not a
> problem in that regard. But I would appreciate it if you could help me
> understand the full implications of caching physical memory twice.
> 
> Also just for my education, with VIVT caches, if a user-space
> application mmap()'s the same region of memory twice (without
> MAP_FIXED), wouldn't that cause similar problems? Can't this happen
> today with the mmap() API? Why is that not a problem?

It does indeed affect userspace applications as well.  And I haven't
heard about this being a problem for a very long time, which might be
why I had forgotten about it.

But the underlying problem is that on VIVT and VIPT platforms, mapping
the same physical memory to two different virtual addresses can cause
that same memory to appear twice in the cache, and the resulting pair
of cachelines is not guaranteed to stay in sync.  So CPUs accessing
this memory through the two virtual addresses might see different
values, which can come as a bit of a surprise to many algorithms.
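
For completeness, the kernel-side double-mapping trick that started this
thread can be sketched as follows.  This is simplified, with error
handling and page bookkeeping elided; it is not the actual patch code:

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Map nr_data_pages twice in a row in vmalloc space, so a record
 * wrapping at the end of the buffer is still virtually contiguous.
 * The real code must keep the page array around in order to free
 * the pages later.
 */
static void *ringbuf_map_twice(int nr_data_pages)
{
        struct page **pages;
        void *addr;
        int i;

        pages = kcalloc(2 * nr_data_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return NULL;

        for (i = 0; i < nr_data_pages; i++) {
                pages[i] = alloc_page(GFP_KERNEL);
                pages[nr_data_pages + i] = pages[i];  /* second view */
        }

        /* One virtually contiguous area; each data page appears twice. */
        addr = vmap(pages, 2 * nr_data_pages, VM_MAP, PAGE_KERNEL);
        kfree(pages);
        return addr;
}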

							Thanx, Paul

