* [PATCH v2 bpf-next 0/7] BPF ring buffer
@ 2020-05-17 19:57 Andrii Nakryiko
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

Implement a new BPF ring buffer, as presented at the BPF virtual conference ([0]).
It is an alternative to the perf buffer that follows its semantics closely, but
allows efficiently sharing a single instance of the ring buffer across multiple
CPUs.

Most patches have extensive commentary explaining various aspects, so I'll
keep the cover letter short. Overall structure of the patch set:
- patch #1 adds the BPF ring buffer implementation to the kernel and the
  necessary verifier support;
- patch #2 adds litmus tests validating that all the memory orderings and
  locking are correct;
- patch #3 is an optional patch that generalizes the verifier's reference
  tracking machinery to capture the type of reference;
- patch #4 adds the libbpf consumer implementation for BPF ringbuf;
- patch #5 adds selftests, both for the single BPF ringbuf use case and for
  using it with an array/hash of maps;
- patch #6 adds extensive benchmarks and provides some analysis in the
  commit message; it builds upon selftests/bpf's bench runner;
- patch #7 adds most of patch #1's commit message as a doc under
  Documentation/bpf/ringbuf.txt.

  [0] https://docs.google.com/presentation/d/18ITdg77Bj6YDOH2LghxrnFxiPWe0fAqcmJY95t_qr0w

v1->v2:
- commit()/discard()/output() accept flags (NO_WAKEUP/FORCE_WAKEUP) (Stanislav);
- bpf_ringbuf_query() added, returning available data size, ringbuf size,
  consumer/producer positions, needed to implement smarter notification policy
  (Stanislav);
- added ringbuf UAPI constants to include/uapi/linux/bpf.h (Jonathan);
- fixed sample size check, added proper ringbuf size check (Jonathan, Alexei);
- wake_up_all() is done through irq_work (Alexei);
- consistent use of smp_load_acquire/smp_store_release, no
  READ_ONCE/WRITE_ONCE (Alexei);
- added Documentation/bpf/ringbuf.txt (Stanislav);
- updated litmus test with smp_load_acquire/smp_store_release changes;
- added ring_buffer__consume() API to libbpf for busy-polling;
- ring_buffer__poll() on success returns number of records consumed;
- fixed EPOLL notifications, don't assume available data, done similarly to
  perfbuf's implementation;
- both ringbuf and perfbuf now have --rb-sampled mode, instead of
  pb-raw/pb-custom mode, updated benchmark results;
- extended ringbuf selftests to validate epoll logic/manual notification
  logic, as well as bpf_ringbuf_query().

Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>

Andrii Nakryiko (7):
  bpf: implement BPF ring buffer and verifier support for it
  tools/memory-model: add BPF ringbuf MPSC litmus tests
  bpf: track reference type in verifier
  libbpf: add BPF ring buffer support
  selftests/bpf: add BPF ringbuf selftests
  bpf: add BPF ringbuf and perf buffer benchmarks
  docs/bpf: add BPF ring buffer design notes

 Documentation/bpf/ringbuf.txt                 | 191 ++++++
 include/linux/bpf.h                           |  13 +
 include/linux/bpf_types.h                     |   1 +
 include/linux/bpf_verifier.h                  |  12 +
 include/uapi/linux/bpf.h                      |  84 ++-
 kernel/bpf/Makefile                           |   2 +-
 kernel/bpf/helpers.c                          |  10 +
 kernel/bpf/ringbuf.c                          | 487 +++++++++++++++
 kernel/bpf/syscall.c                          |  12 +
 kernel/bpf/verifier.c                         | 253 ++++++--
 kernel/trace/bpf_trace.c                      |  10 +
 tools/include/uapi/linux/bpf.h                |  90 ++-
 tools/lib/bpf/Build                           |   2 +-
 tools/lib/bpf/libbpf.h                        |  21 +
 tools/lib/bpf/libbpf.map                      |   5 +
 tools/lib/bpf/libbpf_probes.c                 |   5 +
 tools/lib/bpf/ringbuf.c                       | 285 +++++++++
 .../litmus-tests/mpsc-rb+1p1c+bounded.litmus  |  92 +++
 .../litmus-tests/mpsc-rb+1p1c+unbound.litmus  |  83 +++
 .../litmus-tests/mpsc-rb+2p1c+bounded.litmus  | 152 +++++
 .../litmus-tests/mpsc-rb+2p1c+unbound.litmus  | 137 +++++
 tools/testing/selftests/bpf/Makefile          |   5 +-
 tools/testing/selftests/bpf/bench.c           |  16 +
 .../selftests/bpf/benchs/bench_ringbufs.c     | 566 ++++++++++++++++++
 .../bpf/benchs/run_bench_ringbufs.sh          |  75 +++
 .../selftests/bpf/prog_tests/ringbuf.c        | 211 +++++++
 .../selftests/bpf/prog_tests/ringbuf_multi.c  | 102 ++++
 .../selftests/bpf/progs/perfbuf_bench.c       |  33 +
 .../selftests/bpf/progs/ringbuf_bench.c       |  60 ++
 .../selftests/bpf/progs/test_ringbuf.c        |  78 +++
 .../selftests/bpf/progs/test_ringbuf_multi.c  |  77 +++
 31 files changed, 3112 insertions(+), 58 deletions(-)
 create mode 100644 Documentation/bpf/ringbuf.txt
 create mode 100644 kernel/bpf/ringbuf.c
 create mode 100644 tools/lib/bpf/ringbuf.c
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus
 create mode 100644 tools/testing/selftests/bpf/benchs/bench_ringbufs.c
 create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_ringbufs.sh
 create mode 100644 tools/testing/selftests/bpf/prog_tests/ringbuf.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/perfbuf_bench.c
 create mode 100644 tools/testing/selftests/bpf/progs/ringbuf_bench.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf_multi.c

-- 
2.24.1



* [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
@ 2020-05-17 19:57 Andrii Nakryiko
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

This commit adds a new MPSC ring buffer implementation to the BPF ecosystem,
which allows multiple CPUs to submit data to a single shared ring buffer. On
the consumption side, only a single consumer is assumed.

Motivation
----------
There are two distinct motivations for this work, neither of which is
satisfied by the existing perf buffer, and which prompted the creation of
a new ring buffer implementation:
  - more efficient memory utilization by sharing a ring buffer across CPUs;
  - preserving the ordering of events that happen sequentially in time, even
    across multiple CPUs (e.g., fork/exec/exit events for a task).

These two problems are independent, but the perf buffer satisfies neither.
Both are a consequence of its per-CPU ring buffer design, and both can be
solved by an MPSC ring buffer implementation. The ordering problem could
technically be solved for the perf buffer with some in-kernel counting, but
given that the first problem requires an MPSC buffer anyway, the same
solution covers the second problem automatically.

Semantics and APIs
------------------
A single ring buffer is presented to BPF programs as an instance of a BPF
map of type BPF_MAP_TYPE_RINGBUF. Two alternatives were considered, but
ultimately rejected.

One way would be to, similar to BPF_MAP_TYPE_PERF_EVENT_ARRAY, make
BPF_MAP_TYPE_RINGBUF represent an array of ring buffers, but without
enforcing the "same CPU only" rule. This would be a more familiar interface,
compatible with the existing perf buffer use in BPF, but it would fail
applications that need more advanced logic to look up a ring buffer by an
arbitrary key. HASH_OF_MAPS addresses this within the current approach.
Additionally, given the performance of BPF ringbuf, many use cases would
just opt for a simple single ring buffer shared among all CPUs, for which
the array approach would be overkill.

Another approach could introduce a new concept, alongside the BPF map, to
represent a generic "container" object, which doesn't necessarily have the
key/value interface with lookup/update/delete operations. This approach
would require a lot of extra infrastructure to be built for observability
and verifier support. It would also add another concept that BPF developers
would have to familiarize themselves with, new syntax in libbpf, etc. And
yet it would provide no real additional benefit over using a map.
BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete operations, but
neither do a few other map types (e.g., queue and stack; array doesn't
support delete, etc.).

The chosen approach has the advantages of reusing existing BPF map
infrastructure (introspection APIs in the kernel, libbpf support, etc.),
being a familiar concept (no need to teach users a new type of object in
BPF programs), and utilizing existing tooling (bpftool). For the common
scenario of a single ring buffer shared by all CPUs, it is as simple and
straightforward as a dedicated "container" object would be. On the other
hand, by being a map, it can be combined with ARRAY_OF_MAPS and HASH_OF_MAPS
map-in-maps to implement a wide variety of topologies, from one ring buffer
per CPU (e.g., as a replacement for perf buffer use cases) to complicated
application-level hashing/sharding of ring buffers (e.g., a small pool of
ring buffers with the hashed task tgid as a lookup key, preserving order
while reducing contention).

Key and value sizes are enforced to be zero. max_entries specifies the size
of the ring buffer and has to be a power-of-2 value (the implementation
additionally requires it to be a multiple of the page size).
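
A BTF-defined declaration of such a map in a BPF program (libbpf syntax)
might look like the following sketch; the 256KB size and the "rb" name are
arbitrary choices for illustration:

  /* BPF ring buffer map with a 256KB data area; key/value sizes are
   * implicitly zero
   */
  struct {
  	__uint(type, BPF_MAP_TYPE_RINGBUF);
  	__uint(max_entries, 256 * 1024);
  } rb SEC(".maps");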

There are a bunch of similarities between the perf buffer
(BPF_MAP_TYPE_PERF_EVENT_ARRAY) and the new BPF ring buffer semantics:
  - variable-length records;
  - if there is no more space left in the ring buffer, reservation fails and
    nothing blocks;
  - memory-mappable data area for user-space applications, for ease of
    consumption and high performance;
  - epoll notifications for new incoming data;
  - but still the ability to busy-poll for new data to achieve the lowest
    possible latency, if necessary (see the user-space consumption sketch
    below).
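
On the user-space side, consumption through the libbpf API added in patch #4
might look like the sketch below; map_fd is assumed to be the ring buffer
map's fd, and the callback signature follows tools/lib/bpf/libbpf.h from
this series:

  static int handle_event(void *ctx, void *data, size_t size)
  {
  	/* process one committed, non-discarded sample */
  	return 0;
  }

  	struct ring_buffer *rb;

  	rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
  	if (!rb)
  		return -1;
  	/* epoll-based consumption; on success, returns the number of
  	 * records consumed
  	 */
  	while (ring_buffer__poll(rb, 100 /* timeout, ms */) >= 0) {
  	}
  	ring_buffer__free(rb);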

BPF ringbuf provides two sets of APIs to BPF programs (see the usage sketch
below):
  - bpf_ringbuf_output() *copies* data from one place into the ring buffer,
    similarly to bpf_perf_event_output();
  - the bpf_ringbuf_reserve()/bpf_ringbuf_submit()/bpf_ringbuf_discard()
    APIs split the process into two steps. First, a fixed amount of space is
    reserved. If successful, a pointer to data inside the ring buffer data
    area is returned, which BPF programs can use similarly to data inside
    array/hash maps. Once ready, this piece of memory is either submitted or
    discarded. Discard is similar to commit, but makes the consumer ignore
    the record.
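
A minimal BPF-side sketch of the reserve/submit flow (the event layout and
tracepoint are made up for illustration, and the "rb" map is the one
declared above):

  struct event {
  	int pid;
  	char comm[16];
  };

  SEC("tp/sched/sched_process_exec")
  int handle_exec(void *ctx)
  {
  	struct event *e;

  	/* reserve space directly inside the ring buffer and fill the
  	 * sample in place, with no extra copy
  	 */
  	e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
  	if (!e)
  		return 0;	/* no space left; reservation never blocks */

  	e->pid = bpf_get_current_pid_tgid() >> 32;
  	bpf_get_current_comm(e->comm, sizeof(e->comm));
  	bpf_ringbuf_submit(e, 0);
  	return 0;
  }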

bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy,
because the record has to be prepared somewhere else first. But it allows
submitting records whose length isn't known to the verifier beforehand. It
also closely matches bpf_perf_event_output(), which will simplify migration
significantly.

bpf_ringbuf_reserve() avoids the extra copy by providing a pointer directly
into the ring buffer memory. In a lot of cases records are larger than the
BPF stack space allows, so many programs have to use an extra per-CPU array
as a temporary heap for preparing a sample. bpf_ringbuf_reserve() avoids
this need completely. In exchange, it only allows reserving a known,
constant amount of memory, so that the verifier can prove the BPF program
can't access memory outside its reserved record space. bpf_ringbuf_output(),
while slightly slower due to the extra memory copy, covers the use cases
that are not suitable for bpf_ringbuf_reserve().

The difference between commit and discard is very small. Discard just marks
a record as discarded, and such records are supposed to be ignored by the
consumer code. Discard is useful for some advanced use cases, such as
ensuring all-or-nothing multi-record submission (see the sketch below), or
emulating temporary malloc()/free() within a single BPF program invocation.
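
A sketch of the all-or-nothing pattern, reusing the hypothetical event
struct and "rb" map from above:

  	struct event *a, *b;

  	a = bpf_ringbuf_reserve(&rb, sizeof(*a), 0);
  	if (!a)
  		return 0;
  	b = bpf_ringbuf_reserve(&rb, sizeof(*b), 0);
  	if (!b) {
  		/* second reservation failed, so drop the first record as
  		 * well; the consumer will skip it and see either both
  		 * records or neither
  		 */
  		bpf_ringbuf_discard(a, 0);
  		return 0;
  	}
  	/* ... fill in both records ... */
  	bpf_ringbuf_submit(a, 0);
  	bpf_ringbuf_submit(b, 0);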

Each reserved record is tracked by the verifier through the existing
reference-tracking logic, similar to socket ref-tracking. It is thus
impossible to reserve a record but forget to submit (or discard) it.

The bpf_ringbuf_query() helper allows querying various properties of a ring
buffer. Currently 4 are supported:
  - BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring
    buffer;
  - BPF_RB_RING_SIZE returns the size of the ring buffer;
  - BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of
    the consumer/producer, respectively.
The returned values are momentary snapshots of the ring buffer state and
could be stale by the time the helper returns, so this should be used only
for debugging/reporting or for implementing heuristics that take into
account the highly changeable nature of some of those characteristics.

One such heuristic might involve more fine-grained control over poll/epoll
notifications about new data availability in the ring buffer. Together with
the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard
helpers, it gives a BPF program a high degree of control and enables, e.g.,
more efficient batched notifications (see the sketch below). The default
self-balancing strategy, though, should be adequate for most applications
and will work reliably and efficiently as-is.
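
For example, a program might suppress the default per-record wakeup and
force one only once enough data has accumulated; a sketch, with the 64KB
threshold being an arbitrary choice:

  /* batched notifications: wake up the consumer only once at least
   * 64KB of unconsumed data has accumulated
   */
  static __always_inline void submit_batched(struct event *e)
  {
  	if (bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA) >= 64 * 1024)
  		bpf_ringbuf_submit(e, BPF_RB_FORCE_WAKEUP);
  	else
  		bpf_ringbuf_submit(e, BPF_RB_NO_WAKEUP);
  }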

Design and implementation
-------------------------
This reserve/commit schema allows a natural way for multiple producers,
either on different CPUs or even on the same CPU/in the same BPF program, to
reserve independent records and work with them without blocking other
producers. This means that if a BPF program was interrupted by another BPF
program sharing the same ring buffer, both will get a record reserved
(provided there is enough space left) and can work with it and submit it
independently. This applies to NMI context as well, except that, due to the
use of a spinlock during reservation, bpf_ringbuf_reserve() in NMI context
might fail to acquire the lock, in which case the reservation fails even if
the ring buffer is not full.

The ring buffer itself is internally implemented as a power-of-2 sized
circular buffer, with two logical, ever-increasing counters (which might
wrap around on 32-bit architectures; that's not a problem):
  - the consumer counter shows up to which logical position the consumer has
    consumed the data;
  - the producer counter denotes the amount of data reserved by all
    producers.

Each time a record is reserved, the producer that "owns" the record
successfully advances the producer counter. At that point, the data is still
not yet ready to be consumed, though. Each record has an 8-byte header,
which contains the length of the reserved record, as well as two extra bits:
a busy bit to denote that the record is still being worked on, and a discard
bit, which might be set at commit time if the record is discarded. In the
latter case, the consumer is supposed to skip the record and move on to the
next one. The record header also encodes the record's relative offset from
the beginning of the ring buffer data area (in pages). This allows
bpf_ringbuf_submit()/bpf_ringbuf_discard() to accept only the pointer to the
record itself, without also requiring a pointer to the ring buffer: the ring
buffer memory location is restored from the record metadata header. This
significantly simplifies the verifier and improves API usability.

Producer counter increments are serialized under the spinlock, so there is
a strict ordering between reservations. Commits, on the other hand, are
completely lockless and independent. All records become available to the
consumer in the order of reservations, but only after all previous records
have been committed. It is thus possible for slow producers to temporarily
hold off already-submitted records that were reserved later.

The reservation/commit/consumer protocol is verified by litmus tests in
tools/memory-model/litmus-tests; see the mpsc-rb*.litmus files.

One interesting implementation bit that significantly simplifies (and thus
also speeds up) both producers and consumers is how the data area is mapped
twice, contiguously back-to-back, in virtual memory. This removes the need
for any special handling of samples that wrap around at the end of the
circular buffer data area, because the page after the last data page is the
first data page again, and thus the sample still appears completely
contiguous in virtual memory. See the comment and a simple ASCII diagram
showing this visually in bpf_ringbuf_area_alloc().
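
Putting the protocol and the double mapping together, a simplified
single-consumer loop might look like the sketch below (roughly what the
libbpf consumer in patch #4 does; handle_sample() is a placeholder, and
error handling and portability details are omitted):

  #include <stdint.h>
  #include <linux/bpf.h> /* BPF_RINGBUF_{BUSY_BIT,DISCARD_BIT,HDR_SZ} */

  extern void handle_sample(void *sample, uint32_t len);

  static void consume(unsigned long *consumer_pos,
  		      unsigned long *producer_pos,
  		      void *data, unsigned long mask)
  {
  	unsigned long cons = *consumer_pos;

  	/* pairs with the producer's smp_store_release() of producer_pos */
  	while (cons < __atomic_load_n(producer_pos, __ATOMIC_ACQUIRE)) {
  		/* thanks to the double mapping, a record wrapping past
  		 * the end of the buffer is still readable contiguously
  		 */
  		uint32_t *hdr = data + (cons & mask);
  		uint32_t len = __atomic_load_n(hdr, __ATOMIC_ACQUIRE);

  		if (len & BPF_RINGBUF_BUSY_BIT)
  			break; /* reserved, but not yet committed */
  		if (!(len & BPF_RINGBUF_DISCARD_BIT))
  			handle_sample((void *)hdr + BPF_RINGBUF_HDR_SZ, len);
  		len &= ~BPF_RINGBUF_DISCARD_BIT;
  		/* records are padded to an 8-byte-aligned total length */
  		cons += (len + BPF_RINGBUF_HDR_SZ + 7) / 8 * 8;
  		__atomic_store_n(consumer_pos, cons, __ATOMIC_RELEASE);
  	}
  }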

Another feature that distinguishes BPF ringbuf from the perf ring buffer is
its self-pacing notification of new data availability. The
bpf_ringbuf_commit() implementation sends a notification of a new record
being available after commit only if the consumer has already caught up
right up to the record being committed. If not, the consumer still has to
catch up and will thus see the new data anyway, without needing an extra
poll notification. Benchmarks (see
tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that this achieves
very high throughput without having to resort to tricks like "notify only
every Nth sample", which are necessary with the perf buffer. For extreme
cases, when a BPF program wants more manual control of notifications, the
commit/discard/output helpers accept the BPF_RB_NO_WAKEUP and
BPF_RB_FORCE_WAKEUP flags, which give full control over data availability
notifications, but require extra caution and diligence in using this API.

Comparison to alternatives
--------------------------
Before implementing the BPF ring buffer from scratch, existing in-kernel
alternatives were evaluated, but they didn't seem to meet the needs. They
largely fell into a few categories:
  - per-CPU buffers (perf, ftrace, etc.), which don't satisfy the two
    motivations outlined above (ordering and memory consumption);
  - linked-list-based implementations; while some were multi-producer
    designs, consuming them from user space would be very complicated and
    most probably not performant; memory-mapping a contiguous piece of
    memory is simpler and more performant for user-space consumers;
  - io_uring is SPSC and also requires fixed-sized elements. Naively turning
    an SPSC queue into MPSC with a lock would have subpar performance
    compared to the locked reserve + lockless commit of the BPF ring buffer.
    Fixed-size elements would be too limiting for BPF programs, given that
    existing BPF programs already rely heavily on the variable-sized perf
    buffer;
  - specialized implementations (like the new printk ring buffer, [0]) with
    lots of printk-specific limitations and implications that didn't seem to
    fit the intended use with BPF programs.

  [0] https://lwn.net/Articles/779550/

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 include/linux/bpf.h            |  13 +
 include/linux/bpf_types.h      |   1 +
 include/linux/bpf_verifier.h   |   4 +
 include/uapi/linux/bpf.h       |  84 +++++-
 kernel/bpf/Makefile            |   2 +-
 kernel/bpf/helpers.c           |  10 +
 kernel/bpf/ringbuf.c           | 487 +++++++++++++++++++++++++++++++++
 kernel/bpf/syscall.c           |  12 +
 kernel/bpf/verifier.c          | 157 ++++++++---
 kernel/trace/bpf_trace.c       |  10 +
 tools/include/uapi/linux/bpf.h |  90 +++++-
 11 files changed, 832 insertions(+), 38 deletions(-)
 create mode 100644 kernel/bpf/ringbuf.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index efe8836b5c48..e5884f7f801c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -90,6 +90,8 @@ struct bpf_map_ops {
 	int (*map_direct_value_meta)(const struct bpf_map *map,
 				     u64 imm, u32 *off);
 	int (*map_mmap)(struct bpf_map *map, struct vm_area_struct *vma);
+	__poll_t (*map_poll)(struct bpf_map *map, struct file *filp,
+			     struct poll_table_struct *pts);
 };
 
 struct bpf_map_memory {
@@ -244,6 +246,9 @@ enum bpf_arg_type {
 	ARG_PTR_TO_LONG,	/* pointer to long */
 	ARG_PTR_TO_SOCKET,	/* pointer to bpf_sock (fullsock) */
 	ARG_PTR_TO_BTF_ID,	/* pointer to in-kernel struct */
+	ARG_PTR_TO_ALLOC_MEM,	/* pointer to dynamically allocated memory */
+	ARG_PTR_TO_ALLOC_MEM_OR_NULL,	/* pointer to dynamically allocated memory or NULL */
+	ARG_CONST_ALLOC_SIZE_OR_ZERO,	/* number of allocated bytes requested */
 };
 
 /* type of values returned from helper functions */
@@ -255,6 +260,7 @@ enum bpf_return_type {
 	RET_PTR_TO_SOCKET_OR_NULL,	/* returns a pointer to a socket or NULL */
 	RET_PTR_TO_TCP_SOCK_OR_NULL,	/* returns a pointer to a tcp_sock or NULL */
 	RET_PTR_TO_SOCK_COMMON_OR_NULL,	/* returns a pointer to a sock_common or NULL */
+	RET_PTR_TO_ALLOC_MEM_OR_NULL,	/* returns a pointer to dynamically allocated memory or NULL */
 };
 
 /* eBPF function prototype used by verifier to allow BPF_CALLs from eBPF programs
@@ -322,6 +328,8 @@ enum bpf_reg_type {
 	PTR_TO_XDP_SOCK,	 /* reg points to struct xdp_sock */
 	PTR_TO_BTF_ID,		 /* reg points to kernel struct */
 	PTR_TO_BTF_ID_OR_NULL,	 /* reg points to kernel struct or NULL */
+	PTR_TO_MEM,		 /* reg points to valid memory region */
+	PTR_TO_MEM_OR_NULL,	 /* reg points to valid memory region or NULL */
 };
 
 /* The information passed from prog-specific *_is_valid_access
@@ -1611,6 +1619,11 @@ extern const struct bpf_func_proto bpf_tcp_sock_proto;
 extern const struct bpf_func_proto bpf_jiffies64_proto;
 extern const struct bpf_func_proto bpf_get_ns_current_pid_tgid_proto;
 extern const struct bpf_func_proto bpf_event_output_data_proto;
+extern const struct bpf_func_proto bpf_ringbuf_output_proto;
+extern const struct bpf_func_proto bpf_ringbuf_reserve_proto;
+extern const struct bpf_func_proto bpf_ringbuf_submit_proto;
+extern const struct bpf_func_proto bpf_ringbuf_discard_proto;
+extern const struct bpf_func_proto bpf_ringbuf_query_proto;
 
 const struct bpf_func_proto *bpf_tracing_func_proto(
 	enum bpf_func_id func_id, const struct bpf_prog *prog);
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 29d22752fc87..fa8e1b552acd 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -118,6 +118,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_STACK, stack_map_ops)
 #if defined(CONFIG_BPF_JIT)
 BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops)
 #endif
+BPF_MAP_TYPE(BPF_MAP_TYPE_RINGBUF, ringbuf_map_ops)
 
 BPF_LINK_TYPE(BPF_LINK_TYPE_RAW_TRACEPOINT, raw_tracepoint)
 BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index ea833087e853..ca08db4ffb5f 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -54,6 +54,8 @@ struct bpf_reg_state {
 
 		u32 btf_id; /* for PTR_TO_BTF_ID */
 
+		u32 mem_size; /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
+
 		/* Max size from any of the above. */
 		unsigned long raw;
 	};
@@ -63,6 +65,8 @@ struct bpf_reg_state {
 	 * offset, so they can share range knowledge.
 	 * For PTR_TO_MAP_VALUE_OR_NULL this is used to share which map value we
 	 * came from, when one is tested for != NULL.
+	 * For PTR_TO_MEM_OR_NULL this is used to identify memory allocation
+	 * for the purpose of tracking that it's freed.
 	 * For PTR_TO_SOCKET this is used to share which pointers retain the
 	 * same reference to the socket, to determine proper reference freeing.
 	 */
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index b9b8a0f63b91..285973d10bdf 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -147,6 +147,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_SK_STORAGE,
 	BPF_MAP_TYPE_DEVMAP_HASH,
 	BPF_MAP_TYPE_STRUCT_OPS,
+	BPF_MAP_TYPE_RINGBUF,
 };
 
 /* Note that tracing related programs such as
@@ -3153,6 +3154,59 @@ union bpf_attr {
  *		**bpf_sk_cgroup_id**\ ().
  *	Return
  *		The id is returned or 0 in case the id could not be retrieved.
+ *
+ * void *bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
+ * 	Description
+ * 		Copy *size* bytes from *data* into a ring buffer *ringbuf*.
+ * 		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+ * 		new data availability is sent.
+ * 		If BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
+ * 		new data availability is sent unconditionally.
+ * 	Return
+ * 		0, on success;
+ * 		< 0, on error.
+ *
+ * void *bpf_ringbuf_reserve(void *ringbuf, u64 size, u64 flags)
+ * 	Description
+ * 		Reserve *size* bytes of payload in a ring buffer *ringbuf*.
+ * 	Return
+ * 		Valid pointer with *size* bytes of memory available; NULL,
+ * 		otherwise.
+ *
+ * void bpf_ringbuf_submit(void *data, u64 flags)
+ * 	Description
+ * 		Submit reserved ring buffer sample, pointed to by *data*.
+ * 		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+ * 		new data availability is sent.
+ * 		If BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
+ * 		new data availability is sent unconditionally.
+ * 	Return
+ * 		Nothing. Always succeeds.
+ *
+ * void bpf_ringbuf_discard(void *data, u64 flags)
+ * 	Description
+ * 		Discard reserved ring buffer sample, pointed to by *data*.
+ * 		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+ * 		new data availability is sent.
+ * 		If BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
+ * 		new data availability is sent unconditionally.
+ * 	Return
+ * 		Nothing. Always succeeds.
+ *
+ * u64 bpf_ringbuf_query(void *ringbuf, u64 flags)
+ *	Description
+ *		Query various characteristics of the provided ring buffer.
+ *		What exactly is queried is determined by *flags*:
+ *		  - BPF_RB_AVAIL_DATA - amount of data not yet consumed;
+ *		  - BPF_RB_RING_SIZE - the size of ring buffer;
+ *		  - BPF_RB_CONS_POS - consumer position (can wrap around);
+ *		  - BPF_RB_PROD_POS - producer(s) position (can wrap around);
+ *		Data returned is just a momentary snapshot of actual values
+ *		and could be inaccurate, so this facility should be used to
+ *		power heuristics and for reporting, not to make 100% correct
+ *		calculations.
+ *	Return
+ *		Requested value, or 0, if flags are not recognized.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3284,7 +3338,12 @@ union bpf_attr {
 	FN(seq_printf),			\
 	FN(seq_write),			\
 	FN(sk_cgroup_id),		\
-	FN(sk_ancestor_cgroup_id),
+	FN(sk_ancestor_cgroup_id),	\
+	FN(ringbuf_output),		\
+	FN(ringbuf_reserve),		\
+	FN(ringbuf_submit),		\
+	FN(ringbuf_discard),		\
+	FN(ringbuf_query),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
@@ -3394,6 +3453,29 @@ enum {
 	BPF_F_GET_BRANCH_RECORDS_SIZE	= (1ULL << 0),
 };
 
+/* BPF_FUNC_bpf_ringbuf_commit, BPF_FUNC_bpf_ringbuf_discard, and
+ * BPF_FUNC_bpf_ringbuf_output flags.
+ */
+enum {
+	BPF_RB_NO_WAKEUP		= (1ULL << 0),
+	BPF_RB_FORCE_WAKEUP		= (1ULL << 1),
+};
+
+/* BPF_FUNC_bpf_ringbuf_query flags */
+enum {
+	BPF_RB_AVAIL_DATA = 0,
+	BPF_RB_RING_SIZE = 1,
+	BPF_RB_CONS_POS = 2,
+	BPF_RB_PROD_POS = 3,
+};
+
+/* BPF ring buffer constants */
+enum {
+	BPF_RINGBUF_BUSY_BIT		= (1U << 31),
+	BPF_RINGBUF_DISCARD_BIT		= (1U << 30),
+	BPF_RINGBUF_HDR_SZ		= 8,
+};
+
 /* Mode for BPF_FUNC_skb_adjust_room helper. */
 enum bpf_adj_room_mode {
 	BPF_ADJ_ROOM_NET,
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 37b2d8620153..c9aada6c1806 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -4,7 +4,7 @@ CFLAGS_core.o += $(call cc-disable-warning, override-init)
 
 obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o bpf_iter.o map_iter.o task_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
-obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o
+obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
 obj-$(CONFIG_BPF_SYSCALL) += disasm.o
 obj-$(CONFIG_BPF_JIT) += trampoline.o
 obj-$(CONFIG_BPF_SYSCALL) += btf.o
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 886949fdcece..972ba5e40c4b 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -629,6 +629,16 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_ktime_get_ns_proto;
 	case BPF_FUNC_ktime_get_boot_ns:
 		return &bpf_ktime_get_boot_ns_proto;
+	case BPF_FUNC_ringbuf_output:
+		return &bpf_ringbuf_output_proto;
+	case BPF_FUNC_ringbuf_reserve:
+		return &bpf_ringbuf_reserve_proto;
+	case BPF_FUNC_ringbuf_submit:
+		return &bpf_ringbuf_submit_proto;
+	case BPF_FUNC_ringbuf_discard:
+		return &bpf_ringbuf_discard_proto;
+	case BPF_FUNC_ringbuf_query:
+		return &bpf_ringbuf_query_proto;
 	default:
 		break;
 	}
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
new file mode 100644
index 000000000000..3c19f0f07726
--- /dev/null
+++ b/kernel/bpf/ringbuf.c
@@ -0,0 +1,487 @@
+#include <linux/bpf.h>
+#include <linux/btf.h>
+#include <linux/err.h>
+#include <linux/irq_work.h>
+#include <linux/slab.h>
+#include <linux/filter.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/wait.h>
+#include <linux/poll.h>
+#include <uapi/linux/btf.h>
+
+#define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
+
+/* non-mmap()'able part of bpf_ringbuf (everything up to consumer page) */
+#define RINGBUF_PGOFF \
+	(offsetof(struct bpf_ringbuf, consumer_pos) >> PAGE_SHIFT)
+/* consumer page and producer page */
+#define RINGBUF_POS_PAGES 2
+
+#define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4)
+
+/* Maximum size of ring buffer area is limited by 32-bit page offset within
+ * record header, counted in pages. Reserve 8 bits for extensibility, and take
+ * into account few extra pages for consumer/producer pages and
+ * non-mmap()'able parts. This gives 64GB limit, which seems plenty for single
+ * ring buffer.
+ */
+#define RINGBUF_MAX_DATA_SZ \
+	(((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE)
+
+struct bpf_ringbuf {
+	wait_queue_head_t waitq;
+	struct irq_work work;
+	u64 mask;
+	spinlock_t spinlock ____cacheline_aligned_in_smp;
+	/* Consumer and producer counters are put into separate pages to allow
+	 * mapping consumer page as r/w, but restrict producer page to r/o.
+	 * This protects producer position from being modified by user-space
+	 * application and ruining in-kernel position tracking.
+	 */
+	unsigned long consumer_pos __aligned(PAGE_SIZE);
+	unsigned long producer_pos __aligned(PAGE_SIZE);
+	char data[] __aligned(PAGE_SIZE);
+};
+
+struct bpf_ringbuf_map {
+	struct bpf_map map;
+	struct bpf_map_memory memory;
+	struct bpf_ringbuf *rb;
+};
+
+/* 8-byte ring buffer record header structure */
+struct bpf_ringbuf_hdr {
+	u32 len;
+	u32 pg_off;
+};
+
+static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
+{
+	const gfp_t flags = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
+			    __GFP_ZERO;
+	int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
+	int nr_data_pages = data_sz >> PAGE_SHIFT;
+	int nr_pages = nr_meta_pages + nr_data_pages;
+	struct page **pages, *page;
+	size_t array_size;
+	void *addr;
+	int i;
+
+	/* Each data page is mapped twice to allow "virtual"
+	 * continuous read of samples wrapping around the end of ring
+	 * buffer area:
+	 * ------------------------------------------------------
+	 * | meta pages |  real data pages  |  same data pages  |
+	 * ------------------------------------------------------
+	 * |            | 1 2 3 4 5 6 7 8 9 | 1 2 3 4 5 6 7 8 9 |
+	 * ------------------------------------------------------
+	 * |            | TA             DA | TA             DA |
+	 * ------------------------------------------------------
+	 *                               ^^^^^^^
+	 *                                  |
+	 * Here, no need to worry about special handling of wrapped-around
+	 * data due to double-mapped data pages. This works both in kernel and
+	 * when mmap()'ed in user-space, simplifying both kernel and
+	 * user-space implementations significantly.
+	 */
+	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
+	if (array_size > PAGE_SIZE)
+		pages = vmalloc_node(array_size, numa_node);
+	else
+		pages = kmalloc_node(array_size, flags, numa_node);
+	if (!pages)
+		return NULL;
+
+	for (i = 0; i < nr_pages; i++) {
+		page = alloc_pages_node(numa_node, flags, 0);
+		if (!page) {
+			nr_pages = i;
+			goto err_free_pages;
+		}
+		pages[i] = page;
+		if (i >= nr_meta_pages)
+			pages[nr_data_pages + i] = page;
+	}
+
+	addr = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
+		    VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
+	if (addr)
+		return addr;
+
+err_free_pages:
+	for (i = 0; i < nr_pages; i++)
+		free_page((unsigned long)pages[i]);
+	kvfree(pages);
+	return NULL;
+}
+
+static void bpf_ringbuf_notify(struct irq_work *work)
+{
+	struct bpf_ringbuf *rb = container_of(work, struct bpf_ringbuf, work);
+
+	wake_up_all(&rb->waitq);
+}
+
+static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
+{
+	struct bpf_ringbuf *rb;
+
+	if (!data_sz || !PAGE_ALIGNED(data_sz))
+		return ERR_PTR(-EINVAL);
+
+	if (data_sz > RINGBUF_MAX_DATA_SZ)
+		return ERR_PTR(-E2BIG);
+
+	rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
+	if (!rb)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock_init(&rb->spinlock);
+	init_waitqueue_head(&rb->waitq);
+	init_irq_work(&rb->work, bpf_ringbuf_notify);
+
+	rb->mask = data_sz - 1;
+	rb->consumer_pos = 0;
+	rb->producer_pos = 0;
+
+	return rb;
+}
+
+static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
+{
+	struct bpf_ringbuf_map *rb_map;
+	u64 cost;
+	int err;
+
+	if (attr->map_flags & ~RINGBUF_CREATE_FLAG_MASK)
+		return ERR_PTR(-EINVAL);
+
+	if (attr->key_size || attr->value_size ||
+	    attr->max_entries == 0 || !PAGE_ALIGNED(attr->max_entries))
+		return ERR_PTR(-EINVAL);
+
+	rb_map = kzalloc(sizeof(*rb_map), GFP_USER);
+	if (!rb_map)
+		return ERR_PTR(-ENOMEM);
+
+	bpf_map_init_from_attr(&rb_map->map, attr);
+
+	cost = sizeof(struct bpf_ringbuf_map) +
+	       sizeof(struct bpf_ringbuf) +
+	       attr->max_entries;
+	err = bpf_map_charge_init(&rb_map->map.memory, cost);
+	if (err)
+		goto err_free_map;
+
+	rb_map->rb = bpf_ringbuf_alloc(attr->max_entries, rb_map->map.numa_node);
+	if (IS_ERR(rb_map->rb)) {
+		err = PTR_ERR(rb_map->rb);
+		goto err_uncharge;
+	}
+
+	return &rb_map->map;
+
+err_uncharge:
+	bpf_map_charge_finish(&rb_map->map.memory);
+err_free_map:
+	kfree(rb_map);
+	return ERR_PTR(err);
+}
+
+static void bpf_ringbuf_free(struct bpf_ringbuf *ringbuf)
+{
+	kvfree(ringbuf);
+}
+
+static void ringbuf_map_free(struct bpf_map *map)
+{
+	struct bpf_ringbuf_map *rb_map;
+
+	/* at this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
+	 * so the programs (can be more than one that used this map) were
+	 * disconnected from events. Wait for outstanding critical sections in
+	 * these programs to complete
+	 */
+	synchronize_rcu();
+
+	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+	bpf_ringbuf_free(rb_map->rb);
+	kfree(rb_map);
+}
+
+static void *ringbuf_map_lookup_elem(struct bpf_map *map, void *key)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+
+static int ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value,
+				   u64 flags)
+{
+	return -ENOTSUPP;
+}
+
+static int ringbuf_map_delete_elem(struct bpf_map *map, void *key)
+{
+	return -ENOTSUPP;
+}
+
+static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
+				    void *next_key)
+{
+	return -ENOTSUPP;
+}
+
+static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
+{
+	size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
+
+	/* consumer page + producer page + 2 x data pages */
+	return RINGBUF_POS_PAGES + 2 * data_pages;
+}
+
+static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+{
+	struct bpf_ringbuf_map *rb_map;
+	size_t mmap_sz;
+
+	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+	mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
+
+	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
+		return -EINVAL;
+
+	return remap_vmalloc_range(vma, rb_map->rb,
+				   vma->vm_pgoff + RINGBUF_PGOFF);
+}
+
+static unsigned long ringbuf_avail_data_sz(struct bpf_ringbuf *rb)
+{
+	unsigned long cons_pos, prod_pos;
+
+	cons_pos = smp_load_acquire(&rb->consumer_pos);
+	prod_pos = smp_load_acquire(&rb->producer_pos);
+	return prod_pos - cons_pos;
+}
+
+static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
+				 struct poll_table_struct *pts)
+{
+	struct bpf_ringbuf_map *rb_map;
+
+	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+	poll_wait(filp, &rb_map->rb->waitq, pts);
+
+	if (ringbuf_avail_data_sz(rb_map->rb))
+		return EPOLLIN | EPOLLRDNORM;
+	return 0;
+}
+
+const struct bpf_map_ops ringbuf_map_ops = {
+	.map_alloc = ringbuf_map_alloc,
+	.map_free = ringbuf_map_free,
+	.map_mmap = ringbuf_map_mmap,
+	.map_poll = ringbuf_map_poll,
+	.map_lookup_elem = ringbuf_map_lookup_elem,
+	.map_update_elem = ringbuf_map_update_elem,
+	.map_delete_elem = ringbuf_map_delete_elem,
+	.map_get_next_key = ringbuf_map_get_next_key,
+};
+
+/* Given pointer to ring buffer record metadata and struct bpf_ringbuf itself,
+ * calculate offset from record metadata to ring buffer in pages, rounded
+ * down. This page offset is stored as part of record metadata and allows to
+ * restore struct bpf_ringbuf * from record pointer. This page offset is
+ * stored at offset 4 of record metadata header.
+ */
+static size_t bpf_ringbuf_rec_pg_off(struct bpf_ringbuf *rb,
+				     struct bpf_ringbuf_hdr *hdr)
+{
+	return ((void *)hdr - (void *)rb) >> PAGE_SHIFT;
+}
+
+/* Given pointer to ring buffer record header, restore pointer to struct
+ * bpf_ringbuf itself by using page offset stored at offset 4
+ */
+static struct bpf_ringbuf *
+bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
+{
+	unsigned long addr = (unsigned long)(void *)hdr;
+	unsigned long off = (unsigned long)hdr->pg_off << PAGE_SHIFT;
+
+	return (void*)((addr & PAGE_MASK) - off);
+}
+
+static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+{
+	unsigned long cons_pos, prod_pos, new_prod_pos, flags;
+	u32 len, pg_off;
+	struct bpf_ringbuf_hdr *hdr;
+
+	if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
+		return NULL;
+
+	len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
+	cons_pos = smp_load_acquire(&rb->consumer_pos);
+
+	if (in_nmi()) {
+		if (!spin_trylock_irqsave(&rb->spinlock, flags))
+			return NULL;
+	} else {
+		spin_lock_irqsave(&rb->spinlock, flags);
+	}
+
+	prod_pos = rb->producer_pos;
+	new_prod_pos = prod_pos + len;
+
+	/* check for out of ringbuf space by ensuring producer position
+	 * doesn't advance more than (ringbuf_size - 1) ahead
+	 */
+	if (new_prod_pos - cons_pos > rb->mask) {
+		spin_unlock_irqrestore(&rb->spinlock, flags);
+		return NULL;
+	}
+
+	hdr = (void *)rb->data + (prod_pos & rb->mask);
+	pg_off = bpf_ringbuf_rec_pg_off(rb, hdr);
+	hdr->len = size | BPF_RINGBUF_BUSY_BIT;
+	hdr->pg_off = pg_off;
+
+	/* ensure header is written before updating producer positions */
+	smp_wmb();
+
+	/* pairs with consumer's smp_load_acquire() */
+	smp_store_release(&rb->producer_pos, new_prod_pos);
+
+	spin_unlock_irqrestore(&rb->spinlock, flags);
+
+	return (void *)hdr + BPF_RINGBUF_HDR_SZ;
+}
+
+BPF_CALL_3(bpf_ringbuf_reserve, struct bpf_map *, map, u64, size, u64, flags)
+{
+	struct bpf_ringbuf_map *rb_map;
+
+	if (unlikely(flags))
+		return 0;
+
+	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+	return (unsigned long)__bpf_ringbuf_reserve(rb_map->rb, size);
+}
+
+const struct bpf_func_proto bpf_ringbuf_reserve_proto = {
+	.func		= bpf_ringbuf_reserve,
+	.ret_type	= RET_PTR_TO_ALLOC_MEM_OR_NULL,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_CONST_ALLOC_SIZE_OR_ZERO,
+	.arg3_type	= ARG_ANYTHING,
+};
+
+static void bpf_ringbuf_commit(void *sample, u64 flags, bool discard)
+{
+	unsigned long rec_pos, cons_pos;
+	struct bpf_ringbuf_hdr *hdr;
+	struct bpf_ringbuf *rb;
+	u32 new_len;
+
+	hdr = sample - BPF_RINGBUF_HDR_SZ;
+	rb = bpf_ringbuf_restore_from_rec(hdr);
+	new_len = hdr->len ^ BPF_RINGBUF_BUSY_BIT;
+	if (discard)
+		new_len |= BPF_RINGBUF_DISCARD_BIT;
+
+	/* update record header with correct final size prefix */
+	xchg(&hdr->len, new_len);
+
+	/* if consumer caught up and is waiting for our record, notify about
+	 * new data availability
+	 */
+	rec_pos = (void *)hdr - (void *)rb->data;
+	cons_pos = smp_load_acquire(&rb->consumer_pos) & rb->mask;
+
+	if (flags & BPF_RB_FORCE_WAKEUP)
+		irq_work_queue(&rb->work);
+	else if (cons_pos == rec_pos && !(flags & BPF_RB_NO_WAKEUP))
+		irq_work_queue(&rb->work);
+}
+
+BPF_CALL_2(bpf_ringbuf_submit, void *, sample, u64, flags)
+{
+	bpf_ringbuf_commit(sample, flags, false /* discard */);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_ringbuf_submit_proto = {
+	.func		= bpf_ringbuf_submit,
+	.ret_type	= RET_VOID,
+	.arg1_type	= ARG_PTR_TO_ALLOC_MEM,
+	.arg2_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_2(bpf_ringbuf_discard, void *, sample, u64, flags)
+{
+	bpf_ringbuf_commit(sample, flags, true /* discard */);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_ringbuf_discard_proto = {
+	.func		= bpf_ringbuf_discard,
+	.ret_type	= RET_VOID,
+	.arg1_type	= ARG_PTR_TO_ALLOC_MEM,
+	.arg2_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_4(bpf_ringbuf_output, struct bpf_map *, map, void *, data, u64, size,
+	   u64, flags)
+{
+	struct bpf_ringbuf_map *rb_map;
+	void *rec;
+
+	if (unlikely(flags & ~(BPF_RB_NO_WAKEUP | BPF_RB_FORCE_WAKEUP)))
+		return -EINVAL;
+
+	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+	rec = __bpf_ringbuf_reserve(rb_map->rb, size);
+	if (!rec)
+		return -EAGAIN;
+
+	memcpy(rec, data, size);
+	bpf_ringbuf_commit(rec, flags, false /* discard */);
+	return 0;
+}
+
+const struct bpf_func_proto bpf_ringbuf_output_proto = {
+	.func		= bpf_ringbuf_output,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_MEM,
+	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg4_type	= ARG_ANYTHING,
+};
+
+BPF_CALL_2(bpf_ringbuf_query, struct bpf_map *, map, u64, flags)
+{
+	struct bpf_ringbuf *rb;
+
+	rb = container_of(map, struct bpf_ringbuf_map, map)->rb;
+
+	switch (flags) {
+	case BPF_RB_AVAIL_DATA:
+		return ringbuf_avail_data_sz(rb);
+	case BPF_RB_RING_SIZE:
+		return rb->mask + 1;
+	case BPF_RB_CONS_POS:
+		return smp_load_acquire(&rb->consumer_pos);
+	case BPF_RB_PROD_POS:
+		return smp_load_acquire(&rb->producer_pos);
+	default:
+		return 0;
+	}
+}
+
+const struct bpf_func_proto bpf_ringbuf_query_proto = {
+	.func		= bpf_ringbuf_query,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_ANYTHING,
+};
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 57dfc98289d5..680e9ccc8e70 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -26,6 +26,7 @@
 #include <linux/audit.h>
 #include <uapi/linux/btf.h>
 #include <linux/bpf_lsm.h>
+#include <linux/poll.h>
 
 #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
 			  (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \
@@ -651,6 +652,16 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
 	return err;
 }
 
+static __poll_t bpf_map_poll(struct file *filp, struct poll_table_struct *pts)
+{
+	struct bpf_map *map = filp->private_data;
+
+	if (map->ops->map_poll)
+		return map->ops->map_poll(map, filp, pts);
+
+	return EPOLLERR;
+}
+
 const struct file_operations bpf_map_fops = {
 #ifdef CONFIG_PROC_FS
 	.show_fdinfo	= bpf_map_show_fdinfo,
@@ -659,6 +670,7 @@ const struct file_operations bpf_map_fops = {
 	.read		= bpf_dummy_read,
 	.write		= bpf_dummy_write,
 	.mmap		= bpf_map_mmap,
+	.poll		= bpf_map_poll,
 };
 
 int bpf_map_new_fd(struct bpf_map *map, int flags)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9c7d67d65d8c..4ce3e031554d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -233,6 +233,7 @@ struct bpf_call_arg_meta {
 	bool pkt_access;
 	int regno;
 	int access_size;
+	int mem_size;
 	u64 msize_max_value;
 	int ref_obj_id;
 	int func_id;
@@ -399,7 +400,8 @@ static bool reg_type_may_be_null(enum bpf_reg_type type)
 	       type == PTR_TO_SOCKET_OR_NULL ||
 	       type == PTR_TO_SOCK_COMMON_OR_NULL ||
 	       type == PTR_TO_TCP_SOCK_OR_NULL ||
-	       type == PTR_TO_BTF_ID_OR_NULL;
+	       type == PTR_TO_BTF_ID_OR_NULL ||
+	       type == PTR_TO_MEM_OR_NULL;
 }
 
 static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
@@ -413,7 +415,9 @@ static bool reg_type_may_be_refcounted_or_null(enum bpf_reg_type type)
 	return type == PTR_TO_SOCKET ||
 		type == PTR_TO_SOCKET_OR_NULL ||
 		type == PTR_TO_TCP_SOCK ||
-		type == PTR_TO_TCP_SOCK_OR_NULL;
+		type == PTR_TO_TCP_SOCK_OR_NULL ||
+		type == PTR_TO_MEM ||
+		type == PTR_TO_MEM_OR_NULL;
 }
 
 static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
@@ -427,7 +431,9 @@ static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
  */
 static bool is_release_function(enum bpf_func_id func_id)
 {
-	return func_id == BPF_FUNC_sk_release;
+	return func_id == BPF_FUNC_sk_release ||
+	       func_id == BPF_FUNC_ringbuf_submit ||
+	       func_id == BPF_FUNC_ringbuf_discard;
 }
 
 static bool may_be_acquire_function(enum bpf_func_id func_id)
@@ -435,7 +441,8 @@ static bool may_be_acquire_function(enum bpf_func_id func_id)
 	return func_id == BPF_FUNC_sk_lookup_tcp ||
 		func_id == BPF_FUNC_sk_lookup_udp ||
 		func_id == BPF_FUNC_skc_lookup_tcp ||
-		func_id == BPF_FUNC_map_lookup_elem;
+		func_id == BPF_FUNC_map_lookup_elem ||
+	        func_id == BPF_FUNC_ringbuf_reserve;
 }
 
 static bool is_acquire_function(enum bpf_func_id func_id,
@@ -445,7 +452,8 @@ static bool is_acquire_function(enum bpf_func_id func_id,
 
 	if (func_id == BPF_FUNC_sk_lookup_tcp ||
 	    func_id == BPF_FUNC_sk_lookup_udp ||
-	    func_id == BPF_FUNC_skc_lookup_tcp)
+	    func_id == BPF_FUNC_skc_lookup_tcp ||
+	    func_id == BPF_FUNC_ringbuf_reserve)
 		return true;
 
 	if (func_id == BPF_FUNC_map_lookup_elem &&
@@ -485,6 +493,8 @@ static const char * const reg_type_str[] = {
 	[PTR_TO_XDP_SOCK]	= "xdp_sock",
 	[PTR_TO_BTF_ID]		= "ptr_",
 	[PTR_TO_BTF_ID_OR_NULL]	= "ptr_or_null_",
+	[PTR_TO_MEM]		= "mem",
+	[PTR_TO_MEM_OR_NULL]	= "mem_or_null",
 };
 
 static char slot_type_char[] = {
@@ -2459,32 +2469,31 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
 	return 0;
 }
 
-/* check read/write into map element returned by bpf_map_lookup_elem() */
-static int __check_map_access(struct bpf_verifier_env *env, u32 regno, int off,
-			      int size, bool zero_size_allowed)
+/* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
+static int __check_mem_access(struct bpf_verifier_env *env, int off,
+			      int size, u32 mem_size, bool zero_size_allowed)
 {
-	struct bpf_reg_state *regs = cur_regs(env);
-	struct bpf_map *map = regs[regno].map_ptr;
+	bool size_ok = size > 0 || (size == 0 && zero_size_allowed);
 
-	if (off < 0 || size < 0 || (size == 0 && !zero_size_allowed) ||
-	    off + size > map->value_size) {
-		verbose(env, "invalid access to map value, value_size=%d off=%d size=%d\n",
-			map->value_size, off, size);
-		return -EACCES;
-	}
-	return 0;
+	if (off >= 0 && size_ok && off + size <= mem_size)
+		return 0;
+
+	verbose(env, "invalid access to memory, mem_size=%u off=%d size=%d\n",
+		mem_size, off, size);
+	return -EACCES;
 }
 
-/* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, u32 regno,
-			    int off, int size, bool zero_size_allowed)
+/* check read/write into a memory region with possible variable offset */
+static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
+				   int off, int size, u32 mem_size,
+				   bool zero_size_allowed)
 {
 	struct bpf_verifier_state *vstate = env->cur_state;
 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
 	struct bpf_reg_state *reg = &state->regs[regno];
 	int err;
 
-	/* We may have adjusted the register to this map value, so we
+	/* We may have adjusted the register pointing to memory region, so we
 	 * need to try adding each of min_value and max_value to off
 	 * to make sure our theoretical access will be safe.
 	 */
@@ -2501,14 +2510,14 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
 	    (reg->smin_value == S64_MIN ||
 	     (off + reg->smin_value != (s64)(s32)(off + reg->smin_value)) ||
 	      reg->smin_value + off < 0)) {
-		verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+		verbose(env, "R%d min value is negative, either use unsigned index or do an if (index >=0) check.\n",
 			regno);
 		return -EACCES;
 	}
-	err = __check_map_access(env, regno, reg->smin_value + off, size,
+	err = __check_mem_access(env, reg->smin_value + off, size, mem_size,
 				 zero_size_allowed);
 	if (err) {
-		verbose(env, "R%d min value is outside of the array range\n",
+		verbose(env, "R%d min value is outside of the memory region\n",
 			regno);
 		return err;
 	}
@@ -2518,18 +2527,38 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
 	 * If reg->umax_value + off could overflow, treat that as unbounded too.
 	 */
 	if (reg->umax_value >= BPF_MAX_VAR_OFF) {
-		verbose(env, "R%d unbounded memory access, make sure to bounds check any array access into a map\n",
+		verbose(env, "R%d unbounded memory access, make sure to bounds check any memory region access\n",
 			regno);
 		return -EACCES;
 	}
-	err = __check_map_access(env, regno, reg->umax_value + off, size,
+	err = __check_mem_access(env, reg->umax_value + off, size, mem_size,
 				 zero_size_allowed);
-	if (err)
-		verbose(env, "R%d max value is outside of the array range\n",
+	if (err) {
+		verbose(env, "R%d max value is outside of the memory region\n",
 			regno);
+		return err;
+	}
+
+	return 0;
+}
+
+/* check read/write into a map element with possible variable offset */
+static int check_map_access(struct bpf_verifier_env *env, u32 regno,
+			    int off, int size, bool zero_size_allowed)
+{
+	struct bpf_verifier_state *vstate = env->cur_state;
+	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+	struct bpf_reg_state *reg = &state->regs[regno];
+	struct bpf_map *map = reg->map_ptr;
+	int err;
+
+	err = check_mem_region_access(env, regno, off, size, map->value_size,
+				      zero_size_allowed);
+	if (err)
+		return err;
 
-	if (map_value_has_spin_lock(reg->map_ptr)) {
-		u32 lock = reg->map_ptr->spin_lock_off;
+	if (map_value_has_spin_lock(map)) {
+		u32 lock = map->spin_lock_off;
 
 		/* if any part of struct bpf_spin_lock can be touched by
 		 * load/store reject this program.
@@ -3211,6 +3240,16 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				mark_reg_unknown(env, regs, value_regno);
 			}
 		}
+	} else if (reg->type == PTR_TO_MEM) {
+		if (t == BPF_WRITE && value_regno >= 0 &&
+		    is_pointer_value(env, value_regno)) {
+			verbose(env, "R%d leaks addr into mem\n", value_regno);
+			return -EACCES;
+		}
+		err = check_mem_region_access(env, regno, off, size,
+					      reg->mem_size, false);
+		if (!err && t == BPF_READ && value_regno >= 0)
+			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_CTX) {
 		enum bpf_reg_type reg_type = SCALAR_VALUE;
 		u32 btf_id = 0;
@@ -3548,6 +3587,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 			return -EACCES;
 		return check_map_access(env, regno, reg->off, access_size,
 					zero_size_allowed);
+	case PTR_TO_MEM:
+		return check_mem_region_access(env, regno, reg->off,
+					       access_size, reg->mem_size,
+					       zero_size_allowed);
 	default: /* scalar_value|ptr_to_stack or invalid ptr */
 		return check_stack_boundary(env, regno, access_size,
 					    zero_size_allowed, meta);
@@ -3652,6 +3695,17 @@ static bool arg_type_is_mem_size(enum bpf_arg_type type)
 	       type == ARG_CONST_SIZE_OR_ZERO;
 }
 
+static bool arg_type_is_alloc_mem_ptr(enum bpf_arg_type type)
+{
+	return type == ARG_PTR_TO_ALLOC_MEM ||
+	       type == ARG_PTR_TO_ALLOC_MEM_OR_NULL;
+}
+
+static bool arg_type_is_alloc_size(enum bpf_arg_type type)
+{
+	return type == ARG_CONST_ALLOC_SIZE_OR_ZERO;
+}
+
 static bool arg_type_is_int_ptr(enum bpf_arg_type type)
 {
 	return type == ARG_PTR_TO_INT ||
@@ -3711,7 +3765,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 			 type != expected_type)
 			goto err_type;
 	} else if (arg_type == ARG_CONST_SIZE ||
-		   arg_type == ARG_CONST_SIZE_OR_ZERO) {
+		   arg_type == ARG_CONST_SIZE_OR_ZERO ||
+		   arg_type == ARG_CONST_ALLOC_SIZE_OR_ZERO) {
 		expected_type = SCALAR_VALUE;
 		if (type != expected_type)
 			goto err_type;
@@ -3782,13 +3837,29 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 		 * happens during stack boundary checking.
 		 */
 		if (register_is_null(reg) &&
-		    arg_type == ARG_PTR_TO_MEM_OR_NULL)
+		    (arg_type == ARG_PTR_TO_MEM_OR_NULL ||
+		     arg_type == ARG_PTR_TO_ALLOC_MEM_OR_NULL))
 			/* final test in check_stack_boundary() */;
 		else if (!type_is_pkt_pointer(type) &&
 			 type != PTR_TO_MAP_VALUE &&
+			 type != PTR_TO_MEM &&
 			 type != expected_type)
 			goto err_type;
 		meta->raw_mode = arg_type == ARG_PTR_TO_UNINIT_MEM;
+	} else if (arg_type_is_alloc_mem_ptr(arg_type)) {
+		expected_type = PTR_TO_MEM;
+		if (register_is_null(reg) &&
+		    arg_type == ARG_PTR_TO_ALLOC_MEM_OR_NULL)
+			/* final test in check_stack_boundary() */;
+		else if (type != expected_type)
+			goto err_type;
+		if (meta->ref_obj_id) {
+			verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n",
+				regno, reg->ref_obj_id,
+				meta->ref_obj_id);
+			return -EFAULT;
+		}
+		meta->ref_obj_id = reg->ref_obj_id;
 	} else if (arg_type_is_int_ptr(arg_type)) {
 		expected_type = PTR_TO_STACK;
 		if (!type_is_pkt_pointer(type) &&
@@ -3884,6 +3955,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 					      zero_size_allowed, meta);
 		if (!err)
 			err = mark_chain_precision(env, regno);
+	} else if (arg_type_is_alloc_size(arg_type)) {
+		if (!tnum_is_const(reg->var_off)) {
+			verbose(env, "R%d unbounded size, use 'var &= const' or 'if (var < const)'\n",
+				regno);
+			return -EACCES;
+		}
+		meta->mem_size = reg->var_off.value;
 	} else if (arg_type_is_int_ptr(arg_type)) {
 		int size = int_ptr_type_to_size(arg_type);
 
@@ -3920,6 +3998,14 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 		    func_id != BPF_FUNC_xdp_output)
 			goto error;
 		break;
+	case BPF_MAP_TYPE_RINGBUF:
+		if (func_id != BPF_FUNC_ringbuf_output &&
+		    func_id != BPF_FUNC_ringbuf_reserve &&
+		    func_id != BPF_FUNC_ringbuf_submit &&
+		    func_id != BPF_FUNC_ringbuf_discard &&
+		    func_id != BPF_FUNC_ringbuf_query)
+			goto error;
+		break;
 	case BPF_MAP_TYPE_STACK_TRACE:
 		if (func_id != BPF_FUNC_get_stackid)
 			goto error;
@@ -4646,6 +4732,11 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		mark_reg_known_zero(env, regs, BPF_REG_0);
 		regs[BPF_REG_0].type = PTR_TO_TCP_SOCK_OR_NULL;
 		regs[BPF_REG_0].id = ++env->id_gen;
+	} else if (fn->ret_type == RET_PTR_TO_ALLOC_MEM_OR_NULL) {
+		mark_reg_known_zero(env, regs, BPF_REG_0);
+		regs[BPF_REG_0].type = PTR_TO_MEM_OR_NULL;
+		regs[BPF_REG_0].id = ++env->id_gen;
+		regs[BPF_REG_0].mem_size = meta.mem_size;
 	} else {
 		verbose(env, "unknown return type %d of func %s#%d\n",
 			fn->ret_type, func_id_name(func_id), func_id);
@@ -6585,6 +6676,8 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
 			reg->type = PTR_TO_TCP_SOCK;
 		} else if (reg->type == PTR_TO_BTF_ID_OR_NULL) {
 			reg->type = PTR_TO_BTF_ID;
+		} else if (reg->type == PTR_TO_MEM_OR_NULL) {
+			reg->type = PTR_TO_MEM;
 		}
 		if (is_null) {
 			/* We don't need id and ref_obj_id from this point
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 9531f54d0a3a..ab325904a68a 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1088,6 +1088,16 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_perf_event_read_value_proto;
 	case BPF_FUNC_get_ns_current_pid_tgid:
 		return &bpf_get_ns_current_pid_tgid_proto;
+	case BPF_FUNC_ringbuf_output:
+		return &bpf_ringbuf_output_proto;
+	case BPF_FUNC_ringbuf_reserve:
+		return &bpf_ringbuf_reserve_proto;
+	case BPF_FUNC_ringbuf_submit:
+		return &bpf_ringbuf_submit_proto;
+	case BPF_FUNC_ringbuf_discard:
+		return &bpf_ringbuf_discard_proto;
+	case BPF_FUNC_ringbuf_query:
+		return &bpf_ringbuf_query_proto;
 	default:
 		return NULL;
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 146c742f1d49..285973d10bdf 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -73,7 +73,7 @@ struct bpf_insn {
 /* Key of a BPF_MAP_TYPE_LPM_TRIE entry */
 struct bpf_lpm_trie_key {
 	__u32	prefixlen;	/* up to 32 for AF_INET, 128 for AF_INET6 */
-	__u8	data[];	/* Arbitrary size */
+	__u8	data[0];	/* Arbitrary size */
 };
 
 struct bpf_cgroup_storage_key {
@@ -147,6 +147,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_SK_STORAGE,
 	BPF_MAP_TYPE_DEVMAP_HASH,
 	BPF_MAP_TYPE_STRUCT_OPS,
+	BPF_MAP_TYPE_RINGBUF,
 };
 
 /* Note that tracing related programs such as
@@ -2015,8 +2016,8 @@ union bpf_attr {
  * int bpf_xdp_adjust_tail(struct xdp_buff *xdp_md, int delta)
  * 	Description
  * 		Adjust (move) *xdp_md*\ **->data_end** by *delta* bytes. It is
- * 		only possible to shrink the packet as of this writing,
- * 		therefore *delta* must be a negative integer.
+ * 		possible to both shrink and grow the packet tail.
+ * 		Shrinking is done by passing a negative *delta*.
  *
  * 		A call to this helper is susceptible to change the underlying
  * 		packet buffer. Therefore, at load time, all checks on pointers
@@ -3153,6 +3154,59 @@ union bpf_attr {
  *		**bpf_sk_cgroup_id**\ ().
  *	Return
  *		The id is returned or 0 in case the id could not be retrieved.
+ *
+ * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
+ * 	Description
+ * 		Copy *size* bytes from *data* into a ring buffer *ringbuf*.
+ * 		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+ * 		new data availability is sent.
+ * 		If BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
+ * 		new data availability is sent unconditionally.
+ * 	Return
+ * 		0, on success;
+ * 		< 0, on error.
+ *
+ * void *bpf_ringbuf_reserve(void *ringbuf, u64 size, u64 flags)
+ * 	Description
+ * 		Reserve *size* bytes of payload in a ring buffer *ringbuf*.
+ * 	Return
+ * 		Valid pointer with *size* bytes of memory available; NULL,
+ * 		otherwise.
+ *
+ * void bpf_ringbuf_submit(void *data, u64 flags)
+ * 	Description
+ * 		Submit reserved ring buffer sample, pointed to by *data*.
+ * 		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+ * 		new data availability is sent.
+ * 		If BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
+ * 		new data availability is sent unconditionally.
+ * 	Return
+ * 		Nothing. Always succeeds.
+ *
+ * void bpf_ringbuf_discard(void *data, u64 flags)
+ * 	Description
+ * 		Discard reserved ring buffer sample, pointed to by *data*.
+ * 		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+ * 		new data availability is sent.
+ * 		If BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
+ * 		new data availability is sent unconditionally.
+ * 	Return
+ * 		Nothing. Always succeeds.
+ *
+ * u64 bpf_ringbuf_query(void *ringbuf, u64 flags)
+ *	Description
+ *		Query various characteristics of provided ring buffer. What
+ *		exactly is queried is determined by *flags*:
+ *		  - BPF_RB_AVAIL_DATA - amount of data not yet consumed;
+ *		  - BPF_RB_RING_SIZE - the size of ring buffer;
+ *		  - BPF_RB_CONS_POS - consumer position (can wrap around);
+ *		  - BPF_RB_PROD_POS - producer(s) position (can wrap around);
+ *		Data returned is just a momentary snapshot of actual values
+ *		and could be inaccurate, so this facility should be used to
+ *		power heuristics and for reporting, not for 100% correct
+ *		calculations.
+ *	Return
+ *		Requested value, or 0, if flags are not recognized.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3284,7 +3338,12 @@ union bpf_attr {
 	FN(seq_printf),			\
 	FN(seq_write),			\
 	FN(sk_cgroup_id),		\
-	FN(sk_ancestor_cgroup_id),
+	FN(sk_ancestor_cgroup_id),	\
+	FN(ringbuf_output),		\
+	FN(ringbuf_reserve),		\
+	FN(ringbuf_submit),		\
+	FN(ringbuf_discard),		\
+	FN(ringbuf_query),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
@@ -3394,6 +3453,29 @@ enum {
 	BPF_F_GET_BRANCH_RECORDS_SIZE	= (1ULL << 0),
 };
 
+/* BPF_FUNC_ringbuf_output, BPF_FUNC_ringbuf_submit, and
+ * BPF_FUNC_ringbuf_discard flags.
+ */
+enum {
+	BPF_RB_NO_WAKEUP		= (1ULL << 0),
+	BPF_RB_FORCE_WAKEUP		= (1ULL << 1),
+};
+
+/* BPF_FUNC_ringbuf_query flags */
+enum {
+	BPF_RB_AVAIL_DATA = 0,
+	BPF_RB_RING_SIZE = 1,
+	BPF_RB_CONS_POS = 2,
+	BPF_RB_PROD_POS = 3,
+};
+
+/* BPF ring buffer constants */
+enum {
+	BPF_RINGBUF_BUSY_BIT		= (1U << 31),
+	BPF_RINGBUF_DISCARD_BIT		= (1U << 30),
+	BPF_RINGBUF_HDR_SZ		= 8,
+};
+
 /* Mode for BPF_FUNC_skb_adjust_room helper. */
 enum bpf_adj_room_mode {
 	BPF_ADJ_ROOM_NET,
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests
  2020-05-17 19:57 [PATCH v2 bpf-next 0/7] BPF ring buffer Andrii Nakryiko
  2020-05-17 19:57 ` [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it Andrii Nakryiko
@ 2020-05-17 19:57 ` Andrii Nakryiko
  2020-05-22  0:34   ` Paul E. McKenney
  2020-05-17 19:57 ` [PATCH v2 bpf-next 3/7] bpf: track reference type in verifier Andrii Nakryiko
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

Add 4 litmus tests for BPF ringbuf implementation, divided into two different
use cases.

First, two unbounded cases: one with 1 producer and another with
2 producers, each with a single consumer. All reservations are supposed to
succeed.

Second, a bounded case with only 1 record allowed in the ring buffer at any
given time. Here failures to reserve space are expected. Again, 1- and
2-producer cases, each with a single consumer, are validated.

Just for the fun of it, I also wrote a 3-producer case; it took *16 hours* to
validate, but came back successful as well. I'm not including it in this
patch, because it's not practical to run. Verification output is shown below
for all 4 included cases, plus the bounded 3-producer one.

Each litmus test implements the producer/consumer protocol of the BPF ring
buffer implementation found in kernel/bpf/ringbuf.c. Due to tool limitations,
all records are assumed equal-sized and producer/consumer counters are
incremented by 1; px/cx model the producer/consumer positions, while
len1/len2 model per-record length/busy markers. This doesn't change the
correctness of the algorithm.

Verification results:
/* 1p1c bounded case */
$ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+1p1c+bounded.litmus
Test mpsc-rb+1p1c+bounded Allowed
States 2
0:rFail=0; 1:rFail=0; cx=0; dropped=0; len1=1; px=1;
0:rFail=0; 1:rFail=0; cx=1; dropped=0; len1=1; px=1;
Ok
Witnesses
Positive: 3 Negative: 0
Condition exists (0:rFail=0 /\ 1:rFail=0 /\ dropped=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
Observation mpsc-rb+1p1c+bounded Always 3 0
Time mpsc-rb+1p1c+bounded 0.03
Hash=5bdad0f41557a641370e7fa6b8eb2f43

/* 2p1c bounded case */
$ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+2p1c+bounded.litmus
Test mpsc-rb+2p1c+bounded Allowed
States 4
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=0; dropped=1; len1=1; px=1;
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=1; dropped=0; len1=1; px=2;
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=1; dropped=1; len1=1; px=1;
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=2; dropped=0; len1=1; px=2;
Ok
Witnesses
Positive: 22 Negative: 0
Condition exists (0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ len1=1 /\ (dropped=0 /\ px=2 /\ (cx=1 \/ cx=2) \/ dropped=1 /\ px=1 /\ (cx=0 \/ cx=1)))
Observation mpsc-rb+2p1c+bounded Always 22 0
Time mpsc-rb+2p1c+bounded 119.38
Hash=e2f8f442a02bf7d8c2988ba82cf002d2

/* 1p1c unbounded case */
$ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+1p1c+unbound.litmus
Test mpsc-rb+1p1c+unbound Allowed
States 2
0:rFail=0; 1:rFail=0; cx=0; len1=1; px=1;
0:rFail=0; 1:rFail=0; cx=1; len1=1; px=1;
Ok
Witnesses
Positive: 3 Negative: 0
Condition exists (0:rFail=0 /\ 1:rFail=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
Observation mpsc-rb+1p1c+unbound Always 3 0
Time mpsc-rb+1p1c+unbound 0.02
Hash=be9de6487d8e27c3d37802d122e4a87c

/* 2p1c unbounded case */
$ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+2p1c+unbound.litmus
Test mpsc-rb+2p1c+unbound Allowed
States 3
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=0; len1=1; len2=1; px=2;
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=1; len1=1; len2=1; px=2;
0:rFail=0; 1:rFail=0; 2:rFail=0; cx=2; len1=1; len2=1; px=2;
Ok
Witnesses
Positive: 42 Negative: 0
Condition exists (0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ px=2 /\ len1=1 /\ len2=1 /\ (cx=0 \/ cx=1 \/ cx=2))
Observation mpsc-rb+2p1c+unbound Always 42 0
Time mpsc-rb+2p1c+unbound 39.19
Hash=f0352aba9bdc03dd0b1def7d0c4956fa

/* 3p1c bounded case */
$ herd7 -unroll 0 -conf linux-kernel.cfg mpsc-rb+3p1c+bounded.litmus
Test mpsc+ringbuf-spinlock Allowed
States 5
0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=0; len1=1; len2=1; px=2;
0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=1; len1=1; len2=1; px=2;
0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=1; len1=1; len2=1; px=3;
0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=2; len1=1; len2=1; px=2;
0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=2; len1=1; len2=1; px=3;
Ok
Witnesses
Positive: 558 Negative: 0
Condition exists (0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ 3:rFail=0 /\ len1=1 /\ len2=1 /\ (px=2 /\ (cx=0 \/ cx=1 \/ cx=2) \/ px=3 /\ (cx=1 \/ cx=2)))
Observation mpsc+ringbuf-spinlock Always 558 0
Time mpsc+ringbuf-spinlock 57487.24
Hash=133977dba930d167b4e1b4a6923d5687

Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../litmus-tests/mpsc-rb+1p1c+bounded.litmus  |  92 +++++++++++
 .../litmus-tests/mpsc-rb+1p1c+unbound.litmus  |  83 ++++++++++
 .../litmus-tests/mpsc-rb+2p1c+bounded.litmus  | 152 ++++++++++++++++++
 .../litmus-tests/mpsc-rb+2p1c+unbound.litmus  | 137 ++++++++++++++++
 4 files changed, 464 insertions(+)
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
 create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus

diff --git a/tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
new file mode 100644
index 000000000000..cafd17afe11e
--- /dev/null
+++ b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
@@ -0,0 +1,92 @@
+C mpsc-rb+1p1c+bounded
+
+(*
+ * Result: Always
+ *
+ * This litmus test validates BPF ring buffer implementation under the
+ * following assumptions:
+ * - 1 producer;
+ * - 1 consumer;
+ * - ring buffer has capacity for only 1 record.
+ *
+ * Expectations:
+ * - 1 record pushed into ring buffer;
+ * - 0 or 1 element is consumed.
+ * - no failures.
+ *)
+
+{
+	max_len = 1;
+	len1 = 0;
+	px = 0;
+	cx = 0;
+	dropped = 0;
+}
+
+P0(int *len1, int *cx, int *px)
+{
+	int *rLenPtr;
+	int rLen;
+	int rPx;
+	int rCx;
+	int rFail;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	rPx = smp_load_acquire(px);
+	if (rCx < rPx) {
+		if (rCx == 0)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		rLen = smp_load_acquire(rLenPtr);
+		if (rLen == 0) {
+			rFail = 1;
+		} else if (rLen == 1) {
+			rCx = rCx + 1;
+			smp_store_release(cx, rCx);
+		}
+	}
+}
+
+P1(int *len1, spinlock_t *rb_lock, int *px, int *cx, int *dropped, int *max_len)
+{
+	int rPx;
+	int rCx;
+	int rFail;
+	int *rLenPtr;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	spin_lock(rb_lock);
+
+	rPx = *px;
+	if (rPx - rCx >= *max_len) {
+		atomic_inc(dropped);
+		spin_unlock(rb_lock);
+	} else {
+		if (rPx == 0)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		*rLenPtr = -1;
+		smp_wmb();
+		smp_store_release(px, rPx + 1);
+
+		spin_unlock(rb_lock);
+
+		smp_store_release(rLenPtr, 1);
+	}
+}
+
+exists (
+	0:rFail=0 /\ 1:rFail=0
+	/\
+	(
+		(dropped=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
+	)
+)
diff --git a/tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
new file mode 100644
index 000000000000..84f660598015
--- /dev/null
+++ b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
@@ -0,0 +1,83 @@
+C mpsc-rb+1p1c+unbound
+
+(*
+ * Result: Always
+ *
+ * This litmus test validates BPF ring buffer implementation under the
+ * following assumptions:
+ * - 1 producer;
+ * - 1 consumer;
+ * - ring buffer capacity is unbounded.
+ *
+ * Expectations:
+ * - 1 record pushed into ring buffer;
+ * - 0 or 1 element is consumed.
+ * - no failures.
+ *)
+
+{
+	len1 = 0;
+	px = 0;
+	cx = 0;
+}
+
+P0(int *len1, int *cx, int *px)
+{
+	int *rLenPtr;
+	int rLen;
+	int rPx;
+	int rCx;
+	int rFail;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	rPx = smp_load_acquire(px);
+	if (rCx < rPx) {
+		if (rCx == 0)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		rLen = smp_load_acquire(rLenPtr);
+		if (rLen == 0) {
+			rFail = 1;
+		} else if (rLen == 1) {
+			rCx = rCx + 1;
+			smp_store_release(cx, rCx);
+		}
+	}
+}
+
+P1(int *len1, spinlock_t *rb_lock, int *px, int *cx)
+{
+	int rPx;
+	int rCx;
+	int rFail;
+	int *rLenPtr;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	spin_lock(rb_lock);
+
+	rPx = *px;
+	if (rPx == 0)
+		rLenPtr = len1;
+	else
+		rFail = 1;
+
+	*rLenPtr = -1;
+	smp_wmb();
+	smp_store_release(px, rPx + 1);
+
+	spin_unlock(rb_lock);
+
+	smp_store_release(rLenPtr, 1);
+}
+
+exists (
+	0:rFail=0 /\ 1:rFail=0
+	/\ px=1 /\ len1=1
+	/\ (cx=0 \/ cx=1)
+)
diff --git a/tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
new file mode 100644
index 000000000000..900104c4933b
--- /dev/null
+++ b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
@@ -0,0 +1,152 @@
+C mpsc-rb+2p1c+bounded
+
+(*
+ * Result: Always
+ *
+ * This litmus test validates BPF ring buffer implementation under the
+ * following assumptions:
+ * - 2 identical producers;
+ * - 1 consumer;
+ * - ring buffer has capacity for only 1 record.
+ *
+ * Expectations:
+ * - either 1 or 2 records are pushed into ring buffer;
+ * - 0, 1, or 2 elements are consumed by consumer;
+ * - appropriate number of dropped records is recorded to satisfy ring buffer
+ *   size bounds;
+ * - no failures.
+ *)
+
+{
+	max_len = 1;
+	len1 = 0;
+	px = 0;
+	cx = 0;
+	dropped = 0;
+}
+
+P0(int *len1, int *cx, int *px)
+{
+	int *rLenPtr;
+	int rLen;
+	int rPx;
+	int rCx;
+	int rFail;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	rPx = smp_load_acquire(px);
+	if (rCx < rPx) {
+		if (rCx == 0)
+			rLenPtr = len1;
+		else if (rCx == 1)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		rLen = smp_load_acquire(rLenPtr);
+		if (rLen == 0) {
+			rFail = 1;
+		} else if (rLen == 1) {
+			rCx = rCx + 1;
+			smp_store_release(cx, rCx);
+		}
+	}
+
+	rPx = smp_load_acquire(px);
+	if (rCx < rPx) {
+		if (rCx == 0)
+			rLenPtr = len1;
+		else if (rCx == 1)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		rLen = smp_load_acquire(rLenPtr);
+		if (rLen == 0) {
+			rFail = 1;
+		} else if (rLen == 1) {
+			rCx = rCx + 1;
+			smp_store_release(cx, rCx);
+		}
+	}
+}
+
+P1(int *len1, spinlock_t *rb_lock, int *px, int *cx, int *dropped, int *max_len)
+{
+	int rPx;
+	int rCx;
+	int rFail;
+	int *rLenPtr;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	spin_lock(rb_lock);
+
+	rPx = *px;
+	if (rPx - rCx >= *max_len) {
+		atomic_inc(dropped);
+		spin_unlock(rb_lock);
+	} else {
+		if (rPx == 0)
+			rLenPtr = len1;
+		else if (rPx == 1)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		*rLenPtr = -1;
+		smp_wmb();
+		smp_store_release(px, rPx + 1);
+
+		spin_unlock(rb_lock);
+
+		smp_store_release(rLenPtr, 1);
+	}
+}
+
+P2(int *len1, spinlock_t *rb_lock, int *px, int *cx, int *dropped, int *max_len)
+{
+	int rPx;
+	int rCx;
+	int rFail;
+	int *rLenPtr;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	spin_lock(rb_lock);
+
+	rPx = *px;
+	if (rPx - rCx >= *max_len) {
+		atomic_inc(dropped);
+		spin_unlock(rb_lock);
+	} else {
+		if (rPx == 0)
+			rLenPtr = len1;
+		else if (rPx == 1)
+			rLenPtr = len1;
+		else
+			rFail = 1;
+
+		*rLenPtr = -1;
+		smp_wmb();
+		smp_store_release(px, rPx + 1);
+
+		spin_unlock(rb_lock);
+
+		smp_store_release(rLenPtr, 1);
+	}
+}
+
+exists (
+	0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ len1=1
+	/\
+	(
+		(dropped = 0 /\ px=2 /\ (cx=1 \/ cx=2))
+		\/
+		(dropped = 1 /\ px=1 /\ (cx=0 \/ cx=1))
+	)
+)
diff --git a/tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus
new file mode 100644
index 000000000000..83372e9eb079
--- /dev/null
+++ b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus
@@ -0,0 +1,137 @@
+C mpsc-rb+2p1c+unbound
+
+(*
+ * Result: Always
+ *
+ * This litmus test validates BPF ring buffer implementation under the
+ * following assumptions:
+ * - 2 identical producers;
+ * - 1 consumer;
+ * - ring buffer capacity is unbounded.
+ *
+ * Expectations:
+ * - 2 records pushed into ring buffer;
+ * - 0, 1, or 2 elements are consumed.
+ * - no failures.
+ *)
+
+{
+	len1 = 0;
+	len2 = 0;
+	px = 0;
+	cx = 0;
+}
+
+P0(int *len1, int *len2, int *cx, int *px)
+{
+	int *rLenPtr;
+	int rLen;
+	int rPx;
+	int rCx;
+	int rFail;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	rPx = smp_load_acquire(px);
+	if (rCx < rPx) {
+		if (rCx == 0)
+			rLenPtr = len1;
+		else if (rCx == 1)
+			rLenPtr = len2;
+		else
+			rFail = 1;
+
+		rLen = smp_load_acquire(rLenPtr);
+		if (rLen == 0) {
+			rFail = 1;
+		} else if (rLen == 1) {
+			rCx = rCx + 1;
+			smp_store_release(cx, rCx);
+		}
+	}
+
+	rPx = smp_load_acquire(px);
+	if (rCx < rPx) {
+		if (rCx == 0)
+			rLenPtr = len1;
+		else if (rCx == 1)
+			rLenPtr = len2;
+		else
+			rFail = 1;
+
+		rLen = smp_load_acquire(rLenPtr);
+		if (rLen == 0) {
+			rFail = 1;
+		} else if (rLen == 1) {
+			rCx = rCx + 1;
+			smp_store_release(cx, rCx);
+		}
+	}
+}
+
+P1(int *len1, int *len2, spinlock_t *rb_lock, int *px, int *cx)
+{
+	int rPx;
+	int rCx;
+	int rFail;
+	int *rLenPtr;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	spin_lock(rb_lock);
+
+	rPx = *px;
+	if (rPx == 0)
+		rLenPtr = len1;
+	else if (rPx == 1)
+		rLenPtr = len2;
+	else
+		rFail = 1;
+
+	*rLenPtr = -1;
+	smp_wmb();
+	smp_store_release(px, rPx + 1);
+
+	spin_unlock(rb_lock);
+
+	smp_store_release(rLenPtr, 1);
+}
+
+P2(int *len1, int *len2, spinlock_t *rb_lock, int *px, int *cx)
+{
+	int rPx;
+	int rCx;
+	int rFail;
+	int *rLenPtr;
+
+	rFail = 0;
+	rCx = smp_load_acquire(cx);
+
+	spin_lock(rb_lock);
+
+	rPx = *px;
+	if (rPx == 0)
+		rLenPtr = len1;
+	else if (rPx == 1)
+		rLenPtr = len2;
+	else
+		rFail = 1;
+
+	*rLenPtr = -1;
+	smp_wmb();
+	smp_store_release(px, rPx + 1);
+
+	spin_unlock(rb_lock);
+
+	smp_store_release(rLenPtr, 1);
+}
+
+exists (
+	0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0
+	/\
+	px=2 /\ len1=1 /\ len2=1
+	/\
+	(cx=0 \/ cx=1 \/ cx=2)
+)
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 bpf-next 3/7] bpf: track reference type in verifier
  2020-05-17 19:57 [PATCH v2 bpf-next 0/7] BPF ring buffer Andrii Nakryiko
  2020-05-17 19:57 ` [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it Andrii Nakryiko
  2020-05-17 19:57 ` [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests Andrii Nakryiko
@ 2020-05-17 19:57 ` Andrii Nakryiko
  2020-05-22  1:13   ` Alexei Starovoitov
  2020-05-17 19:57 ` [PATCH v2 bpf-next 4/7] libbpf: add BPF ring buffer support Andrii Nakryiko
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

This patch extends the verifier's reference tracking logic to track
a reference's type and to ensure that acquire()/release() functions accept
only the right types of references. Currently, ambiguity between socket and
ringbuf record references is resolved through the use of different input
argument types for bpf_sk_release() and bpf_ringbuf_commit():
ARG_PTR_TO_SOCK_COMMON and ARG_PTR_TO_ALLOC_MEM, respectively. It is thus
impossible to pass a ringbuf record pointer to bpf_sk_release() (and vice
versa for a socket).

On the other hand, patch #1 added the ARG_PTR_TO_ALLOC_MEM arg type, which,
from the verifier's point of view, is a pointer to a fixed-size allocated
memory region. This is a generic enough concept that it could be used for
other BPF helpers (e.g., a malloc/free pair, if added). Once we have that,
nothing would prevent passing a ringbuf record to a hypothetical
bpf_mem_free() (or similar) helper. To that end, this patch adds the
capability to specify and track reference types, which the verifier validates
to ensure a correct match between acquire() and release() helpers.

This patch can be postponed until later, so it is posted separately from the
other verifier changes.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 include/linux/bpf_verifier.h |  8 +++
 kernel/bpf/verifier.c        | 96 +++++++++++++++++++++++++++++-------
 2 files changed, 86 insertions(+), 18 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index ca08db4ffb5f..26b2c4730f5a 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -164,6 +164,12 @@ struct bpf_stack_state {
 	u8 slot_type[BPF_REG_SIZE];
 };
 
+enum bpf_ref_type {
+	BPF_REF_INVALID,
+	BPF_REF_SOCKET,
+	BPF_REF_RINGBUF,
+};
+
 struct bpf_reference_state {
 	/* Track each reference created with a unique id, even if the same
 	 * instruction creates the reference multiple times (eg, via CALL).
@@ -173,6 +179,8 @@ struct bpf_reference_state {
 	 * is used purely to inform the user of a reference leak.
 	 */
 	int insn_idx;
+	/* Type of reference being tracked */
+	enum bpf_ref_type ref_type;
 };
 
 /* state of the program:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4ce3e031554d..2a6a48c1e10b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -436,6 +436,19 @@ static bool is_release_function(enum bpf_func_id func_id)
 	       func_id == BPF_FUNC_ringbuf_discard;
 }
 
+static enum bpf_ref_type get_release_ref_type(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_sk_release:
+		return BPF_REF_SOCKET;
+	case BPF_FUNC_ringbuf_submit:
+	case BPF_FUNC_ringbuf_discard:
+		return BPF_REF_RINGBUF;
+	default:
+		return BPF_REF_INVALID;
+	}
+}
+
 static bool may_be_acquire_function(enum bpf_func_id func_id)
 {
 	return func_id == BPF_FUNC_sk_lookup_tcp ||
@@ -464,6 +477,28 @@ static bool is_acquire_function(enum bpf_func_id func_id,
 	return false;
 }
 
+static enum bpf_ref_type get_acquire_ref_type(enum bpf_func_id func_id,
+					      const struct bpf_map *map)
+{
+	enum bpf_map_type map_type = map ? map->map_type : BPF_MAP_TYPE_UNSPEC;
+
+	switch (func_id) {
+	case BPF_FUNC_sk_lookup_tcp:
+	case BPF_FUNC_sk_lookup_udp:
+	case BPF_FUNC_skc_lookup_tcp:
+		return BPF_REF_SOCKET;
+	case BPF_FUNC_map_lookup_elem:
+		if (map_type == BPF_MAP_TYPE_SOCKMAP ||
+		    map_type == BPF_MAP_TYPE_SOCKHASH)
+			return BPF_REF_SOCKET;
+		return BPF_REF_INVALID;
+	case BPF_FUNC_ringbuf_reserve:
+		return BPF_REF_RINGBUF;
+	default:
+		return BPF_REF_INVALID;
+	}
+}
+
 static bool is_ptr_cast_function(enum bpf_func_id func_id)
 {
 	return func_id == BPF_FUNC_tcp_sock ||
@@ -736,7 +771,8 @@ static int realloc_func_state(struct bpf_func_state *state, int stack_size,
  * On success, returns a valid pointer id to associate with the register
  * On failure, returns a negative errno.
  */
-static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
+static int acquire_reference_state(struct bpf_verifier_env *env,
+				   int insn_idx, enum bpf_ref_type ref_type)
 {
 	struct bpf_func_state *state = cur_func(env);
 	int new_ofs = state->acquired_refs;
@@ -748,25 +784,32 @@ static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 	id = ++env->id_gen;
 	state->refs[new_ofs].id = id;
 	state->refs[new_ofs].insn_idx = insn_idx;
+	state->refs[new_ofs].ref_type = ref_type;
 
 	return id;
 }
 
 /* release function corresponding to acquire_reference_state(). Idempotent. */
-static int release_reference_state(struct bpf_func_state *state, int ptr_id)
+static int release_reference_state(struct bpf_func_state *state, int ptr_id,
+				   enum bpf_ref_type ref_type)
 {
+	struct bpf_reference_state *ref;
 	int i, last_idx;
 
 	last_idx = state->acquired_refs - 1;
 	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].id == ptr_id) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
-			return 0;
-		}
+		ref = &state->refs[i];
+		if (ref->id != ptr_id)
+			continue;
+
+		if (ref_type != BPF_REF_INVALID && ref->ref_type != ref_type)
+			return -EINVAL;
+
+		if (i != last_idx)
+			memcpy(ref, &state->refs[last_idx], sizeof(*ref));
+		memset(&state->refs[last_idx], 0, sizeof(*ref));
+		state->acquired_refs--;
+		return 0;
 	}
 	return -EINVAL;
 }
@@ -4296,14 +4339,13 @@ static void release_reg_references(struct bpf_verifier_env *env,
 /* The pointer with the specified id has released its reference to kernel
  * resources. Identify all copies of the same pointer and clear the reference.
  */
-static int release_reference(struct bpf_verifier_env *env,
-			     int ref_obj_id)
+static int release_reference(struct bpf_verifier_env *env, int ref_obj_id,
+			     enum bpf_ref_type ref_type)
 {
 	struct bpf_verifier_state *vstate = env->cur_state;
-	int err;
-	int i;
+	int err, i;
 
-	err = release_reference_state(cur_func(env), ref_obj_id);
+	err = release_reference_state(cur_func(env), ref_obj_id, ref_type);
 	if (err)
 		return err;
 
@@ -4664,7 +4706,16 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 			return err;
 		}
 	} else if (is_release_function(func_id)) {
-		err = release_reference(env, meta.ref_obj_id);
+		enum bpf_ref_type ref_type;
+
+		ref_type = get_release_ref_type(func_id);
+		if (ref_type == BPF_REF_INVALID) {
+			verbose(env, "unrecognized reference accepted by func %s#%d\n",
+				func_id_name(func_id), func_id);
+			return -EFAULT;
+		}
+
+		err = release_reference(env, meta.ref_obj_id, ref_type);
 		if (err) {
 			verbose(env, "func %s#%d reference has not been acquired before\n",
 				func_id_name(func_id), func_id);
@@ -4747,8 +4798,17 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		/* For release_reference() */
 		regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
 	} else if (is_acquire_function(func_id, meta.map_ptr)) {
-		int id = acquire_reference_state(env, insn_idx);
+		enum bpf_ref_type ref_type;
+		int id;
+
+		ref_type = get_acquire_ref_type(func_id, meta.map_ptr);
+		if (ref_type == BPF_REF_INVALID) {
+			verbose(env, "unrecognized reference returned by func %s#%d\n",
+				func_id_name(func_id), func_id);
+			return -EINVAL;
+		}
 
+		id = acquire_reference_state(env, insn_idx, ref_type);
 		if (id < 0)
 			return id;
 		/* For mark_ptr_or_null_reg() */
@@ -6731,7 +6791,7 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 		 * No one could have freed the reference state before
 		 * doing the NULL check.
 		 */
-		WARN_ON_ONCE(release_reference_state(state, id));
+		WARN_ON_ONCE(release_reference_state(state, id, BPF_REF_INVALID));
 
 	for (i = 0; i <= vstate->curframe; i++)
 		__mark_ptr_or_null_regs(vstate->frame[i], id, is_null);
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 bpf-next 4/7] libbpf: add BPF ring buffer support
  2020-05-17 19:57 [PATCH v2 bpf-next 0/7] BPF ring buffer Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2020-05-17 19:57 ` [PATCH v2 bpf-next 3/7] bpf: track reference type in verifier Andrii Nakryiko
@ 2020-05-17 19:57 ` Andrii Nakryiko
  2020-05-22  1:15   ` Alexei Starovoitov
  2020-05-17 19:57 ` [PATCH v2 bpf-next 5/7] selftests/bpf: add BPF ringbuf selftests Andrii Nakryiko
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

Declaring and instantiating a BPF ring buffer doesn't require any changes to
libbpf, as it's just another type of map. So the existing BTF-defined maps
syntax with __uint(type, BPF_MAP_TYPE_RINGBUF) and __uint(max_entries,
<size-of-ring-buf>) is all that's necessary to create and use a BPF ring
buffer.
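
For example, a BPF-side declaration (mirroring the selftests later in this
series) can be as simple as:

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          /* ring size in bytes: power of 2, multiple of page size */
          __uint(max_entries, 1 << 12);
  } ringbuf SEC(".maps");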

This patch adds a BPF ring buffer consumer to libbpf. It is very similar to
the perf_buffer implementation in terms of API, but also attempts to fix some
minor problems and inconveniences with the existing perf_buffer API.

ring_buffer supports both the single ring buffer use case (just using
ring_buffer__new()) and adding more ring buffers, each with its own callback
and context. This makes it possible to efficiently poll and consume multiple,
potentially completely independent, ring buffers, using a single epoll
instance.

The latter is actually a problem in practice for applications that use
multiple sets of perf buffers: they have to create multiple struct
perf_buffer instances and poll them independently or in a loop, and each
approach has its own problems (e.g., the inability to use a common poll
timeout). struct ring_buffer eliminates this problem by aggregating many
independent ring buffer instances under a single "ring buffer manager".

Second, perf_buffer's callback can't return an error, so applications that
need to stop polling due to an error in the data, or due to data signalling
the end, have to use extra mechanisms to communicate that polling has to
stop. ring_buffer's callback can return an error, which is passed back to
user code and can be acted upon appropriately.

Two APIs allow consuming ring buffer data:
  - ring_buffer__poll(), which will wait for a data availability notification
    and will consume data only from the reported ring buffer(s); this API
    uses resources efficiently by reading data only when it becomes
    available;
  - ring_buffer__consume(), which will attempt to read new records regardless
    of the data availability notification sub-system. This API is useful when
    the lowest latency is required, at the expense of burning CPU resources.
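
Putting it together, a minimal consumer sketch (error handling elided;
handle_event, map_fd, and exiting are illustrative names, not part of this
patch):

  static int handle_event(void *ctx, void *data, size_t size)
  {
          /* process one record; a non-zero return stops polling and is
           * propagated back through ring_buffer__poll()
           */
          return 0;
  }

  struct ring_buffer *rb;

  rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
  while (!exiting)
          ring_buffer__poll(rb, 100 /* timeout, ms */);
  ring_buffer__free(rb);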

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/Build           |   2 +-
 tools/lib/bpf/libbpf.h        |  21 +++
 tools/lib/bpf/libbpf.map      |   5 +
 tools/lib/bpf/libbpf_probes.c |   5 +
 tools/lib/bpf/ringbuf.c       | 285 ++++++++++++++++++++++++++++++++++
 5 files changed, 317 insertions(+), 1 deletion(-)
 create mode 100644 tools/lib/bpf/ringbuf.c

diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build
index e3962cfbc9a6..190366d05588 100644
--- a/tools/lib/bpf/Build
+++ b/tools/lib/bpf/Build
@@ -1,3 +1,3 @@
 libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
 	    netlink.o bpf_prog_linfo.o libbpf_probes.o xsk.o hashmap.o \
-	    btf_dump.o
+	    btf_dump.o ringbuf.o
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 8ea69558f0a8..e0ee0906202b 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -478,6 +478,27 @@ LIBBPF_API int bpf_get_link_xdp_id(int ifindex, __u32 *prog_id, __u32 flags);
 LIBBPF_API int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
 				     size_t info_size, __u32 flags);
 
+/* Ring buffer APIs */
+struct ring_buffer;
+
+typedef int (*ring_buffer_sample_fn)(void *ctx, void *data, size_t size);
+
+struct ring_buffer_opts {
+	size_t sz; /* size of this struct, for forward/backward compatibility */
+};
+
+#define ring_buffer_opts__last_field sz
+
+LIBBPF_API struct ring_buffer *
+ring_buffer__new(int map_fd, ring_buffer_sample_fn sample_cb, void *ctx,
+		 const struct ring_buffer_opts *opts);
+LIBBPF_API void ring_buffer__free(struct ring_buffer *rb);
+LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
+				ring_buffer_sample_fn sample_cb, void *ctx);
+LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
+LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
+
+/* Perf buffer APIs */
 struct perf_buffer;
 
 typedef void (*perf_buffer_sample_fn)(void *ctx, int cpu,
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 0133d469d30b..51e58a6215de 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -262,4 +262,9 @@ LIBBPF_0.0.9 {
 		bpf_link_get_fd_by_id;
 		bpf_link_get_next_id;
 		bpf_program__attach_iter;
+		ring_buffer__add;
+		ring_buffer__consume;
+		ring_buffer__free;
+		ring_buffer__new;
+		ring_buffer__poll;
 } LIBBPF_0.0.8;
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index 2c92059c0c90..10cd8d1891f5 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -238,6 +238,11 @@ bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex)
 		if (btf_fd < 0)
 			return false;
 		break;
+	case BPF_MAP_TYPE_RINGBUF:
+		key_size = 0;
+		value_size = 0;
+		max_entries = 4096;
+		break;
 	case BPF_MAP_TYPE_UNSPEC:
 	case BPF_MAP_TYPE_HASH:
 	case BPF_MAP_TYPE_ARRAY:
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
new file mode 100644
index 000000000000..dcbb0b92f5ee
--- /dev/null
+++ b/tools/lib/bpf/ringbuf.c
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+/*
+ * Ring buffer operations.
+ *
+ * Copyright (C) 2020 Facebook, Inc.
+ */
+#include <stdlib.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include <linux/err.h>
+#include <linux/bpf.h>
+#include <asm/barrier.h>
+#include <sys/mman.h>
+#include <sys/epoll.h>
+#include <tools/libc_compat.h>
+
+#include "libbpf.h"
+#include "libbpf_internal.h"
+#include "bpf.h"
+
+/* make sure libbpf doesn't use kernel-only integer typedefs */
+#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
+
+struct ring {
+	ring_buffer_sample_fn sample_cb;
+	void *ctx;
+	void *data;
+	unsigned long *consumer_pos;
+	unsigned long *producer_pos;
+	unsigned long mask;
+	int map_fd;
+};
+
+struct ring_buffer {
+	struct epoll_event *events;
+	struct ring *rings;
+	size_t page_size;
+	int epoll_fd;
+	int ring_cnt;
+};
+
+static void ringbuf_unmap_ring(struct ring_buffer *rb, struct ring *r)
+{
+	if (r->consumer_pos) {
+		munmap(r->consumer_pos, rb->page_size);
+		r->consumer_pos = NULL;
+	}
+	if (r->producer_pos) {
+		munmap(r->producer_pos, rb->page_size + 2 * (r->mask + 1));
+		r->producer_pos = NULL;
+	}
+}
+
+/* Add extra RINGBUF maps to this ring buffer manager */
+int ring_buffer__add(struct ring_buffer *rb, int map_fd,
+		     ring_buffer_sample_fn sample_cb, void *ctx)
+{
+	struct bpf_map_info info;
+	__u32 len = sizeof(info);
+	struct epoll_event *e;
+	struct ring *r;
+	void *tmp;
+	int err;
+
+	memset(&info, 0, sizeof(info));
+
+	err = bpf_obj_get_info_by_fd(map_fd, &info, &len);
+	if (err) {
+		err = -errno;
+		pr_warn("ringbuf: failed to get map info for fd=%d: %d\n",
+			map_fd, err);
+		return err;
+	}
+
+	if (info.type != BPF_MAP_TYPE_RINGBUF) {
+		pr_warn("ringbuf: map fd=%d is not BPF_MAP_TYPE_RINGBUF\n",
+			map_fd);
+		return -EINVAL;
+	}
+
+	tmp = reallocarray(rb->rings, rb->ring_cnt + 1, sizeof(*rb->rings));
+	if (!tmp)
+		return -ENOMEM;
+	rb->rings = tmp;
+
+	tmp = reallocarray(rb->events, rb->ring_cnt + 1, sizeof(*rb->events));
+	if (!tmp)
+		return -ENOMEM;
+	rb->events = tmp;
+
+	r = &rb->rings[rb->ring_cnt];
+	memset(r, 0, sizeof(*r));
+
+	r->map_fd = map_fd;
+	r->sample_cb = sample_cb;
+	r->ctx = ctx;
+	r->mask = info.max_entries - 1;
+
+	/* Map writable consumer page */
+	tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+		   map_fd, 0);
+	if (tmp == MAP_FAILED) {
+		err = -errno;
+		pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
+			map_fd, err);
+		return err;
+	}
+	r->consumer_pos = tmp;
+
+	/* Map read-only producer page and data pages. We map the data area
+	 * at twice the ring size to allow simple handling of samples that
+	 * wrap around the end of the ring buffer. See the kernel
+	 * implementation for details.
+	 */
+	tmp = mmap(NULL, rb->page_size + 2 * info.max_entries, PROT_READ,
+		   MAP_SHARED, map_fd, rb->page_size);
+	if (tmp == MAP_FAILED) {
+		err = -errno;
+		ringbuf_unmap_ring(rb, r);
+		pr_warn("ringbuf: failed to mmap data pages for map fd=%d: %d\n",
+			map_fd, err);
+		return err;
+	}
+	r->producer_pos = tmp;
+	r->data = tmp + rb->page_size;
+
+	e = &rb->events[rb->ring_cnt];
+	memset(e, 0, sizeof(*e));
+
+	e->events = EPOLLIN;
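+	/* stash the ring's index (not an fd) to locate it on epoll wakeup */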
+	e->data.fd = rb->ring_cnt;
+	if (epoll_ctl(rb->epoll_fd, EPOLL_CTL_ADD, map_fd, e) < 0) {
+		err = -errno;
+		ringbuf_unmap_ring(rb, r);
+		pr_warn("ringbuf: failed to epoll add map fd=%d: %d\n",
+			map_fd, err);
+		return err;
+	}
+
+	rb->ring_cnt++;
+	return 0;
+}
+
+void ring_buffer__free(struct ring_buffer *rb)
+{
+	int i;
+
+	if (!rb)
+		return;
+
+	for (i = 0; i < rb->ring_cnt; ++i)
+		ringbuf_unmap_ring(rb, &rb->rings[i]);
+	if (rb->epoll_fd >= 0)
+		close(rb->epoll_fd);
+
+	free(rb->events);
+	free(rb->rings);
+	free(rb);
+}
+
+struct ring_buffer *
+ring_buffer__new(int map_fd, ring_buffer_sample_fn sample_cb, void *ctx,
+		 const struct ring_buffer_opts *opts)
+{
+	struct ring_buffer *rb;
+	int err;
+
+	if (!OPTS_VALID(opts, ring_buffer_opts))
+		return NULL;
+
+	rb = calloc(1, sizeof(*rb));
+	if (!rb)
+		return NULL;
+
+	rb->page_size = getpagesize();
+
+	rb->epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+	if (rb->epoll_fd < 0) {
+		err = -errno;
+		pr_warn("ringbuf: failed to create epoll instance: %d\n", err);
+		goto err_out;
+	}
+
+	err = ring_buffer__add(rb, map_fd, sample_cb, ctx);
+	if (err)
+		goto err_out;
+
+	return rb;
+
+err_out:
+	ring_buffer__free(rb);
+	return NULL;
+}
+
+static inline int roundup_len(__u32 len)
+{
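+	/* e.g., a committed len of 100: flag bits cleared -> 100, plus
+	 * 8-byte header -> 108, rounded up to a multiple of 8 -> 112
+	 */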
+	/* clear out top 2 bits (discard and busy, if set) */
+	len <<= 2;
+	len >>= 2;
+	/* add length prefix */
+	len += BPF_RINGBUF_HDR_SZ;
+	/* round up to 8 byte alignment */
+	return (len + 7) / 8 * 8;
+}
+
+static int ringbuf_process_ring(struct ring *r)
+{
+	int *len_ptr, len, err, cnt = 0;
+	unsigned long cons_pos, prod_pos;
+	bool got_new_data;
+	void *sample;
+
+	cons_pos = smp_load_acquire(r->consumer_pos);
+	do {
+		got_new_data = false;
+		prod_pos = smp_load_acquire(r->producer_pos);
+		while (cons_pos < prod_pos) {
+			len_ptr = r->data + (cons_pos & r->mask);
+			len = smp_load_acquire(len_ptr);
+
+			/* sample not committed yet, bail out for now */
+			if (len & BPF_RINGBUF_BUSY_BIT)
+				goto done;
+
+			got_new_data = true;
+			cons_pos += roundup_len(len);
+
+			if ((len & BPF_RINGBUF_DISCARD_BIT) == 0) {
+				sample = (void *)len_ptr + BPF_RINGBUF_HDR_SZ;
+				err = r->sample_cb(r->ctx, sample, len);
+				if (err) {
+					/* update consumer pos and bail out */
+					smp_store_release(r->consumer_pos,
+							  cons_pos);
+					return err;
+				}
+				cnt++;
+			}
+
+			smp_store_release(r->consumer_pos, cons_pos);
+		}
+	} while (got_new_data);
+done:
+	return cnt;
+}
+
+/* Consume available ring buffer(s) data without event polling.
+ * Returns number of records consumed across all registered ring buffers, or
+ * negative number if any of the callbacks return error.
+ */
+int ring_buffer__consume(struct ring_buffer *rb)
+{
+	int i, err, res = 0;
+
+	for (i = 0; i < rb->ring_cnt; i++) {
+		struct ring *ring = &rb->rings[i];
+
+		err = ringbuf_process_ring(ring);
+		if (err < 0)
+			return err;
+		res += err;
+	}
+	return res;
+}
+
+/* Poll for available data and consume records, if any are available.
+ * Returns number of records consumed, or negative number, if any of the
+ * registered callbacks returned error.
+ */
+int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
+{
+	int i, cnt, err, res = 0;
+
+	cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms);
+	for (i = 0; i < cnt; i++) {
+		__u32 ring_id = rb->events[i].data.fd;
+		struct ring *ring = &rb->rings[ring_id];
+
+		err = ringbuf_process_ring(ring);
+		if (err < 0)
+			return err;
+		res += err;
+	}
+	return cnt < 0 ? -errno : res;
+}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 bpf-next 5/7] selftests/bpf: add BPF ringbuf selftests
  2020-05-17 19:57 [PATCH v2 bpf-next 0/7] BPF ring buffer Andrii Nakryiko
                   ` (3 preceding siblings ...)
  2020-05-17 19:57 ` [PATCH v2 bpf-next 4/7] libbpf: add BPF ring buffer support Andrii Nakryiko
@ 2020-05-17 19:57 ` Andrii Nakryiko
  2020-05-22  1:20   ` Alexei Starovoitov
  2020-05-17 19:57 ` [PATCH v2 bpf-next 6/7] bpf: add BPF ringbuf and perf buffer benchmarks Andrii Nakryiko
  2020-05-17 19:57 ` [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes Andrii Nakryiko
  6 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

Both the singleton BPF ringbuf and the BPF ringbuf with map-in-map use cases
are tested. The reserve+submit, reserve+discard, and output variants of the
API are also validated.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/ringbuf.c        | 211 ++++++++++++++++++
 .../selftests/bpf/prog_tests/ringbuf_multi.c  | 102 +++++++++
 .../selftests/bpf/progs/test_ringbuf.c        |  78 +++++++
 .../selftests/bpf/progs/test_ringbuf_multi.c  |  77 +++++++
 4 files changed, 468 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/ringbuf.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf_multi.c

diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
new file mode 100644
index 000000000000..bb8541f240e2
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <linux/compiler.h>
+#include <asm/barrier.h>
+#include <test_progs.h>
+#include <sys/mman.h>
+#include <sys/epoll.h>
+#include <time.h>
+#include <sched.h>
+#include <signal.h>
+#include <pthread.h>
+#include <sys/sysinfo.h>
+#include <linux/perf_event.h>
+#include <linux/ring_buffer.h>
+#include "test_ringbuf.skel.h"
+
+#define EDONE 7777
+
+static int duration = 0;
+
+struct sample {
+	int pid;
+	int seq;
+	long value;
+	char comm[16];
+};
+
+static int sample_cnt;
+
+static int process_sample(void *ctx, void *data, size_t len)
+{
+	struct sample *s = data;
+
+	sample_cnt++;
+
+	switch (s->seq) {
+	case 0:
+		CHECK(s->value != 333, "sample1_value", "exp %ld, got %ld\n",
+		      333L, s->value);
+		return 0;
+	case 1:
+		CHECK(s->value != 777, "sample2_value", "exp %ld, got %ld\n",
+		      777L, s->value);
+		return -EDONE;
+	default:
+		/* we don't care about the rest */
+		return 0;
+	}
+}
+
+static struct test_ringbuf *skel;
+static struct ring_buffer *ringbuf;
+
+static void trigger_samples()
+{
+	skel->bss->dropped = 0;
+	skel->bss->total = 0;
+	skel->bss->discarded = 0;
+
+	/* trigger exactly two samples */
+	skel->bss->value = 333;
+	syscall(__NR_getpgid);
+	skel->bss->value = 777;
+	syscall(__NR_getpgid);
+}
+
+static void *poll_thread(void *input)
+{
+	long timeout = (long)input;
+
+	return (void *)(long)ring_buffer__poll(ringbuf, timeout);
+}
+
+void test_ringbuf(void)
+{
+	const size_t rec_sz = BPF_RINGBUF_HDR_SZ + sizeof(struct sample);
+	pthread_t thread;
+	long bg_ret = -1;
+	int err;
+
+	skel = test_ringbuf__open_and_load();
+	if (CHECK(!skel, "skel_open_load", "skeleton open&load failed\n"))
+		return;
+
+	/* only trigger BPF program for current process */
+	skel->bss->pid = getpid();
+
+	ringbuf = ring_buffer__new(bpf_map__fd(skel->maps.ringbuf),
+				   process_sample, NULL, NULL);
+	if (CHECK(!ringbuf, "ringbuf_create", "failed to create ringbuf\n"))
+		goto cleanup;
+
+	err = test_ringbuf__attach(skel);
+	if (CHECK(err, "skel_attach", "skeleton attachment failed: %d\n", err))
+		goto cleanup;
+
+	trigger_samples();
+
+	/* 2 submitted + 1 discarded records */
+	CHECK(skel->bss->avail_data != 3 * rec_sz,
+	      "err_avail_size", "exp %ld, got %ld\n",
+	      3L * rec_sz, skel->bss->avail_data);
+	CHECK(skel->bss->ring_size != 4096,
+	      "err_ring_size", "exp %ld, got %ld\n",
+	      4096L, skel->bss->ring_size);
+	CHECK(skel->bss->cons_pos != 0,
+	      "err_cons_pos", "exp %ld, got %ld\n",
+	      0L, skel->bss->cons_pos);
+	CHECK(skel->bss->prod_pos != 3 * rec_sz,
+	      "err_prod_pos", "exp %ld, got %ld\n",
+	      3L * rec_sz, skel->bss->prod_pos);
+
+	/* poll for samples */
+	err = ring_buffer__poll(ringbuf, -1);
+
+	/* -EDONE is used as an indicator that we are done */
+	if (CHECK(err != -EDONE, "err_done", "done err: %d\n", err))
+		goto cleanup;
+
+	/* we expect extra polling to return nothing */
+	err = ring_buffer__poll(ringbuf, 0);
+	if (CHECK(err != 0, "extra_samples", "poll result: %d\n", err))
+		goto cleanup;
+
+	CHECK(skel->bss->dropped != 0, "err_dropped", "exp %ld, got %ld\n",
+	      0L, skel->bss->dropped);
+	CHECK(skel->bss->total != 2, "err_total", "exp %ld, got %ld\n",
+	      2L, skel->bss->total);
+	CHECK(skel->bss->discarded != 1, "err_discarded", "exp %ld, got %ld\n",
+	      1L, skel->bss->discarded);
+
+	/* now validate consumer position is updated and returned */
+	trigger_samples();
+	CHECK(skel->bss->cons_pos != 3 * rec_sz,
+	      "err_cons_pos", "exp %ld, got %ld\n",
+	      3L * rec_sz, skel->bss->cons_pos);
+	err = ring_buffer__poll(ringbuf, -1);
+	CHECK(err <= 0, "poll_err", "err %d\n", err);
+
+	/* start poll in background w/ long timeout */
+	err = pthread_create(&thread, NULL, poll_thread, (void *)(long)10000);
+	if (CHECK(err, "bg_poll", "pthread_create failed: %d\n", err))
+		goto cleanup;
+
+	/* turn off notifications now */
+	skel->bss->flags = BPF_RB_NO_WAKEUP;
+
+	/* give background thread a bit of a time */
+	usleep(50000);
+	trigger_samples();
+	/* sleeping arbitrarily is bad, but no better way to know that
+	 * epoll_wait() **DID NOT** unblock in background thread
+	 */
+	usleep(50000);
+	/* background poll should still be blocked */
+	err = pthread_tryjoin_np(thread, (void **)&bg_ret);
+	if (CHECK(err != EBUSY, "try_join", "err %d\n", err))
+		goto cleanup;
+
+	/* BPF side did everything right */
+	CHECK(skel->bss->dropped != 0, "err_dropped", "exp %ld, got %ld\n",
+	      0L, skel->bss->dropped);
+	CHECK(skel->bss->total != 2, "err_total", "exp %ld, got %ld\n",
+	      2L, skel->bss->total);
+	CHECK(skel->bss->discarded != 1, "err_discarded", "exp %ld, got %ld\n",
+	      1L, skel->bss->discarded);
+
+	/* clear flags to return to "adaptive" notification mode */
+	skel->bss->flags = 0;
+
+	/* produce new samples, no notification should be triggered, because
+	 * consumer is now behind
+	 */
+	trigger_samples();
+
+	/* background poll should still be blocked */
+	err = pthread_tryjoin_np(thread, (void **)&bg_ret);
+	if (CHECK(err != EBUSY, "try_join", "err %d\n", err))
+		goto cleanup;
+
+	/* now force notifications */
+	skel->bss->flags = BPF_RB_FORCE_WAKEUP;
+	sample_cnt = 0;
+	trigger_samples();
+
+	/* now we should get a pending notification */
+	usleep(50000);
+	err = pthread_tryjoin_np(thread, (void **)&bg_ret);
+	if (CHECK(err, "join_bg", "err %d\n", err))
+		goto cleanup;
+
+	if (CHECK(bg_ret <= 0, "bg_ret", "epoll_wait result: %ld\n", bg_ret))
+		goto cleanup;
+
+	/* 3 rounds, 2 samples each */
+	CHECK(sample_cnt != 6, "wrong_sample_cnt",
+	      "expected to see %d samples, got %d\n", 6, sample_cnt);
+
+	/* BPF side did everything right */
+	CHECK(skel->bss->dropped != 0, "err_dropped", "exp %ld, got %ld\n",
+	      0L, skel->bss->dropped);
+	CHECK(skel->bss->total != 2, "err_total", "exp %ld, got %ld\n",
+	      2L, skel->bss->total);
+	CHECK(skel->bss->discarded != 1, "err_discarded", "exp %ld, got %ld\n",
+	      1L, skel->bss->discarded);
+
+	test_ringbuf__detach(skel);
+cleanup:
+	ring_buffer__free(ringbuf);
+	test_ringbuf__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
new file mode 100644
index 000000000000..78e450609803
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <test_progs.h>
+#include <sys/epoll.h>
+#include "test_ringbuf_multi.skel.h"
+
+static int duration = 0;
+
+struct sample {
+	int pid;
+	int seq;
+	long value;
+	char comm[16];
+};
+
+static int process_sample(void *ctx, void *data, size_t len)
+{
+	int ring = (unsigned long)ctx;
+	struct sample *s = data;
+
+	switch (s->seq) {
+	case 0:
+		CHECK(ring != 1, "sample1_ring", "exp %d, got %d\n", 1, ring);
+		CHECK(s->value != 333, "sample1_value", "exp %ld, got %ld\n",
+		      333L, s->value);
+		break;
+	case 1:
+		CHECK(ring != 2, "sample2_ring", "exp %d, got %d\n", 2, ring);
+		CHECK(s->value != 777, "sample2_value", "exp %ld, got %ld\n",
+		      777L, s->value);
+		break;
+	default:
+		CHECK(true, "extra_sample", "unexpected sample seq %d, val %ld\n",
+		      s->seq, s->value);
+		return -1;
+	}
+
+	return 0;
+}
+
+void test_ringbuf_multi(void)
+{
+	struct test_ringbuf_multi *skel;
+	struct ring_buffer *ringbuf;
+	int err;
+
+	skel = test_ringbuf_multi__open_and_load();
+	if (CHECK(!skel, "skel_open_load", "skeleton open&load failed\n"))
+		return;
+
+	/* only trigger BPF program for current process */
+	skel->bss->pid = getpid();
+
+	ringbuf = ring_buffer__new(bpf_map__fd(skel->maps.ringbuf1),
+				   process_sample, (void *)(long)1, NULL);
+	if (CHECK(!ringbuf, "ringbuf_create", "failed to create ringbuf\n"))
+		goto cleanup;
+
+	err = ring_buffer__add(ringbuf, bpf_map__fd(skel->maps.ringbuf2),
+			      process_sample, (void *)(long)2);
+	if (CHECK(err, "ringbuf_add", "failed to add another ring\n"))
+		goto cleanup;
+
+	err = test_ringbuf_multi__attach(skel);
+	if (CHECK(err, "skel_attach", "skeleton attachment failed: %d\n", err))
+		goto cleanup;
+
+	/* trigger few samples, some will be skipped */
+	skel->bss->target_ring = 0;
+	skel->bss->value = 333;
+	syscall(__NR_getpgid);
+
+	/* skipped, no ringbuf in slot 1 */
+	skel->bss->target_ring = 1;
+	skel->bss->value = 555;
+	syscall(__NR_getpgid);
+
+	skel->bss->target_ring = 2;
+	skel->bss->value = 777;
+	syscall(__NR_getpgid);
+
+	/* poll for samples, should get 2 ringbufs back */
+	err = ring_buffer__poll(ringbuf, -1);
+	if (CHECK(err != 2, "poll_res", "expected 2 records, got %d\n", err))
+		goto cleanup;
+
+	/* expect extra polling to return nothing */
+	err = ring_buffer__poll(ringbuf, 0);
+	if (CHECK(err < 0, "extra_samples", "poll result: %d\n", err))
+		goto cleanup;
+
+	CHECK(skel->bss->dropped != 0, "err_dropped", "exp %ld, got %ld\n",
+	      0L, skel->bss->dropped);
+	CHECK(skel->bss->skipped != 1, "err_skipped", "exp %ld, got %ld\n",
+	      1L, skel->bss->skipped);
+	CHECK(skel->bss->total != 2, "err_total", "exp %ld, got %ld\n",
+	      2L, skel->bss->total);
+
+cleanup:
+	ring_buffer__free(ringbuf);
+	test_ringbuf_multi__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf.c b/tools/testing/selftests/bpf/progs/test_ringbuf.c
new file mode 100644
index 000000000000..79114eafd283
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ringbuf.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct sample {
+	int pid;
+	int seq;
+	long value;
+	char comm[16];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 1 << 12);
+} ringbuf SEC(".maps");
+
+/* inputs */
+int pid = 0;
+long value = 0;
+long flags = 0;
+
+/* outputs */
+long total = 0;
+long discarded = 0;
+long dropped = 0;
+
+long avail_data = 0;
+long ring_size = 0;
+long cons_pos = 0;
+long prod_pos = 0;
+
+/* inner state */
+long seq = 0;
+
+SEC("tp/syscalls/sys_enter_getpgid")
+int test_ringbuf(void *ctx)
+{
+	int cur_pid = bpf_get_current_pid_tgid() >> 32;
+	struct sample *sample;
+	int zero = 0;
+
+	if (cur_pid != pid)
+		return 0;
+
+	sample = bpf_ringbuf_reserve(&ringbuf, sizeof(*sample), 0);
+	if (!sample) {
+		__sync_fetch_and_add(&dropped, 1);
+		return 1;
+	}
+
+	sample->pid = pid;
+	bpf_get_current_comm(sample->comm, sizeof(sample->comm));
+	sample->value = value;
+
+	sample->seq = seq++;
+	__sync_fetch_and_add(&total, 1);
+
+	if (sample->seq & 1) {
+		/* copy from reserved sample to a new one... */
+		bpf_ringbuf_output(&ringbuf, sample, sizeof(*sample), flags);
+		/* ...and then discard reserved sample */
+		bpf_ringbuf_discard(sample, flags);
+		__sync_fetch_and_add(&discarded, 1);
+	} else {
+		bpf_ringbuf_submit(sample, flags);
+	}
+
+	avail_data = bpf_ringbuf_query(&ringbuf, BPF_RB_AVAIL_DATA);
+	ring_size = bpf_ringbuf_query(&ringbuf, BPF_RB_RING_SIZE);
+	cons_pos = bpf_ringbuf_query(&ringbuf, BPF_RB_CONS_POS);
+	prod_pos = bpf_ringbuf_query(&ringbuf, BPF_RB_PROD_POS);
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
new file mode 100644
index 000000000000..7eb85dd9cd66
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct sample {
+	int pid;
+	int seq;
+	long value;
+	char comm[16];
+};
+
+struct ringbuf_map {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 1 << 12);
+} ringbuf1 SEC(".maps"),
+  ringbuf2 SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+	__uint(max_entries, 4);
+	__type(key, int);
+	__array(values, struct ringbuf_map);
+} ringbuf_arr SEC(".maps") = {
+	.values = {
+		[0] = &ringbuf1,
+		[2] = &ringbuf2,
+	},
+};
+
+/* inputs */
+int pid = 0;
+int target_ring = 0;
+long value = 0;
+
+/* outputs */
+long total = 0;
+long dropped = 0;
+long skipped = 0;
+
+SEC("tp/syscalls/sys_enter_getpgid")
+int test_ringbuf(void *ctx)
+{
+	int cur_pid = bpf_get_current_pid_tgid() >> 32;
+	struct sample *sample;
+	void *rb;
+	int zero = 0;
+
+	if (cur_pid != pid)
+		return 0;
+
+	rb = bpf_map_lookup_elem(&ringbuf_arr, &target_ring);
+	if (!rb) {
+		skipped += 1;
+		return 1;
+	}
+
+	sample = bpf_ringbuf_reserve(rb, sizeof(*sample), 0);
+	if (!sample) {
+		dropped += 1;
+		return 1;
+	}
+
+	sample->pid = pid;
+	bpf_get_current_comm(sample->comm, sizeof(sample->comm));
+	sample->value = value;
+
+	sample->seq = total;
+	total += 1;
+
+	bpf_ringbuf_submit(sample, 0);
+
+	return 0;
+}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 bpf-next 6/7] bpf: add BPF ringbuf and perf buffer benchmarks
  2020-05-17 19:57 [PATCH v2 bpf-next 0/7] BPF ring buffer Andrii Nakryiko
                   ` (4 preceding siblings ...)
  2020-05-17 19:57 ` [PATCH v2 bpf-next 5/7] selftests/bpf: add BPF ringbuf selftests Andrii Nakryiko
@ 2020-05-17 19:57 ` Andrii Nakryiko
  2020-05-22  1:21   ` Alexei Starovoitov
  2020-05-17 19:57 ` [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes Andrii Nakryiko
  6 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon

Extend the bench framework with the ability to have a benchmark-provided
child argument parser for custom benchmark-specific parameters. This keeps
the generic bench code modular and independent of any specific benchmark.

Also implement a set of benchmarks for the new BPF ring buffer and the
existing perf buffer. 4 benchmarks were implemented: 2 variations for each of
BPF ringbuf and perfbuf:
  - rb-libbpf utilizes the stock libbpf ring_buffer manager for reading data;
  - rb-custom implements custom ring buffer setup and reading code, to
    eliminate overheads inherent in generic libbpf code due to callback
    functions and the need to update the consumer position after each
    consumed record instead of batching updates (a consequence of the
    pessimistic assumption that a user callback might take a long time and
    thus could unnecessarily hold ring buffer space); see the sketch after
    this list;
  - pb-libbpf uses stock libbpf perf_buffer code with all the default
    settings, though it uses the higher-performance raw event callback to
    minimize unnecessary overhead;
  - pb-custom implements its own custom consumer code to minimize any
    possible overhead of the generic libbpf implementation and indirect
    function calls.

All of the benchmarks support the default mode, in which no data
notifications are skipped, as well as sampled mode (with the --rb-sampled
flag), which triggers epoll notifications less frequently to reduce overhead.
As will be shown, this mode is especially critical for perf buffer, which
suffers from the high overhead of in-kernel wakeups.

Otherwise, all benchmarks generate a batch of records in a similar way, using
an fentry/sys_getpgid BPF program, which pushes a bunch of records in a tight
loop and records the number of successful and dropped samples. Each record is
a small 8-byte integer, to minimize the effect of memory copying with
bpf_perf_event_output() and bpf_ringbuf_output().

Benchmarks that have only one producer implement an optional back-to-back
mode, in which record production and consumption alternate on the same CPU.
This is the highest-throughput happy case, showing the ultimate performance
achievable with either BPF ringbuf or perfbuf.

All of the below scenarios are implemented in the
benchs/run_bench_ringbufs.sh script. Tests were performed on a 28-core/56-thread
Intel Xeon E5-2680 v4 @ 2.40GHz CPU.
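
For reference, each scenario can also be run individually through the bench
runner, using the same base flags as the script's RUN_BENCH variable, e.g.:

  $ sudo ./bench -w3 -d10 -a --rb-sampled rb-custom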

Single-producer, parallel producer
==================================
rb-libbpf            12.054 ± 0.320M/s (drops 0.000 ± 0.000M/s)
rb-custom            8.158 ± 0.118M/s (drops 0.001 ± 0.003M/s)
pb-libbpf            0.931 ± 0.007M/s (drops 0.000 ± 0.000M/s)
pb-custom            0.965 ± 0.003M/s (drops 0.000 ± 0.000M/s)

Single-producer, parallel producer, sampled notification
========================================================
rb-libbpf            11.563 ± 0.067M/s (drops 0.000 ± 0.000M/s)
rb-custom            15.895 ± 0.076M/s (drops 0.000 ± 0.000M/s)
pb-libbpf            9.889 ± 0.032M/s (drops 0.000 ± 0.000M/s)
pb-custom            9.866 ± 0.028M/s (drops 0.000 ± 0.000M/s)

Single producer on one CPU, consumer on another, both running at full speed.
Curiously, rb-libbpf has higher throughput than the objectively faster (due to
a more lightweight consumer code path) rb-custom. It appears that the faster
consumer causes the kernel to send notifications more frequently, because the
consumer catches up more often. Performance of perfbuf suffers from the
default "no sampling" policy and the huge overhead it causes.

In sampled mode, rb-custom wins very significantly by eliminating
too-frequent in-kernel wakeups; the gain appears to be more than 2x.

Perf buffer achieves even more impressive wins: compared to stock perfbuf
settings, sampling brings a 10x improvement in throughput with a 1:500
sampling rate. The trade-off is that with sampling, the application might not
get the next X events until the X+1st one arrives, which is not always
acceptable. With a steady influx of events, though, this shouldn't be a
problem.

Overall, single-producer performance of the ring buffer is better in both
sampled and non-sampled modes, but it especially beats perf buffer in the
non-sampled mode, thanks to its adaptive notification approach.

Single-producer, back-to-back mode
==================================
rb-libbpf            15.507 ± 0.247M/s (drops 0.000 ± 0.000M/s)
rb-libbpf-sampled    14.692 ± 0.195M/s (drops 0.000 ± 0.000M/s)
rb-custom            21.449 ± 0.157M/s (drops 0.000 ± 0.000M/s)
rb-custom-sampled    20.024 ± 0.386M/s (drops 0.000 ± 0.000M/s)
pb-libbpf            1.601 ± 0.015M/s (drops 0.000 ± 0.000M/s)
pb-libbpf-sampled    8.545 ± 0.064M/s (drops 0.000 ± 0.000M/s)
pb-custom            1.607 ± 0.022M/s (drops 0.000 ± 0.000M/s)
pb-custom-sampled    8.988 ± 0.144M/s (drops 0.000 ± 0.000M/s)

Here we test back-to-back mode, which is arguably the best-case scenario for
both BPF ringbuf and perfbuf, because there is no contention and, for ringbuf,
also no excessive notifications, because the consumer appears to be behind
after the first record. For ringbuf, custom consumer code clearly wins, with
21.5 vs 15.5 million records per second exchanged between producer and
consumer. Sampled mode actually hurts a bit, due to slightly slower producer
logic (it needs to fetch the amount of available data to decide whether to
skip or force a notification).

Perfbuf with wakeup sampling gets a 5.5x throughput increase compared to the
no-sampling version. There also doesn't seem to be any noticeable overhead
from the generic libbpf handling code.

Perfbuf back-to-back, effect of sample rate
===========================================
pb-sampled-1         1.035 ± 0.012M/s (drops 0.000 ± 0.000M/s)
pb-sampled-5         3.476 ± 0.087M/s (drops 0.000 ± 0.000M/s)
pb-sampled-10        5.094 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-25        7.118 ± 0.153M/s (drops 0.000 ± 0.000M/s)
pb-sampled-50        8.169 ± 0.156M/s (drops 0.000 ± 0.000M/s)
pb-sampled-100       8.887 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-250       9.180 ± 0.209M/s (drops 0.000 ± 0.000M/s)
pb-sampled-500       9.353 ± 0.281M/s (drops 0.000 ± 0.000M/s)
pb-sampled-1000      9.411 ± 0.217M/s (drops 0.000 ± 0.000M/s)
pb-sampled-2000      9.464 ± 0.167M/s (drops 0.000 ± 0.000M/s)
pb-sampled-3000      9.575 ± 0.273M/s (drops 0.000 ± 0.000M/s)

This benchmark shows the effect of event sampling for perfbuf, using
back-to-back mode for the highest throughput. Notifying only on every 5th
record gives a 3.5x speed-up. A rate of 250-500 appears to be the point of
diminishing returns, with almost a 9x speed-up. Most benchmarks use 500 as the
default sample rate for pb-libbpf and pb-custom.

Ringbuf back-to-back, effect of sample rate
===========================================
rb-sampled-1         1.106 ± 0.010M/s (drops 0.000 ± 0.000M/s)
rb-sampled-5         4.746 ± 0.149M/s (drops 0.000 ± 0.000M/s)
rb-sampled-10        7.706 ± 0.164M/s (drops 0.000 ± 0.000M/s)
rb-sampled-25        12.893 ± 0.273M/s (drops 0.000 ± 0.000M/s)
rb-sampled-50        15.961 ± 0.361M/s (drops 0.000 ± 0.000M/s)
rb-sampled-100       18.203 ± 0.445M/s (drops 0.000 ± 0.000M/s)
rb-sampled-250       19.962 ± 0.786M/s (drops 0.000 ± 0.000M/s)
rb-sampled-500       20.881 ± 0.551M/s (drops 0.000 ± 0.000M/s)
rb-sampled-1000      21.317 ± 0.532M/s (drops 0.000 ± 0.000M/s)
rb-sampled-2000      21.331 ± 0.535M/s (drops 0.000 ± 0.000M/s)
rb-sampled-3000      21.688 ± 0.392M/s (drops 0.000 ± 0.000M/s)

A similar benchmark for the ring buffer also shows a great advantage (in
terms of throughput) of skipping notifications. Notifying only on every 5th
record gives a 4x boost. Also similar to the perfbuf case, 250-500 seems to be
the point of diminishing returns, giving roughly a 20x improvement overall.

Keep in mind that for this test, notifications are controlled manually with
BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP. As can be seen from previous
benchmarks, adaptive notifications based on the consumer's position provide
the same (or even slightly better, due to a simpler load generator on the BPF
side) benefits in the favorable back-to-back scenario. An overzealous and fast
consumer, which is almost always caught up, will make throughput numbers
smaller. That's the case where manual notification control might prove
extremely beneficial.

Ringbuf back-to-back, reserve+commit vs output
==============================================
reserve              22.819 ± 0.503M/s (drops 0.000 ± 0.000M/s)
output               18.906 ± 0.433M/s (drops 0.000 ± 0.000M/s)

Ringbuf sampled, reserve+commit vs output
=========================================
reserve-sampled      15.350 ± 0.132M/s (drops 0.000 ± 0.000M/s)
output-sampled       14.195 ± 0.144M/s (drops 0.000 ± 0.000M/s)

BPF ringbuf supports two sets of APIs with different usability and performance
trade-offs: bpf_ringbuf_reserve()+bpf_ringbuf_commit() vs bpf_ringbuf_output().
This benchmark clearly shows the superiority of the reserve+commit approach,
even though the records are small 8-byte integers, for which the extra copy
done by bpf_ringbuf_output() should be cheapest.

Single-producer, consumer/producer competing on the same CPU, low batch count
=============================================================================
rb-libbpf            3.045 ± 0.020M/s (drops 3.536 ± 0.148M/s)
rb-custom            3.055 ± 0.022M/s (drops 3.893 ± 0.066M/s)
pb-libbpf            1.393 ± 0.024M/s (drops 0.000 ± 0.000M/s)
pb-custom            1.407 ± 0.016M/s (drops 0.000 ± 0.000M/s)

This benchmark shows one of the worst-case scenarios, in which producer and
consumer do not coordinate *and* fight for the same CPU. No combination of
batch count and sampling settings was able to eliminate drops for the ring
buffer; the producer is just too fast for the consumer to keep up. But both
ringbuf and perfbuf are still able to pass through quite a lot of messages,
which is more than enough for many applications.

Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1  10.916 ± 0.399M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 2  4.931 ± 0.030M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 3  4.880 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 4  3.926 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 8  4.011 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 12 3.967 ± 0.016M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 16 2.604 ± 0.030M/s (drops 0.001 ± 0.002M/s)
rb-libbpf nr_prod 20 2.233 ± 0.003M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 24 2.085 ± 0.015M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 28 2.055 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 32 1.962 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 36 2.089 ± 0.005M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 40 2.118 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 44 2.105 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 48 2.120 ± 0.058M/s (drops 0.000 ± 0.001M/s)
rb-libbpf nr_prod 52 2.074 ± 0.024M/s (drops 0.007 ± 0.014M/s)

Ringbuf uses a very short-duration spinlock during the reservation phase to
check a few invariants, increment the producer counter, and set the record
header. This is the biggest point of contention in the ringbuf implementation.
This benchmark evaluates the effect of multiple competing writers on the
overall throughput of a single shared ring buffer.

Overall throughput drops almost 2x when going from one to two highly
contended producers, and gradually degrades with additional competing
producers. The drop stabilizes at around 20 producers and hovers around
2 million records/s even with 50+ fighting producers, which is a 5x drop
compared to the non-contended case. A careful in-kernel implementation helps
maintain decent performance here.

Note that in the intended real-world scenarios, contention is not expected to
come even close to such high levels. But if contention does become a problem,
there is always the option of sharding a few ring buffers across a set of
CPUs.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/testing/selftests/bpf/Makefile          |   5 +-
 tools/testing/selftests/bpf/bench.c           |  16 +
 .../selftests/bpf/benchs/bench_ringbufs.c     | 566 ++++++++++++++++++
 .../bpf/benchs/run_bench_ringbufs.sh          |  75 +++
 .../selftests/bpf/progs/perfbuf_bench.c       |  33 +
 .../selftests/bpf/progs/ringbuf_bench.c       |  60 ++
 6 files changed, 754 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/benchs/bench_ringbufs.c
 create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_ringbufs.sh
 create mode 100644 tools/testing/selftests/bpf/progs/perfbuf_bench.c
 create mode 100644 tools/testing/selftests/bpf/progs/ringbuf_bench.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index e716e931d0c9..3ce548eff8a8 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -413,12 +413,15 @@ $(OUTPUT)/bench_%.o: benchs/bench_%.c bench.h
 	$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@
 $(OUTPUT)/bench_rename.o: $(OUTPUT)/test_overhead.skel.h
 $(OUTPUT)/bench_trigger.o: $(OUTPUT)/trigger_bench.skel.h
+$(OUTPUT)/bench_ringbufs.o: $(OUTPUT)/ringbuf_bench.skel.h \
+			    $(OUTPUT)/perfbuf_bench.skel.h
 $(OUTPUT)/bench.o: bench.h testing_helpers.h
 $(OUTPUT)/bench: LDLIBS += -lm
 $(OUTPUT)/bench: $(OUTPUT)/bench.o $(OUTPUT)/testing_helpers.o \
 		 $(OUTPUT)/bench_count.o \
 		 $(OUTPUT)/bench_rename.o \
-		 $(OUTPUT)/bench_trigger.o
+		 $(OUTPUT)/bench_trigger.o \
+		 $(OUTPUT)/bench_ringbufs.o
 	$(call msg,BINARY,,$@)
 	$(CC) $(LDFLAGS) -o $@ $(filter %.a %.o,$^) $(LDLIBS)
 
diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index 14390689ef90..944ad4721c83 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -130,6 +130,13 @@ static const struct argp_option opts[] = {
 	{},
 };
 
+extern struct argp bench_ringbufs_argp;
+
+static const struct argp_child bench_parsers[] = {
+	{ &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 },
+	{},
+};
+
 static error_t parse_arg(int key, char *arg, struct argp_state *state)
 {
 	static int pos_args;
@@ -208,6 +215,7 @@ static void parse_cmdline_args(int argc, char **argv)
 		.options = opts,
 		.parser = parse_arg,
 		.doc = argp_program_doc,
+		.children = bench_parsers,
 	};
 	if (argp_parse(&argp, argc, argv, 0, NULL, NULL))
 		exit(1);
@@ -310,6 +318,10 @@ extern const struct bench bench_trig_rawtp;
 extern const struct bench bench_trig_kprobe;
 extern const struct bench bench_trig_fentry;
 extern const struct bench bench_trig_fmodret;
+extern const struct bench bench_rb_libbpf;
+extern const struct bench bench_rb_custom;
+extern const struct bench bench_pb_libbpf;
+extern const struct bench bench_pb_custom;
 
 static const struct bench *benchs[] = {
 	&bench_count_global,
@@ -327,6 +339,10 @@ static const struct bench *benchs[] = {
 	&bench_trig_kprobe,
 	&bench_trig_fentry,
 	&bench_trig_fmodret,
+	&bench_rb_libbpf,
+	&bench_rb_custom,
+	&bench_pb_libbpf,
+	&bench_pb_custom,
 };
 
 static void setup_benchmark()
diff --git a/tools/testing/selftests/bpf/benchs/bench_ringbufs.c b/tools/testing/selftests/bpf/benchs/bench_ringbufs.c
new file mode 100644
index 000000000000..da87c7f31891
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/bench_ringbufs.c
@@ -0,0 +1,566 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include <asm/barrier.h>
+#include <linux/perf_event.h>
+#include <linux/ring_buffer.h>
+#include <sys/epoll.h>
+#include <sys/mman.h>
+#include <argp.h>
+#include <stdlib.h>
+#include "bench.h"
+#include "ringbuf_bench.skel.h"
+#include "perfbuf_bench.skel.h"
+
+static struct {
+	bool back2back;
+	int batch_cnt;
+	bool sampled;
+	int sample_rate;
+	int ringbuf_sz; /* per-ringbuf, in bytes */
+	bool ringbuf_use_output; /* use slower output API */
+	int perfbuf_sz; /* per-CPU size, in pages */
+} args = {
+	.back2back = false,
+	.batch_cnt = 500,
+	.sampled = false,
+	.sample_rate = 500,
+	.ringbuf_sz = 512 * 1024,
+	.ringbuf_use_output = false,
+	.perfbuf_sz = 128,
+};
+
+enum {
+	ARG_RB_BACK2BACK = 2000,
+	ARG_RB_USE_OUTPUT = 2001,
+	ARG_RB_BATCH_CNT = 2002,
+	ARG_RB_SAMPLED = 2003,
+	ARG_RB_SAMPLE_RATE = 2004,
+};
+
+static const struct argp_option opts[] = {
+	{ "rb-b2b", ARG_RB_BACK2BACK, NULL, 0, "Back-to-back mode"},
+	{ "rb-use-output", ARG_RB_USE_OUTPUT, NULL, 0, "Use bpf_ringbuf_output() instead of bpf_ringbuf_reserve()"},
+	{ "rb-batch-cnt", ARG_RB_BATCH_CNT, "CNT", 0, "Set BPF-side record batch count"},
+	{ "rb-sampled", ARG_RB_SAMPLED, NULL, 0, "Notification sampling"},
+	{ "rb-sample-rate", ARG_RB_SAMPLE_RATE, "RATE", 0, "Notification sample rate"},
+	{},
+};
+
+static error_t parse_arg(int key, char *arg, struct argp_state *state)
+{
+	switch (key) {
+	case ARG_RB_BACK2BACK:
+		args.back2back = true;
+		break;
+	case ARG_RB_USE_OUTPUT:
+		args.ringbuf_use_output = true;
+		break;
+	case ARG_RB_BATCH_CNT:
+		args.batch_cnt = strtol(arg, NULL, 10);
+		if (args.batch_cnt < 0) {
+			fprintf(stderr, "Invalid batch count.");
+			argp_usage(state);
+		}
+		break;
+	case ARG_RB_SAMPLED:
+		args.sampled = true;
+		break;
+	case ARG_RB_SAMPLE_RATE:
+		args.sample_rate = strtol(arg, NULL, 10);
+		if (args.sample_rate < 0) {
+			fprintf(stderr, "Invalid perfbuf sample rate.");
+			argp_usage(state);
+		}
+		break;
+	default:
+		return ARGP_ERR_UNKNOWN;
+	}
+	return 0;
+}
+
+/* exported into benchmark runner */
+const struct argp bench_ringbufs_argp = {
+	.options = opts,
+	.parser = parse_arg,
+};
+
+/* RINGBUF-LIBBPF benchmark */
+
+static struct counter buf_hits;
+
+static inline void bufs_trigger_batch()
+{
+	(void)syscall(__NR_getpgid);
+}
+
+static void bufs_validate()
+{
+	if (env.consumer_cnt != 1) {
+		fprintf(stderr, "rb-libbpf benchmark doesn't support multi-consumer!\n");
+		exit(1);
+	}
+
+	if (args.back2back && env.producer_cnt > 1) {
+		fprintf(stderr, "back-to-back mode makes sense only for single-producer case!\n");
+		exit(1);
+	}
+}
+
+static void *bufs_sample_producer(void *input)
+{
+	if (args.back2back) {
+		/* initial batch to get everything started */
+		bufs_trigger_batch();
+		return NULL;
+	}
+
+	while (true)
+		bufs_trigger_batch();
+	return NULL;
+}
+
+static struct ringbuf_libbpf_ctx {
+	struct ringbuf_bench *skel;
+	struct ring_buffer *ringbuf;
+} ringbuf_libbpf_ctx;
+
+static void ringbuf_libbpf_measure(struct bench_res *res)
+{
+	struct ringbuf_libbpf_ctx *ctx = &ringbuf_libbpf_ctx;
+
+	res->hits = atomic_swap(&buf_hits.value, 0);
+	res->drops = atomic_swap(&ctx->skel->bss->dropped, 0);
+}
+
+static struct ringbuf_bench *ringbuf_setup_skeleton()
+{
+	struct ringbuf_bench *skel;
+
+	setup_libbpf();
+
+	skel = ringbuf_bench__open();
+	if (!skel) {
+		fprintf(stderr, "failed to open skeleton\n");
+		exit(1);
+	}
+
+	skel->rodata->batch_cnt = args.batch_cnt;
+	skel->rodata->use_output = args.ringbuf_use_output ? 1 : 0;
+
+	if (args.sampled)
+		/* record data + header take 16 bytes */
+		skel->rodata->wakeup_data_size = args.sample_rate * 16;
+
+	bpf_map__resize(skel->maps.ringbuf, args.ringbuf_sz);
+
+	if (ringbuf_bench__load(skel)) {
+		fprintf(stderr, "failed to load skeleton\n");
+		exit(1);
+	}
+
+	return skel;
+}
+
+static int buf_process_sample(void *ctx, void *data, size_t len)
+{
+	atomic_inc(&buf_hits.value);
+	return 0;
+}
+
+static void ringbuf_libbpf_setup()
+{
+	struct ringbuf_libbpf_ctx *ctx = &ringbuf_libbpf_ctx;
+	struct bpf_link *link;
+
+	ctx->skel = ringbuf_setup_skeleton();
+	ctx->ringbuf = ring_buffer__new(bpf_map__fd(ctx->skel->maps.ringbuf),
+					buf_process_sample, NULL, NULL);
+	if (!ctx->ringbuf) {
+		fprintf(stderr, "failed to create ringbuf\n");
+		exit(1);
+	}
+
+	link = bpf_program__attach(ctx->skel->progs.bench_ringbuf);
+	if (IS_ERR(link)) {
+		fprintf(stderr, "failed to attach program!\n");
+		exit(1);
+	}
+}
+
+static void *ringbuf_libbpf_consumer(void *input)
+{
+	struct ringbuf_libbpf_ctx *ctx = &ringbuf_libbpf_ctx;
+
+	while (ring_buffer__poll(ctx->ringbuf, -1) >= 0) {
+		if (args.back2back)
+			bufs_trigger_batch();
+	}
+	fprintf(stderr, "ringbuf polling failed!\n");
+	return NULL;
+}
+
+/* RINGBUF-CUSTOM benchmark */
+struct ringbuf_custom {
+	__u64 *consumer_pos;
+	__u64 *producer_pos;
+	__u64 mask;
+	void *data;
+	int map_fd;
+};
+
+static struct ringbuf_custom_ctx {
+	struct ringbuf_bench *skel;
+	struct ringbuf_custom ringbuf;
+	int epoll_fd;
+	struct epoll_event event;
+} ringbuf_custom_ctx;
+
+static void ringbuf_custom_measure(struct bench_res *res)
+{
+	struct ringbuf_custom_ctx *ctx = &ringbuf_custom_ctx;
+
+	res->hits = atomic_swap(&buf_hits.value, 0);
+	res->drops = atomic_swap(&ctx->skel->bss->dropped, 0);
+}
+
+static void ringbuf_custom_setup()
+{
+	struct ringbuf_custom_ctx *ctx = &ringbuf_custom_ctx;
+	const size_t page_size = getpagesize();
+	struct bpf_link *link;
+	struct ringbuf_custom *r;
+	void *tmp;
+	int err;
+
+	ctx->skel = ringbuf_setup_skeleton();
+
+	ctx->epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+	if (ctx->epoll_fd < 0) {
+		fprintf(stderr, "failed to create epoll fd: %d\n", -errno);
+		exit(1);
+	}
+
+	r = &ctx->ringbuf;
+	r->map_fd = bpf_map__fd(ctx->skel->maps.ringbuf);
+	r->mask = args.ringbuf_sz - 1;
+
+	/* Map writable consumer page */
+	tmp = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+		   r->map_fd, 0);
+	if (tmp == MAP_FAILED) {
+		fprintf(stderr, "failed to mmap consumer page: %d\n", -errno);
+		exit(1);
+	}
+	r->consumer_pos = tmp;
+
+	/* Map read-only producer page and data pages. */
+	tmp = mmap(NULL, page_size + 2 * args.ringbuf_sz, PROT_READ, MAP_SHARED,
+		   r->map_fd, page_size);
+	if (tmp == MAP_FAILED) {
+		fprintf(stderr, "failed to mmap data pages: %d\n", -errno);
+		exit(1);
+	}
+	r->producer_pos = tmp;
+	r->data = tmp + page_size;
+
+	ctx->event.events = EPOLLIN;
+	err = epoll_ctl(ctx->epoll_fd, EPOLL_CTL_ADD, r->map_fd, &ctx->event);
+	if (err < 0) {
+		fprintf(stderr, "failed to epoll add ringbuf: %d\n", -errno);
+		exit(1);
+	}
+
+	link = bpf_program__attach(ctx->skel->progs.bench_ringbuf);
+	if (IS_ERR(link)) {
+		fprintf(stderr, "failed to attach program\n");
+		exit(1);
+	}
+}
+
+#define RINGBUF_BUSY_BIT (1 << 31)
+#define RINGBUF_DISCARD_BIT (1 << 30)
+#define RINGBUF_META_LEN 8
+
+static inline int roundup_len(__u32 len)
+{
+	/* clear out top 2 bits */
+	len <<= 2;
+	len >>= 2;
+	/* add length prefix */
+	len += RINGBUF_META_LEN;
+	/* round up to 8 byte alignment */
+	return (len + 7) / 8 * 8;
+}
+
+static void ringbuf_custom_process_ring(struct ringbuf_custom *r)
+{
+	unsigned long cons_pos, prod_pos;
+	int *len_ptr, len;
+	bool got_new_data;
+
+	cons_pos = smp_load_acquire(r->consumer_pos);
+	while (true) {
+		got_new_data = false;
+		prod_pos = smp_load_acquire(r->producer_pos);
+		while (cons_pos < prod_pos) {
+			len_ptr = r->data + (cons_pos & r->mask);
+			len = smp_load_acquire(len_ptr);
+
+			/* sample not committed yet, bail out for now */
+			if (len & RINGBUF_BUSY_BIT)
+				return;
+
+			got_new_data = true;
+			cons_pos += roundup_len(len);
+
+			atomic_inc(&buf_hits.value);
+		}
+		if (got_new_data)
+			smp_store_release(r->consumer_pos, cons_pos);
+		else
+			break;
+	}
+}
+
+static void *ringbuf_custom_consumer(void *input)
+{
+	struct ringbuf_custom_ctx *ctx = &ringbuf_custom_ctx;
+	int cnt;
+
+	do {
+		if (args.back2back)
+			bufs_trigger_batch();
+		cnt = epoll_wait(ctx->epoll_fd, &ctx->event, 1, -1);
+		if (cnt > 0)
+			ringbuf_custom_process_ring(&ctx->ringbuf);
+	} while (cnt >= 0);
+	fprintf(stderr, "ringbuf polling failed!\n");
+	return 0;
+}
+
+/* PERFBUF-LIBBPF benchmark */
+static struct perfbuf_libbpf_ctx {
+	struct perfbuf_bench *skel;
+	struct perf_buffer *perfbuf;
+} perfbuf_libbpf_ctx;
+
+static void perfbuf_measure(struct bench_res *res)
+{
+	struct perfbuf_libbpf_ctx *ctx = &perfbuf_libbpf_ctx;
+
+	res->hits = atomic_swap(&buf_hits.value, 0);
+	res->drops = atomic_swap(&ctx->skel->bss->dropped, 0);
+}
+
+static struct perfbuf_bench *perfbuf_setup_skeleton()
+{
+	struct perfbuf_bench *skel;
+
+	setup_libbpf();
+
+	skel = perfbuf_bench__open();
+	if (!skel) {
+		fprintf(stderr, "failed to open skeleton\n");
+		exit(1);
+	}
+
+	skel->rodata->batch_cnt = args.batch_cnt;
+
+	if (perfbuf_bench__load(skel)) {
+		fprintf(stderr, "failed to load skeleton\n");
+		exit(1);
+	}
+
+	return skel;
+}
+
+static enum bpf_perf_event_ret
+perfbuf_process_sample_raw(void *input_ctx, int cpu,
+			   struct perf_event_header *e)
+{
+	switch (e->type) {
+	case PERF_RECORD_SAMPLE:
+		atomic_inc(&buf_hits.value);
+		break;
+	case PERF_RECORD_LOST:
+		break;
+	default:
+		return LIBBPF_PERF_EVENT_ERROR;
+	}
+	return LIBBPF_PERF_EVENT_CONT;
+}
+
+static void perfbuf_libbpf_setup()
+{
+	struct perfbuf_libbpf_ctx *ctx = &perfbuf_libbpf_ctx;
+	struct perf_event_attr attr;
+	struct perf_buffer_raw_opts pb_opts = {
+		.event_cb = perfbuf_process_sample_raw,
+		.ctx = (void *)(long)0,
+		.attr = &attr,
+	};
+	struct bpf_link *link;
+
+	ctx->skel = perfbuf_setup_skeleton();
+
+	memset(&attr, 0, sizeof(attr));
+	attr.config = PERF_COUNT_SW_BPF_OUTPUT;
+	attr.type = PERF_TYPE_SOFTWARE;
+	attr.sample_type = PERF_SAMPLE_RAW;
+	/* notify only every Nth sample */
+	if (args.sampled) {
+		attr.sample_period = args.sample_rate;
+		attr.wakeup_events = args.sample_rate;
+	} else {
+		attr.sample_period = 1;
+		attr.wakeup_events = 1;
+	}
+
+	if (args.sample_rate > args.batch_cnt) {
+		fprintf(stderr, "sample rate %d is too high for given batch count %d\n",
+			args.sample_rate, args.batch_cnt);
+		exit(1);
+	}
+
+	ctx->perfbuf = perf_buffer__new_raw(bpf_map__fd(ctx->skel->maps.perfbuf),
+					    args.perfbuf_sz, &pb_opts);
+	if (!ctx->perfbuf) {
+		fprintf(stderr, "failed to create perfbuf\n");
+		exit(1);
+	}
+
+	link = bpf_program__attach(ctx->skel->progs.bench_perfbuf);
+	if (IS_ERR(link)) {
+		fprintf(stderr, "failed to attach program\n");
+		exit(1);
+	}
+}
+
+static void *perfbuf_libbpf_consumer(void *input)
+{
+	struct perfbuf_libbpf_ctx *ctx = &perfbuf_libbpf_ctx;
+
+	while (perf_buffer__poll(ctx->perfbuf, -1) >= 0) {
+		if (args.back2back)
+			bufs_trigger_batch();
+	}
+	fprintf(stderr, "perfbuf polling failed!\n");
+	return NULL;
+}
+
+/* PERFBUF-CUSTOM benchmark */
+
+/* copies of internal libbpf definitions */
+struct perf_cpu_buf {
+	struct perf_buffer *pb;
+	void *base; /* mmap()'ed memory */
+	void *buf; /* for reconstructing segmented data */
+	size_t buf_size;
+	int fd;
+	int cpu;
+	int map_key;
+};
+
+struct perf_buffer {
+	perf_buffer_event_fn event_cb;
+	perf_buffer_sample_fn sample_cb;
+	perf_buffer_lost_fn lost_cb;
+	void *ctx; /* passed into callbacks */
+
+	size_t page_size;
+	size_t mmap_size;
+	struct perf_cpu_buf **cpu_bufs;
+	struct epoll_event *events;
+	int cpu_cnt; /* number of allocated CPU buffers */
+	int epoll_fd; /* perf event FD */
+	int map_fd; /* BPF_MAP_TYPE_PERF_EVENT_ARRAY BPF map FD */
+};
+
+static void *perfbuf_custom_consumer(void *input)
+{
+	struct perfbuf_libbpf_ctx *ctx = &perfbuf_libbpf_ctx;
+	struct perf_buffer *pb = ctx->perfbuf;
+	struct perf_cpu_buf *cpu_buf;
+	struct perf_event_mmap_page *header;
+	size_t mmap_mask = pb->mmap_size - 1;
+	struct perf_event_header *ehdr;
+	__u64 data_head, data_tail;
+	size_t ehdr_size;
+	void *base;
+	int i, cnt;
+
+	while (true) {
+		if (args.back2back)
+			bufs_trigger_batch();
+		cnt = epoll_wait(pb->epoll_fd, pb->events, pb->cpu_cnt, -1);
+		if (cnt <= 0) {
+			fprintf(stderr, "perf epoll failed: %d\n", -errno);
+			exit(1);
+		}
+
+		for (i = 0; i < cnt; ++i) {
+			cpu_buf = pb->events[i].data.ptr;
+			header = cpu_buf->base;
+			base = ((void *)header) + pb->page_size;
+
+			data_head = ring_buffer_read_head(header);
+			data_tail = header->data_tail;
+			while (data_head != data_tail) {
+				ehdr = base + (data_tail & mmap_mask);
+				ehdr_size = ehdr->size;
+
+				if (ehdr->type == PERF_RECORD_SAMPLE)
+					atomic_inc(&buf_hits.value);
+
+				data_tail += ehdr_size;
+			}
+			ring_buffer_write_tail(header, data_tail);
+		}
+	}
+	return NULL;
+}
+
+const struct bench bench_rb_libbpf = {
+	.name = "rb-libbpf",
+	.validate = bufs_validate,
+	.setup = ringbuf_libbpf_setup,
+	.producer_thread = bufs_sample_producer,
+	.consumer_thread = ringbuf_libbpf_consumer,
+	.measure = ringbuf_libbpf_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
+
+const struct bench bench_rb_custom = {
+	.name = "rb-custom",
+	.validate = bufs_validate,
+	.setup = ringbuf_custom_setup,
+	.producer_thread = bufs_sample_producer,
+	.consumer_thread = ringbuf_custom_consumer,
+	.measure = ringbuf_custom_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
+
+const struct bench bench_pb_libbpf = {
+	.name = "pb-libbpf",
+	.validate = bufs_validate,
+	.setup = perfbuf_libbpf_setup,
+	.producer_thread = bufs_sample_producer,
+	.consumer_thread = perfbuf_libbpf_consumer,
+	.measure = perfbuf_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
+
+const struct bench bench_pb_custom = {
+	.name = "pb-custom",
+	.validate = bufs_validate,
+	.setup = perfbuf_libbpf_setup,
+	.producer_thread = bufs_sample_producer,
+	.consumer_thread = perfbuf_custom_consumer,
+	.measure = perfbuf_measure,
+	.report_progress = hits_drops_report_progress,
+	.report_final = hits_drops_report_final,
+};
+
diff --git a/tools/testing/selftests/bpf/benchs/run_bench_ringbufs.sh b/tools/testing/selftests/bpf/benchs/run_bench_ringbufs.sh
new file mode 100755
index 000000000000..af4aa04caba6
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/run_bench_ringbufs.sh
@@ -0,0 +1,75 @@
+#!/bin/bash
+
+set -eufo pipefail
+
+RUN_BENCH="sudo ./bench -w3 -d10 -a"
+
+function hits()
+{
+	echo "$*" | sed -E "s/.*hits\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+M\/s).*/\1/"
+}
+
+function drops()
+{
+	echo "$*" | sed -E "s/.*drops\s+([0-9]+\.[0-9]+ ± [0-9]+\.[0-9]+M\/s).*/\1/"
+}
+
+function header()
+{
+	local len=${#1}
+
+	printf "\n%s\n" "$1"
+	for i in $(seq 1 $len); do printf '='; done
+	printf '\n'
+}
+
+function summarize()
+{
+	bench="$1"
+	summary=$(echo $2 | tail -n1)
+	printf "%-20s %s (drops %s)\n" "$bench" "$(hits $summary)" "$(drops $summary)"
+}
+
+header "Single-producer, parallel producer"
+for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
+	summarize $b "$($RUN_BENCH $b)"
+done
+
+header "Single-producer, parallel producer, sampled notification"
+for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
+	summarize $b "$($RUN_BENCH --rb-sampled $b)"
+done
+
+header "Single-producer, back-to-back mode"
+for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
+	summarize $b "$($RUN_BENCH --rb-b2b $b)"
+	summarize $b-sampled "$($RUN_BENCH --rb-sampled --rb-b2b $b)"
+done
+
+header "Ringbuf back-to-back, effect of sample rate"
+for b in 1 5 10 25 50 100 250 500 1000 2000 3000; do
+	summarize "rb-sampled-$b" "$($RUN_BENCH --rb-b2b --rb-batch-cnt $b --rb-sampled --rb-sample-rate $b rb-custom)"
+done
+header "Perfbuf back-to-back, effect of sample rate"
+for b in 1 5 10 25 50 100 250 500 1000 2000 3000; do
+	summarize "pb-sampled-$b" "$($RUN_BENCH --rb-b2b --rb-batch-cnt $b --rb-sampled --rb-sample-rate $b pb-custom)"
+done
+
+header "Ringbuf back-to-back, reserve+commit vs output"
+summarize "reserve" "$($RUN_BENCH --rb-b2b                 rb-custom)"
+summarize "output"  "$($RUN_BENCH --rb-b2b --rb-use-output rb-custom)"
+
+header "Ringbuf sampled, reserve+commit vs output"
+summarize "reserve-sampled" "$($RUN_BENCH --rb-sampled                 rb-custom)"
+summarize "output-sampled"  "$($RUN_BENCH --rb-sampled --rb-use-output rb-custom)"
+
+header "Single-producer, consumer/producer competing on the same CPU, low batch count"
+for b in rb-libbpf rb-custom pb-libbpf pb-custom; do
+	summarize $b "$($RUN_BENCH --rb-batch-cnt 1 --rb-sample-rate 1 --prod-affinity 0 --cons-affinity 0 $b)"
+done
+
+header "Ringbuf, multi-producer contention"
+for b in 1 2 3 4 8 12 16 20 24 28 32 36 40 44 48 52; do
+	summarize "rb-libbpf nr_prod $b" "$($RUN_BENCH -p$b --rb-batch-cnt 50 rb-libbpf)"
+done
+
diff --git a/tools/testing/selftests/bpf/progs/perfbuf_bench.c b/tools/testing/selftests/bpf/progs/perfbuf_bench.c
new file mode 100644
index 000000000000..e5ab4836a641
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/perfbuf_bench.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+	__uint(value_size, sizeof(int));
+	__uint(key_size, sizeof(int));
+} perfbuf SEC(".maps");
+
+const volatile int batch_cnt = 0;
+
+long sample_val = 42;
+long dropped __attribute__((aligned(128))) = 0;
+
+SEC("fentry/__x64_sys_getpgid")
+int bench_perfbuf(void *ctx)
+{
+	__u64 *sample;
+	int i;
+
+	for (i = 0; i < batch_cnt; i++) {
+		if (bpf_perf_event_output(ctx, &perfbuf, BPF_F_CURRENT_CPU,
+					  &sample_val, sizeof(sample_val)))
+			__sync_add_and_fetch(&dropped, 1);
+	}
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/ringbuf_bench.c b/tools/testing/selftests/bpf/progs/ringbuf_bench.c
new file mode 100644
index 000000000000..123607d314d6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/ringbuf_bench.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+} ringbuf SEC(".maps");
+
+const volatile int batch_cnt = 0;
+const volatile long use_output = 0;
+
+long sample_val = 42;
+long dropped __attribute__((aligned(128))) = 0;
+
+const volatile long wakeup_data_size = 0;
+
+static __always_inline long get_flags()
+{
+	long sz;
+
+	if (!wakeup_data_size)
+		return 0;
+
+	sz = bpf_ringbuf_query(&ringbuf, BPF_RB_AVAIL_DATA);
+	return sz >= wakeup_data_size ? BPF_RB_FORCE_WAKEUP : BPF_RB_NO_WAKEUP;
+}
+
+SEC("fentry/__x64_sys_getpgid")
+int bench_ringbuf(void *ctx)
+{
+	long *sample, flags;
+	int i;
+
+	if (!use_output) {
+		for (i = 0; i < batch_cnt; i++) {
+			sample = bpf_ringbuf_reserve(&ringbuf,
+					             sizeof(sample_val), 0);
+			if (!sample) {
+				__sync_add_and_fetch(&dropped, 1);
+			} else {
+				*sample = sample_val;
+				flags = get_flags();
+				bpf_ringbuf_submit(sample, flags);
+			}
+		}
+	} else {
+		for (i = 0; i < batch_cnt; i++) {
+			flags = get_flags();
+			if (bpf_ringbuf_output(&ringbuf, &sample_val,
+					       sizeof(sample_val), flags))
+				__sync_add_and_fetch(&dropped, 1);
+		}
+	}
+	return 0;
+}
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes
  2020-05-17 19:57 [PATCH v2 bpf-next 0/7] BPF ring buffer Andrii Nakryiko
                   ` (5 preceding siblings ...)
  2020-05-17 19:57 ` [PATCH v2 bpf-next 6/7] bpf: add BPF ringbuf and perf buffer benchmarks Andrii Nakryiko
@ 2020-05-17 19:57 ` Andrii Nakryiko
  2020-05-22  1:23   ` Alexei Starovoitov
  2020-05-25  9:59   ` Alban Crequy
  6 siblings, 2 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-17 19:57 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Paul E . McKenney,
	Jonathan Lemon, Stanislav Fomichev

Add the commit description from patch #1 as stand-alone documentation under
Documentation/bpf, as it might be a more convenient format in the long term.

Suggested-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 Documentation/bpf/ringbuf.txt | 191 ++++++++++++++++++++++++++++++++++
 1 file changed, 191 insertions(+)
 create mode 100644 Documentation/bpf/ringbuf.txt

diff --git a/Documentation/bpf/ringbuf.txt b/Documentation/bpf/ringbuf.txt
new file mode 100644
index 000000000000..72f0a2480db3
--- /dev/null
+++ b/Documentation/bpf/ringbuf.txt
@@ -0,0 +1,191 @@
+BPF ring buffer
+===============
+
+Motivation
+----------
+There are two distinct motivators for this work, neither of which is
+satisfied by the existing perf buffer, and which prompted the creation of
+a new ring buffer implementation:
+  - more efficient memory utilization by sharing ring buffer across CPUs;
+  - preserving ordering of events that happen sequentially in time, even
+  across multiple CPUs (e.g., fork/exec/exit events for a task).
+
+These two problems are independent, but perf buffer satisfies neither. Both
+are a result of the choice to have a per-CPU perf ring buffer, and both can
+be solved with an MPSC ring buffer implementation. The ordering problem could
+technically be solved for perf buffer with some in-kernel counting, but given
+that the first problem requires an MPSC buffer anyway, the same solution
+addresses the second problem automatically.
+
+Semantics and APIs
+------------------
+A single ring buffer is presented to BPF programs as an instance of a BPF map
+of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered, but
+ultimately rejected.
+
+One way would be to, similarly to BPF_MAP_TYPE_PERF_EVENT_ARRAY, make
+BPF_MAP_TYPE_RINGBUF represent an array of ring buffers, but without
+enforcing a "same CPU only" rule. This would be a more familiar interface,
+compatible with the existing perf buffer use in BPF, but it would fail if an
+application needed more advanced logic to look up a ring buffer by an
+arbitrary key. With the current approach, HASH_OF_MAPS addresses this need.
+Additionally, given the performance of BPF ringbuf, many use cases would just
+opt into a simple single ring buffer shared among all CPUs, for which the
+array-based approach would be overkill.
+
+Another approach could introduce a new concept, alongside BPF maps, to
+represent a generic "container" object, which doesn't necessarily have
+a key/value interface with lookup/update/delete operations. This approach
+would add a lot of extra infrastructure that would have to be built for
+observability and verifier support. It would also add another concept that
+BPF developers would have to familiarize themselves with, new syntax in
+libbpf, etc., while providing no real additional benefit over using a map.
+BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete operations, but
+neither do a few other map types (e.g., queue and stack; array doesn't
+support delete, etc.).
+
+The chosen approach has the advantage of re-using existing BPF map
+infrastructure (introspection APIs in the kernel, libbpf support, etc.),
+being a familiar concept (no need to teach users a new type of object in
+a BPF program), and utilizing existing tooling (bpftool). For the common
+scenario of using a single ring buffer for all CPUs, it's as simple and
+straightforward as it would be with a dedicated "container" object. On the
+other hand, by being a map, it can be combined with ARRAY_OF_MAPS and
+HASH_OF_MAPS map-in-maps to implement a wide variety of topologies, from one
+ring buffer for each CPU (e.g., as a replacement for perf buffer use cases)
+to complicated application-level hashing/sharding of ring buffers (e.g.,
+having a small pool of ring buffers with a hash of the task's tgid as the
+lookup key, to preserve ordering but reduce contention).
+
+Key and value sizes are enforced to be zero. max_entries is used to specify
+the size of the ring buffer and has to be a power-of-2 value.
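+
+For illustration, a minimal BPF-side declaration of a standalone ring buffer
+might look as follows (a sketch; the 4KB size is an arbitrary choice; the
+test_ringbuf_multi.c selftest in this series shows the ARRAY_OF_MAPS variant):
+
+    struct {
+        __uint(type, BPF_MAP_TYPE_RINGBUF);
+        __uint(max_entries, 1 << 12); /* data size in bytes, power of 2 */
+    } rb SEC(".maps");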
+
+There are a bunch of similarities between perf buffer
+(BPF_MAP_TYPE_PERF_EVENT_ARRAY) and the new BPF ring buffer semantics:
+  - variable-length records;
+  - if there is no more space left in ring buffer, reservation fails, no
+    blocking;
+  - memory-mappable data area for user-space applications for ease of
+    consumption and high performance;
+  - epoll notifications for new incoming data;
+  - but still the ability to do busy polling for new data to achieve the
+    lowest latency, if necessary.
+
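+On the user-space side, consumption is expected to go through libbpf's
+ring_buffer API (added in this series). A minimal sketch, assuming a map fd
+obtained from a loaded skeleton and a user-defined callback:
+
+    static int handle_sample(void *ctx, void *data, size_t len)
+    {
+        /* process one committed (non-discarded) record */
+        return 0;
+    }
+
+    struct ring_buffer *rb = ring_buffer__new(map_fd, handle_sample,
+                                              NULL, NULL);
+    if (!rb)
+        return -1;
+    /* ring_buffer__poll() returns the number of records consumed;
+     * ring_buffer__consume() can be used for busy polling instead
+     */
+    while (ring_buffer__poll(rb, -1 /* timeout, ms */) >= 0) {
+    }
+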
+BPF ringbuf provides two sets of APIs to BPF programs:
+  - bpf_ringbuf_output() allows one to *copy* data from one place to the
+    ring buffer, similarly to bpf_perf_event_output();
+  - the bpf_ringbuf_reserve()/bpf_ringbuf_commit()/bpf_ringbuf_discard() APIs
+    split the whole process into two steps. First, a fixed amount of space is
+    reserved. If successful, a pointer to data inside the ring buffer data
+    area is returned, which BPF programs can use similarly to data inside
+    array/hash maps. Once ready, this piece of memory is either committed or
+    discarded. Discard is similar to commit, but makes the consumer ignore
+    the record.
+
+bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy,
+because the record has to be prepared somewhere else first. But it allows
+submitting records of a length that isn't known to the verifier beforehand.
+It also closely matches bpf_perf_event_output(), so it will simplify
+migration significantly.
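+
+A single call suffices for the copy-based API (mirroring the usage in the
+ringbuf benchmark from this series; val and dropped are hypothetical
+variables):
+
+    if (bpf_ringbuf_output(&rb, &val, sizeof(val), 0))
+        __sync_add_and_fetch(&dropped, 1); /* not enough space left */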
+
+bpf_ringbuf_reserve() avoids the extra copy by providing a pointer directly
+into ring buffer memory. In a lot of cases records are larger than BPF stack
+space allows, so many programs have to use an extra per-CPU array as a
+temporary heap for preparing a sample. bpf_ringbuf_reserve() avoids this need
+completely. But in exchange, it only allows reserving a memory size that is
+a known constant, so that the verifier can check that the BPF program can't
+access memory outside its reserved record space. bpf_ringbuf_output(), while
+slightly slower due to the extra memory copy, covers some use cases that are
+not suitable for bpf_ringbuf_reserve().
+
+The difference between commit and discard is very small. Discard just marks
+a record as discarded, and such records are supposed to be ignored by
+consumer code. Discard is useful for some advanced use cases, such as
+ensuring all-or-nothing multi-record submission, or emulating temporary
+malloc()/free() within a single BPF program invocation.
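+
+A typical reserve/submit sequence on the BPF side might look like this
+(a sketch; struct event and the not_interesting() condition are hypothetical;
+the helper names match the selftests in this series):
+
+    struct event *e;
+
+    e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
+    if (!e)
+        return 0; /* no space left, sample is dropped */
+    e->pid = bpf_get_current_pid_tgid() >> 32;
+    if (not_interesting(e))
+        bpf_ringbuf_discard(e, 0); /* consumer will skip this record */
+    else
+        bpf_ringbuf_submit(e, 0);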
+
+Each reserved record is tracked by the verifier through the existing
+reference-tracking logic, similar to socket ref-tracking. It is thus
+impossible to reserve a record but forget to submit or discard it.
+
+The bpf_ringbuf_query() helper allows querying various properties of the ring
+buffer. Currently 4 are supported:
+  - BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring
+    buffer;
+  - BPF_RB_RING_SIZE returns the size of the ring buffer;
+  - BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of
+    the consumer/producer, respectively.
+The returned values are momentary snapshots of the ring buffer state and
+could be off by the time the helper returns, so this should be used only for
+debugging/reporting purposes or for implementing heuristics that take into
+account the highly changeable nature of some of those characteristics.
+
+One such heuristic might involve more fine-grained control over poll/epoll
+notifications about new data availability in the ring buffer. Together with
+the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard
+helpers, this gives a BPF program a high degree of control and, e.g., allows
+more efficient batched notifications. The default self-balancing strategy,
+though, should be adequate for most applications and will already work
+reliably and efficiently.
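+
+For example, the ringbuf benchmark in this series batches notifications with
+a helper along these lines (rb is the BPF_MAP_TYPE_RINGBUF map and
+wakeup_data_size is a configurable global threshold; 0 means default adaptive
+notifications):
+
+    static __always_inline long get_flags(void)
+    {
+        long avail;
+
+        if (!wakeup_data_size)
+            return 0;
+
+        avail = bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA);
+        return avail >= wakeup_data_size ? BPF_RB_FORCE_WAKEUP
+                                         : BPF_RB_NO_WAKEUP;
+    }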
+
+Design and implementation
+-------------------------
+The reserve/commit schema allows a natural way for multiple producers, either
+on different CPUs or even on the same CPU/in the same BPF program, to reserve
+independent records and work with them without blocking other producers. This
+means that if a BPF program is interrupted by another BPF program sharing the
+same ring buffer, they will both get a record reserved (provided there is
+enough space left) and can work with it and submit it independently. This
+applies to NMI context as well, except that due to using a spinlock during
+reservation, in NMI context bpf_ringbuf_reserve() might fail to get the lock,
+in which case the reservation will fail even if the ring buffer is not full.
+
+Internally, the ring buffer is implemented as a power-of-2 sized circular
+buffer, with two logical and ever-increasing counters (which might wrap
+around on 32-bit architectures; that's not a problem):
+  - the consumer counter shows up to which logical position the consumer has
+    consumed the data;
+  - the producer counter denotes the amount of data reserved by all producers.
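+
+As a worked example: with an 8KB data area (mask = 8191), consumer_pos =
+12288 and producer_pos = 16384 mean that 16384 - 12288 = 4096 bytes are
+reserved but not yet consumed, and the next record to consume starts at data
+offset 12288 & 8191 = 4096.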
+
+Each time a record is reserved, the producer that "owns" the record will
+successfully advance the producer counter. At that point, the data is not yet
+ready to be consumed, though. Each record has an 8-byte header, which
+contains the length of the reserved record, as well as two extra bits: a busy
+bit to denote that the record is still being worked on, and a discard bit,
+which might be set at commit time if the record is discarded. In the latter
+case, the consumer is supposed to skip the record and move on to the next
+one. The record header also encodes the record's relative offset from the
+beginning of the ring buffer data area (in pages). This allows
+bpf_ringbuf_commit()/bpf_ringbuf_discard() to accept only the pointer to the
+record itself, without also requiring a pointer to the ring buffer. The ring
+buffer memory location is restored from the record metadata header. This
+significantly simplifies the verifier and improves API usability.
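+
+In C terms, the 8-byte header looks like this (this mirrors the internal
+struct bpf_ringbuf_hdr from patch #1; the two flag bits are exposed as UAPI
+constants by this series):
+
+    struct bpf_ringbuf_hdr {
+        __u32 len;    /* record length; top 2 bits are busy/discard flags */
+        __u32 pg_off; /* page offset, used to restore ring buffer pointer */
+    };
+
+    #define BPF_RINGBUF_BUSY_BIT    (1U << 31)
+    #define BPF_RINGBUF_DISCARD_BIT (1U << 30)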
+
+Producer counter increments are serialized under a spinlock, so there is
+strict ordering between reservations. Commits, on the other hand, are
+completely lockless and independent. All records become available to the
+consumer in the order of reservation, but only after all previous records
+have been committed. It is thus possible for a slow producer to temporarily
+hold off records that were reserved later but already submitted.
+
+Reservation/commit/consumer protocol is verified by litmus tests in
+tools/memory-model/litmus-tests, see mpsc-rb*.litmus files.
+
+One interesting implementation bit that significantly simplifies (and thus
+speeds up) both producers and consumers is that the data area is mapped
+twice, contiguously back-to-back, in virtual memory. This makes it possible
+to avoid any special handling for samples that wrap around at the end of the
+circular buffer data area, because the page after the last data page is the
+first data page again, so the sample still appears completely contiguous in
+virtual memory. See the comment and a simple ASCII diagram showing this
+visually in bpf_ringbuf_area_alloc().
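+
+A hypothetical consumer-side illustration of the benefit (data points at the
+start of the data area, mask is the data area size minus 1):
+
+    /* Safe even when (cons_pos & mask) + record length crosses the end of
+     * the data area: the second mapping makes the record contiguous anyway.
+     */
+    void *sample = data + (cons_pos & mask) + 8 /* record header */;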
+
+Another feature that distinguishes BPF ringbuf from the perf ring buffer is
+self-pacing notifications of new data availability. The bpf_ringbuf_commit()
+implementation will send a notification of a new record being available after
+commit only if the consumer has already caught up right up to the record
+being committed. If not, the consumer still has to catch up and thus will see
+the new data anyway, without needing an extra poll notification. Benchmarks
+(see tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that this
+allows achieving very high throughput without having to resort to tricks like
+"notify only every Nth sample", which are necessary with perf buffer. For
+extreme cases, when a BPF program wants manual control of notifications, the
+commit/discard/output helpers accept the BPF_RB_NO_WAKEUP and
+BPF_RB_FORCE_WAKEUP flags, which give full control over notifications of data
+availability, but require extra caution and diligence when using this API.
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-17 19:57 ` [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it Andrii Nakryiko
@ 2020-05-19 12:57   ` kbuild test robot
  2020-05-19 23:53   ` kbuild test robot
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 33+ messages in thread
From: kbuild test robot @ 2020-05-19 12:57 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, netdev, ast, daniel
  Cc: kbuild-all, andrii.nakryiko, kernel-team, Andrii Nakryiko,
	Paul E . McKenney, Jonathan Lemon

[-- Attachment #1: Type: text/plain, Size: 3403 bytes --]

Hi Andrii,

I love your patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]
[also build test WARNING on next-20200518]
[cannot apply to bpf/master rcu/dev v5.7-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Andrii-Nakryiko/BPF-ring-buffer/20200518-040019
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: nds32-allmodconfig (attached as .config)
compiler: nds32le-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=nds32 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>, old ones prefixed by <<):

In file included from ./arch/nds32/include/generated/asm/bug.h:1,
from include/linux/bug.h:5,
from include/linux/thread_info.h:12,
from include/asm-generic/preempt.h:5,
from ./arch/nds32/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:78,
from include/linux/spinlock.h:51,
from include/linux/seqlock.h:36,
from include/linux/time.h:6,
from include/linux/ktime.h:24,
from include/linux/timer.h:6,
from include/linux/workqueue.h:9,
from include/linux/bpf.h:9,
from kernel/bpf/ringbuf.c:1:
include/linux/dma-mapping.h: In function 'dma_map_resource':
arch/nds32/include/asm/memory.h:82:32: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
82 | #define pfn_valid(pfn)  ((pfn) >= PHYS_PFN_OFFSET && (pfn) < (PHYS_PFN_OFFSET + max_mapnr))
|                                ^~
include/asm-generic/bug.h:139:27: note: in definition of macro 'WARN_ON_ONCE'
139 |  int __ret_warn_once = !!(condition);
    |                           ^~~~~~~~~
include/linux/dma-mapping.h:352:19: note: in expansion of macro 'pfn_valid'
352 |  if (WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr))))
|                   ^~~~~~~~~
kernel/bpf/ringbuf.c: In function 'bpf_ringbuf_alloc':
>> kernel/bpf/ringbuf.c:133:14: warning: comparison is always false due to limited range of data type [-Wtype-limits]
133 |  if (data_sz > RINGBUF_MAX_DATA_SZ)
|              ^

vim +133 kernel/bpf/ringbuf.c

   125	
   126	static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
   127	{
   128		struct bpf_ringbuf *rb;
   129	
   130		if (!data_sz || !PAGE_ALIGNED(data_sz))
   131			return ERR_PTR(-EINVAL);
   132	
 > 133		if (data_sz > RINGBUF_MAX_DATA_SZ)
   134			return ERR_PTR(-E2BIG);
   135	
   136		rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
   137		if (!rb)
   138			return ERR_PTR(-ENOMEM);
   139	
   140		spin_lock_init(&rb->spinlock);
   141		init_waitqueue_head(&rb->waitq);
   142		init_irq_work(&rb->work, bpf_ringbuf_notify);
   143	
   144		rb->mask = data_sz - 1;
   145		rb->consumer_pos = 0;
   146		rb->producer_pos = 0;
   147	
   148		return rb;
   149	}
   150	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 55899 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-17 19:57 ` [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it Andrii Nakryiko
  2020-05-19 12:57   ` kbuild test robot
@ 2020-05-19 23:53   ` kbuild test robot
  2020-05-22  0:25   ` Paul E. McKenney
  2020-05-22  1:07   ` Alexei Starovoitov
  3 siblings, 0 replies; 33+ messages in thread
From: kbuild test robot @ 2020-05-19 23:53 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, netdev, ast, daniel
  Cc: kbuild-all, clang-built-linux, andrii.nakryiko, kernel-team,
	Andrii Nakryiko, Paul E . McKenney, Jonathan Lemon

[-- Attachment #1: Type: text/plain, Size: 2498 bytes --]

Hi Andrii,

I love your patch! Perhaps something to improve:

[auto build test WARNING on bpf-next/master]
[also build test WARNING on next-20200519]
[cannot apply to bpf/master rcu/dev v5.7-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Andrii-Nakryiko/BPF-ring-buffer/20200518-040019
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: arm-randconfig-r006-20200519 (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project 135b877874fae96b4372c8a3fbfaa8ff44ff86e3)
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm cross compiling tool for clang build
        # apt-get install binutils-arm-linux-gnueabi
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> kernel/bpf/ringbuf.c:133:14: warning: result of comparison of constant 68719464448 with expression of type 'size_t' (aka 'unsigned int') is always false [-Wtautological-constant-out-of-range-compare]
if (data_sz > RINGBUF_MAX_DATA_SZ)
~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~
1 warning generated.

vim +133 kernel/bpf/ringbuf.c

   125	
   126	static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
   127	{
   128		struct bpf_ringbuf *rb;
   129	
   130		if (!data_sz || !PAGE_ALIGNED(data_sz))
   131			return ERR_PTR(-EINVAL);
   132	
 > 133		if (data_sz > RINGBUF_MAX_DATA_SZ)
   134			return ERR_PTR(-E2BIG);
   135	
   136		rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
   137		if (!rb)
   138			return ERR_PTR(-ENOMEM);
   139	
   140		spin_lock_init(&rb->spinlock);
   141		init_waitqueue_head(&rb->waitq);
   142		init_irq_work(&rb->work, bpf_ringbuf_notify);
   143	
   144		rb->mask = data_sz - 1;
   145		rb->consumer_pos = 0;
   146		rb->producer_pos = 0;
   147	
   148		return rb;
   149	}
   150	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 34885 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-17 19:57 ` [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it Andrii Nakryiko
  2020-05-19 12:57   ` kbuild test robot
  2020-05-19 23:53   ` kbuild test robot
@ 2020-05-22  0:25   ` Paul E. McKenney
  2020-05-22 18:46     ` Andrii Nakryiko
  2020-05-22  1:07   ` Alexei Starovoitov
  3 siblings, 1 reply; 33+ messages in thread
From: Paul E. McKenney @ 2020-05-22  0:25 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> This commit adds a new MPSC ring buffer implementation into the BPF
> ecosystem, which allows multiple CPUs to submit data to a single shared ring
> buffer. On the consumption side, only a single consumer is assumed.

[ . . . ]

Focusing just on the ring-buffer mechanism, with a question or two
below.  Looks pretty close, actually!

							Thanx, Paul

> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  include/linux/bpf.h            |  13 +
>  include/linux/bpf_types.h      |   1 +
>  include/linux/bpf_verifier.h   |   4 +
>  include/uapi/linux/bpf.h       |  84 +++++-
>  kernel/bpf/Makefile            |   2 +-
>  kernel/bpf/helpers.c           |  10 +
>  kernel/bpf/ringbuf.c           | 487 +++++++++++++++++++++++++++++++++
>  kernel/bpf/syscall.c           |  12 +
>  kernel/bpf/verifier.c          | 157 ++++++++---
>  kernel/trace/bpf_trace.c       |  10 +
>  tools/include/uapi/linux/bpf.h |  90 +++++-
>  11 files changed, 832 insertions(+), 38 deletions(-)
>  create mode 100644 kernel/bpf/ringbuf.c

[ . . . ]

> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> new file mode 100644
> index 000000000000..3c19f0f07726
> --- /dev/null
> +++ b/kernel/bpf/ringbuf.c
> @@ -0,0 +1,487 @@
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +#include <linux/err.h>
> +#include <linux/irq_work.h>
> +#include <linux/slab.h>
> +#include <linux/filter.h>
> +#include <linux/mm.h>
> +#include <linux/vmalloc.h>
> +#include <linux/wait.h>
> +#include <linux/poll.h>
> +#include <uapi/linux/btf.h>
> +
> +#define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
> +
> +/* non-mmap()'able part of bpf_ringbuf (everything up to consumer page) */
> +#define RINGBUF_PGOFF \
> +	(offsetof(struct bpf_ringbuf, consumer_pos) >> PAGE_SHIFT)
> +/* consumer page and producer page */
> +#define RINGBUF_POS_PAGES 2
> +
> +#define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4)
> +
> +/* Maximum size of ring buffer area is limited by 32-bit page offset within
> + * record header, counted in pages. Reserve 8 bits for extensibility, and take
> + * into account few extra pages for consumer/producer pages and
> + * non-mmap()'able parts. This gives 64GB limit, which seems plenty for single
> + * ring buffer.
> + */
> +#define RINGBUF_MAX_DATA_SZ \
> +	(((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE)
> +
> +struct bpf_ringbuf {
> +	wait_queue_head_t waitq;
> +	struct irq_work work;
> +	u64 mask;
> +	spinlock_t spinlock ____cacheline_aligned_in_smp;
> +	/* Consumer and producer counters are put into separate pages to allow
> +	 * mapping consumer page as r/w, but restrict producer page to r/o.
> +	 * This protects producer position from being modified by user-space
> +	 * application and ruining in-kernel position tracking.
> +	 */
> +	unsigned long consumer_pos __aligned(PAGE_SIZE);
> +	unsigned long producer_pos __aligned(PAGE_SIZE);
> +	char data[] __aligned(PAGE_SIZE);
> +};
> +
> +struct bpf_ringbuf_map {
> +	struct bpf_map map;
> +	struct bpf_map_memory memory;
> +	struct bpf_ringbuf *rb;
> +};
> +
> +/* 8-byte ring buffer record header structure */
> +struct bpf_ringbuf_hdr {
> +	u32 len;
> +	u32 pg_off;
> +};
> +
> +static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
> +{
> +	const gfp_t flags = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
> +			    __GFP_ZERO;
> +	int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
> +	int nr_data_pages = data_sz >> PAGE_SHIFT;
> +	int nr_pages = nr_meta_pages + nr_data_pages;
> +	struct page **pages, *page;
> +	size_t array_size;
> +	void *addr;
> +	int i;
> +
> +	/* Each data page is mapped twice to allow "virtual"
> +	 * continuous read of samples wrapping around the end of ring
> +	 * buffer area:
> +	 * ------------------------------------------------------
> +	 * | meta pages |  real data pages  |  same data pages  |
> +	 * ------------------------------------------------------
> +	 * |            | 1 2 3 4 5 6 7 8 9 | 1 2 3 4 5 6 7 8 9 |
> +	 * ------------------------------------------------------
> +	 * |            | TA             DA | TA             DA |
> +	 * ------------------------------------------------------
> +	 *                               ^^^^^^^
> +	 *                                  |
> +	 * Here, no need to worry about special handling of wrapped-around
> +	 * data due to double-mapped data pages. This works both in kernel and
> +	 * when mmap()'ed in user-space, simplifying both kernel and
> +	 * user-space implementations significantly.
> +	 */
> +	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
> +	if (array_size > PAGE_SIZE)
> +		pages = vmalloc_node(array_size, numa_node);
> +	else
> +		pages = kmalloc_node(array_size, flags, numa_node);
> +	if (!pages)
> +		return NULL;
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		page = alloc_pages_node(numa_node, flags, 0);
> +		if (!page) {
> +			nr_pages = i;
> +			goto err_free_pages;
> +		}
> +		pages[i] = page;
> +		if (i >= nr_meta_pages)
> +			pages[nr_data_pages + i] = page;
> +	}
> +
> +	addr = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
> +		    VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
> +	if (addr)
> +		return addr;
> +
> +err_free_pages:
> +	for (i = 0; i < nr_pages; i++)
> +		free_page((unsigned long)pages[i]);
> +	kvfree(pages);
> +	return NULL;
> +}
> +
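
As an aside, an illustration of what the double mapping buys the consumer:
a record that wraps past the end of the data area can still be copied with
a single contiguous read. A minimal sketch, where out and len are
hypothetical consumer-side locals:

	void *rec = rb->data + (cons_pos & rb->mask) + BPF_RINGBUF_HDR_SZ;

	/* rec + len may extend past rb->data + (rb->mask + 1), but the
	 * second mapping of the same pages keeps the bytes contiguous
	 * in virtual memory, so no split copy at the wrap point
	 */
	memcpy(out, rec, len);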
> +static void bpf_ringbuf_notify(struct irq_work *work)
> +{
> +	struct bpf_ringbuf *rb = container_of(work, struct bpf_ringbuf, work);
> +
> +	wake_up_all(&rb->waitq);
> +}
> +
> +static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
> +{
> +	struct bpf_ringbuf *rb;
> +
> +	if (!data_sz || !PAGE_ALIGNED(data_sz))
> +		return ERR_PTR(-EINVAL);
> +
> +	if (data_sz > RINGBUF_MAX_DATA_SZ)
> +		return ERR_PTR(-E2BIG);
> +
> +	rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
> +	if (!rb)
> +		return ERR_PTR(-ENOMEM);
> +
> +	spin_lock_init(&rb->spinlock);
> +	init_waitqueue_head(&rb->waitq);
> +	init_irq_work(&rb->work, bpf_ringbuf_notify);
> +
> +	rb->mask = data_sz - 1;
> +	rb->consumer_pos = 0;
> +	rb->producer_pos = 0;
> +
> +	return rb;
> +}
> +
> +static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
> +{
> +	struct bpf_ringbuf_map *rb_map;
> +	u64 cost;
> +	int err;
> +
> +	if (attr->map_flags & ~RINGBUF_CREATE_FLAG_MASK)
> +		return ERR_PTR(-EINVAL);
> +
> +	if (attr->key_size || attr->value_size ||
> +	    attr->max_entries == 0 || !PAGE_ALIGNED(attr->max_entries))
> +		return ERR_PTR(-EINVAL);
> +
> +	rb_map = kzalloc(sizeof(*rb_map), GFP_USER);
> +	if (!rb_map)
> +		return ERR_PTR(-ENOMEM);
> +
> +	bpf_map_init_from_attr(&rb_map->map, attr);
> +
> +	cost = sizeof(struct bpf_ringbuf_map) +
> +	       sizeof(struct bpf_ringbuf) +
> +	       attr->max_entries;
> +	err = bpf_map_charge_init(&rb_map->map.memory, cost);
> +	if (err)
> +		goto err_free_map;
> +
> +	rb_map->rb = bpf_ringbuf_alloc(attr->max_entries, rb_map->map.numa_node);
> +	if (IS_ERR(rb_map->rb)) {
> +		err = PTR_ERR(rb_map->rb);
> +		goto err_uncharge;
> +	}
> +
> +	return &rb_map->map;
> +
> +err_uncharge:
> +	bpf_map_charge_finish(&rb_map->map.memory);
> +err_free_map:
> +	kfree(rb_map);
> +	return ERR_PTR(err);
> +}
> +
> +static void bpf_ringbuf_free(struct bpf_ringbuf *ringbuf)
> +{
> +	kvfree(ringbuf);
> +}
> +
> +static void ringbuf_map_free(struct bpf_map *map)
> +{
> +	struct bpf_ringbuf_map *rb_map;
> +
> +	/* at this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
> +	 * so the programs (can be more than one that used this map) were
> +	 * disconnected from events. Wait for outstanding critical sections in
> +	 * these programs to complete
> +	 */
> +	synchronize_rcu();
> +
> +	rb_map = container_of(map, struct bpf_ringbuf_map, map);
> +	bpf_ringbuf_free(rb_map->rb);
> +	kfree(rb_map);
> +}
> +
> +static void *ringbuf_map_lookup_elem(struct bpf_map *map, void *key)
> +{
> +	return ERR_PTR(-ENOTSUPP);
> +}
> +
> +static int ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value,
> +				   u64 flags)
> +{
> +	return -ENOTSUPP;
> +}
> +
> +static int ringbuf_map_delete_elem(struct bpf_map *map, void *key)
> +{
> +	return -ENOTSUPP;
> +}
> +
> +static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
> +				    void *next_key)
> +{
> +	return -ENOTSUPP;
> +}
> +
> +static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
> +{
> +	size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
> +
> +	/* consumer page + producer page + 2 x data pages */
> +	return RINGBUF_POS_PAGES + 2 * data_pages;
> +}
> +
> +static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
> +{
> +	struct bpf_ringbuf_map *rb_map;
> +	size_t mmap_sz;
> +
> +	rb_map = container_of(map, struct bpf_ringbuf_map, map);
> +	mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
> +
> +	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
> +		return -EINVAL;
> +
> +	return remap_vmalloc_range(vma, rb_map->rb,
> +				   vma->vm_pgoff + RINGBUF_PGOFF);
> +}
> +
> +static unsigned long ringbuf_avail_data_sz(struct bpf_ringbuf *rb)
> +{
> +	unsigned long cons_pos, prod_pos;
> +
> +	cons_pos = smp_load_acquire(&rb->consumer_pos);

What happens if there is a delay here?  (The delay might be due to
interrupts, preemption in PREEMPT=y kernels, vCPU preemption, ...)

If this is called from a producer holding the lock, then only
->consumer_pos can change, and that can only decrease the amount of
data available.  Besides which, ->consumer_pos is sampled first.
But why would a producer care how much data was queued, as opposed
to how much free space was available?

From the consumer, only ->producer_pos can change, and that can only
increase the amount of data available.  (Assuming that producers cannot
erase old data on wrap-around before the consumer consumes it.)

So probably nothing bad happens.

On the bit about the producer holding the lock, some lockdep assertions
might make things easier on your future self.
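
Concretely, something as small as the following at the top of any helper
that assumes the producer lock (sketch only):

	lockdep_assert_held(&rb->spinlock);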

> +	prod_pos = smp_load_acquire(&rb->producer_pos);
> +	return prod_pos - cons_pos;
> +}
> +
> +static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
> +				 struct poll_table_struct *pts)
> +{
> +	struct bpf_ringbuf_map *rb_map;
> +
> +	rb_map = container_of(map, struct bpf_ringbuf_map, map);
> +	poll_wait(filp, &rb_map->rb->waitq, pts);
> +
> +	if (ringbuf_avail_data_sz(rb_map->rb))
> +		return EPOLLIN | EPOLLRDNORM;
> +	return 0;
> +}
> +
> +const struct bpf_map_ops ringbuf_map_ops = {
> +	.map_alloc = ringbuf_map_alloc,
> +	.map_free = ringbuf_map_free,
> +	.map_mmap = ringbuf_map_mmap,
> +	.map_poll = ringbuf_map_poll,
> +	.map_lookup_elem = ringbuf_map_lookup_elem,
> +	.map_update_elem = ringbuf_map_update_elem,
> +	.map_delete_elem = ringbuf_map_delete_elem,
> +	.map_get_next_key = ringbuf_map_get_next_key,
> +};
> +
> +/* Given pointer to ring buffer record metadata and struct bpf_ringbuf itself,
> + * calculate offset from record metadata to ring buffer in pages, rounded
> + * down. This page offset is stored as part of record metadata and allows to
> + * restore struct bpf_ringbuf * from record pointer. This page offset is
> + * stored at offset 4 of record metadata header.
> + */
> +static size_t bpf_ringbuf_rec_pg_off(struct bpf_ringbuf *rb,
> +				     struct bpf_ringbuf_hdr *hdr)
> +{
> +	return ((void *)hdr - (void *)rb) >> PAGE_SHIFT;
> +}
> +
> +/* Given pointer to ring buffer record header, restore pointer to struct
> + * bpf_ringbuf itself by using page offset stored at offset 4
> + */
> +static struct bpf_ringbuf *
> +bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
> +{
> +	unsigned long addr = (unsigned long)(void *)hdr;
> +	unsigned long off = (unsigned long)hdr->pg_off << PAGE_SHIFT;
> +
> +	return (void*)((addr & PAGE_MASK) - off);
> +}
> +
> +static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
> +{
> +	unsigned long cons_pos, prod_pos, new_prod_pos, flags;
> +	u32 len, pg_off;
> +	struct bpf_ringbuf_hdr *hdr;
> +
> +	if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
> +		return NULL;
> +
> +	len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
> +	cons_pos = smp_load_acquire(&rb->consumer_pos);

There might be a longish delay acquiring the spinlock, which could mean
that cons_pos was out of date, which might result in an unnecessary
producer-side failure.  Why not pick up cons_pos after the lock is
acquired?  After all, it is in the same cache line as the lock, so this
should have negligible effect on lock-hold time.

(Unless you had either really small cachelines or really big locks.)
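
In sketch form, the reordering suggested above, with the rest of
__bpf_ringbuf_reserve() unchanged:

	if (in_nmi()) {
		if (!spin_trylock_irqsave(&rb->spinlock, flags))
			return NULL;
	} else {
		spin_lock_irqsave(&rb->spinlock, flags);
	}

	/* sampled under the lock: as fresh as it can be by the time
	 * the out-of-space check below runs
	 */
	cons_pos = smp_load_acquire(&rb->consumer_pos);
	prod_pos = rb->producer_pos;
	new_prod_pos = prod_pos + len;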

> +	if (in_nmi()) {
> +		if (!spin_trylock_irqsave(&rb->spinlock, flags))
> +			return NULL;
> +	} else {
> +		spin_lock_irqsave(&rb->spinlock, flags);
> +	}
> +
> +	prod_pos = rb->producer_pos;
> +	new_prod_pos = prod_pos + len;
> +
> +	/* check for out of ringbuf space by ensuring producer position
> +	 * doesn't advance more than (ringbuf_size - 1) ahead
> +	 */
> +	if (new_prod_pos - cons_pos > rb->mask) {
> +		spin_unlock_irqrestore(&rb->spinlock, flags);
> +		return NULL;
> +	}
> +
> +	hdr = (void *)rb->data + (prod_pos & rb->mask);
> +	pg_off = bpf_ringbuf_rec_pg_off(rb, hdr);
> +	hdr->len = size | BPF_RINGBUF_BUSY_BIT;
> +	hdr->pg_off = pg_off;
> +
> +	/* ensure header is written before updating producer positions */
> +	smp_wmb();

The smp_store_release() makes this unnecessary with respect to
->producer_pos.  So what later write is it also ordering against?
If none, this smp_wmb() can go away.

And if the later write is the xchg() in bpf_ringbuf_commit(), the
xchg() implies full barriers before and after, so that the smp_wmb()
could still go away.

So other than the smp_store_release() and the xchg(), what later write
is the smp_wmb() ordering against?

> +	/* pairs with consumer's smp_load_acquire() */
> +	smp_store_release(&rb->producer_pos, new_prod_pos);
> +
> +	spin_unlock_irqrestore(&rb->spinlock, flags);
> +
> +	return (void *)hdr + BPF_RINGBUF_HDR_SZ;
> +}
> +
> +BPF_CALL_3(bpf_ringbuf_reserve, struct bpf_map *, map, u64, size, u64, flags)
> +{
> +	struct bpf_ringbuf_map *rb_map;
> +
> +	if (unlikely(flags))
> +		return 0;
> +
> +	rb_map = container_of(map, struct bpf_ringbuf_map, map);
> +	return (unsigned long)__bpf_ringbuf_reserve(rb_map->rb, size);
> +}
> +
> +const struct bpf_func_proto bpf_ringbuf_reserve_proto = {
> +	.func		= bpf_ringbuf_reserve,
> +	.ret_type	= RET_PTR_TO_ALLOC_MEM_OR_NULL,
> +	.arg1_type	= ARG_CONST_MAP_PTR,
> +	.arg2_type	= ARG_CONST_ALLOC_SIZE_OR_ZERO,
> +	.arg3_type	= ARG_ANYTHING,
> +};
> +
> +static void bpf_ringbuf_commit(void *sample, u64 flags, bool discard)
> +{
> +	unsigned long rec_pos, cons_pos;
> +	struct bpf_ringbuf_hdr *hdr;
> +	struct bpf_ringbuf *rb;
> +	u32 new_len;
> +
> +	hdr = sample - BPF_RINGBUF_HDR_SZ;
> +	rb = bpf_ringbuf_restore_from_rec(hdr);
> +	new_len = hdr->len ^ BPF_RINGBUF_BUSY_BIT;
> +	if (discard)
> +		new_len |= BPF_RINGBUF_DISCARD_BIT;
> +
> +	/* update record header with correct final size prefix */
> +	xchg(&hdr->len, new_len);
> +
> +	/* if consumer caught up and is waiting for our record, notify about
> +	 * new data availability
> +	 */
> +	rec_pos = (void *)hdr - (void *)rb->data;
> +	cons_pos = smp_load_acquire(&rb->consumer_pos) & rb->mask;
> +
> +	if (flags & BPF_RB_FORCE_WAKEUP)
> +		irq_work_queue(&rb->work);
> +	else if (cons_pos == rec_pos && !(flags & BPF_RB_NO_WAKEUP))
> +		irq_work_queue(&rb->work);
> +}


* Re: [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests
  2020-05-17 19:57 ` [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests Andrii Nakryiko
@ 2020-05-22  0:34   ` Paul E. McKenney
  2020-05-22 18:51     ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Paul E. McKenney @ 2020-05-22  0:34 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:22PM -0700, Andrii Nakryiko wrote:
> Add 4 litmus tests for BPF ringbuf implementation, divided into two different
> use cases.
> 
> First, two unbounded cases, one with 1 producer and another with
> 2 producers, single consumer. All reservations are supposed to succeed.
> 
> Second, a bounded case with only 1 record allowed in the ring buffer at any
> given time. Here failures to reserve space are expected. Again, 1- and
> 2-producer cases, single consumer, are validated.
> 
> Just for the fun of it, I also wrote a 3-producer case; it took *16 hours* to
> validate, but came back successful as well. I'm not including it in this
> patch, because it's not practical to run. See output below for all 4 included
> cases and for the 3-producer bounded case.
> 
> Each litmus test implements the producer/consumer protocol for the BPF ring
> buffer implementation found in kernel/bpf/ringbuf.c. Due to limitations, all
> records are assumed equal-sized and producer/consumer counters are incremented
> by 1. This doesn't change the correctness of the algorithm, though.

Very cool!!!

However, these should go into Documentation/litmus-tests/bpf-rb or similar.
Please take a look at Documentation/litmus-tests/ in -rcu, -tip, and
-next, including the README file.

The tools/memory-model/litmus-tests directory is for basic examples,
not for the more complex real-world ones like these guys.  ;-)

								Thanx, Paul

> Verification results:
> /* 1p1c bounded case */
> $ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+1p1c+bounded.litmus
> Test mpsc-rb+1p1c+bounded Allowed
> States 2
> 0:rFail=0; 1:rFail=0; cx=0; dropped=0; len1=1; px=1;
> 0:rFail=0; 1:rFail=0; cx=1; dropped=0; len1=1; px=1;
> Ok
> Witnesses
> Positive: 3 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ dropped=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
> Observation mpsc-rb+1p1c+bounded Always 3 0
> Time mpsc-rb+1p1c+bounded 0.03
> Hash=5bdad0f41557a641370e7fa6b8eb2f43
> 
> /* 2p1c bounded case */
> $ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+2p1c+bounded.litmus
> Test mpsc-rb+2p1c+bounded Allowed
> States 4
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=0; dropped=1; len1=1; px=1;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=1; dropped=0; len1=1; px=2;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=1; dropped=1; len1=1; px=1;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=2; dropped=0; len1=1; px=2;
> Ok
> Witnesses
> Positive: 22 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ len1=1 /\ (dropped=0 /\ px=2 /\ (cx=1 \/ cx=2) \/ dropped=1 /\ px=1 /\ (cx=0 \/ cx=1)))
> Observation mpsc-rb+2p1c+bounded Always 22 0
> Time mpsc-rb+2p1c+bounded 119.38
> Hash=e2f8f442a02bf7d8c2988ba82cf002d2
> 
> /* 1p1c unbounded case */
> $ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+1p1c+unbound.litmus
> Test mpsc-rb+1p1c+unbound Allowed
> States 2
> 0:rFail=0; 1:rFail=0; cx=0; len1=1; px=1;
> 0:rFail=0; 1:rFail=0; cx=1; len1=1; px=1;
> Ok
> Witnesses
> Positive: 3 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
> Observation mpsc-rb+1p1c+unbound Always 3 0
> Time mpsc-rb+1p1c+unbound 0.02
> Hash=be9de6487d8e27c3d37802d122e4a87c
> 
> /* 2p1c unbounded case */
> $ herd7 -unroll 0 -conf linux-kernel.cfg litmus-tests/mpsc-rb+2p1c+unbound.litmus
> Test mpsc-rb+2p1c+unbound Allowed
> States 3
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=0; len1=1; len2=1; px=2;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=1; len1=1; len2=1; px=2;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; cx=2; len1=1; len2=1; px=2;
> Ok
> Witnesses
> Positive: 42 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ px=2 /\ len1=1 /\ len2=1 /\ (cx=0 \/ cx=1 \/ cx=2))
> Observation mpsc-rb+2p1c+unbound Always 42 0
> Time mpsc-rb+2p1c+unbound 39.19
> Hash=f0352aba9bdc03dd0b1def7d0c4956fa
> 
> /* 3p1c bounded case */
> $ herd7 -unroll 0 -conf linux-kernel.cfg mpsc-rb+3p1c+bounded.litmus
> Test mpsc+ringbuf-spinlock Allowed
> States 5
> 0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=0; len1=1; len2=1; px=2;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=1; len1=1; len2=1; px=2;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=1; len1=1; len2=1; px=3;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=2; len1=1; len2=1; px=2;
> 0:rFail=0; 1:rFail=0; 2:rFail=0; 3:rFail=0; cx=2; len1=1; len2=1; px=3;
> Ok
> Witnesses
> Positive: 558 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ 3:rFail=0 /\ len1=1 /\ len2=1 /\ (px=2 /\ (cx=0 \/ cx=1 \/ cx=2) \/ px=3 /\ (cx=1 \/ cx=2)))
> Observation mpsc+ringbuf-spinlock Always 558 0
> Time mpsc+ringbuf-spinlock 57487.24
> Hash=133977dba930d167b4e1b4a6923d5687
> 
> Cc: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  .../litmus-tests/mpsc-rb+1p1c+bounded.litmus  |  92 +++++++++++
>  .../litmus-tests/mpsc-rb+1p1c+unbound.litmus  |  83 ++++++++++
>  .../litmus-tests/mpsc-rb+2p1c+bounded.litmus  | 152 ++++++++++++++++++
>  .../litmus-tests/mpsc-rb+2p1c+unbound.litmus  | 137 ++++++++++++++++
>  4 files changed, 464 insertions(+)
>  create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
>  create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
>  create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
>  create mode 100644 tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus
> 
> diff --git a/tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
> new file mode 100644
> index 000000000000..cafd17afe11e
> --- /dev/null
> +++ b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+bounded.litmus
> @@ -0,0 +1,92 @@
> +C mpsc-rb+1p1c+bounded
> +
> +(*
> + * Result: Always
> + *
> + * This litmus test validates BPF ring buffer implementation under the
> + * following assumptions:
> + * - 1 producer;
> + * - 1 consumer;
> + * - ring buffer has capacity for only 1 record.
> + *
> + * Expectations:
> + * - 1 record pushed into ring buffer;
> + * - 0 or 1 element is consumed.
> + * - no failures.
> + *)
> +
> +{
> +	max_len = 1;
> +	len1 = 0;
> +	px = 0;
> +	cx = 0;
> +	dropped = 0;
> +}
> +
> +P0(int *len1, int *cx, int *px)
> +{
> +	int *rLenPtr;
> +	int rLen;
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +}
> +
> +P1(int *len1, spinlock_t *rb_lock, int *px, int *cx, int *dropped, int *max_len)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx - rCx >= *max_len) {
> +		atomic_inc(dropped);
> +		spin_unlock(rb_lock);
> +	} else {
> +		if (rPx == 0)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		*rLenPtr = -1;
> +		smp_wmb();
> +		smp_store_release(px, rPx + 1);
> +
> +		spin_unlock(rb_lock);
> +
> +		smp_store_release(rLenPtr, 1);
> +	}
> +}
> +
> +exists (
> +	0:rFail=0 /\ 1:rFail=0
> +	/\
> +	(
> +		(dropped=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
> +	)
> +)
> diff --git a/tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
> new file mode 100644
> index 000000000000..84f660598015
> --- /dev/null
> +++ b/tools/memory-model/litmus-tests/mpsc-rb+1p1c+unbound.litmus
> @@ -0,0 +1,83 @@
> +C mpsc-rb+1p1c+unbound
> +
> +(*
> + * Result: Always
> + *
> + * This litmus test validates BPF ring buffer implementation under the
> + * following assumptions:
> + * - 1 producer;
> + * - 1 consumer;
> + * - ring buffer capacity is unbounded.
> + *
> + * Expectations:
> + * - 1 record pushed into ring buffer;
> + * - 0 or 1 element is consumed.
> + * - no failures.
> + *)
> +
> +{
> +	len1 = 0;
> +	px = 0;
> +	cx = 0;
> +}
> +
> +P0(int *len1, int *cx, int *px)
> +{
> +	int *rLenPtr;
> +	int rLen;
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +}
> +
> +P1(int *len1, spinlock_t *rb_lock, int *px, int *cx)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx == 0)
> +		rLenPtr = len1;
> +	else
> +		rFail = 1;
> +
> +	*rLenPtr = -1;
> +	smp_wmb();
> +	smp_store_release(px, rPx + 1);
> +
> +	spin_unlock(rb_lock);
> +
> +	smp_store_release(rLenPtr, 1);
> +}
> +
> +exists (
> +	0:rFail=0 /\ 1:rFail=0
> +	/\ px=1 /\ len1=1
> +	/\ (cx=0 \/ cx=1)
> +)
> diff --git a/tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
> new file mode 100644
> index 000000000000..900104c4933b
> --- /dev/null
> +++ b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+bounded.litmus
> @@ -0,0 +1,152 @@
> +C mpsc-rb+2p1c+bounded
> +
> +(*
> + * Result: Always
> + *
> + * This litmus test validates BPF ring buffer implementation under the
> + * following assumptions:
> + * - 2 identical producers;
> + * - 1 consumer;
> + * - ring buffer has capacity for only 1 record.
> + *
> + * Expectations:
> + * - either 1 or 2 records are pushed into ring buffer;
> + * - 0, 1, or 2 elements are consumed by consumer;
> + * - appropriate number of dropped records is recorded to satisfy ring buffer
> + *   size bounds;
> + * - no failures.
> + *)
> +
> +{
> +	max_len = 1;
> +	len1 = 0;
> +	px = 0;
> +	cx = 0;
> +	dropped = 0;
> +}
> +
> +P0(int *len1, int *cx, int *px)
> +{
> +	int *rLenPtr;
> +	int rLen;
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0)
> +			rLenPtr = len1;
> +		else if (rCx == 1)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0)
> +			rLenPtr = len1;
> +		else if (rCx == 1)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +}
> +
> +P1(int *len1, spinlock_t *rb_lock, int *px, int *cx, int *dropped, int *max_len)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx - rCx >= *max_len) {
> +		atomic_inc(dropped);
> +		spin_unlock(rb_lock);
> +	} else {
> +		if (rPx == 0)
> +			rLenPtr = len1;
> +		else if (rPx == 1)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		*rLenPtr = -1;
> +		smp_wmb();
> +		smp_store_release(px, rPx + 1);
> +
> +		spin_unlock(rb_lock);
> +
> +		smp_store_release(rLenPtr, 1);
> +	}
> +}
> +
> +P2(int *len1, spinlock_t *rb_lock, int *px, int *cx, int *dropped, int *max_len)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx - rCx >= *max_len) {
> +		atomic_inc(dropped);
> +		spin_unlock(rb_lock);
> +	} else {
> +		if (rPx == 0)
> +			rLenPtr = len1;
> +		else if (rPx == 1)
> +			rLenPtr = len1;
> +		else
> +			rFail = 1;
> +
> +		*rLenPtr = -1;
> +		smp_wmb();
> +		smp_store_release(px, rPx + 1);
> +
> +		spin_unlock(rb_lock);
> +
> +		smp_store_release(rLenPtr, 1);
> +	}
> +}
> +
> +exists (
> +	0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0 /\ len1=1
> +	/\
> +	(
> +		(dropped = 0 /\ px=2 /\ (cx=1 \/ cx=2))
> +		\/
> +		(dropped = 1 /\ px=1 /\ (cx=0 \/ cx=1))
> +	)
> +)
> diff --git a/tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus
> new file mode 100644
> index 000000000000..83372e9eb079
> --- /dev/null
> +++ b/tools/memory-model/litmus-tests/mpsc-rb+2p1c+unbound.litmus
> @@ -0,0 +1,137 @@
> +C mpsc-rb+2p1c+unbound
> +
> +(*
> + * Result: Always
> + *
> + * This litmus test validates BPF ring buffer implementation under the
> + * following assumptions:
> + * - 2 identical producers;
> + * - 1 consumer;
> + * - ring buffer capacity is unbounded.
> + *
> + * Expectations:
> + * - 2 records pushed into ring buffer;
> + * - 0, 1, or 2 elements are consumed.
> + * - no failures.
> + *)
> +
> +{
> +	len1 = 0;
> +	len2 = 0;
> +	px = 0;
> +	cx = 0;
> +}
> +
> +P0(int *len1, int *len2, int *cx, int *px)
> +{
> +	int *rLenPtr;
> +	int rLen;
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0)
> +			rLenPtr = len1;
> +		else if (rCx == 1)
> +			rLenPtr = len2;
> +		else
> +			rFail = 1;
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0)
> +			rLenPtr = len1;
> +		else if (rCx == 1)
> +			rLenPtr = len2;
> +		else
> +			rFail = 1;
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +}
> +
> +P1(int *len1, int *len2, spinlock_t *rb_lock, int *px, int *cx)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx == 0)
> +		rLenPtr = len1;
> +	else if (rPx == 1)
> +		rLenPtr = len2;
> +	else
> +		rFail = 1;
> +
> +	*rLenPtr = -1;
> +	smp_wmb();
> +	smp_store_release(px, rPx + 1);
> +
> +	spin_unlock(rb_lock);
> +
> +	smp_store_release(rLenPtr, 1);
> +}
> +
> +P2(int *len1, int *len2, spinlock_t *rb_lock, int *px, int *cx)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rCx = smp_load_acquire(cx);
> +
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx == 0)
> +		rLenPtr = len1;
> +	else if (rPx == 1)
> +		rLenPtr = len2;
> +	else
> +		rFail = 1;
> +
> +	*rLenPtr = -1;
> +	smp_wmb();
> +	smp_store_release(px, rPx + 1);
> +
> +	spin_unlock(rb_lock);
> +
> +	smp_store_release(rLenPtr, 1);
> +}
> +
> +exists (
> +	0:rFail=0 /\ 1:rFail=0 /\ 2:rFail=0
> +	/\
> +	px=2 /\ len1=1 /\ len2=1
> +	/\
> +	(cx=0 \/ cx=1 \/ cx=2)
> +)
> -- 
> 2.24.1
> 


* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-17 19:57 ` [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it Andrii Nakryiko
                     ` (2 preceding siblings ...)
  2020-05-22  0:25   ` Paul E. McKenney
@ 2020-05-22  1:07   ` Alexei Starovoitov
  2020-05-22 18:48     ` Andrii Nakryiko
  2020-05-25 20:34     ` Andrii Nakryiko
  3 siblings, 2 replies; 33+ messages in thread
From: Alexei Starovoitov @ 2020-05-22  1:07 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> -	if (off < 0 || size < 0 || (size == 0 && !zero_size_allowed) ||
> -	    off + size > map->value_size) {
> -		verbose(env, "invalid access to map value, value_size=%d off=%d size=%d\n",
> -			map->value_size, off, size);
> -		return -EACCES;
> -	}
> -	return 0;
> +	if (off >= 0 && size_ok && off + size <= mem_size)
> +		return 0;
> +
> +	verbose(env, "invalid access to memory, mem_size=%u off=%d size=%d\n",
> +		mem_size, off, size);
> +	return -EACCES;

iirc invalid access to map value is one of the most common verifier errors that
people see when they use unbounded access. Generalizing it to memory is
technically correct, but it makes the message harder to decipher.
What is 'mem_size'? Without context it is difficult to guess that
it's actually the size of the map value element.
Could you make this error message more human friendly depending on
the type of pointer?

>  	if (err) {
> -		verbose(env, "R%d min value is outside of the array range\n",
> +		verbose(env, "R%d min value is outside of the memory region\n",
>  			regno);
>  		return err;
>  	}
> @@ -2518,18 +2527,38 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
>  	 * If reg->umax_value + off could overflow, treat that as unbounded too.
>  	 */
>  	if (reg->umax_value >= BPF_MAX_VAR_OFF) {
> -		verbose(env, "R%d unbounded memory access, make sure to bounds check any array access into a map\n",
> +		verbose(env, "R%d unbounded memory access, make sure to bounds check any memory region access\n",
>  			regno);
>  		return -EACCES;
>  	}
> -	err = __check_map_access(env, regno, reg->umax_value + off, size,
> +	err = __check_mem_access(env, reg->umax_value + off, size, mem_size,
>  				 zero_size_allowed);
> -	if (err)
> -		verbose(env, "R%d max value is outside of the array range\n",
> +	if (err) {
> +		verbose(env, "R%d max value is outside of the memory region\n",
>  			regno);

I'm not that worried about the above three generalizations of errors,
but if you can make them friendlier by describing the type of memory
region, I think it will be a plus.
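
A sketch of the kind of specialization being asked for, assuming the
register/pointer type is plumbed into __check_mem_access():

	switch (reg->type) {
	case PTR_TO_MAP_VALUE:
		verbose(env, "invalid access to map value, value_size=%u off=%d size=%d\n",
			mem_size, off, size);
		break;
	default:
		verbose(env, "invalid access to memory, mem_size=%u off=%d size=%d\n",
			mem_size, off, size);
		break;
	}
	return -EACCES;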


* Re: [PATCH v2 bpf-next 3/7] bpf: track reference type in verifier
  2020-05-17 19:57 ` [PATCH v2 bpf-next 3/7] bpf: track reference type in verifier Andrii Nakryiko
@ 2020-05-22  1:13   ` Alexei Starovoitov
  2020-05-22 18:53     ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Alexei Starovoitov @ 2020-05-22  1:13 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:23PM -0700, Andrii Nakryiko wrote:
>  
> +static enum bpf_ref_type get_release_ref_type(enum bpf_func_id func_id)
> +{
> +	switch (func_id) {
> +	case BPF_FUNC_sk_release:
> +		return BPF_REF_SOCKET;
> +	case BPF_FUNC_ringbuf_submit:
> +	case BPF_FUNC_ringbuf_discard:
> +		return BPF_REF_RINGBUF;
> +	default:
> +		return BPF_REF_INVALID;
> +	}
> +}
> +
>  static bool may_be_acquire_function(enum bpf_func_id func_id)
>  {
>  	return func_id == BPF_FUNC_sk_lookup_tcp ||
> @@ -464,6 +477,28 @@ static bool is_acquire_function(enum bpf_func_id func_id,
>  	return false;
>  }
>  
> +static enum bpf_ref_type get_acquire_ref_type(enum bpf_func_id func_id,
> +					      const struct bpf_map *map)
> +{
> +	enum bpf_map_type map_type = map ? map->map_type : BPF_MAP_TYPE_UNSPEC;
> +
> +	switch (func_id) {
> +	case BPF_FUNC_sk_lookup_tcp:
> +	case BPF_FUNC_sk_lookup_udp:
> +	case BPF_FUNC_skc_lookup_tcp:
> +		return BPF_REF_SOCKET;
> +	case BPF_FUNC_map_lookup_elem:
> +		if (map_type == BPF_MAP_TYPE_SOCKMAP ||
> +		    map_type == BPF_MAP_TYPE_SOCKHASH)
> +			return BPF_REF_SOCKET;
> +		return BPF_REF_INVALID;
> +	case BPF_FUNC_ringbuf_reserve:
> +		return BPF_REF_RINGBUF;
> +	default:
> +		return BPF_REF_INVALID;
> +	}
> +}

Two switch() stmts to convert helpers to ref types are kinda hacky.
I think get_release_ref_type() could have gotten the ref's type as a btf_id
from its arguments, which would have made it truly generic and made 'enum
bpf_ref_type' unnecessary, but sk_lookup_* helpers return u64 and don't have
a btf_id for the return value. I think we'd better fix that. Then the btf_id
would be that ref type.


* Re: [PATCH v2 bpf-next 4/7] libbpf: add BPF ring buffer support
  2020-05-17 19:57 ` [PATCH v2 bpf-next 4/7] libbpf: add BPF ring buffer support Andrii Nakryiko
@ 2020-05-22  1:15   ` Alexei Starovoitov
  2020-05-22 18:56     ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Alexei Starovoitov @ 2020-05-22  1:15 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:24PM -0700, Andrii Nakryiko wrote:
> +
> +static inline int roundup_len(__u32 len)
> +{
> +	/* clear out top 2 bits */
> +	len <<= 2;
> +	len >>= 2;

what is this for?
Overflow prevention?
But the kernel checked the size already?

> +	/* add length prefix */
> +	len += BPF_RINGBUF_HDR_SZ;
> +	/* round up to 8 byte alignment */
> +	return (len + 7) / 8 * 8;
> +}
> +
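
For context: the top two bits of len carry the BPF_RINGBUF_BUSY_BIT and
BPF_RINGBUF_DISCARD_BIT flags from the UAPI header added in patch #1, not an
overflow guard. A sketch of an equivalent, more explicit spelling:

	static inline int roundup_len(__u32 len)
	{
		/* mask off the BUSY/DISCARD flag bits kept in the top 2 bits */
		len &= ~(BPF_RINGBUF_BUSY_BIT | BPF_RINGBUF_DISCARD_BIT);
		/* add the 8-byte length prefix */
		len += BPF_RINGBUF_HDR_SZ;
		/* records are padded to 8-byte alignment */
		return (len + 7) / 8 * 8;
	}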


* Re: [PATCH v2 bpf-next 5/7] selftests/bpf: add BPF ringbuf selftests
  2020-05-17 19:57 ` [PATCH v2 bpf-next 5/7] selftests/bpf: add BPF ringbuf selftests Andrii Nakryiko
@ 2020-05-22  1:20   ` Alexei Starovoitov
  2020-05-22 18:58     ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Alexei Starovoitov @ 2020-05-22  1:20 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:25PM -0700, Andrii Nakryiko wrote:
> diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> new file mode 100644
> index 000000000000..7eb85dd9cd66
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> @@ -0,0 +1,77 @@
> +// SPDX-License-Identifier: GPL-2.0
> +// Copyright (c) 2019 Facebook

oops ;)

> +
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +struct sample {
> +	int pid;
> +	int seq;
> +	long value;
> +	char comm[16];
> +};
> +
> +struct ringbuf_map {
> +	__uint(type, BPF_MAP_TYPE_RINGBUF);
> +	__uint(max_entries, 1 << 12);
> +} ringbuf1 SEC(".maps"),
> +  ringbuf2 SEC(".maps");
> +
> +struct {
> +	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
> +	__uint(max_entries, 4);
> +	__type(key, int);
> +	__array(values, struct ringbuf_map);
> +} ringbuf_arr SEC(".maps") = {
> +	.values = {
> +		[0] = &ringbuf1,
> +		[2] = &ringbuf2,
> +	},
> +};

the tests look great. Very easy to understand the usage model.


* Re: [PATCH v2 bpf-next 6/7] bpf: add BPF ringbuf and perf buffer benchmarks
  2020-05-17 19:57 ` [PATCH v2 bpf-next 6/7] bpf: add BPF ringbuf and perf buffer benchmarks Andrii Nakryiko
@ 2020-05-22  1:21   ` Alexei Starovoitov
  2020-05-22 19:07     ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Alexei Starovoitov @ 2020-05-22  1:21 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon

On Sun, May 17, 2020 at 12:57:26PM -0700, Andrii Nakryiko wrote:
> +
> +static inline int roundup_len(__u32 len)
> +{
> +	/* clear out top 2 bits */
> +	len <<= 2;
> +	len >>= 2;
> +	/* add length prefix */
> +	len += RINGBUF_META_LEN;
> +	/* round up to 8 byte alignment */
> +	return (len + 7) / 8 * 8;
> +}

the same roundup_len again?

> +
> +static void ringbuf_custom_process_ring(struct ringbuf_custom *r)
> +{
> +	unsigned long cons_pos, prod_pos;
> +	int *len_ptr, len;
> +	bool got_new_data;
> +
> +	cons_pos = smp_load_acquire(r->consumer_pos);
> +	while (true) {
> +		got_new_data = false;
> +		prod_pos = smp_load_acquire(r->producer_pos);
> +		while (cons_pos < prod_pos) {
> +			len_ptr = r->data + (cons_pos & r->mask);
> +			len = smp_load_acquire(len_ptr);
> +
> +			/* sample not committed yet, bail out for now */
> +			if (len & RINGBUF_BUSY_BIT)
> +				return;
> +
> +			got_new_data = true;
> +			cons_pos += roundup_len(len);
> +
> +			atomic_inc(&buf_hits.value);
> +		}
> +		if (got_new_data)
> +			smp_store_release(r->consumer_pos, cons_pos);
> +		else
> +			break;
> +	};
> +}

copy paste from libbpf? why?


* Re: [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes
  2020-05-17 19:57 ` [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes Andrii Nakryiko
@ 2020-05-22  1:23   ` Alexei Starovoitov
  2020-05-22 19:08     ` Andrii Nakryiko
  2020-05-25  9:59   ` Alban Crequy
  1 sibling, 1 reply; 33+ messages in thread
From: Alexei Starovoitov @ 2020-05-22  1:23 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon, Stanislav Fomichev

On Sun, May 17, 2020 at 12:57:27PM -0700, Andrii Nakryiko wrote:
> Add the commit description from patch #1 as stand-alone documentation under
> Documentation/bpf, as it might be a more convenient format in the long term.
> 
> Suggested-by: Stanislav Fomichev <sdf@google.com>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  Documentation/bpf/ringbuf.txt | 191 ++++++++++++++++++++++++++++++++++

Thanks for the doc. Looks great, but could you make it .rst from the start?
Otherwise somebody will soon be converting it, adding churn.


* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-22  0:25   ` Paul E. McKenney
@ 2020-05-22 18:46     ` Andrii Nakryiko
  2020-05-25 16:01       ` Paul E. McKenney
  0 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 18:46 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Jonathan Lemon

On Thu, May 21, 2020 at 5:25 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> > This commit adds a new MPSC ring buffer implementation into the BPF ecosystem,
> > which allows multiple CPUs to submit data to a single shared ring buffer. On
> > the consumption side, only a single consumer is assumed.
>
> [ . . . ]
>
> Focusing just on the ring-buffer mechanism, with a question or two
> below.  Looks pretty close, actually!
>
>                                                         Thanx, Paul
>

Thanks for the review, Paul!

> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  include/linux/bpf.h            |  13 +
> >  include/linux/bpf_types.h      |   1 +
> >  include/linux/bpf_verifier.h   |   4 +
> >  include/uapi/linux/bpf.h       |  84 +++++-
> >  kernel/bpf/Makefile            |   2 +-
> >  kernel/bpf/helpers.c           |  10 +
> >  kernel/bpf/ringbuf.c           | 487 +++++++++++++++++++++++++++++++++
> >  kernel/bpf/syscall.c           |  12 +
> >  kernel/bpf/verifier.c          | 157 ++++++++---
> >  kernel/trace/bpf_trace.c       |  10 +
> >  tools/include/uapi/linux/bpf.h |  90 +++++-
> >  11 files changed, 832 insertions(+), 38 deletions(-)
> >  create mode 100644 kernel/bpf/ringbuf.c
>
> [ . . . ]
>
> > diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> > new file mode 100644
> > index 000000000000..3c19f0f07726
> > --- /dev/null
> > +++ b/kernel/bpf/ringbuf.c
> > @@ -0,0 +1,487 @@
> > +#include <linux/bpf.h>
> > +#include <linux/btf.h>
> > +#include <linux/err.h>
> > +#include <linux/irq_work.h>
> > +#include <linux/slab.h>
> > +#include <linux/filter.h>
> > +#include <linux/mm.h>
> > +#include <linux/vmalloc.h>
> > +#include <linux/wait.h>
> > +#include <linux/poll.h>
> > +#include <uapi/linux/btf.h>
> > +
> > +#define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
> > +
> > +/* non-mmap()'able part of bpf_ringbuf (everything up to consumer page) */
> > +#define RINGBUF_PGOFF \
> > +     (offsetof(struct bpf_ringbuf, consumer_pos) >> PAGE_SHIFT)
> > +/* consumer page and producer page */
> > +#define RINGBUF_POS_PAGES 2
> > +
> > +#define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4)
> > +
> > +/* Maximum size of ring buffer area is limited by 32-bit page offset within
> > + * record header, counted in pages. Reserve 8 bits for extensibility, and take
> > + * into account few extra pages for consumer/producer pages and
> > + * non-mmap()'able parts. This gives 64GB limit, which seems plenty for single
> > + * ring buffer.
> > + */
> > +#define RINGBUF_MAX_DATA_SZ \
> > +     (((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE)
> > +
> > +struct bpf_ringbuf {
> > +     wait_queue_head_t waitq;
> > +     struct irq_work work;
> > +     u64 mask;
> > +     spinlock_t spinlock ____cacheline_aligned_in_smp;
> > +     /* Consumer and producer counters are put into separate pages to allow
> > +      * mapping consumer page as r/w, but restrict producer page to r/o.
> > +      * This protects producer position from being modified by user-space
> > +      * application and ruining in-kernel position tracking.
> > +      */
> > +     unsigned long consumer_pos __aligned(PAGE_SIZE);
> > +     unsigned long producer_pos __aligned(PAGE_SIZE);
> > +     char data[] __aligned(PAGE_SIZE);
> > +};
> > +
> > +struct bpf_ringbuf_map {
> > +     struct bpf_map map;
> > +     struct bpf_map_memory memory;
> > +     struct bpf_ringbuf *rb;
> > +};
> > +
> > +/* 8-byte ring buffer record header structure */
> > +struct bpf_ringbuf_hdr {
> > +     u32 len;
> > +     u32 pg_off;
> > +};
> > +
> > +static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
> > +{
> > +     const gfp_t flags = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
> > +                         __GFP_ZERO;
> > +     int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
> > +     int nr_data_pages = data_sz >> PAGE_SHIFT;
> > +     int nr_pages = nr_meta_pages + nr_data_pages;
> > +     struct page **pages, *page;
> > +     size_t array_size;
> > +     void *addr;
> > +     int i;
> > +
> > +     /* Each data page is mapped twice to allow "virtual"
> > +      * continuous read of samples wrapping around the end of ring
> > +      * buffer area:
> > +      * ------------------------------------------------------
> > +      * | meta pages |  real data pages  |  same data pages  |
> > +      * ------------------------------------------------------
> > +      * |            | 1 2 3 4 5 6 7 8 9 | 1 2 3 4 5 6 7 8 9 |
> > +      * ------------------------------------------------------
> > +      * |            | TA             DA | TA             DA |
> > +      * ------------------------------------------------------
> > +      *                               ^^^^^^^
> > +      *                                  |
> > +      * Here, no need to worry about special handling of wrapped-around
> > +      * data due to double-mapped data pages. This works both in kernel and
> > +      * when mmap()'ed in user-space, simplifying both kernel and
> > +      * user-space implementations significantly.
> > +      */
> > +     array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
> > +     if (array_size > PAGE_SIZE)
> > +             pages = vmalloc_node(array_size, numa_node);
> > +     else
> > +             pages = kmalloc_node(array_size, flags, numa_node);
> > +     if (!pages)
> > +             return NULL;
> > +
> > +     for (i = 0; i < nr_pages; i++) {
> > +             page = alloc_pages_node(numa_node, flags, 0);
> > +             if (!page) {
> > +                     nr_pages = i;
> > +                     goto err_free_pages;
> > +             }
> > +             pages[i] = page;
> > +             if (i >= nr_meta_pages)
> > +                     pages[nr_data_pages + i] = page;
> > +     }
> > +
> > +     addr = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
> > +                 VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
> > +     if (addr)
> > +             return addr;
> > +
> > +err_free_pages:
> > +     for (i = 0; i < nr_pages; i++)
> > +             free_page((unsigned long)pages[i]);
> > +     kvfree(pages);
> > +     return NULL;
> > +}
> > +
> > +static void bpf_ringbuf_notify(struct irq_work *work)
> > +{
> > +     struct bpf_ringbuf *rb = container_of(work, struct bpf_ringbuf, work);
> > +
> > +     wake_up_all(&rb->waitq);
> > +}
> > +
> > +static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
> > +{
> > +     struct bpf_ringbuf *rb;
> > +
> > +     if (!data_sz || !PAGE_ALIGNED(data_sz))
> > +             return ERR_PTR(-EINVAL);
> > +
> > +     if (data_sz > RINGBUF_MAX_DATA_SZ)
> > +             return ERR_PTR(-E2BIG);
> > +
> > +     rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
> > +     if (!rb)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     spin_lock_init(&rb->spinlock);
> > +     init_waitqueue_head(&rb->waitq);
> > +     init_irq_work(&rb->work, bpf_ringbuf_notify);
> > +
> > +     rb->mask = data_sz - 1;
> > +     rb->consumer_pos = 0;
> > +     rb->producer_pos = 0;
> > +
> > +     return rb;
> > +}
> > +
> > +static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
> > +{
> > +     struct bpf_ringbuf_map *rb_map;
> > +     u64 cost;
> > +     int err;
> > +
> > +     if (attr->map_flags & ~RINGBUF_CREATE_FLAG_MASK)
> > +             return ERR_PTR(-EINVAL);
> > +
> > +     if (attr->key_size || attr->value_size ||
> > +         attr->max_entries == 0 || !PAGE_ALIGNED(attr->max_entries))
> > +             return ERR_PTR(-EINVAL);
> > +
> > +     rb_map = kzalloc(sizeof(*rb_map), GFP_USER);
> > +     if (!rb_map)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     bpf_map_init_from_attr(&rb_map->map, attr);
> > +
> > +     cost = sizeof(struct bpf_ringbuf_map) +
> > +            sizeof(struct bpf_ringbuf) +
> > +            attr->max_entries;
> > +     err = bpf_map_charge_init(&rb_map->map.memory, cost);
> > +     if (err)
> > +             goto err_free_map;
> > +
> > +     rb_map->rb = bpf_ringbuf_alloc(attr->max_entries, rb_map->map.numa_node);
> > +     if (IS_ERR(rb_map->rb)) {
> > +             err = PTR_ERR(rb_map->rb);
> > +             goto err_uncharge;
> > +     }
> > +
> > +     return &rb_map->map;
> > +
> > +err_uncharge:
> > +     bpf_map_charge_finish(&rb_map->map.memory);
> > +err_free_map:
> > +     kfree(rb_map);
> > +     return ERR_PTR(err);
> > +}
> > +
> > +static void bpf_ringbuf_free(struct bpf_ringbuf *ringbuf)
> > +{
> > +     kvfree(ringbuf);
> > +}
> > +
> > +static void ringbuf_map_free(struct bpf_map *map)
> > +{
> > +     struct bpf_ringbuf_map *rb_map;
> > +
> > +     /* at this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
> > +      * so the programs (can be more than one that used this map) were
> > +      * disconnected from events. Wait for outstanding critical sections in
> > +      * these programs to complete
> > +      */
> > +     synchronize_rcu();
> > +
> > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > +     bpf_ringbuf_free(rb_map->rb);
> > +     kfree(rb_map);
> > +}
> > +
> > +static void *ringbuf_map_lookup_elem(struct bpf_map *map, void *key)
> > +{
> > +     return ERR_PTR(-ENOTSUPP);
> > +}
> > +
> > +static int ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value,
> > +                                u64 flags)
> > +{
> > +     return -ENOTSUPP;
> > +}
> > +
> > +static int ringbuf_map_delete_elem(struct bpf_map *map, void *key)
> > +{
> > +     return -ENOTSUPP;
> > +}
> > +
> > +static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
> > +                                 void *next_key)
> > +{
> > +     return -ENOTSUPP;
> > +}
> > +
> > +static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
> > +{
> > +     size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
> > +
> > +     /* consumer page + producer page + 2 x data pages */
> > +     return RINGBUF_POS_PAGES + 2 * data_pages;
> > +}
> > +
> > +static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
> > +{
> > +     struct bpf_ringbuf_map *rb_map;
> > +     size_t mmap_sz;
> > +
> > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > +     mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
> > +
> > +     if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
> > +             return -EINVAL;
> > +
> > +     return remap_vmalloc_range(vma, rb_map->rb,
> > +                                vma->vm_pgoff + RINGBUF_PGOFF);
> > +}
> > +
> > +static unsigned long ringbuf_avail_data_sz(struct bpf_ringbuf *rb)
> > +{
> > +     unsigned long cons_pos, prod_pos;
> > +
> > +     cons_pos = smp_load_acquire(&rb->consumer_pos);
>
> What happens if there is a delay here?  (The delay might be due to
> interrupts, preemption in PREEMPT=y kernels, vCPU preemption, ...)
>
> If this is called from a producer holding the lock, then only
> ->consumer_pos can change, and that can only decrease the amount of
> data available.  Besides which, ->consumer_pos is sampled first.
> But why would a producer care how much data was queued, as opposed
> to how much free space was available?

see the second point below; it's for the BPF program to implement a custom
notification policy/heuristic

>
> From the consumer, only ->producer_pos can change, and that can only
> increase the amount of data available.  (Assuming that producers cannot
> erase old data on wrap-around before the consumer consumes it.)

right, they can't overwrite data

>
> So probably nothing bad happens.
>
> On the bit about the producer holding the lock, some lockdep assertions
> might make things easier on your future self.

So this function is currently called in two situations, neither of
which holds the spinlock. I'm fully aware this is a racy way to do
this, but it's ok for the applications.

The first place is during the initial poll "subscription", when we need
to figure out if there is already some unconsumed data in the ring
buffer, to let the poll/epoll subsystem know whether the caller can
read without waiting. If this is done from the consumer thread, it's
race-free, because the consumer position won't change: either we'll see
that producer > consumer, or, if we race with a producer, that producer
will see that the consumer pos == the to-be-committed record pos and
will send a notification. If epoll happens from a non-consumer thread,
it's racy in the same way as the perf buffer's poll subscription
implementation, but the subscriber will get a notification with the
next record anyway.

The second place is from the BPF side (so a potential producer), but
outside of the producer lock. It's called from the bpf_ringbuf_query()
helper, which is supposed to return various potentially stale
properties of the ringbuf (e.g., consumer/producer position, amount of
unconsumed data, etc.). The use case here is for a BPF program to
implement its own flexible poll notification strategy (or whatever else
a creative programmer will come up with, of course). E.g., batched
notification, which happens not on every record, but only once enough
data has accumulated. In this case the exact amount of unconsumed data
is not critical; it's only a heuristic. And it's more convenient than
the amount of space left free, IMO, but either can be calculated from
the other, provided the total size of the ring buffer is known (which
can be returned by the bpf_ringbuf_query() helper as well).

So I think in both cases I don't want to take the spinlock. If this is
not done under the spinlock, then I have to read the consumer pos first
and the producer pos second; otherwise I could end up with
consumer_pos > producer_pos (due to delays), which is really bad.

It's a bit of a wordy explanation, but I hope this makes sense.
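
A minimal sketch of such a batched-notification policy on the BPF side,
assuming the bpf_ringbuf_query()/BPF_RB_* names from this series; the 64KB
threshold and fill_sample() are made up for illustration:

	struct sample *s = bpf_ringbuf_reserve(&ringbuf, sizeof(*s), 0);

	if (!s)
		return 0;
	fill_sample(s);
	/* skip the wakeup until enough unconsumed data accumulates,
	 * then force one
	 */
	if (bpf_ringbuf_query(&ringbuf, BPF_RB_AVAIL_DATA) >= 64 * 1024)
		bpf_ringbuf_submit(s, BPF_RB_FORCE_WAKEUP);
	else
		bpf_ringbuf_submit(s, BPF_RB_NO_WAKEUP);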

>
> > +     prod_pos = smp_load_acquire(&rb->producer_pos);
> > +     return prod_pos - cons_pos;
> > +}
> > +
> > +static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
> > +                              struct poll_table_struct *pts)
> > +{
> > +     struct bpf_ringbuf_map *rb_map;
> > +
> > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > +     poll_wait(filp, &rb_map->rb->waitq, pts);
> > +
> > +     if (ringbuf_avail_data_sz(rb_map->rb))
> > +             return EPOLLIN | EPOLLRDNORM;
> > +     return 0;
> > +}
> > +
> > +const struct bpf_map_ops ringbuf_map_ops = {
> > +     .map_alloc = ringbuf_map_alloc,
> > +     .map_free = ringbuf_map_free,
> > +     .map_mmap = ringbuf_map_mmap,
> > +     .map_poll = ringbuf_map_poll,
> > +     .map_lookup_elem = ringbuf_map_lookup_elem,
> > +     .map_update_elem = ringbuf_map_update_elem,
> > +     .map_delete_elem = ringbuf_map_delete_elem,
> > +     .map_get_next_key = ringbuf_map_get_next_key,
> > +};
> > +
> > +/* Given pointer to ring buffer record metadata and struct bpf_ringbuf itself,
> > + * calculate offset from record metadata to ring buffer in pages, rounded
> > + * down. This page offset is stored as part of record metadata and allows to
> > + * restore struct bpf_ringbuf * from record pointer. This page offset is
> > + * stored at offset 4 of record metadata header.
> > + */
> > +static size_t bpf_ringbuf_rec_pg_off(struct bpf_ringbuf *rb,
> > +                                  struct bpf_ringbuf_hdr *hdr)
> > +{
> > +     return ((void *)hdr - (void *)rb) >> PAGE_SHIFT;
> > +}
> > +
> > +/* Given pointer to ring buffer record header, restore pointer to struct
> > + * bpf_ringbuf itself by using page offset stored at offset 4
> > + */
> > +static struct bpf_ringbuf *
> > +bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
> > +{
> > +     unsigned long addr = (unsigned long)(void *)hdr;
> > +     unsigned long off = (unsigned long)hdr->pg_off << PAGE_SHIFT;
> > +
> > +     return (void*)((addr & PAGE_MASK) - off);
> > +}
> > +
> > +static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
> > +{
> > +     unsigned long cons_pos, prod_pos, new_prod_pos, flags;
> > +     u32 len, pg_off;
> > +     struct bpf_ringbuf_hdr *hdr;
> > +
> > +     if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
> > +             return NULL;
> > +
> > +     len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
> > +     cons_pos = smp_load_acquire(&rb->consumer_pos);
>
> There might be a longish delay acquiring the spinlock, which could mean
> that cons_pos was out of date, which might result in an unnecessary
> producer-side failure.  Why not pick up cons_pos after the lock is
> acquired?  After all, it is in the same cache line as the lock, so this
> should have negligible effect on lock-hold time.

Right, the goal was to minimize the amount of time the spinlock is held.

The spinlock and consumer_pos are certainly not on the same cache line;
consumer_pos is deliberately on a different *page* altogether (due to
mmap() and to avoid unnecessary sharing between consumers, who don't
touch the spinlock, and producers, who do use the lock to coordinate).

So I chose to potentially drop a sample due to a slightly stale consumer
position in exchange for a shorter locked section.

>
> (Unless you had either really small cachelines or really big locks.
>
> > +     if (in_nmi()) {
> > +             if (!spin_trylock_irqsave(&rb->spinlock, flags))
> > +                     return NULL;
> > +     } else {
> > +             spin_lock_irqsave(&rb->spinlock, flags);
> > +     }
> > +
> > +     prod_pos = rb->producer_pos;
> > +     new_prod_pos = prod_pos + len;
> > +
> > +     /* check for out of ringbuf space by ensuring producer position
> > +      * doesn't advance more than (ringbuf_size - 1) ahead
> > +      */
> > +     if (new_prod_pos - cons_pos > rb->mask) {
> > +             spin_unlock_irqrestore(&rb->spinlock, flags);
> > +             return NULL;
> > +     }
> > +
> > +     hdr = (void *)rb->data + (prod_pos & rb->mask);
> > +     pg_off = bpf_ringbuf_rec_pg_off(rb, hdr);
> > +     hdr->len = size | BPF_RINGBUF_BUSY_BIT;
> > +     hdr->pg_off = pg_off;
> > +
> > +     /* ensure header is written before updating producer positions */
> > +     smp_wmb();
>
> The smp_store_release() makes this unnecessary with respect to
> ->producer_pos.  So what later write is it also ordering against?
> If none, this smp_wmb() can go away.
>
> And if the later write is the xchg() in bpf_ringbuf_commit(), the
> xchg() implies full barriers before and after, so that the smp_wmb()
> could still go away.
>
> So other than the smp_store_release() and the xchg(), what later write
> is the smp_wmb() ordering against?

I *think* my intent here was to make sure that when the consumer reads
producer_pos, it sees hdr->len with the busy bit set. But I also think
that you are right and smp_store_release(producer_pos) ensures that if
the consumer does smp_load_acquire(producer_pos) it will see up-to-date
hdr->len, for a given value of producer_pos. So yeah, I think I should
drop smp_wmb(). I dropped it in the litmus tests, and they still pass,
which gives me a bit more confidence as well. :)

I say "I *think*", because this algorithm started off completely
lockless, without the producer spinlock, so maybe I needed this barrier
for something there, but I honestly don't remember the details of the
lockless implementation by now.

So in summary, yeah, I'll drop smp_wmb().
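
For the record, the pairing that remains after dropping it, distilled
into a minimal sketch (names follow the patch above; locking,
wrap-around, and error handling are elided):

/* producer, under rb->spinlock */
hdr->len = size | BPF_RINGBUF_BUSY_BIT;
hdr->pg_off = pg_off;
/* pairs with the consumer's smp_load_acquire(): all of the header
 * writes above are visible by the time the new producer_pos value
 * is observed
 */
smp_store_release(&rb->producer_pos, new_prod_pos);

/* consumer */
prod_pos = smp_load_acquire(&rb->producer_pos);
while (cons_pos < prod_pos) {
	hdr = (void *)rb->data + (cons_pos & rb->mask);
	/* pairs with the xchg() in bpf_ringbuf_commit() */
	len = smp_load_acquire(&hdr->len);
	if (len & BPF_RINGBUF_BUSY_BIT)
		break; /* reserved but not yet committed */
	/* ... consume the record, then advance cons_pos ... */
}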

>
> > +     /* pairs with consumer's smp_load_acquire() */
> > +     smp_store_release(&rb->producer_pos, new_prod_pos);
> > +
> > +     spin_unlock_irqrestore(&rb->spinlock, flags);
> > +
> > +     return (void *)hdr + BPF_RINGBUF_HDR_SZ;
> > +}
> > +
> > +BPF_CALL_3(bpf_ringbuf_reserve, struct bpf_map *, map, u64, size, u64, flags)
> > +{
> > +     struct bpf_ringbuf_map *rb_map;
> > +
> > +     if (unlikely(flags))
> > +             return 0;
> > +
> > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > +     return (unsigned long)__bpf_ringbuf_reserve(rb_map->rb, size);
> > +}
> > +
> > +const struct bpf_func_proto bpf_ringbuf_reserve_proto = {
> > +     .func           = bpf_ringbuf_reserve,
> > +     .ret_type       = RET_PTR_TO_ALLOC_MEM_OR_NULL,
> > +     .arg1_type      = ARG_CONST_MAP_PTR,
> > +     .arg2_type      = ARG_CONST_ALLOC_SIZE_OR_ZERO,
> > +     .arg3_type      = ARG_ANYTHING,
> > +};
> > +
> > +static void bpf_ringbuf_commit(void *sample, u64 flags, bool discard)
> > +{
> > +     unsigned long rec_pos, cons_pos;
> > +     struct bpf_ringbuf_hdr *hdr;
> > +     struct bpf_ringbuf *rb;
> > +     u32 new_len;
> > +
> > +     hdr = sample - BPF_RINGBUF_HDR_SZ;
> > +     rb = bpf_ringbuf_restore_from_rec(hdr);
> > +     new_len = hdr->len ^ BPF_RINGBUF_BUSY_BIT;
> > +     if (discard)
> > +             new_len |= BPF_RINGBUF_DISCARD_BIT;
> > +
> > +     /* update record header with correct final size prefix */
> > +     xchg(&hdr->len, new_len);
> > +
> > +     /* if consumer caught up and is waiting for our record, notify about
> > +      * new data availability
> > +      */
> > +     rec_pos = (void *)hdr - (void *)rb->data;
> > +     cons_pos = smp_load_acquire(&rb->consumer_pos) & rb->mask;
> > +
> > +     if (flags & BPF_RB_FORCE_WAKEUP)
> > +             irq_work_queue(&rb->work);
> > +     else if (cons_pos == rec_pos && !(flags & BPF_RB_NO_WAKEUP))
> > +             irq_work_queue(&rb->work);
> > +}

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-22  1:07   ` Alexei Starovoitov
@ 2020-05-22 18:48     ` Andrii Nakryiko
  2020-05-25 20:34     ` Andrii Nakryiko
  1 sibling, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 18:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon

On Thu, May 21, 2020 at 6:07 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> > -     if (off < 0 || size < 0 || (size == 0 && !zero_size_allowed) ||
> > -         off + size > map->value_size) {
> > -             verbose(env, "invalid access to map value, value_size=%d off=%d size=%d\n",
> > -                     map->value_size, off, size);
> > -             return -EACCES;
> > -     }
> > -     return 0;
> > +     if (off >= 0 && size_ok && off + size <= mem_size)
> > +             return 0;
> > +
> > +     verbose(env, "invalid access to memory, mem_size=%u off=%d size=%d\n",
> > +             mem_size, off, size);
> > +     return -EACCES;
>
> iirc invalid access to map value is one of the most common verifier errors
> that people see when they use unbounded access. Generalizing it to memory is
> technically correct, but it makes the message harder to decipher.
> What is 'mem_size' ? Without context it is difficult to guess that
> it's actually size of map value element.
> Could you make this error message more human friendly depending on
> type of pointer?

yep, sure, better verifier errors are extremely important, I think

>
> >       if (err) {
> > -             verbose(env, "R%d min value is outside of the array range\n",
> > +             verbose(env, "R%d min value is outside of the memory region\n",
> >                       regno);
> >               return err;
> >       }
> > @@ -2518,18 +2527,38 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
> >        * If reg->umax_value + off could overflow, treat that as unbounded too.
> >        */
> >       if (reg->umax_value >= BPF_MAX_VAR_OFF) {
> > -             verbose(env, "R%d unbounded memory access, make sure to bounds check any array access into a map\n",
> > +             verbose(env, "R%d unbounded memory access, make sure to bounds check any memory region access\n",
> >                       regno);
> >               return -EACCES;
> >       }
> > -     err = __check_map_access(env, regno, reg->umax_value + off, size,
> > +     err = __check_mem_access(env, reg->umax_value + off, size, mem_size,
> >                                zero_size_allowed);
> > -     if (err)
> > -             verbose(env, "R%d max value is outside of the array range\n",
> > +     if (err) {
> > +             verbose(env, "R%d max value is outside of the memory region\n",
> >                       regno);
>
> I'm not that worried about above three generalizations of errors,
> but if you can make it friendly by describing type of memory region
> I think it will be a plus.

I agree, will update

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests
  2020-05-22  0:34   ` Paul E. McKenney
@ 2020-05-22 18:51     ` Andrii Nakryiko
  2020-05-25 23:33       ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 18:51 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Jonathan Lemon

On Thu, May 21, 2020 at 5:34 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Sun, May 17, 2020 at 12:57:22PM -0700, Andrii Nakryiko wrote:
> > Add 4 litmus tests for BPF ringbuf implementation, divided into two different
> > use cases.
> >
> > First, two unbounded cases, one with 1 producer and another with
> > 2 producers, single consumer. All reservations are supposed to succeed.
> >
> > Second, bounded case with only 1 record allowed in ring buffer at any given
> > time. Here failures to reserve space are expected. Again, 1- and 2- producer
> > cases, single consumer, are validated.
> >
> > Just for the fun of it, I also wrote a 3-producer case; it took *16 hours* to
> > validate, but came back successful as well. I'm not including it in this
> > patch, because it's not practical to run it. See output for all included
> > 4 cases and one 3-producer one with bounded use case.
> >
> > Each litmus test implements the producer/consumer protocol for the BPF ring buffer
> > implementation found in kernel/bpf/ringbuf.c. Due to limitations, all records
> > are assumed equal-sized and producer/consumer counters are incremented by 1.
> > This doesn't change the correctness of the algorithm, though.
>
> Very cool!!!
>
> However, these should go into Documentation/litmus-tests/bpf-rb or similar.
> Please take a look at Documentation/litmus-tests/ in -rcu, -tip, and
> -next, including the README file.
>
> The tools/memory-model/litmus-tests directory is for basic examples,
> not for the more complex real-world ones like these guys.  ;-)

Oh, ok, I didn't realize there were more litmus tests under
Documentation/litmus-tests... That might have saved me some time (more
examples to learn from!) when I was writing mine :) Will check those
out and move everything.

>
>                                                                 Thanx, Paul
>

[...]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 3/7] bpf: track reference type in verifier
  2020-05-22  1:13   ` Alexei Starovoitov
@ 2020-05-22 18:53     ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 18:53 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon

On Thu, May 21, 2020 at 6:13 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:23PM -0700, Andrii Nakryiko wrote:
> >
> > +static enum bpf_ref_type get_release_ref_type(enum bpf_func_id func_id)
> > +{
> > +     switch (func_id) {
> > +     case BPF_FUNC_sk_release:
> > +             return BPF_REF_SOCKET;
> > +     case BPF_FUNC_ringbuf_submit:
> > +     case BPF_FUNC_ringbuf_discard:
> > +             return BPF_REF_RINGBUF;
> > +     default:
> > +             return BPF_REF_INVALID;
> > +     }
> > +}
> > +
> >  static bool may_be_acquire_function(enum bpf_func_id func_id)
> >  {
> >       return func_id == BPF_FUNC_sk_lookup_tcp ||
> > @@ -464,6 +477,28 @@ static bool is_acquire_function(enum bpf_func_id func_id,
> >       return false;
> >  }
> >
> > +static enum bpf_ref_type get_acquire_ref_type(enum bpf_func_id func_id,
> > +                                           const struct bpf_map *map)
> > +{
> > +     enum bpf_map_type map_type = map ? map->map_type : BPF_MAP_TYPE_UNSPEC;
> > +
> > +     switch (func_id) {
> > +     case BPF_FUNC_sk_lookup_tcp:
> > +     case BPF_FUNC_sk_lookup_udp:
> > +     case BPF_FUNC_skc_lookup_tcp:
> > +             return BPF_REF_SOCKET;
> > +     case BPF_FUNC_map_lookup_elem:
> > +             if (map_type == BPF_MAP_TYPE_SOCKMAP ||
> > +                 map_type == BPF_MAP_TYPE_SOCKHASH)
> > +                     return BPF_REF_SOCKET;
> > +             return BPF_REF_INVALID;
> > +     case BPF_FUNC_ringbuf_reserve:
> > +             return BPF_REF_RINGBUF;
> > +     default:
> > +             return BPF_REF_INVALID;
> > +     }
> > +}
>
> Two switch() stmts to convert helpers to REF is kinda hacky.
> I think get_release_ref_type() could have got the ref's type as btf_id from its
> arguments which would have made it truly generic and 'enum bpf_ref_type' would
> be unnecessary, but sk_lookup_* helpers return u64 and don't have btf_id of
> return value. I think we better fix that. Then btf_id would be that ref type.

True, sure. I'll drop this patch in the next version, because everything
works without it. Then we can generalize this eventually.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 4/7] libbpf: add BPF ring buffer support
  2020-05-22  1:15   ` Alexei Starovoitov
@ 2020-05-22 18:56     ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 18:56 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon

On Thu, May 21, 2020 at 6:15 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:24PM -0700, Andrii Nakryiko wrote:
> > +
> > +static inline int roundup_len(__u32 len)
> > +{
> > +     /* clear out top 2 bits */
> > +     len <<= 2;
> > +     len >>= 2;
>
> what this is for?
> Overflow prevention?
> but kernel checked the size already?

No, as the comment above says, it's just clearing the upper two bits
(busy bit and discard bit). The busy bit should be 0 at this point, but
the discard bit could be set for a discarded record. I could have
written this equivalently as:

len = len & ~BPF_RINGBUF_DISCARD_BIT;

(note that BPF_RINGBUF_DISCARD_BIT is a mask, not a bit number). But I
think the compiler won't optimize that into two bit shifts, since it
has to assume bit #31 might be set.
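
Spelled out, a defensive sketch of the helper that masks out both top
bits would look like this (a sketch only, using the BPF_RINGBUF_* UAPI
mask constants added in this series):

static inline int roundup_len(__u32 len)
{
	/* clear out the busy and discard bits occupying the top 2 bits
	 * of the length prefix
	 */
	len &= ~(BPF_RINGBUF_BUSY_BIT | BPF_RINGBUF_DISCARD_BIT);
	/* add length prefix */
	len += BPF_RINGBUF_HDR_SZ;
	/* round up to 8 byte alignment */
	return (len + 7) / 8 * 8;
}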

>
> > +     /* add length prefix */
> > +     len += BPF_RINGBUF_HDR_SZ;
> > +     /* round up to 8 byte alignment */
> > +     return (len + 7) / 8 * 8;
> > +}
> > +

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 5/7] selftests/bpf: add BPF ringbuf selftests
  2020-05-22  1:20   ` Alexei Starovoitov
@ 2020-05-22 18:58     ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 18:58 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon

On Thu, May 21, 2020 at 6:20 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:25PM -0700, Andrii Nakryiko wrote:
> > diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> > new file mode 100644
> > index 000000000000..7eb85dd9cd66
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> > @@ -0,0 +1,77 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (c) 2019 Facebook
>
> oops ;)
>

heh, still living in good old 2019... :) fixing...

> > +
> > +#include <linux/bpf.h>
> > +#include <bpf/bpf_helpers.h>
> > +
> > +char _license[] SEC("license") = "GPL";
> > +
> > +struct sample {
> > +     int pid;
> > +     int seq;
> > +     long value;
> > +     char comm[16];
> > +};
> > +
> > +struct ringbuf_map {
> > +     __uint(type, BPF_MAP_TYPE_RINGBUF);
> > +     __uint(max_entries, 1 << 12);
> > +} ringbuf1 SEC(".maps"),
> > +  ringbuf2 SEC(".maps");
> > +
> > +struct {
> > +     __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
> > +     __uint(max_entries, 4);
> > +     __type(key, int);
> > +     __array(values, struct ringbuf_map);
> > +} ringbuf_arr SEC(".maps") = {
> > +     .values = {
> > +             [0] = &ringbuf1,
> > +             [2] = &ringbuf2,
> > +     },
> > +};
>
> the tests look great. Very easy to understand the usage model.

great, thanks!

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 6/7] bpf: add BPF ringbuf and perf buffer benchmarks
  2020-05-22  1:21   ` Alexei Starovoitov
@ 2020-05-22 19:07     ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 19:07 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon

On Thu, May 21, 2020 at 6:21 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:26PM -0700, Andrii Nakryiko wrote:
> > +
> > +static inline int roundup_len(__u32 len)
> > +{
> > +     /* clear out top 2 bits */
> > +     len <<= 2;
> > +     len >>= 2;
> > +     /* add length prefix */
> > +     len += RINGBUF_META_LEN;
> > +     /* round up to 8 byte alignment */
> > +     return (len + 7) / 8 * 8;
> > +}
>
> the same round_up again?
>
> > +
> > +static void ringbuf_custom_process_ring(struct ringbuf_custom *r)
> > +{
> > +     unsigned long cons_pos, prod_pos;
> > +     int *len_ptr, len;
> > +     bool got_new_data;
> > +
> > +     cons_pos = smp_load_acquire(r->consumer_pos);
> > +     while (true) {
> > +             got_new_data = false;
> > +             prod_pos = smp_load_acquire(r->producer_pos);
> > +             while (cons_pos < prod_pos) {
> > +                     len_ptr = r->data + (cons_pos & r->mask);
> > +                     len = smp_load_acquire(len_ptr);
> > +
> > +                     /* sample not committed yet, bail out for now */
> > +                     if (len & RINGBUF_BUSY_BIT)
> > +                             return;
> > +
> > +                     got_new_data = true;
> > +                     cons_pos += roundup_len(len);
> > +
> > +                     atomic_inc(&buf_hits.value);
> > +             }
> > +             if (got_new_data)
> > +                     smp_store_release(r->consumer_pos, cons_pos);
> > +             else
> > +                     break;
> > +     };
> > +}
>
> copy paste from libbpf? why?

Yep, it's deliberate, see the description of the rb-custom benchmark in
the commit message.

Basically, I was worried that libbpf's generic callback invocation might
introduce a noticeable slowdown, and wanted to test the case where
ultimate performance is necessary and someone would just implement a
custom reading loop with inlined processing logic. And it turned out to
be noticeable, see the benchmark results for the rb-libbpf and rb-custom
benchmarks. So I satisfied my curiosity :), but if you think that's not
necessary, I can drop rb-custom (and, similarly, pb-custom for perfbuf).
It still seems useful to me, though (and it is sort of an example of a
minimal correct implementation of ringbuf/perfbuf reading).

Btw, apart from the speedup from avoiding indirect calls (due to the
callback), this algorithm also does a "batched"
smp_store_release(r->consumer_pos) only once after each inner
`while (cons_pos < prod_pos)` loop, so there is theoretically less cache
line bouncing, which might give a bit more speed as well. Libbpf
pessimistically does smp_store_release() after each record, on the
assumption that the user-provided callback might take a while, so it's
better to give producers a bit more space back as early as possible.
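
For comparison, here is a sketch of the per-record variant, roughly what
the generic libbpf path does (cb/ctx stand in for the user callback and
its context, and RINGBUF_DISCARD_BIT is assumed to be defined next to
RINGBUF_BUSY_BIT above):

while (cons_pos < prod_pos) {
	len_ptr = r->data + (cons_pos & r->mask);
	len = smp_load_acquire(len_ptr);

	/* sample not committed yet, bail out for now */
	if (len & RINGBUF_BUSY_BIT)
		return;

	cons_pos += roundup_len(len);
	if (!(len & RINGBUF_DISCARD_BIT)) {
		sample = (void *)len_ptr + RINGBUF_META_LEN;
		cb(ctx, sample); /* user callback, may take a while */
	}

	/* give space back to producers after every single record */
	smp_store_release(r->consumer_pos, cons_pos);
}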

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes
  2020-05-22  1:23   ` Alexei Starovoitov
@ 2020-05-22 19:08     ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-22 19:08 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon,
	Stanislav Fomichev

On Thu, May 21, 2020 at 6:23 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:27PM -0700, Andrii Nakryiko wrote:
> > Add commit description from patch #1 as a stand-alone documentation under
> > Documentation/bpf, as it might be more convenient format, in long term
> > perspective.
> >
> > Suggested-by: Stanislav Fomichev <sdf@google.com>
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  Documentation/bpf/ringbuf.txt | 191 ++++++++++++++++++++++++++++++++++
>
> Thanks for the doc. Looks great, but could you make it .rst from the start?
> otherwise soon after somebody will be converting it adding churn.

Yeah, sure, will do the conversion in v3.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes
  2020-05-17 19:57 ` [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes Andrii Nakryiko
  2020-05-22  1:23   ` Alexei Starovoitov
@ 2020-05-25  9:59   ` Alban Crequy
  2020-05-25 19:12     ` Andrii Nakryiko
  1 sibling, 1 reply; 33+ messages in thread
From: Alban Crequy @ 2020-05-25  9:59 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, Daniel Borkmann, andrii.nakryiko, kernel-team,
	Paul E . McKenney, Jonathan Lemon, Stanislav Fomichev,
	Alban Crequy, mauricio, kai

Hi,

Thanks. Both motivators look very interesting to me:

On Sun, 17 May 2020 at 21:58, Andrii Nakryiko <andriin@fb.com> wrote:
[...]
> +Motivation
> +----------
> +There are two distinctive motivators for this work, which are not satisfied by
> +existing perf buffer, which prompted creation of a new ring buffer
> +implementation.
> +  - more efficient memory utilization by sharing ring buffer across CPUs;

I have a use case with traceloop
(https://github.com/kinvolk/traceloop) where I use one
BPF_MAP_TYPE_PERF_EVENT_ARRAY per container, so when the number of
containers times the number of CPUs is high, it can use a lot of
memory.

> +  - preserving ordering of events that happen sequentially in time, even
> +  across multiple CPUs (e.g., fork/exec/exit events for a task).

I had this problem when keeping track of TCP connections: tcp-connect
and tcp-close events can land on different CPUs, which makes it
difficult to recover the correct order.

[...]
> +There are a bunch of similarities between perf buffer
> +(BPF_MAP_TYPE_PERF_EVENT_ARRAY) and new BPF ring buffer semantics:
> +  - variable-length records;
> +  - if there is no more space left in ring buffer, reservation fails, no
> +    blocking;
[...]

BPF_MAP_TYPE_PERF_EVENT_ARRAY can be set up as both 'overwritable' and
'backward': when there is no more space left in the ring buffer, it
overwrites the old events. For that, the buffer needs to be mapped with
mmap(... PROT_READ) instead of mmap(... PROT_READ | PROT_WRITE), and
the write_backward attribute needs to be set. See details in commit
9ecda41acb97 ("perf/core: Add ::write_backward attribute to perf
event"):

struct perf_event_attr attr = {0};
attr.write_backward = 1; /* backward */
fd = syscall(__NR_perf_event_open, &attr, ...);
base = mmap(NULL, size, PROT_READ /* overwritable */, MAP_SHARED, fd, 0);

I use overwritable and backward ring buffers in traceloop: the buffers
are continuously overwritten and are usually not read, except when a
user explicitly asks for it (e.g. to inspect the last few events of an
application after a crash). If BPF_MAP_TYPE_RINGBUF implemented the
same features, I would be able to switch and use less memory.

Do you think it will be possible to implement that in BPF_MAP_TYPE_RINGBUF?

Cheers,
Alban

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-22 18:46     ` Andrii Nakryiko
@ 2020-05-25 16:01       ` Paul E. McKenney
  2020-05-25 18:45         ` Andrii Nakryiko
  0 siblings, 1 reply; 33+ messages in thread
From: Paul E. McKenney @ 2020-05-25 16:01 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Jonathan Lemon

On Fri, May 22, 2020 at 11:46:49AM -0700, Andrii Nakryiko wrote:
> On Thu, May 21, 2020 at 5:25 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> > > This commits adds a new MPSC ring buffer implementation into BPF ecosystem,
> > > which allows multiple CPUs to submit data to a single shared ring buffer. On
> > > the consumption side, only single consumer is assumed.
> >
> > [ . . . ]
> >
> > Focusing just on the ring-buffer mechanism, with a question or two
> > below.  Looks pretty close, actually!
> >
> >                                                         Thanx, Paul
> >
> 
> Thanks for review, Paul!
> 
> > > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > > ---
> > >  include/linux/bpf.h            |  13 +
> > >  include/linux/bpf_types.h      |   1 +
> > >  include/linux/bpf_verifier.h   |   4 +
> > >  include/uapi/linux/bpf.h       |  84 +++++-
> > >  kernel/bpf/Makefile            |   2 +-
> > >  kernel/bpf/helpers.c           |  10 +
> > >  kernel/bpf/ringbuf.c           | 487 +++++++++++++++++++++++++++++++++
> > >  kernel/bpf/syscall.c           |  12 +
> > >  kernel/bpf/verifier.c          | 157 ++++++++---
> > >  kernel/trace/bpf_trace.c       |  10 +
> > >  tools/include/uapi/linux/bpf.h |  90 +++++-
> > >  11 files changed, 832 insertions(+), 38 deletions(-)
> > >  create mode 100644 kernel/bpf/ringbuf.c
> >
> > [ . . . ]
> >
> > > diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> > > new file mode 100644
> > > index 000000000000..3c19f0f07726
> > > --- /dev/null
> > > +++ b/kernel/bpf/ringbuf.c
> > > @@ -0,0 +1,487 @@
> > > +#include <linux/bpf.h>
> > > +#include <linux/btf.h>
> > > +#include <linux/err.h>
> > > +#include <linux/irq_work.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/filter.h>
> > > +#include <linux/mm.h>
> > > +#include <linux/vmalloc.h>
> > > +#include <linux/wait.h>
> > > +#include <linux/poll.h>
> > > +#include <uapi/linux/btf.h>
> > > +
> > > +#define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
> > > +
> > > +/* non-mmap()'able part of bpf_ringbuf (everything up to consumer page) */
> > > +#define RINGBUF_PGOFF \
> > > +     (offsetof(struct bpf_ringbuf, consumer_pos) >> PAGE_SHIFT)
> > > +/* consumer page and producer page */
> > > +#define RINGBUF_POS_PAGES 2
> > > +
> > > +#define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4)
> > > +
> > > +/* Maximum size of ring buffer area is limited by 32-bit page offset within
> > > + * record header, counted in pages. Reserve 8 bits for extensibility, and take
> > > + * into account few extra pages for consumer/producer pages and
> > > + * non-mmap()'able parts. This gives 64GB limit, which seems plenty for single
> > > + * ring buffer.
> > > + */
> > > +#define RINGBUF_MAX_DATA_SZ \
> > > +     (((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE)
> > > +
> > > +struct bpf_ringbuf {
> > > +     wait_queue_head_t waitq;
> > > +     struct irq_work work;
> > > +     u64 mask;
> > > +     spinlock_t spinlock ____cacheline_aligned_in_smp;
> > > +     /* Consumer and producer counters are put into separate pages to allow
> > > +      * mapping consumer page as r/w, but restrict producer page to r/o.
> > > +      * This protects producer position from being modified by user-space
> > > +      * application and ruining in-kernel position tracking.
> > > +      */
> > > +     unsigned long consumer_pos __aligned(PAGE_SIZE);
> > > +     unsigned long producer_pos __aligned(PAGE_SIZE);
> > > +     char data[] __aligned(PAGE_SIZE);
> > > +};
> > > +
> > > +struct bpf_ringbuf_map {
> > > +     struct bpf_map map;
> > > +     struct bpf_map_memory memory;
> > > +     struct bpf_ringbuf *rb;
> > > +};
> > > +
> > > +/* 8-byte ring buffer record header structure */
> > > +struct bpf_ringbuf_hdr {
> > > +     u32 len;
> > > +     u32 pg_off;
> > > +};
> > > +
> > > +static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
> > > +{
> > > +     const gfp_t flags = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN |
> > > +                         __GFP_ZERO;
> > > +     int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
> > > +     int nr_data_pages = data_sz >> PAGE_SHIFT;
> > > +     int nr_pages = nr_meta_pages + nr_data_pages;
> > > +     struct page **pages, *page;
> > > +     size_t array_size;
> > > +     void *addr;
> > > +     int i;
> > > +
> > > +     /* Each data page is mapped twice to allow "virtual"
> > > +      * continuous read of samples wrapping around the end of ring
> > > +      * buffer area:
> > > +      * ------------------------------------------------------
> > > +      * | meta pages |  real data pages  |  same data pages  |
> > > +      * ------------------------------------------------------
> > > +      * |            | 1 2 3 4 5 6 7 8 9 | 1 2 3 4 5 6 7 8 9 |
> > > +      * ------------------------------------------------------
> > > +      * |            | TA             DA | TA             DA |
> > > +      * ------------------------------------------------------
> > > +      *                               ^^^^^^^
> > > +      *                                  |
> > > +      * Here, no need to worry about special handling of wrapped-around
> > > +      * data due to double-mapped data pages. This works both in kernel and
> > > +      * when mmap()'ed in user-space, simplifying both kernel and
> > > +      * user-space implementations significantly.
> > > +      */
> > > +     array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
> > > +     if (array_size > PAGE_SIZE)
> > > +             pages = vmalloc_node(array_size, numa_node);
> > > +     else
> > > +             pages = kmalloc_node(array_size, flags, numa_node);
> > > +     if (!pages)
> > > +             return NULL;
> > > +
> > > +     for (i = 0; i < nr_pages; i++) {
> > > +             page = alloc_pages_node(numa_node, flags, 0);
> > > +             if (!page) {
> > > +                     nr_pages = i;
> > > +                     goto err_free_pages;
> > > +             }
> > > +             pages[i] = page;
> > > +             if (i >= nr_meta_pages)
> > > +                     pages[nr_data_pages + i] = page;
> > > +     }
> > > +
> > > +     addr = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
> > > +                 VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
> > > +     if (addr)
> > > +             return addr;
> > > +
> > > +err_free_pages:
> > > +     for (i = 0; i < nr_pages; i++)
> > > +             free_page((unsigned long)pages[i]);
> > > +     kvfree(pages);
> > > +     return NULL;
> > > +}
> > > +
> > > +static void bpf_ringbuf_notify(struct irq_work *work)
> > > +{
> > > +     struct bpf_ringbuf *rb = container_of(work, struct bpf_ringbuf, work);
> > > +
> > > +     wake_up_all(&rb->waitq);
> > > +}
> > > +
> > > +static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
> > > +{
> > > +     struct bpf_ringbuf *rb;
> > > +
> > > +     if (!data_sz || !PAGE_ALIGNED(data_sz))
> > > +             return ERR_PTR(-EINVAL);
> > > +
> > > +     if (data_sz > RINGBUF_MAX_DATA_SZ)
> > > +             return ERR_PTR(-E2BIG);
> > > +
> > > +     rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
> > > +     if (!rb)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     spin_lock_init(&rb->spinlock);
> > > +     init_waitqueue_head(&rb->waitq);
> > > +     init_irq_work(&rb->work, bpf_ringbuf_notify);
> > > +
> > > +     rb->mask = data_sz - 1;
> > > +     rb->consumer_pos = 0;
> > > +     rb->producer_pos = 0;
> > > +
> > > +     return rb;
> > > +}
> > > +
> > > +static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
> > > +{
> > > +     struct bpf_ringbuf_map *rb_map;
> > > +     u64 cost;
> > > +     int err;
> > > +
> > > +     if (attr->map_flags & ~RINGBUF_CREATE_FLAG_MASK)
> > > +             return ERR_PTR(-EINVAL);
> > > +
> > > +     if (attr->key_size || attr->value_size ||
> > > +         attr->max_entries == 0 || !PAGE_ALIGNED(attr->max_entries))
> > > +             return ERR_PTR(-EINVAL);
> > > +
> > > +     rb_map = kzalloc(sizeof(*rb_map), GFP_USER);
> > > +     if (!rb_map)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     bpf_map_init_from_attr(&rb_map->map, attr);
> > > +
> > > +     cost = sizeof(struct bpf_ringbuf_map) +
> > > +            sizeof(struct bpf_ringbuf) +
> > > +            attr->max_entries;
> > > +     err = bpf_map_charge_init(&rb_map->map.memory, cost);
> > > +     if (err)
> > > +             goto err_free_map;
> > > +
> > > +     rb_map->rb = bpf_ringbuf_alloc(attr->max_entries, rb_map->map.numa_node);
> > > +     if (IS_ERR(rb_map->rb)) {
> > > +             err = PTR_ERR(rb_map->rb);
> > > +             goto err_uncharge;
> > > +     }
> > > +
> > > +     return &rb_map->map;
> > > +
> > > +err_uncharge:
> > > +     bpf_map_charge_finish(&rb_map->map.memory);
> > > +err_free_map:
> > > +     kfree(rb_map);
> > > +     return ERR_PTR(err);
> > > +}
> > > +
> > > +static void bpf_ringbuf_free(struct bpf_ringbuf *ringbuf)
> > > +{
> > > +     kvfree(ringbuf);
> > > +}
> > > +
> > > +static void ringbuf_map_free(struct bpf_map *map)
> > > +{
> > > +     struct bpf_ringbuf_map *rb_map;
> > > +
> > > +     /* at this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
> > > +      * so the programs (can be more than one that used this map) were
> > > +      * disconnected from events. Wait for outstanding critical sections in
> > > +      * these programs to complete
> > > +      */
> > > +     synchronize_rcu();
> > > +
> > > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > > +     bpf_ringbuf_free(rb_map->rb);
> > > +     kfree(rb_map);
> > > +}
> > > +
> > > +static void *ringbuf_map_lookup_elem(struct bpf_map *map, void *key)
> > > +{
> > > +     return ERR_PTR(-ENOTSUPP);
> > > +}
> > > +
> > > +static int ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value,
> > > +                                u64 flags)
> > > +{
> > > +     return -ENOTSUPP;
> > > +}
> > > +
> > > +static int ringbuf_map_delete_elem(struct bpf_map *map, void *key)
> > > +{
> > > +     return -ENOTSUPP;
> > > +}
> > > +
> > > +static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
> > > +                                 void *next_key)
> > > +{
> > > +     return -ENOTSUPP;
> > > +}
> > > +
> > > +static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
> > > +{
> > > +     size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
> > > +
> > > +     /* consumer page + producer page + 2 x data pages */
> > > +     return RINGBUF_POS_PAGES + 2 * data_pages;
> > > +}
> > > +
> > > +static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
> > > +{
> > > +     struct bpf_ringbuf_map *rb_map;
> > > +     size_t mmap_sz;
> > > +
> > > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > > +     mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
> > > +
> > > +     if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
> > > +             return -EINVAL;
> > > +
> > > +     return remap_vmalloc_range(vma, rb_map->rb,
> > > +                                vma->vm_pgoff + RINGBUF_PGOFF);
> > > +}
> > > +
> > > +static unsigned long ringbuf_avail_data_sz(struct bpf_ringbuf *rb)
> > > +{
> > > +     unsigned long cons_pos, prod_pos;
> > > +
> > > +     cons_pos = smp_load_acquire(&rb->consumer_pos);
> >
> > What happens if there is a delay here?  (The delay might be due to
> > interrupts, preemption in PREEMPT=y kernels, vCPU preemption, ...)
> >
> > If this is called from a producer holding the lock, then the only
> > ->consumer_pos can change, and that can only decrease the amount of
> > data available.  Besides which, ->consumer_pos is sampled first.
> > But why would a producer care how much data was queued, as opposed
> > to how much free space was available?
> 
> see second point below, it's for BPF program to implement custom
> notification policy/heuristics
> 
> >
> > From the consumer, only ->producer_pos can change, and that can only
> > increase the amount of data available.  (Assuming that producers cannot
> > erase old data on wrap-around before the consumer consumes it.)
> 
> right, they can't overwrite data
> 
> >
> > So probably nothing bad happens.
> >
> > On the bit about the producer holding the lock, some lockdep assertions
> > might make things easier on your future self.
> 
> So this function is currently called in two situations, both not
> holding the spinlock. I'm fully aware this is a racy way to do this,
> but it's ok for the applications.
> 
> So the first place is during initial poll "subscription", when we need
> to figure out if there is already some unconsumed data in ringbuffer
> to let poll/epoll subsystem know whether caller can do read without
> waiting. If this is done from consumer thread, it's race free, because
> consumer position won't change, so either we'll see that producer >
> consumer, or if we race with producer, producer will see that consumer
> pos == to-be-committed record pos, so will send notification. If epoll
> happens from non-consumer thread, it's similarly racy to perf buffer
> poll subscription implementation, but with next record subscriber will
> get notification anyways.
> 
> The second place is from BPF side (so potential producer), but it's
> outside of producer lock. It's called from bpf_ringbuf_query() helper
> which is supposed to return various potentially stale properties of
> ringbuf (e.g., consumer/producer position, amount of unconsumed data,
> etc). The use case here is for BPF program to implement its own
> flexible poll notification strategy (or whatever else creative
> programmer will come up with, of course). E.g., batched notification,
> which happens not on every record, but only once enough data is
> accumulated. In this case the exact amount of unconsumed data is not
> critical, it's only a heuristic. And it's more convenient than the
> amount of space left free, IMO, but each can be calculated from the
> other, provided the total size of the ring buffer is known (which can
> be returned by the bpf_ringbuf_query() helper as well).
> 
> So I think in both cases I don't want to take the spinlock. Since this
> is not done under the spinlock, I have to read the consumer pos first
> and the producer pos second; otherwise I can get into a situation of
> having consumer_pos > producer_pos (due to delays), which is really bad.
> 
> It's a bit of a wordy explanation, but I hope this makes sense.

Fair enough, and the smp_load_acquire() calls do telegraph the lockless
access.  But what if timing results in the difference below coming out
negative, that is, a very large unsigned long value?

> > > +     prod_pos = smp_load_acquire(&rb->producer_pos);
> > > +     return prod_pos - cons_pos;
> > > +}
> > > +
> > > +static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
> > > +                              struct poll_table_struct *pts)
> > > +{
> > > +     struct bpf_ringbuf_map *rb_map;
> > > +
> > > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > > +     poll_wait(filp, &rb_map->rb->waitq, pts);
> > > +
> > > +     if (ringbuf_avail_data_sz(rb_map->rb))
> > > +             return EPOLLIN | EPOLLRDNORM;
> > > +     return 0;
> > > +}
> > > +
> > > +const struct bpf_map_ops ringbuf_map_ops = {
> > > +     .map_alloc = ringbuf_map_alloc,
> > > +     .map_free = ringbuf_map_free,
> > > +     .map_mmap = ringbuf_map_mmap,
> > > +     .map_poll = ringbuf_map_poll,
> > > +     .map_lookup_elem = ringbuf_map_lookup_elem,
> > > +     .map_update_elem = ringbuf_map_update_elem,
> > > +     .map_delete_elem = ringbuf_map_delete_elem,
> > > +     .map_get_next_key = ringbuf_map_get_next_key,
> > > +};
> > > +
> > > +/* Given pointer to ring buffer record metadata and struct bpf_ringbuf itself,
> > > + * calculate offset from record metadata to ring buffer in pages, rounded
> > > + * down. This page offset is stored as part of record metadata and allows to
> > > + * restore struct bpf_ringbuf * from record pointer. This page offset is
> > > + * stored at offset 4 of record metadata header.
> > > + */
> > > +static size_t bpf_ringbuf_rec_pg_off(struct bpf_ringbuf *rb,
> > > +                                  struct bpf_ringbuf_hdr *hdr)
> > > +{
> > > +     return ((void *)hdr - (void *)rb) >> PAGE_SHIFT;
> > > +}
> > > +
> > > +/* Given pointer to ring buffer record header, restore pointer to struct
> > > + * bpf_ringbuf itself by using page offset stored at offset 4
> > > + */
> > > +static struct bpf_ringbuf *
> > > +bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
> > > +{
> > > +     unsigned long addr = (unsigned long)(void *)hdr;
> > > +     unsigned long off = (unsigned long)hdr->pg_off << PAGE_SHIFT;
> > > +
> > > +     return (void*)((addr & PAGE_MASK) - off);
> > > +}
> > > +
> > > +static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
> > > +{
> > > +     unsigned long cons_pos, prod_pos, new_prod_pos, flags;
> > > +     u32 len, pg_off;
> > > +     struct bpf_ringbuf_hdr *hdr;
> > > +
> > > +     if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
> > > +             return NULL;
> > > +
> > > +     len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
> > > +     cons_pos = smp_load_acquire(&rb->consumer_pos);
> >
> > There might be a longish delay acquiring the spinlock, which could mean
> > that cons_pos was out of date, which might result in an unnecessary
> > producer-side failure.  Why not pick up cons_pos after the lock is
> > acquired?  After all, it is in the same cache line as the lock, so this
> > should have negligible effect on lock-hold time.
> 
> Right, the goal was to minimize the amount of time the spinlock is held.
> 
> The spinlock and consumer_pos are certainly not on the same cache line;
> consumer_pos is deliberately on a different *page* altogether (due to
> mmap() and to avoid unnecessary sharing between consumers, who don't
> touch the spinlock, and producers, who do use the lock to coordinate).
> 
> So I chose to potentially drop a sample due to a slightly stale consumer
> position in exchange for a shorter locked section.

OK, fair enough!

> > (Unless you had either really small cachelines or really big locks.
> >
> > > +     if (in_nmi()) {
> > > +             if (!spin_trylock_irqsave(&rb->spinlock, flags))
> > > +                     return NULL;
> > > +     } else {
> > > +             spin_lock_irqsave(&rb->spinlock, flags);
> > > +     }
> > > +
> > > +     prod_pos = rb->producer_pos;
> > > +     new_prod_pos = prod_pos + len;
> > > +
> > > +     /* check for out of ringbuf space by ensuring producer position
> > > +      * doesn't advance more than (ringbuf_size - 1) ahead
> > > +      */
> > > +     if (new_prod_pos - cons_pos > rb->mask) {
> > > +             spin_unlock_irqrestore(&rb->spinlock, flags);
> > > +             return NULL;
> > > +     }
> > > +
> > > +     hdr = (void *)rb->data + (prod_pos & rb->mask);
> > > +     pg_off = bpf_ringbuf_rec_pg_off(rb, hdr);
> > > +     hdr->len = size | BPF_RINGBUF_BUSY_BIT;
> > > +     hdr->pg_off = pg_off;
> > > +
> > > +     /* ensure header is written before updating producer positions */
> > > +     smp_wmb();
> >
> > The smp_store_release() makes this unnecessary with respect to
> > ->producer_pos.  So what later write is it also ordering against?
> > If none, this smp_wmb() can go away.
> >
> > And if the later write is the xchg() in bpf_ringbuf_commit(), the
> > xchg() implies full barriers before and after, so that the smp_wmb()
> > could still go away.
> >
> > So other than the smp_store_release() and the xchg(), what later write
> > is the smp_wmb() ordering against?
> 
> I *think* my intent here was to make sure that when the consumer reads
> producer_pos, it sees hdr->len with the busy bit set. But I also think
> that you are right and smp_store_release(producer_pos) ensures that if
> the consumer does smp_load_acquire(producer_pos) it will see up-to-date
> hdr->len, for a given value of producer_pos. So yeah, I think I should
> drop smp_wmb(). I dropped it in the litmus tests, and they still pass,
> which gives me a bit more confidence as well. :)
> 
> I say "I *think*", because this algorithm started off completely
> lockless, without the producer spinlock, so maybe I needed this barrier
> for something there, but I honestly don't remember the details of the
> lockless implementation by now.
> 
> So in summary, yeah, I'll drop smp_wmb().

Sounds good!

							Thanx, Paul

[...]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-25 16:01       ` Paul E. McKenney
@ 2020-05-25 18:45         ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-25 18:45 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Jonathan Lemon

On Mon, May 25, 2020 at 9:01 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Fri, May 22, 2020 at 11:46:49AM -0700, Andrii Nakryiko wrote:
> > On Thu, May 21, 2020 at 5:25 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> > > > This commits adds a new MPSC ring buffer implementation into BPF ecosystem,
> > > > which allows multiple CPUs to submit data to a single shared ring buffer. On
> > > > the consumption side, only single consumer is assumed.
> > >
> > > [ . . . ]
> > >
> > > Focusing just on the ring-buffer mechanism, with a question or two
> > > below.  Looks pretty close, actually!
> > >
> > >                                                         Thanx, Paul
> > >
> >
> > Thanks for review, Paul!
> >
> > > > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > > > ---

[...]

> > > > +
> > > > +static unsigned long ringbuf_avail_data_sz(struct bpf_ringbuf *rb)
> > > > +{
> > > > +     unsigned long cons_pos, prod_pos;
> > > > +
> > > > +     cons_pos = smp_load_acquire(&rb->consumer_pos);
> > >
> > > What happens if there is a delay here?  (The delay might be due to
> > > interrupts, preemption in PREEMPT=y kernels, vCPU preemption, ...)
> > >
> > > If this is called from a producer holding the lock, then the only
> > > ->consumer_pos can change, and that can only decrease the amount of
> > > data available.  Besides which, ->consumer_pos is sampled first.
> > > But why would a producer care how much data was queued, as opposed
> > > to how much free space was available?
> >
> > see second point below, it's for BPF program to implement custom
> > notification policy/heuristics
> >
> > >
> > > From the consumer, only ->producer_pos can change, and that can only
> > > increase the amount of data available.  (Assuming that producers cannot
> > > erase old data on wrap-around before the consumer consumes it.)
> >
> > right, they can't overwrite data
> >
> > >
> > > So probably nothing bad happens.
> > >
> > > On the bit about the producer holding the lock, some lockdep assertions
> > > might make things easier on your future self.
> >
> > So this function is currently called in two situations, both not
> > holding the spinlock. I'm fully aware this is a racy way to do this,
> > but it's ok for the applications.
> >
> > So the first place is during initial poll "subscription", when we need
> > to figure out if there is already some unconsumed data in ringbuffer
> > to let poll/epoll subsystem know whether caller can do read without
> > waiting. If this is done from consumer thread, it's race free, because
> > consumer position won't change, so either we'll see that producer >
> > consumer, or if we race with producer, producer will see that consumer
> > pos == to-be-committed record pos, so will send notification. If epoll
> > happens from non-consumer thread, it's similarly racy to perf buffer
> > poll subscription implementation, but with next record subscriber will
> > get notification anyways.
> >
> > The second place is from BPF side (so potential producer), but it's
> > outside of producer lock. It's called from bpf_ringbuf_query() helper
> > which is supposed to return various potentially stale properties of
> > ringbuf (e.g., consumer/producer position, amount of unconsumed data,
> > etc). The use case here is for BPF program to implement its own
> > flexible poll notification strategy (or whatever else creative
> > programmer will come up with, of course). E.g., batched notification,
> > which happens not on every record, but only once enough data is
> > accumulated. In this case the exact amount of unconsumed data is not
> > critical, it's only a heuristic. And it's more convenient than the
> > amount of space left free, IMO, but each can be calculated from the
> > other, provided the total size of the ring buffer is known (which can
> > be returned by the bpf_ringbuf_query() helper as well).
> >
> > So I think in both cases I don't want to take the spinlock. Since this
> > is not done under the spinlock, I have to read the consumer pos first
> > and the producer pos second; otherwise I can get into a situation of
> > having consumer_pos > producer_pos (due to delays), which is really bad.
> >
> > It's a bit of a wordy explanation, but I hope this makes sense.
>
> Fair enough, and the smp_load_acquire() calls do telegraph the lockless
> access.  But what if timing results in the difference below coming out
> negative, that is, a very large unsigned long value?

This can happen only if, in the meantime, consumer_pos gets updated at
least once while producer_pos advances by at least 2^32 or 2^64,
depending on the architecture. This is essentially impossible on 64-bit
arches, and extremely unlikely on 32-bit ones. So I'd say it is ok,
especially given that this is only used as a heuristic?
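
As a toy illustration of why an ordinary wrap of the position counters
is harmless (the values here are hypothetical; only a full 2^64, or
2^32, lap between the two loads would break this):

unsigned long cons = ULONG_MAX - 8; /* consumer just below the wrap point */
unsigned long prod = cons + 24;     /* producer has wrapped; prod == 15 */

/* prod is numerically smaller than cons, yet modular unsigned
 * arithmetic still yields the true distance: prod - cons == 24
 */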

>
> > > > +     prod_pos = smp_load_acquire(&rb->producer_pos);
> > > > +     return prod_pos - cons_pos;
> > > > +}
> > > > +
> > > > +static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
> > > > +                              struct poll_table_struct *pts)
> > > > +{
> > > > +     struct bpf_ringbuf_map *rb_map;
> > > > +
> > > > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > > > +     poll_wait(filp, &rb_map->rb->waitq, pts);
> > > > +
> > > > +     if (ringbuf_avail_data_sz(rb_map->rb))
> > > > +             return EPOLLIN | EPOLLRDNORM;
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +const struct bpf_map_ops ringbuf_map_ops = {
> > > > +     .map_alloc = ringbuf_map_alloc,
> > > > +     .map_free = ringbuf_map_free,
> > > > +     .map_mmap = ringbuf_map_mmap,
> > > > +     .map_poll = ringbuf_map_poll,
> > > > +     .map_lookup_elem = ringbuf_map_lookup_elem,
> > > > +     .map_update_elem = ringbuf_map_update_elem,
> > > > +     .map_delete_elem = ringbuf_map_delete_elem,
> > > > +     .map_get_next_key = ringbuf_map_get_next_key,
> > > > +};
> > > > +
> > > > +/* Given pointer to ring buffer record metadata and struct bpf_ringbuf itself,
> > > > + * calculate offset from record metadata to ring buffer in pages, rounded
> > > > + * down. This page offset is stored as part of record metadata and allows to
> > > > + * restore struct bpf_ringbuf * from record pointer. This page offset is
> > > > + * stored at offset 4 of record metadata header.
> > > > + */
> > > > +static size_t bpf_ringbuf_rec_pg_off(struct bpf_ringbuf *rb,
> > > > +                                  struct bpf_ringbuf_hdr *hdr)
> > > > +{
> > > > +     return ((void *)hdr - (void *)rb) >> PAGE_SHIFT;
> > > > +}
> > > > +
> > > > +/* Given pointer to ring buffer record header, restore pointer to struct
> > > > + * bpf_ringbuf itself by using page offset stored at offset 4
> > > > + */
> > > > +static struct bpf_ringbuf *
> > > > +bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
> > > > +{
> > > > +     unsigned long addr = (unsigned long)(void *)hdr;
> > > > +     unsigned long off = (unsigned long)hdr->pg_off << PAGE_SHIFT;
> > > > +
> > > > +     return (void*)((addr & PAGE_MASK) - off);
> > > > +}
> > > > +
> > > > +static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
> > > > +{
> > > > +     unsigned long cons_pos, prod_pos, new_prod_pos, flags;
> > > > +     u32 len, pg_off;
> > > > +     struct bpf_ringbuf_hdr *hdr;
> > > > +
> > > > +     if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
> > > > +             return NULL;
> > > > +
> > > > +     len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
> > > > +     cons_pos = smp_load_acquire(&rb->consumer_pos);
> > >
> > > There might be a longish delay acquiring the spinlock, which could mean
> > > that cons_pos was out of date, which might result in an unnecessary
> > > producer-side failure.  Why not pick up cons_pos after the lock is
> > > acquired?  After all, it is in the same cache line as the lock, so this
> > > should have negligible effect on lock-hold time.
> >
> > Right, the goal was to minimize amount of time that spinlock is hold.
> >
> > Spinlock and consumer_pos are certainly not on the same cache line,
> > consumer_pos is deliberately on a different *page* altogether (due to
> > mmap() and to avoid unnecessary sharing between consumers, who don't
> > touch spinlock, and producers, who do use lock to coordinate).
> >
> > So I chose to potentially drop sample due to a bit stale consumer pos,
> > but speed up locked section.
>
> OK, fair enough!
>
> > > (Unless you had either really small cachelines or really big locks.
> > >
> > > > +     if (in_nmi()) {
> > > > +             if (!spin_trylock_irqsave(&rb->spinlock, flags))
> > > > +                     return NULL;
> > > > +     } else {
> > > > +             spin_lock_irqsave(&rb->spinlock, flags);
> > > > +     }
> > > > +
> > > > +     prod_pos = rb->producer_pos;
> > > > +     new_prod_pos = prod_pos + len;
> > > > +
> > > > +     /* check for out of ringbuf space by ensuring producer position
> > > > +      * doesn't advance more than (ringbuf_size - 1) ahead
> > > > +      */
> > > > +     if (new_prod_pos - cons_pos > rb->mask) {
> > > > +             spin_unlock_irqrestore(&rb->spinlock, flags);
> > > > +             return NULL;
> > > > +     }
> > > > +
> > > > +     hdr = (void *)rb->data + (prod_pos & rb->mask);
> > > > +     pg_off = bpf_ringbuf_rec_pg_off(rb, hdr);
> > > > +     hdr->len = size | BPF_RINGBUF_BUSY_BIT;
> > > > +     hdr->pg_off = pg_off;
> > > > +
> > > > +     /* ensure header is written before updating producer positions */
> > > > +     smp_wmb();
> > >
> > > The smp_store_release() makes this unnecessary with respect to
> > > ->producer_pos.  So what later write is it also ordering against?
> > > If none, this smp_wmb() can go away.
> > >
> > > And if the later write is the xchg() in bpf_ringbuf_commit(), the
> > > xchg() implies full barriers before and after, so that the smp_wmb()
> > > could still go away.
> > >
> > > So other than the smp_store_release() and the xchg(), what later write
> > > is the smp_wmb() ordering against?
> >
> > I *think* my intent here was to make sure that when the consumer reads
> > producer_pos, it sees hdr->len with the busy bit set. But I also think
> > that you are right and smp_store_release(producer_pos) ensures that
> > if the consumer does smp_load_acquire(producer_pos) it will see an
> > up-to-date hdr->len for a given value of producer_pos. So yeah, I think
> > I should drop smp_wmb(). I dropped it in the litmus tests, and they
> > still pass, which gives me a bit more confidence as well. :)
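> >
> > To spell out the pairing this relies on: it reduces to the classic
> > MP+rel+acq message-passing pattern. A minimal sketch (this is not one
> > of the actual litmus tests from the patch; *len stands in for hdr->len
> > and *prod_pos for rb->producer_pos):
> >
> >   C mp-ringbuf-sketch
> >
> >   {}
> >
> >   P0(int *len, int *prod_pos)
> >   {
> >           WRITE_ONCE(*len, 1);
> >           smp_store_release(prod_pos, 1);
> >   }
> >
> >   P1(int *len, int *prod_pos)
> >   {
> >           int r0;
> >           int r1;
> >
> >           r0 = smp_load_acquire(prod_pos);
> >           r1 = READ_ONCE(*len);
> >   }
> >
> >   exists (1:r0=1 /\ 1:r1=0)
> >
> > herd7 reports this outcome as never happening: a consumer that
> > observes the new producer_pos is guaranteed to also observe the
> > up-to-date hdr->len.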
> >
> > I say "I *think*", because this algorithm started off completely
> > locklessly without producer spinlock, so maybe I needed this barrier
> > for something there, but I honestly don't remember details of lockless
> > implementation by now.
> >
> > So in summary, yeah, I'll drop smp_wmb().
>
> Sounds good!
>
>                                                         Thanx, Paul
>
> > > > +     /* pairs with consumer's smp_load_acquire() */
> > > > +     smp_store_release(&rb->producer_pos, new_prod_pos);
> > > > +
> > > > +     spin_unlock_irqrestore(&rb->spinlock, flags);
> > > > +
> > > > +     return (void *)hdr + BPF_RINGBUF_HDR_SZ;
> > > > +}
> > > > +
> > > > +BPF_CALL_3(bpf_ringbuf_reserve, struct bpf_map *, map, u64, size, u64, flags)
> > > > +{
> > > > +     struct bpf_ringbuf_map *rb_map;
> > > > +
> > > > +     if (unlikely(flags))
> > > > +             return 0;
> > > > +
> > > > +     rb_map = container_of(map, struct bpf_ringbuf_map, map);
> > > > +     return (unsigned long)__bpf_ringbuf_reserve(rb_map->rb, size);
> > > > +}
> > > > +
> > > > +const struct bpf_func_proto bpf_ringbuf_reserve_proto = {
> > > > +     .func           = bpf_ringbuf_reserve,
> > > > +     .ret_type       = RET_PTR_TO_ALLOC_MEM_OR_NULL,
> > > > +     .arg1_type      = ARG_CONST_MAP_PTR,
> > > > +     .arg2_type      = ARG_CONST_ALLOC_SIZE_OR_ZERO,
> > > > +     .arg3_type      = ARG_ANYTHING,
> > > > +};
> > > > +
> > > > +static void bpf_ringbuf_commit(void *sample, u64 flags, bool discard)
> > > > +{
> > > > +     unsigned long rec_pos, cons_pos;
> > > > +     struct bpf_ringbuf_hdr *hdr;
> > > > +     struct bpf_ringbuf *rb;
> > > > +     u32 new_len;
> > > > +
> > > > +     hdr = sample - BPF_RINGBUF_HDR_SZ;
> > > > +     rb = bpf_ringbuf_restore_from_rec(hdr);
> > > > +     new_len = hdr->len ^ BPF_RINGBUF_BUSY_BIT;
> > > > +     if (discard)
> > > > +             new_len |= BPF_RINGBUF_DISCARD_BIT;
> > > > +
> > > > +     /* update record header with correct final size prefix */
> > > > +     xchg(&hdr->len, new_len);
> > > > +
> > > > +     /* if consumer caught up and is waiting for our record, notify about
> > > > +      * new data availability
> > > > +      */
> > > > +     rec_pos = (void *)hdr - (void *)rb->data;
> > > > +     cons_pos = smp_load_acquire(&rb->consumer_pos) & rb->mask;
> > > > +
> > > > +     if (flags & BPF_RB_FORCE_WAKEUP)
> > > > +             irq_work_queue(&rb->work);
> > > > +     else if (cons_pos == rec_pos && !(flags & BPF_RB_NO_WAKEUP))
> > > > +             irq_work_queue(&rb->work);
> > > > +}


* Re: [PATCH v2 bpf-next 7/7] docs/bpf: add BPF ring buffer design notes
  2020-05-25  9:59   ` Alban Crequy
@ 2020-05-25 19:12     ` Andrii Nakryiko
  0 siblings, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-25 19:12 UTC (permalink / raw)
  To: Alban Crequy
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon,
	Stanislav Fomichev, Alban Crequy, mauricio, kai

On Mon, May 25, 2020 at 3:00 AM Alban Crequy <alban.crequy@gmail.com> wrote:
>
> Hi,
>
> Thanks. Both motivators look very interesting to me:
>
> On Sun, 17 May 2020 at 21:58, Andrii Nakryiko <andriin@fb.com> wrote:
> [...]
> > +Motivation
> > +----------
> > +There are two distinct motivators for this work that are not satisfied by
> > +the existing perf buffer, which prompted the creation of a new ring buffer
> > +implementation:
> > +  - more efficient memory utilization by sharing ring buffer across CPUs;
>
> I have a use case with traceloop
> (https://github.com/kinvolk/traceloop) where I use one
> BPF_MAP_TYPE_PERF_EVENT_ARRAY per container, so when the number of
> containers times the number of CPUs is high, it can use a lot of
> memory.
>
> > +  - preserving ordering of events that happen sequentially in time, even
> > +  across multiple CPUs (e.g., fork/exec/exit events for a task).
>
> I had this problem keeping track of TCP connections: when tcp-connect
> and tcp-close events land on different CPUs, it is difficult to get
> the correct order.

Yep, in one of the BPF applications I've written, handling out-of-order
events was a major complication in the design of the data structures,
as well as in the user-space implementation logic.

>
> [...]
> > +There are a bunch of similarities between perf buffer
> > +(BPF_MAP_TYPE_PERF_EVENT_ARRAY) and new BPF ring buffer semantics:
> > +  - variable-length records;
> > +  - if there is no more space left in ring buffer, reservation fails, no
> > +    blocking;
> [...]
>
> BPF_MAP_TYPE_PERF_EVENT_ARRAY can be set up as both 'overwriteable' and
> 'backward': if there is no more space left in the ring buffer, it then
> overwrites the oldest events. For that, the buffer needs to be
> prepared with mmap(...PROT_READ) instead of mmap(...PROT_READ |
> PROT_WRITE), and the write_backward flag needs to be set. See details
> in commit 9ecda41acb97 ("perf/core: Add ::write_backward attribute to
> perf event"):
>
> struct perf_event_attr attr = {0};
> attr.write_backward = 1; /* backward */
> fd = perf_event_open(&attr, ...);
> base = mmap(NULL, size, PROT_READ /* overwriteable */, MAP_SHARED, fd, 0);
>
> I use overwriteable and backward ring buffers in traceloop: buffers
> are continuously overwritten and are usually not read, except when a
> user explicitly asks for it (e.g. to inspect the last few events of an
> application after a crash). If BPF_MAP_TYPE_RINGBUF implements the
> same features, then I would be able to switch and use less memory.
>
> Do you think it will be possible to implement that in BPF_MAP_TYPE_RINGBUF?
>

I think it could be implemented similarly. consumer_pos would be
ignored, producer_pos would point to the beginning of the record and be
decremented on each new reservation. All the implementation and
semantics would stay the same. Extending the ringbuf itself to enable
this is also trivial: it could be just an extra map_flag passed when
the map is created, and the consumer_pos page would become mmap()'able
as R/O, of course.
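
For concreteness, a rough sketch of what the reservation side might
look like in such a mode (just a sketch: it glosses over NMI handling,
wakeups, and size validation):

static void *__bpf_ringbuf_reserve_ow(struct bpf_ringbuf *rb, u64 size)
{
        unsigned long flags, new_prod_pos;
        struct bpf_ringbuf_hdr *hdr;
        u32 len;

        len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);

        spin_lock_irqsave(&rb->spinlock, flags);
        /* position counts down; old records are silently clobbered,
         * so there is no out-of-space failure to check for
         */
        new_prod_pos = rb->producer_pos - len;
        hdr = (void *)rb->data + (new_prod_pos & rb->mask);
        hdr->len = size | BPF_RINGBUF_BUSY_BIT;
        hdr->pg_off = bpf_ringbuf_rec_pg_off(rb, hdr);
        /* pairs with consumer's smp_load_acquire() */
        smp_store_release(&rb->producer_pos, new_prod_pos);
        spin_unlock_irqrestore(&rb->spinlock, flags);

        return (void *)hdr + BPF_RINGBUF_HDR_SZ;
}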

But I fail to see how the consumer can be 100% certain it's not reading
garbage data, especially on 32-bit architectures, where wrapping the
32-bit producer position is actually quite easy. Just checking the
producer position before/after the read isn't completely correct.
Ignoring that problem, the only sane way (IMO) to do this would be to
copy each record into "stable" memory before actually doing anything
with it, which is a pretty bad performance hit as well.
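
To illustrate, the check-copy-recheck pattern would look roughly like
this on the consumer side (hypothetical helper; pos is where the
consumer believes a record starts, len its presumed length):

/* copy the record into stable memory, then re-check that the producer
 * did not descend into the copied region; a full wrap of a 32-bit
 * producer position between the copy and the re-check defeats this
 * test, which is exactly the correctness problem described above
 */
static long rb_read_stable(struct bpf_ringbuf *rb, unsigned long pos,
                           void *stable, u32 len)
{
        unsigned long prod;

        memcpy(stable, (void *)rb->data + (pos & rb->mask), len);
        smp_rmb(); /* order the copy before re-reading producer_pos */
        prod = READ_ONCE(rb->producer_pos);
        /* producer moves down; record [pos, pos + len) is intact only
         * if it still lies in the buffer-sized window above prod
         */
        if (pos - prod > rb->mask + 1 - len)
                return -EAGAIN; /* overwritten mid-copy, discard */
        return len;
}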

So all in all, such a mode could be added, but certainly in a separate
patch set and after some good discussion :).

> Cheers,
> Alban


* Re: [PATCH v2 bpf-next 1/7] bpf: implement BPF ring buffer and verifier support for it
  2020-05-22  1:07   ` Alexei Starovoitov
  2020-05-22 18:48     ` Andrii Nakryiko
@ 2020-05-25 20:34     ` Andrii Nakryiko
  1 sibling, 0 replies; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-25 20:34 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Paul E . McKenney, Jonathan Lemon

On Thu, May 21, 2020 at 6:07 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sun, May 17, 2020 at 12:57:21PM -0700, Andrii Nakryiko wrote:
> > -     if (off < 0 || size < 0 || (size == 0 && !zero_size_allowed) ||
> > -         off + size > map->value_size) {
> > -             verbose(env, "invalid access to map value, value_size=%d off=%d size=%d\n",
> > -                     map->value_size, off, size);
> > -             return -EACCES;
> > -     }
> > -     return 0;
> > +     if (off >= 0 && size_ok && off + size <= mem_size)
> > +             return 0;
> > +
> > +     verbose(env, "invalid access to memory, mem_size=%u off=%d size=%d\n",
> > +             mem_size, off, size);
> > +     return -EACCES;
>
> iirc, invalid access to map value is one of the most common verifier
> errors that people see when they use unbounded access. Generalizing it
> to memory is technically correct, but it makes the message harder to
> decipher. What is 'mem_size'? Without context it is difficult to guess
> that it's actually the size of a map value element.
> Could you make this error message more human-friendly depending on the
> type of pointer?

I realized that __check_pkt_access is essentially identical, so I
unified the map value, packet, and memory access checks, with a custom
log message per type of register.
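
For reference, the unified helper ended up roughly like this (a
simplified sketch from memory, not the exact v3 code; helper and field
names may differ):

static int __check_mem_access(struct bpf_verifier_env *env, int regno,
                              int off, int size, u32 mem_size,
                              bool zero_size_allowed)
{
        bool size_ok = size > 0 || (size == 0 && zero_size_allowed);
        struct bpf_reg_state *reg = &cur_regs(env)[regno];

        if (off >= 0 && size_ok && (u64)off + size <= mem_size)
                return 0;

        /* one bounds check, one message per pointer type */
        switch (reg->type) {
        case PTR_TO_MAP_VALUE:
                verbose(env, "invalid access to map value, value_size=%u off=%d size=%d\n",
                        mem_size, off, size);
                break;
        case PTR_TO_PACKET:
        case PTR_TO_PACKET_META:
                verbose(env, "invalid access to packet, off=%d size=%d, allowed=%u\n",
                        off, size, mem_size);
                break;
        default:
                verbose(env, "invalid access to memory, mem_size=%u off=%d size=%d\n",
                        mem_size, off, size);
        }
        return -EACCES;
}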

>
> >       if (err) {
> > -             verbose(env, "R%d min value is outside of the array range\n",
> > +             verbose(env, "R%d min value is outside of the memory region\n",
> >                       regno);
> >               return err;
> >       }
> > @@ -2518,18 +2527,38 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
> >        * If reg->umax_value + off could overflow, treat that as unbounded too.
> >        */
> >       if (reg->umax_value >= BPF_MAX_VAR_OFF) {
> > -             verbose(env, "R%d unbounded memory access, make sure to bounds check any array access into a map\n",
> > +             verbose(env, "R%d unbounded memory access, make sure to bounds check any memory region access\n",
> >                       regno);
> >               return -EACCES;
> >       }
> > -     err = __check_map_access(env, regno, reg->umax_value + off, size,
> > +     err = __check_mem_access(env, reg->umax_value + off, size, mem_size,
> >                                zero_size_allowed);
> > -     if (err)
> > -             verbose(env, "R%d max value is outside of the array range\n",
> > +     if (err) {
> > +             verbose(env, "R%d max value is outside of the memory region\n",
> >                       regno);
>
> I'm not that worried about the above three generalizations of errors,
> but if you can make them friendlier by describing the type of memory
> region, I think it will be a plus.

These I left as is (i.e., the generic "memory region"), because they
were wrong before as well (such access is more general than just array
access), but I also didn't want to clutter the code with extra switches
or mappings to strings, given that these messages go right after the
custom message from __check_mem_access. Hope that's fine.


* Re: [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests
  2020-05-22 18:51     ` Andrii Nakryiko
@ 2020-05-25 23:33       ` Andrii Nakryiko
  2020-05-26  3:05         ` Paul E. McKenney
  0 siblings, 1 reply; 33+ messages in thread
From: Andrii Nakryiko @ 2020-05-25 23:33 UTC (permalink / raw)
  To: Paul E . McKenney
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Jonathan Lemon

On Fri, May 22, 2020 at 11:51 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, May 21, 2020 at 5:34 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Sun, May 17, 2020 at 12:57:22PM -0700, Andrii Nakryiko wrote:
> > > Add 4 litmus tests for the BPF ringbuf implementation, divided into two
> > > different use cases.
> > >
> > > First, two unbounded cases, one with 1 producer and another with
> > > 2 producers, both with a single consumer. All reservations are supposed
> > > to succeed.
> > >
> > > Second, a bounded case with only 1 record allowed in the ring buffer at
> > > any given time. Here failures to reserve space are expected. Again, 1-
> > > and 2-producer cases with a single consumer are validated.
> > >
> > > Just for the fun of it, I also wrote a 3-producer case; it took *16 hours*
> > > to validate, but came back successful as well. I'm not including it in
> > > this patch, because it's not practical to run. See output for all 4
> > > included cases and for the bounded 3-producer one.
> > >
> > > Each litmus test implements the producer/consumer protocol for the BPF
> > > ring buffer implementation found in kernel/bpf/ringbuf.c. Due to
> > > limitations, all records are assumed equal-sized and producer/consumer
> > > counters are incremented by 1. This doesn't change the correctness of
> > > the algorithm, though.
> >
> > Very cool!!!
> >
> > However, these should go into Documentation/litmus-tests/bpf-rb or similar.
> > Please take a look at Documentation/litmus-tests/ in -rcu, -tip, and
> > -next, including the README file.
> >
> > The tools/memory-model/litmus-tests directory is for basic examples,
> > not for the more complex real-world ones like these guys.  ;-)
>
> Oh, ok, I didn't realize there are more litmus tests under
> Documentation/litmus-tests... Might have saved me some time (more
> examples to learn from!) when I was writing mine :) Will check those
> and move everything.
>

Ok, so Documentation/litmus-tests is not present in bpf-next, so I
guess I'll have to split this patch out and post it separately. BTW,
it's not in the -rcu tree either; should I post this against the
linux-next tree directly?


> >
> >                                                                 Thanx, Paul
> >
>
> [...]


* Re: [PATCH v2 bpf-next 2/7] tools/memory-model: add BPF ringbuf MPSC litmus tests
  2020-05-25 23:33       ` Andrii Nakryiko
@ 2020-05-26  3:05         ` Paul E. McKenney
  0 siblings, 0 replies; 33+ messages in thread
From: Paul E. McKenney @ 2020-05-26  3:05 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Jonathan Lemon

On Mon, May 25, 2020 at 04:33:15PM -0700, Andrii Nakryiko wrote:
> On Fri, May 22, 2020 at 11:51 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, May 21, 2020 at 5:34 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > On Sun, May 17, 2020 at 12:57:22PM -0700, Andrii Nakryiko wrote:
> > > > Add 4 litmus tests for the BPF ringbuf implementation, divided into two
> > > > different use cases.
> > > >
> > > > First, two unbounded cases, one with 1 producer and another with
> > > > 2 producers, both with a single consumer. All reservations are supposed
> > > > to succeed.
> > > >
> > > > Second, a bounded case with only 1 record allowed in the ring buffer at
> > > > any given time. Here failures to reserve space are expected. Again, 1-
> > > > and 2-producer cases with a single consumer are validated.
> > > >
> > > > Just for the fun of it, I also wrote a 3-producer case; it took *16 hours*
> > > > to validate, but came back successful as well. I'm not including it in
> > > > this patch, because it's not practical to run. See output for all 4
> > > > included cases and for the bounded 3-producer one.
> > > >
> > > > Each litmus test implements the producer/consumer protocol for the BPF
> > > > ring buffer implementation found in kernel/bpf/ringbuf.c. Due to
> > > > limitations, all records are assumed equal-sized and producer/consumer
> > > > counters are incremented by 1. This doesn't change the correctness of
> > > > the algorithm, though.
> > >
> > > Very cool!!!
> > >
> > > However, these should go into Documentation/litmus-tests/bpf-rb or similar.
> > > Please take a look at Documentation/litmus-tests/ in -rcu, -tip, and
> > > -next, including the README file.
> > >
> > > The tools/memory-model/litmus-tests directory is for basic examples,
> > > not for the more complex real-world ones like these guys.  ;-)
> >
> > Oh, ok, I didn't realize there are more litmus tests under
> > Documentation/litmus-tests... Might have saved me some time (more
> > examples to learn from!) when I was writing mine :) Will check those
> > and move everything.
> 
> Ok, so Documentation/litmus-tests is not present in bpf-next, so I
> guess I'll have to split this patch out and post it separately. BTW,
> it's not in the -rcu tree either; should I post this against the
> linux-next tree directly?

It is on branch "dev" of -rcu, though not yet on branch "master".

							Thanx, Paul
