* [PATCH bpf-next v4] libbpf: perfbuf: Add API to get the ring buffer
@ 2022-07-15 17:15 Jon Doron
  2022-07-15 17:40 ` Yonghong Song
From: Jon Doron @ 2022-07-15 17:15 UTC (permalink / raw)
  To: bpf, ast, andrii, daniel; +Cc: Jon Doron

From: Jon Doron <jond@wiz.io>

Add support for writing a custom event reader by exposing the ring
buffer.

With the new API perf_buffer__buffer() you get access to the raw
mmap()'ed per-CPU memory underlying the ring buffer.

This region contains both the perf buffer data and its header
(struct perf_event_mmap_page), which holds the ring buffer state
(head/tail positions). Accesses to the head/tail positions must be
SMP-aware (see the sketch below).
With this kind of low-level access one can implement different types
of consumers. Here are a few simple examples where this API helps:

1. perf_event_read_simple allocates with malloc(); perhaps you want
   to handle the wrap-around in some other way.
2. Since the perf buffer is per-CPU, the order of events across CPUs
   is not guaranteed. For example:
   Given 3 events with timestamps t0 < t1 < t2, spread across more
   than one CPU, we can end up with the following state in the ring
   buffers:
   CPU[0] => [t0, t2]
   CPU[1] => [t1]
   When you consume the events from CPU[0], you can tell that a t1
   is missing (assuming there are no drops and your event data
   contains a sequential index).
   One can then simply do the following: for CPU[0], store the
   addresses of t0 and t2 in an array (without moving the tail, so
   the data is not overwritten), then move on to CPU[1] and store
   the address of t1 in the same array.
   You end up with something like:
   void *arr[] = {&t0, &t1, &t2};
   and can now consume the events in order, moving each tail as you
   process its entries.
3. With multiple CPUs to drain, we can "pick" which ring buffer to
   start with according to its remaining free space.
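
For illustration, here is a minimal sketch of such a raw consumer.
It is not part of this patch: process_event() is a hypothetical
callback, and handling of records that wrap past the end of the data
area is elided (see perf_event_read_simple for that):

#include <linux/perf_event.h>
#include <bpf/libbpf.h>

extern void process_event(struct perf_event_header *ehdr); /* hypothetical */

static int drain_buf(struct perf_buffer *pb, int buf_idx)
{
	struct perf_event_mmap_page *header;
	struct perf_event_header *ehdr;
	void *base, *data;
	size_t mmap_size;
	__u64 head, tail;
	int err;

	err = perf_buffer__buffer(pb, buf_idx, &base, &mmap_size);
	if (err)
		return err;

	header = base;
	data = base + header->data_offset;

	/* Acquire-load pairs with the kernel's release-store of
	 * data_head, so event data written before the head update is
	 * visible to us (SMP). */
	head = __atomic_load_n(&header->data_head, __ATOMIC_ACQUIRE);
	tail = header->data_tail;

	while (tail != head) {
		/* the data area size is a power of two, so masking
		 * wraps the offset around */
		ehdr = data + (tail & (header->data_size - 1));
		process_event(ehdr);
		tail += ehdr->size;
	}

	/* Release-store so the kernel reuses the space only after we
	 * are done reading it (SMP). */
	__atomic_store_n(&header->data_tail, tail, __ATOMIC_RELEASE);
	return 0;
}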

Signed-off-by: Jon Doron <jond@wiz.io>
---
 tools/lib/bpf/libbpf.c   | 16 ++++++++++++++++
 tools/lib/bpf/libbpf.h   | 16 ++++++++++++++++
 tools/lib/bpf/libbpf.map |  2 ++
 3 files changed, 34 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index e89cc9c885b3..c18bdb9b6e85 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -12485,6 +12485,22 @@ int perf_buffer__buffer_fd(const struct perf_buffer *pb, size_t buf_idx)
 	return cpu_buf->fd;
 }
 
+int perf_buffer__buffer(struct perf_buffer *pb, int buf_idx, void **buf, size_t *buf_size)
+{
+	struct perf_cpu_buf *cpu_buf;
+
+	if (buf_idx >= pb->cpu_cnt)
+		return libbpf_err(-EINVAL);
+
+	cpu_buf = pb->cpu_bufs[buf_idx];
+	if (!cpu_buf)
+		return libbpf_err(-ENOENT);
+
+	*buf = cpu_buf->base;
+	*buf_size = pb->mmap_size;
+	return 0;
+}
+
 /*
  * Consume data from perf ring buffer corresponding to slot *buf_idx* in
  * PERF_EVENT_ARRAY BPF map without waiting/polling. If there is no data to
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 9e9a3fd3edd8..9cd9fc1a16d2 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1381,6 +1381,22 @@ LIBBPF_API int perf_buffer__consume(struct perf_buffer *pb);
 LIBBPF_API int perf_buffer__consume_buffer(struct perf_buffer *pb, size_t buf_idx);
 LIBBPF_API size_t perf_buffer__buffer_cnt(const struct perf_buffer *pb);
 LIBBPF_API int perf_buffer__buffer_fd(const struct perf_buffer *pb, size_t buf_idx);
+/**
+ * @brief **perf_buffer__buffer()** returns the per-cpu raw mmap()'ed underlying
+ * memory region of the ring buffer.
+ * This ring buffer can be used to implement a custom event consumer.
+ * The ring buffer starts with the *struct perf_event_mmap_page*, which
+ * holds the ring buffer management fields; when accessing the header
+ * structure it's important to be SMP aware.
+ * You can refer to *perf_event_read_simple* for a simple example.
+ * @param pb the perf buffer structure
+ * @param buf_idx the buffer index to retrieve
+ * @param buf (out) gets the base pointer of the mmap()'ed memory
+ * @param buf_size (out) gets the size of the mmap()'ed region
+ * @return 0 on success, negative error code for failure
+ */
+LIBBPF_API int perf_buffer__buffer(struct perf_buffer *pb, int buf_idx, void **buf,
+				   size_t *buf_size);
 
 typedef enum bpf_perf_event_ret
 	(*bpf_perf_event_print_t)(struct perf_event_header *hdr,
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 52973cffc20c..75cf9d4c871b 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -461,5 +461,7 @@ LIBBPF_0.8.0 {
 } LIBBPF_0.7.0;
 
 LIBBPF_1.0.0 {
+	global:
+		perf_buffer__buffer;
 	local: *;
 };
-- 
2.36.1



* Re: [PATCH bpf-next v4] libbpf: perfbuf: Add API to get the ring buffer
  2022-07-15 17:15 [PATCH bpf-next v4] libbpf: perfbuf: Add API to get the ring buffer Jon Doron
@ 2022-07-15 17:40 ` Yonghong Song
  2022-07-15 17:47   ` Jon Doron
From: Yonghong Song @ 2022-07-15 17:40 UTC (permalink / raw)
  To: Jon Doron, bpf, ast, andrii, daniel; +Cc: Jon Doron



On 7/15/22 10:15 AM, Jon Doron wrote:
> [...]
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 52973cffc20c..75cf9d4c871b 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -461,5 +461,7 @@ LIBBPF_0.8.0 {
>   } LIBBPF_0.7.0;
>   
>   LIBBPF_1.0.0 {
> +	global:
> +		perf_buffer__buffer;

You are probably using an old version of bpf-next?
The latest bpf-next has

LIBBPF_1.0.0 {
         global:
                 bpf_prog_query_opts;
                 btf__add_enum64;
                 btf__add_enum64_value;
                 libbpf_bpf_attach_type_str;
                 libbpf_bpf_link_type_str;
                 libbpf_bpf_map_type_str;
                 libbpf_bpf_prog_type_str;
};

You need to add perf_buffer__buffer after libbpf_bpf_prog_type_str
(alphabetical order).
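
I.e., the resulting block would look like:

LIBBPF_1.0.0 {
        global:
                bpf_prog_query_opts;
                btf__add_enum64;
                btf__add_enum64_value;
                libbpf_bpf_attach_type_str;
                libbpf_bpf_link_type_str;
                libbpf_bpf_map_type_str;
                libbpf_bpf_prog_type_str;
                perf_buffer__buffer;
        local: *;
};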

>   	local: *;
>   };


* Re: [PATCH bpf-next v4] libbpf: perfbuf: Add API to get the ring buffer
  2022-07-15 17:40 ` Yonghong Song
@ 2022-07-15 17:47   ` Jon Doron
  2022-07-15 17:54     ` Yonghong Song
From: Jon Doron @ 2022-07-15 17:47 UTC (permalink / raw)
  To: Yonghong Song; +Cc: bpf, ast, andrii, daniel, Jon Doron

On 15/07/2022, Yonghong Song wrote:
>
>
>On 7/15/22 10:15 AM, Jon Doron wrote:
>>[...]
>>  LIBBPF_1.0.0 {
>>+	global:
>>+		perf_buffer__buffer;
>
>You are probably using an old version of bpf-next?
>The latest bpf-next has
>
>LIBBPF_1.0.0 {
>        global:
>                bpf_prog_query_opts;
>                btf__add_enum64;
>                btf__add_enum64_value;
>                libbpf_bpf_attach_type_str;
>                libbpf_bpf_link_type_str;
>                libbpf_bpf_map_type_str;
>                libbpf_bpf_prog_type_str;
>};
>
>You need to add perf_buffer__buffer after libbpf_bpf_prog_type_str
>(alphabetical order).
>

I was working on top of origin/master in:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
The HEAD is:
commit 9b59ec8d50a1f28747ceff9a4f39af5deba9540e (origin/master, origin/HEAD)

Is there a different git tree I should rebase onto?

>>  	local: *;
>>  };


* Re: [PATCH bpf-next v4] libbpf: perfbuf: Add API to get the ring buffer
  2022-07-15 17:47   ` Jon Doron
@ 2022-07-15 17:54     ` Yonghong Song
From: Yonghong Song @ 2022-07-15 17:54 UTC (permalink / raw)
  To: Jon Doron; +Cc: bpf, ast, andrii, daniel, Jon Doron



On 7/15/22 10:47 AM, Jon Doron wrote:
> On 15/07/2022, Yonghong Song wrote:
>>
>>
>> On 7/15/22 10:15 AM, Jon Doron wrote:
>>> [...]
>>>  LIBBPF_1.0.0 {
>>> +    global:
>>> +        perf_buffer__buffer;
>>
>> You are probably using an old version of bpf-next?
>> The latest bpf-next has
>>
>> LIBBPF_1.0.0 {
>>        global:
>>                bpf_prog_query_opts;
>>                btf__add_enum64;
>>                btf__add_enum64_value;
>>                libbpf_bpf_attach_type_str;
>>                libbpf_bpf_link_type_str;
>>                libbpf_bpf_map_type_str;
>>                libbpf_bpf_prog_type_str;
>> };
>>
>> You need to add perf_buffer__buffer after libbpf_bpf_prog_type_str
>> (alphabetical order).
>>
> 
> I was working on top of origin/master in:
> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
> The HEAD is:
> commit 9b59ec8d50a1f28747ceff9a4f39af5deba9540e (origin/master, 
> origin/HEAD)
> 
> Is there a different git tree I should rebase onto?

Please work on
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git

Note the patch subject:
   [PATCH bpf-next v4] libbpf: perfbuf: Add API to get the ring buffer
So the patch is supposed to be against the bpf-next tree.

> 
>>>      local: *;
>>>  };

