All of lore.kernel.org
 help / color / mirror / Atom feed
* BPF memory model
@ 2023-09-07 22:00 Josh Don
  2023-09-08  8:42 ` Paul E. McKenney
  0 siblings, 1 reply; 13+ messages in thread
From: Josh Don @ 2023-09-07 22:00 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Hao Luo, davemarchevsky, Tejun Heo, David Vernet, Neel Natu,
	Jack Humphries, bpf, ast

Hi Paul,

I was chatting with Dave Marchevsky about the BPF memory model, and
had some followup questions you might be able to answer.

I've been using the built-in RMW operations to do a lot of lockless
programming, for concurrent BPF-BPF, but also especially for
userspace-BPF (the latter of which has become a lot more interesting
with the sched_ext work from Meta). It would of course be nice to
sometimes lower the synchronization overhead to a hardware barrier or
a compiler barrier, to allow for general use acquire/release semantics
(rather than needing to fall back to a lock RMW instruction). I saw
your presentation from 2021 on this topic here:
https://lpc.events/event/11/contributions/941/attachments/859/1667/bpf-memory-model.2020.09.22a.pdf

Has there been any further interest in supporting additional
kernel-style atomics in BPF that you know of?

And on a different BPF note, one thing I wasn't sure about was the
ability of the cpu to reorder loads and stores across the BPF program
call boundary. For example, could the load of "z" in the BPF program
below be reordered before the store to x in the kernel? I'm sure that
no compiler barrier is ever necessary here since the BPF program is
compiled separately from the kernel, but I'm not sure whether a
hardware barrier is necessary.
<kernel>
x = 3
call_bpf();
  <bpf>
  int y = z;

Best,
Josh

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-07 22:00 BPF memory model Josh Don
@ 2023-09-08  8:42 ` Paul E. McKenney
  2023-09-08 20:26   ` Josh Don
  2023-09-18 15:09   ` Barret Rhoden
  0 siblings, 2 replies; 13+ messages in thread
From: Paul E. McKenney @ 2023-09-08  8:42 UTC (permalink / raw)
  To: Josh Don
  Cc: Hao Luo, davemarchevsky, Tejun Heo, David Vernet, Neel Natu,
	Jack Humphries, bpf, ast

On Thu, Sep 07, 2023 at 03:00:56PM -0700, Josh Don wrote:
> Hi Paul,
> 
> I was chatting with Dave Marchevsky about the BPF memory model, and
> had some followup questions you might be able to answer.
> 
> I've been using the built-in RMW operations to do a lot of lockless
> programming, for concurrent BPF-BPF, but also especially for
> userspace-BPF (the latter of which has become a lot more interesting
> with the sched_ext work from Meta). It would of course be nice to
> sometimes lower the synchronization overhead to a hardware barrier or
> a compiler barrier, to allow for general use acquire/release semantics
> (rather than needing to fall back to a lock RMW instruction). I saw
> your presentation from 2021 on this topic here:
> https://lpc.events/event/11/contributions/941/attachments/859/1667/bpf-memory-model.2020.09.22a.pdf
> 
> Has there been any further interest in supporting additional
> kernel-style atomics in BPF that you know of?

This is one of the first that I have heard of.  ;-)

But what BPF programs are you running that are seeing excessive
synchronization overhead?  That will tell us which operations to
start with.  (Or maybe it is time to just add the full Linux-kernel
atomic-operations kitchen sink, but that would not normally be the way
to bet.)

> And on a different BPF note, one thing I wasn't sure about was the
> ability of the cpu to reorder loads and stores across the BPF program
> call boundary. For example, could the load of "z" in the BPF program
> below be reordered before the store to x in the kernel? I'm sure that
> no compiler barrier is ever necessary here since the BPF program is
> compiled separately from the kernel, but I'm not sure whether a
> hardware barrier is necessary.
> <kernel>
> x = 3
> call_bpf();
>   <bpf>
>   int y = z;

Given that a major goal of BPF is the ability to add low-overhead
programs to code on fastpaths, I would not expect any implicit barriers
in that case.  Consider for example counting the number of calls to a
"hot" function in the Linux kernel, in which case adding full ordering
would incur unacceptable performance degradation.  I would instead
expect that the BPF program would need to add explicit barriers or
ordered RMW operations.

But people will not be shy about correcting me if I am confused on
either of these points!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-08  8:42 ` Paul E. McKenney
@ 2023-09-08 20:26   ` Josh Don
  2023-09-08 22:07     ` Tejun Heo
  2023-09-18 15:09   ` Barret Rhoden
  1 sibling, 1 reply; 13+ messages in thread
From: Josh Don @ 2023-09-08 20:26 UTC (permalink / raw)
  To: paulmck
  Cc: Hao Luo, davemarchevsky, Tejun Heo, David Vernet, Neel Natu,
	Jack Humphries, bpf, ast

On Fri, Sep 8, 2023 at 1:43 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Thu, Sep 07, 2023 at 03:00:56PM -0700, Josh Don wrote:
> > Has there been any further interest in supporting additional
> > kernel-style atomics in BPF that you know of?
>
> This is one of the first that I have heard of.  ;-)
>
> But what BPF programs are you running that are seeing excessive
> synchronization overhead?  That will tell us which operations to
> start with.  (Or maybe it is time to just add the full Linux-kernel
> atomic-operations kitchen sink, but that would not normally be the way
> to bet.)

I'm writing BPF programs for scheduling (ie. sched_ext), so these are
getting invoked in hot paths and invoked concurrently across multiple
cpus (for example, pick_next_task, enqueue_task, etc.). The kernel is
responsible for relaying ground truth, userspace makes O(ms)
scheduling decisions, and BPF makes O(us) scheduling decisions.
BPF-BPF concurrency is possible with spinlocks and RMW, BPF-userspace
can currently only really use RMW. My line of questioning is more
forward looking, as I'm preemptively thinking of how to ensure
kernel-like scheduling performance, since BPF spinlock or RMW is
sometimes overkill :) I would think that barrier() and smp_mb() would
probably be the minimum viable set (at least for x86) that people
would find useful, but maybe others can chime in.

> > And on a different BPF note, one thing I wasn't sure about was the
> > ability of the cpu to reorder loads and stores across the BPF program
> > call boundary. For example, could the load of "z" in the BPF program
> > below be reordered before the store to x in the kernel? I'm sure that
> > no compiler barrier is ever necessary here since the BPF program is
> > compiled separately from the kernel, but I'm not sure whether a
> > hardware barrier is necessary.
> > <kernel>
> > x = 3
> > call_bpf();
> >   <bpf>
> >   int y = z;
>
> Given that a major goal of BPF is the ability to add low-overhead
> programs to code on fastpaths, I would not expect any implicit barriers
> in that case.  Consider for example counting the number of calls to a
> "hot" function in the Linux kernel, in which case adding full ordering
> would incur unacceptable performance degradation.  I would instead
> expect that the BPF program would need to add explicit barriers or
> ordered RMW operations.

Yep, that was my expectation as well. On the plus, this gives the
flexibility of only adding barriers where they are really needed.

Best,
Josh

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-08 20:26   ` Josh Don
@ 2023-09-08 22:07     ` Tejun Heo
  2023-09-08 23:16       ` Alexei Starovoitov
  0 siblings, 1 reply; 13+ messages in thread
From: Tejun Heo @ 2023-09-08 22:07 UTC (permalink / raw)
  To: Josh Don
  Cc: paulmck, Hao Luo, davemarchevsky, David Vernet, Neel Natu,
	Jack Humphries, bpf, ast

Hello,

On Fri, Sep 08, 2023 at 01:26:11PM -0700, Josh Don wrote:
> I'm writing BPF programs for scheduling (ie. sched_ext), so these are
> getting invoked in hot paths and invoked concurrently across multiple
> cpus (for example, pick_next_task, enqueue_task, etc.). The kernel is
> responsible for relaying ground truth, userspace makes O(ms)
> scheduling decisions, and BPF makes O(us) scheduling decisions.
> BPF-BPF concurrency is possible with spinlocks and RMW, BPF-userspace
> can currently only really use RMW. My line of questioning is more
> forward looking, as I'm preemptively thinking of how to ensure
> kernel-like scheduling performance, since BPF spinlock or RMW is
> sometimes overkill :) I would think that barrier() and smp_mb() would
> probably be the minimum viable set (at least for x86) that people
> would find useful, but maybe others can chime in.

My personal favorite set is store_release/load_acquire(). I have a hard time
thinking up cases which can't be covered by them and they're basically free
on x86.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-08 22:07     ` Tejun Heo
@ 2023-09-08 23:16       ` Alexei Starovoitov
  2023-09-09 12:47         ` Paul E. McKenney
  0 siblings, 1 reply; 13+ messages in thread
From: Alexei Starovoitov @ 2023-09-08 23:16 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Josh Don, Paul E. McKenney, Hao Luo, Dave Marchevsky,
	David Vernet, Neel Natu, Jack Humphries, bpf, Alexei Starovoitov,
	Christoph Hellwig, Dave Thaler, Jose E. Marchesi

On Fri, Sep 8, 2023 at 3:07 PM Tejun Heo <tj@kernel.org> wrote:
>
> Hello,
>
> On Fri, Sep 08, 2023 at 01:26:11PM -0700, Josh Don wrote:
> > I'm writing BPF programs for scheduling (ie. sched_ext), so these are
> > getting invoked in hot paths and invoked concurrently across multiple
> > cpus (for example, pick_next_task, enqueue_task, etc.). The kernel is
> > responsible for relaying ground truth, userspace makes O(ms)
> > scheduling decisions, and BPF makes O(us) scheduling decisions.
> > BPF-BPF concurrency is possible with spinlocks and RMW, BPF-userspace
> > can currently only really use RMW. My line of questioning is more
> > forward looking, as I'm preemptively thinking of how to ensure
> > kernel-like scheduling performance, since BPF spinlock or RMW is
> > sometimes overkill :) I would think that barrier() and smp_mb() would
> > probably be the minimum viable set (at least for x86) that people
> > would find useful, but maybe others can chime in.
>
> My personal favorite set is store_release/load_acquire(). I have a hard time
> thinking up cases which can't be covered by them and they're basically free
> on x86.

First of all, Thanks Josh for highlighting this topic and
gently nudging Paul to continue his work :)

It's absolutely essential for BPF to have a well defined memory model.

It's necessary for fast sched-ext bpf progs and for HW offloads too.
As a minimum we need to document it in Documentation/bpf/standardization/.
It's much more challenging than it looks.
Unlike traditional ISAs. We cannot say that memory consistency is
similar to x86 or arm64 or riscv.
bpf memory consistency cannot pick the lower common denominator either.
bpf memory model most likely going to be pretty close to kernel memory model
instead of HW or C.

In parallel we can start adding new concurrency primitives.
Sounds like smp_load_acquire()/store_release should be the first pair.
Here it's also more challenging than in the kernel.
We cannot define bpf_smp_load_acquire() as a macro.
It needs to be a new flavor of BPF_LDX instruction that JITs
will convert into a proper sequence of insns.
On x86-64 it will remain normal load,
while on arm64 it will be LDAR instead of LDR and so on.

Some of the barriers we can implement as kfuncs since they're slow anyway.
Some other barriers would need to be new instructions too.
The design would need to take into account multiple architectures,
gcc/llvm consideration, verifier complexity, and,
of course, include bpf IETF standardization working group.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-08 23:16       ` Alexei Starovoitov
@ 2023-09-09 12:47         ` Paul E. McKenney
  0 siblings, 0 replies; 13+ messages in thread
From: Paul E. McKenney @ 2023-09-09 12:47 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Tejun Heo, Josh Don, Hao Luo, Dave Marchevsky, David Vernet,
	Neel Natu, Jack Humphries, bpf, Alexei Starovoitov,
	Christoph Hellwig, Dave Thaler, Jose E. Marchesi

On Fri, Sep 08, 2023 at 04:16:39PM -0700, Alexei Starovoitov wrote:
> On Fri, Sep 8, 2023 at 3:07 PM Tejun Heo <tj@kernel.org> wrote:
> >
> > Hello,
> >
> > On Fri, Sep 08, 2023 at 01:26:11PM -0700, Josh Don wrote:
> > > I'm writing BPF programs for scheduling (ie. sched_ext), so these are
> > > getting invoked in hot paths and invoked concurrently across multiple
> > > cpus (for example, pick_next_task, enqueue_task, etc.). The kernel is
> > > responsible for relaying ground truth, userspace makes O(ms)
> > > scheduling decisions, and BPF makes O(us) scheduling decisions.
> > > BPF-BPF concurrency is possible with spinlocks and RMW, BPF-userspace
> > > can currently only really use RMW. My line of questioning is more
> > > forward looking, as I'm preemptively thinking of how to ensure
> > > kernel-like scheduling performance, since BPF spinlock or RMW is
> > > sometimes overkill :) I would think that barrier() and smp_mb() would
> > > probably be the minimum viable set (at least for x86) that people
> > > would find useful, but maybe others can chime in.
> >
> > My personal favorite set is store_release/load_acquire(). I have a hard time
> > thinking up cases which can't be covered by them and they're basically free
> > on x86.
> 
> First of all, Thanks Josh for highlighting this topic and
> gently nudging Paul to continue his work :)

I hereby consider myself nudged.  ;-)

> It's absolutely essential for BPF to have a well defined memory model.
> 
> It's necessary for fast sched-ext bpf progs and for HW offloads too.
> As a minimum we need to document it in Documentation/bpf/standardization/.

Ah, I see that in current mainline.

> It's much more challenging than it looks.
> Unlike traditional ISAs. We cannot say that memory consistency is
> similar to x86 or arm64 or riscv.
> bpf memory consistency cannot pick the lower common denominator either.
> bpf memory model most likely going to be pretty close to kernel memory model
> instead of HW or C.
> In parallel we can start adding new concurrency primitives.

My first thought would be to look at instruction-set.rst in that
directory, and project LKMM onto the concurrency primitives that
are currently defined there.  The advantage of this is "just enough
LKMM" at any given time, but it would also mean that memory-model.rst
(or whatever eventual bikesheded name) would need maintenance as new
concurrency primitives are added.  Which seems like the correct
approach, as opposed to attempting to define memory model concepts
for non-existent concurrency primitives.

Presumably, I also need to run this through the BPF standardization
process.

Or did you have something else in mind?

							Thanx, Paul

> Sounds like smp_load_acquire()/store_release should be the first pair.
> Here it's also more challenging than in the kernel.
> We cannot define bpf_smp_load_acquire() as a macro.
> It needs to be a new flavor of BPF_LDX instruction that JITs
> will convert into a proper sequence of insns.
> On x86-64 it will remain normal load,
> while on arm64 it will be LDAR instead of LDR and so on.
> 
> Some of the barriers we can implement as kfuncs since they're slow anyway.
> Some other barriers would need to be new instructions too.
> The design would need to take into account multiple architectures,
> gcc/llvm consideration, verifier complexity, and,
> of course, include bpf IETF standardization working group.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-08  8:42 ` Paul E. McKenney
  2023-09-08 20:26   ` Josh Don
@ 2023-09-18 15:09   ` Barret Rhoden
  2023-09-19  9:52     ` Paul E. McKenney
  1 sibling, 1 reply; 13+ messages in thread
From: Barret Rhoden @ 2023-09-18 15:09 UTC (permalink / raw)
  To: paulmck
  Cc: Josh Don, Hao Luo, davemarchevsky, Tejun Heo, David Vernet,
	Neel Natu, Jack Humphries, bpf, ast

On 9/8/23 04:42, Paul E. McKenney wrote:
> But what BPF programs are you running that are seeing excessive 
> synchronization overhead? That will tell us which operations to start 
> with. (Or maybe it is time to just add the full Linux-kernel 
> atomic-operations kitchen sink, but that would not normally be the way 
> to bet.)

Here's what I use in BPF, (also for writing parallel schedulers):
- READ_ONCE/WRITE_ONCE
- compiler atomic builtins, like CAS, swap/exchange, fetch_and_add, etc.
- smp_store_release, __atomic_load_n, etc.
- at one point, i was sprinkling asm volatile ("" ::: "memory") around 
too, though not in any active code at the moment.

My mental model, right or wrong, is that I am operating under something 
like the LKMM, and that I need to convince the compiler to spit out the 
right code (sort of like writing shared memory code to talk to a device 
or userspace) and hope the JIT does the right thing.

Barret



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-18 15:09   ` Barret Rhoden
@ 2023-09-19  9:52     ` Paul E. McKenney
  2023-09-19 15:55       ` Barret Rhoden
  0 siblings, 1 reply; 13+ messages in thread
From: Paul E. McKenney @ 2023-09-19  9:52 UTC (permalink / raw)
  To: Barret Rhoden
  Cc: Josh Don, Hao Luo, davemarchevsky, Tejun Heo, David Vernet,
	Neel Natu, Jack Humphries, bpf, ast

On Mon, Sep 18, 2023 at 11:09:26AM -0400, Barret Rhoden wrote:
> On 9/8/23 04:42, Paul E. McKenney wrote:
> > But what BPF programs are you running that are seeing excessive
> > synchronization overhead? That will tell us which operations to start
> > with. (Or maybe it is time to just add the full Linux-kernel
> > atomic-operations kitchen sink, but that would not normally be the way
> > to bet.)
> 
> Here's what I use in BPF, (also for writing parallel schedulers):
> - READ_ONCE/WRITE_ONCE
> - compiler atomic builtins, like CAS, swap/exchange, fetch_and_add, etc.
> - smp_store_release, __atomic_load_n, etc.
> - at one point, i was sprinkling asm volatile ("" ::: "memory") around too,
> though not in any active code at the moment.

Good to know, thank you very much!!!

> My mental model, right or wrong, is that I am operating under something like
> the LKMM, and that I need to convince the compiler to spit out the right
> code (sort of like writing shared memory code to talk to a device or
> userspace) and hope the JIT does the right thing.

Just to make sure that I understand, the idea is to compile from (say)
__atomic_load_n() to BPF instructions, correct?  Or is this compiling
all the way to the target x86/ARMv8/whatever machine instructions?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-19  9:52     ` Paul E. McKenney
@ 2023-09-19 15:55       ` Barret Rhoden
  2023-10-16 16:48         ` Paul E. McKenney
  0 siblings, 1 reply; 13+ messages in thread
From: Barret Rhoden @ 2023-09-19 15:55 UTC (permalink / raw)
  To: paulmck
  Cc: Josh Don, Hao Luo, davemarchevsky, Tejun Heo, David Vernet,
	Neel Natu, Jack Humphries, bpf, ast

On 9/19/23 05:52, Paul E. McKenney wrote:
> Just to make sure that I understand, the idea is to compile from (say) 
> __atomic_load_n() to BPF instructions, correct? Or is this compiling all 
> the way to the target x86/ARMv8/whatever machine instructions?

correct; i'm compiling with clang -target bpf to BPF instructions, which 
should be spitting out the appropriate BPF atomic ops.  then i hope that 
if i get the compiler to emit the reads and writes in the correct order, 
that the JIT maintains that order when it turns them into x86/whatever.

thanks,

barret



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-09-19 15:55       ` Barret Rhoden
@ 2023-10-16 16:48         ` Paul E. McKenney
  2023-10-16 17:17           ` Jose E. Marchesi
  2023-11-13 18:53           ` Barret Rhoden
  0 siblings, 2 replies; 13+ messages in thread
From: Paul E. McKenney @ 2023-10-16 16:48 UTC (permalink / raw)
  To: Barret Rhoden
  Cc: Josh Don, Hao Luo, davemarchevsky, Tejun Heo, David Vernet,
	Neel Natu, Jack Humphries, bpf, ast

On Tue, Sep 19, 2023 at 11:55:42AM -0400, Barret Rhoden wrote:
> On 9/19/23 05:52, Paul E. McKenney wrote:
> > Just to make sure that I understand, the idea is to compile from (say)
> > __atomic_load_n() to BPF instructions, correct? Or is this compiling all
> > the way to the target x86/ARMv8/whatever machine instructions?
> 
> correct; i'm compiling with clang -target bpf to BPF instructions, which
> should be spitting out the appropriate BPF atomic ops.  then i hope that if
> i get the compiler to emit the reads and writes in the correct order, that
> the JIT maintains that order when it turns them into x86/whatever.

Hopefully better late than never, here is a draft:

https://docs.google.com/document/d/1TaSEfWfLnRUi5KqkavUQyL2tThJXYWHS15qcbxIsFb0/edit?usp=sharing

Please do feel free to add relevant comments.

							Thanx, Paul

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-10-16 16:48         ` Paul E. McKenney
@ 2023-10-16 17:17           ` Jose E. Marchesi
  2023-11-13 18:53           ` Barret Rhoden
  1 sibling, 0 replies; 13+ messages in thread
From: Jose E. Marchesi @ 2023-10-16 17:17 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Barret Rhoden, Josh Don, Hao Luo, davemarchevsky, Tejun Heo,
	David Vernet, Neel Natu, Jack Humphries, bpf, ast


> On Tue, Sep 19, 2023 at 11:55:42AM -0400, Barret Rhoden wrote:
>> On 9/19/23 05:52, Paul E. McKenney wrote:
>> > Just to make sure that I understand, the idea is to compile from (say)
>> > __atomic_load_n() to BPF instructions, correct? Or is this compiling all
>> > the way to the target x86/ARMv8/whatever machine instructions?
>> 
>> correct; i'm compiling with clang -target bpf to BPF instructions, which
>> should be spitting out the appropriate BPF atomic ops.  then i hope that if
>> i get the compiler to emit the reads and writes in the correct order, that
>> the JIT maintains that order when it turns them into x86/whatever.
>
> Hopefully better late than never, here is a draft:
>
> https://docs.google.com/document/d/1TaSEfWfLnRUi5KqkavUQyL2tThJXYWHS15qcbxIsFb0/edit?usp=sharing
>
> Please do feel free to add relevant comments.

Nice :-)
/me reads carefully

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-10-16 16:48         ` Paul E. McKenney
  2023-10-16 17:17           ` Jose E. Marchesi
@ 2023-11-13 18:53           ` Barret Rhoden
  2023-11-13 20:03             ` Paul E. McKenney
  1 sibling, 1 reply; 13+ messages in thread
From: Barret Rhoden @ 2023-11-13 18:53 UTC (permalink / raw)
  To: paulmck
  Cc: Josh Don, Hao Luo, davemarchevsky, Tejun Heo, David Vernet,
	Neel Natu, Jack Humphries, bpf, ast

On 10/16/23 12:48, Paul E. McKenney wrote:
> Hopefully better late than never, here is a draft:
> 
> https://docs.google.com/document/d/1TaSEfWfLnRUi5KqkavUQyL2tThJXYWHS15qcbxIsFb0/edit?usp=sharing
> 
> Please do feel free to add relevant comments.
> 
> 							Thanx, Paul

thanks for putting this together, and great LPC talk today!

barret

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: BPF memory model
  2023-11-13 18:53           ` Barret Rhoden
@ 2023-11-13 20:03             ` Paul E. McKenney
  0 siblings, 0 replies; 13+ messages in thread
From: Paul E. McKenney @ 2023-11-13 20:03 UTC (permalink / raw)
  To: Barret Rhoden
  Cc: Josh Don, Hao Luo, davemarchevsky, Tejun Heo, David Vernet,
	Neel Natu, Jack Humphries, bpf, ast

On Mon, Nov 13, 2023 at 01:53:17PM -0500, Barret Rhoden wrote:
> On 10/16/23 12:48, Paul E. McKenney wrote:
> > Hopefully better late than never, here is a draft:
> > 
> > https://docs.google.com/document/d/1TaSEfWfLnRUi5KqkavUQyL2tThJXYWHS15qcbxIsFb0/edit?usp=sharing
> > 
> > Please do feel free to add relevant comments.
> > 
> > 							Thanx, Paul
> 
> thanks for putting this together, and great LPC talk today!

Glad you liked it, and thank you!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2023-11-13 20:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-07 22:00 BPF memory model Josh Don
2023-09-08  8:42 ` Paul E. McKenney
2023-09-08 20:26   ` Josh Don
2023-09-08 22:07     ` Tejun Heo
2023-09-08 23:16       ` Alexei Starovoitov
2023-09-09 12:47         ` Paul E. McKenney
2023-09-18 15:09   ` Barret Rhoden
2023-09-19  9:52     ` Paul E. McKenney
2023-09-19 15:55       ` Barret Rhoden
2023-10-16 16:48         ` Paul E. McKenney
2023-10-16 17:17           ` Jose E. Marchesi
2023-11-13 18:53           ` Barret Rhoden
2023-11-13 20:03             ` Paul E. McKenney

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.