* Can the Kernel Concurrency Sanitizer Own Rust Code?
@ 2021-10-07 13:01 Marco Elver
  2021-10-07 14:15 ` Boqun Feng
  2021-10-07 16:30 ` Paul E. McKenney
  0 siblings, 2 replies; 39+ messages in thread
From: Marco Elver @ 2021-10-07 13:01 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: kasan-dev, Boqun Feng, rust-for-linux

Hi Paul,

Thanks for writing up https://paulmck.livejournal.com/64970.html --
these were also my thoughts. Similarly for KASAN.

Sanitizer integration will also, over time, provide quantitative data
on the rate of bugs in C code, unsafe Rust code, and of course safe
Rust code, as well as any number of interactions between them, once
the fuzzers are let loose on Rust code.

Re integrating KCSAN with Rust, this should be doable since rustc does
support ThreadSanitizer instrumentation:
https://rustc-dev-guide.rust-lang.org/sanitizers.html

Just need to pass all the rest of the -mllvm options to rustc as well,
and ensure it's not attempting to link against compiler-rt. I haven't
tried, so wouldn't know how it currently behaves.

Also of importance will be the __tsan_atomic*() instrumentation, which
KCSAN already provides: my guess is that whatever subset of the LKMM
Rust initially provides (and looking at the current version, that
certainly seems to be the case), the backend will lower the operations
to LLVM atomic intrinsics [1], which ThreadSanitizer instrumentation
turns into __tsan_atomic*() calls.
[1] https://llvm.org/docs/Atomics.html
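
As a rough sketch of what I would expect (assuming rustc's
ThreadSanitizer instrumentation behaves like clang's here; I have not
verified this):

	use core::sync::atomic::{AtomicI32, Ordering};

	static COUNTER: AtomicI32 = AtomicI32::new(0);

	fn bump() -> i32 {
	    // Should lower to an LLVM `atomicrmw add ... monotonic`, which
	    // ThreadSanitizer instrumentation would then turn into a
	    // __tsan_atomic32_fetch_add() call -- which KCSAN implements.
	    COUNTER.fetch_add(1, Ordering::Relaxed)
	}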

Thanks,
-- Marco


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 13:01 Can the Kernel Concurrency Sanitizer Own Rust Code? Marco Elver
@ 2021-10-07 14:15 ` Boqun Feng
  2021-10-07 14:22   ` Marco Elver
  2021-10-07 16:30 ` Paul E. McKenney
  1 sibling, 1 reply; 39+ messages in thread
From: Boqun Feng @ 2021-10-07 14:15 UTC (permalink / raw)
  To: Marco Elver; +Cc: Paul E. McKenney, kasan-dev, rust-for-linux

Hi Marco,

On Thu, Oct 07, 2021 at 03:01:07PM +0200, Marco Elver wrote:
> Hi Paul,
> 
> Thanks for writing up https://paulmck.livejournal.com/64970.html --
> these were also my thoughts. Similarly for KASAN.
> 
> Sanitizer integration will also, over time, provide quantitative data
> on the rate of bugs in C code, unsafe-Rust, and of course safe-Rust
> code as well as any number of interactions between them once the
> fuzzers are let loose on Rust code.
> 
> Re integrating KCSAN with Rust, this should be doable since rustc does
> support ThreadSanitizer instrumentation:
> https://rustc-dev-guide.rust-lang.org/sanitizers.html
> 
> Just need to pass all the rest of the -mllvm options to rustc as well,
> and ensure it's not attempting to link against compiler-rt. I haven't
> tried, so wouldn't know how it currently behaves.
> 

Thanks for looking into this, and I think you're right: if rustc
supports ThreadSanitizer, then the basic features of KCSAN should work.

> Also of importance will be the __tsan_atomic*() instrumentation, which
> KCSAN already provides: my guess is that whatever subset of the LKMM
> Rust initially provides (looking at the current version it certainly
> is the case), the backend will lower them to LLVM atomic intrinsics
> [1], which ThreadSanitizer instrumentation turns into __tsan_atomic*()
> calls.
> [1] https://llvm.org/docs/Atomics.html
> 

Besides atomics, the counterparts of READ_ONCE() and WRITE_ONCE()
should also be looked into, IOW core::ptr::{read,write}_volatile()
(although I don't think their semantics are completely defined, since
the memory model of Rust is incomplete). There could easily be cases
where the Rust side does writes inside lock critical sections while
the C side does reads outside the lock critical sections, so the Rust
side needs to play the volatile game.
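
For illustration, a minimal sketch of what such a counterpart could
look like on the Rust side (the helper name here is made up):

	use core::ptr;

	/// WRITE_ONCE()-style helper (sketch).
	///
	/// SAFETY: the caller must guarantee that `p` is valid for writes.
	unsafe fn write_once_u64(p: *mut u64, val: u64) {
	    // A volatile store: the counterpart of WRITE_ONCE() on the C
	    // side, e.g. for a write done inside a lock critical section
	    // whose reader on the C side reads outside the lock.
	    ptr::write_volatile(p, val);
	}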

I'm not sure whether rustc will generate special instrumentation for
{read,write}_volatile(); if not, we need to provide something similar
to what KCSAN does for READ_ONCE() and WRITE_ONCE().

Regards,
Boqun

> Thanks,
> -- Marco


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 14:15 ` Boqun Feng
@ 2021-10-07 14:22   ` Marco Elver
  2021-10-07 14:43     ` Boqun Feng
  2021-10-07 17:44     ` Miguel Ojeda
  0 siblings, 2 replies; 39+ messages in thread
From: Marco Elver @ 2021-10-07 14:22 UTC (permalink / raw)
  To: Boqun Feng; +Cc: Paul E. McKenney, kasan-dev, rust-for-linux

On Thu, 7 Oct 2021 at 16:16, Boqun Feng <boqun.feng@gmail.com> wrote:
[...]
> > Also of importance will be the __tsan_atomic*() instrumentation, which
> > KCSAN already provides: my guess is that whatever subset of the LKMM
> > Rust initially provides (looking at the current version it certainly
> > is the case), the backend will lower them to LLVM atomic intrinsics
> > [1], which ThreadSanitizer instrumentation turns into __tsan_atomic*()
> > calls.
> > [1] https://llvm.org/docs/Atomics.html
> >
>
> Besides atomics, the counterpart of READ_ONCE() and WRITE_ONCE() should
> also be looked into, IOW the core::ptr::{read,write}_volatile()
> (although I don't think their semantics is completely defined since the
> memory model of Rust is incomplete). There could easily be cases where
> Rust-side do writes with lock critical sections while C-side do reads
> out of the lock critical sections, so Rust-side need to play the
> volatile game.
>
> I'm not sure whether rustc will generate special instrumentation for
> {read,write}_volatile(), if not, we need to provide something similar to
> KCSAN does for READ_ONCE() and WRITE_ONCE().

For volatile (i.e. *ONCE()) accesses, KCSAN itself no longer does
anything special. Teaching the compilers to distinguish volatile
accesses (-mllvm -tsan-distinguish-volatile=1, and similarly for GCC)
was one of the major compiler changes needed to get KCSAN merged in
the end.

So if rustc lowers core::ptr::{read,write}_volatile() to volatile in
LLVM IR (which I assume it does), then everything works as intended,
and no extra explicit instrumentation is required.
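
Concretely, I would expect something like the following (a sketch,
assuming rustc's ThreadSanitizer pass behaves like clang's with
-mllvm -tsan-distinguish-volatile=1):

	/// SAFETY: the caller must guarantee that `p` is valid for reads.
	unsafe fn peek(p: *const u64) -> u64 {
	    // This load should be instrumented as __tsan_volatile_read8()
	    // rather than __tsan_read8(), i.e. KCSAN treats it like a
	    // *ONCE() access.
	    core::ptr::read_volatile(p)
	}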

Thanks,
-- Marco


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 14:22   ` Marco Elver
@ 2021-10-07 14:43     ` Boqun Feng
  2021-10-07 17:44     ` Miguel Ojeda
  1 sibling, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2021-10-07 14:43 UTC (permalink / raw)
  To: Marco Elver; +Cc: Paul E. McKenney, kasan-dev, rust-for-linux

On Thu, Oct 07, 2021 at 04:22:41PM +0200, Marco Elver wrote:
> On Thu, 7 Oct 2021 at 16:16, Boqun Feng <boqun.feng@gmail.com> wrote:
> [...]
> > > Also of importance will be the __tsan_atomic*() instrumentation, which
> > > KCSAN already provides: my guess is that whatever subset of the LKMM
> > > Rust initially provides (looking at the current version it certainly
> > > is the case), the backend will lower them to LLVM atomic intrinsics
> > > [1], which ThreadSanitizer instrumentation turns into __tsan_atomic*()
> > > calls.
> > > [1] https://llvm.org/docs/Atomics.html
> > >
> >
> > Besides atomics, the counterpart of READ_ONCE() and WRITE_ONCE() should
> > also be looked into, IOW the core::ptr::{read,write}_volatile()
> > (although I don't think their semantics is completely defined since the
> > memory model of Rust is incomplete). There could easily be cases where
> > Rust-side do writes with lock critical sections while C-side do reads
> > out of the lock critical sections, so Rust-side need to play the
> > volatile game.
> >
> > I'm not sure whether rustc will generate special instrumentation for
> > {read,write}_volatile(), if not, we need to provide something similar to
> > KCSAN does for READ_ONCE() and WRITE_ONCE().
> 
> For volatile (i.e. *ONCE()) KCSAN no longer does anything special.
> This was one of the major compiler changes (-mllvm
> -tsan-distinguish-volatile=1, and similarly for GCC) to get KCSAN
> merged in the end.
> 

Ah, I should have remembered this ;-) Thanks!

Regards,
Boqun

> So if rustc lowers core::ptr::{read,write}_volatile() to volatile in
> LLVM IR (which I assume it does), then everything works as intended,
> and no extra explicit instrumentation is required.
> 
> Thanks,
> -- Marco


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 13:01 Can the Kernel Concurrency Sanitizer Own Rust Code? Marco Elver
  2021-10-07 14:15 ` Boqun Feng
@ 2021-10-07 16:30 ` Paul E. McKenney
  2021-10-07 16:35   ` Marco Elver
  1 sibling, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-07 16:30 UTC (permalink / raw)
  To: Marco Elver; +Cc: kasan-dev, Boqun Feng, rust-for-linux

On Thu, Oct 07, 2021 at 03:01:07PM +0200, Marco Elver wrote:
> Hi Paul,
> 
> Thanks for writing up https://paulmck.livejournal.com/64970.html --
> these were also my thoughts. Similarly for KASAN.
> 
> Sanitizer integration will also, over time, provide quantitative data
> on the rate of bugs in C code, unsafe-Rust, and of course safe-Rust
> code as well as any number of interactions between them once the
> fuzzers are let loose on Rust code.
> 
> Re integrating KCSAN with Rust, this should be doable since rustc does
> support ThreadSanitizer instrumentation:
> https://rustc-dev-guide.rust-lang.org/sanitizers.html
> 
> Just need to pass all the rest of the -mllvm options to rustc as well,
> and ensure it's not attempting to link against compiler-rt. I haven't
> tried, so wouldn't know how it currently behaves.
> 
> Also of importance will be the __tsan_atomic*() instrumentation, which
> KCSAN already provides: my guess is that whatever subset of the LKMM
> Rust initially provides (looking at the current version it certainly
> is the case), the backend will lower them to LLVM atomic intrinsics
> [1], which ThreadSanitizer instrumentation turns into __tsan_atomic*()
> calls.
> [1] https://llvm.org/docs/Atomics.html

May I add this information to the article with attribution?

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 16:30 ` Paul E. McKenney
@ 2021-10-07 16:35   ` Marco Elver
  0 siblings, 0 replies; 39+ messages in thread
From: Marco Elver @ 2021-10-07 16:35 UTC (permalink / raw)
  To: paulmck; +Cc: kasan-dev, Boqun Feng, rust-for-linux

On Thu, 7 Oct 2021 at 18:30, Paul E. McKenney <paulmck@kernel.org> wrote:
[...]
> May I add this information to the article with attribution?

Of course, please go ahead.

Thanks,
-- Marco


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 14:22   ` Marco Elver
  2021-10-07 14:43     ` Boqun Feng
@ 2021-10-07 17:44     ` Miguel Ojeda
  2021-10-07 18:50       ` Paul E. McKenney
  1 sibling, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-07 17:44 UTC (permalink / raw)
  To: Marco Elver; +Cc: Boqun Feng, Paul E. McKenney, kasan-dev, rust-for-linux

On Thu, Oct 7, 2021 at 5:47 PM Marco Elver <elver@google.com> wrote:
>
> So if rustc lowers core::ptr::{read,write}_volatile() to volatile in
> LLVM IR (which I assume it does)

Yeah, it should, e.g. https://godbolt.org/z/hsnozhvc4
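
Roughly this kind of thing (a sketch, not necessarily the exact code
behind that link):

	pub unsafe fn read(p: *const u64) -> u64 {
	    // Should emit a `load volatile i64` in the LLVM IR.
	    core::ptr::read_volatile(p)
	}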

Cheers,
Miguel


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 17:44     ` Miguel Ojeda
@ 2021-10-07 18:50       ` Paul E. McKenney
  2021-10-07 21:42         ` Gary Guo
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-07 18:50 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Thu, Oct 07, 2021 at 07:44:01PM +0200, Miguel Ojeda wrote:
> On Thu, Oct 7, 2021 at 5:47 PM Marco Elver <elver@google.com> wrote:
> >
> > So if rustc lowers core::ptr::{read,write}_volatile() to volatile in
> > LLVM IR (which I assume it does)
> 
> Yeah, it should, e.g. https://godbolt.org/z/hsnozhvc4

I have updated https://paulmck.livejournal.com/64970.html accordingly
(and hopefully correctly), so thank you both!

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 18:50       ` Paul E. McKenney
@ 2021-10-07 21:42         ` Gary Guo
  2021-10-07 22:30           ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Gary Guo @ 2021-10-07 21:42 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Miguel Ojeda, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Thu, 7 Oct 2021 11:50:29 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> I have updated https://paulmck.livejournal.com/64970.html accordingly
> (and hopefully correctly), so thank you both!

The page writes:
> ... and furthermore safe code can violate unsafe code's assumptions as
> long as it is in the same module. For all I know, this last caveat
> might also apply to unsafe code in other modules for kernels built
> with link-time optimizations (LTO) enabled.

This is incorrect.

The statement "safe code can violate unsafe code's assumptions as long
as it is in the same module" is true, but the "module" here means [Rust
module](https://doc.rust-lang.org/reference/items/modules.html) not
kernel module. Module is the encapsulation boundary in Rust, so code
can access things defined in the same module without visibility checks.

So take this file binding as an example,

	struct File {
	    ptr: *mut bindings::file,
	}

	impl File {
	    pub fn pos(&self) -> u64 {
	        unsafe { (*self.ptr).f_pos as u64 }
	    }
	}

The unsafe code assumes ptr is valid. The default visibility is
private, so code in other modules cannot modify ptr directly. But
within the same module file.ptr can be accessed, so code within the
same module can set an invalid ptr and invalidate the assumption.
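
For instance (a sketch building on the `File` above), safe code in the
same module could do:

	// No `unsafe` needed: the private field is visible in this module.
	fn corrupt(f: &mut File) {
	    f.ptr = core::ptr::null_mut();
	}
	// A later call to f.pos() now dereferences a null pointer.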

This is purely syntactic and has nothing to do with code generation
or LTO.

And this caveat can easily be mitigated. In Rust-for-Linux, these
structs have type invariant comments, and we require a comment
asserting that the invariant is upheld whenever these types are
modified or created directly with a struct expression.

- Gary


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 21:42         ` Gary Guo
@ 2021-10-07 22:30           ` Paul E. McKenney
  2021-10-07 23:06             ` Gary Guo
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-07 22:30 UTC (permalink / raw)
  To: Gary Guo; +Cc: Miguel Ojeda, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Thu, Oct 07, 2021 at 10:42:47PM +0100, Gary Guo wrote:
> On Thu, 7 Oct 2021 11:50:29 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > I have updated https://paulmck.livejournal.com/64970.html accordingly
> > (and hopefully correctly), so thank you both!
> 
> The page writes:
> > ... and furthermore safe code can violate unsafe code's assumptions as
> > long as it is in the same module. For all I know, this last caveat
> > might also apply to unsafe code in other modules for kernels built
> > with link-time optimizations (LTO) enabled.
> 
> This is incorrect.
> 
> The statement "safe code can violate unsafe code's assumptions as long
> as it is in the same module" is true, but the "module" here means [Rust
> module](https://doc.rust-lang.org/reference/items/modules.html) not
> kernel module. Module is the encapsulation boundary in Rust, so code
> can access things defined in the same module without visibility checks.

Believe it or not, I actually understood that this had nothing to
do with a modprobe-style kernel module.  ;-)

For C/C++, I would have written "translation unit".  But my guess is that
"Rust module" would work better.

Thoughts?

> So take this file binding as an example,
> 
> 	struct File {
> 	    ptr: *mut bindings::file,
> 	}
> 
> 	impl File {
> 	    pub fn pos(&self) -> u64 {
> 	        unsafe { (*self.ptr).f_pos as u64 }
> 	    }
> 	}
> 
> The unsafe code assume ptr is valid. The default visibility is private,
> so code in other modules cannot modify ptr directly. But within the
> same module file.ptr can be accessed, so code within the same module
> can use an invalid ptr and invalidate assumption.
> 
> This is purely syntactical, and have nothing to do with code generation
> and LTO.
> 
> And this caveat could be easily be mitigated. In Rust-for-linux, these
> structs have type invariant comments, and we require a comment
> asserting that the invariant is upheld whenever these types are
> modified or created directly with struct expression.

And the definition of a module is constrained to be contained within a
given translation unit, correct?

But what prevents unsafe Rust code in one translation unit from violating
the assumptions of safe Rust code in another translation unit, Rust
modules notwithstanding?  Especially if that unsafe code contains a bug?

Finally, are you arguing that LTO cannot under any circumstances inflict a
bug in Rust unsafe code on Rust safe code in some other translation unit?
Or just that if there are no bugs in Rust code (either safe or unsafe),
that LTO cannot possibly introduce any?

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 22:30           ` Paul E. McKenney
@ 2021-10-07 23:06             ` Gary Guo
  2021-10-07 23:42               ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Gary Guo @ 2021-10-07 23:06 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Miguel Ojeda, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Thu, 7 Oct 2021 15:30:10 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> For C/C++, I would have written "translation unit".  But my guess is
> that "Rust module" would work better.
> 
> Thoughts?

A module is not a translation unit in Rust; it is more like a C++
namespace. The translation-unit equivalent in Rust is the crate.

> And the definition of a module is constrained to be contained within a
> given translation unit, correct?

Correct.

> But what prevents unsafe Rust code in one translation unit from
> violating the assumptions of safe Rust code in another translation
> unit, Rust modules notwithstanding?  Especially if that unsafe code
> contains a bug?

Unsafe code can obviously do all sorts of crazy things, hence it's
unsafe :)

However, your article talks about "safe code can violate unsafe
code's assumptions", and that only applies if the two are in the same
Rust module.

When one writes a safe abstraction using unsafe code, they need to
prove that the usage is correct. Most properties used to construct
such a proof would be local type invariants (like `ptr` being a valid,
non-null pointer in the `File` example).

Sometimes the code may rely on invariants of a foreign type that it
depends on (e.g. if I have a `ptr: NonNull<bindings::file>`, then I
would expect `ptr.as_ptr()` to be non-null, and `as_ptr` is indeed
implemented in Rust's libcore as safe code). But safe code in a
*downstream* crate cannot violate upstream unsafe code's assumptions.

> 
> Finally, are you arguing that LTO cannot under any circumstances
> inflict a bug in Rust unsafe code on Rust safe code in some other
> translation unit? Or just that if there are no bugs in Rust code
> (either safe or unsafe), that LTO cannot possibly introduce any?

I don't see why LTO is significant in the argument. Doing LTO or not
wouldn't change the number of bugs. It could make a bug more or less
visible, but buggy code remains buggy and bug-free code remains
bug-free.

If I expose a safe `invoke_ub` function in a translation unit that
internally causes UB using unsafe code, and have another
all-safe-code crate calling it, then the whole program has UB
regardless of whether LTO is enabled.

- Gary


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 23:06             ` Gary Guo
@ 2021-10-07 23:42               ` Paul E. McKenney
  2021-10-07 23:59                 ` Gary Guo
  2021-10-08 19:53                 ` Miguel Ojeda
  0 siblings, 2 replies; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-07 23:42 UTC (permalink / raw)
  To: Gary Guo; +Cc: Miguel Ojeda, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 08, 2021 at 12:06:01AM +0100, Gary Guo wrote:
> On Thu, 7 Oct 2021 15:30:10 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > For C/C++, I would have written "translation unit".  But my guess is
> > that "Rust module" would work better.
> > 
> > Thoughts?
> 
> Module is not a translation unit in Rust, it is more like C++
> namespaces. The translation unit equivalent in Rust is crate.
> 
> > And the definition of a module is constrained to be contained within a
> > given translation unit, correct?
> 
> Correct.

OK, I now have this:

	Both the unsafe Rust code and the C code can interfere with Rust
	non-unsafe code, and furthermore safe code can violate unsafe
	code's assumptions as long as it is in the same module. However,
	please note that a Rust module is a syntactic construct vaguely
	resembling a C++ namespace, and has nothing to do with a kernel
	module or a translation unit.

Is that better?

> > But what prevents unsafe Rust code in one translation unit from
> > violating the assumptions of safe Rust code in another translation
> > unit, Rust modules notwithstanding?  Especially if that unsafe code
> > contains a bug?
> 
> Unsafe code obviously can do all sorts of crazy things and hence
> they're unsafe :)
> 
> However your article is talking about "safe code can violate unsafe
> code's assumptions" and this would only apply if they are in the same
> Rust module.

Understood.  I was instead double-checking the first clause of that
first sentence quoted above.

> When one writes a safe abstraction using unsafe code they need to prove
> that the usage is correct. Most properties used to construct such a
> proof would be a local type invariant (like `ptr` being a valid,
> non-null pointer in `File` example).
> 
> Sometimes the code may rely on invariants of a foreign type that it
> depends on (e.g. If I have a `ptr: NonNull<bindings::file>` then I
> would expect `ptr.as_ptr()` to be non-null, and `as_ptr` is indeed
> implemented in Rust's libcore as safe code. But safe code of a
> *downstream* crate cannot violate upstream unsafe code's assumption.

OK, thank you.

> > Finally, are you arguing that LTO cannot under any circumstances
> > inflict a bug in Rust unsafe code on Rust safe code in some other
> > translation unit? Or just that if there are no bugs in Rust code
> > (either safe or unsafe), that LTO cannot possibly introduce any?
> 
> I don't see why LTO is significant in the argument. Doing LTO or not
> wouldn't change the number of bugs. It could make a bug more or less
> visible, but buggy code remains buggy and bug-free code remains
> bug-free.
> 
> If I have expose a safe `invoke_ub` function in a translation unit that
> internally causes UB using unsafe code, and have another
> all-safe-code crate calling it, then the whole program has UB
> regardless LTO is enabled or not.

Here is the problem we face.  The least buggy project I know of was a
single-threaded safety-critical project that was subjected to stringent
code-style constraints and heavy-duty formal verification.  There was
also a testing phase at the end of the validation process, but any failure
detected by the test was considered to be a critical bug not only against
the software under test, but also against the formal verification phase.

The results were impressive, coming in at about 0.04 bugs per thousand
lines of code (KLoC), that is, about one bug per 25,000 lines of code.

But that is still way more than zero bugs.  And I seriously doubt that
Rust will be anywhere near this level.

A more typical bug rate is about 1-3 bugs per KLoC.

Suppose Rust geometrically splits the difference between the better
end of typical experience (1 bug per KLoC) and that safety-critical
project (again, 0.04 bugs per KLoC), that is to say 0.2 bugs per KLoC.
(The arithmetic mean would give 0.52 bugs per KLoC, so I am being
Rust-optimistic here.)

In a project the size of the Linux kernel, that still works out to some
thousands of bugs.
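
(Worked numbers, where the kernel's size is my own rough assumption:)

	fn main() {
	    let geo_mean = (1.0_f64 * 0.04).sqrt();  // = 0.2 bugs/KLoC
	    let arith_mean = (1.0 + 0.04) / 2.0;     // = 0.52 bugs/KLoC
	    let kernel_kloc = 25_000.0;              // assume ~25 MLoC of kernel
	    println!("{} {} {}", geo_mean, arith_mean, geo_mean * kernel_kloc);
	    // 0.2 bugs/KLoC * 25,000 KLoC = 5,000 bugs, i.e. "some thousands".
	}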

So in the context of the Linux kernel, the propagation of bugs will still
be important, even if the entire kernel were to be converted to Rust.

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 23:42               ` Paul E. McKenney
@ 2021-10-07 23:59                 ` Gary Guo
  2021-10-08  0:27                   ` comex
  2021-10-08 17:40                   ` Paul E. McKenney
  2021-10-08 19:53                 ` Miguel Ojeda
  1 sibling, 2 replies; 39+ messages in thread
From: Gary Guo @ 2021-10-07 23:59 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Miguel Ojeda, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Thu, 7 Oct 2021 16:42:47 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> > I don't see why LTO is significant in the argument. Doing LTO or not
> > wouldn't change the number of bugs. It could make a bug more or less
> > visible, but buggy code remains buggy and bug-free code remains
> > bug-free.
> > 
> > If I have expose a safe `invoke_ub` function in a translation unit
> > that internally causes UB using unsafe code, and have another
> > all-safe-code crate calling it, then the whole program has UB
> > regardless LTO is enabled or not.  
> 
> Here is the problem we face.  The least buggy project I know of was a
> single-threaded safety-critical project that was subjected to
> stringent code-style constraints and heavy-duty formal verification.
> There was also a testing phase at the end of the validation process,
> but any failure detected by the test was considered to be a critical
> bug not only against the software under test, but also against the
> formal verification phase.
> 
> The results were impressive, coming in at about 0.04 bugs per thousand
> lines of code (KLoC), that is, about one bug per 25,000 lines of code.
> 
> But that is still way more than zero bugs.  And I seriously doubt that
> Rust will be anywhere near this level.
> 
> A more typical bug rate is about 1-3 bugs per KLoC.
> 
> Suppose Rust geometrically splits the difference between the better
> end of typical experience (1 bug per KLoC) and that safety-critical
> project (again, 0.04 bugs per KLoC), that is to say 0.2 bugs per KLoC.
> (The arithmetic mean would give 0.52 bugs per KLoC, so I am being
> Rust-optimistic here.)
> 
> In a project the size of the Linux kernel, that still works out to
> some thousands of bugs.
> 
> So in the context of the Linux kernel, the propagation of bugs will
> still be important, even if the entire kernel were to be converted to
> Rust.

There is a distinction between what is considered safe in Rust and what
is considered safe in safety-critical systems. Miguel's LPC talk
(https://youtu.be/ORwYx5_zmZo?t=1749) summarizes this really well. A
large Rust program would no doubt contain bugs, but it is quite
possible that it's UB-free.

I should probably say that doing LTO or not wouldn't make a UB-free
program exhibit UB (assuming LLVM doesn't introduce any during LTO).

- Gary


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 23:59                 ` Gary Guo
@ 2021-10-08  0:27                   ` comex
  2021-10-08 17:40                   ` Paul E. McKenney
  1 sibling, 0 replies; 39+ messages in thread
From: comex @ 2021-10-08  0:27 UTC (permalink / raw)
  To: Gary Guo
  Cc: Paul E. McKenney, Miguel Ojeda, Marco Elver, Boqun Feng,
	kasan-dev, rust-for-linux



> On Oct 7, 2021, at 4:59 PM, Gary Guo <gary@garyguo.net> wrote:
> 
> 
> I should probably say that doing LTO or not wouldn't make a UB-free
> program exhibit UB (assuming LLVM doesn't introduce any during LTO).

Yes, but LTO makes certain types of UB – the types that exist mainly
to enable compiler optimizations – more likely to cause actual
misbehavior.  But that’s no different from C.  Though, Rust code does
tend to make more aggressive use of inlining than C code, and then
there’s the whole saga with noalias…


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 23:59                 ` Gary Guo
  2021-10-08  0:27                   ` comex
@ 2021-10-08 17:40                   ` Paul E. McKenney
  2021-10-08 21:32                     ` Miguel Ojeda
  1 sibling, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-08 17:40 UTC (permalink / raw)
  To: Gary Guo; +Cc: Miguel Ojeda, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 08, 2021 at 12:59:58AM +0100, Gary Guo wrote:
> On Thu, 7 Oct 2021 16:42:47 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > > I don't see why LTO is significant in the argument. Doing LTO or not
> > > wouldn't change the number of bugs. It could make a bug more or less
> > > visible, but buggy code remains buggy and bug-free code remains
> > > bug-free.
> > > 
> > > If I have expose a safe `invoke_ub` function in a translation unit
> > > that internally causes UB using unsafe code, and have another
> > > all-safe-code crate calling it, then the whole program has UB
> > > regardless LTO is enabled or not.  
> > 
> > Here is the problem we face.  The least buggy project I know of was a
> > single-threaded safety-critical project that was subjected to
> > stringent code-style constraints and heavy-duty formal verification.
> > There was also a testing phase at the end of the validation process,
> > but any failure detected by the test was considered to be a critical
> > bug not only against the software under test, but also against the
> > formal verification phase.
> > 
> > The results were impressive, coming in at about 0.04 bugs per thousand
> > lines of code (KLoC), that is, about one bug per 25,000 lines of code.
> > 
> > But that is still way more than zero bugs.  And I seriously doubt that
> > Rust will be anywhere near this level.
> > 
> > A more typical bug rate is about 1-3 bugs per KLoC.
> > 
> > Suppose Rust geometrically splits the difference between the better
> > end of typical experience (1 bug per KLoC) and that safety-critical
> > project (again, 0.04 bugs per KLoC), that is to say 0.2 bugs per KLoC.
> > (The arithmetic mean would give 0.52 bugs per KLoC, so I am being
> > Rust-optimistic here.)
> > 
> > In a project the size of the Linux kernel, that still works out to
> > some thousands of bugs.
> > 
> > So in the context of the Linux kernel, the propagation of bugs will
> > still be important, even if the entire kernel were to be converted to
> > Rust.
> 
> There is a distinction between what is considered safe in Rust and what
> is considered safe in safety-critical systems. Miguel's LPC talk
> (https://youtu.be/ORwYx5_zmZo?t=1749) summarizes this really well. A
> large Rust program would no doubt contain bugs, but it is quite
> possible that it's UB-free.

The only purpose of my above wall of text was to assert that, as you
said, "A large Rust program would no doubt contain bugs", so we are
good on that point.

Just in case there is lingering confusion, my purpose in providing an
example from the field of safety-critical systems was nothing more or
less than to derive an extreme lower bound for the expected bug rate in
production software.  Believe me, there is no way that I am advocating
use of Rust as it currently exists for use in safety-critical systems!
Not that this will necessarily prevent such use, mind you!  ;-)

OK, on to your point about UB-freedom.

From what I have seen, people prevent unsafe Rust code from introducing
UB by adding things such as assertions and proofs of correctness.
Each and every one of those added things has a non-zero probability
of itself containing bugs or mistakes.  Therefore, a Rust program
containing a sufficiently large quantity of unsafe code will with high
probability invoke UB.

Hopefully, a much lower UB-invocation probability than a similar quantity
of C code, but nevertheless, a decidedly non-zero probability.

So what am I missing here?

> I should probably say that doing LTO or not wouldn't make a UB-free
> program exhibit UB (assuming LLVM doesn't introduce any during LTO).

I defer to comex's reply to this.

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-07 23:42               ` Paul E. McKenney
  2021-10-07 23:59                 ` Gary Guo
@ 2021-10-08 19:53                 ` Miguel Ojeda
  2021-10-08 23:57                   ` Paul E. McKenney
  1 sibling, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-08 19:53 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 8, 2021 at 1:42 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> OK, I now have this:
>
>         Both the unsafe Rust code and the C code can interfere with Rust
>         non-unsafe code, and furthermore safe code can violate unsafe
>         code's assumptions as long as it is in the same module. However,
>         please note that a Rust module is a syntactic construct vaguely
>         resembling a C++ namespace, and has nothing to do with a kernel
>         module or a translation unit.
>
> Is that better?

For someone new to Rust, I think the paragraph may be hard to make
sense of, and there are several ways to read it.

For instance, safe code "can" violate unsafe code's assumptions in the
same module, but then it just means the module is buggy/unsound.

But if we are talking about buggy/unsound modules, then even safe code
outside the module may be able to violate the module's assumptions
too.

Instead, it is easier to talk about what Rust aims to guarantee: that
if libraries containing unsafe code are sound, then outside safe code
cannot subvert them to introduce UB.

Thus it is a conditional promise. But it is a powerful one. The point
is not that libraries may be subverted if there is a bug in them, but
that they cannot be subverted if they are correct.

As an example, take `std::vector` from C++. Correct usage of
`std::vector` will not trigger UB (as long as `std::vector` is
non-buggy). Rust aims to guarantee something extra: that even
*incorrect* safe code using `Vec` will not be able to trigger UB (as
long as `Vec` and other abstractions are non-buggy).
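
A small sketch of that difference:

	fn oops(v: &[i32]) -> i32 {
	    // Incorrect safe code: the index may well be out of bounds.
	    // In Rust this panics deterministically instead of becoming UB
	    // (as long as the indexing machinery itself is non-buggy); the
	    // C++ counterpart, v[100] on a std::vector, would be UB.
	    v[100]
	}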

As you see, the condition "as long as X is non-buggy" remains. But
that is OK -- it does not mean encapsulation is useless: it still
makes it possible to effectively contain UB.

Put another way, C and C++ APIs are the trivial / reduced case for
what Rust aims to guarantee. For instance, we can think of C++
`std::vector` as a Rust type where every method is marked as `unsafe`.
As such, Rust would be able to provide its guarantee vacuously --
there are no safe APIs to call to begin with.

To be clear, this "incorrect" usage includes maliciously-written safe
code. So it even has some merits as an "extra layer of protection"
against Minnesota-style or "Underhanded C Contest"-style code (at
least regarding vulnerabilities that exploit UB).

Cheers,
Miguel


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-08 17:40                   ` Paul E. McKenney
@ 2021-10-08 21:32                     ` Miguel Ojeda
  2021-10-09  0:08                       ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-08 21:32 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 8, 2021 at 7:40 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> Just in case there is lingering confusion, my purpose in providing an
> example from the field of safety-critical systems was nothing more or
> less than to derive an extreme lower bound for the expected bug rate in

Yes, safety-critical systems usually have a lower rate of bugs, but
they can actually be very buggy as long as they comply with the
requirements... :P

> production software.  Believe me, there is no way that I am advocating
> use of Rust as it currently exists for use in safety-critical systems!
> Not that this will necessarily prevent such use, mind you!  ;-)

Well, people are already working on bringing Rust to safety-critical domains! :)

In any case, for example, DO-178 describes the software development
process, but does not require a particular language to be used even if
a particular project following that standard may do so.

> From what I have seen, people prevent unsafe Rust code from introducing
> UB by adding things, for example assertions and proofs of correctness.
> Each and every one of those added things have a non-zero probability
> of themselves containing bugs or mistakes.  Therefore, a Rust program
> containing a sufficiently large quantity of unsafe code will with high
> probability invoke UB.
>
> Hopefully, a much lower UB-invocation probability than a similar quantity
> of C code, but nevertheless, a decidedly non-zero probability.
>
> So what am I missing here?

Rust does not guarantee UB-freedom in an absolute way -- after all,
there is unsafe code in the standard library, we have unsafe code in
the kernel abstractions, the compiler may have bugs, the hardware may
misbehave, there may be a single-event upset, etc.

However, the key is to understand Rust as a way to minimize unsafe
code, and therefore minimize the chances of UB happening.

Let's take an example: we need to dereference a pointer 10 times in a
driver. And 10 more times in another driver. We may do it by writing
`unsafe` many times in every driver and checking that every single
usage does not trigger UB. This is fine, and we can write Rust code
like that, but it is not buying us much. And, as you say, if we keep
accumulating those dereferences, the probability of a mistake grows
and grows.

Instead, we could write an abstraction that provides a safe way to do
the same thing. Then we can focus our efforts on checking the
abstraction, and reuse it everywhere, in all drivers.
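
A minimal sketch of what such an abstraction could look like (the
names here are made up):

	/// INVARIANT: `ptr` is valid for reads for the lifetime of the wrapper.
	pub struct Mapped {
	    ptr: *const u32,
	}

	impl Mapped {
	    /// SAFETY: the caller must guarantee that `ptr` stays valid for
	    /// reads for as long as the returned `Mapped` is used.
	    pub unsafe fn new(ptr: *const u32) -> Self {
	        Self { ptr }
	    }

	    /// Safe to call: the type invariant guarantees `ptr` is valid.
	    pub fn read(&self) -> u32 {
	        // SAFETY: by the type invariant, `ptr` is valid for reads.
	        unsafe { core::ptr::read_volatile(self.ptr) }
	    }
	}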

That abstraction does not guarantee there is no UB -- after all, it
may have a bug, or someone else may corrupt our memory, or the
hardware may have a bug, etc. However, that abstraction is promising
that, as long as there is no other UB subverting it, then it will not
allow safe code to create UB.

Therefore, as a driver writer, as long as I keep writing only safe
code, I do not have to care about introducing UB. As a reviewer, if
the driver does not contain unsafe code, I don't need to worry about
any UB either. If UB is actually introduced, then the bug is in the
abstractions, not the safe driver.

Thus we are reducing the number of places where we risk using a
potentially-UB operation.

Cheers,
Miguel


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-08 19:53                 ` Miguel Ojeda
@ 2021-10-08 23:57                   ` Paul E. McKenney
  2021-10-09 16:30                     ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-08 23:57 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 08, 2021 at 09:53:34PM +0200, Miguel Ojeda wrote:
> On Fri, Oct 8, 2021 at 1:42 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > OK, I now have this:
> >
> >         Both the unsafe Rust code and the C code can interfere with Rust
> >         non-unsafe code, and furthermore safe code can violate unsafe
> >         code's assumptions as long as it is in the same module. However,
> >         please note that a Rust module is a syntactic construct vaguely
> >         resembling a C++ namespace, and has nothing to do with a kernel
> >         module or a translation unit.
> >
> > Is that better?
> 
> For someone new to Rust, I think the paragraph may be hard to make
> sense of, and there are several ways to read it.
> 
> For instance, safe code "can" violate unsafe code's assumptions in the
> same module, but then it just means the module is buggy/unsound.
> 
> But if we are talking about buggy/unsound modules, then even safe code
> outside the module may be able to violate the module's assumptions
> too.
> 
> Instead, it is easier to talk about what Rust aims to guarantee: that
> if libraries containing unsafe code are sound, then outside safe code
> cannot subvert them to introduce UB.
> 
> Thus it is a conditional promise. But it is a powerful one. The point
> is not that libraries may be subverted if there is a bug in them, but
> that they cannot be subverted if they are correct.

But some other library could have a wild-pointer bug in unsafe Rust code
or in C code, correct?  And such a bug could subvert a rather wide range
of code, including that of correct libraries, right?  If I am wrong,
please tell me what Rust is doing to provide the additional protection.

> As an example, take `std::vector` from C++. Correct usage of
> `std::vector` will not trigger UB (as long as `std::vector` is
> non-buggy). Rust aims to guarantee something extra: that even
> *incorrect* safe code using `Vec` will not be able to trigger UB (as
> long as `Vec` and other abstractions are non-buggy).
> 
> As you see, the condition "as long as X is non-buggy" remains. But
> that is OK -- it does not mean encapsulation is useless: it still
> allows to effectively contain UB.

I would like to believe that, but I have seen too many cases where
UB propagates far and wide.  :-(

> Put another way, C and C++ APIs are the trivial / reduced case for
> what Rust aims to guarantee. For instance, we can think of C++
> `std::vector` as a Rust type where every method is marked as `unsafe`.
> As such, Rust would be able to provide its guarantee vacuously --
> there are no safe APIs to call to begin with.

Believe me, I am not arguing that C code is safer than Rust code,
not even than Rust unsafe code.

> To be clear, this "incorrect" usage includes maliciously-written safe
> code. So it even has some merits as an "extra layer of protection"
> against Minnesota-style or "Underhanded C Contest"-style code (at
> least regarding vulnerabilities that exploit UB).

Except that all too many compiler writers are actively looking for more
UB to exploit.  So this would be a difficult moving target.

Let me see if I can summarize with a bit of interpretation...

1.	Rust modules are a pointless distraction here.	Unless you object,
	I will remove all mention of them from this blog series.

2.	Safe Rust code might have bugs, as might any other code.

	For example, even if Linux-kernel RCU were to somehow be rewritten
	into Rust with no unsafe code whatsoever, there is not a verifier
	alive today that is going to realize that changing the value of
	RCU_JIFFIES_FQS_DIV from 256 to (say) 16 is a really bad idea.
	Nevertheless, RCU's users would not likely suffer in silence
	after seeing the greatly extended RCU grace periods, which in
	some cases could result in OOMing the system.

3.	Correctly written unsafe Rust code defends itself (and the safe
	code invoking it) from misuse.  And presumably the same applies
	for wrappers written for C code, given that there is probably
	an "unsafe" lurking somewhere in such wrappers.

4.	Rust's safety properties are focused more on UB in particular
	than on bugs in general.

And one final thing to keep in mind...  If I turn this blog series into
a rosy hymn to Rust, nobody is going to believe it.  ;-)

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-08 21:32                     ` Miguel Ojeda
@ 2021-10-09  0:08                       ` Paul E. McKenney
  2021-10-09 16:31                         ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-09  0:08 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 08, 2021 at 11:32:34PM +0200, Miguel Ojeda wrote:
> On Fri, Oct 8, 2021 at 7:40 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > Just in case there is lingering confusion, my purpose in providing an
> > example from the field of safety-critical systems was nothing more or
> > less than to derive an extreme lower bound for the expected bug rate in
> 
> Yes, safety-critical systems usually have lower rate of bugs, but they
> can actually be very buggy as long as they comply with requirements...
> :P

If it complies with requirements, is it really a bug?  And while we are
at it, I need to make an insignificant change to those requirements.  ;-)

> > production software.  Believe me, there is no way that I am advocating
> > use of Rust as it currently exists for use in safety-critical systems!
> > Not that this will necessarily prevent such use, mind you!  ;-)
> 
> Well, people are already working on bringing Rust to safety-critical domains! :)

Hey, they have been using C for quite some time!  In at least some cases,
with the assistance of formal verification tooling that takes the C code
as input (cbmc, for example).

> In any case, for example, DO-178 describes the software development
> process, but does not require a particular language to be used even if
> a particular project following that standard may do so.

And how many of those boxes are ticked by the usual open-source processes?
Nicholas Mc Guire talks about this from time to time.

One challenge for use of Rust in my previous work with similar standards
would be repeatability.  It would be necessary to carefully identify and
archive the Rust compiler.

> > From what I have seen, people prevent unsafe Rust code from introducing
> > UB by adding things, for example assertions and proofs of correctness.
> > Each and every one of those added things have a non-zero probability
> > of themselves containing bugs or mistakes.  Therefore, a Rust program
> > containing a sufficiently large quantity of unsafe code will with high
> > probability invoke UB.
> >
> > Hopefully, a much lower UB-invocation probability than a similar quantity
> > of C code, but nevertheless, a decidedly non-zero probability.
> >
> > So what am I missing here?
> 
> Rust does not guarantee UB-freedom in an absolute way -- after all,
> there is unsafe code in the standard library, we have unsafe code in
> the kernel abstractions, the compiler may have bugs, the hardware may
> misbehave, there may be a single-event upset, etc.
> 
> However, the key is to understand Rust as a way to minimize unsafe
> code, and therefore minimize the chances of UB happening.
> 
> Let's take an example: we need to dereference a pointer 10 times in a
> driver. And 10 more times in another driver. We may do it writing
> `unsafe` many times in every driver, and checking that every single
> usage does not trigger UB. This is fine, and we can write Rust code
> like that, but is not buying us much. And, as you say, if we keep
> accumulating those dereferences, the probability of a mistake grows
> and grows.

The real fun in device drivers is the MMIO references, along with the
IOMMU, the occasional cache-incoherent device, and so on.

> Instead, we could write an abstraction that provides a safe way to do
> the same thing. Then we can focus our efforts in checking the
> abstraction, and reuse it everywhere, in all drivers.
> 
> That abstraction does not guarantee there is no UB -- after all, it
> may have a bug, or someone else may corrupt our memory, or the
> hardware may have a bug, etc. However, that abstraction is promising
> that, as long as there is no other UB subverting it, then it will not
> allow safe code to create UB.
> 
> Therefore, as a driver writer, as long as I keep writing only safe
> code, I do not have to care about introducing UB. As a reviewer, if
> the driver does not contain unsafe code, I don't need to worry about
> any UB either. If UB is actually introduced, then the bug is in the
> abstractions, not the safe driver.
> 
> Thus we are reducing the amount of places where we risk using a
> potentially-UB operation.

So Rust is an attempt to let the compiler writers have their UB while
inflicting at least somewhat less inconvenience on those of us poor
fools using the resulting compilers?  If so, I predict that the compiler
writers will work hard to exploit additional UB until such time as Rust
is at least as unsound as the C language currently is.

Sorry, but you did leave yourself wide open for that one!!!  ;-)

							Thanx, Paul


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-08 23:57                   ` Paul E. McKenney
@ 2021-10-09 16:30                     ` Miguel Ojeda
  2021-10-09 23:48                       ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-09 16:30 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Sat, Oct 9, 2021 at 1:57 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> But some other library could have a wild-pointer bug in unsafe Rust code
> or in C code, correct?  And such a bug could subvert a rather wide range

Indeed, but that would require a bug somewhere in unsafe Rust code --
safe Rust code cannot do so on its own. That is why I mentioned
"outside safe code".

> of code, including that of correct libraries, right?  If I am wrong,
> please tell me what Rust is doing to provide the additional protection.

Of course, an unsafe code bug, or C code going wild, or a compiler
bug, or a hardware bug, or a single-event upset etc. can subvert
everything (see the other reply).

This is why I emphasize that the guarantees Rust aims to provide are
conditional to all that. After all, it is just a language -- there is
no way it could make a system (including hardware) immune to that.

> I would like to believe that, but I have seen too many cases where
> UB propagates far and wide.  :-(

To be clear, the "effectively contain UB" above did not imply that
Rust somehow prevents UB from breaking everything if it actually
happens (this relates to the previous point). It means that, as a
tool, it seems to be an effective way to write fewer UB-related bugs
compared to using languages like C.

In other words, UB-related bugs can definitely still happen, but the
idea is to reduce the number of issues involving UB as much as
possible by reducing the amount of code we need to write that
requires potentially-UB operations. So it is a matter of reducing the
probabilities you mentioned -- but Rust alone will not make them zero
nor guarantee the absence of UB in an absolute manner.

> Except that all too many compiler writers are actively looking for more
> UB to exploit.  So this would be a difficult moving target.

If you mean it in the sense of C and C++ (i.e. where it is easy to
trigger UB without realizing it, because the optimizer may not take
advantage of that today but may actually take advantage of it
tomorrow), then in safe Rust that would be a bug.

That is, such a bug may be in the compiler frontend, it may be a bug
in LLVM, or in the language spec, or in the stdlib, or in our own
unsafe code in the kernel, etc. But ultimately, it would be considered
a bug.

The idea is that the safe subset of Rust does not allow you to write
UB at all, whatever you write. So, for instance, no optimizer (whether
today's version or tomorrow's version) will be able to break your code
(again, assuming no bugs in the optimizer etc.).

This is in contrast with C (or unsafe Rust!), where we not only have
the risk of compiler bugs, as in safe Rust, but also all the UB
landmines in the language itself that correct optimizers can exploit
(assuming we agree on what is "legal" per the standard, which is a
whole other discussion).

> Let me see if I can summarize with a bit of interpretation...
>
> 1.      Rust modules are a pointless distraction here.  Unless you object,
>         I will remove all mention of them from this blog series.

I agree it is best to omit them. However, it is not that Rust modules
are irrelevant/unrelated to the safety story in Rust, but for
newcomers to Rust, I think it is a detail that can easily mislead
them.

> 2.      Safe Rust code might have bugs, as might any other code.
>
>         For example, even if Linux-kernel RCU were to somehow be rewritten
>         into Rust with no unsafe code whatsoever, there is not a verifier
>         alive today that is going to realize that changing the value of
>         RCU_JIFFIES_FQS_DIV from 256 to (say) 16 is a really bad idea.

Definitely: logic bugs are not prevented by safe Rust.

It may reduce the chances of logic bugs compared to C though (e.g.
through its stricter type system etc.), but this is another topic,
mostly unrelated to the safety/UB discussion.

> 3.      Correctly written unsafe Rust code defends itself (and the safe
>         code invoking it) from misuse.  And presumably the same applies
>         for wrappers written for C code, given that there is probably
>         an "unsafe" lurking somewhere in such wrappers.

Yes. And definitely, calling C code is unsafe, since C code does not
have a way to promise in its signature that it is safe.
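
For example (a sketch; `frob` is a made-up C function):

	extern "C" {
	    // rustc cannot check what the C side does, so every call to
	    // this is necessarily `unsafe`.
	    fn frob(x: i32) -> i32;
	}

	fn call_frob(x: i32) -> i32 {
	    // SAFETY: relies entirely on whatever contract the C function
	    // documents -- nothing in the signature encodes it.
	    unsafe { frob(x) }
	}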

> 4.      Rust's safety properties are focused more on UB in particular
>         than on bugs in general.

Yes, safety in Rust is all about UB, not logic bugs.

This does not mean that Rust was not designed to try to minimize logic
bugs too, of course, but that is another discussion.

> And one final thing to keep in mind...  If I turn this blog series into
> a rosy hymn to Rust, nobody is going to believe it.  ;-)

I understand :)

As a personal note: I am trying my best to give a fair assessment of
Rust for the kernel, and trying hard to describe what Rust actually
aims to guarantee and what it does not. I do not enjoy it when Rust is
portrayed as the solution to every single problem -- it does not solve
all issues, at all. But I think it is a big enough improvement to be
seriously considered for kernel development.

Cheers,
Miguel


* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-09  0:08                       ` Paul E. McKenney
@ 2021-10-09 16:31                         ` Miguel Ojeda
  2021-10-09 23:59                           ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-09 16:31 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Sat, Oct 9, 2021 at 2:08 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> If it complies with requirements, is it really a bug?  And while we are
> at it, I need to make an insignificant change to those requirements.  ;-)
>
> Hey, they have been using C for quite some time!  In at least some cases,
> with the assistance of formal verification tooling that takes the C code
> as input (cbmc, for example).

Indeed, for assurance levels that require that kind of verification,
there is a need for that kind of tooling for Rust.

> And how many of those boxes are ticked by the usual open-source processes?
> Nicholas Mc Guire talks about this from time to time.
>
> One challenge for use of Rust in my previous work with similar standards
> would be repeatability.  It would be necessary to carefully identify and
> archive the Rust compiler.

This may be open for interpretation, but I am aware of safety-critical
projects having used open-source compilers (e.g. GCC) and passing
certification (in at least some assurance levels).

Of course, in any case, companies looking to certify a system will not
jump right away into Rust because there are many other things to
consider: previous experience certifying, existence of tools, etc. and
all their implications in cost.

> So Rust is an attempt to let the compiler writers have their UB while
> inflicting at least somewhat less inconvenience on those of us poor
> fools using the resulting compilers?  If so, I predict that the compiler

You can see Rust as a way to "tame" C and C++, yes ;D

More seriously, users of Rust also take advantage of it, not just
compiler writers. For instance, unsafe code is used all the time to
implement all sorts of data structures in a performant way, while
still giving callers a safe interface.

There is also the angle about using `unsafe` even in "normal code" as
an escape hatch when you really need the performance (e.g. to avoid a
runtime check you can show it always holds).

The key idea is to encapsulate and minimize all that, and keep most of
the code (e.g. drivers) within the safe subset while still taking
advantage of the performance potentially-UB operations give us.
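
To make the "encapsulate and minimize" point concrete, here is a small,
self-contained sketch in plain Rust (the type and method names are
illustrative, not from the kernel crate): the only `unsafe` block hides
behind a safe method whose invariant justifies skipping the runtime
check, so callers never write `unsafe` themselves.

    /// A table that is guaranteed to be non-empty by construction.
    struct NonEmptyTable {
        entries: Vec<u32>,
    }

    impl NonEmptyTable {
        fn new(first: u32) -> Self {
            NonEmptyTable { entries: vec![first] }
        }

        fn push(&mut self, value: u32) {
            self.entries.push(value);
        }

        /// Safe interface that skips the bounds check `self.entries[0]`
        /// would perform, relying on the type invariant instead.
        fn first(&self) -> u32 {
            // SAFETY: the only constructor inserts one entry and nothing
            // ever removes entries, so index 0 is always in bounds.
            unsafe { *self.entries.get_unchecked(0) }
        }
    }

    fn main() {
        let mut table = NonEmptyTable::new(7);
        table.push(9);
        assert_eq!(table.first(), 7);
    }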

> writers will work hard to exploit additional UB until such time as Rust
> is at least as unsound as the C language currently is.

Rust has defined both the language and the compiler frontend so far,
thus it is also its own compiler writer here (ignoring alternative
compilers, which are very welcome). So it is in a good position to
argue with itself about what should be UB ;)

Now, of course, the Rust compiler writers have to ensure to abide by
LLVM's UB semantics when they lower code (and similarly for
alternative backends). But this is a different layer of UB, one that
frontend writers are responsible for, not the Rust one, which is the
one we care about for writing unsafe code.

Nevertheless, in the layer we care about, it would be nice to see the
unsafe Rust semantics defined as precisely as possible -- and there is
work to do there (as well as an opportunity).

(In any case, to be clear, this all is about unsafe Rust -- for safe
Rust, it has to show no UB modulo bugs in optimizers, libraries,
hardware, etc. -- see my other email about this. Furthermore, even if
there comes a time when Rust has a standard, the safe Rust subset should
still not allow any UB).

> Sorry, but you did leave yourself wide open for that one!!!  ;-)

No worries :) I appreciate that you raise all these points, and I hope
it clarifies things for others with the same questions.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-09 16:30                     ` Miguel Ojeda
@ 2021-10-09 23:48                       ` Paul E. McKenney
  2021-10-11  0:59                         ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-09 23:48 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Sat, Oct 09, 2021 at 06:30:10PM +0200, Miguel Ojeda wrote:
> On Sat, Oct 9, 2021 at 1:57 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > But some other library could have a wild-pointer bug in unsafe Rust code
> > or in C code, correct?  And such a bug could subvert a rather wide range
> 
> Indeed, but that would require a bug somewhere in unsafe Rust code --
> safe Rust code cannot do so on its own. That is why I mentioned
> "outside safe code".

Understood.

> > of code, including that of correct libraries, right?  If I am wrong,
> > please tell me what Rust is doing to provide the additional protection.
> 
> Of course, an unsafe code bug, or C code going wild, or a compiler
> bug, or a hardware bug, or a single-event upset etc. can subvert
> everything (see the other reply).
> 
> This is why I emphasize that the guarantees Rust aims to provide are
> conditional to all that. After all, it is just a language -- there is
> no way it could make a system (including hardware) immune to that.

And understood here as well.

> > I would like to believe that, but I have seen too many cases where
> > UB propagates far and wide.  :-(
> 
> To be clear, the "effectively contain UB" above did not imply that
> Rust somehow prevents UB from breaking everything if it actually
> happens (this relates to the previous point). It means that, as a
> tool, it seems to be an effective way to write less UB-related bugs
> compared to using languages like C.
> 
> In other words, UB-related bugs can definitely still happen, but the
> idea is to reduce the amount of issues involving UB as much as
> possible via reducing the amount of code that we need to write that
> requires potentially-UB operations. So it is a matter of reducing the
> probabilities you mentioned -- but Rust alone will not make them zero
> nor guarantee no UB in an absolute manner.

And understood here, too.

> > Except that all too many compiler writers are actively looking for more
> > UB to exploit.  So this would be a difficult moving target.
> 
> If you mean it in the sense of C and C++ (i.e. where it is easy to
> trigger UB without realizing it because the optimizer may not take
> advantage of that today, but may actually take advantage of it
> tomorrow); then in safe Rust that would be a bug.
> 
> That is, such a bug may be in the compiler frontend, it may be a bug
> in LLVM, or in the language spec, or in the stdlib, or in our own
> unsafe code in the kernel, etc. But ultimately, it would be considered
> a bug.
> 
> The idea is that the safe subset of Rust does not allow you to write
> UB at all, whatever you write. So, for instance, no optimizer (whether
> today's version or tomorrow's version) will be able to break your code
> (again, assuming no bugs in the optimizer etc.).
> 
> This is in contrast with C (or unsafe Rust!), where not only we have
> the risk of compiler bugs like in safe Rust, but also all the UB
> landmines in the language itself that correct optimizers can exploit
> (assuming we agreed what is "legal" by the standard, which is a whole
> another discussion).

As long as a significant number of compiler writers evaluate themselves by
improved optimization, they will be working hard to create additional UB
opportunities.  From what you say above, their doing so has the potential
to generate bugs in the Rust compiler.  Suppose this happens ten years
from now.  Do you propose to force rework not just the compiler, but
large quantities of Rust code that might have been written by that time?

> > Let me see if I can summarize with a bit of interpretation...
> >
> > 1.      Rust modules are a pointless distraction here.  Unless you object,
> >         I will remove all mention of them from this blog series.
> 
> I agree it is best to omit them. However, it is not that Rust modules
> are irrelevant/unrelated to the safety story in Rust, but for
> newcomers to Rust, I think it is a detail that can easily mislead
> them.

Plus the connection to a Rust memory model is not all that strong.

> > 2.      Safe Rust code might have bugs, as might any other code.
> >
> >         For example, even if Linux-kernel RCU were to somehow be rewritten
> >         into Rust with no unsafe code whatsoever, there is not a verifier
> >         alive today that is going to realize that changing the value of
> >         RCU_JIFFIES_FQS_DIV from 256 to (say) 16 is a really bad idea.
> 
> Definitely: logic bugs are not prevented by safe Rust.
> 
> It may reduce the chances of logic bugs compared to C though (e.g.
> through its stricter type system etc.), but this is another topic,
> mostly unrelated to the safety/UB discussion.

The thing is that you have still not convinced me that UB is all that
separate of a category from logic bugs, especially given that either
can generate the other.

> > 3.      Correctly written unsafe Rust code defends itself (and the safe
> >         code invoking it) from misuse.  And presumably the same applies
> >         for wrappers written for C code, given that there is probably
> >         an "unsafe" lurking somewhere in such wrappers.
> 
> Yes. And definitely, calling C code is unsafe, since C code does not
> have a way to promise in its signature that it is safe.

Hence the Rust-unsafe wrappering for C code, presumably.

> > 4.      Rust's safety properties are focused more on UB in particular
> >         than on bugs in general.
> 
> Yes, safety in Rust is all about UB, not logic bugs.
> 
> This does not mean that Rust was not designed to try to minimize logic
> bugs too, of course, but that is another discussion.

This focus on UB surprises me.  Unless the goal is mainly comfort for
compiler writers looking for more UB to "optimize".  ;-)

> > And one final thing to keep in mind...  If I turn this blog series into
> > a rosy hymn to Rust, nobody is going to believe it.  ;-)
> 
> I understand :)
> 
> As a personal note: I am trying my best to give a fair assessment of
> Rust for the kernel, and trying hard to describe what Rust actually
> aims to guarantee and what not. I do not enjoy when Rust is portrayed
> as the solution to every single problem -- it does not solve all
> issues, at all. But I think it is a big enough improvement to be
> seriously considered for kernel development.

It will be interesting to see how the experiment plays out.  And to
be sure, part of my skepticism is the fact that UB is rarely (if ever)
the cause of my Linux-kernel RCU bugs.  But the other option that the
kernel uses is gcc and clang/LLVM flags to cause the compiler to define
standard-C UB, one example being signed integer overflow.

But my main official focus is of course the memory model.

						Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-09 16:31                         ` Miguel Ojeda
@ 2021-10-09 23:59                           ` Paul E. McKenney
  2021-10-11  1:24                             ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-09 23:59 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Sat, Oct 09, 2021 at 06:31:06PM +0200, Miguel Ojeda wrote:
> On Sat, Oct 9, 2021 at 2:08 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > If it complies with requirements, is it really a bug?  And while we are
> > at it, I need to make an insignificant change to those requirements.  ;-)
> >
> > Hey, they have been using C for quite some time!  In at least some cases,
> > with the assistance of formal verification tooling that takes the C code
> > as input (cbmc, for example).
> 
> Indeed, for assurance levels that require that kind of verification,
> there is a need for that kind of tooling for Rust.
> 
> > And how many of those boxes are ticked by the usual open-source processes?
> > Nicholas Mc Guire talks about this from time to time.
> >
> > One challenge for use of Rust in my previous work with similar standards
> > would be repeatability.  It would be necessary to carefully identify and
> > archive the Rust compiler.
> 
> This may be open for interpretation, but I am aware of safety-critical
> projects having used open-source compilers (e.g. GCC) and passing
> certification (in at least some assurance levels).
> 
> Of course, in any case, companies looking to certify a system will not
> jump right away into Rust because there are many other things to
> consider: previous experience certifying, existence of tools, etc. and
> all their implications in cost.

The advantage that GCC and Clang/LLVM have is that you can simply say
"CentOS vx.yy" and define the full distro in an organized manner, for
a reasonably old and trusted distro version.  Perhaps Rust is already
there, but some have led me to believe that the safety-critical project
would need to take on some of the job of a Linux distribution.

Which they most definitely can do, if they so choose and properly document
with proper approvals.  Which should not be that much of a problem to
make happen.

> > So Rust is an attempt to let the compiler writers have their UB while
> > inflicting at least somewhat less inconvenience on those of us poor
> > fools using the resulting compilers?  If so, I predict that the compiler
> 
> You can see Rust as a way to "tame" C and C++, yes ;D

How about instead taming the people writing insane optimizations?  ;-)

> More seriously, users of Rust also take advantage of it, not just
> compiler writers. For instance, unsafe code is used all the time to
> implement all sorts of data structures in a performant way, while
> still giving callers a safe interface.
> 
> There is also the angle about using `unsafe` even in "normal code" as
> an escape hatch when you really need the performance (e.g. to avoid a
> runtime check you can show it always holds).
> 
> The key idea is to encapsulate and minimize all that, and keep most of
> the code (e.g. drivers) within the safe subset while still taking
> advantage of the performance potentially-UB operations give us.

Nice spin.  ;-)

> > writers will work hard to exploit additional UB until such time as Rust
> > is at least as unsound as the C language currently is.
> 
> Rust has defined both the language and the compiler frontend so far,
> thus it is also its own compiler writer here (ignoring alternative
> compilers, which are very welcome). So it is in a good position to
> argue with itself about what should be UB ;)
> 
> Now, of course, the Rust compiler writers have to ensure to abide by
> LLVM's UB semantics when they lower code (and similarly for
> alternative backends). But this is a different layer of UB, one that
> frontend writers are responsible for, not the Rust one, which is the
> one we care about for writing unsafe code.
> 
> Nevertheless, in the layer we care about, it would be nice to see the
> unsafe Rust semantics defined as precisely as possible -- and there is
> work to do there (as well as an opportunity).
> 
> (In any case, to be clear, this all is about unsafe Rust -- for safe
> Rust, it has to show no UB modulo bugs in optimizers, libraries,
> hardware, etc. -- see my other email about this. Furthermore, even if
> there comes a time when Rust has a standard, the safe Rust subset should
> still not allow any UB).

In the near term, you are constrained by the existing compiler backends,
which contain a bunch of optimizations that are and will continue to limit
what you can do.  Longer term, you could write your own backend, or rework
the existing backends, but are all of you really interested in doing that?

The current ownership model is also an interesting constraint, witness
the comments on the sequence locking post.  That said, I completely
understand how the ownership model is a powerful tool that can do an
extremely good job of keeping concurrency novices out of trouble.

> > Sorry, but you did leave yourself wide open for that one!!!  ;-)
> 
> No worries :) I appreciate that you raise all these points, and I hope
> it clarifies things for others with the same questions.

Here is hoping!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-09 23:48                       ` Paul E. McKenney
@ 2021-10-11  0:59                         ` Miguel Ojeda
  2021-10-11 18:52                           ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-11  0:59 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Sun, Oct 10, 2021 at 1:48 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> As long as a significant number of compiler writers evaluate themselves by
> improved optimization, they will be working hard to create additional UB
> opportunities.  From what you say above, their doing so has the potential

Compiler writers definitely try to take advantage of as much UB as
possible to improve optimization, but I would not call that creating
additional UB opportunities. The opportunities are already there,
created by the standards/committees in the case of C and the
RFCs/teams in the case of unsafe Rust.

Of course, compiler writers may be stretching the intention and/or
the ambiguities too much, and there is the whole question of whether
UB was/is supposed to allow unbounded consequences, which WG14 is
discussing in the recently created UBSG.

But I touch on this to emphasize that, even in unsafe Rust, compiler
writers are not completely free to do whatever they want (even if they
completely disregarded their users and existing code bases) and that
C/unsafe Rust also share part of the responsibility (as languages) to
define clearly what is allowed and what is not. So unsafe Rust is in a
similar position to C here (though not equal).

> to generate bugs in the Rust compiler.  Suppose this happens ten years

I am not sure what you mean by bugs in the Rust compiler. If the
compiler is following what unsafe Rust designers asked for, then it
wouldn't be a bug. Whether those semantics are what we want as users,
of course, is a different matter, but we should talk in that case with
the language people (see the previous point).

> from now.  Do you propose to force rework not just the compiler, but
> large quantities of Rust code that might have been written by that time?

No, but I am not sure where you are coming from.

If your concern is that the unsafe Rust code we write today in the
kernel may be broken in 10 years because the language changed the
semantics, then this is a real concern if we are writing unsafe code
that relies on yet-to-be-defined semantics. Of course, we should avoid
doing that just yet. This is why I hope to see more work on the Rust
reference etc. -- an independent implementation like the upcoming GCC
Rust may prove very useful for this.

Now, even if we do use subtle semantics that may not be clear yet,
upstream Rust should not be happy to break the kernel (just like ISO C
and GCC/Clang should not be). At least, they seem quite careful about
this. For instance, when they consider it necessary, upstream Rust
compiles and/or runs the tests of huge numbers of open-source
libraries [1], e.g. [2]. It would be ideal to have the kernel
integrated into those "crater runs" even if we are not a normal crate.

[1] https://rustc-dev-guide.rust-lang.org/tests/intro.html#crater
[2] https://crater-reports.s3.amazonaws.com/beta-1.56-1/index.html

> The thing is that you have still not convinced me that UB is all that
> separate of a category from logic bugs, especially given that either
> can generate the other.

Logic bugs in safe Rust cannot trigger UB as long as those conditions
we discussed apply. Thus, in that sense, they are separate in Rust.

But even in C, we can see it from the angle that triggering UB means
the compiler output cannot be "trusted" anymore (assuming we use the
definition of UB that compiler writers like to use but that not
everybody in the committee agrees with). While with logic bugs, even
with optimizations applied, the output still has to be consistent with
the input (in terms of observable behavior). For instance, the
compiler returning -38 here (https://godbolt.org/z/Pa8TWjY9a):

    #include <string.h>

    int f(void) {
        const unsigned char s = 42;
        _Bool d;
        memcpy(&d, &s, 1); /* 42 is not a valid _Bool value */
        return d ? 3 : 4;
    }

The distinction is also useful in order to discuss vulnerabilities:
about ~70% of them come from UB-related issues [1][2][3][4].

[1] https://msrc-blog.microsoft.com/2019/07/18/we-need-a-safer-systems-programming-language/
[2] https://langui.sh/2019/07/23/apple-memory-safety/
[3] https://www.chromium.org/Home/chromium-security/memory-safety
[4] https://security.googleblog.com/2019/05/queue-hardening-enhancements.html

> Hence the Rust-unsafe wrappering for C code, presumably.

Yes, the wrapping uses unsafe code to call the C bindings, but the
wrapper may expose a safe interface to the users.

That wrapping is what we call "abstractions". In our approach, drivers
should only ever call the abstractions, never interacting with the C
bindings directly.

Wrapping things also allows us to leverage Rust features to provide
better APIs compared to using C APIs. For instance, using `Result`
everywhere to represent success/failure.
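
To show the shape such an abstraction can take, here is a rough,
self-contained sketch in plain Rust -- `raw_buf_alloc`/`raw_buf_free`
are hypothetical stand-ins for generated C bindings, and `Buf` and
`AllocError` are made-up names, not the kernel crate's actual types.
The `unsafe` calls are confined to the wrapper, and failure surfaces
as a `Result` that the caller has to handle:

    use std::ptr::NonNull;

    // Stand-ins for generated C bindings; a real binding would be an
    // extern "C" declaration instead of a simulated allocator.
    unsafe fn raw_buf_alloc(len: usize) -> *mut u8 {
        if len == 0 {
            std::ptr::null_mut() // the "C side" fails on zero-size requests
        } else {
            Box::into_raw(vec![0u8; len].into_boxed_slice()) as *mut u8
        }
    }

    unsafe fn raw_buf_free(ptr: *mut u8, len: usize) {
        // Rebuild the allocation made in raw_buf_alloc and drop it.
        drop(Box::from_raw(std::ptr::slice_from_raw_parts_mut(ptr, len)));
    }

    #[derive(Debug)]
    struct AllocError;

    // The safe abstraction: all `unsafe` is confined here, justified by
    // the invariant "ptr is non-null and points to len allocated bytes".
    struct Buf {
        ptr: NonNull<u8>,
        len: usize,
    }

    impl Buf {
        fn new(len: usize) -> Result<Self, AllocError> {
            // SAFETY: raw_buf_alloc has no preconditions; its result is
            // checked for null before being stored.
            let ptr = unsafe { raw_buf_alloc(len) };
            NonNull::new(ptr)
                .map(|ptr| Buf { ptr, len })
                .ok_or(AllocError)
        }

        fn len(&self) -> usize {
            self.len
        }
    }

    impl Drop for Buf {
        fn drop(&mut self) {
            // SAFETY: ptr/len came from a successful raw_buf_alloc call.
            unsafe { raw_buf_free(self.ptr.as_ptr(), self.len) }
        }
    }

    fn main() {
        // A "driver" only sees the safe interface and must handle errors.
        match Buf::new(16) {
            Ok(buf) => println!("allocated {} bytes", buf.len()),
            Err(AllocError) => println!("allocation failed"),
        }
        assert!(Buf::new(0).is_err());
    }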

> This focus on UB surprises me.  Unless the goal is mainly comfort for
> compiler writers looking for more UB to "optimize".  ;-)

I could have been clearer: what I meant is that "safety" in Rust (as a
concept) is related to UB. So safety in Rust "focuses" on UB.

But Rust also focuses on "safety" in a more general sense about
preventing all kinds of bugs, and is a significant improvement over C
in this regard, removing some classes of errors.

For instance, in the previous point, I mention `Result` -- using it
statically avoids forgetting to handle errors, as well as mistakes due
to confusion over error encoding.
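
As a tiny illustration of that (again plain Rust with made-up names):
the error and the success value live in different `Result` variants, so
they cannot be conflated the way an errno-style integer return can, and
`?` makes propagation explicit rather than accidental.

    #[derive(Debug)]
    enum Error {
        InvalidInput,
    }

    fn parse_level(s: &str) -> Result<u8, Error> {
        match s {
            "low" => Ok(1),
            "high" => Ok(2),
            _ => Err(Error::InvalidInput),
        }
    }

    // The caller cannot use the value without first going through the
    // Result; `?` forwards the error to its own caller.
    fn double_level(s: &str) -> Result<u8, Error> {
        let level = parse_level(s)?;
        Ok(level * 2)
    }

    fn main() {
        assert_eq!(double_level("high").unwrap(), 4);
        assert!(double_level("bogus").is_err());
    }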

> It will be interesting to see how the experiment plays out.  And to
> be sure, part of my skepticism is the fact that UB is rarely (if ever)
> the cause of my Linux-kernel RCU bugs.  But the other option that the

Safe/UB-related Rust guarantees may not be useful everywhere, but Rust
also helps lower the chances of logic bugs in general (see the
previous point).

> kernel uses is gcc and clang/LLVM flags to cause the compiler to define
> standard-C UB, one example being signed integer overflow.

Definitely, compilers could offer to define many UBs in C. The
standard could also decide to remove them, too.

However, there are still cases that C cannot really prevent unless
major changes take place, such as dereferencing pointers or preventing
data races.
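
As a minimal sketch of the data-race point (ordinary std Rust, nothing
kernel-specific): the shared counter has to sit behind a type that is
safe to share, and trying instead to mutate a plain integer from both
threads through shared references is rejected by the compiler, so that
race cannot even be written in the safe subset.

    use std::sync::atomic::{AtomicU32, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let counter = Arc::new(AtomicU32::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1000 {
                        counter.fetch_add(1, Ordering::Relaxed);
                    }
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        // No torn or lost updates: the type system forced synchronization.
        assert_eq!(counter.load(Ordering::Relaxed), 4000);
    }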

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-09 23:59                           ` Paul E. McKenney
@ 2021-10-11  1:24                             ` Miguel Ojeda
  2021-10-11 19:01                               ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-11  1:24 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Sun, Oct 10, 2021 at 1:59 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> The advantage that GCC and Clang/LLVM have is that you can simply say
> "CentOS vx.yy" and define the full distro in an organized manner, for
> a reasonably old and trusted distro version.  Perhaps Rust is already
> there, but some have led me to believe that the safety-critical project
> would need to take on some of the job of a Linux distribution.
>
> Which they most definitely can do, if they so choose and properly document
> with proper approvals.  Which should not be that much of a problem to
> make happen.

Exactly, it is doable, and the language is really just one more tool
in the process. For instance, if I had to take on such a project right
now, I might be more afraid (in terms of cost) of having to adapt
internal testing-related tooling (so that it works with Rust) than
about justifying the open-source compiler.

> In the near term, you are constrained by the existing compiler backends,
> which contain a bunch of optimizations that are and will continue to limit
> what you can do.  Longer term, you could write your own backend, or rework
> the existing backends, but are all of you really interested in doing that?

I am not sure I understand what you mean, nor why you think we would
need to rewrite any backend (I think your point here is the same as in
the other email -- see the answer there).

Regardless of what UB instances a backend defines, Rust is still a
layer above. It is the responsibility of the lowering code to not give
e.g. LLVM enough freedom in its own UB terms to do unsound
optimizations in terms of Rust UB.

> The current ownership model is also an interesting constraint, witness
> the comments on the sequence locking post.  That said, I completely
> understand how the ownership model is a powerful tool that can do an
> extremely good job of keeping concurrency novices out of trouble.

I think it also does a good job of keeping concurrency experts out of trouble ;)

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-11  0:59                         ` Miguel Ojeda
@ 2021-10-11 18:52                           ` Paul E. McKenney
  2021-10-13 11:47                             ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-11 18:52 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Mon, Oct 11, 2021 at 02:59:00AM +0200, Miguel Ojeda wrote:
> On Sun, Oct 10, 2021 at 1:48 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > As long as a significant number of compiler writers evaluate themselves by
> > improved optimization, they will be working hard to create additional UB
> > opportunities.  From what you say above, their doing so has the potential
> 
> Compiler writers definitely try to take advantage of as much UB as
> possible to improve optimization, but I would not call that creating
> additional UB opportunities. The opportunities are already there,
> created by the standards/committees in the case of C and the
> RFCs/teams in the case of unsafe Rust.
> 
> Of course, compiler writers may be stretching the intention and/or
> the ambiguities too much, and there is the whole question of whether
> UB was/is supposed to allow unbounded consequences, which WG14 is
> discussing in the recently created UBSG.
> 
> But I touch on this to emphasize that, even in unsafe Rust, compiler
> writers are not completely free to do whatever they want (even if they
> completely disregarded their users and existing code bases) and that
> C/unsafe Rust also share part of the responsibility (as languages) to
> define clearly what is allowed and what is not. So unsafe Rust is in a
> similar position to C here (though not equal).

I am sorry, but I have personally witnessed way way too many compiler
writers gleefully talk about breaking user programs.

And yes, I am working to try to provide the standards with safe ways to
implement any number of long-standing concurrent algorithms.  And more
than a few sequential algorithms.  It is slow going.  Compiler writers are
quite protective of not just current UB, but any prospects for future UB.

> > to generate bugs in the Rust compiler.  Suppose this happens ten years
> 
> I am not sure what you mean by bugs in the Rust compiler. If the
> compiler is following what unsafe Rust designers asked for, then it
> wouldn't be a bug. Whether those semantics are what we want as users,
> of course, is a different matter, but we should talk in that case with
> the language people (see the previous point).

Adducing new classes of UB from the standard means that there will be
classes of UB that the Rust compiler doesn't handle.  Optimizations in
the common compiler backends could then break existing Rust programs.

> > from now.  Do you propose to force rework not just the compiler, but
> > large quantities of Rust code that might have been written by that time?
> 
> No, but I am not sure where you are coming from.
> 
> If your concern is that the unsafe Rust code we write today in the
> kernel may be broken in 10 years because the language changed the
> semantics, then this is a real concern if we are writing unsafe code
> that relies on yet-to-be-defined semantics. Of course, we should avoid
> doing that just yet. This is why I hope to see more work on the Rust
> reference etc. -- an independent implementation like the upcoming GCC
> Rust may prove very useful for this.
> 
> Now, even if we do use subtle semantics that may not be clear yet,
> upstream Rust should not be happy to break the kernel (just like ISO C
> and GCC/Clang should not be). At least, they seem quite careful about
> this. For instance, when they consider it necessary, upstream Rust
> compiles and/or runs the tests of huge numbers of open-source
> libraries [1], e.g. [2]. It would be ideal to have the kernel
> integrated into those "crater runs" even if we are not a normal crate.
> 
> [1] https://rustc-dev-guide.rust-lang.org/tests/intro.html#crater
> [2] https://crater-reports.s3.amazonaws.com/beta-1.56-1/index.html

Or you rely on semantics that appear to be clear to you right now, but
that someone comes up with another interpretation for later.  And that
other interpretation opens the door for unanticipated-by-Rust classes
of UB.

> > The thing is that you have still not convinced me that UB is all that
> > separate of a category from logic bugs, especially given that either
> > can generate the other.
> 
> Logic bugs in safe Rust cannot trigger UB as long as those conditions
> we discussed apply. Thus, in that sense, they are separate in Rust.
> 
> But even in C, we can see it from the angle that triggering UB means
> the compiler output cannot be "trusted" anymore (assuming we use the
> definition of UB that compiler writers like to use but that not
> everybody in the committee agrees with). While with logic bugs, even
> with optimizations applied, the output still has to be consistent with
> the input (in terms of observable behavior). For instance, the
> compiler returning -38 here (https://godbolt.org/z/Pa8TWjY9a):
> 
>     #include <string.h>
> 
>     int f(void) {
>         const unsigned char s = 42;
>         _Bool d;
>         memcpy(&d, &s, 1); /* 42 is not a valid _Bool value */
>         return d ? 3 : 4;
>     }
> 
> The distinction is also useful in order to discuss vulnerabilities:
> about ~70% of them come from UB-related issues [1][2][3][4].
> 
> [1] https://msrc-blog.microsoft.com/2019/07/18/we-need-a-safer-systems-programming-language/
> [2] https://langui.sh/2019/07/23/apple-memory-safety/
> [3] https://www.chromium.org/Home/chromium-security/memory-safety
> [4] https://security.googleblog.com/2019/05/queue-hardening-enhancements.html

All fair points, but either way the program doesn't do what its users
want it to do.

> > Hence the Rust-unsafe wrappering for C code, presumably.
> 
> Yes, the wrapping uses unsafe code to call the C bindings, but the
> wrapper may expose a safe interface to the users.
> 
> That wrapping is what we call "abstractions". In our approach, drivers
> should only ever call the abstractions, never interacting with the C
> bindings directly.
> 
> Wrapping things also allows us to leverage Rust features to provide
> better APIs compared to using C APIs. For instance, using `Result`
> everywhere to represent success/failure.

OK, I will more strongly emphasize wrappering in my next pass through
this series.  And there do seem to have been at least a few cases
of confusion where "implementing" was interpreted by me as a proposed
rewrite of some Linux-kernel subsystem, but where others instead meant
"provide Rust wrappers for".

> > This focus on UB surprises me.  Unless the goal is mainly comfort for
> > compiler writers looking for more UB to "optimize".  ;-)
> 
> I could have been clearer: what I meant is that "safety" in Rust (as a
> concept) is related to UB. So safety in Rust "focuses" on UB.
> 
> But Rust also focuses on "safety" in a more general sense about
> preventing all kinds of bugs, and is a significant improvement over C
> in this regard, removing some classes of errors.
> 
> For instance, in the previous point, I mention `Result` -- using it
> statically avoids forgetting to handle errors, as well as mistakes due
> to confusion over error encoding.

I get that the Rust community makes this distinction.  I am at a loss as
to why they do so.

> > It will be interesting to see how the experiment plays out.  And to
> > be sure, part of my skepticism is the fact that UB is rarely (if ever)
> > the cause of my Linux-kernel RCU bugs.  But the other option that the
> 
> Safe/UB-related Rust guarantees may not be useful everywhere, but Rust
> also helps lower the chances of logic bugs in general (see the
> previous point).

OK.  I am definitely not putting forward Linux-kernel RCU as a candidate
for conversion.  But it might well be that there is code in the Linux
kernel that would benefit from application of Rust, and answering this
question is in fact the point of this experiment.

> > kernel uses is gcc and clang/LLVM flags to cause the compiler to define
> > standard-C UB, one example being signed integer overflow.
> 
> Definitely, compilers could offer to define many UBs in C. The
> standard could also decide to remove them, too.

The former seems easier and faster than the latter, sad to say!  ;-)

> However, there are still cases that C cannot really prevent unless
> major changes take place, such as dereferencing pointers or preventing
> data races.

Plus there are long-standing algorithms that dereference pointers to
objects that have been freed, but only if a type-compatible still-live
object was subsequently allocated and initialized at that same address.
And "long standing" as in known and used when I first wrote code, which
was quite some time ago.

							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-11  1:24                             ` Miguel Ojeda
@ 2021-10-11 19:01                               ` Paul E. McKenney
  2021-10-13 11:48                                 ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-11 19:01 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Mon, Oct 11, 2021 at 03:24:53AM +0200, Miguel Ojeda wrote:
> On Sun, Oct 10, 2021 at 1:59 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > The advantage that GCC and Clang/LLVM have is that you can simply say
> > "CentOS vx.yy" and define the full distro in an organized manner, for
> > a reasonably old and trusted distro version.  Perhaps Rust is already
> > there, but some have led me to believe that the safety-critical project
> > would need to take on some of the job of a Linux distribution.
> >
> > Which they most definitely can do, if they so choose and properly document
> > with proper approvals.  Which should not be that much of a problem to
> > make happen.
> 
> Exactly, it is doable, and the language is really just one more tool
> in the process. For instance, if I had to take on such a project right
> now, I might be more afraid (in terms of cost) of having to adapt
> internal testing-related tooling (so that it works with Rust) than
> about justifying the open-source compiler.

The main issue I was calling out was not justifying Rust, but rather
making sure that the exact same build could be reproduced a decade later.

> > In the near term, you are constrained by the existing compiler backends,
> > which contain a bunch of optimizations that are and will continue to limit
> > what you can do.  Longer term, you could write your own backend, or rework
> > the existing backends, but are all of you really interested in doing that?
> 
> I am not sure I understand what you mean, nor why you think we would
> need to rewrite any backend (I think your point here is the same as in
> the other email -- see the answer there).
> 
> Regardless of what UB instances a backend defines, Rust is still a
> layer above. It is the responsibility of the lowering code to not give
> e.g. LLVM enough freedom in its own UB terms to do unsound
> optimizations in terms of Rust UB.

There are things that concurrent software would like to do that are
made quite inconvenient due to large numbers of existing optimizations
in the various compiler backends.  Yes, we have workarounds.  But I
do not see how Rust is going to help with these inconveniences.

> > The current ownership model is also an interesting constraint, witness
> > the comments on the sequence locking post.  That said, I completely
> > understand how the ownership model is a powerful tool that can do an
> > extremely good job of keeping concurrency novices out of trouble.
> 
> I think it also does a good job of keeping concurrency experts out of trouble ;)

You mean like how I am not coding while I am producing blog posts and
responding to emails?  ;-)

Other than that, from some of the replies I am seeing to some of the
posts in this series, it looks like there are some things that concurrency
experts need to do that Rust makes quite hard.

But maybe others in the Rust community know easy solutions to the issues
raised in this series.  If so, perhaps they should speak up.  ;-)

But to be fair, much again depends on exactly where Rust is to be applied
in the kernel.  If a given Linux-kernel feature is not used where Rust
needs to be applied, then there is no need to solve the corresponding
issues.

							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-11 18:52                           ` Paul E. McKenney
@ 2021-10-13 11:47                             ` Miguel Ojeda
  2021-10-13 23:29                               ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-13 11:47 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Mon, Oct 11, 2021 at 8:52 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> I am sorry, but I have personally witnessed way way too many compiler
> writers gleefully talk about breaking user programs.

Sure, and I just said that even if compiler writers disregarded their
users, they are not completely free to do whatever they want.

> And yes, I am working to try to provide the standards with safe ways to
> implement any number of long-standing concurrent algorithms.  And more
> than a few sequential algorithms.  It is slow going.  Compiler writers are
> quite protective of not just current UB, but any prospects for future UB.

I am aware of that -- I am in WG14 and the UBSG, and some folks there
want to change the definition of UB altogether to prevent exactly the
sort of issues you worry about.

But, again, this is a different matter, and it does not impact Rust.

> Adducing new classes of UB from the standard means that there will be
> classes of UB that the Rust compiler doesn't handle.  Optimizations in
> the common compiler backends could then break existing Rust programs.

No, that is conflating different layers. The Rust compiler does not
"handle classes of UB" from the C or C++ standards. LLVM, the main
backend in rustc, defines some semantics and optimizes according to
those. Rust lowers to LLVM, not to C.

Now, sure, somebody may break LLVM with any given change, including
changes that are intended to be used by a particular language. But
that is arguing about accidents and it can happen in every direction,
not just C to Rust (e.g. Rust made LLVM fix bugs in `noalias` -- those
changes could have broken the C and C++ compilers). If you follow that
logic, then compilers should never use a common backend. Including
between C and C++.

Furthermore, the Rust compiler does not randomly pick an LLVM version
found in your system. Each release internally uses a given LLVM
instance. So you can see the Rust compiler as monolithic, not
"sharing" the backend. Therefore, even if LLVM has a particular bug
somewhere, the Rust frontend can either fix that in their copy (they
patch LLVM at times) or avoid generating the input that breaks LLVM
(they did it for `noalias`).

But, again, this applies to any change to LLVM, UB-related or not. I
don't see how or why this is related to Rust in particular.

> Or you rely on semantics that appear to be clear to you right now, but
> that someone comes up with another interpretation for later.  And that
> other interpretation opens the door for unanticipated-by-Rust classes
> of UB.

When I say "subtle semantics that may not be clear yet", I mean that
they are not explicitly delimited by the language; not as in
"understood in a personal capacity".

If we really want to use `unsafe` code with unclear semantics, we have
several options:

  - Ask upstream Rust about it, so that it can be clearly encoded /
clarified in the reference etc.

  - Do it, but ensure we create an issue in upstream Rust + ideally we
have a test for it in the kernel, so that a crater run would alert
upstream Rust if they ever attempt to change it in the future
(assuming we manage to get the kernel in the crater runs).

  - Call into C for the time being.

> All fair points, but either way the program doesn't do what its users
> want it to do.

Sure, but even if you don't agree with the categorization, safe Rust
helps to avoid several classes of errors, and users do see the results
of that.

> OK, I will more strongly emphasize wrappering in my next pass through
> this series.  And there do seem to have been at least a few cases
> of confusion where "implementing" was interpreted by me as a proposed
> rewrite of some Linux-kernel subsystem, but where others instead meant
> "provide Rust wrappers for".

Yeah, we are not suggesting to rewrite anything. There are, in fact,
several fine approaches, and which to take depends on the code we are
talking about:

  - A given kernel maintainer can provide safe abstractions over the C
APIs, thus avoiding the risk of rewrites, and then start accepting new
"client" modules in mostly safe Rust.

  - Another may do the same, but may only accept new "client" modules
in Rust and not C.

  - Another may do the same, but start rewriting the existing "client"
modules too, perhaps with aims to gradually move to Rust.

  - Another may decide to rewrite the entire subsystem in Rust,
possibly keeping the C version alive for some releases or forever.

  - Another may do the same, but provide the existing C API as
exported Rust functions.

In any case, rewrites from scratch should be a conscious decision --
perhaps a major refactor was due anyway, perhaps the subsystem has had
a history of memory-safety issues, perhaps they want to take advantage
of Rust generics, macros or enums...

> I get that the Rust community makes this distinction.  I am at a loss as
> to why they do so.

If you mean the distinction between different types of bugs, then the
distinction does not come from the Rust community.

For instance, in the links I gave you, you can see major C/C++
projects like Chromium and major companies like Microsoft talking
about memory-safety issues.

> OK.  I am definitely not putting forward Linux-kernel RCU as a candidate
> for conversion.  But it might well be that there is code in the Linux
> kernel that would benefit from application of Rust, and answering this
> question is in fact the point of this experiment.

Converting (rather than wrapping) core kernel APIs requires keeping
two separate implementations, because Rust is not mandatory for the
moment.

So I would only do that if there is a good reason, or if somebody is
implementing something new, rather than rewriting it.

> The former seems easier and faster than the latter, sad to say!  ;-)

Well, since you maintain that compiler writers will never drop UB from
their hands, I would expect you to see the latter as the easier one. ;)

And, in fact, it would be the best way to do it -- fix the language,
not each individual tool.

> Plus there are long-standing algorithms that dereference pointers to
> objects that have been freed, but only if a type-compatible still-live
> object was subsequently allocated and initialized at that same address.
> And "long standing" as in known and used when I first wrote code, which
> was quite some time ago.

Yes, C and/or Rust may not be suitable for writing certain algorithms
without invoking UB, but that just means we need to write them in
another language, or in assembly, or we ask the compiler to do what we
need. It does not mean we need to drop C or Rust for the vast majority
of the code.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-11 19:01                               ` Paul E. McKenney
@ 2021-10-13 11:48                                 ` Miguel Ojeda
  2021-10-13 16:07                                   ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-13 11:48 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> The main issue I was calling out was not justifying Rust, but rather
> making sure that the exact same build could be reproduced a decade later.

Yes, but that is quite trivial compared to other issues I was
mentioning like adapting and requalifying a testing tool. For
instance, if you already had a team maintaining the configuration
management (i.e. the versions etc.), adding one more tool is not a big
deal.

> There are things that concurrent software would like to do that are
> made quite inconvenient due to large numbers of existing optimizations
> in the various compiler backends.  Yes, we have workarounds.  But I
> do not see how Rust is going to help with these inconveniences.

Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
valuable to be able to express particular algorithms in unsafe Rust,
then I would contact the Rust teams to let them know your needs --
perhaps we end up with something way better than C for that use case!

In any case, Rust does not necessarily need to help there. What is
important is whether Rust helps writing the majority of the kernel
code. If we need to call into C or use inline assembly for certain
bits -- so be it.

> But to be fair, much again depends on exactly where Rust is to be applied
> in the kernel.  If a given Linux-kernel feature is not used where Rust
> needs to be applied, then there is no need to solve the corresponding
> issues.

Exactly.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-13 11:48                                 ` Miguel Ojeda
@ 2021-10-13 16:07                                   ` Paul E. McKenney
  2021-10-13 17:50                                     ` Wedson Almeida Filho
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-13 16:07 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > The main issue I was calling out was not justifying Rust, but rather
> > making sure that the exact same build could be reproduced a decade later.
> 
> Yes, but that is quite trivial compared to other issues I was
> mentioning like adapting and requalifying a testing tool. For
> instance, if you already had a team maintaining the configuration
> management (i.e. the versions etc.), adding one more tool is not a big
> deal.

OK, close enough to fair enough.  ;-)

> > There are things that concurrent software would like to do that are
> > made quite inconvenient due to large numbers of existing optimizations
> > in the various compiler backends.  Yes, we have workarounds.  But I
> > do not see how Rust is going to help with these inconveniences.
> 
> Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> valuable to be able to express particular algorithms in unsafe Rust,
> then I would contact the Rust teams to let them know your needs --
> perhaps we end up with something way better than C for that use case!

Sequence locks and RCU do seem to be posing some challenges.  I suppose
this should not be too much of a surprise, given that there are people who
have been in the Rust community for a long time who do understand both.
If it were easy, they would have already come up with a solution.

So the trick is to stage things so as to allow people time to work on
these sorts of issues.

> In any case, Rust does not necessarily need to help there. What is
> important is whether Rust helps writing the majority of the kernel
> code. If we need to call into C or use inline assembly for certain
> bits -- so be it.
> 
> > But to be fair, much again depends on exactly where Rust is to be applied
> > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > needs to be applied, then there is no need to solve the corresponding
> > issues.
> 
> Exactly.

Thank you for bearing with me.

I will respond to your other email later, but the focus on memory
safety in particular instead of undefined behavior in general does help
me quite a bit.

My next step is to create a "TL;DR: Memory-Model Recommendations" post
that is more specific, with both short-term ("do what is easy") and
long-term suggestions.

							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-13 16:07                                   ` Paul E. McKenney
@ 2021-10-13 17:50                                     ` Wedson Almeida Filho
  2021-10-14  3:35                                       ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Wedson Almeida Filho @ 2021-10-13 17:50 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Miguel Ojeda, Gary Guo, Marco Elver, Boqun Feng, kasan-dev,
	rust-for-linux

On Wed, Oct 13, 2021 at 09:07:07AM -0700, Paul E. McKenney wrote:
> On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> > On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > The main issue I was calling out was not justifying Rust, but rather
> > > making sure that the exact same build could be reproduced a decade later.
> > 
> > Yes, but that is quite trivial compared to other issues I was
> > mentioning like adapting and requalifying a testing tool. For
> > instance, if you already had a team maintaining the configuration
> > management (i.e. the versions etc.), adding one more tool is not a big
> > deal.
> 
> OK, close enough to fair enough.  ;-)
> 
> > > There are things that concurrent software would like to do that are
> > > made quite inconvenient due to large numbers of existing optimizations
> > > in the various compiler backends.  Yes, we have workarounds.  But I
> > > do not see how Rust is going to help with these inconveniences.
> > 
> > Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> > valuable to be able to express particular algorithms in unsafe Rust,
> > then I would contact the Rust teams to let them know your needs --
> > perhaps we end up with something way better than C for that use case!
> 
> Sequence locks and RCU do seem to be posing some challenges.  I suppose
> this should not be too much of a surprise, given that there are people who
> have been in the Rust community for a long time who do understand both.
> If it were easy, they would have already come up with a solution.

(Hey Paul, I tried posting on your blog series, but I'm having difficulty, so I
thought I'd reply here, given that we mention seqlocks and RCU in this thread.)

I spent a bit of time thinking about sequence locks and I think I have something
that is workable. (I remind you that we use the C implementation for the
synchronisation primitives). Suppose we had some struct like so:

struct X {
    a: AtomicU32,
    b: AtomicU32,
}

And suppose we have it protected by a sequence lock. If we wanted to return the
sum of the two fields, the code would look like this:

    let v = y.access(|x| {
        let a = x.a.load(Ordering::Relaxed);
        let b = x.b.load(Ordering::Relaxed);
        a + b
    });

It would be compiled to the following aarch64 machine code (when LTO is
enabled):

  403fd4:       14000002        b       403fdc
  403fd8:       d503203f        yield
  403fdc:       b9400808        ldr     w8, [x0, #8]
  403fe0:       3707ffc8        tbnz    w8, #0, 403fd8
  403fe4:       d50339bf        dmb     ishld
  403fe8:       b9400c09        ldr     w9, [x0, #12]
  403fec:       b940100a        ldr     w10, [x0, #16]
  403ff0:       d50339bf        dmb     ishld
  403ff4:       b940080b        ldr     w11, [x0, #8]
  403ff8:       6b08017f        cmp     w11, w8
  403ffc:       54ffff01        b.ne    403fdc
  404000:       0b090148        add     w8, w10, w9

It is as efficient as the C version, though not as ergonomic. The
.load(Ordering::Relaxed) can of course be improved to something shorter like
.load_relaxed(), or even new atomic types with .load() being relaxed and
.load_ordered(Ordering) for other orderings.

I also have guard- and iterator-based methods for the read path that would look
like this (these can all co-exist if we so choose):

    let v = loop {
        let guard = y.read();
        let a = guard.a.load(Ordering::Relaxed);
        let b = guard.b.load(Ordering::Relaxed);
        if !guard.need_retry() {
            break a + b;
        }
    };

and

    let mut v = 0;
    for x in y {
        let a = x.a.load(Ordering::Relaxed);
        let b = x.b.load(Ordering::Relaxed);
        v = a + b;
    }

The former generates the exact same machine code as above, though the latter
generates slightly worse code (it has instruction sequences like "mov w10,
#0x1; tbnz w10, #0, 403ffc" and "mov w10, wzr; tbnz w10, #0, 403ffc", which
could be optimised away but for some reason aren't).

Anyway, on to the write path. We need another primitive to ensure that only one
writer at a time attempts to acquire the sequence lock in write mode. We do this
by taking a guard for this other lock. For example, suppose we want to increment
each of the fields:

    let other_guard = other_lock.lock();
    let guard = y.write(&other_guard);
    guard.a.store(guard.a.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
    guard.b.store(guard.b.load(Ordering::Relaxed) + 1, Ordering::Relaxed);

The part that relates to the sequence lock is compiled to the following:

  404058:       f9400009        ldr     x9, [x0]
  40405c:       eb08013f        cmp     x9, x8
  404060:       54000281        b.ne    4040b0

  404064:       b9400808        ldr     w8, [x0, #8]
  404068:       11000508        add     w8, w8, #0x1
  40406c:       b9000808        str     w8, [x0, #8]
  404070:       d5033abf        dmb     ishst
  404074:       b9400c08        ldr     w8, [x0, #12]
  404078:       11000508        add     w8, w8, #0x1
  40407c:       b9000c08        str     w8, [x0, #12]
  404080:       b9401008        ldr     w8, [x0, #16]
  404084:       11000508        add     w8, w8, #0x1
  404088:       b9001008        str     w8, [x0, #16]
  40408c:       d5033abf        dmb     ishst
  404090:       b9400808        ldr     w8, [x0, #8]
  404094:       11000508        add     w8, w8, #0x1
  404098:       b9000808        str     w8, [x0, #8]

If we ignore the first three instructions momentarily, the rest is as efficient
as C. The reason we need the first three instructions is to ensure that the
guard passed into the `write` function is a guard for the correct lock. The
lock type already eliminates the vast majority of issues, but a developer could
accidentally lock the wrong lock and use it with the sequence lock, which would
be problematic. So we need this check in Rust that we don't need in C (although
the same mistake could happen in C).

We can provide an 'unsafe' version that doesn't perform this check; the onus is
then on the callers to convince themselves that they have acquired the correct
lock (and they'd be required to use an unsafe block). Then the performance would
be the same as the C version.

Now that I've presented what my proposal looks like from the PoV of a user,
here's its rationale: given that we only want one copy of the data and that
mutable references are always unique in the safe fragment of Rust, we can't (and
don't) return a mutable reference to what's protected by the sequence lock, we
always only allow shared access, even when the sequence lock is acquired in
write mode.

Then how does one change the fields? Interior mutability. In the examples above,
the fields are all atomic, so they can be changed with the `store` method. Any
type that provides interior mutability is suitable here.

If we need to use types with interior mutability, what's the point of the
sequence lock? The point is to allow a consistent view of the fields. In our
example, even though `a` and `b` are atomic, the sequence lock guarantees that
readers will get a consistent view of the values even though writers modify one
at a time.

Lastly, the fact we use a generic `Guard` as proof that a lock is held (for the
write path) means that we don't need to manually implement this for each
different lock we care about; any that implements the `Lock` trait can be used.
This is unlike the C code that uses fragile macros to generate code for
different types of locks (though the scenario is slightly different in that the
C code embeds a lock, which is also something we could do in Rust) -- the Rust
version uses generics, so it is type-checked by the compiler.
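
To show the "guard as evidence" shape outside the kernel, here is a tiny
self-contained sketch using std::sync::Mutex (the types and names are purely
illustrative, not the kernel's): the function below simply cannot be called
without a guard, so holding the right lock is enforced by the type system
rather than by convention.

    use std::sync::{Mutex, MutexGuard};

    struct Config {
        threshold: u32,
    }

    // Taking the guard is the proof that the mutex is held; there is no
    // way to obtain a MutexGuard without locking it first.
    fn bump_threshold(guard: &mut MutexGuard<'_, Config>) {
        guard.threshold += 1;
    }

    fn main() {
        let cfg_lock = Mutex::new(Config { threshold: 10 });
        {
            let mut guard = cfg_lock.lock().unwrap();
            bump_threshold(&mut guard);
        } // guard dropped here, lock released

        assert_eq!(cfg_lock.lock().unwrap().threshold, 11);
    }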

RCU pointers can be implemented with a similar technique in that read access is
protected by a 'global' RCU reader lock (and evidence of it being locked is
required to get read access), and writers require another lock to be held. The
only piece that I haven't thought through yet is how to ensure that pointers
that were exposed with RCU 'protection' cannot be freed before the grace period
has elapsed. But this is a discussion for another time.
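
Just to illustrate the read-side shape I have in mind, a very rough sketch with
made-up names -- it deliberately sidesteps the grace-period question above, and
the real thing would call rcu_read_lock()/rcu_read_unlock()/rcu_dereference()
where the comments say so:

    use core::marker::PhantomData;
    use core::sync::atomic::{AtomicPtr, Ordering};

    // Token proving the RCU read lock is held; !Send, so it cannot leak
    // to another context.
    pub struct RcuReadGuard {
        _not_send: PhantomData<*mut ()>,
    }

    impl RcuReadGuard {
        pub fn lock() -> Self {
            // would call the kernel's rcu_read_lock() here
            RcuReadGuard { _not_send: PhantomData }
        }
    }

    impl Drop for RcuReadGuard {
        fn drop(&mut self) {
            // would call rcu_read_unlock() here
        }
    }

    pub struct RcuPointer<T> {
        ptr: AtomicPtr<T>,
    }

    impl<T> RcuPointer<T> {
        // Read access requires evidence that the read lock is held, and
        // the returned reference cannot outlive that evidence.
        pub fn read<'a>(&'a self, _guard: &'a RcuReadGuard) -> Option<&'a T> {
            // stands in for rcu_dereference() in the real thing
            let p = self.ptr.load(Ordering::Acquire);
            unsafe { p.as_ref() }
        }
    }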

I'll send out the patches for what I describe above in the next couple of days.

Does any of the above help answer the questions you have about seqlocks in Rust?

Thanks,
-Wedson

> So the trick is to stage things so as to allow people time to work on
> these sorts of issues.
> 
> > In any case, Rust does not necessarily need to help there. What is
> > important is whether Rust helps writing the majority of the kernel
> > code. If we need to call into C or use inline assembly for certain
> > bits -- so be it.
> > 
> > > But to be fair, much again depends on exactly where Rust is to be applied
> > > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > > needs to be applied, then there is no need to solve the corresponding
> > > issues.
> > 
> > Exactly.
> 
> Thank you for bearing with me.
> 
> I will respond to your other email later, but the focus on memory
> safety in particular instead of undefined behavior in general does help
> me quite a bit.
> 
> My next step is to create a "TL;DR: Memory-Model Recommendations" post
> that is more specific, with both short-term ("do what is easy") and
> long-term suggestions.
> 
> 							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-13 11:47                             ` Miguel Ojeda
@ 2021-10-13 23:29                               ` Paul E. McKenney
  2021-10-22 19:17                                 ` Miguel Ojeda
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-13 23:29 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Wed, Oct 13, 2021 at 01:47:34PM +0200, Miguel Ojeda wrote:
> On Mon, Oct 11, 2021 at 8:52 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > I am sorry, but I have personally witnessed way way too many compiler
> > writers gleefully talk about breaking user programs.
> 
> Sure, and I just said that even if compiler writers disregarded their
> users, they are not completely free to do whatever they want.

Here is hoping!  Me, I have been pointing out to them the possible
consequences of breaking certain programs.  ;-)

I am responding to a very few points, but your point about memory safety
in particular rather than undefined behavior in general simplifies things.
Which makes most of the discussion, entertaining though it was, less
relevant to the problem at hand.

Not that my silence on the remaining points should be in any way
interpreted as agreement, mind you!  ;-)

> > And yes, I am working to try to provide the standards with safe ways to
> > implement any number of long-standing concurrent algorithms.  And more
> > than a few sequential algorithms.  It is slow going.  Compiler writers are
> > quite protective of not just current UB, but any prospects for future UB.
> 
> I am aware of that -- I am in WG14 and the UBSG, and some folks there
> want to change the definition of UB altogether to prevent exactly the
> sort of issues you worry about.
> 
> But, again, this is a different matter, and it does not impact Rust.
> 
> > Adducing new classes of UB from the standard means that there will be
> > classes of UB that the Rust compiler doesn't handle.  Optimizations in
> > the common compiler backends could then break existing Rust programs.
> 
> No, that is conflating different layers. The Rust compiler does not
> "handle classes of UB" from the C or C++ standards. LLVM, the main
> backend in rustc, defines some semantics and optimizes according to
> those. Rust lowers to LLVM, not to C.

So Rust could support zombie pointers without changes to LLVM?

> Now, sure, somebody may break LLVM with any given change, including
> changes that are intended to be used by a particular language. But
> that is arguing about accidents and it can happen in every direction,
> not just C to Rust (e.g. Rust made LLVM fix bugs in `noalias` -- those
> changes could have broken the C and C++ compilers). If you follow that
> logic, then compilers should never use a common backend. Including
> between C and C++.
> 
> Furthermore, the Rust compiler does not randomly pick a LLVM version
> found in your system. Each release internally uses a given LLVM
> instance. So you can see the Rust compiler as monolithic, not
> "sharing" the backend. Therefore, even if LLVM has a particular bug
> somewhere, the Rust frontend can either fix that in their copy (they
> patch LLVM at times) or avoid generating the input that breaks LLVM
> (they did it for `noalias`).
> 
> But, again, this applies to any change to LLVM, UB-related or not. I
> don't see how or why this is related to Rust in particular.
> 
> > Or you rely on semantics that appear to be clear to you right now, but
> > that someone comes up with another interpretation for later.  And that
> > other interpretation opens the door for unanticipated-by-Rust classes
> > of UB.
> 
> When I say "subtle semantics that may not be clear yet", I mean that
> they are not explicitly delimited by the language; not as in
> "understood in a personal capacity".

The standard is for the most part not a mathematical document.  So many
parts of it can only be "understood in a personal capacity".

> If we really want to use `unsafe` code with unclear semantics, we have
> several options:
> 
>   - Ask upstream Rust about it, so that it can be clearly encoded /
> clarified in the reference etc.
> 
>   - Do it, but ensure we create an issue in upstream Rust + ideally we
> have a test for it in the kernel, so that a crater run would alert
> upstream Rust if they ever attempt to change it in the future
> (assuming we manage to get the kernel in the crater runs).
> 
>   - Call into C for the time being.

I have been thinking more in terms of calling into C in the short term.
I added a post looking at short-term and longer-term possibilities.
The short-term possibilities are mostly "call into C", while the long-term
possibilities are more utopian, perhaps insanely so in many cases.

> > All fair points, but either way the program doesn't do what its users
> > want it to do.
> 
> Sure, but even if you don't agree with the categorization, safe Rust
> helps to avoid several classes of errors, and users do see the results
> of that.

To be proven in the context of the Linux kernel.  And I am happy to
provide at least a little help with the experiment.

> > OK, I will more strongly emphasize wrappering in my next pass through
> > this series.  And there does seem to have been at least a few cases
> > of confusion where "implementing" was interpreted by me as a proposed
> > rewrite of some Linux-kernel subsystem, but where others instead meant
> > "provide Rust wrappers for".
> 
> Yeah, we are not suggesting to rewrite anything. There are, in fact,
> several fine approaches, and which to take depends on the code we are
> talking about:
> 
>   - A given kernel maintainer can provide safe abstractions over the C
> APIs, thus avoiding the risk of rewrites, and then start accepting new
> "client" modules in mostly safe Rust.
> 
>   - Another may do the same, but may only accept new "client" modules
> in Rust and not C.
> 
>   - Another may do the same, but start rewriting the existing "client"
> modules too, perhaps with aims to gradually move to Rust.
> 
>   - Another may decide to rewrite the entire subsystem in Rust,
> possibly keeping the C version alive for some releases or forever.
> 
>   - Another may do the same, but provide the existing C API as
> exported Rust functions.
> 
> In any case, rewrites from scratch should be a conscious decision --
> perhaps a major refactor was due anyway, perhaps the subsystem has had
> a history of memory-safety issues, perhaps they want to take advantage
> of Rust generics, macros or enums...

My current belief is that wrappers would more likely be around
higher-level C code using RCU than around the low-level RCU APIs
themselves.  But who knows?

> > I get that the Rust community makes this distinction.  I am a loss as
> > to why they do so.
> 
> If you mean the distinction between different types of bugs, then the
> distinction does not come from the Rust community.
> 
> For instance, in the links I gave you, you can see major C/C++
> projects like Chromium and major companies like Microsoft talking
> about memory-safety issues.

And talking about memory-safety issues makes much more sense to me than
does talking about undefined behavior in general.

> > OK.  I am definitely not putting forward Linux-kernel RCU as a candidate
> > for conversion.  But it might well be that there is code in the Linux
> > kernel that would benefit from application of Rust, and answering this
> > question is in fact the point of this experiment.
> 
> Converting (rather than wrapping) core kernel APIs requires keeping
> two separate implementations, because Rust is not mandatory for the
> moment.
> 
> So I would only do that if there is a good reason, or if somebody is
> implementing something new, rather than rewriting it.

That makes sense, especially if you are looking at bug rate as a measure
of effectiveness.  Unnecessarily converting well-tested and heavily used
code normally does not improve its bug rate.

> > The former seems easier and faster than the latter, sad to say!  ;-)
> 
> Well, since you maintain that compiler writers will never drop UB from
> their hands, I would expect you see the latter as the easier one. ;)
> 
> And, in fact, it would be the best way to do it -- fix the language,
> not each individual tool.

Working on it in the case of C/C++, though quite a bit more slowly
than I would like.

> > Plus there are long-standing algorithms that dereference pointers to
> > objects that have been freed, but only if a type-compatible still-live
> > object was subsequently allocated and initialized at that same address.
> > And "long standing" as in known and used when I first wrote code, which
> > was quite some time ago.
> 
> Yes, C and/or Rust may not be suitable for writing certain algorithms
> without invoking UB, but that just means we need to write them in
> another language, or in assembly, or we ask the compiler to do what we
> need. It does not mean we need to drop C or Rust for the vast majority
> of the code.

As we agreed earlier, we instead need to provide ways for these languages
to conveniently express these algorithms.

However...

Just to get you an idea of the timeframe, the C++ committee requested
an RCU proposal from me in 2014.  It took about four years to exchange
sufficient C++ and RCU knowledge to come to agreement on what a C++
RCU API would even look like.  The subsequent three years of delay were
due to bottlenecks in the standardization process.  Only this year were
hazard pointers and RCU voted into a Technical Specification, which has
since been drafted by Michael Wong, Maged Michael (who of course did the
hazard pointers section), and myself.  The earliest possible International
Standard release date is 2026, with 2029 perhaps being more likely.

Let's be optimistic and assume 2026.  That would be 12 years elapsed time.

Now, the USA Social Security actuarial tables [1] give me about a 77%
chance of living another 12 years, never mind the small matter of
remaining vigorous enough to participate in the standards process.
Therefore, there is only so much more that I will be doing in this space.

Apologies for bringing up what might seem to be a rather morbid point,
but there really are sharp limits here.  ;-)

							Thanx, Paul

[1] https://www.ssa.gov/oact/STATS/table4c6.html

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-13 17:50                                     ` Wedson Almeida Filho
@ 2021-10-14  3:35                                       ` Paul E. McKenney
  2021-10-14  8:03                                         ` Wedson Almeida Filho
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-14  3:35 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: Miguel Ojeda, Gary Guo, Marco Elver, Boqun Feng, kasan-dev,
	rust-for-linux

On Wed, Oct 13, 2021 at 06:50:24PM +0100, Wedson Almeida Filho wrote:
> On Wed, Oct 13, 2021 at 09:07:07AM -0700, Paul E. McKenney wrote:
> > On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> > > On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > >
> > > > The main issue I was calling out was not justifying Rust, but rather
> > > > making sure that the exact same build could be reproduced a decade later.
> > > 
> > > Yes, but that is quite trivial compared to other issues I was
> > > mentioning like adapting and requalifying a testing tool. For
> > > instance, if you already had a team maintaining the configuration
> > > management (i.e. the versions etc.), adding one more tool is not a big
> > > deal.
> > 
> > OK, close enough to fair enough.  ;-)
> > 
> > > > There are things that concurrent software would like to do that are
> > > > made quite inconvenient due to large numbers of existing optimizations
> > > > in the various compiler backends.  Yes, we have workarounds.  But I
> > > > do not see how Rust is going to help with these inconveniences.
> > > 
> > > Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> > > valuable to be able to express particular algorithms in unsafe Rust,
> > > then I would contact the Rust teams to let them know your needs --
> > > perhaps we end up with something way better than C for that use case!
> > 
> > Sequence locks and RCU do seem to be posing some challenges.  I suppose
> > this should not be too much of a surprise, given that there are people who
> > have been in the Rust community for a long time who do understand both.
> > If it were easy, they would have already come up with a solution.
> 
> (Hey Paul, I tried posting on your blog series, but I'm having difficulty so I
> thought I'd reply here given that we mention seqlocks and RCU here.)

It should be straightforward to post a comment, but some report that
their employers block livejournal.com.  :-/

Oh, and I have updated heavily recently, including adding a bunch of
Linux-kernel use cases for both sequence locking and RCU.

> I spent a bit of time thinking about sequence locks and I think I have something
> that is workable. (I remind you that we use the C implementation for the
> synchronisation primitives). Suppose we had some struct like so:
> 
> struct X {
>     a: AtomicU32,
>     b: AtomicU32,
> }
> 
> And suppose we have it protected by a sequence lock. If we wanted to return the
> sum of the two fields, the code would look like this:
> 
>     let v = y.access(|x| {
>         let a = x.a.load(Ordering::Relaxed);
> 	let b = x.b.load(Ordering::Relaxed);
> 	a + b
>     });
> 
> It would be expanded to the following machine code in aarch64 (when LTO is
> enabled):
> 
>   403fd4:       14000002        b       403fdc
>   403fd8:       d503203f        yield
>   403fdc:       b9400808        ldr     w8, [x0, #8]
>   403fe0:       3707ffc8        tbnz    w8, #0, 403fd8
>   403fe4:       d50339bf        dmb     ishld
>   403fe8:       b9400c09        ldr     w9, [x0, #12]
>   403fec:       b940100a        ldr     w10, [x0, #16]
>   403ff0:       d50339bf        dmb     ishld
>   403ff4:       b940080b        ldr     w11, [x0, #8]
>   403ff8:       6b08017f        cmp     w11, w8
>   403ffc:       54ffff01        b.ne    403fdc
>   404000:       0b090148        add     w8, w10, w9
> 
> It is as efficient as the C version, though not as ergonomic. The
> .load(Ordering::Relaxed) can of course be improved to something shorter like
> .load_relaxed() or even new atomic types  with .load() being relaxed and
> .load_ordered(Ordering) for other ordering.

Nice!

Is this a native Rust sequence-lock implementation or a wrapper around
the C-language Linux-kernel implementation?

> I also have guard- and iterator-based methods for the read path that would look
> like this (these can all co-exist if we so choose):
> 
>     let v = loop {
>         let guard = y.read();
>         let a = guard.a.load(Ordering::Relaxed);
>         let b = guard.b.load(Ordering::Relaxed);
>         if !guard.need_retry() {
>             break a + b;
>         }
>     };
> 
> and
> 
>     let mut v = 0;
>     for x in y {
>         let a = x.a.load(Ordering::Relaxed);
> 	let b = x.b.load(Ordering::Relaxed);
> 	v = a + b;
>     }
> 
> The former generates the exact same machine code as above though the latter
> generates slightly worse code (it has instructions sequences like "mov w10,
> #0x1; tbnz w10, #0, 403ffc" and , "mov w10, wzr; tbnz w10, #0, 403ffc", which
> could be optimised but for some reason isn't).

The C++ bindings for RCU provide a similar guard approach, leveraging
C++ BasicLock.  Explicit lock and unlock can be obtained using
move-assignments.

> Anyway, on to the write path. We need another primitive to ensure that only one
> writer at a time attempts to acquire the sequence lock in write mode. We do this
> by taking a guard for this other lock, for example, suppose we want to increment
> each of the fields:
> 
>     let other_guard = other_lock.lock();
>     let guard = y.write(&other_guard);

The first acquires the lock in an RAII (scoped) fashion and the second
enters the sequence-lock write-side critical section, correct?

>     guard.a.store(guard.a.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
>     guard.b.store(guard.b.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> 
> The part the relates to the sequence lock is compiled to the following:
> 
>   404058:       f9400009        ldr     x9, [x0]
>   40405c:       eb08013f        cmp     x9, x8
>   404060:       54000281        b.ne    4040b0
> 
>   404064:       b9400808        ldr     w8, [x0, #8]
>   404068:       11000508        add     w8, w8, #0x1
>   40406c:       b9000808        str     w8, [x0, #8]
>   404070:       d5033abf        dmb     ishst
>   404074:       b9400c08        ldr     w8, [x0, #12]
>   404078:       11000508        add     w8, w8, #0x1
>   40407c:       b9000c08        str     w8, [x0, #12]
>   404080:       b9401008        ldr     w8, [x0, #16]
>   404084:       11000508        add     w8, w8, #0x1
>   404088:       b9001008        str     w8, [x0, #16]
>   40408c:       d5033abf        dmb     ishst
>   404090:       b9400808        ldr     w8, [x0, #8]
>   404094:       11000508        add     w8, w8, #0x1
>   404098:       b9000808        str     w8, [x0, #8]
> 
> If we ignore the first three instructions momentarily, the rest is as efficient
> as C. The reason we need the first three instructions is to ensure that guard
> that was passed into the `write` function is a guard to the correct lock. The
> lock type already eliminates the vast majority of issues, but a developer could
> accidentally lock the wrong lock and use it in the sequence lock, which would be
> problematic. So we need this check in Rust that we don't need in C (although the
> same mistake could happen in C).
> 
> We can provide an 'unsafe' version that doesn't perform this check, then the
> onus is on the callers to convince themselves that they have acquired the
> correct lock (and they'd be required to use an unsafe block). Then the
> performance would be the same as the C version.

The Linux-kernel C-language sequence counter (as opposed to the various
flavors of sequence lock) assume that the caller has provided any needed
mutual exclusion.

> Now that I've presented how my proposal looks like from the PoV of a user,
> here's its rationale: given that we only want one copy of the data and that
> mutable references are always unique in the safe fragment of Rust, we can't (and
> don't) return a mutable reference to what's protected by the sequence lock, we
> always only allow shared access, even when the sequence lock is acquired in
> write mode.
> 
> Then how does one change the fields? Interior mutability. In the examples above,
> the fields are all atomic, so they can be changed with the `store` method. Any
> type that provides interior mutability is suitable here.

OK, so following the approach of "marked accesses".

> If we need to use types with interior mutability, what's the point of the
> sequence lock? The point is to allow a consistent view of the fields. In our
> example, even though `a` and `b` are atomic, the sequence lock guarantees that
> readers will get a consistent view of the values even though writers modify one
> at a time.

Yes.

I suppose that the KCSAN ASSERT_EXCLUSIVE_WRITER() could be used on
the sequence-lock update side to check for unwanted concurrency.

> Lastly, the fact we use a generic `Guard` as proof that a lock is held (for the
> write path) means that we don't need to manually implement this for each
> different lock we care about; any that implements the `Lock` trait can be used.
> This is unlike the C code that uses fragile macros to generate code for
> different types of locks (though the scenario is slightly different in that the
> C code embeds a lock, which is also something we could do in Rust) -- the Rust
> version uses generics, so it is type-checked by the compiler.

OK, so this is a standalone implementation of sequence locks in Rust,
rather than something that could interoperate with the C-language
sequence locks?

Is "fragile macros" just the usual Rust denigration of the C preprocessor,
or is there some specific vulnerability that you see in those macros?

Of course, those macros could be used to automatically generate the
wrappers.  Extract the macro invocations from the C source, and transform
them to wrappers, perhaps using Rust macros somewhere along the way.

> RCU pointers can be implemented with a similar technique in that read access is
> protected by a 'global' RCU reader lock (and evidence of it being locked is
> required to get read access), and writers require another lock to be held. The
> only piece that I haven't thought through yet is how to ensure that pointers
> that were exposed with RCU 'protection' cannot be freed before the grace period
> has elapsed. But this is a discussion for another time.

Please note that it is quite important for Rust to use the RCU provided
by the C-language part of the kernel.  Probably also for sequence locks,
but splitting RCU reduces the effectiveness of its batching optimizations.

For at least some of the Linux kernel's RCU use cases, something like
interior mutability may be required.  Whether those use cases show up
in any Rust-language drivers I cannot say.  Other use cases would work
well with RCU readers having read ownership of the non-pointer fields
in each RCU-protected object.

Again, I did add rough descriptions of a few Linux-kernel RCU use cases.

> I'll send out the patches for what I describe above in the next couple of days.
> 
> Does any of the above help answer the questions you have about seqlocks in Rust?

Possibly at least some of them.  I suspect that there is still much to
be learned on all sides, including learning about additional questions
that need to be asked.

Either way, thank you for your work on this!

							Thanx, Paul

> Thanks,
> -Wedson
> 
> > So the trick is to stage things so as to allow people time to work on
> > these sorts of issues.
> > 
> > > In any case, Rust does not necessarily need to help there. What is
> > > important is whether Rust helps writing the majority of the kernel
> > > code. If we need to call into C or use inline assembly for certain
> > > bits -- so be it.
> > > 
> > > > But to be fair, much again depends on exactly where Rust is to be applied
> > > > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > > > needs to be applied, then there is no need to solve the corresponding
> > > > issues.
> > > 
> > > Exactly.
> > 
> > Thank you for bearing with me.
> > 
> > I will respond to your other email later, but the focus on memory
> > safety in particular instead of undefined behavior in general does help
> > me quite a bit.
> > 
> > My next step is to create a "TL;DR: Memory-Model Recommendations" post
> > that is more specific, with both short-term ("do what is easy") and
> > long-term suggestions.
> > 
> > 							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-14  3:35                                       ` Paul E. McKenney
@ 2021-10-14  8:03                                         ` Wedson Almeida Filho
  2021-10-14 19:43                                           ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Wedson Almeida Filho @ 2021-10-14  8:03 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Miguel Ojeda, Gary Guo, Marco Elver, Boqun Feng, kasan-dev,
	rust-for-linux

On Wed, Oct 13, 2021 at 08:35:57PM -0700, Paul E. McKenney wrote:
> On Wed, Oct 13, 2021 at 06:50:24PM +0100, Wedson Almeida Filho wrote:
> > On Wed, Oct 13, 2021 at 09:07:07AM -0700, Paul E. McKenney wrote:
> > > On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> > > > On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > >
> > > > > The main issue I was calling out was not justifying Rust, but rather
> > > > > making sure that the exact same build could be reproduced a decade later.
> > > > 
> > > > Yes, but that is quite trivial compared to other issues I was
> > > > mentioning like adapting and requalifying a testing tool. For
> > > > instance, if you already had a team maintaining the configuration
> > > > management (i.e. the versions etc.), adding one more tool is not a big
> > > > deal.
> > > 
> > > OK, close enough to fair enough.  ;-)
> > > 
> > > > > There are things that concurrent software would like to do that are
> > > > > made quite inconvenient due to large numbers of existing optimizations
> > > > > in the various compiler backends.  Yes, we have workarounds.  But I
> > > > > do not see how Rust is going to help with these inconveniences.
> > > > 
> > > > Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> > > > valuable to be able to express particular algorithms in unsafe Rust,
> > > > then I would contact the Rust teams to let them know your needs --
> > > > perhaps we end up with something way better than C for that use case!
> > > 
> > > Sequence locks and RCU do seem to be posing some challenges.  I suppose
> > > this should not be too much of a surprise, given that there are people who
> > > have been in the Rust community for a long time who do understand both.
> > > If it were easy, they would have already come up with a solution.
> > 
> > (Hey Paul, I tried posting on your blog series, but I'm having difficulty so I
> > thought I'd reply here given that we mention seqlocks and RCU here.)
> 
> It should be straightforward to post a comment, but some report that
> their employers block livejournal.com.  :-/

I tried to use my google account while posting, and after I posted it took me
through some workflow to confirm my account; perhaps the comment was lost
during this workflow. Let me try again.

> Oh, and I have updated heavily recently, including adding a bunch of
> Linux-kernel use cases for both sequence locking and RCU.

I'll check it out, thanks!
 
> > I spent a bit of time thinking about sequence locks and I think I have something
> > that is workable. (I remind you that we use the C implementation for the
> > synchronisation primitives). Suppose we had some struct like so:
> > 
> > struct X {
> >     a: AtomicU32,
> >     b: AtomicU32,
> > }
> > 
> > And suppose we have it protected by a sequence lock. If we wanted to return the
> > sum of the two fields, the code would look like this:
> > 
> >     let v = y.access(|x| {
> >         let a = x.a.load(Ordering::Relaxed);
> > 	let b = x.b.load(Ordering::Relaxed);
> > 	a + b
> >     });
> > 
> > It would be expanded to the following machine code in aarch64 (when LTO is
> > enabled):
> > 
> >   403fd4:       14000002        b       403fdc
> >   403fd8:       d503203f        yield
> >   403fdc:       b9400808        ldr     w8, [x0, #8]
> >   403fe0:       3707ffc8        tbnz    w8, #0, 403fd8
> >   403fe4:       d50339bf        dmb     ishld
> >   403fe8:       b9400c09        ldr     w9, [x0, #12]
> >   403fec:       b940100a        ldr     w10, [x0, #16]
> >   403ff0:       d50339bf        dmb     ishld
> >   403ff4:       b940080b        ldr     w11, [x0, #8]
> >   403ff8:       6b08017f        cmp     w11, w8
> >   403ffc:       54ffff01        b.ne    403fdc
> >   404000:       0b090148        add     w8, w10, w9
> > 
> > It is as efficient as the C version, though not as ergonomic. The
> > .load(Ordering::Relaxed) can of course be improved to something shorter like
> > .load_relaxed() or even new atomic types  with .load() being relaxed and
> > .load_ordered(Ordering) for other ordering.
> 
> Nice!
> 
> Is this a native Rust sequence-lock implementation or a wrapper around
> the C-language Linux-kernel implementation?

It's a wrapper around the C-language Linux kernel implementation. (To get the
generated code with LTO inlining, I compiled the code in userspace because
LTO with cross-language inlining isn't enabled/working in the kernel yet).

> > I also have guard- and iterator-based methods for the read path that would look
> > like this (these can all co-exist if we so choose):
> > 
> >     let v = loop {
> >         let guard = y.read();
> >         let a = guard.a.load(Ordering::Relaxed);
> >         let b = guard.b.load(Ordering::Relaxed);
> >         if !guard.need_retry() {
> >             break a + b;
> >         }
> >     };
> > 
> > and
> > 
> >     let mut v = 0;
> >     for x in y {
> >         let a = x.a.load(Ordering::Relaxed);
> > 	let b = x.b.load(Ordering::Relaxed);
> > 	v = a + b;
> >     }
> > 
> > The former generates the exact same machine code as above though the latter
> > generates slightly worse code (it has instructions sequences like "mov w10,
> > #0x1; tbnz w10, #0, 403ffc" and , "mov w10, wzr; tbnz w10, #0, 403ffc", which
> > could be optimised but for some reason isn't).
> 
> The C++ bindings for RCU provide a similar guard approach, leveraging
> C++ BasicLock.  Explicit lock and unlock can be obtained using
> move-assignments.

I haven't seen these bindings, perhaps I should :) But one relevant point about
guards is that Rust has an affine type system that allows it to catch misuse of
guards at compile time. For example, if one wants to explicitly unlock, the
unlock method 'consumes' (move-assigns) the guard, rendering it unusable:
attempting to use such a guard is a compile-time error (even if it's in scope).
In C++, this wouldn't be caught at compile time as moved variables remain
accessible while in scope.
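
A tiny illustration of what I mean (made-up names):

    pub struct Spinlock; // stand-in for a real lock type

    pub struct Guard<'a> {
        _lock: &'a Spinlock,
    }

    impl Guard<'_> {
        // Takes `self` by value: the guard is moved into `unlock`, so any
        // later use of it is rejected by the compiler.
        pub fn unlock(self) {
            // the Drop impl (elided here) releases the lock as `self` goes away
        }
    }

    // let g = some_lock.lock();   // yields a Guard
    // g.unlock();
    // g.unlock();                 // error[E0382]: use of moved value: `g`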

> > Anyway, on to the write path. We need another primitive to ensure that only one
> > writer at a time attempts to acquire the sequence lock in write mode. We do this
> > by taking a guard for this other lock, for example, suppose we want to increment
> > each of the fields:
> > 
> >     let other_guard = other_lock.lock();
> >     let guard = y.write(&other_guard);
> 
> The first acquires the lock in an RAII (scoped) fashion and the second
> enters the sequence-lock write-side critical section, correct?

Yes, exactly.

Additionally, the ownership rules guarantee that the outer lock cannot be
unlocked while in the sequence-lock write-side critical section (because the
inner guard borrows the outer one, so it can only be consumed after this borrow
goes away). An attempt to do so would result in a compile-time error.
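
In terms of signatures, the shape is roughly this (again, illustrative only):

    pub struct LockGuard; // stand-in for the outer lock's guard type

    impl LockGuard {
        pub fn unlock(self) {} // consumes the guard, as above
    }

    // The write guard borrows the outer guard, so the outer guard cannot
    // be consumed (unlocked) until the write guard is dropped.
    pub struct SeqWriteGuard<'a> {
        _proof: &'a LockGuard,
    }

    pub fn write(proof: &LockGuard) -> SeqWriteGuard<'_> {
        // seqcount write-begin elided
        SeqWriteGuard { _proof: proof }
    }

    // let outer = other_lock.lock();   // LockGuard
    // let g = write(&outer);           // borrows `outer`
    // outer.unlock();                  // error: cannot move out of `outer`
    //                                  //        because it is borrowed
    // drop(g);                         // the borrow ends here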

> 
> >     guard.a.store(guard.a.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> >     guard.b.store(guard.b.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > 
> > The part the relates to the sequence lock is compiled to the following:
> > 
> >   404058:       f9400009        ldr     x9, [x0]
> >   40405c:       eb08013f        cmp     x9, x8
> >   404060:       54000281        b.ne    4040b0
> > 
> >   404064:       b9400808        ldr     w8, [x0, #8]
> >   404068:       11000508        add     w8, w8, #0x1
> >   40406c:       b9000808        str     w8, [x0, #8]
> >   404070:       d5033abf        dmb     ishst
> >   404074:       b9400c08        ldr     w8, [x0, #12]
> >   404078:       11000508        add     w8, w8, #0x1
> >   40407c:       b9000c08        str     w8, [x0, #12]
> >   404080:       b9401008        ldr     w8, [x0, #16]
> >   404084:       11000508        add     w8, w8, #0x1
> >   404088:       b9001008        str     w8, [x0, #16]
> >   40408c:       d5033abf        dmb     ishst
> >   404090:       b9400808        ldr     w8, [x0, #8]
> >   404094:       11000508        add     w8, w8, #0x1
> >   404098:       b9000808        str     w8, [x0, #8]
> > 
> > If we ignore the first three instructions momentarily, the rest is as efficient
> > as C. The reason we need the first three instructions is to ensure that guard
> > that was passed into the `write` function is a guard to the correct lock. The
> > lock type already eliminates the vast majority of issues, but a developer could
> > accidentally lock the wrong lock and use it in the sequence lock, which would be
> > problematic. So we need this check in Rust that we don't need in C (although the
> > same mistake could happen in C).
> > 
> > We can provide an 'unsafe' version that doesn't perform this check, then the
> > onus is on the callers to convince themselves that they have acquired the
> > correct lock (and they'd be required to use an unsafe block). Then the
> > performance would be the same as the C version.
> 
> The Linux-kernel C-language sequence counter (as opposed to the various
> flavors of sequence lock) assume that the caller has provided any needed
> mutual exclusion.

Yes, this actually uses sequence counters.

I suppose if we embed the locks ourselves like sequence locks do, we can wrap
such 'unsafe' blocks as part of the implementation and only expose safe
interfaces that are as efficient as C.
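
Something along these lines (a sketch with made-up names; the calls into the C
seqcount and lock are elided as comments):

    // The seqlock embeds its own lock, so the write path can use the
    // unchecked fast path internally while the public API stays safe.
    pub struct SeqLock<T> {
        // would embed the kernel's seqcount and a spinlock here
        data: T,
    }

    pub struct WriteGuard<'a, T> {
        owner: &'a SeqLock<T>,
    }

    impl<T> SeqLock<T> {
        pub fn write(&self) -> WriteGuard<'_, T> {
            // 1) acquire the embedded lock
            // 2) seqcount write-begin
            // No "is this the right lock?" check is needed, because the
            // only lock that can be used is the embedded one.
            WriteGuard { owner: self }
        }
    }

    impl<T> core::ops::Deref for WriteGuard<'_, T> {
        type Target = T;
        fn deref(&self) -> &T {
            &self.owner.data
        }
    }

    impl<T> Drop for WriteGuard<'_, T> {
        fn drop(&mut self) {
            // seqcount write-end, then release the embedded lock
        }
    }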

Do you happen to know the usage ratio of sequence counters to sequence locks
(all flavours combined)? If the latter are used in the vast majority of cases,
I think it makes sense to do something similar in Rust.

> 
> > Now that I've presented how my proposal looks like from the PoV of a user,
> > here's its rationale: given that we only want one copy of the data and that
> > mutable references are always unique in the safe fragment of Rust, we can't (and
> > don't) return a mutable reference to what's protected by the sequence lock, we
> > always only allow shared access, even when the sequence lock is acquired in
> > write mode.
> > 
> > Then how does one change the fields? Interior mutability. In the examples above,
> > the fields are all atomic, so they can be changed with the `store` method. Any
> > type that provides interior mutability is suitable here.
> 
> OK, so following the approach of "marked accesses".

Yes.
 
> > If we need to use types with interior mutability, what's the point of the
> > sequence lock? The point is to allow a consistent view of the fields. In our
> > example, even though `a` and `b` are atomic, the sequence lock guarantees that
> > readers will get a consistent view of the values even though writers modify one
> > at a time.
> 
> Yes.
> 
> I suppose that the KCSAN ASSERT_EXCLUSIVE_WRITER() could be used on
> the sequence-lock update side to check for unwanted concurrency.

Yes, definitely!

> > Lastly, the fact we use a generic `Guard` as proof that a lock is held (for the
> > write path) means that we don't need to manually implement this for each
> > different lock we care about; any that implements the `Lock` trait can be used.
> > This is unlike the C code that uses fragile macros to generate code for
> > different types of locks (though the scenario is slightly different in that the
> > C code embeds a lock, which is also something we could do in Rust) -- the Rust
> > version uses generics, so it is type-checked by the compiler.
> 
> OK, so this is a standalone implementation of sequence locks in Rust,
> rather than something that could interoperate with the C-language
> sequence locks?

It's an implementation of sequence locks using C-language sequence counters.
Instead of embedding a lock for writer mutual exclusion, we require evidence
that some lock is in use. The idea was to be "flexible" and share locks, but if
most usage just embeds a lock, we may as well do something similar in Rust.

> Is "fragile macros" just the usual Rust denigration of the C preprocessor,
> or is there some specific vulnerability that you see in those macros?

I don't see any specific vulnerability. By fragile I meant that it's more error
prone to write "generic" code with macros than with compiler-supported generics.
 
> Of course, those macros could be used to automatically generate the
> wrappers.  Extract the macro invocations from the C source, and transform
> them to wrappers, perhaps using Rust macros somewhere along the way.

Sure, we could do something like that.

But given that we already wrap the C locks in Rust abstractions that implement a
common trait (interface), we can use Rust generics to leverage all locks without
the need for macros.
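
For example (another sketch with made-up names), the same generic definition
then accepts any of the wrapped lock types:

    // Both wrapper types implement the same trait, so one generic function
    // covers them all -- no per-lock macro expansions.
    pub trait Lock {
        fn acquire(&self);
        fn release(&self);
    }

    pub struct Spinlock; // stand-ins for the Rust wrappers of the C locks
    pub struct Mutex;

    impl Lock for Spinlock {
        fn acquire(&self) { /* calls into the C spinlock */ }
        fn release(&self) { /* ... */ }
    }

    impl Lock for Mutex {
        fn acquire(&self) { /* calls into the C mutex */ }
        fn release(&self) { /* ... */ }
    }

    pub fn with_lock<L: Lock, R>(lock: &L, f: impl FnOnce() -> R) -> R {
        lock.acquire();
        let r = f();
        lock.release();
        r
    }

    // with_lock(&Spinlock, || ...) and with_lock(&Mutex, || ...) both
    // type-check from this single definition.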

> > RCU pointers can be implemented with a similar technique in that read access is
> > protected by a 'global' RCU reader lock (and evidence of it being locked is
> > required to get read access), and writers require another lock to be held. The
> > only piece that I haven't thought through yet is how to ensure that pointers
> > that were exposed with RCU 'protection' cannot be freed before the grace period
> > has elapsed. But this is a discussion for another time.
> 
> Please note that it is quite important for Rust to use the RCU provided
> by the C-language part of the kernel.  Probably also for sequence locks,
> but splitting RCU reduces the effectiveness of its batching optimizations.

Agreed. We actually use the C implementation for all synchronisation primitives
(including ref-counting, which isn't technically a synchronisation primitive but
has subtle usage of barriers). What I mean by "implemented in Rust" is just the
abstractions leveraging Rust concepts to catch misuses earlier where possible.

> For at least some of the Linux kernel's RCU use cases, something like
> interior mutability may be required.  Whether those use cases show up
> in any Rust-language drivers I cannot say.  Other use cases would work
> well with RCU readers having read ownership of the non-pointer fields
> in each RCU-protected object.
> 
> Again, I did add rough descriptions of a few Linux-kernel RCU use cases.
> 
> > I'll send out the patches for what I describe above in the next couple of days.
> > 
> > Does any of the above help answer the questions you have about seqlocks in Rust?
> 
> Possibly at least some of them.  I suspect that there is still much to
> be learned on all sides, including learning about additional questions
> that need to be asked.

Fair point. We don't know quite yet if we've asked all the questions.

> Either way, thank you for your work on this!

Thanks for engaging with us, this is much appreciated.

Cheers,
-Wedson

> 
> 							Thanx, Paul
> 
> > Thanks,
> > -Wedson
> > 
> > > So the trick is to stage things so as to allow people time to work on
> > > these sorts of issues.
> > > 
> > > > In any case, Rust does not necessarily need to help there. What is
> > > > important is whether Rust helps writing the majority of the kernel
> > > > code. If we need to call into C or use inline assembly for certain
> > > > bits -- so be it.
> > > > 
> > > > > But to be fair, much again depends on exactly where Rust is to be applied
> > > > > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > > > > needs to be applied, then there is no need to solve the corresponding
> > > > > issues.
> > > > 
> > > > Exactly.
> > > 
> > > Thank you for bearing with me.
> > > 
> > > I will respond to your other email later, but the focus on memory
> > > safety in particular instead of undefined behavior in general does help
> > > me quite a bit.
> > > 
> > > My next step is to create a "TL;DR: Memory-Model Recommendations" post
> > > that is more specific, with both short-term ("do what is easy") and
> > > long-term suggestions.
> > > 
> > > 							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-14  8:03                                         ` Wedson Almeida Filho
@ 2021-10-14 19:43                                           ` Paul E. McKenney
  2021-10-15 15:06                                             ` Wedson Almeida Filho
  0 siblings, 1 reply; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-14 19:43 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: Miguel Ojeda, Gary Guo, Marco Elver, Boqun Feng, kasan-dev,
	rust-for-linux

On Thu, Oct 14, 2021 at 09:03:42AM +0100, Wedson Almeida Filho wrote:
> On Wed, Oct 13, 2021 at 08:35:57PM -0700, Paul E. McKenney wrote:
> > On Wed, Oct 13, 2021 at 06:50:24PM +0100, Wedson Almeida Filho wrote:
> > > On Wed, Oct 13, 2021 at 09:07:07AM -0700, Paul E. McKenney wrote:
> > > > On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> > > > > On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > >
> > > > > > The main issue I was calling out was not justifying Rust, but rather
> > > > > > making sure that the exact same build could be reproduced a decade later.
> > > > > 
> > > > > Yes, but that is quite trivial compared to other issues I was
> > > > > mentioning like adapting and requalifying a testing tool. For
> > > > > instance, if you already had a team maintaining the configuration
> > > > > management (i.e. the versions etc.), adding one more tool is not a big
> > > > > deal.
> > > > 
> > > > OK, close enough to fair enough.  ;-)
> > > > 
> > > > > > There are things that concurrent software would like to do that are
> > > > > > made quite inconvenient due to large numbers of existing optimizations
> > > > > > in the various compiler backends.  Yes, we have workarounds.  But I
> > > > > > do not see how Rust is going to help with these inconveniences.
> > > > > 
> > > > > Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> > > > > valuable to be able to express particular algorithms in unsafe Rust,
> > > > > then I would contact the Rust teams to let them know your needs --
> > > > > perhaps we end up with something way better than C for that use case!
> > > > 
> > > > Sequence locks and RCU do seem to be posing some challenges.  I suppose
> > > > this should not be too much of a surprise, given that there are people who
> > > > have been in the Rust community for a long time who do understand both.
> > > > If it were easy, they would have already come up with a solution.
> > > 
> > > (Hey Paul, I tried posting on your blog series, but I'm having difficulty so I
> > > thought I'd reply here given that we mention seqlocks and RCU here.)
> > 
> > It should be straightforward to post a comment, but some report that
> > their employers block livejournal.com.  :-/
> 
> I tried to use my google account while posting and then after I posted it took
> me through some workflow to confirm my account, perhaps the comment was lost
> during this workflow. Let me try again.

Please let me know how it goes.

> > Oh, and I have updated heavily recently, including adding a bunch of
> > Linux-kernel use cases for both sequence locking and RCU.
> 
> I'll check it out, thanks!
>  
> > > I spent a bit of time thinking about sequence locks and I think I have something
> > > that is workable. (I remind you that we use the C implementation for the
> > > synchronisation primitives). Suppose we had some struct like so:
> > > 
> > > struct X {
> > >     a: AtomicU32,
> > >     b: AtomicU32,
> > > }
> > > 
> > > And suppose we have it protected by a sequence lock. If we wanted to return the
> > > sum of the two fields, the code would look like this:
> > > 
> > >     let v = y.access(|x| {
> > >         let a = x.a.load(Ordering::Relaxed);
> > > 	let b = x.b.load(Ordering::Relaxed);
> > > 	a + b
> > >     });
> > > 
> > > It would be expanded to the following machine code in aarch64 (when LTO is
> > > enabled):
> > > 
> > >   403fd4:       14000002        b       403fdc
> > >   403fd8:       d503203f        yield
> > >   403fdc:       b9400808        ldr     w8, [x0, #8]
> > >   403fe0:       3707ffc8        tbnz    w8, #0, 403fd8
> > >   403fe4:       d50339bf        dmb     ishld
> > >   403fe8:       b9400c09        ldr     w9, [x0, #12]
> > >   403fec:       b940100a        ldr     w10, [x0, #16]
> > >   403ff0:       d50339bf        dmb     ishld
> > >   403ff4:       b940080b        ldr     w11, [x0, #8]
> > >   403ff8:       6b08017f        cmp     w11, w8
> > >   403ffc:       54ffff01        b.ne    403fdc
> > >   404000:       0b090148        add     w8, w10, w9
> > > 
> > > It is as efficient as the C version, though not as ergonomic. The
> > > .load(Ordering::Relaxed) can of course be improved to something shorter like
> > > .load_relaxed() or even new atomic types  with .load() being relaxed and
> > > .load_ordered(Ordering) for other ordering.
> > 
> > Nice!
> > 
> > Is this a native Rust sequence-lock implementation or a wrapper around
> > the C-language Linux-kernel implementation?
> 
> It's a wrapper around the C-language Linux kernel implementation. (To get the
> generated code with LTO inlining, I compiled the code in userspace because
> LTO with cross-language inlining isn't enabled/working in the kernel yet).

Good on the wrapper, and agreed, I also tend to prototype in userspace.

> > > I also have guard- and iterator-based methods for the read path that would look
> > > like this (these can all co-exist if we so choose):
> > > 
> > >     let v = loop {
> > >         let guard = y.read();
> > >         let a = guard.a.load(Ordering::Relaxed);
> > >         let b = guard.b.load(Ordering::Relaxed);
> > >         if !guard.need_retry() {
> > >             break a + b;
> > >         }
> > >     };
> > > 
> > > and
> > > 
> > >     let mut v = 0;
> > >     for x in y {
> > >         let a = x.a.load(Ordering::Relaxed);
> > > 	let b = x.b.load(Ordering::Relaxed);
> > > 	v = a + b;
> > >     }
> > > 
> > > The former generates the exact same machine code as above though the latter
> > > generates slightly worse code (it has instructions sequences like "mov w10,
> > > #0x1; tbnz w10, #0, 403ffc" and , "mov w10, wzr; tbnz w10, #0, 403ffc", which
> > > could be optimised but for some reason isn't).
> > 
> > The C++ bindings for RCU provide a similar guard approach, leveraging
> > C++ BasicLock.  Explicit lock and unlock can be obtained using
> > move-assignments.
> 
> I haven't seen these bindings, perhaps I should :) But one relevant point about
> guards is that Rust has an affine type system that allows it to catch misuse of
> guards at compile time. For example, if one wants to explicitly unlock, the
> unlock method 'consumes' (move-assigns) the guard, rendering it unusable:
> attempting to use such a guard is a compile-time error (even if it's in scope).
> In C++, this wouldn't be caught at compile time as moved variables remain
> accessible while in scope.

OK, but there are cases where seqlock entry/exit is buried in helper
functions, for example in the follow_dotdot_rcu() function in fs/namei.c.
(See recent changes to https://paulmck.livejournal.com/63957.html.)
This sort of thing is often necessary to support iterators.

So how is that use case handled?

Plus we could easily get an RAII-like effect in C code for RCU as follows:

	#define rcu_read_lock_scoped() rcu_read_lock(); {
	#define rcu_read_unlock_scoped() } rcu_read_unlock();

	rcu_read_lock_scoped();
		struct foo *p = rcu_dereference(global_p);

		do_some_rcu_stuff_with(p);
	rcu_read_unlock_scoped();

But we don't.  One reason is that we often need to do things like
this:

	rcu_read_lock();
	p = rcu_dereference(global_p);
	if (ask_rcu_question(p)) {
		do_some_other_rcu_thing(p);
		rcu_read_unlock();
		do_something_that_sleeps();
	} else {
		do_yet_some_other_rcu_thing(p);
		rcu_read_unlock();
		do_something_else_that_sleeps();
	}

Sure, you could write that like this:

	bool q;

	rcu_read_lock_scoped();
		struct foo *p = rcu_dereference(global_p);
		q = ask_rcu_question(p);
		if (q)
			do_some_other_rcu_thing(p);
		else
			do_yet_some_other_rcu_thing(p);
	rcu_read_unlock_scoped();
	if (q)
		do_something_that_sleeps();
	else
		do_something_else_that_sleeps();

And I know any number of C++ guys who would sing the benefits of the
latter over the former, but I personally think they are drunk on RAII
Koolaid.  As would any number of people in the Linux kernel community. ;-)

It turns out that there are about 3400 uses of rcu_read_lock() and
about 4200 uses of rcu_read_unlock().  So this sort of thing is common.
Yes, it is possible that use of RAII would get rid of some of them,
but definitely not all of them.

Plus there are situations where an iterator momentarily drops out of
an RCU read-side critical section in order to keep from impeding RCU
grace periods.  These tend to be buried deep down the function-call stack.

Don't get me wrong, RAII has its benefits.  But also its drawbacks.

> > > Anyway, on to the write path. We need another primitive to ensure that only one
> > > writer at a time attempts to acquire the sequence lock in write mode. We do this
> > > by taking a guard for this other lock, for example, suppose we want to increment
> > > each of the fields:
> > > 
> > >     let other_guard = other_lock.lock();
> > >     let guard = y.write(&other_guard);
> > 
> > The first acquires the lock in an RAII (scoped) fashion and the second
> > enters the sequence-lock write-side critical section, correct?
> 
> Yes, exactly.

But wouldn't it be more ergonomic and thus less error-prone to be able
to combine those into a single statement?

> Additionally, the ownership rules guarantee that the outer lock cannot be
> unlocked while in the sequence-lock write-side critical section (because the
> inner guard borrows the outer one, so it can be only be consumed after this
> borrow goes away). An attempt to do so would result in a compile-time error.

OK, let's talk about the Rusty Scale of ease of use...

This was introduced by Rusty Russell in his 2003 Ottawa Linux Symposium
keynote: https://ozlabs.org/~rusty/ols-2003-keynote/ols-keynote-2003.html.
The relevant portion is in slides 39-57.

An API that doesn't let you get it wrong (combined lock/count acquisition)
is better than one where the compiler complains if you get it wrong.  ;-)

> > >     guard.a.store(guard.a.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > >     guard.b.store(guard.b.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > > 
> > > The part the relates to the sequence lock is compiled to the following:
> > > 
> > >   404058:       f9400009        ldr     x9, [x0]
> > >   40405c:       eb08013f        cmp     x9, x8
> > >   404060:       54000281        b.ne    4040b0
> > > 
> > >   404064:       b9400808        ldr     w8, [x0, #8]
> > >   404068:       11000508        add     w8, w8, #0x1
> > >   40406c:       b9000808        str     w8, [x0, #8]
> > >   404070:       d5033abf        dmb     ishst
> > >   404074:       b9400c08        ldr     w8, [x0, #12]
> > >   404078:       11000508        add     w8, w8, #0x1
> > >   40407c:       b9000c08        str     w8, [x0, #12]
> > >   404080:       b9401008        ldr     w8, [x0, #16]
> > >   404084:       11000508        add     w8, w8, #0x1
> > >   404088:       b9001008        str     w8, [x0, #16]
> > >   40408c:       d5033abf        dmb     ishst
> > >   404090:       b9400808        ldr     w8, [x0, #8]
> > >   404094:       11000508        add     w8, w8, #0x1
> > >   404098:       b9000808        str     w8, [x0, #8]
> > > 
> > > If we ignore the first three instructions momentarily, the rest is as efficient
> > > as C. The reason we need the first three instructions is to ensure that guard
> > > that was passed into the `write` function is a guard to the correct lock. The
> > > lock type already eliminates the vast majority of issues, but a developer could
> > > accidentally lock the wrong lock and use it in the sequence lock, which would be
> > > problematic. So we need this check in Rust that we don't need in C (although the
> > > same mistake could happen in C).
> > > 
> > > We can provide an 'unsafe' version that doesn't perform this check, then the
> > > onus is on the callers to convince themselves that they have acquired the
> > > correct lock (and they'd be required to use an unsafe block). Then the
> > > performance would be the same as the C version.
> > 
> > The Linux-kernel C-language sequence counter (as opposed to the various
> > flavors of sequence lock) assume that the caller has provided any needed
> > mutual exclusion.
> 
> Yes, this actually uses sequence counters.
> 
> I suppose if we embed the locks ourselves like sequence locks do, we can wrap
> such 'unsafe' blocks as part of the implementation and only expose safe
> interfaces as efficient as C.
> 
> Do you happen to know the usage ratio between sequence counters vs sequence
> locks (all flavours combined)? If the latter are used in the vast majority of
> cases, I think it makes sense to do something similar in Rust.

Let's count the initializations:

o	Sequence counters:

	 8	SEQCNT_ZERO
	15	seqcount_init

	23	Total

o	Sequence locks:

	3	SEQCNT_RAW_SPINLOCK_ZERO
	3	SEQCNT_SPINLOCK_ZERO
	0	SEQCNT_RWLOCK_ZERO
	0	SEQCNT_MUTEX_ZERO
	0	SEQCNT_WW_MUTEX_ZERO
	1	seqcount_raw_spinlock_init
	13	seqcount_spinlock_init
	1	seqcount_rwlock_init
	1	seqcount_mutex_init
	1	seqcount_ww_mutex_init

	23	Total

Exactly even!  When does -that- ever happen?  ;-)

> > > Now that I've presented how my proposal looks like from the PoV of a user,
> > > here's its rationale: given that we only want one copy of the data and that
> > > mutable references are always unique in the safe fragment of Rust, we can't (and
> > > don't) return a mutable reference to what's protected by the sequence lock, we
> > > always only allow shared access, even when the sequence lock is acquired in
> > > write mode.
> > > 
> > > Then how does one change the fields? Interior mutability. In the examples above,
> > > the fields are all atomic, so they can be changed with the `store` method. Any
> > > type that provides interior mutability is suitable here.
> > 
> > OK, so following the approach of "marked accesses".
> 
> Yes.
>  
> > > If we need to use types with interior mutability, what's the point of the
> > > sequence lock? The point is to allow a consistent view of the fields. In our
> > > example, even though `a` and `b` are atomic, the sequence lock guarantees that
> > > readers will get a consistent view of the values even though writers modify one
> > > at a time.
> > 
> > Yes.
> > 
> > I suppose that the KCSAN ASSERT_EXCLUSIVE_WRITER() could be used on
> > the sequence-lock update side to check for unwanted concurrency.
> 
> Yes, definitely!

Could anything be done to check for values leaking out of failed seqlock
read-side critical sections?

> > > Lastly, the fact we use a generic `Guard` as proof that a lock is held (for the
> > > write path) means that we don't need to manually implement this for each
> > > different lock we care about; any that implements the `Lock` trait can be used.
> > > This is unlike the C code that uses fragile macros to generate code for
> > > different types of locks (though the scenario is slightly different in that the
> > > C code embeds a lock, which is also something we could do in Rust) -- the Rust
> > > version uses generics, so it is type-checked by the compiler.
> > 
> > OK, so this is a standalone implementation of sequence locks in Rust,
> > rather than something that could interoperate with the C-language
> > sequence locks?
> 
> It's an implementation of sequence locks using C-language sequence counters.
> Instead of embedding a lock for writer mutual exclusion, we require evidence
> that some lock is in use. The idea was to be "flexible" and share locks, but if
> most usage just embeds a lock, we may as well do something similar in Rust.

Whew!

I don't know if such a case exists, but there is the possibility of
non-lock mutual exclusion.  For example, the last guy to remove a
reference to something is allowed to do a sequence-counter update.

How would such a case be handled?

> > Is "fragile macros" just the usual Rust denigration of the C preprocessor,
> > or is there some specific vulnerability that you see in those macros?
> 
> I don't see any specific vulnerability. By fragile I meant that it's more error
> prone to write "generic" code with macros than with compiler-supported generics.

Fair enough, but rest assured that those who love the C preprocessor
have their own "interesting" descriptions of Rust macros.  ;-)

Plus I am old enough to remember people extolling the simplicity of
C-preprocessor macros compared to, among other things, LISP macros.
And they were correct to do so, at least for simple use cases.

I suggest just calling them CPP macros or similar when talking with
Linux-kernel community members.  Me, I have seen enough software artifacts
come and go that I don't much care what you call them, but others just
might be a bit more touchy about such things.

> > Of course, those macros could be used to automatically generate the
> > wrappers.  Extract the macro invocations from the C source, and transform
> > them to wrappers, perhaps using Rust macros somewhere along the way.
> 
> Sure, we could do something like that.
> 
> But given that we already wrap the C locks in Rust abstractions that implement a
> common trait (interface), we can use Rust generics to leverage all locks without
> the need for macros.

If you have a particular sequence lock that is shared between Rust and C
code, it would be good to be able to easily find the Rust uses given
the C uses and vice versa!

I am not claiming that generics won't work, but instead that we still need
to be able to debug the Linux kernel, and that requires us to be able to
quickly and easily find all the places where a given object is used.

> > > RCU pointers can be implemented with a similar technique in that read access is
> > > protected by a 'global' RCU reader lock (and evidence of it being locked is
> > > required to get read access), and writers require another lock to be held. The
> > > only piece that I haven't thought through yet is how to ensure that pointers
> > > that were exposed with RCU 'protection' cannot be freed before the grace period
> > > has elapsed. But this is a discussion for another time.
> > 
> > Please note that it is quite important for Rust to use the RCU provided
> > by the C-language part of the kernel.  Probably also for sequence locks,
> > but splitting RCU reduces the effectiveness of its batching optimizations.
> 
> Agreed. We actually use the C implementation for all synchronisation primitives
> (including ref-counting, which isn't technically a synchronisation primitive but
> has subtle usage of barriers). What I mean by "implemented in Rust" is just the
> abstractions leveraging Rust concepts to catch misuses earlier where possible.

Might I suggest that you instead say "wrappered for Rust"?

I am not the only one to whom "implemented in Rust" means just what
it says, that Rust has its own variant written completely in Rust.
Continuing to use "implemented in Rust" will continue to mislead
Linux-kernel developers into believing that you created a from-scratch
Rust variant of the code at hand, and believe me, that won't go well.

> > For at least some of the Linux kernel's RCU use cases, something like
> > interior mutability may be required.  Whether those use cases show up
> > in any Rust-language drivers I cannot say.  Other use cases would work
> > well with RCU readers having read ownership of the non-pointer fields
> > in each RCU-protected object.
> > 
> > Again, I did add rough descriptions of a few Linux-kernel RCU use cases.
> > 
> > > I'll send out the patches for what I describe above in the next couple of days.
> > > 
> > > Does any of the above help answer the questions you have about seqlocks in Rust?
> > 
> > Possibly at least some of them.  I suspect that there is still much to
> > be learned on all sides, including learning about additional questions
> > that need to be asked.
> 
> Fair point. We don't know quite yet if we've asked all the questions.

My main immediate additional question is "what are the bugs and what
can be done to better locate them".  That question of course applies
regardless of the language and tools used for a given piece of code.

> > Either way, thank you for your work on this!
> 
> Thanks for engaging with us, this is much appreciated.
> 
> Cheers,
> -Wedson
> 
> > 
> > 							Thanx, Paul
> > 
> > > Thanks,
> > > -Wedson
> > > 
> > > > So the trick is to stage things so as to allow people time to work on
> > > > these sorts of issues.
> > > > 
> > > > > In any case, Rust does not necessarily need to help there. What is
> > > > > important is whether Rust helps writing the majority of the kernel
> > > > > code. If we need to call into C or use inline assembly for certain
> > > > > bits -- so be it.
> > > > > 
> > > > > > But to be fair, much again depends on exactly where Rust is to be applied
> > > > > > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > > > > > needs to be applied, then there is no need to solve the corresponding
> > > > > > issues.
> > > > > 
> > > > > Exactly.
> > > > 
> > > > Thank you for bearing with me.
> > > > 
> > > > I will respond to your other email later, but the focus on memory
> > > > safety in particular instead of undefined behavior in general does help
> > > > me quite a bit.
> > > > 
> > > > My next step is to create a "TL;DR: Memory-Model Recommendations" post
> > > > that is more specific, with both short-term ("do what is easy") and
> > > > long-term suggestions.
> > > > 
> > > > 							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-14 19:43                                           ` Paul E. McKenney
@ 2021-10-15 15:06                                             ` Wedson Almeida Filho
  2021-10-15 23:29                                               ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Wedson Almeida Filho @ 2021-10-15 15:06 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Miguel Ojeda, Gary Guo, Marco Elver, Boqun Feng, kasan-dev,
	rust-for-linux

On Thu, Oct 14, 2021 at 12:43:41PM -0700, Paul E. McKenney wrote:
> On Thu, Oct 14, 2021 at 09:03:42AM +0100, Wedson Almeida Filho wrote:
> > On Wed, Oct 13, 2021 at 08:35:57PM -0700, Paul E. McKenney wrote:
> > > On Wed, Oct 13, 2021 at 06:50:24PM +0100, Wedson Almeida Filho wrote:
> > > > On Wed, Oct 13, 2021 at 09:07:07AM -0700, Paul E. McKenney wrote:
> > > > > On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> > > > > > On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > > >
> > > > > > > The main issue I was calling out was not justifying Rust, but rather
> > > > > > > making sure that the exact same build could be reproduced a decade later.
> > > > > > 
> > > > > > Yes, but that is quite trivial compared to other issues I was
> > > > > > mentioning like adapting and requalifying a testing tool. For
> > > > > > instance, if you already had a team maintaining the configuration
> > > > > > management (i.e. the versions etc.), adding one more tool is not a big
> > > > > > deal.
> > > > > 
> > > > > OK, close enough to fair enough.  ;-)
> > > > > 
> > > > > > > There are things that concurrent software would like to do that are
> > > > > > > made quite inconvenient due to large numbers of existing optimizations
> > > > > > > in the various compiler backends.  Yes, we have workarounds.  But I
> > > > > > > do not see how Rust is going to help with these inconveniences.
> > > > > > 
> > > > > > Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> > > > > > valuable to be able to express particular algorithms in unsafe Rust,
> > > > > > then I would contact the Rust teams to let them know your needs --
> > > > > > perhaps we end up with something way better than C for that use case!
> > > > > 
> > > > > Sequence locks and RCU do seem to be posing some challenges.  I suppose
> > > > > this should not be too much of a surprise, given that there are people who
> > > > > have been in the Rust community for a long time who do understand both.
> > > > > If it were easy, they would have already come up with a solution.
> > > > 
> > > > (Hey Paul, I tried posting on your blog series, but I'm having difficulty so I
> > > > thought I'd reply here given that we mention seqlocks and RCU here.)
> > > 
> > > It should be straightforward to post a comment, but some report that
> > > their employers block livejournal.com.  :-/
> > 
> > I tried to use my google account while posting and then after I posted it took
> > me through some workflow to confirm my account, perhaps the comment was lost
> > during this workflow. Let me try again.
> 
> Please let me know how it goes.

It says my comment is spam :) When I'm logged in I can actually see it as if it
was accepted, but when I open the very same page while logged out, I don't see
any comments.

Here's the URL for the entry where I've left a comment:
https://paulmck.livejournal.com/62835.html

> > > Oh, and I have updated heavily recently, including adding a bunch of
> > > Linux-kernel use cases for both sequence locking and RCU.
> > 
> > I'll check it out, thanks!
> >  
> > > > I spent a bit of time thinking about sequence locks and I think I have something
> > > > that is workable. (I remind you that we use the C implementation for the
> > > > synchronisation primitives). Suppose we had some struct like so:
> > > > 
> > > > struct X {
> > > >     a: AtomicU32,
> > > >     b: AtomicU32,
> > > > }
> > > > 
> > > > And suppose we have it protected by a sequence lock. If we wanted to return the
> > > > sum of the two fields, the code would look like this:
> > > > 
> > > >     let v = y.access(|x| {
> > > >         let a = x.a.load(Ordering::Relaxed);
> > > > 	let b = x.b.load(Ordering::Relaxed);
> > > > 	a + b
> > > >     });
> > > > 
> > > > It would be expanded to the following machine code in aarch64 (when LTO is
> > > > enabled):
> > > > 
> > > >   403fd4:       14000002        b       403fdc
> > > >   403fd8:       d503203f        yield
> > > >   403fdc:       b9400808        ldr     w8, [x0, #8]
> > > >   403fe0:       3707ffc8        tbnz    w8, #0, 403fd8
> > > >   403fe4:       d50339bf        dmb     ishld
> > > >   403fe8:       b9400c09        ldr     w9, [x0, #12]
> > > >   403fec:       b940100a        ldr     w10, [x0, #16]
> > > >   403ff0:       d50339bf        dmb     ishld
> > > >   403ff4:       b940080b        ldr     w11, [x0, #8]
> > > >   403ff8:       6b08017f        cmp     w11, w8
> > > >   403ffc:       54ffff01        b.ne    403fdc
> > > >   404000:       0b090148        add     w8, w10, w9
> > > > 
> > > > It is as efficient as the C version, though not as ergonomic. The
> > > > .load(Ordering::Relaxed) can of course be improved to something shorter like
> > > > .load_relaxed() or even new atomic types  with .load() being relaxed and
> > > > .load_ordered(Ordering) for other ordering.
> > > 
> > > Nice!
> > > 
> > > Is this a native Rust sequence-lock implementation or a wrapper around
> > > the C-language Linux-kernel implementation?
> > 
> > It's a wrapper around the C-language Linux kernel implementation. (To get the
> > generated code with LTO inlining, I compiled the code in userspace because
> > LTO with cross-language inlining isn't enabled/working in the kernel yet).
> 
> Good on the wrapper, and agreed, I also tend to prototype in userspace.
> 
> > > > I also have guard- and iterator-based methods for the read path that would look
> > > > like this (these can all co-exist if we so choose):
> > > > 
> > > >     let v = loop {
> > > >         let guard = y.read();
> > > >         let a = guard.a.load(Ordering::Relaxed);
> > > >         let b = guard.b.load(Ordering::Relaxed);
> > > >         if !guard.need_retry() {
> > > >             break a + b;
> > > >         }
> > > >     };
> > > > 
> > > > and
> > > > 
> > > >     let mut v = 0;
> > > >     for x in y {
> > > >         let a = x.a.load(Ordering::Relaxed);
> > > > 	let b = x.b.load(Ordering::Relaxed);
> > > > 	v = a + b;
> > > >     }
> > > > 
> > > > The former generates the exact same machine code as above, though the latter
> > > > generates slightly worse code (it has instruction sequences like "mov w10,
> > > > #0x1; tbnz w10, #0, 403ffc" and "mov w10, wzr; tbnz w10, #0, 403ffc", which
> > > > could be optimised but for some reason aren't).
> > > 
> > > The C++ bindings for RCU provide a similar guard approach, leveraging
> > > C++ BasicLock.  Explicit lock and unlock can be obtained using
> > > move-assignments.
> > 
> > I haven't seen these bindings, perhaps I should :) But one relevant point about
> > guards is that Rust has an affine type system that allows it to catch misuse of
> > guards at compile time. For example, if one wants to explicitly unlock, the
> > unlock method 'consumes' (move-assigns) the guard, rendering it unusable:
> > attempting to use such a guard is a compile-time error (even if it's in scope).
> > In C++, this wouldn't be caught at compile time as moved variables remain
> > accessible while in scope.
> 
> OK, but there are cases where seqlock entry/exit is buried in helper
> functions, for example in follow_dotdot_rcu() function in fs/namei.c.
> (See recent changes to https://paulmck.livejournal.com/63957.html.)
> This sort of thing is often necessary to support iterators.
> 
> So how is that use case handled?

Note that even the C code needs to carry some state between these functions, in
particular the seqp. Rust would be no different, but it would carry the guard
(which would boil down to a single 32-bit value as well); so we would have
something like:

fn follow_dotdot_rcu([args]) -> (Dentry, SeqLockReadGuard);
fn into_dot([args], read_guard: SeqLockReadGuard);

That is, follow_dotdot_rcu creates a guard and returns it, so the lock remains
held (in the case of seqcounters, only conceptually) after the function returns,
and the guard can be passed to another function. An example of calling the
functions above would be:

  let (dentry, guard) = follow_dotdot_rcu([args]);
  into_dot([args], guard);

And into_dot can use the guard as if it had created it itself, and it will be
unlocked once into_dot finishes (or later if into_dot moves it elsewhere).
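
In case it helps to see the whole shape, here is a small userspace toy that
compiles on its own; every name in it (SeqCount, ReadGuard, begin_lookup,
finish_lookup) is made up for the example and is not what the kernel crate
exposes:

    use std::sync::atomic::{AtomicU32, Ordering};

    struct SeqCount(AtomicU32);

    struct ReadGuard<'a> {
        start: u32,
        count: &'a SeqCount,
    }

    impl SeqCount {
        fn read(&self) -> ReadGuard<'_> {
            loop {
                let start = self.0.load(Ordering::Acquire);
                if start & 1 == 0 {
                    // Even sequence: no writer in progress, capture it.
                    return ReadGuard { start, count: self };
                }
                std::hint::spin_loop();
            }
        }
    }

    impl ReadGuard<'_> {
        fn need_retry(&self) -> bool {
            self.count.0.load(Ordering::Acquire) != self.start
        }
    }

    // One function opens the read-side section and hands the guard on...
    fn begin_lookup(count: &SeqCount) -> (u32, ReadGuard<'_>) {
        let guard = count.read();
        (42, guard) // 42 stands in for the dentry in the real code
    }

    // ...and another one finishes it; the guard is moved into this function,
    // much like the seqp is carried between the C helpers.
    fn finish_lookup(value: u32, guard: ReadGuard<'_>) -> Option<u32> {
        if guard.need_retry() { None } else { Some(value) }
    }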

> Plus we could easily get an RAII-like effect in C code for RCU as follows:
> 
> 	#define rcu_read_lock_scoped rcu_read_lock(); {
> 	#define rcu_read_unlock_scoped } rcu_read_unlock();
> 
> 	rcu_read_lock_scoped();
> 		struct foo *p = rcu_dereference(global_p);
> 
> 		do_some_rcu_stuff_with(p);
> 	rcu_read_unlock_scoped();
> 

I think using the __cleanup__ attribute is more promising than the above. The
indentation without explicit braces doesn't seem very ergonomic, perhaps we
could leave the braces out of the macros to improve this... But anyway, if
there's a `return` statement within the block, you end up leaving the function
without unlocking.

> But we don't.  One reason is that we often need to do things like
> this:
> 
> 	rcu_read_lock();
> 	p = rcu_dereference(global_p);
> 	if (ask_rcu_question(p)) {
> 		do_some_other_rcu_thing(p);
> 		rcu_read_unlock();
> 		do_something_that_sleeps();
> 	} else {
> 		do_yet_some_other_rcu_thing(p);
> 		rcu_read_unlock();
> 		do_something_else_that_sleeps();
> 	}
>
> Sure, you could write that like this:
> 
> 	bool q;
> 
> 	rcu_read_lock_scoped();
> 	struct foo *p = rcu_dereference(global_p);
> 		q = ask_rcu_question(p);
> 		if (q)
> 			do_some_other_rcu_thing(p);
> 		else
> 			do_yet_some_other_rcu_thing(p);
> 	rcu_read_unlock_scoped();
> 	if (q)
> 		do_something_that_sleeps();
> 	else
> 		do_something_else_that_sleeps();
> 
> And I know any number of C++ guys who would sing the benefits of the
> latter over the former, but I personally think they are drunk on RAII
> Koolaid.  As would any number of people in the Linux kernel community. ;-)
> 
> It turns out that there are about 3400 uses of rcu_read_lock() and
> about 4200 uses of rcu_read_unlock().  So this sort of thing is common.
> Yes, it is possible that use of RAII would get rid of some of them,
> but definitely not all of them.
> 
> Plus there are situations where an iterator momentarily drops out of
> an RCU read-side critical section in order to keep from impeding RCU
> grace periods.  These tend to be buried deep down the function-call stack.
> 
> Don't get me wrong, RAII has its benefits.  But also its drawbacks.

Agreed. Rust allows RAII, but it's by no means required. Your first example can
be translated to Rust as follows:

	let rcu_guard = rcu::read_lock();
	let p = global_p.rcu_dereference(&rcu_guard);
	if ask_rcu_question(p) {
		do_some_other_rcu_thing(p);
		rcu_guard.unlock();
		do_something_that_sleeps();
	} else {
		do_yet_some_other_rcu_thing(p);
		rcu_guard.unlock();
		do_something_else_that_sleeps();
	}

This is not very different from the C version but has the following extra
advantages:
  1. global_p can only be dereferenced while in an rcu critical section.
  2. p becomes inaccessible after rcu_guard.unlock() is called.
  3. If we fail to call rcu_guard.unlock() in some code path, it will be
     automatically called when rcu_guard goes out of scope. (But only if we
     forget, otherwise it won't because rcu_guard.unlock consumes the guard.)
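
To show how points 1 and 2 above fall out of the type system, here is a toy
userspace sketch; RcuGuard, RcuPointer, read_lock and dereference are
illustrative names only, not the real abstractions:

    use std::marker::PhantomData;

    // Zero-sized guard; the PhantomData of a raw pointer keeps it !Send and
    // !Sync, so it cannot wander off to another thread.
    struct RcuGuard(PhantomData<*mut ()>);

    fn read_lock() -> RcuGuard {
        // The real wrapper would call rcu_read_lock() here.
        RcuGuard(PhantomData)
    }

    impl RcuGuard {
        fn unlock(self) {
            // The real wrapper would call rcu_read_unlock(); `self` is
            // consumed, so the guard cannot be used again.
        }
    }

    struct RcuPointer<T>(*const T);

    impl<T> RcuPointer<T> {
        // Dereferencing requires a guard (point 1), and the returned
        // reference borrows the guard for 'g, so it cannot be used after
        // unlock() has moved the guard away (point 2).
        fn dereference<'g>(&self, _guard: &'g RcuGuard) -> &'g T {
            // The real wrapper would be built on rcu_dereference(); the
            // unsafe block stays inside the abstraction.
            unsafe { &*self.0 }
        }
    }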

> > > > Anyway, on to the write path. We need another primitive to ensure that only one
> > > > writer at a time attempts to acquire the sequence lock in write mode. We do this
> > > > by taking a guard for this other lock, for example, suppose we want to increment
> > > > each of the fields:
> > > > 
> > > >     let other_guard = other_lock.lock();
> > > >     let guard = y.write(&other_guard);
> > > 
> > > The first acquires the lock in an RAII (scoped) fashion and the second
> > > enters the sequence-lock write-side critical section, correct?
> > 
> > Yes, exactly.
> 
> But wouldn't it be more ergonomic and thus less error-prone to be able
> to combine those into a single statement?

Definitely. The above example is similar to the usage of a seqcounter in C --
with the added requirement that users need to provide evidence that they're in
fact using a lock (which is something that C doesn't do, so it's more error
prone).

Combining a lock and a seqcounter into one thing (seqlocks) is better when
that's what users need. I'll improve the wrappers to allow both.
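
Roughly, the combined flavour could look like the sketch below (a userspace
toy with made-up names; std::sync::Mutex stands in for the kernel spinlock
and the memory orderings are simplified):

    use std::sync::atomic::{AtomicU32, Ordering};
    use std::sync::{Mutex, MutexGuard};

    struct SeqLock<T> {
        seq: AtomicU32,
        lock: Mutex<()>, // embedded writer lock, like seqlock_t embeds a spinlock
        data: T,         // data with interior mutability (e.g. atomics)
    }

    struct WriteGuard<'a, T> {
        _lock: MutexGuard<'a, ()>, // released when the WriteGuard is dropped
        owner: &'a SeqLock<T>,
    }

    impl<T> SeqLock<T> {
        // Single entry point: there is no way to open the write-side section
        // without also holding the writer lock.
        fn write(&self) -> WriteGuard<'_, T> {
            let lock = self.lock.lock().unwrap();
            self.seq.fetch_add(1, Ordering::AcqRel); // sequence becomes odd
            WriteGuard { _lock: lock, owner: self }
        }
    }

    impl<T> Drop for WriteGuard<'_, T> {
        fn drop(&mut self) {
            self.owner.seq.fetch_add(1, Ordering::Release); // even again
            // the MutexGuard field is dropped right after this body runs,
            // releasing the writer lock
        }
    }

    impl<T> std::ops::Deref for WriteGuard<'_, T> {
        type Target = T;
        fn deref(&self) -> &T {
            &self.owner.data
        }
    }

The read side would stay the same closure- or guard-based code as before; the
point here is only that lock acquisition and the sequence update cannot be
separated by the caller.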

> > Additionally, the ownership rules guarantee that the outer lock cannot be
> > unlocked while in the sequence-lock write-side critical section (because the
> > inner guard borrows the outer one, so it can be only be consumed after this
> > borrow goes away). An attempt to do so would result in a compile-time error.
> 
> OK, let's talk about the Rusty Scale of ease of use...
> 
> This was introduced by Rusty Russell in his 2003 Ottawa Linux Symposium
> keynote: https://ozlabs.org/~rusty/ols-2003-keynote/ols-keynote-2003.html.
> The relevant portion is in slides 39-57.
> 
> An API that doesn't let you get it wrong (combined lock/count acquisition)
> is better than one where the compiler complains if you get it wrong.  ;-)

You're right.

But see the distinction I made above: seqcounter vs seqlock. In cases when a
seqlock isn't suitable but a seqcounter is, C will let you misuse the write-side
critical section, Rust won't :)

> > > >     guard.a.store(guard.a.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > > >     guard.b.store(guard.b.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > > > 
> > > > The part the relates to the sequence lock is compiled to the following:
> > > > 
> > > >   404058:       f9400009        ldr     x9, [x0]
> > > >   40405c:       eb08013f        cmp     x9, x8
> > > >   404060:       54000281        b.ne    4040b0
> > > > 
> > > >   404064:       b9400808        ldr     w8, [x0, #8]
> > > >   404068:       11000508        add     w8, w8, #0x1
> > > >   40406c:       b9000808        str     w8, [x0, #8]
> > > >   404070:       d5033abf        dmb     ishst
> > > >   404074:       b9400c08        ldr     w8, [x0, #12]
> > > >   404078:       11000508        add     w8, w8, #0x1
> > > >   40407c:       b9000c08        str     w8, [x0, #12]
> > > >   404080:       b9401008        ldr     w8, [x0, #16]
> > > >   404084:       11000508        add     w8, w8, #0x1
> > > >   404088:       b9001008        str     w8, [x0, #16]
> > > >   40408c:       d5033abf        dmb     ishst
> > > >   404090:       b9400808        ldr     w8, [x0, #8]
> > > >   404094:       11000508        add     w8, w8, #0x1
> > > >   404098:       b9000808        str     w8, [x0, #8]
> > > > 
> > > > If we ignore the first three instructions momentarily, the rest is as efficient
> > > > as C. The reason we need the first three instructions is to ensure that the guard
> > > > that was passed into the `write` function is a guard to the correct lock. The
> > > > lock type already eliminates the vast majority of issues, but a developer could
> > > > accidentally lock the wrong lock and use it in the sequence lock, which would be
> > > > problematic. So we need this check in Rust that we don't need in C (although the
> > > > same mistake could happen in C).
> > > > 
> > > > We can provide an 'unsafe' version that doesn't perform this check, then the
> > > > onus is on the callers to convince themselves that they have acquired the
> > > > correct lock (and they'd be required to use an unsafe block). Then the
> > > > performance would be the same as the C version.
> > > 
> > > The Linux-kernel C-language sequence counter (as opposed to the various
> > > flavors of sequence lock) assumes that the caller has provided any needed
> > > mutual exclusion.
> > 
> > Yes, this actually uses sequence counters.
> > 
> > I suppose if we embed the locks ourselves like sequence locks do, we can wrap
> > such 'unsafe' blocks as part of the implementation and only expose safe
> > interfaces as efficient as C.
> > 
> > Do you happen to know the usage ratio between sequence counters vs sequence
> > locks (all flavours combined)? If the latter are used in the vast majority of
> > cases, I think it makes sense to do something similar in Rust.
> 
> Let's count the initializations:
> 
> o	Sequence counters:
> 
> 	 8	SEQCNT_ZERO
> 	15	seqcount_init
> 
> 	23	Total
> 
> o	Sequence locks:
> 
> 	3	SEQCNT_RAW_SPINLOCK_ZERO
> 	3	SEQCNT_SPINLOCK_ZERO
> 	0	SEQCNT_RWLOCK_ZERO
> 	0	SEQCNT_MUTEX_ZERO
> 	0	SEQCNT_WW_MUTEX_ZERO
> 	1	seqcount_raw_spinlock_init
> 	13	seqcount_spinlock_init
> 	1	seqcount_rwlock_init
> 	1	seqcount_mutex_init
> 	1	seqcount_ww_mutex_init
> 
> 	23	Total
> 
> Exactly even!  When does -that- ever happen?  ;-)

Oh, man! I was hoping seqlocks would be so dominant that we could ignore
seqcounters in Rust :)

> > > > Now that I've presented what my proposal looks like from the PoV of a user,
> > > > here's its rationale: given that we only want one copy of the data and that
> > > > mutable references are always unique in the safe fragment of Rust, we can't (and
> > > > don't) return a mutable reference to what's protected by the sequence lock, we
> > > > always only allow shared access, even when the sequence lock is acquired in
> > > > write mode.
> > > > 
> > > > Then how does one change the fields? Interior mutability. In the examples above,
> > > > the fields are all atomic, so they can be changed with the `store` method. Any
> > > > type that provides interior mutability is suitable here.
> > > 
> > > OK, so following the approach of "marked accesses".
> > 
> > Yes.
> >  
> > > > If we need to use types with interior mutability, what's the point of the
> > > > sequence lock? The point is to allow a consistent view of the fields. In our
> > > > example, even though `a` and `b` are atomic, the sequence lock guarantees that
> > > > readers will get a consistent view of the values even though writers modify one
> > > > at a time.
> > > 
> > > Yes.
> > > 
> > > I suppose that the KCSAN ASSERT_EXCLUSIVE_WRITER() could be used on
> > > the sequence-lock update side to check for unwanted concurrency.
> > 
> > Yes, definitely!
> 
> Could anything be done to check for values leaking out of failed seqlock
> read-side critical sections?

I can't think of a way to prevent them outright, but if one uses the
closure-based version, values cannot escape through the captured state of the
closure because it is declared immutable (Fn vs FnMut), though values of failed
iterations could potentially escape through, say, global variables.
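
To make the Fn-vs-FnMut point concrete, this is the rough shape of the
closure-based read path (userspace toy; `access` here takes the data as a
separate argument only to keep the example short, in the real wrapper the
protected data lives with the count):

    use std::sync::atomic::{AtomicU32, Ordering};

    struct SeqCount(AtomicU32);

    impl SeqCount {
        // Retry loop around the read-side section. Because `f` is `Fn` rather
        // than `FnMut`, it cannot write to its captured environment, so a torn
        // value read in a failed iteration cannot be stashed in a capture;
        // only the result of the final, consistent iteration is returned.
        fn access<D, R>(&self, data: &D, f: impl Fn(&D) -> R) -> R {
            loop {
                // Wait for an even sequence (no writer in progress).
                let start = loop {
                    let s = self.0.load(Ordering::Acquire);
                    if s & 1 == 0 {
                        break s;
                    }
                    std::hint::spin_loop();
                };
                let result = f(data);
                if self.0.load(Ordering::Acquire) == start {
                    return result;
                }
                // Failed iteration: `result` is dropped here, never observed
                // by the caller.
            }
        }
    }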

I'll think some more about this to see if I can come up with something. If you
have other ideas, please let us know!

> > > > Lastly, the fact we use a generic `Guard` as proof that a lock is held (for the
> > > > write path) means that we don't need to manually implement this for each
> > > > different lock we care about; any that implements the `Lock` trait can be used.
> > > > This is unlike the C code that uses fragile macros to generate code for
> > > > different types of locks (though the scenario is slightly different in that the
> > > > C code embeds a lock, which is also something we could do in Rust) -- the Rust
> > > > version uses generics, so it is type-checked by the compiler.
> > > 
> > > OK, so this is a standalone implementation of sequence locks in Rust,
> > > rather than something that could interoperate with the C-language
> > > sequence locks?
> > 
> > It's an implementation of sequence locks using C-language sequence counters.
> > Instead of embedding a lock for writer mutual exclusion, we require evidence
> > that some lock is in use. The idea was to be "flexible" and share locks, but if
> > most usage just embeds a lock, we may as well do something similar in Rust.
> 
> Whew!
> 
> I don't know if such a case exists, but there is the possibility of
> non-lock mutual exclusion.  For example, the last guy to remove a
> reference to something is allowed to do a sequence-counter update.
> 
> How would such a case be handled?

Well, it depends on how this mutual exclusion can be expressed to Rust. If,
let's say, the protected data structure is being freed, then it is
guaranteed that no-one else has references to it. In that case, one could just
implement the `Drop` trait and get a mutable reference (&mut) to the object
directly without having to go through the lock.
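
As a toy illustration of the `Drop` case (made-up struct, not real kernel
code):

    use std::sync::atomic::AtomicU32;

    struct Stats {
        seq: AtomicU32, // the seqcount that normally protects the fields below
        a: AtomicU32,
        b: AtomicU32,
    }

    impl Drop for Stats {
        // `drop` takes `&mut self`, so the compiler has already proved that
        // no other reference to the object exists; the fields can be updated
        // with plain accesses, no write-side critical section needed.
        fn drop(&mut self) {
            *self.a.get_mut() = 0;
            *self.b.get_mut() = 0;
            // any final teardown would go here
        }
    }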

If Rust can't be convinced of the mutual exclusion, then it would require
an unsafe variant, so its declaration would be something like:

/// Enter the write-side critical section.
///
/// # Safety
///
/// Callers must ensure that, at all times, at most one thread/CPU calls this
/// function and owns the guard.
unsafe fn write_unsafe(&self) -> Guard;


And callers would write something like:

  // SAFETY: The mutual exclusion requirement is satisfied by [reason here].
  let guard = unsafe { seqcounter.write_unsafe() };

Note that the `unsafe` annotation in the function declaration above makes it
such that all callers must wrap the calls in `unsafe` blocks. Failure to do so
results in a compiler error saying that they should check the documentation on
the safety requirements for this function.

> 
> > > Is "fragile macros" just the usual Rust denigration of the C preprocessor,
> > > or is there some specific vulnerability that you see in those macros?
> > 
> > I don't see any specific vulnerability. By fragile I meant that it's more error
> > prone to write "generic" code with macros than with compiler-supported generics.
> 
> Fair enough, but rest assured that those who love the C preprocessor
> have their own "interesting" descriptions of Rust macros.  ;-)

Oh, you won't see me defending macros from either language :)

> Plus I am old enough to remember people extolling the simplicity of
> C-preprocessor macros compared to, among other things, LISP macros.
> And they were correct to do so, at least for simple use cases.
> 
> I suggest just calling them CPP macros or similar when talking with
> Linux-kernel community members.  Me, I have seen enough software artifacts
> come and go that I don't much care what you call them, but others just
> might be a bit more touchy about such things.

Sure, but to be clear, I haven't talked about Rust macros, and I don't encourage
their use. I was talking about generics, which is a Rust language feature that
is part of the type system (integral to the lifetimes story), so they are
checked by the compiler, unlike macros (C or Rust).
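
For example, the rough shape is a trait plus one generic definition; the
`Lock` trait below is a simplified stand-in for what we have, not the exact
trait:

    // A simplified stand-in for the lock abstraction.
    trait Lock {
        fn acquire(&self);
        fn release(&self);
    }

    // Proof that some lock implementing `Lock` is currently held.
    struct Guard<'a, L: Lock> {
        lock: &'a L,
    }

    fn lock<L: Lock>(l: &L) -> Guard<'_, L> {
        l.acquire();
        Guard { lock: l }
    }

    impl<L: Lock> Drop for Guard<'_, L> {
        fn drop(&mut self) {
            self.lock.release();
        }
    }

    struct SeqCount; // the count itself is elided

    struct WriteSection<'a> {
        _count: &'a SeqCount,
    }

    impl SeqCount {
        // One definition that accepts evidence from *any* lock type, instead
        // of one CPP-generated variant per lock type; the compiler checks it
        // once, for all `L`. The returned section borrows the evidence, so
        // the lock guard cannot be dropped while the section is open.
        fn write<'a, L: Lock>(&'a self, _evidence: &'a Guard<'_, L>) -> WriteSection<'a> {
            // write_seqcount_begin()/end() handling is elided in this sketch
            WriteSection { _count: self }
        }
    }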

> > > Of course, those macros could be used to automatically generate the
> > > wrappers.  Extract the macro invocations from the C source, and transform
> > > them to wrappers, perhaps using Rust macros somewhere along the way.
> > 
> > Sure, we could do something like that.
> > 
> > But given that we already wrap the C locks in Rust abstractions that implement a
> > common trait (interface), we can use Rust generics to leverage all locks without
> > the need for macros.
> 
> If you have a particular sequence lock that is shared between Rust and C
> code, it would be good to be able to easily find the Rust uses given
> the C uses and vice versa!
> 
> I am not claiming that generics won't work, but instead that we still need
> to be able to debug the Linux kernel, and that requires us to be able to
> quickly and easily find all the places where a given object is used.

Fair point. We need to spend more time on tooling that links the C code with
the Rust wrappers and with the uses of those wrappers.

> > > > RCU pointers can be implemented with a similar technique in that read access is
> > > > protected by a 'global' RCU reader lock (and evidence of it being locked is
> > > > required to get read access), and writers require another lock to be held. The
> > > > only piece that I haven't thought through yet is how to ensure that pointers
> > > > that were exposed with RCU 'protection' cannot be freed before the grace period
> > > > has elapsed. But this is a discussion for another time.
> > > 
> > > Please note that it is quite important for Rust to use the RCU provided
> > > by the C-language part of the kernel.  Probably also for sequence locks,
> > > but splitting RCU reduces the effectiveness of its batching optimizations.
> > 
> > Agreed. We actually use the C implementation for all synchronisation primitives
> > (including ref-counting, which isn't technically a synchronisation primitive but
> > has subtle usage of barriers). What I mean by "implemented in Rust" is just the
> > abstractions leveraging Rust concepts to catch misuses earlier where possible.
> 
> Might I suggest that you instead say "wrappered for Rust"?
> 
> I am not the only one to whom "implemented in Rust" means just what
> it says, that Rust has its own variant written completely in Rust.
> Continuing to use "implemented in Rust" will continue to mislead
> Linux-kernel developers into believing that you created a from-scratch
> Rust variant of the code at hand, and believe me, that won't go well.

That's good feedback, thank you. I'll police my usage of implement vs wrap.

> > > For at least some of the Linux kernel's RCU use cases, something like
> > > interior mutability may be required.  Whether those use cases show up
> > > in any Rust-language drivers I cannot say.  Other use cases would work
> > > well with RCU readers having read ownership of the non-pointer fields
> > > in each RCU-protected object.
> > > 
> > > Again, I did add rough descriptions of a few Linux-kernel RCU use cases.
> > > 
> > > > I'll send out the patches for what I describe above in the next couple of days.
> > > > 
> > > > Does any of the above help answer the questions you have about seqlocks in Rust?
> > > 
> > > Possibly at least some of them.  I suspect that there is still much to
> > > be learned on all sides, including learning about additional questions
> > > that need to be asked.
> > 
> > Fair point. We don't know quite yet if we've asked all the questions.
> 
> My main immediate additional question is "what are the bugs and what
> can be done to better locate them".  That question of course applies
> regardless of the language and tools used for a given piece of code.
> 
> > > Either way, thank you for your work on this!
> > 
> > Thanks for engaging with us, this is much appreciated.
> > 
> > Cheers,
> > -Wedson
> > 
> > > 
> > > 							Thanx, Paul
> > > 
> > > > Thanks,
> > > > -Wedson
> > > > 
> > > > > So the trick is to stage things so as to allow people time to work on
> > > > > these sorts of issues.
> > > > > 
> > > > > > In any case, Rust does not necessarily need to help there. What is
> > > > > > important is whether Rust helps writing the majority of the kernel
> > > > > > code. If we need to call into C or use inline assembly for certain
> > > > > > bits -- so be it.
> > > > > > 
> > > > > > > But to be fair, much again depends on exactly where Rust is to be applied
> > > > > > > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > > > > > > needs to be applied, then there is no need to solve the corresponding
> > > > > > > issues.
> > > > > > 
> > > > > > Exactly.
> > > > > 
> > > > > Thank you for bearing with me.
> > > > > 
> > > > > I will respond to your other email later, but the focus on memory
> > > > > safety in particular instead of undefined behavior in general does help
> > > > > me quite a bit.
> > > > > 
> > > > > My next step is to create a "TL;DR: Memory-Model Recommendations" post
> > > > > that is more specific, with both short-term ("do what is easy") and
> > > > > long-term suggestions.
> > > > > 
> > > > > 							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-15 15:06                                             ` Wedson Almeida Filho
@ 2021-10-15 23:29                                               ` Paul E. McKenney
  0 siblings, 0 replies; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-15 23:29 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: Miguel Ojeda, Gary Guo, Marco Elver, Boqun Feng, kasan-dev,
	rust-for-linux

On Fri, Oct 15, 2021 at 04:06:40PM +0100, Wedson Almeida Filho wrote:
> On Thu, Oct 14, 2021 at 12:43:41PM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 14, 2021 at 09:03:42AM +0100, Wedson Almeida Filho wrote:
> > > On Wed, Oct 13, 2021 at 08:35:57PM -0700, Paul E. McKenney wrote:
> > > > On Wed, Oct 13, 2021 at 06:50:24PM +0100, Wedson Almeida Filho wrote:
> > > > > On Wed, Oct 13, 2021 at 09:07:07AM -0700, Paul E. McKenney wrote:
> > > > > > On Wed, Oct 13, 2021 at 01:48:13PM +0200, Miguel Ojeda wrote:
> > > > > > > On Mon, Oct 11, 2021 at 9:01 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > > > >
> > > > > > > > The main issue I was calling out was not justifying Rust, but rather
> > > > > > > > making sure that the exact same build could be reproduced a decade later.
> > > > > > > 
> > > > > > > Yes, but that is quite trivial compared to other issues I was
> > > > > > > mentioning like adapting and requalifying a testing tool. For
> > > > > > > instance, if you already had a team maintaining the configuration
> > > > > > > management (i.e. the versions etc.), adding one more tool is not a big
> > > > > > > deal.
> > > > > > 
> > > > > > OK, close enough to fair enough.  ;-)
> > > > > > 
> > > > > > > > There are things that concurrent software would like to do that are
> > > > > > > > made quite inconvenient due to large numbers of existing optimizations
> > > > > > > > in the various compiler backends.  Yes, we have workarounds.  But I
> > > > > > > > do not see how Rust is going to help with these inconveniences.
> > > > > > > 
> > > > > > > Sure, but C UB is unrelated to Rust UB. Thus, if you think it would be
> > > > > > > valuable to be able to express particular algorithms in unsafe Rust,
> > > > > > > then I would contact the Rust teams to let them know your needs --
> > > > > > > perhaps we end up with something way better than C for that use case!
> > > > > > 
> > > > > > Sequence locks and RCU do seem to be posing some challenges.  I suppose
> > > > > > this should not be too much of a surprise, given that there are people who
> > > > > > have been in the Rust community for a long time who do understand both.
> > > > > > If it were easy, they would have already come up with a solution.
> > > > > 
> > > > > (Hey Paul, I tried posting on your blog series, but I'm having difficulty so I
> > > > > thought I'd reply here given that we mention seqlocks and RCU here.)
> > > > 
> > > > It should be straightforward to post a comment, but some report that
> > > > their employers block livejournal.com.  :-/
> > > 
> > > I tried to use my google account while posting and then after I posted it took
> > > me through some workflow to confirm my account, perhaps the comment was lost
> > > during this workflow. Let me try again.
> > 
> > Please let me know how it goes.
> 
> It says my comment is spam :) When I'm logged in I can actually see it as if it
> was accepted, but when I open the very same page while logged out, I don't see
> any comments.
> 
> Here's the URL for the entry where I've left a comment:
> https://paulmck.livejournal.com/62835.html

Apologies for the inconvenience!  I have unspammed this comment.  On the
other hand, livejournal's hyperactive spam marking does seem to keep
the spam down.

> > > > Oh, and I have updated heavily recently, including adding a bunch of
> > > > Linux-kernel use cases for both sequence locking and RCU.
> > > 
> > > I'll check it out, thanks!
> > >  
> > > > > I spent a bit of time thinking about sequence locks and I think I have something
> > > > > that is workable. (I remind you that we use the C implementation for the
> > > > > synchronisation primitives). Suppose we had some struct like so:
> > > > > 
> > > > > struct X {
> > > > >     a: AtomicU32,
> > > > >     b: AtomicU32,
> > > > > }
> > > > > 
> > > > > And suppose we have it protected by a sequence lock. If we wanted to return the
> > > > > sum of the two fields, the code would look like this:
> > > > > 
> > > > >     let v = y.access(|x| {
> > > > >         let a = x.a.load(Ordering::Relaxed);
> > > > > 	let b = x.b.load(Ordering::Relaxed);
> > > > > 	a + b
> > > > >     });
> > > > > 
> > > > > It would be expanded to the following machine code in aarch64 (when LTO is
> > > > > enabled):
> > > > > 
> > > > >   403fd4:       14000002        b       403fdc
> > > > >   403fd8:       d503203f        yield
> > > > >   403fdc:       b9400808        ldr     w8, [x0, #8]
> > > > >   403fe0:       3707ffc8        tbnz    w8, #0, 403fd8
> > > > >   403fe4:       d50339bf        dmb     ishld
> > > > >   403fe8:       b9400c09        ldr     w9, [x0, #12]
> > > > >   403fec:       b940100a        ldr     w10, [x0, #16]
> > > > >   403ff0:       d50339bf        dmb     ishld
> > > > >   403ff4:       b940080b        ldr     w11, [x0, #8]
> > > > >   403ff8:       6b08017f        cmp     w11, w8
> > > > >   403ffc:       54ffff01        b.ne    403fdc
> > > > >   404000:       0b090148        add     w8, w10, w9
> > > > > 
> > > > > It is as efficient as the C version, though not as ergonomic. The
> > > > > .load(Ordering::Relaxed) can of course be improved to something shorter like
> > > > > .load_relaxed() or even new atomic types  with .load() being relaxed and
> > > > > .load_ordered(Ordering) for other ordering.
> > > > 
> > > > Nice!
> > > > 
> > > > Is this a native Rust sequence-lock implementation or a wrapper around
> > > > the C-language Linux-kernel implementation?
> > > 
> > > It's a wrapper around the C-language Linux kernel implementation. (To get the
> > > generated code with LTO inlining, I compiled the code in userspace because
> > > LTO with cross-language inlining isn't enabled/working in the kernel yet).
> > 
> > Good on the wrapper, and agreed, I also tend to prototype in userspace.
> > 
> > > > > I also have guard- and iterator-based methods for the read path that would look
> > > > > like this (these can all co-exist if we so choose):
> > > > > 
> > > > >     let v = loop {
> > > > >         let guard = y.read();
> > > > >         let a = guard.a.load(Ordering::Relaxed);
> > > > >         let b = guard.b.load(Ordering::Relaxed);
> > > > >         if !guard.need_retry() {
> > > > >             break a + b;
> > > > >         }
> > > > >     };
> > > > > 
> > > > > and
> > > > > 
> > > > >     let mut v = 0;
> > > > >     for x in y {
> > > > >         let a = x.a.load(Ordering::Relaxed);
> > > > > 	let b = x.b.load(Ordering::Relaxed);
> > > > > 	v = a + b;
> > > > >     }
> > > > > 
> > > > > The former generates the exact same machine code as above, though the latter
> > > > > generates slightly worse code (it has instruction sequences like "mov w10,
> > > > > #0x1; tbnz w10, #0, 403ffc" and "mov w10, wzr; tbnz w10, #0, 403ffc", which
> > > > > could be optimised but for some reason aren't).
> > > > 
> > > > The C++ bindings for RCU provide a similar guard approach, leveraging
> > > > C++ BasicLock.  Explicit lock and unlock can be obtained using
> > > > move-assignments.
> > > 
> > > I haven't seen these bindings, perhaps I should :) But one relevant point about
> > > guards is that Rust has an affine type system that allows it to catch misuse of
> > > guards at compile time. For example, if one wants to explicitly unlock, the
> > > unlock method 'consumes' (move-assigns) the guard, rendering it unusable:
> > > attempting to use such a guard is a compile-time error (even if it's in scope).
> > > In C++, this wouldn't be caught at compile time as moved variables remain
> > > accessible while in scope.
> > 
> > OK, but there are cases where seqlock entry/exit is buried in helper
> > functions, for example in follow_dotdot_rcu() function in fs/namei.c.
> > (See recent changes to https://paulmck.livejournal.com/63957.html.)
> > This sort of thing is often necessary to support iterators.
> > 
> > So how is that use case handled?
> 
> Note that even the C code needs to carry some state between these functions, in
> particular the seqp. Rust would be no different, but it would carry the guard
> (which would boil down to a single 32-bit value as well); so we would have
> something like:
> 
> fn follow_dotdot_rcu([args]) -> (Dentry, SeqLockReadGuard);
> fn into_dot([args], read_guard: SeqLockReadGuard);
> 
> That is, follow_dotdot_rcu creates a guard and returns it, so the lock remains
> held (in the case of seqcounters, only conceptually) after the function returns,
> and the guard can be passed to another function. An example of calling the
> functions above would be:
> 
>   let (dentry, guard) = follow_dotdot_rcu([args]);
>   into_dot([args], guard);
> 
> And into_dot can use the guard as if it had created it itself, and it will be
> unlocked once into_dot finishes (or later if into_dot moves it elsewhere).

OK, similar to the way guards are used in C++, then.  Whew!  ;-)

> > Plus we could easily get an RAII-like effect in C code for RCU as follows:
> > 
> > 	#define rcu_read_lock_scoped rcu_read_lock(); {
> > 	#define rcu_read_unlock_scoped } rcu_read_unlock();
> > 
> > 	rcu_read_lock_scoped();
> > 		struct foo *p = rcu_dereference(global_p);
> > 
> > 		do_some_rcu_stuff_with(p);
> > 	rcu_read_unlock_scoped();
> > 
> 
> I think using the __cleanup__ attribute is more promising than the above. The
> indentation without explicit braces doesn't seem very ergonomic, perhaps we
> could leave the braces out of the macros to improve this... But anyway, if
> there's a `return` statement within the block, you end up leaving the function
> without unlocking.
> 
> > But we don't.  One reason is that we often need to do things like
> > this:
> > 
> > 	rcu_read_lock();
> > 	p = rcu_dereference(global_p);
> > 	if (ask_rcu_question(p)) {
> > 		do_some_other_rcu_thing(p);
> > 		rcu_read_unlock();
> > 		do_something_that_sleeps();
> > 	} else {
> > 		do_yet_some_other_rcu_thing(p);
> > 		rcu_read_unlock();
> > 		do_something_else_that_sleeps();
> > 	}
> >
> > Sure, you could write that like this:
> > 
> > 	bool q;
> > 
> > 	rcu_read_lock_scoped();
> > 	struct foo *p = rcu_dereference(global_p);
> > 		q = ask_rcu_question(p);
> > 		if (q)
> > 			do_some_other_rcu_thing(p);
> > 		else
> > 			do_yet_some_other_rcu_thing(p);
> > 	rcu_read_unlock_scoped();
> > 	if (q)
> > 		do_something_that_sleeps();
> > 	else
> > 		do_something_else_that_sleeps();
> > 
> > And I know any number of C++ guys who would sing the benefits of the
> > latter over the former, but I personally think they are drunk on RAII
> > Koolaid.  As would any number of people in the Linux kernel community. ;-)
> > 
> > It turns out that there are about 3400 uses of rcu_read_lock() and
> > about 4200 uses of rcu_read_unlock().  So this sort of thing is common.
> > Yes, it is possible that use of RAII would get rid of some of them,
> > but definitely not all of them.
> > 
> > Plus there are situations where an iterator momentarily drops out of
> > an RCU read-side critical section in order to keep from impeding RCU
> > grace periods.  These tend to be buried deep down the function-call stack.
> > 
> > Don't get me wrong, RAII has its benefits.  But also its drawbacks.
> 
> Agreed. Rust allows RAII, but it's by no means required. Your first example can
> be translated to Rust as follows:
> 
> 	let rcu_guard = rcu::read_lock();
> 	let p = global_p.rcu_dereference(&rcu_guard);
> 	if ask_rcu_question(p) {
> 		do_some_other_rcu_thing(p);
> 		rcu_guard.unlock();
> 		do_something_that_sleeps();
> 	} else {
> 		do_yet_some_other_rcu_thing(p);
> 		rcu_guard.unlock();
> 		do_something_else_that_sleeps();
> 	}
> 
> This is not very different from the C version but has the following extra
> advantages:
>   1. global_p can only be dereferenced while in an rcu critical section.
>   2. p becomes inaccessible after rcu_guard.unlock() is called.
>   3. If we fail to call rcu_guard.unlock() in some code path, it will be
>      automatically called when rcu_guard goes out of scope. (But only if we
>      forget, otherwise it won't because rcu_guard.unlock consumes the guard.)

OK, so this presumably allows overlapping an RCU reader with a lock:

	rcu_read_lock();
	p = rcu_dereference(global_p);
	if (ask_rcu_question(p)) {
		do_some_other_rcu_thing(p);
		spin_lock(&p->lock);
		rcu_read_unlock();
		do_something_that_sleeps();
		spin_unlock(&p->lock);
	} else {
		do_yet_some_other_rcu_thing(p);
		rcu_read_unlock();
		do_something_else_that_sleeps();
	}

Or do we need to "launder" p somehow to make this work?  There is a macro
that documents similar transitions in the Linux kernel:

	rcu_read_lock();
	p = rcu_dereference(global_p);
	if (ask_rcu_question(p)) {
		do_some_other_rcu_thing(p);
		spin_lock(&p->lock);
		q = rcu_pointer_handoff(p);
		rcu_read_unlock();
		do_something_that_sleeps();
		spin_unlock(&q->lock);
	} else {
		do_yet_some_other_rcu_thing(p);
		rcu_read_unlock();
		do_something_else_that_sleeps();
	}

Another odd twist is where objects are inserted into a given data
structure but never removed.  In that case, you need rcu_dereference(),
but you do not need rcu_read_lock() and rcu_read_unlock().  One approach
within the Linux kernel is rcu_dereference_protected(global_p, 1) or
equivalently rcu_dereference_raw(p).  Thoughts?

> > > > > Anyway, on to the write path. We need another primitive to ensure that only one
> > > > > writer at a time attempts to acquire the sequence lock in write mode. We do this
> > > > > by taking a guard for this other lock, for example, suppose we want to increment
> > > > > each of the fields:
> > > > > 
> > > > >     let other_guard = other_lock.lock();
> > > > >     let guard = y.write(&other_guard);
> > > > 
> > > > The first acquires the lock in an RAII (scoped) fashion and the second
> > > > enters the sequence-lock write-side critical section, correct?
> > > 
> > > Yes, exactly.
> > 
> > But wouldn't it be more ergonomic and thus less error-prone to be able
> > to combine those into a single statement?
> 
> Definitely. The above example is similar to the usage of a seqcounter in C --
> with the added requirement that users need to provide evidence that they're in
> fact using a lock (which is something that C doesn't do, so it's more error
> prone).
> 
> Combining a lock and a seqcounter into one thing (seqlocks) is better when
> that's what users need. I'll improve the wrappers to allow both.

Very good!

> > > Additionally, the ownership rules guarantee that the outer lock cannot be
> > > unlocked while in the sequence-lock write-side critical section (because the
> > > inner guard borrows the outer one, so it can be only be consumed after this
> > > borrow goes away). An attempt to do so would result in a compile-time error.
> > 
> > OK, let's talk about the Rusty Scale of ease of use...
> > 
> > This was introduced by Rusty Russell in his 2003 Ottawa Linux Symposium
> > keynote: https://ozlabs.org/~rusty/ols-2003-keynote/ols-keynote-2003.html.
> > The relevant portion is in slides 39-57.
> > 
> > An API that doesn't let you get it wrong (combined lock/count acquisition)
> > is better than one where the compiler complains if you get it wrong.  ;-)
> 
> You're right.
> 
> But see the distinction I made above: seqcounter vs seqlock. In cases when a
> seqlock isn't suitable but a seqcounter is, C will let you misuse the write-side
> critical section, Rust won't :)

OK, but please understand that "won't" is a very strong word.  ;-)

> > > > >     guard.a.store(guard.a.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > > > >     guard.b.store(guard.b.load(Ordering::Relaxed) + 1, Ordering::Relaxed);
> > > > > 
> > > > > The part the relates to the sequence lock is compiled to the following:
> > > > > 
> > > > >   404058:       f9400009        ldr     x9, [x0]
> > > > >   40405c:       eb08013f        cmp     x9, x8
> > > > >   404060:       54000281        b.ne    4040b0
> > > > > 
> > > > >   404064:       b9400808        ldr     w8, [x0, #8]
> > > > >   404068:       11000508        add     w8, w8, #0x1
> > > > >   40406c:       b9000808        str     w8, [x0, #8]
> > > > >   404070:       d5033abf        dmb     ishst
> > > > >   404074:       b9400c08        ldr     w8, [x0, #12]
> > > > >   404078:       11000508        add     w8, w8, #0x1
> > > > >   40407c:       b9000c08        str     w8, [x0, #12]
> > > > >   404080:       b9401008        ldr     w8, [x0, #16]
> > > > >   404084:       11000508        add     w8, w8, #0x1
> > > > >   404088:       b9001008        str     w8, [x0, #16]
> > > > >   40408c:       d5033abf        dmb     ishst
> > > > >   404090:       b9400808        ldr     w8, [x0, #8]
> > > > >   404094:       11000508        add     w8, w8, #0x1
> > > > >   404098:       b9000808        str     w8, [x0, #8]
> > > > > 
> > > > > If we ignore the first three instructions momentarily, the rest is as efficient
> > > > > as C. The reason we need the first three instructions is to ensure that the guard
> > > > > that was passed into the `write` function is a guard to the correct lock. The
> > > > > lock type already eliminates the vast majority of issues, but a developer could
> > > > > accidentally lock the wrong lock and use it in the sequence lock, which would be
> > > > > problematic. So we need this check in Rust that we don't need in C (although the
> > > > > same mistake could happen in C).
> > > > > 
> > > > > We can provide an 'unsafe' version that doesn't perform this check, then the
> > > > > onus is on the callers to convince themselves that they have acquired the
> > > > > correct lock (and they'd be required to use an unsafe block). Then the
> > > > > performance would be the same as the C version.
> > > > 
> > > > The Linux-kernel C-language sequence counter (as opposed to the various
> > > > flavors of sequence lock) assumes that the caller has provided any needed
> > > > mutual exclusion.
> > > 
> > > Yes, this actually uses sequence counters.
> > > 
> > > I suppose if we embed the locks ourselves like sequence locks do, we can wrap
> > > such 'unsafe' blocks as part of the implementation and only expose safe
> > > interfaces as efficient as C.
> > > 
> > > Do you happen to know the usage ratio between sequence counters vs sequence
> > > locks (all flavours combined)? If the latter are used in the vast majority of
> > > cases, I think it makes sense to do something similar in Rust.
> > 
> > Let's count the initializations:
> > 
> > o	Sequence counters:
> > 
> > 	 8	SEQCNT_ZERO
> > 	15	seqcount_init
> > 
> > 	23	Total
> > 
> > o	Sequence locks:
> > 
> > 	3	SEQCNT_RAW_SPINLOCK_ZERO
> > 	3	SEQCNT_SPINLOCK_ZERO
> > 	0	SEQCNT_RWLOCK_ZERO
> > 	0	SEQCNT_MUTEX_ZERO
> > 	0	SEQCNT_WW_MUTEX_ZERO
> > 	1	seqcount_raw_spinlock_init
> > 	13	seqcount_spinlock_init
> > 	1	seqcount_rwlock_init
> > 	1	seqcount_mutex_init
> > 	1	seqcount_ww_mutex_init
> > 
> > 	23	Total
> > 
> > Exactly even!  When does -that- ever happen?  ;-)
> 
> Oh, man! I was hoping seqlocks would be so dominant that we could ignore
> seqcounters in Rust :)

Actually, I bet that you can ignore both seqlocks and seqcounters for
quite some time, depending on what device drivers you are targeting.
Most of the uses are in the core kernel rather than in device drivers.

> > > > > Now that I've presented what my proposal looks like from the PoV of a user,
> > > > > here's its rationale: given that we only want one copy of the data and that
> > > > > mutable references are always unique in the safe fragment of Rust, we can't (and
> > > > > don't) return a mutable reference to what's protected by the sequence lock, we
> > > > > always only allow shared access, even when the sequence lock is acquired in
> > > > > write mode.
> > > > > 
> > > > > Then how does one change the fields? Interior mutability. In the examples above,
> > > > > the fields are all atomic, so they can be changed with the `store` method. Any
> > > > > type that provides interior mutability is suitable here.
> > > > 
> > > > OK, so following the approach of "marked accesses".
> > > 
> > > Yes.
> > >  
> > > > > If we need to use types with interior mutability, what's the point of the
> > > > > sequence lock? The point is to allow a consistent view of the fields. In our
> > > > > example, even though `a` and `b` are atomic, the sequence lock guarantees that
> > > > > readers will get a consistent view of the values even though writers modify one
> > > > > at a time.
> > > > 
> > > > Yes.
> > > > 
> > > > I suppose that the KCSAN ASSERT_EXCLUSIVE_WRITER() could be used on
> > > > the sequence-lock update side to check for unwanted concurrency.
> > > 
> > > Yes, definitely!
> > 
> > Could anything be done to check for values leaking out of failed seqlock
> > read-side critical sections?
> 
> I can't think of a way to prevent them outright, but if one uses the
> closure-based version, values cannot escape through the captured state of the
> closure because it is declared immutable (Fn vs FnMut), though values of failed
> iterations could potentially escape through, say, global variables.
> 
> I'll think some more about this to see if I can come up with something. If you
> have other ideas, please let us know!

My ignorance of Rust prevents me from saying much.  Me, I am just taking
you guys at your word about preventing bugs.  ;-)

> > > > > Lastly, the fact we use a generic `Guard` as proof that a lock is held (for the
> > > > > write path) means that we don't need to manually implement this for each
> > > > > different lock we care about; any that implements the `Lock` trait can be used.
> > > > > This is unlike the C code that uses fragile macros to generate code for
> > > > > different types of locks (though the scenario is slightly different in that the
> > > > > C code embeds a lock, which is also something we could do in Rust) -- the Rust
> > > > > version uses generics, so it is type-checked by the compiler.
> > > > 
> > > > OK, so this is a standalone implementation of sequence locks in Rust,
> > > > rather than something that could interoperate with the C-language
> > > > sequence locks?
> > > 
> > > It's an implementation of sequence locks using C-language sequence counters.
> > > Instead of embedding a lock for writer mutual exclusion, we require evidence
> > > that some lock is in use. The idea was to be "flexible" and share locks, but if
> > > most usage just embeds a lock, we may as well do something similar in Rust.
> > 
> > Whew!
> > 
> > I don't know if such a case exists, but there is the possibility of
> > non-lock mutual exclusion.  For example, the last guy to remove a
> > reference to something is allowed to do a sequence-counter update.
> > 
> > How would such a case be handled?
> 
> Well, it depends on how this mutual exclusion can be expressed to Rust. If,
> let's say, the protected data structure is being freed, then it is guaranteed
> that no-one else has references to it. In that case, one could just
> implement the `Drop` trait and get a mutable reference (&mut) to the object
> directly without having to go through the lock.
> 
> If Rust can't be convinced of the mutual exclusion, then it would require
> an unsafe variant, so its declaration would be something like:
> 
> /// Enters the write-side critical section.
> ///
> /// # Safety
> ///
> /// Callers must ensure that, at any given time, at most one thread/CPU calls
> /// this function and owns the returned guard.
> unsafe fn write_unsafe(&self) -> Guard;
> 
> 
> And callers would write something like:
> 
>   // SAFETY: The mutual exclusion requirement is satisfied by [reason here].
>   let guard = unsafe { seqcounter.write_unsafe() };
> 
> Note that the `unsafe` annotation in the function declaration above makes it
> such that all callers must wrap the calls in `unsafe` blocks. Failure to do so
> results in a compiler error saying that they should check the documentation on
> the safety requirements for this function.

Perhaps one approach that might work in at least a few cases would be to
bury the reference removal (atomic_dec_and_test()) into the same place
doing the write-side sequence-count work.  Perhaps that would allow the
reference-removal return value to feed in somehow?

But each special case seems like it would need special invention, which
leads to using Rust unsafe (as you suggest above) or just leaving it in
C code.
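
To make that concrete, here is the sort of interface I am imagining,
with completely made-up names and no claim that the Rust is even valid:

  // Hypothetical: combine the final reference drop with entry into the
  // write-side critical section.  A guard is returned only when this
  // caller dropped the last reference, so the writer-side mutual
  // exclusion comes from the refcount rather than from a lock.
  fn write_if_last_ref(&self, refcount: &Refcount) -> Option<Guard>;

  // Caller side:
  if let Some(guard) = seq.write_if_last_ref(&obj.refcount) {
      // Sole owner at this point: update the protected fields.
      guard.a.store(0, Ordering::Relaxed);
  }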

> > > > Is "fragile macros" just the usual Rust denigration of the C preprocessor,
> > > > or is there some specific vulnerability that you see in those macros?
> > > 
> > > I don't see any specific vulnerability. By fragile I meant that it's more error
> > > prone to write "generic" code with macros than with compiler-supported generics.
> > 
> > Fair enough, but rest assured that those who love the C preprocessor
> > have their own "interesting" descriptions of Rust macros.  ;-)
> 
> Oh, you won't see me defending macros from either language :)

They are crufty, difficult to get right, easy to inject bugs into,
ugly, inelegant, ...

And always there when you need them!

> > Plus I am old enough to remember people extolling the simplicity of
> > C-preprocessor macros compared to, among other things, LISP macros.
> > And they were correct to do so, at least for simple use cases.
> > 
> > I suggest just calling them CPP macros or similar when talking with
> > Linux-kernel community members.  Me, I have seen enough software artifacts
> > come and go that I don't much care what you call them, but others just
> > might be a bit more touchy about such things.
> 
> Sure, but to be clear, I haven't talked about Rust macros, and I don't encourage
> their use. I was talking about generics, a Rust language feature that is part
> of the type system (integral to the lifetimes story), so they are checked by
> the compiler, unlike macros (C or Rust).

I agree that various sorts of generics can do some jobs better than
macros can.  Give or take their effect on C++ build times, but maybe
Rust has a better story there.
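
For the record, my possibly naive reading of the generics approach is
something along the following lines, with every name here being
illustrative only rather than the real interface:

  // Hypothetical: each wrapped C lock (spinlock, mutex, ...) implements
  // this trait, so the compiler can check every use of it.
  trait Lock {
      type Guard;
      fn lock(&self) -> Self::Guard;
  }

  // One generic write path covers all lock types; passing anything that
  // is not a guard of some Lock is a compile-time error rather than a
  // macro-expansion surprise.
  fn write<L: Lock>(&self, _proof: &L::Guard) -> WriteGuard<'_>;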

> > > > Of course, those macros could be used to automatically generate the
> > > > wrappers.  Extract the macro invocations from the C source, and transform
> > > > them to wrappers, perhaps using Rust macros somewhere along the way.
> > > 
> > > Sure, we could do something like that.
> > > 
> > > But given that we already wrap the C locks in Rust abstractions that implement a
> > > common trait (interface), we can use Rust generics to leverage all locks without
> > > the need for macros.
> > 
> > If you have a particular sequence lock that is shared between Rust and C
> > code, it would be good to be able to easily find the Rust uses given
> > the C uses and vice versa!
> > 
> > I am not claiming that generics won't work, but instead that we still need
> > to be able to debug the Linux kernel, and that requires us to be able to
> > quickly and easily find all the places where a given object is used.
> 
> Fair point. We need to spend more time on tooling to link the C code with the
> Rust wrappers and the usage of wrappers.

Very much agreed!  ;-)

> > > > > RCU pointers can be implemented with a similar technique in that read access is
> > > > > protected by a 'global' RCU reader lock (and evidence of it being locked is
> > > > > required to get read access), and writers require another lock to be held. The
> > > > > only piece that I haven't thought through yet is how to ensure that pointers
> > > > > that were exposed with RCU 'protection' cannot be freed before the grace period
> > > > > has elapsed. But this is a discussion for another time.
> > > > 
> > > > Please note that it is quite important for Rust to use the RCU provided
> > > > by the C-language part of the kernel.  Probably also for sequence locks,
> > > > but splitting RCU reduces the effectiveness of its batching optimizations.
> > > 
> > > Agreed. We actually use the C implementation for all synchronisation primitives
> > > (including ref-counting, which isn't technically a synchronisation primitive but
> > > has subtle usage of barriers). What I mean by "implemented in Rust" is just the
> > > abstractions leveraging Rust concepts to catch misuses earlier where possible.
> > 
> > Might I suggest that you instead say "wrappered for Rust"?
> > 
> > I am not the only one to whom "implemented in Rust" means just what
> > it says, that Rust has its own variant written completely in Rust.
> > Continuing to use "implemented in Rust" will continue to mislead
> > Linux-kernel developers into believing that you created a from-scratch
> > Rust variant of the code at hand, and believe me, that won't go well.
> 
> That's good feedback, thank you. I'll police my usage of implement vs wrap.

Just for my education in the other direction, what do I say to indicate
"written completely in Rust" as opposed to wrappered when talking to
Rust people?
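
(And on the RCU point above, my mental model of the wrappered read-side
marker is roughly the following.  The exact way the C-language helpers
show up on the Rust side is my guess, not the real bindings:)

  // Sketch only: a guard whose existence is evidence that an RCU
  // read-side critical section is live, entered and exited by calling
  // into the C-language RCU.
  struct RcuReadGuard(());

  fn rcu_read_lock() -> RcuReadGuard {
      // SAFETY: calls into the C-language RCU; the binding path is assumed.
      unsafe { bindings::rcu_read_lock() };
      RcuReadGuard(())
  }

  impl Drop for RcuReadGuard {
      fn drop(&mut self) {
          // SAFETY: matches the rcu_read_lock() above; binding path assumed.
          unsafe { bindings::rcu_read_unlock() };
      }
  }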

							Thanx, Paul

> > > > For at least some of the Linux kernel's RCU use cases, something like
> > > > interior mutability may be required.  Whether those use cases show up
> > > > in any Rust-language drivers I cannot say.  Other use cases would work
> > > > well with RCU readers having read ownership of the non-pointer fields
> > > > in each RCU-protected object.
> > > > 
> > > > Again, I did add rough descriptions of a few Linux-kernel RCU use cases.
> > > > 
> > > > > I'll send out the patches for what I describe above in the next couple of days.
> > > > > 
> > > > > Does any of the above help answer the questions you have about seqlocks in Rust?
> > > > 
> > > > Possibly at least some of them.  I suspect that there is still much to
> > > > be learned on all sides, including learning about additional questions
> > > > that need to be asked.
> > > 
> > > Fair point. We don't know quite yet if we've asked all the questions.
> > 
> > My main immediate additional question is "what are the bugs and what
> > can be done to better locate them".  That question of course applies
> > regardless of the language and tools used for a given piece of code.
> > 
> > > > Either way, thank you for your work on this!
> > > 
> > > Thanks for engaging with us, this is much appreciated.
> > > 
> > > Cheers,
> > > -Wedson
> > > 
> > > > 
> > > > 							Thanx, Paul
> > > > 
> > > > > Thanks,
> > > > > -Wedson
> > > > > 
> > > > > > So the trick is to stage things so as to allow people time to work on
> > > > > > these sorts of issues.
> > > > > > 
> > > > > > > In any case, Rust does not necessarily need to help there. What is
> > > > > > > important is whether Rust helps with writing the majority of the kernel
> > > > > > > code. If we need to call into C or use inline assembly for certain
> > > > > > > bits -- so be it.
> > > > > > > 
> > > > > > > > But to be fair, much again depends on exactly where Rust is to be applied
> > > > > > > > in the kernel.  If a given Linux-kernel feature is not used where Rust
> > > > > > > > needs to be applied, then there is no need to solve the corresponding
> > > > > > > > issues.
> > > > > > > 
> > > > > > > Exactly.
> > > > > > 
> > > > > > Thank you for bearing with me.
> > > > > > 
> > > > > > I will respond to your other email later, but the focus on memory
> > > > > > safety in particular instead of undefined behavior in general does help
> > > > > > me quite a bit.
> > > > > > 
> > > > > > My next step is to create a "TL;DR: Memory-Model Recommendations" post
> > > > > > that is more specific, with both short-term ("do what is easy") and
> > > > > > long-term suggestions.
> > > > > > 
> > > > > > 							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-13 23:29                               ` Paul E. McKenney
@ 2021-10-22 19:17                                 ` Miguel Ojeda
  2021-10-22 20:34                                   ` Paul E. McKenney
  0 siblings, 1 reply; 39+ messages in thread
From: Miguel Ojeda @ 2021-10-22 19:17 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Thu, Oct 14, 2021 at 1:29 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> So Rust could support zombie pointers without changes to LLVM?

I don't know what you mean by "without changes". LLVM is not fixed; it
changes every version, and Rust sometimes has to patch it on top. If
Rust decides to support (or not) zombie pointers, then they will have
to look for a way to lower code in the given version/instance of LLVM
they are using in a way that does not break the zap-susceptible
algorithms. That may require new features for the IR, or disabling
certain optimizations, or fixing bugs, etc.

> The standard is for the most part not a mathematical document.  So many
> parts of it can only be "understood in a personal capacity".

Sure, but there is a middle-ground between a formal model and
completely unstated semantics where nobody can even guess the
intention. My point was that we should not rely on semantics that are
not precise yet -- if possible. And if the same problem happens in C,
but we have a workaround for it, we should not be rewriting those
algorithms in Rust.

> To be proven in the context of the Linux kernel.  And I am happy to
> provide at least a little help with the experiment.

I was talking about classes of errors that are avoided "just" by using
the language. For instance, using `Result` instead of hoping that users
get the error encoding right even across maintenance rounds.
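
For instance, a toy illustration (not kernel code; names are made up):

  // C-style: the error is encoded in the sign of the return value, and
  // nothing stops a caller from ignoring it or getting the convention
  // wrong during a later refactor.
  //     int buf_len(const char *buf);   /* >= 0: length, < 0: -errno */

  // Rust-style: success and failure are distinct cases of one type, the
  // Result is #[must_use], and the caller cannot touch the length
  // without first handling the error.
  struct Error(i32);

  fn buf_len(buf: &[u8]) -> Result<usize, Error> {
      if buf.is_empty() { Err(Error(-22)) } else { Ok(buf.len()) }
  }

  fn caller() {
      match buf_len(b"abc") {
          Ok(len) => assert_eq!(len, 3),
          Err(Error(e)) => panic!("error {}", e),
      }
  }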

> Working on it in the case of C/C++, though quite a bit more slowly
> than I would like.

In my case I am trying to see if WG14 would be interested in adding
Rust-like features to C, but even if everyone agreed, it would take a
very long time, indeed.

> However...
>
> Just to get you an idea of the timeframe, the C++ committee requested
> an RCU proposal from me in 2014.  It took about four years to exchange
> sufficient C++ and RCU knowledge to come to agreement on what a C++
> RCU API would even look like.  The subsequent three years of delay were
> due to bottlenecks in the standardization process.  Only this year were
> hazard pointers and RCU voted into a Technical Specification, which has
> since been drafted by Michael Wong, Maged Michael (who of course did the
> hazard pointers section), and myself.  The earliest possible International
> Standard release date is 2026, with 2029 perhaps being more likely.
>
> Let's be optimistic and assume 2026.  That would be 12 years elapsed time.
>
> Now, the USA Social Security actuarial tables [1] give me about a 77%
> chance of living another 12 years, never mind the small matter of
> remaining vigorous enough to participate in the standards process.
> Therefore, there is only so much more that I will be doing in this space.
>
> Apologies for bringing up what might seem to be a rather morbid point,
> but there really are sharp limits here.  ;-)

I feel you, I have also experienced it (to a much lesser degree, though).

I could even see similar work for Rust going in faster than in C++
even if you started today ;-)

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Can the Kernel Concurrency Sanitizer Own Rust Code?
  2021-10-22 19:17                                 ` Miguel Ojeda
@ 2021-10-22 20:34                                   ` Paul E. McKenney
  0 siblings, 0 replies; 39+ messages in thread
From: Paul E. McKenney @ 2021-10-22 20:34 UTC (permalink / raw)
  To: Miguel Ojeda; +Cc: Gary Guo, Marco Elver, Boqun Feng, kasan-dev, rust-for-linux

On Fri, Oct 22, 2021 at 09:17:34PM +0200, Miguel Ojeda wrote:
> On Thu, Oct 14, 2021 at 1:29 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > So Rust could support zombie pointers without changes to LLVM?
> 
> I don't know what you mean by "without changes". LLVM is not fixed; it
> changes every version, and Rust sometimes has to patch it on top. If
> Rust decides to support (or not) zombie pointers, then they will have
> to look for a way to lower code in the given version/instance of LLVM
> they are using in a way that does not break the zap-susceptible
> algorithms. That may require new features for the IR, or disabling
> certain optimizations, or fixing bugs, etc.

And we do have some people working on these fixes in the LLVM backend,
but it may take some time.

> > The standard is for the most part not a mathematical document.  So many
> > parts of it can only be "understood in a personal capacity".
> 
> Sure, but there is a middle-ground between a formal model and
> completely unstated semantics where nobody can even guess the
> intention. My point was that we should not rely on semantics that are
> not precise yet -- if possible. And if the same problem happens in C,
> but we have a workaround for it, we should not be rewriting those
> algorithms in Rust.

Me, I don't have a choice.  To get my job done, I am required to use
things that the standards do not define very well, if at all.

And this is true of any large project.  And also part of the reason that
Rust has unsafe mode.

But yes, in many cases, informal definitions are better than no
definitions.  And I agree that it is possible to reason informally.
After all, the formal definitions of RCU didn't show up until RCU had
some decades of use in production.  ;-)

> > To be proven in the context of the Linux kernel.  And I am happy to
> > provide at least a little help with the experiment.
> 
> I was talking about classes of errors that are avoided "just" by using
> the language. For instance, using `Result` instead of hoping that users
> get the error encoding right even across maintenance rounds.

OK, I have to ask and I apologize in advance, but...

...have you taken courses in statistics and in experiment design?

> > Working on it in the case of C/C++, though quite a bit more slowly
> > than I would like.
> 
> In my case I am trying to see if WG14 would be interested in adding
> Rust-like features to C, but even if everyone agreed, it would take a
> very long time, indeed.

I know that feeling.

And to be fair, everyone would have been better off had C and C++ been
slower to adopt memory_order_consume.  (Another thing being worked on.)

> > However...
> >
> > Just to get you an idea of the timeframe, the C++ committee requested
> > an RCU proposal from me in 2014.  It took about four years to exchange
> > sufficient C++ and RCU knowledge to come to agreement on what a C++
> > RCU API would even look like.  The subsequent three years of delay were
> > due to bottlenecks in the standardization process.  Only this year were
> > hazard pointers and RCU voted into a Technical Specification, which has
> > since been drafted by Michael Wong, Maged Michael (who of course did the
> > hazard pointers section), and myself.  The earliest possible International
> > Standard release date is 2026, with 2029 perhaps being more likely.
> >
> > Let's be optimistic and assume 2026.  That would be 12 years elapsed time.
> >
> > Now, the USA Social Security actuarial tables [1] give me about a 77%
> > chance of living another 12 years, never mind the small matter of
> > remaining vigorous enough to participate in the standards process.
> > Therefore, there is only so much more that I will be doing in this space.
> >
> > Apologies for bringing up what might seem to be a rather morbid point,
> > but there really are sharp limits here.  ;-)
> 
> I feel you, I have also experienced it (to a much lesser degree, though).
> 
> I could even see similar work for Rust going in faster than in C++
> even if you started today ;-)

For RCU in Rust, there would first be much need for the Rust community
to learn more about RCU and for me to learn more about Rust.  As noted
earlier, my next step is to better document RCU's wide range of use cases.

And I know (and appreciate) that some in the Rust community have read
my open-source book on concurrency, but I suspect that quite a few more
could benefit from doing so.  ;-)

							Thanx, Paul

^ permalink raw reply	[flat|nested] 39+ messages in thread
