* [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
@ 2023-07-14  9:13 Asahi Lina
  2023-07-14  9:13 ` [PATCH RFC 01/11] rust: types: Add Opaque::zeroed() Asahi Lina
                   ` (11 more replies)
  0 siblings, 12 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Begone, lock classes!

As discussed in meetings/etc, we would really like to support implicit
lock class creation for Rust code. Right now, lock classes are created
using macros and passed around (similar to C). Unfortunately, Rust
macros don't look like Rust functions, which means adding lockdep to a
type is a breaking API change. This makes Rust mutex creation rather
ugly, with the new_mutex!() macro and friends.

Implicit lock classes have to be unique per instantiation code site.
Notably, with Rust generics and monomorphization, this is not the same
as unique per generated code instance. If this weren't the case, we
could use inline functions and asm!() magic to try to create lock
classes that have the right uniqueness semantics. But that doesn't work,
since it would create too many lock classes for the same actual lock
creation in the source code.

But Rust does have one trick we can use: it can track the caller
location (as file:line:column), across multiple functions. This works
using an implicit argument that gets passed around, which is exactly the
thing we do for lock classes. The tricky bit is that, while the value of
these Location objects has the semantics we want (unique value per
source code location), there is no guarantee that they are deduplicated
in memory.

So we use a hash table, and map Location values to lock classes. Et
voilà, implicit lock class support!
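The caller-tracking mechanism described above can be sketched in plain Rust (userspace `std` here; the kernel code uses the same core machinery):

```rust
use std::panic::Location;

// #[track_caller] makes Location::caller() report the *call site* of
// this function rather than this line. It is propagated through an
// implicit argument across function boundaries, which is exactly the
// shape a lock class argument needs.
#[track_caller]
fn callsite() -> &'static Location<'static> {
    Location::caller()
}
```

Two calls written on different source lines yield different Location values, while repeated executions of the same line always yield the same one.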

This lets us clean up the Mutex & co APIs and make them look a lot more
Rust-like, but it also means we can now throw Lockdep into more APIs
without breaking the API. And so we can pull a neat trick: adding
Lockdep support into Arc<T>. This catches cases where an Arc's Drop
implementation would create a locking correctness violation only when
the reference count drops to 0 at that particular drop site, which is
otherwise undetectable unless that condition actually happens at
runtime. Since Drop is "magic" in Rust and Drop codepaths are very
difficult to audit, this helps a lot.
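Why the drop *site* matters can be seen in a userspace sketch using std's Arc (names here are illustrative, not the kernel API): the inner Drop only runs at whichever drop site happens to bring the count to zero, so a lock taken inside Drop is invisible at every other drop site.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

struct Guarded(Arc<AtomicBool>);

impl Drop for Guarded {
    fn drop(&mut self) {
        // In the kernel scenario, this is where a lock could be taken.
        // It only executes at the one drop site that reaches count 0.
        self.0.store(true, Ordering::SeqCst);
    }
}

fn demo() -> (bool, bool) {
    let flag = Arc::new(AtomicBool::new(false));
    let a = Arc::new(Guarded(flag.clone()));
    let b = a.clone();
    drop(a); // count 2 -> 1: Guarded::drop does NOT run here
    let after_first = flag.load(Ordering::SeqCst);
    drop(b); // count 1 -> 0: Guarded::drop runs at this site only
    let after_last = flag.load(Ordering::SeqCst);
    (after_first, after_last)
}
```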

For the initial RFC, this implements the new API only for Mutex. If this
looks good, I can extend it to CondVar & friends in the next version.
This series also folds in a few related minor dependencies / changes
(like the pin_init mutex stuff).

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
Asahi Lina (11):
      rust: types: Add Opaque::zeroed()
      rust: lock: Add Lock::pin_init()
      rust: Use absolute paths to build Rust objects
      rust: siphash: Add a simple siphash abstraction
      rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP
      rust: sync: Replace static LockClassKey refs with a pointer wrapper
      rust: sync: Implement dynamic lockdep class creation
      rust: sync: Classless Lock::new() and pin_init()
      rust: init: Update documentation for new mutex init style
      rust: sync: Add LockdepMap abstraction
      rust: sync: arc: Add lockdep integration
 lib/Kconfig.debug                 |   8 ++
 rust/Makefile                     |   2 +-
 rust/bindings/bindings_helper.h   |   2 +
 rust/helpers.c                    |  24 ++++
 rust/kernel/init.rs               |  22 ++--
 rust/kernel/lib.rs                |   1 +
 rust/kernel/siphash.rs            |  39 +++++++
 rust/kernel/sync.rs               |  33 ++----
 rust/kernel/sync/arc.rs           |  71 +++++++++++-
 rust/kernel/sync/condvar.rs       |   2 +-
 rust/kernel/sync/lock.rs          |  68 ++++++++++-
 rust/kernel/sync/lock/mutex.rs    |  15 ++-
 rust/kernel/sync/lock/spinlock.rs |   2 +-
 rust/kernel/sync/lockdep.rs       | 229 ++++++++++++++++++++++++++++++++++++++
 rust/kernel/sync/no_lockdep.rs    |  38 +++++++
 rust/kernel/types.rs              |   7 +-
 scripts/Makefile.build            |   8 +-
 17 files changed, 525 insertions(+), 46 deletions(-)
---
base-commit: 7eb28ae62e16abc207c90064ad2b824c19566fe2
change-id: 20230714-classless_lockdep-f1d5972fb4ba

Thank you,
~~ Lina



* [PATCH RFC 01/11] rust: types: Add Opaque::zeroed()
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-14 10:15   ` Alice Ryhl
  2023-07-15 14:27   ` Gary Guo
  2023-07-14  9:13 ` [PATCH RFC 02/11] rust: lock: Add Lock::pin_init() Asahi Lina
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Opaque types are internally MaybeUninit, so it's safe to actually
zero-initialize them as long as we don't claim they are initialized.
This is useful for many FFI types that are expected to be zero-inited by
the user.
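The underlying soundness argument can be demonstrated with `MaybeUninit` directly (userspace sketch; `u64` stands in for a zero-initializable C FFI struct):

```rust
use core::mem::MaybeUninit;

// Producing a zeroed MaybeUninit is always safe: writing zeroes never
// claims the value is initialized. For a type where the all-zero bit
// pattern is valid (every bit pattern is valid for u64), it is then
// also sound to assume_init.
fn zeroed_u64() -> u64 {
    let v = MaybeUninit::<u64>::zeroed();
    // SAFETY: all-zero is a valid bit pattern for u64.
    unsafe { v.assume_init() }
}
```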

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/types.rs | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index 1e5380b16ed5..185d3493857e 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -237,6 +237,11 @@ pub const fn uninit() -> Self {
         Self(MaybeUninit::uninit())
     }
 
+    /// Creates a zeroed value.
+    pub fn zeroed() -> Self {
+        Self(MaybeUninit::zeroed())
+    }
+
     /// Creates a pin-initializer from the given initializer closure.
     ///
     /// The returned initializer calls the given closure with the pointer to the inner `T` of this

-- 
2.40.1



* [PATCH RFC 02/11] rust: lock: Add Lock::pin_init()
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
  2023-07-14  9:13 ` [PATCH RFC 01/11] rust: types: Add Opaque::zeroed() Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-15 14:29   ` Gary Guo
  2023-07-14  9:13 ` [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects Asahi Lina
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Allow initializing a lock using pin_init!(), instead of requiring
the inner data to be passed through the stack.

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/sync/lock.rs       | 30 +++++++++++++++++++++++++++++-
 rust/kernel/sync/lock/mutex.rs | 13 +++++++++++++
 2 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index a2216325632d..d493c5d19104 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -6,7 +6,9 @@
 //! spinlocks, raw spinlocks) to be provided with minimal effort.
 
 use super::LockClassKey;
-use crate::{bindings, init::PinInit, pin_init, str::CStr, types::Opaque, types::ScopeGuard};
+use crate::{
+    bindings, init::PinInit, pin_init, str::CStr, try_pin_init, types::Opaque, types::ScopeGuard,
+};
 use core::{cell::UnsafeCell, marker::PhantomData, marker::PhantomPinned};
 use macros::pin_data;
 
@@ -87,6 +89,7 @@ pub struct Lock<T: ?Sized, B: Backend> {
     _pin: PhantomPinned,
 
     /// The data protected by the lock.
+    #[pin]
     pub(crate) data: UnsafeCell<T>,
 }
 
@@ -111,6 +114,31 @@ pub fn new(t: T, name: &'static CStr, key: &'static LockClassKey) -> impl PinIni
             }),
         })
     }
+
+    /// Constructs a new lock initialiser taking an initialiser.
+    pub fn pin_init<E>(
+        t: impl PinInit<T, E>,
+        name: &'static CStr,
+        key: &'static LockClassKey,
+    ) -> impl PinInit<Self, E>
+    where
+        E: core::convert::From<core::convert::Infallible>,
+    {
+        try_pin_init!(Self {
+            // SAFETY: We are just forwarding the initialization across a
+            // cast away from UnsafeCell, so the pin_init_from_closure and
+            // __pinned_init() requirements are in sync.
+            data <- unsafe { crate::init::pin_init_from_closure(move |slot: *mut UnsafeCell<T>| {
+                t.__pinned_init(slot as *mut T)
+            })},
+            _pin: PhantomPinned,
+            // SAFETY: `slot` is valid while the closure is called and both `name` and `key` have
+            // static lifetimes so they live indefinitely.
+            state <- Opaque::ffi_init(|slot| unsafe {
+                B::init(slot, name.as_char_ptr(), key.as_ptr())
+            }),
+        }? E)
+    }
 }
 
 impl<T: ?Sized, B: Backend> Lock<T, B> {
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
index 923472f04af4..06fe685501b4 100644
--- a/rust/kernel/sync/lock/mutex.rs
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -18,6 +18,19 @@ macro_rules! new_mutex {
     };
 }
 
+/// Creates a [`Mutex`] initialiser with the given name and a newly-created lock class,
+/// given an initialiser for the inner type.
+///
+/// It uses the name if one is given, otherwise it generates one based on the file name and line
+/// number.
+#[macro_export]
+macro_rules! new_mutex_pinned {
+    ($inner:expr $(, $name:literal)? $(,)?) => {
+        $crate::sync::Mutex::pin_init(
+            $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
+    };
+}
+
 /// A mutual exclusion primitive.
 ///
 /// Exposes the kernel's [`struct mutex`]. When multiple threads attempt to lock the same mutex,

-- 
2.40.1



* [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
  2023-07-14  9:13 ` [PATCH RFC 01/11] rust: types: Add Opaque::zeroed() Asahi Lina
  2023-07-14  9:13 ` [PATCH RFC 02/11] rust: lock: Add Lock::pin_init() Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-15 14:35   ` Gary Guo
  2023-07-14  9:13 ` [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction Asahi Lina
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

We want to use caller_location to uniquely identify callsites, to
automatically create lockdep classes without macros. The location
filename in local code uses the relative path passed to the compiler,
but if that code is generic and instantiated from another crate, the
path becomes absolute.

To make this work and keep the paths consistent, always pass an absolute
path to the compiler. Then the Location path is always identical
regardless of how the code is being compiled.
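The relevant property is that `Location::file()` reports the path exactly as the source file was named on the rustc command line, which is why relative vs. absolute invocations would otherwise produce different strings for the same file. A small sketch:

```rust
use std::panic::Location;

// Sketch: the string returned here is whatever path rustc was given
// for the file containing the call site. With $(abspath ...) applied
// in the Makefiles, that string is consistent regardless of whether
// the code is compiled directly or instantiated from another crate.
#[track_caller]
fn callsite_file() -> &'static str {
    Location::caller().file()
}
```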

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/Makefile          | 2 +-
 scripts/Makefile.build | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/rust/Makefile b/rust/Makefile
index 7c9d9f11aec5..552f023099c8 100644
--- a/rust/Makefile
+++ b/rust/Makefile
@@ -369,7 +369,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
 		--emit=dep-info=$(depfile) --emit=obj=$@ \
 		--emit=metadata=$(dir $@)$(patsubst %.o,lib%.rmeta,$(notdir $@)) \
 		--crate-type rlib -L$(objtree)/$(obj) \
-		--crate-name $(patsubst %.o,%,$(notdir $@)) $< \
+		--crate-name $(patsubst %.o,%,$(notdir $@)) $(abspath $<) \
 	$(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@)
 
 rust-analyzer:
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 6413342a03f4..c925b90ebd80 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -283,27 +283,27 @@ rust_common_cmd = \
 # would not match each other.
 
 quiet_cmd_rustc_o_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
-      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $<
+      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $(abspath $<)
 
 $(obj)/%.o: $(src)/%.rs FORCE
 	$(call if_changed_dep,rustc_o_rs)
 
 quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
       cmd_rustc_rsi_rs = \
-	$(rust_common_cmd) -Zunpretty=expanded $< >$@; \
+	$(rust_common_cmd) -Zunpretty=expanded $(abspath $<) >$@; \
 	command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) $@
 
 $(obj)/%.rsi: $(src)/%.rs FORCE
 	$(call if_changed_dep,rustc_rsi_rs)
 
 quiet_cmd_rustc_s_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
-      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $<
+      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $(abspath $<)
 
 $(obj)/%.s: $(src)/%.rs FORCE
 	$(call if_changed_dep,rustc_s_rs)
 
 quiet_cmd_rustc_ll_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
-      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $<
+      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $(abspath $<)
 
 $(obj)/%.ll: $(src)/%.rs FORCE
 	$(call if_changed_dep,rustc_ll_rs)

-- 
2.40.1



* [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (2 preceding siblings ...)
  2023-07-14  9:13 ` [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-14 14:28   ` Martin Rodriguez Reboredo
  2023-07-15 14:52   ` Gary Guo
  2023-07-14  9:13 ` [PATCH RFC 05/11] rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP Asahi Lina
                   ` (7 subsequent siblings)
  11 siblings, 2 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

This simple wrapper allows Rust code to use the Hasher interface with
the kernel siphash implementation. No fancy features supported for now,
just basic bag-of-bytes hashing. No guarantee that hash outputs will
remain stable in the future either.
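The wrapper's design, chaining whole-buffer hash calls by folding the previous state into the next call, can be sketched in userspace. Since the kernel's siphash isn't available here, a toy FNV-1a mixer stands in for the `siphash()` call; only the chaining structure matches the patch:

```rust
use core::hash::{Hash, Hasher};

// Stand-in for the patch's SipHasher: no incremental hashing in the
// underlying primitive, so each write() hashes one whole buffer and
// folds the previous state in (the patch uses state as the key half).
#[derive(Default)]
struct ChainHasher {
    state: u64,
}

impl Hasher for ChainHasher {
    fn finish(&self) -> u64 {
        self.state
    }

    fn write(&mut self, bytes: &[u8]) {
        // Toy FNV-1a seeded with the previous state; NOT siphash.
        let mut h = self.state ^ 0xcbf2_9ce4_8422_2325;
        for &b in bytes {
            h ^= b as u64;
            h = h.wrapping_mul(0x0000_0100_0000_01b3);
        }
        self.state = h;
    }
}

fn hash_of<T: Hash>(v: &T) -> u64 {
    let mut h = ChainHasher::default();
    v.hash(&mut h);
    h.finish()
}
```

Because it implements `core::hash::Hasher`, any `Hash` type (including derived ones) can drive it.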

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/bindings/bindings_helper.h |  1 +
 rust/helpers.c                  |  8 ++++++++
 rust/kernel/lib.rs              |  1 +
 rust/kernel/siphash.rs          | 39 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 49 insertions(+)

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 3e601ce2548d..52f32e423b04 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -10,6 +10,7 @@
 #include <linux/slab.h>
 #include <linux/refcount.h>
 #include <linux/wait.h>
+#include <linux/siphash.h>
 #include <linux/sched.h>
 
 /* `bindgen` gets confused at certain things. */
diff --git a/rust/helpers.c b/rust/helpers.c
index bb594da56137..1ed71315d1eb 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -24,6 +24,7 @@
 #include <linux/errname.h>
 #include <linux/refcount.h>
 #include <linux/mutex.h>
+#include <linux/siphash.h>
 #include <linux/spinlock.h>
 #include <linux/sched/signal.h>
 #include <linux/wait.h>
@@ -135,6 +136,13 @@ void rust_helper_put_task_struct(struct task_struct *t)
 }
 EXPORT_SYMBOL_GPL(rust_helper_put_task_struct);
 
+u64 rust_helper_siphash(const void *data, size_t len,
+			const siphash_key_t *key)
+{
+	return siphash(data, len, key);
+}
+EXPORT_SYMBOL_GPL(rust_helper_siphash);
+
 /*
  * We use `bindgen`'s `--size_t-is-usize` option to bind the C `size_t` type
  * as the Rust `usize` type, so we can use it in contexts where Rust
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 85b261209977..8fb39078b85c 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -36,6 +36,7 @@
 pub mod ioctl;
 pub mod prelude;
 pub mod print;
+pub mod siphash;
 mod static_assert;
 #[doc(hidden)]
 pub mod std_vendor;
diff --git a/rust/kernel/siphash.rs b/rust/kernel/siphash.rs
new file mode 100644
index 000000000000..e13a17cd5a93
--- /dev/null
+++ b/rust/kernel/siphash.rs
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A core::hash::Hasher wrapper for the kernel siphash implementation.
+//!
+//! This module allows Rust code to use the kernel's siphash implementation
+//! to hash Rust objects.
+
+use core::hash::Hasher;
+
+/// A Hasher implementation that uses the kernel siphash implementation.
+#[derive(Default)]
+pub struct SipHasher {
+    // SipHash state is 4xu64, but the Linux implementation
+    // doesn't expose incremental hashing so let's just chain
+    // individual SipHash calls for now, which return a u64
+    // hash.
+    state: u64,
+}
+
+impl SipHasher {
+    /// Create a new SipHasher with zeroed state.
+    pub fn new() -> Self {
+        SipHasher { state: 0 }
+    }
+}
+
+impl Hasher for SipHasher {
+    fn finish(&self) -> u64 {
+        self.state
+    }
+
+    fn write(&mut self, bytes: &[u8]) {
+        let key = bindings::siphash_key_t {
+            key: [self.state, 0],
+        };
+
+        self.state = unsafe { bindings::siphash(bytes.as_ptr() as *const _, bytes.len(), &key) };
+    }
+}

-- 
2.40.1



* [PATCH RFC 05/11] rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (3 preceding siblings ...)
  2023-07-14  9:13 ` [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-14 14:57   ` Martin Rodriguez Reboredo
  2023-07-14  9:13 ` [PATCH RFC 06/11] rust: sync: Replace static LockClassKey refs with a pointer wrapper Asahi Lina
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Lock classes aren't used without lockdep. The C side declares the key
as an empty struct in that case, but let's make it an explicit ZST in
Rust, implemented in a separate module. This allows us to more easily
guarantee that the lockdep code will be trivially optimized out without
CONFIG_LOCKDEP, including LockClassKey arguments that are passed around.

Depending on whether CONFIG_LOCKDEP is enabled or not, we then import
the real lockdep implementation or the dummy one.
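The cfg-swap pattern can be sketched as follows (illustrative names and a `feature` flag standing in for CONFIG_LOCKDEP, not the kernel's actual module layout): both modules export the same item name, so call sites compile identically either way, and the stub is a true ZST that vanishes from generated code.

```rust
mod lockdep {
    // Stand-in for the real key, which wraps a C lock_class_key.
    pub struct LockClassKey(pub [u8; 8]);
}

mod no_lockdep {
    // ZST: carries no data, so passing it around costs nothing.
    pub struct LockClassKey;
}

// Compile-time selection; with the flag unset, the stub is used.
#[cfg(feature = "lockdep")]
use self::lockdep as active;
#[cfg(not(feature = "lockdep"))]
use self::no_lockdep as active;

fn key_size() -> usize {
    core::mem::size_of::<active::LockClassKey>()
}
```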

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/sync.rs            | 29 ++++++++---------------------
 rust/kernel/sync/lockdep.rs    | 27 +++++++++++++++++++++++++++
 rust/kernel/sync/no_lockdep.rs | 19 +++++++++++++++++++
 3 files changed, 54 insertions(+), 21 deletions(-)

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index d219ee518eff..352472c6b77a 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -5,37 +5,24 @@
 //! This module contains the kernel APIs related to synchronisation that have been ported or
 //! wrapped for usage by Rust code in the kernel.
 
-use crate::types::Opaque;
-
 mod arc;
 mod condvar;
 pub mod lock;
 mod locked_by;
 
+#[cfg(CONFIG_LOCKDEP)]
+mod lockdep;
+#[cfg(not(CONFIG_LOCKDEP))]
+mod no_lockdep;
+#[cfg(not(CONFIG_LOCKDEP))]
+use no_lockdep as lockdep;
+
 pub use arc::{Arc, ArcBorrow, UniqueArc};
 pub use condvar::CondVar;
 pub use lock::{mutex::Mutex, spinlock::SpinLock};
+pub use lockdep::LockClassKey;
 pub use locked_by::LockedBy;
 
-/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
-#[repr(transparent)]
-pub struct LockClassKey(Opaque<bindings::lock_class_key>);
-
-// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
-// provides its own synchronization.
-unsafe impl Sync for LockClassKey {}
-
-impl LockClassKey {
-    /// Creates a new lock class key.
-    pub const fn new() -> Self {
-        Self(Opaque::uninit())
-    }
-
-    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
-        self.0.get()
-    }
-}
-
 /// Defines a new static lock class and returns a pointer to it.
 #[doc(hidden)]
 #[macro_export]
diff --git a/rust/kernel/sync/lockdep.rs b/rust/kernel/sync/lockdep.rs
new file mode 100644
index 000000000000..cb68b18dc0ad
--- /dev/null
+++ b/rust/kernel/sync/lockdep.rs
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Lockdep utilities.
+//!
+//! This module abstracts the parts of the kernel lockdep API relevant to Rust
+//! modules, including lock classes.
+
+use crate::types::Opaque;
+
+/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
+#[repr(transparent)]
+pub struct LockClassKey(Opaque<bindings::lock_class_key>);
+
+impl LockClassKey {
+    /// Creates a new lock class key.
+    pub const fn new() -> Self {
+        Self(Opaque::uninit())
+    }
+
+    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+        self.0.get()
+    }
+}
+
+// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
+// provides its own synchronization.
+unsafe impl Sync for LockClassKey {}
diff --git a/rust/kernel/sync/no_lockdep.rs b/rust/kernel/sync/no_lockdep.rs
new file mode 100644
index 000000000000..69d42e8d9801
--- /dev/null
+++ b/rust/kernel/sync/no_lockdep.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Dummy lockdep utilities.
+//!
+//! Takes the place of the `lockdep` module when lockdep is disabled.
+
+/// A dummy, zero-sized lock class.
+pub struct LockClassKey();
+
+impl LockClassKey {
+    /// Creates a new dummy lock class key.
+    pub const fn new() -> Self {
+        Self()
+    }
+
+    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+        core::ptr::null_mut()
+    }
+}

-- 
2.40.1



* [PATCH RFC 06/11] rust: sync: Replace static LockClassKey refs with a pointer wrapper
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (4 preceding siblings ...)
  2023-07-14  9:13 ` [PATCH RFC 05/11] rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-14 15:10   ` Martin Rodriguez Reboredo
  2023-07-14  9:13 ` [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation Asahi Lina
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

We want to be able to handle dynamic lock class creation and using
pointers to things that aren't a real lock_class_key as lock classes.
Doing this by casting around Rust references is difficult without
accidentally invoking UB.

Instead, switch LockClassKey to being a raw pointer wrapper around a
lock_class_key, which means there is no UB possible on the Rust side
just by creating and consuming these objects. The C code also should
never actually dereference lock classes, only use their address
(possibly with an offset).
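The "raw pointer wrapper instead of a reference" idea can be sketched like this (illustrative names, `*mut u8` standing in for `*mut bindings::lock_class_key`): constructing and copying the key is always safe, because no Rust reference (and thus no aliasing or validity obligation) ever exists; only an actual dereference, which lockdep never performs, would need unsafe.

```rust
#[derive(Copy, Clone)]
struct KeyPtr(*mut u8); // stand-in for *mut bindings::lock_class_key

impl KeyPtr {
    fn as_ptr(self) -> *mut u8 {
        self.0
    }
}

fn key_from_addr(addr: usize) -> KeyPtr {
    // Creating a raw pointer from an arbitrary address is safe Rust;
    // UB could only arise from dereferencing it, which never happens.
    KeyPtr(addr as *mut u8)
}
```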

We still provide a dummy ZST version of this wrapper, to be used when
lockdep is disabled.

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/sync.rs            |  6 +++---
 rust/kernel/sync/condvar.rs    |  2 +-
 rust/kernel/sync/lock.rs       |  4 ++--
 rust/kernel/sync/lockdep.rs    | 27 ++++++++++++++++++++++-----
 rust/kernel/sync/no_lockdep.rs | 15 +++++++++++++--
 rust/kernel/types.rs           |  2 +-
 6 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 352472c6b77a..49286c3e0ff3 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -20,7 +20,7 @@
 pub use arc::{Arc, ArcBorrow, UniqueArc};
 pub use condvar::CondVar;
 pub use lock::{mutex::Mutex, spinlock::SpinLock};
-pub use lockdep::LockClassKey;
+pub use lockdep::{LockClassKey, StaticLockClassKey};
 pub use locked_by::LockedBy;
 
 /// Defines a new static lock class and returns a pointer to it.
@@ -28,8 +28,8 @@
 #[macro_export]
 macro_rules! static_lock_class {
     () => {{
-        static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new();
-        &CLASS
+        static CLASS: $crate::sync::StaticLockClassKey = $crate::sync::StaticLockClassKey::new();
+        CLASS.key()
     }};
 }
 
diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
index ed353399c4e5..3bccb2c6ef84 100644
--- a/rust/kernel/sync/condvar.rs
+++ b/rust/kernel/sync/condvar.rs
@@ -92,7 +92,7 @@ unsafe impl Sync for CondVar {}
 impl CondVar {
     /// Constructs a new condvar initialiser.
     #[allow(clippy::new_ret_no_self)]
-    pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> {
+    pub fn new(name: &'static CStr, key: LockClassKey) -> impl PinInit<Self> {
         pin_init!(Self {
             _pin: PhantomPinned,
             // SAFETY: `slot` is valid while the closure is called and both `name` and `key` have
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index d493c5d19104..8e71e7aa2cc1 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -103,7 +103,7 @@ unsafe impl<T: ?Sized + Send, B: Backend> Sync for Lock<T, B> {}
 impl<T, B: Backend> Lock<T, B> {
     /// Constructs a new lock initialiser.
     #[allow(clippy::new_ret_no_self)]
-    pub fn new(t: T, name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> {
+    pub fn new(t: T, name: &'static CStr, key: LockClassKey) -> impl PinInit<Self> {
         pin_init!(Self {
             data: UnsafeCell::new(t),
             _pin: PhantomPinned,
@@ -119,7 +119,7 @@ pub fn new(t: T, name: &'static CStr, key: &'static LockClassKey) -> impl PinIni
     pub fn pin_init<E>(
         t: impl PinInit<T, E>,
         name: &'static CStr,
-        key: &'static LockClassKey,
+        key: LockClassKey,
     ) -> impl PinInit<Self, E>
     where
         E: core::convert::From<core::convert::Infallible>,
diff --git a/rust/kernel/sync/lockdep.rs b/rust/kernel/sync/lockdep.rs
index cb68b18dc0ad..d8328f4275fb 100644
--- a/rust/kernel/sync/lockdep.rs
+++ b/rust/kernel/sync/lockdep.rs
@@ -9,19 +9,36 @@
 
 /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
 #[repr(transparent)]
-pub struct LockClassKey(Opaque<bindings::lock_class_key>);
+pub struct StaticLockClassKey(Opaque<bindings::lock_class_key>);
 
-impl LockClassKey {
+impl StaticLockClassKey {
     /// Creates a new lock class key.
     pub const fn new() -> Self {
         Self(Opaque::uninit())
     }
 
+    /// Returns the lock class key reference for this static lock class.
+    pub const fn key(&self) -> LockClassKey {
+        LockClassKey(self.0.get())
+    }
+}
+
+// SAFETY: `bindings::lock_class_key` just represents an opaque memory location, and is never
+// actually dereferenced.
+unsafe impl Sync for StaticLockClassKey {}
+
+/// A reference to a lock class key. This is a raw pointer to a lock_class_key,
+/// which is required to have a static lifetime.
+#[derive(Copy, Clone)]
+pub struct LockClassKey(*mut bindings::lock_class_key);
+
+impl LockClassKey {
     pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
-        self.0.get()
+        self.0
     }
 }
 
-// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
-// provides its own synchronization.
+// SAFETY: `bindings::lock_class_key` just represents an opaque memory location, and is never
+// actually dereferenced.
+unsafe impl Send for LockClassKey {}
 unsafe impl Sync for LockClassKey {}
diff --git a/rust/kernel/sync/no_lockdep.rs b/rust/kernel/sync/no_lockdep.rs
index 69d42e8d9801..518ec0bf9a7d 100644
--- a/rust/kernel/sync/no_lockdep.rs
+++ b/rust/kernel/sync/no_lockdep.rs
@@ -5,14 +5,25 @@
 //! Takes the place of the `lockdep` module when lockdep is disabled.
 
 /// A dummy, zero-sized lock class.
-pub struct LockClassKey();
+pub struct StaticLockClassKey();
 
-impl LockClassKey {
+impl StaticLockClassKey {
     /// Creates a new dummy lock class key.
     pub const fn new() -> Self {
         Self()
     }
 
+    /// Returns the lock class key reference for this static lock class.
+    pub const fn key(&self) -> LockClassKey {
+        LockClassKey()
+    }
+}
+
+/// A dummy reference to a lock class key.
+#[derive(Copy, Clone)]
+pub struct LockClassKey();
+
+impl LockClassKey {
     pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
         core::ptr::null_mut()
     }
diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index 185d3493857e..91739bf71cc3 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -262,7 +262,7 @@ pub fn ffi_init(init_func: impl FnOnce(*mut T)) -> impl PinInit<Self> {
     }
 
     /// Returns a raw pointer to the opaque data.
-    pub fn get(&self) -> *mut T {
+    pub const fn get(&self) -> *mut T {
         UnsafeCell::raw_get(self.0.as_ptr())
     }
 

-- 
2.40.1



* [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (5 preceding siblings ...)
  2023-07-14  9:13 ` [PATCH RFC 06/11] rust: sync: Replace static LockClassKey refs with a pointer wrapper Asahi Lina
@ 2023-07-14  9:13 ` Asahi Lina
  2023-07-14 19:56   ` Martin Rodriguez Reboredo
  2023-07-15 15:47   ` Gary Guo
  2023-07-14  9:14 ` [PATCH RFC 08/11] rust: sync: Classless Lock::new() and pin_init() Asahi Lina
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:13 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Using macros to create lock classes all over the place is unergonomic,
and makes it impossible to add new features that require lock classes to
code such as Arc<> without changing all callers.

Rust has the ability to track the caller's identity by file/line/column
number, and we can use that to dynamically generate lock classes
instead.
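For reference, the core mechanism is plain Rust: a userspace sketch (std-only,
hypothetical names, no kernel APIs) of deduplicating #[track_caller] Locations
into one leaked "class" entry per call site, mirroring the hash-table approach
in this patch:

```rust
use std::collections::HashMap;
use std::panic::Location;
use std::sync::Mutex;

// One leaked entry per unique file:line:column call site, like the
// DynLockClassKey allocations in this patch (which are never freed).
static CLASSES: Mutex<Option<HashMap<(&'static str, u32, u32), &'static str>>> =
    Mutex::new(None);

#[track_caller]
fn class_name() -> &'static str {
    let loc = Location::caller();
    let key = (loc.file(), loc.line(), loc.column());
    let mut guard = CLASSES.lock().unwrap();
    let map = guard.get_or_insert_with(HashMap::new);
    *map.entry(key).or_insert_with(|| {
        // Leak the allocation so it becomes 'static.
        Box::leak(format!("{}:{}:{}", loc.file(), loc.line(), loc.column()).into_boxed_str())
    })
}

fn main() {
    let a = class_name();
    let b = class_name(); // different line -> different class
    assert_ne!(a, b);
    let v: Vec<_> = (0..2).map(|_| class_name()).collect();
    assert!(std::ptr::eq(v[0], v[1])); // same site -> same leaked entry
}
```

The kernel version replaces the std HashMap with fixed buckets of mutex-protected
vectors and registers each new key with lockdep.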

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/sync/lockdep.rs    | 147 ++++++++++++++++++++++++++++++++++++++++-
 rust/kernel/sync/no_lockdep.rs |   8 +++
 2 files changed, 154 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/lockdep.rs b/rust/kernel/sync/lockdep.rs
index d8328f4275fb..fbf9f6ed403d 100644
--- a/rust/kernel/sync/lockdep.rs
+++ b/rust/kernel/sync/lockdep.rs
@@ -5,7 +5,19 @@
 //! This module abstracts the parts of the kernel lockdep API relevant to Rust
 //! modules, including lock classes.
 
-use crate::types::Opaque;
+use crate::{
+    c_str, fmt,
+    init::InPlaceInit,
+    new_mutex,
+    prelude::{Box, Result, Vec},
+    str::{CStr, CString},
+    sync::Mutex,
+    types::Opaque,
+};
+
+use core::hash::{Hash, Hasher};
+use core::pin::Pin;
+use core::sync::atomic::{AtomicPtr, Ordering};
 
 /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
 #[repr(transparent)]
@@ -42,3 +54,136 @@ pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
 // actually dereferenced.
 unsafe impl Send for LockClassKey {}
 unsafe impl Sync for LockClassKey {}
+
+// Location is 'static but not really, since module unloads will
+// invalidate existing static Locations within that module.
+// To avoid breakage, we maintain our own location struct which is
+// dynamically allocated on first reference. We store a hash of the
+// whole location (including the filename string), as well as the
+// line and column separately. The assumption is that this whole
+// struct is highly unlikely to ever collide with a reasonable
+// hash (this saves us from having to check the filename string
+// itself).
+#[derive(PartialEq, Debug)]
+struct LocationKey {
+    hash: u64,
+    line: u32,
+    column: u32,
+}
+
+struct DynLockClassKey {
+    key: Opaque<bindings::lock_class_key>,
+    loc: LocationKey,
+    name: CString,
+}
+
+impl LocationKey {
+    fn new(loc: &'static core::panic::Location<'static>) -> Self {
+        let mut hasher = crate::siphash::SipHasher::new();
+        loc.hash(&mut hasher);
+
+        LocationKey {
+            hash: hasher.finish(),
+            line: loc.line(),
+            column: loc.column(),
+        }
+    }
+}
+
+impl DynLockClassKey {
+    fn key(&'static self) -> LockClassKey {
+        LockClassKey(self.key.get())
+    }
+
+    fn name(&'static self) -> &CStr {
+        &self.name
+    }
+}
+
+const LOCK_CLASS_BUCKETS: usize = 1024;
+
+#[track_caller]
+fn caller_lock_class_inner() -> Result<&'static DynLockClassKey> {
+    // This is just a hack to make the below static array initialization work.
+    #[allow(clippy::declare_interior_mutable_const)]
+    const ATOMIC_PTR: AtomicPtr<Mutex<Vec<&'static DynLockClassKey>>> =
+        AtomicPtr::new(core::ptr::null_mut());
+
+    #[allow(clippy::complexity)]
+    static LOCK_CLASSES: [AtomicPtr<Mutex<Vec<&'static DynLockClassKey>>>; LOCK_CLASS_BUCKETS] =
+        [ATOMIC_PTR; LOCK_CLASS_BUCKETS];
+
+    let loc = core::panic::Location::caller();
+    let loc_key = LocationKey::new(loc);
+
+    let index = (loc_key.hash % (LOCK_CLASS_BUCKETS as u64)) as usize;
+    let slot = &LOCK_CLASSES[index];
+
+    let mut ptr = slot.load(Ordering::Relaxed);
+    if ptr.is_null() {
+        let new_element = Box::pin_init(new_mutex!(Vec::new()))?;
+
+        if let Err(e) = slot.compare_exchange(
+            core::ptr::null_mut(),
+            // SAFETY: We never move out of this Box
+            Box::into_raw(unsafe { Pin::into_inner_unchecked(new_element) }),
+            Ordering::Relaxed,
+            Ordering::Relaxed,
+        ) {
+            // SAFETY: We just got this pointer from `into_raw()`
+            unsafe { Box::from_raw(e) };
+        }
+
+        ptr = slot.load(Ordering::Relaxed);
+        assert!(!ptr.is_null());
+    }
+
+    // SAFETY: This mutex was either just created above or previously allocated,
+    // and we never free these objects so the pointer is guaranteed to be valid.
+    let mut guard = unsafe { (*ptr).lock() };
+
+    for i in guard.iter() {
+        if i.loc == loc_key {
+            return Ok(i);
+        }
+    }
+
+    // We immediately leak the class, so it becomes 'static
+    let new_class = Box::leak(Box::try_new(DynLockClassKey {
+        key: Opaque::zeroed(),
+        loc: loc_key,
+        name: CString::try_from_fmt(fmt!("{}:{}:{}", loc.file(), loc.line(), loc.column()))?,
+    })?);
+
+    // SAFETY: This is safe to call with a pointer to a dynamically allocated lockdep key,
+    // and we never free the objects so it is safe to never unregister the key.
+    unsafe { bindings::lockdep_register_key(new_class.key.get()) };
+
+    guard.try_push(new_class)?;
+
+    Ok(new_class)
+}
+
+#[track_caller]
+pub(crate) fn caller_lock_class() -> (LockClassKey, &'static CStr) {
+    match caller_lock_class_inner() {
+        Ok(a) => (a.key(), a.name()),
+        Err(_) => {
+            crate::pr_err!(
+                "Failed to dynamically allocate lock class, lockdep may be unreliable.\n"
+            );
+
+            let loc = core::panic::Location::caller();
+            // SAFETY: LockClassKey is opaque and the lockdep implementation only needs
+            // unique addresses for statically allocated keys, so it is safe to just cast
+            // the Location reference directly into a LockClassKey. However, this will
+            // result in multiple keys for the same callsite due to monomorphization,
+            // as well as spuriously destroyed keys when the static key is allocated in
+            // the wrong module, which is what makes this unreliable.
+            (
+                LockClassKey(loc as *const _ as *mut _),
+                c_str!("fallback_lock_class"),
+            )
+        }
+    }
+}
diff --git a/rust/kernel/sync/no_lockdep.rs b/rust/kernel/sync/no_lockdep.rs
index 518ec0bf9a7d..de53c4de7fbe 100644
--- a/rust/kernel/sync/no_lockdep.rs
+++ b/rust/kernel/sync/no_lockdep.rs
@@ -4,6 +4,8 @@
 //!
 //! Takes the place of the `lockdep` module when lockdep is disabled.
 
+use crate::{c_str, str::CStr};
+
 /// A dummy, zero-sized lock class.
 pub struct StaticLockClassKey();
 
@@ -28,3 +30,9 @@ pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
         core::ptr::null_mut()
     }
 }
+
+pub(crate) fn caller_lock_class() -> (LockClassKey, &'static CStr) {
+    static DUMMY_LOCK_CLASS: StaticLockClassKey = StaticLockClassKey::new();
+
+    (DUMMY_LOCK_CLASS.key(), c_str!("dummy"))
+}

-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RFC 08/11] rust: sync: Classless Lock::new() and pin_init()
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (6 preceding siblings ...)
  2023-07-14  9:13 ` [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation Asahi Lina
@ 2023-07-14  9:14 ` Asahi Lina
  2023-07-14  9:14 ` [PATCH RFC 09/11] rust: init: Update documentation for new mutex init style Asahi Lina
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:14 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Use the new automagic lock class code to remove the lock class and name
parameters from Lock::new() and Lock::pin_init(). The old functions
are renamed to new_with_key() and pin_init_with_key() respectively.

The new approach uses the caller tracking machinery in Rust, which means
it can be trivially wrapped by adding #[track_caller] to any functions
that should bubble up lock class creation to their caller. This, for
example, allows a type using multiple Mutexes to create separate lock
classes for every user of the type, simply by adding that attribute to
the mutex creation code paths.
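The "bubbling" behaviour can be sketched in plain std Rust (hypothetical type
and function names, not the kernel API): every frame annotated with
#[track_caller] forwards its caller's location, so a type wrapping a lock
attributes the class to its own user with a one-line attribute:

```rust
use std::panic::Location;

#[track_caller]
fn lock_class_site() -> String {
    // Stands in for caller_lock_class(): resolves to the outermost
    // non-#[track_caller] call site.
    let loc = Location::caller();
    format!("{}:{}", loc.file(), loc.line())
}

struct MyDevice {
    site: String, // stands in for a per-call-site lock class
}

impl MyDevice {
    #[track_caller] // bubble class creation up to MyDevice::new()'s caller
    fn new() -> Self {
        MyDevice { site: lock_class_site() }
    }
}

fn main() {
    let a = MyDevice::new();
    let b = MyDevice::new(); // different line -> different class
    assert_ne!(a.site, b.site);
    assert!(a.site.contains(file!()));
}
```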

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/sync/lock.rs          | 42 +++++++++++++++++++++++++++++++++++----
 rust/kernel/sync/lock/mutex.rs    |  4 ++--
 rust/kernel/sync/lock/spinlock.rs |  2 +-
 3 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index 8e71e7aa2cc1..8849741c1d9a 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -5,7 +5,7 @@
 //! It contains a generic Rust lock and guard that allow for different backends (e.g., mutexes,
 //! spinlocks, raw spinlocks) to be provided with minimal effort.
 
-use super::LockClassKey;
+use super::{lockdep::caller_lock_class, LockClassKey};
 use crate::{
     bindings, init::PinInit, pin_init, str::CStr, try_pin_init, types::Opaque, types::ScopeGuard,
 };
@@ -103,7 +103,40 @@ unsafe impl<T: ?Sized + Send, B: Backend> Sync for Lock<T, B> {}
 impl<T, B: Backend> Lock<T, B> {
     /// Constructs a new lock initialiser.
     #[allow(clippy::new_ret_no_self)]
-    pub fn new(t: T, name: &'static CStr, key: LockClassKey) -> impl PinInit<Self> {
+    #[track_caller]
+    pub fn new(t: T) -> impl PinInit<Self> {
+        let (key, name) = caller_lock_class();
+        Self::new_with_key(t, name, key)
+    }
+
+    /// Constructs a new lock initialiser taking an initialiser.
+    #[track_caller]
+    pub fn pin_init<E>(t: impl PinInit<T, E>) -> impl PinInit<Self, E>
+    where
+        E: core::convert::From<core::convert::Infallible>,
+    {
+        let (key, name) = caller_lock_class();
+        Self::pin_init_with_key(t, name, key)
+    }
+
+    /// Constructs a new lock initialiser.
+    #[allow(clippy::new_ret_no_self)]
+    #[track_caller]
+    pub fn new_named(t: T, name: &'static CStr) -> impl PinInit<Self> {
+        let (key, _) = caller_lock_class();
+        Self::new_with_key(t, name, key)
+    }
+
+    /// Constructs a new lock initialiser taking an initialiser, with the given name.
+    #[track_caller]
+    pub fn pin_init_named<E>(t: impl PinInit<T, E>, name: &'static CStr) -> impl PinInit<Self, E>
+    where
+        E: core::convert::From<core::convert::Infallible>,
+    {
+        let (key, _) = caller_lock_class();
+        Self::pin_init_with_key(t, name, key)
+    }
+
+    /// Constructs a new lock initialiser given a particular name and lock class key.
+    pub fn new_with_key(t: T, name: &'static CStr, key: LockClassKey) -> impl PinInit<Self> {
         pin_init!(Self {
             data: UnsafeCell::new(t),
             _pin: PhantomPinned,
@@ -115,8 +148,9 @@ pub fn new(t: T, name: &'static CStr, key: LockClassKey) -> impl PinInit<Self> {
         })
     }
 
-    /// Constructs a new lock initialiser taking an initialiser.
-    pub fn pin_init<E>(
+    /// Constructs a new lock initialiser taking an initialiser given a particular
+    /// name and lock class key.
+    pub fn pin_init_with_key<E>(
         t: impl PinInit<T, E>,
         name: &'static CStr,
         key: LockClassKey,
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
index 06fe685501b4..15ea70fa3933 100644
--- a/rust/kernel/sync/lock/mutex.rs
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -13,7 +13,7 @@
 #[macro_export]
 macro_rules! new_mutex {
     ($inner:expr $(, $name:literal)? $(,)?) => {
-        $crate::sync::Mutex::new(
+        $crate::sync::Mutex::new_with_key(
             $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
     };
 }
@@ -26,7 +26,7 @@ macro_rules! new_mutex {
 #[macro_export]
 macro_rules! new_mutex_pinned {
     ($inner:expr $(, $name:literal)? $(,)?) => {
-        $crate::sync::Mutex::pin_init(
+        $crate::sync::Mutex::pin_init_with_key(
             $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
     };
 }
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index 979b56464a4e..9f6137f047ee 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -13,7 +13,7 @@
 #[macro_export]
 macro_rules! new_spinlock {
     ($inner:expr $(, $name:literal)? $(,)?) => {
-        $crate::sync::SpinLock::new(
+        $crate::sync::SpinLock::new_with_key(
             $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
     };
 }

-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RFC 09/11] rust: init: Update documentation for new mutex init style
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (7 preceding siblings ...)
  2023-07-14  9:14 ` [PATCH RFC 08/11] rust: sync: Classless Lock::new() and pin_init() Asahi Lina
@ 2023-07-14  9:14 ` Asahi Lina
  2023-07-14  9:14 ` [PATCH RFC 10/11] rust: sync: Add LockdepMap abstraction Asahi Lina
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:14 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Now that we have classless Mutex creation, update the docs to reflect
the new API.

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/kernel/init.rs | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
index b4332a4ec1f4..f190bbd0bab1 100644
--- a/rust/kernel/init.rs
+++ b/rust/kernel/init.rs
@@ -36,7 +36,7 @@
 //!
 //! ```rust
 //! # #![allow(clippy::disallowed_names, clippy::new_ret_no_self)]
-//! use kernel::{prelude::*, sync::Mutex, new_mutex};
+//! use kernel::{prelude::*, sync::Mutex};
 //! # use core::pin::Pin;
 //! #[pin_data]
 //! struct Foo {
@@ -46,7 +46,7 @@
 //! }
 //!
 //! let foo = pin_init!(Foo {
-//!     a <- new_mutex!(42, "Foo::a"),
+//!     a <- Mutex::new_named(42, "Foo::a"),
 //!     b: 24,
 //! });
 //! ```
@@ -56,7 +56,7 @@
 //!
 //! ```rust
 //! # #![allow(clippy::disallowed_names, clippy::new_ret_no_self)]
-//! # use kernel::{prelude::*, sync::Mutex, new_mutex};
+//! # use kernel::{prelude::*, sync::Mutex};
 //! # use core::pin::Pin;
 //! # #[pin_data]
 //! # struct Foo {
@@ -65,7 +65,7 @@
 //! #     b: u32,
 //! # }
 //! # let foo = pin_init!(Foo {
-//! #     a <- new_mutex!(42, "Foo::a"),
+//! #     a <- Mutex::new_named(42, "Foo::a"),
 //! #     b: 24,
 //! # });
 //! let foo: Result<Pin<Box<Foo>>> = Box::pin_init(foo);
@@ -98,7 +98,7 @@
 //! impl DriverData {
 //!     fn new() -> impl PinInit<Self, Error> {
 //!         try_pin_init!(Self {
-//!             status <- new_mutex!(0, "DriverData::status"),
+//!             status <- Mutex::new_named(0, "DriverData::status"),
 //!             buffer: Box::init(kernel::init::zeroed())?,
 //!         })
 //!     }
@@ -242,7 +242,7 @@
 /// }
 ///
 /// stack_pin_init!(let foo = pin_init!(Foo {
-///     a <- new_mutex!(42),
+///     a <- Mutex::new(42),
 ///     b: Bar {
 ///         x: 64,
 ///     },
@@ -294,7 +294,7 @@ macro_rules! stack_pin_init {
 /// }
 ///
 /// stack_try_pin_init!(let foo: Result<Pin<&mut Foo>, AllocError> = pin_init!(Foo {
-///     a <- new_mutex!(42),
+///     a <- Mutex::new(42),
 ///     b: Box::try_new(Bar {
 ///         x: 64,
 ///     })?,
@@ -320,7 +320,7 @@ macro_rules! stack_pin_init {
 /// }
 ///
 /// stack_try_pin_init!(let foo: Pin<&mut Foo> =? pin_init!(Foo {
-///     a <- new_mutex!(42),
+///     a <- Mutex::new(42),
 ///     b: Box::try_new(Bar {
 ///         x: 64,
 ///     })?,

-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RFC 10/11] rust: sync: Add LockdepMap abstraction
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (8 preceding siblings ...)
  2023-07-14  9:14 ` [PATCH RFC 09/11] rust: init: Update documentation for new mutex init style Asahi Lina
@ 2023-07-14  9:14 ` Asahi Lina
  2023-07-14  9:14 ` [PATCH RFC 11/11] rust: sync: arc: Add lockdep integration Asahi Lina
  2023-07-14 10:13 ` [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Alice Ryhl
  11 siblings, 0 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:14 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Add a simple abstraction for creating new lockdep maps. This allows
Rust code to explicitly integrate types with lockdep.

There's some voodoo compiler intrinsic magic to get the caller return
address on the C side. I have no idea how to plumb that through in Rust
if that's even possible, so let's just wrap it in a C helper for now.
That gives us the callsite from the Rust abstraction instead of its
user, but that's probably okay for now.
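The shape of the abstraction is an acquire-in-constructor / release-in-Drop
guard. A std-only illustration (the kernel's lock_acquire()/lock_release()
calls are replaced by prints; names mirror the patch but nothing here is
kernel code):

```rust
struct LockdepMap {
    name: &'static str,
}

struct LockdepGuard<'a>(&'a LockdepMap);

impl LockdepMap {
    fn new(name: &'static str) -> Self {
        LockdepMap { name }
    }

    // Mirrors LockdepMap::lock(): record that this call stack "holds"
    // the map, returning a guard whose Drop releases it.
    fn lock(&self) -> LockdepGuard<'_> {
        println!("lock_acquire({})", self.name);
        LockdepGuard(self)
    }
}

impl Drop for LockdepGuard<'_> {
    fn drop(&mut self) {
        println!("lock_release({})", self.0.name);
    }
}

fn main() {
    let map = LockdepMap::new("arc_drop");
    {
        let _guard = map.lock(); // dependency recorded for this scope
    } // guard dropped here -> release
}
```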

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 rust/bindings/bindings_helper.h |  1 +
 rust/helpers.c                  | 16 ++++++++++++++++
 rust/kernel/sync/lockdep.rs     | 40 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+)

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 52f32e423b04..5c28de44e528 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -8,6 +8,7 @@
 
 #include <linux/errname.h>
 #include <linux/slab.h>
+#include <linux/lockdep.h>
 #include <linux/refcount.h>
 #include <linux/wait.h>
 #include <linux/siphash.h>
diff --git a/rust/helpers.c b/rust/helpers.c
index 1ed71315d1eb..392f5359677a 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -22,6 +22,8 @@
 #include <linux/build_bug.h>
 #include <linux/err.h>
 #include <linux/errname.h>
+#include <linux/instruction_pointer.h>
+#include <linux/lockdep.h>
 #include <linux/refcount.h>
 #include <linux/mutex.h>
 #include <linux/siphash.h>
@@ -143,6 +145,20 @@ u64 rust_helper_siphash(const void *data, size_t len,
 }
 EXPORT_SYMBOL_GPL(rust_helper_siphash);
 
+void rust_helper_lock_acquire_ret(struct lockdep_map *lock, unsigned int subclass,
+				  int trylock, int read, int check,
+				  struct lockdep_map *nest_lock)
+{
+	lock_acquire(lock, subclass, trylock, read, check, nest_lock, _RET_IP_);
+}
+EXPORT_SYMBOL_GPL(rust_helper_lock_acquire_ret);
+
+void rust_helper_lock_release_ret(struct lockdep_map *lock)
+{
+	lock_release(lock, _RET_IP_);
+}
+EXPORT_SYMBOL_GPL(rust_helper_lock_release_ret);
+
 /*
  * We use `bindgen`'s `--size_t-is-usize` option to bind the C `size_t` type
  * as the Rust `usize` type, so we can use it in contexts where Rust
diff --git a/rust/kernel/sync/lockdep.rs b/rust/kernel/sync/lockdep.rs
index fbf9f6ed403d..ca32aa833e10 100644
--- a/rust/kernel/sync/lockdep.rs
+++ b/rust/kernel/sync/lockdep.rs
@@ -187,3 +187,43 @@ pub(crate) fn caller_lock_class() -> (LockClassKey, &'static CStr) {
         }
     }
 }
+
+pub(crate) struct LockdepMap(Opaque<bindings::lockdep_map>);
+pub(crate) struct LockdepGuard<'a>(&'a LockdepMap);
+
+#[allow(dead_code)]
+impl LockdepMap {
+    #[track_caller]
+    pub(crate) fn new() -> Self {
+        let map = Opaque::uninit();
+        let (key, name) = caller_lock_class();
+
+        unsafe {
+            bindings::lockdep_init_map_type(
+                map.get(),
+                name.as_char_ptr(),
+                key.as_ptr(),
+                0,
+                bindings::lockdep_wait_type_LD_WAIT_INV as _,
+                bindings::lockdep_wait_type_LD_WAIT_INV as _,
+                bindings::lockdep_lock_type_LD_LOCK_NORMAL as _,
+            )
+        };
+
+        LockdepMap(map)
+    }
+
+    #[inline(always)]
+    pub(crate) fn lock(&self) -> LockdepGuard<'_> {
+        unsafe { bindings::lock_acquire_ret(self.0.get(), 0, 0, 1, 1, core::ptr::null_mut()) };
+
+        LockdepGuard(self)
+    }
+}
+
+impl<'a> Drop for LockdepGuard<'a> {
+    #[inline(always)]
+    fn drop(&mut self) {
+        unsafe { bindings::lock_release_ret(self.0 .0.get()) };
+    }
+}

-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RFC 11/11] rust: sync: arc: Add lockdep integration
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (9 preceding siblings ...)
  2023-07-14  9:14 ` [PATCH RFC 10/11] rust: sync: Add LockdepMap abstraction Asahi Lina
@ 2023-07-14  9:14 ` Asahi Lina
  2023-07-15 16:00   ` Gary Guo
  2023-07-14 10:13 ` [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Alice Ryhl
  11 siblings, 1 reply; 31+ messages in thread
From: Asahi Lina @ 2023-07-14  9:14 UTC (permalink / raw)
  To: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm, Asahi Lina

Now that we have magic lock class support and a LockdepMap that can be
hooked up into arbitrary Rust types, we can integrate lockdep support
directly into the Rust Arc<T> type. This means we can catch potential
Drop codepaths that could result in a locking error, even if those
codepaths never actually execute due to the reference count being
nonzero at that point.
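The drop-path pattern this patch adds to Arc<T> can be sketched in userspace
(std-only, hypothetical names, with std::sync::Mutex standing in for the
lockdep map): take the virtual lock briefly on *every* drop so the dependency
is recorded even when the count stays nonzero, and hold it across destruction
only on the final drop.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct Shared {
    refcount: AtomicUsize,
    lockdep_map: Mutex<()>, // stands in for the kernel's lockdep_map
}

struct FakeArc(&'static Shared);

impl Drop for FakeArc {
    fn drop(&mut self) {
        // Phase 1: lock and immediately unlock. "Lockdep" now knows this
        // call stack can take the map, even if we are not the last holder.
        drop(self.0.lockdep_map.lock().unwrap());

        if self.0.refcount.fetch_sub(1, Ordering::Release) == 1 {
            // Phase 2: last reference; destruction of the payload happens
            // under the held map, recording the other direction.
            let _guard = self.0.lockdep_map.lock().unwrap();
            // drop_in_place(&mut inner.data) goes here in the real patch.
        }
    }
}

fn demo() -> usize {
    let shared: &'static Shared = Box::leak(Box::new(Shared {
        refcount: AtomicUsize::new(2),
        lockdep_map: Mutex::new(()),
    }));
    drop(FakeArc(shared));
    drop(FakeArc(shared)); // final drop
    shared.refcount.load(Ordering::Acquire)
}

fn main() {
    assert_eq!(demo(), 0);
}
```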

Signed-off-by: Asahi Lina <lina@asahilina.net>
---
 lib/Kconfig.debug       |  8 ++++++
 rust/kernel/init.rs     |  6 +++++
 rust/kernel/sync/arc.rs | 71 ++++++++++++++++++++++++++++++++++++++++++++++---
 3 files changed, 82 insertions(+), 3 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..ff4f06df88ee 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -3010,6 +3010,14 @@ config RUST_BUILD_ASSERT_ALLOW
 
 	  If unsure, say N.
 
+config RUST_EXTRA_LOCKDEP
+	bool "Extra lockdep checking"
+	depends on RUST && PROVE_LOCKING
+	help
	  Enable additional lockdep integration with certain Rust types.
+
+	  If unsure, say N.
+
 endmenu # "Rust"
 
 endmenu # Kernel hacking
diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
index f190bbd0bab1..b64a507f0a34 100644
--- a/rust/kernel/init.rs
+++ b/rust/kernel/init.rs
@@ -1208,6 +1208,7 @@ pub trait InPlaceInit<T>: Sized {
     /// type.
     ///
     /// If `T: !Unpin` it will not be able to move afterwards.
+    #[track_caller]
     fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
     where
         E: From<AllocError>;
@@ -1216,6 +1217,7 @@ fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
     /// type.
     ///
     /// If `T: !Unpin` it will not be able to move afterwards.
+    #[track_caller]
     fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Pin<Self>>
     where
         Error: From<E>,
@@ -1228,11 +1230,13 @@ fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Pin<Self>>
     }
 
     /// Use the given initializer to in-place initialize a `T`.
+    #[track_caller]
     fn try_init<E>(init: impl Init<T, E>) -> Result<Self, E>
     where
         E: From<AllocError>;
 
     /// Use the given initializer to in-place initialize a `T`.
+    #[track_caller]
     fn init<E>(init: impl Init<T, E>) -> error::Result<Self>
     where
         Error: From<E>,
@@ -1277,6 +1281,7 @@ fn try_init<E>(init: impl Init<T, E>) -> Result<Self, E>
 
 impl<T> InPlaceInit<T> for UniqueArc<T> {
     #[inline]
+    #[track_caller]
     fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
     where
         E: From<AllocError>,
@@ -1291,6 +1296,7 @@ fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
     }
 
     #[inline]
+    #[track_caller]
     fn try_init<E>(init: impl Init<T, E>) -> Result<Self, E>
     where
         E: From<AllocError>,
diff --git a/rust/kernel/sync/arc.rs b/rust/kernel/sync/arc.rs
index a89843cacaad..3bb73b614fd1 100644
--- a/rust/kernel/sync/arc.rs
+++ b/rust/kernel/sync/arc.rs
@@ -34,6 +34,9 @@
 };
 use macros::pin_data;
 
+#[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+use crate::sync::lockdep::LockdepMap;
+
 mod std_vendor;
 
 /// A reference-counted pointer to an instance of `T`.
@@ -127,6 +130,17 @@ pub struct Arc<T: ?Sized> {
     _p: PhantomData<ArcInner<T>>,
 }
 
+#[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+#[pin_data]
+#[repr(C)]
+struct ArcInner<T: ?Sized> {
+    refcount: Opaque<bindings::refcount_t>,
+    lockdep_map: LockdepMap,
+    data: T,
+}
+
+// FIXME: pin_data does not work well with cfg attributes within the struct definition.
+#[cfg(not(CONFIG_RUST_EXTRA_LOCKDEP))]
 #[pin_data]
 #[repr(C)]
 struct ArcInner<T: ?Sized> {
@@ -159,11 +173,14 @@ unsafe impl<T: ?Sized + Sync + Send> Sync for Arc<T> {}
 
 impl<T> Arc<T> {
     /// Constructs a new reference counted instance of `T`.
+    #[track_caller]
     pub fn try_new(contents: T) -> Result<Self, AllocError> {
         // INVARIANT: The refcount is initialised to a non-zero value.
         let value = ArcInner {
             // SAFETY: There are no safety requirements for this FFI call.
             refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
+            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+            lockdep_map: LockdepMap::new(),
             data: contents,
         };
 
@@ -178,6 +195,7 @@ pub fn try_new(contents: T) -> Result<Self, AllocError> {
     ///
     /// If `T: !Unpin` it will not be able to move afterwards.
     #[inline]
+    #[track_caller]
     pub fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Self>
     where
         Error: From<E>,
@@ -189,6 +207,7 @@ pub fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Self>
     ///
     /// This is equivalent to [`Arc<T>::pin_init`], since an [`Arc`] is always pinned.
     #[inline]
+    #[track_caller]
     pub fn init<E>(init: impl Init<T, E>) -> error::Result<Self>
     where
         Error: From<E>,
@@ -292,15 +311,46 @@ fn drop(&mut self) {
         // freed/invalid memory as long as it is never dereferenced.
         let refcount = unsafe { self.ptr.as_ref() }.refcount.get();
 
+        // SAFETY: By the type invariant, there is necessarily a reference to the object.
+        // We cannot hold the map lock across the reference decrement, as we might race
+        // another thread. Therefore, we lock and immediately drop the guard here. This
+        // only serves to inform lockdep of the dependency up the call stack.
+        #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+        unsafe { self.ptr.as_ref() }.lockdep_map.lock();
+
         // INVARIANT: If the refcount reaches zero, there are no other instances of `Arc`, and
         // this instance is being dropped, so the broken invariant is not observable.
         // SAFETY: Also by the type invariant, we are allowed to decrement the refcount.
         let is_zero = unsafe { bindings::refcount_dec_and_test(refcount) };
+
         if is_zero {
             // The count reached zero, we must free the memory.
-            //
-            // SAFETY: The pointer was initialised from the result of `Box::leak`.
-            unsafe { Box::from_raw(self.ptr.as_ptr()) };
+
+            // SAFETY: If we get this far, we had the last reference to the object.
+            // That means we are responsible for freeing it, so we can safely lock
+            // the fake lock again. This wraps dropping the inner object, which
+            // informs lockdep of the dependencies down the call stack.
+            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+            let guard = unsafe { self.ptr.as_ref() }.lockdep_map.lock();
+
+            // SAFETY: The pointer was initialised from the result of `Box::leak`,
+            // and the value is valid.
+            unsafe { core::ptr::drop_in_place(&mut self.ptr.as_mut().data) };
+
+            // We need to drop the lock guard before freeing the LockdepMap itself
+            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+            core::mem::drop(guard);
+
+            // SAFETY: The pointer was initialised from the result of `Box::leak`,
+            // and the lockdep map is valid.
+            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+            unsafe {
+                core::ptr::drop_in_place(&mut self.ptr.as_mut().lockdep_map)
+            };
+
+            // SAFETY: The pointer was initialised from the result of `Box::leak`, and
+            // a ManuallyDrop<T> is compatible. We already dropped the contents above.
+            unsafe { Box::from_raw(self.ptr.as_ptr() as *mut ManuallyDrop<ArcInner<T>>) };
         }
     }
 }
@@ -512,6 +562,7 @@ pub struct UniqueArc<T: ?Sized> {
 
 impl<T> UniqueArc<T> {
     /// Tries to allocate a new [`UniqueArc`] instance.
+    #[track_caller]
     pub fn try_new(value: T) -> Result<Self, AllocError> {
         Ok(Self {
             // INVARIANT: The newly-created object has a ref-count of 1.
@@ -520,13 +571,27 @@ pub fn try_new(value: T) -> Result<Self, AllocError> {
     }
 
     /// Tries to allocate a new [`UniqueArc`] instance whose contents are not initialised yet.
+    #[track_caller]
     pub fn try_new_uninit() -> Result<UniqueArc<MaybeUninit<T>>, AllocError> {
         // INVARIANT: The refcount is initialised to a non-zero value.
+        #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
+        let inner = {
+            let map = LockdepMap::new();
+            Box::try_init::<AllocError>(try_init!(ArcInner {
+                // SAFETY: There are no safety requirements for this FFI call.
+                refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
+                lockdep_map: map,
+                data <- init::uninit::<T, AllocError>(),
+            }? AllocError))?
+        };
+        // FIXME: try_init!() does not work with cfg attributes.
+        #[cfg(not(CONFIG_RUST_EXTRA_LOCKDEP))]
         let inner = Box::try_init::<AllocError>(try_init!(ArcInner {
             // SAFETY: There are no safety requirements for this FFI call.
             refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
             data <- init::uninit::<T, AllocError>(),
         }? AllocError))?;
+
         Ok(UniqueArc {
             // INVARIANT: The newly-created object has a ref-count of 1.
             // SAFETY: The pointer from the `Box` is valid.

-- 
2.40.1


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
                   ` (10 preceding siblings ...)
  2023-07-14  9:14 ` [PATCH RFC 11/11] rust: sync: arc: Add lockdep integration Asahi Lina
@ 2023-07-14 10:13 ` Alice Ryhl
  2023-07-14 12:20   ` Asahi Lina
  11 siblings, 1 reply; 31+ messages in thread
From: Alice Ryhl @ 2023-07-14 10:13 UTC (permalink / raw)
  To: lina
  Cc: alex.gaynor, alyssa, asahi, benno.lossin, bjorn3_gh, boqun.feng,
	daniel, gary, linux-kbuild, linux-kernel, llvm, marcan,
	masahiroy, nathan, ndesaulniers, nicolas, ojeda, rust-for-linux,
	sven, trix, wedsonaf

Asahi Lina <lina@asahilina.net> writes:
> Begone, lock classes!
> 
> As discussed in meetings/etc, we would really like to support implicit
> lock class creation for Rust code. Right now, lock classes are created
> using macros and passed around (similar to C). Unfortunately, Rust
> macros don't look like Rust functions, which means adding lockdep to a
> type is a breaking API change. This makes Rust mutex creation rather
> ugly, with the new_mutex!() macro and friends.
> 
> Implicit lock classes have to be unique per instantiation code site.
> Notably, with Rust generics and monomorphization, this is not the same
> as unique per generated code instance. If this weren't the case, we
> could use inline functions and asm!() magic to try to create lock
> classes that have the right uniqueness semantics. But that doesn't work,
> since it would create too many lock classes for the same actual lock
> creation in the source code.
> 
> But Rust does have one trick we can use: it can track the caller
> location (as file:line:column), across multiple functions. This works
> using an implicit argument that gets passed around, which is exactly the
> thing we do for lock classes. The tricky bit is that, while the value of
> these Location objects has the semantics we want (unique value per
> source code location), there is no guarantee that they are deduplicated
> in memory.
> 
> So we use a hash table, and map Location values to lock classes. Et
> voila, implicit lock class support!
>
> This lets us clean up the Mutex & co APIs and make them look a lot more
> Rust-like, but it also means we can now throw Lockdep into more APIs
> without breaking the API. And so we can pull a neat trick: adding
> Lockdep support into Arc<T>. This catches cases where the Arc Drop
> implementation could create a locking correctness violation only when
> the reference count drops to 0 at that particular drop site, which is
> otherwise not detectable unless that condition actually happens at
> runtime. Since Drop is "magic" in Rust and Drop codepaths very difficult
> to audit, this helps a lot.
> 
> For the initial RFC, this implements the new API only for Mutex. If this
> looks good, I can extend it to CondVar & friends in the next version.
> This series also folds in a few related minor dependencies / changes
> (like the pin_init mutex stuff).

I'm not convinced that this is the right compromise. Moving lockdep
class creation to runtime sounds unfortunate, especially since this
makes them fallible due to memory allocations (I think?).

I would be inclined to keep using macros for this.

Alice



* Re: [PATCH RFC 01/11] rust: types: Add Opaque::zeroed()
  2023-07-14  9:13 ` [PATCH RFC 01/11] rust: types: Add Opaque::zeroed() Asahi Lina
@ 2023-07-14 10:15   ` Alice Ryhl
  2023-07-15 14:27   ` Gary Guo
  1 sibling, 0 replies; 31+ messages in thread
From: Alice Ryhl @ 2023-07-14 10:15 UTC (permalink / raw)
  To: lina
  Cc: alex.gaynor, alyssa, asahi, benno.lossin, bjorn3_gh, boqun.feng,
	daniel, gary, linux-kbuild, linux-kernel, llvm, marcan,
	masahiroy, nathan, ndesaulniers, nicolas, ojeda, rust-for-linux,
	sven, trix, wedsonaf, Alice Ryhl

Asahi Lina <lina@asahilina.net> writes:
> Opaque types are internally MaybeUninit, so it's safe to actually
> zero-initialize them as long as we don't claim they are initialized.
> This is useful for many FFI types that are expected to be zero-inited by
> the user.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>

LGTM. This is useful on its own.

Reviewed-by: Alice Ryhl <aliceryhl@google.com>



* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-14 10:13 ` [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Alice Ryhl
@ 2023-07-14 12:20   ` Asahi Lina
  2023-07-14 13:59     ` Alice Ryhl
  0 siblings, 1 reply; 31+ messages in thread
From: Asahi Lina @ 2023-07-14 12:20 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: alex.gaynor, alyssa, asahi, benno.lossin, bjorn3_gh, boqun.feng,
	daniel, gary, linux-kbuild, linux-kernel, llvm, marcan,
	masahiroy, nathan, ndesaulniers, nicolas, ojeda, rust-for-linux,
	sven, trix, wedsonaf

On 14/07/2023 19.13, Alice Ryhl wrote:
> Asahi Lina <lina@asahilina.net> writes:
>> Begone, lock classes!
>>
>> As discussed in meetings/etc, we would really like to support implicit
>> lock class creation for Rust code. Right now, lock classes are created
>> using macros and passed around (similar to C). Unfortunately, Rust
>> macros don't look like Rust functions, which means adding lockdep to a
>> type is a breaking API change. This makes Rust mutex creation rather
>> ugly, with the new_mutex!() macro and friends.
>>
>> Implicit lock classes have to be unique per instantiation code site.
>> Notably, with Rust generics and monomorphization, this is not the same
>> as unique per generated code instance. If this weren't the case, we
>> could use inline functions and asm!() magic to try to create lock
>> classes that have the right uniqueness semantics. But that doesn't work,
>> since it would create too many lock classes for the same actual lock
>> creation in the source code.
>>
>> But Rust does have one trick we can use: it can track the caller
>> location (as file:line:column), across multiple functions. This works
>> using an implicit argument that gets passed around, which is exactly the
>> thing we do for lock classes. The tricky bit is that, while the value of
>> these Location objects has the semantics we want (unique value per
>> source code location), there is no guarantee that they are deduplicated
>> in memory.
>>
>> So we use a hash table, and map Location values to lock classes. Et
>> voila, implicit lock class support!
>>
>> This lets us clean up the Mutex & co APIs and make them look a lot more
>> Rust-like, but it also means we can now throw Lockdep into more APIs
>> without breaking the API. And so we can pull a neat trick: adding
>> Lockdep support into Arc<T>. This catches cases where the Arc Drop
>> implementation could create a locking correctness violation only when
>> the reference count drops to 0 at that particular drop site, which is
>> otherwise not detectable unless that condition actually happens at
>> runtime. Since Drop is "magic" in Rust and Drop codepaths very difficult
>> to audit, this helps a lot.
>>
>> For the initial RFC, this implements the new API only for Mutex. If this
>> looks good, I can extend it to CondVar & friends in the next version.
>> This series also folds in a few related minor dependencies / changes
>> (like the pin_init mutex stuff).
> 
> I'm not convinced that this is the right compromise. Moving lockdep
> class creation to runtime sounds unfortunate, especially since this
> makes them fallible due to memory allocations (I think?).
> 
> I would be inclined to keep using macros for this.

Most people were very enthusiastic about this change in the meetings... 
it wasn't even my own idea ^^

I don't think the fallibility is an issue. Lockdep is a debugging tool, 
and it doesn't have to handle all possible circumstances perfectly. If 
you are debugging normal lock issues you probably shouldn't be running 
out of RAM, and if you are debugging OOM situations the lock keys would 
normally have been created long before you reach an OOM situation, since 
they would be created the first time a relevant lock class is used. More 
objects of the same class don't cause any more allocations. And the code 
has a fallback for the OOM case, where it just uses the Location object 
as a static lock class. That's not ideal and degrades the quality of the 
lockdep results, but it shouldn't completely break anything.
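In userspace terms, the scheme is roughly the sketch below, with a std
`HashMap`/`Mutex` standing in for the kernel hash table and a plain counter
standing in for `lockdep_register_key()` (the names here are illustrative,
not the actual kernel API):

```rust
use core::panic::Location;
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// Caller location -> lock class id. In the kernel this maps to a dynamically
// registered `lock_class_key`; here a plain counter stands in for the key.
static CLASSES: OnceLock<Mutex<HashMap<(&'static str, u32, u32), u64>>> = OnceLock::new();

// `#[track_caller]` makes `Location::caller()` report the *caller's* source
// position, propagated as an implicit argument through every
// `#[track_caller]` frame above this one.
#[track_caller]
pub fn caller_lock_class() -> u64 {
    let loc = Location::caller();
    let key = (loc.file(), loc.line(), loc.column());
    let mut map = CLASSES
        .get_or_init(|| Mutex::new(HashMap::new()))
        .lock()
        .unwrap();
    let next = map.len() as u64;
    // Same source location -> same class, even across monomorphized copies.
    *map.entry(key).or_insert(next)
}
```

Each distinct call site of `caller_lock_class()` gets its own class, while
repeated executions of the same call site share one, which is exactly the
uniqueness semantics lockdep wants.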

The advantages of being able to throw lockdep checking into arbitrary 
types, like the Arc<T> thing, are pretty significant. It closes a major 
correctness checking issue we have with Rust and its automagic Drop 
implementations that are almost impossible to properly audit for 
potential locking issues. I think that alone makes this worth it, even 
if you don't use it for normal mutex creation...

~~ Lina



* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-14 12:20   ` Asahi Lina
@ 2023-07-14 13:59     ` Alice Ryhl
  2023-07-14 15:21       ` Boqun Feng
  2023-07-15 14:25       ` Gary Guo
  0 siblings, 2 replies; 31+ messages in thread
From: Alice Ryhl @ 2023-07-14 13:59 UTC (permalink / raw)
  To: lina
  Cc: alex.gaynor, aliceryhl, alyssa, asahi, benno.lossin, bjorn3_gh,
	boqun.feng, daniel, gary, linux-kbuild, linux-kernel, llvm,
	marcan, masahiroy, nathan, ndesaulniers, nicolas, ojeda,
	rust-for-linux, sven, trix, wedsonaf

Asahi Lina <lina@asahilina.net> writes:
> On 14/07/2023 19.13, Alice Ryhl wrote:
> > Asahi Lina <lina@asahilina.net> writes:
> >> Begone, lock classes!
> >>
> >> As discussed in meetings/etc, we would really like to support implicit
> >> lock class creation for Rust code. Right now, lock classes are created
> >> using macros and passed around (similar to C). Unfortunately, Rust
> >> macros don't look like Rust functions, which means adding lockdep to a
> >> type is a breaking API change. This makes Rust mutex creation rather
> >> ugly, with the new_mutex!() macro and friends.
> >>
> >> Implicit lock classes have to be unique per instantiation code site.
> >> Notably, with Rust generics and monomorphization, this is not the same
> >> as unique per generated code instance. If this weren't the case, we
> >> could use inline functions and asm!() magic to try to create lock
> >> classes that have the right uniqueness semantics. But that doesn't work,
> >> since it would create too many lock classes for the same actual lock
> >> creation in the source code.
> >>
> >> But Rust does have one trick we can use: it can track the caller
> >> location (as file:line:column), across multiple functions. This works
> >> using an implicit argument that gets passed around, which is exactly the
> >> thing we do for lock classes. The tricky bit is that, while the value of
> >> these Location objects has the semantics we want (unique value per
> >> source code location), there is no guarantee that they are deduplicated
> >> in memory.
> >>
> >> So we use a hash table, and map Location values to lock classes. Et
> >> voila, implicit lock class support!
> >>
> >> This lets us clean up the Mutex & co APIs and make them look a lot more
> >> Rust-like, but it also means we can now throw Lockdep into more APIs
> >> without breaking the API. And so we can pull a neat trick: adding
> >> Lockdep support into Arc<T>. This catches cases where the Arc Drop
> >> implementation could create a locking correctness violation only when
> >> the reference count drops to 0 at that particular drop site, which is
> >> otherwise not detectable unless that condition actually happens at
> >> runtime. Since Drop is "magic" in Rust and Drop codepaths very difficult
> >> to audit, this helps a lot.
> >>
> >> For the initial RFC, this implements the new API only for Mutex. If this
> >> looks good, I can extend it to CondVar & friends in the next version.
> >> This series also folds in a few related minor dependencies / changes
> >> (like the pin_init mutex stuff).
> > 
> > I'm not convinced that this is the right compromise. Moving lockdep
> > class creation to runtime sounds unfortunate, especially since this
> > makes them fallible due to memory allocations (I think?).
> > 
> > I would be inclined to keep using macros for this.
> 
> Most people were very enthusiastic about this change in the meetings... 
> it wasn't even my own idea ^^

I don't think I was in that meeting. Anyway,
 
> I don't think the fallibility is an issue. Lockdep is a debugging tool, 
> and it doesn't have to handle all possible circumstances perfectly. If 
> you are debugging normal lock issues you probably shouldn't be running 
> out of RAM, and if you are debugging OOM situations the lock keys would 
> normally have been created long before you reach an OOM situation, since 
> they would be created the first time a relevant lock class is used. More 
> objects of the same class don't cause any more allocations. And the code 
> has a fallback for the OOM case, where it just uses the Location object 
> as a static lock class. That's not ideal and degrades the quality of the 
> lockdep results, but it shouldn't completely break anything.

If you have a fallback when the allocation fails, that helps ...

You say that Location objects are not necessarily unique per file
location. In practice, how often are they not unique? Always just using
the Location object as a static lock class seems like it would
significantly simplify this proposal.
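For what it's worth, the guarantee is about the *value* of a `Location`, not
its address: distinct call sites always yield distinct values, and one call
site always yields the same value, regardless of whether the compiler
deduplicates the objects in memory. A minimal check, with a hypothetical
`here()` helper:

```rust
use core::panic::Location;

// Returns the caller's source location; `#[track_caller]` propagates it as
// an implicit argument, much like lock class keys are threaded through in C.
#[track_caller]
fn here() -> &'static Location<'static> {
    Location::caller()
}
```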

> The advantages of being able to throw lockdep checking into arbitrary 
> types, like the Arc<T> thing, are pretty significant. It closes a major 
> correctness checking issue we have with Rust and its automagic Drop 
> implementations that are almost impossible to properly audit for 
> potential locking issues. I think that alone makes this worth it, even 
> if you don't use it for normal mutex creation...

I do agree that there is value in being able to more easily detect
potential deadlocks involving destructors of ref-counted values. I once
had a case of that myself, though lockdep was able to catch it without
this change because it saw the refcount hit zero in the right place.

Alice



* Re: [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction
  2023-07-14  9:13 ` [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction Asahi Lina
@ 2023-07-14 14:28   ` Martin Rodriguez Reboredo
  2023-07-15 14:52   ` Gary Guo
  1 sibling, 0 replies; 31+ messages in thread
From: Martin Rodriguez Reboredo @ 2023-07-14 14:28 UTC (permalink / raw)
  To: Asahi Lina, Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Masahiro Yamada, Nathan Chancellor, Nick Desaulniers,
	Nicolas Schier, Tom Rix, Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm

On 7/14/23 06:13, Asahi Lina wrote:
> This simple wrapper allows Rust code to use the Hasher interface with
> the kernel siphash implementation. No fancy features supported for now,
> just basic bag-of-bytes hashing. No guarantee that hash outputs will
> remain stable in the future either.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
> [...]
> --- /dev/null
> +++ b/rust/kernel/siphash.rs
> @@ -0,0 +1,39 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! A core::hash::Hasher wrapper for the kernel siphash implementation.
> +//!
> +//! This module allows Rust code to use the kernel's siphash implementation
> +//! to hash Rust objects.
> +
> +use core::hash::Hasher;
> +
> +/// A Hasher implementation that uses the kernel siphash implementation.
> +#[derive(Default)]
> +pub struct SipHasher {
> +    // SipHash state is 4xu64, but the Linux implementation
> +    // doesn't expose incremental hashing so let's just chain
> +    // individual SipHash calls for now, which return a u64
> +    // hash.

Isn't this detail worth mentioning in the doc comment? At least to
explain the difference between the two approaches.

> +    state: u64,
> +}
> [...]


* Re: [PATCH RFC 05/11] rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP
  2023-07-14  9:13 ` [PATCH RFC 05/11] rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP Asahi Lina
@ 2023-07-14 14:57   ` Martin Rodriguez Reboredo
  0 siblings, 0 replies; 31+ messages in thread
From: Martin Rodriguez Reboredo @ 2023-07-14 14:57 UTC (permalink / raw)
  To: Asahi Lina, Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Masahiro Yamada, Nathan Chancellor, Nick Desaulniers,
	Nicolas Schier, Tom Rix, Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm

On 7/14/23 06:13, Asahi Lina wrote:
> Lock classes aren't used without lockdep. The C side declares the key
> as an empty struct in that case, but let's make it an explicit ZST in
> Rust, implemented in a separate module. This allows us to more easily
> guarantee that the lockdep code will be trivially optimized out without
> CONFIG_LOCKDEP, including LockClassKey arguments that are passed around.
> 
> Depending on whether CONFIG_LOCKDEP is enabled or not, we then import
> the real lockdep implementation or the dummy one.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
>   rust/kernel/sync.rs            | 29 ++++++++---------------------
>   rust/kernel/sync/lockdep.rs    | 27 +++++++++++++++++++++++++++
>   rust/kernel/sync/no_lockdep.rs | 19 +++++++++++++++++++
>   3 files changed, 54 insertions(+), 21 deletions(-)
> 
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index d219ee518eff..352472c6b77a 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -5,37 +5,24 @@
>   //! This module contains the kernel APIs related to synchronisation that have been ported or
>   //! wrapped for usage by Rust code in the kernel.
>   
> -use crate::types::Opaque;
> -
>   mod arc;
>   mod condvar;
>   pub mod lock;
>   mod locked_by;
>   
> +#[cfg(CONFIG_LOCKDEP)]
> +mod lockdep;
> +#[cfg(not(CONFIG_LOCKDEP))]
> +mod no_lockdep;

Some care must be taken with regard to how this will appear in the
rendered documentation; I think it can lead to missing details.
`#[cfg(doc)]` should be considered.

> +#[cfg(not(CONFIG_LOCKDEP))]
> +use no_lockdep as lockdep;
> [...]


* Re: [PATCH RFC 06/11] rust: sync: Replace static LockClassKey refs with a pointer wrapper
  2023-07-14  9:13 ` [PATCH RFC 06/11] rust: sync: Replace static LockClassKey refs with a pointer wrapper Asahi Lina
@ 2023-07-14 15:10   ` Martin Rodriguez Reboredo
  0 siblings, 0 replies; 31+ messages in thread
From: Martin Rodriguez Reboredo @ 2023-07-14 15:10 UTC (permalink / raw)
  To: Asahi Lina, Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Masahiro Yamada, Nathan Chancellor, Nick Desaulniers,
	Nicolas Schier, Tom Rix, Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm

On 7/14/23 06:13, Asahi Lina wrote:
> We want to be able to handle dynamic lock class creation and using
> pointers to things that aren't a real lock_class_key as lock classes.
> Doing this by casting around Rust references is difficult without
> accidentally invoking UB.
> 
> Instead, switch LockClassKey to being a raw pointer wrapper around a
> lock_class_key, which means there is no UB possible on the Rust side
> just by creating and consuming these objects. The C code also should
> never actually dereference lock classes, only use their address
> (possibly with an offset).
> 
> We still provide a dummy ZST version of this wrapper, to be used when
> lockdep is disabled.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
> [...]
> diff --git a/rust/kernel/sync/lockdep.rs b/rust/kernel/sync/lockdep.rs
> index cb68b18dc0ad..d8328f4275fb 100644
> --- a/rust/kernel/sync/lockdep.rs
> +++ b/rust/kernel/sync/lockdep.rs
> @@ -9,19 +9,36 @@
>   
>   /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
>   #[repr(transparent)]
> -pub struct LockClassKey(Opaque<bindings::lock_class_key>);
> +pub struct StaticLockClassKey(Opaque<bindings::lock_class_key>);
>   
> -impl LockClassKey {
> +impl StaticLockClassKey {
>       /// Creates a new lock class key.
>       pub const fn new() -> Self {
>           Self(Opaque::uninit())
>       }
>   
> +    /// Returns the lock class key reference for this static lock class.
> +    pub const fn key(&self) -> LockClassKey {
> +        LockClassKey(self.0.get())

`Opaque::get` is not a `const fn` so this will not compile.

> +    }
> +}
> +
> +// SAFETY: `bindings::lock_class_key` just represents an opaque memory location, and is never
> +// actually dereferenced.
> +unsafe impl Sync for StaticLockClassKey {}
> +
> +/// A reference to a lock class key. This is a raw pointer to a lock_class_key,
> +/// which is required to have a static lifetime.
> +#[derive(Copy, Clone)]
> +pub struct LockClassKey(*mut bindings::lock_class_key);
> +
> +impl LockClassKey {
>       pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {

I think this can be made into a `const fn`. What do you all think?

> -        self.0.get()
> +        self.0
>       }
>   }
>   
> [...]
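As a self-contained illustration of why `as_ptr()` can be `const` (the dummy
key type and `from_raw()` constructor below are stand-ins for the kernel
bindings, not the actual API):

```rust
// Dummy stand-in for the opaque C `struct lock_class_key`; the kernel side
// only ever uses its address, never its contents.
#[repr(C)]
pub struct RawLockClassKey {
    _storage: [u8; 16],
}

/// Raw-pointer wrapper, as in the patch: no references are created, so there
/// is no UB risk from casting, and the accessor only copies a pointer.
#[derive(Copy, Clone)]
pub struct LockClassKey(*mut RawLockClassKey);

impl LockClassKey {
    pub const fn from_raw(ptr: *mut RawLockClassKey) -> Self {
        Self(ptr)
    }

    // Only copies the stored pointer out, so `const fn` works here, unlike
    // going through `Opaque::get`, which is not `const`.
    pub const fn as_ptr(self) -> *mut RawLockClassKey {
        self.0
    }
}
```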


* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-14 13:59     ` Alice Ryhl
@ 2023-07-14 15:21       ` Boqun Feng
  2023-07-16  6:56         ` Asahi Lina
  2023-07-15 14:25       ` Gary Guo
  1 sibling, 1 reply; 31+ messages in thread
From: Boqun Feng @ 2023-07-14 15:21 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: lina, alex.gaynor, alyssa, asahi, benno.lossin, bjorn3_gh,
	daniel, gary, linux-kbuild, linux-kernel, llvm, marcan,
	masahiroy, nathan, ndesaulniers, nicolas, ojeda, rust-for-linux,
	sven, trix, wedsonaf

On Fri, Jul 14, 2023 at 01:59:26PM +0000, Alice Ryhl wrote:
> Asahi Lina <lina@asahilina.net> writes:
> > On 14/07/2023 19.13, Alice Ryhl wrote:
> > > Asahi Lina <lina@asahilina.net> writes:
> > >> Begone, lock classes!
> > >>
> > >> As discussed in meetings/etc, we would really like to support implicit
> > >> lock class creation for Rust code. Right now, lock classes are created

Thanks for looking into this! Could you also copy locking maintainers in
the next version?

> > >> using macros and passed around (similar to C). Unfortunately, Rust
> > >> macros don't look like Rust functions, which means adding lockdep to a
> > >> type is a breaking API change. This makes Rust mutex creation rather
> > >> ugly, with the new_mutex!() macro and friends.
> > >>
> > >> Implicit lock classes have to be unique per instantiation code site.
> > >> Notably, with Rust generics and monomorphization, this is not the same
> > >> as unique per generated code instance. If this weren't the case, we
> > >> could use inline functions and asm!() magic to try to create lock
> > >> classes that have the right uniqueness semantics. But that doesn't work,
> > >> since it would create too many lock classes for the same actual lock
> > >> creation in the source code.
> > >>
> > >> But Rust does have one trick we can use: it can track the caller
> > >> location (as file:line:column), across multiple functions. This works
> > >> using an implicit argument that gets passed around, which is exactly the
> > >> thing we do for lock classes. The tricky bit is that, while the value of
> > >> these Location objects has the semantics we want (unique value per
> > >> source code location), there is no guarantee that they are deduplicated
> > >> in memory.
> > >>
> > >> So we use a hash table, and map Location values to lock classes. Et
> > >> voila, implicit lock class support!
> > >>
> > >> This lets us clean up the Mutex & co APIs and make them look a lot more
> > >> Rust-like, but it also means we can now throw Lockdep into more APIs
> > >> without breaking the API. And so we can pull a neat trick: adding
> > >> Lockdep support into Arc<T>. This catches cases where the Arc Drop
> > >> implementation could create a locking correctness violation only when
> > >> the reference count drops to 0 at that particular drop site, which is
> > >> otherwise not detectable unless that condition actually happens at
> > >> runtime. Since Drop is "magic" in Rust and Drop codepaths very difficult
> > >> to audit, this helps a lot.
> > >>
> > >> For the initial RFC, this implements the new API only for Mutex. If this
> > >> looks good, I can extend it to CondVar & friends in the next version.
> > >> This series also folds in a few related minor dependencies / changes
> > >> (like the pin_init mutex stuff).
> > > 
> > > I'm not convinced that this is the right compromise. Moving lockdep
> > > class creation to runtime sounds unfortunate, especially since this
> > > makes them fallible due to memory allocations (I think?).
> > > 
> > > I would be inclined to keep using macros for this.
> > 
> > Most people were very enthusiastic about this change in the meetings... 
> > it wasn't even my own idea ^^
> 
> I don't think I was in that meeting. Anyway,
>  
> > I don't think the fallibility is an issue. Lockdep is a debugging tool, 
> > and it doesn't have to handle all possible circumstances perfectly. If 
> > you are debugging normal lock issues you probably shouldn't be running 
> > out of RAM, and if you are debugging OOM situations the lock keys would 
> > normally have been created long before you reach an OOM situation, since 
> > they would be created the first time a relevant lock class is used. More 
> > objects of the same class don't cause any more allocations. And the code 
> > has a fallback for the OOM case, where it just uses the Location object 
> > as a static lock class. That's not ideal and degrades the quality of the 
> > lockdep results, but it shouldn't completely break anything.
> 
> If you have a fallback when the allocation fails, that helps ...
> 
> You say that Location objects are not necessarily unique per file
> location. In practice, how often are they not unique? Always just using
> the Location object as a static lock class seems like it would
> significantly simplify this proposal.
> 

Agreed. For example, `caller_lock_class_inner` has a Mutex critical
section in it (for the hash table synchronization), which makes it
impossible to call in preemption-disabled contexts and limits where it
can be used.

Regards,
Boqun

> > The advantages of being able to throw lockdep checking into arbitrary 
> > types, like the Arc<T> thing, are pretty significant. It closes a major 
> > correctness checking issue we have with Rust and its automagic Drop 
> > implementations that are almost impossible to properly audit for 
> > potential locking issues. I think that alone makes this worth it, even 
> > if you don't use it for normal mutex creation...
> 
> I do agree that there is value in being able to more easily detect
> potential deadlocks involving destructors of ref-counted values. I once
> had a case of that myself, though lockdep was able to catch it without
> this change because it saw the refcount hit zero in the right place.
> 
> Alice
> 


* Re: [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation
  2023-07-14  9:13 ` [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation Asahi Lina
@ 2023-07-14 19:56   ` Martin Rodriguez Reboredo
  2023-07-15 15:47   ` Gary Guo
  1 sibling, 0 replies; 31+ messages in thread
From: Martin Rodriguez Reboredo @ 2023-07-14 19:56 UTC (permalink / raw)
  To: Asahi Lina, Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Masahiro Yamada, Nathan Chancellor, Nick Desaulniers,
	Nicolas Schier, Tom Rix, Daniel Vetter
  Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, asahi,
	rust-for-linux, linux-kernel, linux-kbuild, llvm

On 7/14/23 06:13, Asahi Lina wrote:
> Using macros to create lock classes all over the place is unergonomic,
> and makes it impossible to add new features that require lock classes to
> code such as Arc<> without changing all callers.
> 
> Rust has the ability to track the caller's identity by file/line/column
> number, and we can use that to dynamically generate lock classes
> instead.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
> [...]
> +
> +const LOCK_CLASS_BUCKETS: usize = 1024;
> +
> +#[track_caller]
> +fn caller_lock_class_inner() -> Result<&'static DynLockClassKey> {
> +    // This is just a hack to make the below static array initialization work.
> +    #[allow(clippy::declare_interior_mutable_const)]
> +    const ATOMIC_PTR: AtomicPtr<Mutex<Vec<&'static DynLockClassKey>>> =
> +        AtomicPtr::new(core::ptr::null_mut());
> +
> +    #[allow(clippy::complexity)]
> +    static LOCK_CLASSES: [AtomicPtr<Mutex<Vec<&'static DynLockClassKey>>>; LOCK_CLASS_BUCKETS] =
> +        [ATOMIC_PTR; LOCK_CLASS_BUCKETS];
> +
> +    let loc = core::panic::Location::caller();
> +    let loc_key = LocationKey::new(loc);
> +
> +    let index = (loc_key.hash % (LOCK_CLASS_BUCKETS as u64)) as usize;
> +    let slot = &LOCK_CLASSES[index];
> +
> +    let mut ptr = slot.load(Ordering::Relaxed);
> +    if ptr.is_null() {
> +        let new_element = Box::pin_init(new_mutex!(Vec::new()))?;
> +
> +        if let Err(e) = slot.compare_exchange(
> +            core::ptr::null_mut(),
> +            // SAFETY: We never move out of this Box
> +            Box::into_raw(unsafe { Pin::into_inner_unchecked(new_element) }),
> +            Ordering::Relaxed,
> +            Ordering::Relaxed,
> +        ) {
> +            // SAFETY: We just got this pointer from `into_raw()`
> +            unsafe { Box::from_raw(e) };
> +        }
> +
> +        ptr = slot.load(Ordering::Relaxed);
> +        assert!(!ptr.is_null());
> +    }
> +
> +    // SAFETY: This mutex was either just created above or previously allocated,
> +    // and we never free these objects so the pointer is guaranteed to be valid.
> +    let mut guard = unsafe { (*ptr).lock() };
> +
> +    for i in guard.iter() {
> +        if i.loc == loc_key {
> +            return Ok(i);
> +        }
> +    }
> +
> +    // We immediately leak the class, so it becomes 'static
> +    let new_class = Box::leak(Box::try_new(DynLockClassKey {
> +        key: Opaque::zeroed(),
> +        loc: loc_key,
> +        name: CString::try_from_fmt(fmt!("{}:{}:{}", loc.file(), loc.line(), loc.column()))?,
> +    })?);
> +
> +    // SAFETY: This is safe to call with a pointer to a dynamically allocated lockdep key,
> +    // and we never free the objects so it is safe to never unregister the key.
> +    unsafe { bindings::lockdep_register_key(new_class.key.get()) };
> +
> +    guard.try_push(new_class)?;
> +
> +    Ok(new_class)
> +}
> +
> [...]

Is there any problem if many `DynLockClassKey`s end up leaked, or is that fine?


* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-14 13:59     ` Alice Ryhl
  2023-07-14 15:21       ` Boqun Feng
@ 2023-07-15 14:25       ` Gary Guo
  2023-07-18 16:48         ` Boqun Feng
  1 sibling, 1 reply; 31+ messages in thread
From: Gary Guo @ 2023-07-15 14:25 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: lina, alex.gaynor, alyssa, asahi, benno.lossin, bjorn3_gh,
	boqun.feng, daniel, linux-kbuild, linux-kernel, llvm, marcan,
	masahiroy, nathan, ndesaulniers, nicolas, ojeda, rust-for-linux,
	sven, trix, wedsonaf

On Fri, 14 Jul 2023 13:59:26 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> Asahi Lina <lina@asahilina.net> writes:
> > On 14/07/2023 19.13, Alice Ryhl wrote:  
> > > Asahi Lina <lina@asahilina.net> writes:  
> > >> Begone, lock classes!
> > >>
> > >> As discussed in meetings/etc, we would really like to support implicit
> > >> lock class creation for Rust code. Right now, lock classes are created
> > >> using macros and passed around (similar to C). Unfortunately, Rust
> > >> macros don't look like Rust functions, which means adding lockdep to a
> > >> type is a breaking API change. This makes Rust mutex creation rather
> > >> ugly, with the new_mutex!() macro and friends.
> > >>
> > >> Implicit lock classes have to be unique per instantiation code site.
> > >> Notably, with Rust generics and monomorphization, this is not the same
> > >> as unique per generated code instance. If this weren't the case, we
> > >> could use inline functions and asm!() magic to try to create lock
> > >> classes that have the right uniqueness semantics. But that doesn't work,
> > >> since it would create too many lock classes for the same actual lock
> > >> creation in the source code.
> > >>
> > >> But Rust does have one trick we can use: it can track the caller
> > >> location (as file:line:column), across multiple functions. This works
> > >> using an implicit argument that gets passed around, which is exactly the
> > >> thing we do for lock classes. The tricky bit is that, while the value of
> > >> these Location objects has the semantics we want (unique value per
> > >> source code location), there is no guarantee that they are deduplicated
> > >> in memory.
> > >>
> > >> So we use a hash table, and map Location values to lock classes. Et
> > >> voila, implicit lock class support!
> > >>
> > >> This lets us clean up the Mutex & co APIs and make them look a lot more
> > >> Rust-like, but it also means we can now throw Lockdep into more APIs
> > >> without breaking the API. And so we can pull a neat trick: adding
> > >> Lockdep support into Arc<T>. This catches cases where the Arc Drop
> > >> implementation could create a locking correctness violation only when
> > >> the reference count drops to 0 at that particular drop site, which is
> > >> otherwise not detectable unless that condition actually happens at
> > >> runtime. Since Drop is "magic" in Rust and Drop codepaths very difficult
> > >> to audit, this helps a lot.
> > >>
> > >> For the initial RFC, this implements the new API only for Mutex. If this
> > >> looks good, I can extend it to CondVar & friends in the next version.
> > >> This series also folds in a few related minor dependencies / changes
> > >> (like the pin_init mutex stuff).  
> > > 
> > > I'm not convinced that this is the right compromise. Moving lockdep
> > > class creation to runtime sounds unfortunate, especially since this
> > > makes them fallible due to memory allocations (I think?).
> > > 
> > > I would be inclined to keep using macros for this.  
> > 
> > Most people were very enthusiastic about this change in the meetings... 
> > it wasn't even my own idea ^^  
> 
> I don't think I was in that meeting. Anyway,

Just for some context.

This idea has been discussed multiple times. The earliest discussion
that I can recall is a tea-break discussion at Kangrejos 2022.

It was brought up again recently in a discussion related to DRM,
and the consensus was that it's definitely an idea worth exploring.

>  
> > I don't think the fallibility is an issue. Lockdep is a debugging tool, 
> > and it doesn't have to handle all possible circumstances perfectly. If 
> > you are debugging normal lock issues you probably shouldn't be running 
> > out of RAM, and if you are debugging OOM situations the lock keys would 
> > normally have been created long before you reach an OOM situation, since 
> > they would be created the first time a relevant lock class is used. More 
> > objects of the same class don't cause any more allocations. And the code 
> > has a fallback for the OOM case, where it just uses the Location object 
> > as a static lock class. That's not ideal and degrades the quality of the 
> > lockdep results, but it shouldn't completely break anything.  
> 
> If you have a fallback when the allocation fails, that helps ...

I am pretty sure lockdep needs to do some internal allocation anyway,
because only the address matters for lock class keys. So some extra
allocation is probably fine...

> 
> You say that Location objects are not necessarily unique per file
> location. In practice, how often are they not unique? Always just using
> the Location object as a static lock class seems like it would
> significantly simplify this proposal.

Location objects are constants, so they are not guaranteed to have a
fixed address. With inlining and generics you can very easily get
multiple instances of them. That said, their address is also not
significant, so LLVM is pretty good at merging them back into a single
address, **if everything is linked statically**.

The merging is an optimisation and is far from guaranteed. With kernel
modules, which are effectively dynamically linked, the address of
`Location` *will* be duplicated if the function invoking a
`#[track_caller]` function is inlined.

An idea was floated when I discussed this with Josh Triplett at the
last Kangrejos: it might be possible to make the `Location` objects
generated by the compiler be `static` items rather than just normal
constants, and then we could ensure that the address is unique. I tried
to prototype this idea, but it didn't work very well: currently you can
use `#[track_caller]` in a const fn but can't refer to statics in a
const fn, so it's a bit hard to desugar. I am pretty sure there are
ways around it, but someone would need to implement it :)

So TL;DR: while in many cases the address is unique, it's far from a
guarantee. It might be possible to guarantee uniqueness, but that would
require compiler changes.
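A minimal userspace sketch of the property under discussion (this is
ordinary Rust, not the kernel crate): `#[track_caller]` propagates the
caller's file:line:column as an implicit argument, so the Location
*values* are unique per call site, even though nothing guarantees that
duplicated monomorphizations would share one address.

```rust
// Userspace sketch, assuming nothing beyond core: a `#[track_caller]`
// function observes its caller's source location.
#[track_caller]
fn whose_line() -> &'static core::panic::Location<'static> {
    core::panic::Location::caller()
}

fn main() {
    let a = whose_line();
    let b = whose_line(); // a different call site, one line below

    // The *values* are unique per source location...
    assert_eq!(a.file(), b.file());
    assert_ne!(a.line(), b.line());

    // ...but address identity of equal values is not guaranteed across
    // inlined/monomorphized copies, hence the hash table in this series.
    println!("{}:{} vs {}:{}", a.file(), a.line(), b.file(), b.line());
}
```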

> 
> > The advantages of being able to throw lockdep checking into arbitrary 
> > types, like the Arc<T> thing, are pretty significant. It closes a major 
> > correctness checking issue we have with Rust and its automagic Drop 
> > implementations that are almost impossible to properly audit for 
> > potential locking issues. I think that alone makes this worth it, even 
> > if you don't use it for normal mutex creation...  
> 
> I do agree that there is value in being able to more easily detect
> potential deadlocks involving destructors of ref-counted values. I once
> had a case of that myself, though lockdep was able to catch it without
> this change because it saw the refcount hit zero in the right place.
> 
> Alice
> 



* Re: [PATCH RFC 01/11] rust: types: Add Opaque::zeroed()
  2023-07-14  9:13 ` [PATCH RFC 01/11] rust: types: Add Opaque::zeroed() Asahi Lina
  2023-07-14 10:15   ` Alice Ryhl
@ 2023-07-15 14:27   ` Gary Guo
  1 sibling, 0 replies; 31+ messages in thread
From: Gary Guo @ 2023-07-15 14:27 UTC (permalink / raw)
  To: Asahi Lina
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On Fri, 14 Jul 2023 18:13:53 +0900
Asahi Lina <lina@asahilina.net> wrote:

> Opaque types are internally MaybeUninit, so it's safe to actually
> zero-initialize them as long as we don't claim they are initialized.
> This is useful for many FFI types that are expected to be zero-inited by
> the user.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>

Reviewed-by: Gary Guo <gary@garyguo.net>

I agree with Alice that this patch can be accepted on its own.

> ---
>  rust/kernel/types.rs | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
> index 1e5380b16ed5..185d3493857e 100644
> --- a/rust/kernel/types.rs
> +++ b/rust/kernel/types.rs
> @@ -237,6 +237,11 @@ pub const fn uninit() -> Self {
>          Self(MaybeUninit::uninit())
>      }
>  
> +    /// Creates a zeroed value.
> +    pub fn zeroed() -> Self {
> +        Self(MaybeUninit::zeroed())
> +    }
> +
>      /// Creates a pin-initializer from the given initializer closure.
>      ///
>      /// The returned initializer calls the given closure with the pointer to the inner `T` of this
> 



* Re: [PATCH RFC 02/11] rust: lock: Add Lock::pin_init()
  2023-07-14  9:13 ` [PATCH RFC 02/11] rust: lock: Add Lock::pin_init() Asahi Lina
@ 2023-07-15 14:29   ` Gary Guo
  0 siblings, 0 replies; 31+ messages in thread
From: Gary Guo @ 2023-07-15 14:29 UTC (permalink / raw)
  To: Asahi Lina
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On Fri, 14 Jul 2023 18:13:54 +0900
Asahi Lina <lina@asahilina.net> wrote:

> Allow initializing a lock using pin_init!(), instead of requiring
> the inner data to be passed through the stack.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
>  rust/kernel/sync/lock.rs       | 30 +++++++++++++++++++++++++++++-
>  rust/kernel/sync/lock/mutex.rs | 13 +++++++++++++
>  2 files changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
> index a2216325632d..d493c5d19104 100644
> --- a/rust/kernel/sync/lock.rs
> +++ b/rust/kernel/sync/lock.rs
> @@ -6,7 +6,9 @@
>  //! spinlocks, raw spinlocks) to be provided with minimal effort.
>  
>  use super::LockClassKey;
> -use crate::{bindings, init::PinInit, pin_init, str::CStr, types::Opaque, types::ScopeGuard};
> +use crate::{
> +    bindings, init::PinInit, pin_init, str::CStr, try_pin_init, types::Opaque, types::ScopeGuard,
> +};
>  use core::{cell::UnsafeCell, marker::PhantomData, marker::PhantomPinned};
>  use macros::pin_data;
>  
> @@ -87,6 +89,7 @@ pub struct Lock<T: ?Sized, B: Backend> {
>      _pin: PhantomPinned,
>  
>      /// The data protected by the lock.
> +    #[pin]
>      pub(crate) data: UnsafeCell<T>,
>  }
>  
> @@ -111,6 +114,31 @@ pub fn new(t: T, name: &'static CStr, key: &'static LockClassKey) -> impl PinIni
>              }),
>          })
>      }
> +
> +    /// Constructs a new lock initialiser taking an initialiser.
> +    pub fn pin_init<E>(
> +        t: impl PinInit<T, E>,
> +        name: &'static CStr,
> +        key: &'static LockClassKey,
> +    ) -> impl PinInit<Self, E>
> +    where
> +        E: core::convert::From<core::convert::Infallible>,
> +    {
> +        try_pin_init!(Self {
> +            // SAFETY: We are just forwarding the initialization across a
> +            // cast away from UnsafeCell, so the pin_init_from_closure and
> +            // __pinned_init() requirements are in sync.
> +            data <- unsafe { crate::init::pin_init_from_closure(move |slot: *mut UnsafeCell<T>| {
> +                t.__pinned_init(slot as *mut T)
> +            })},
> +            _pin: PhantomPinned,
> +            // SAFETY: `slot` is valid while the closure is called and both `name` and `key` have
> +            // static lifetimes so they live indefinitely.
> +            state <- Opaque::ffi_init(|slot| unsafe {
> +                B::init(slot, name.as_char_ptr(), key.as_ptr())
> +            }),
> +        }? E)
> +    }

I think you might be able to just modify the `new` function? We have a
blanket implementation

	impl<T> Init<T, Infallible> for T

which makes any `T` also implement `PinInit`.
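The point above can be sketched standalone. This is a simplified
analogue, not the real traits from `rust/kernel/init.rs`: a blanket
impl lets one constructor accept either a plain value or an
initializer, so a separate `pin_init` entry point may be unnecessary.

```rust
use core::convert::Infallible;

// Simplified stand-in for the kernel's `PinInit` trait (assumption:
// just enough surface to show the blanket-impl point).
trait PinInit<T, E> {
    fn init_value(self) -> Result<T, E>;
}

// Blanket implementation mirroring `impl<T> Init<T, Infallible> for T`:
// any plain value is itself an infallible initializer.
impl<T> PinInit<T, Infallible> for T {
    fn init_value(self) -> Result<T, Infallible> {
        Ok(self)
    }
}

// A single constructor now accepts both plain values and initializers.
fn new_lock<T, E>(init: impl PinInit<T, E>) -> Result<T, E> {
    init.init_value()
}

fn main() {
    // A plain `u32` works directly thanks to the blanket impl.
    let v = new_lock(42u32).unwrap();
    assert_eq!(v, 42);
    println!("{}", v);
}
```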

>  }
>  
>  impl<T: ?Sized, B: Backend> Lock<T, B> {
> diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
> index 923472f04af4..06fe685501b4 100644
> --- a/rust/kernel/sync/lock/mutex.rs
> +++ b/rust/kernel/sync/lock/mutex.rs
> @@ -18,6 +18,19 @@ macro_rules! new_mutex {
>      };
>  }
>  
> +/// Creates a [`Mutex`] initialiser with the given name and a newly-created lock class,
> +/// given an initialiser for the inner type.
> +///
> +/// It uses the name if one is given, otherwise it generates one based on the file name and line
> +/// number.
> +#[macro_export]
> +macro_rules! new_mutex_pinned {
> +    ($inner:expr $(, $name:literal)? $(,)?) => {
> +        $crate::sync::Mutex::pin_init(
> +            $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
> +    };
> +}
> +
>  /// A mutual exclusion primitive.
>  ///
>  /// Exposes the kernel's [`struct mutex`]. When multiple threads attempt to lock the same mutex,
> 



* Re: [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects
  2023-07-14  9:13 ` [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects Asahi Lina
@ 2023-07-15 14:35   ` Gary Guo
  2023-07-16  7:53     ` Asahi Lina
  0 siblings, 1 reply; 31+ messages in thread
From: Gary Guo @ 2023-07-15 14:35 UTC (permalink / raw)
  To: Asahi Lina
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On Fri, 14 Jul 2023 18:13:55 +0900
Asahi Lina <lina@asahilina.net> wrote:

> We want to use caller_location to uniquely identify callsites, to
> automatically create lockdep classes without macros. The location
> filename in local code uses the relative path passed to the compiler,
> but if that code is generic and instantiated from another crate, the
> path becomes absolute.
> 
> To make this work and keep the paths consistent, always pass an absolute
> path to the compiler. Then the Location path is always identical
> regardless of how the code is being compiled.

I wonder if this has reproducible-build implications. We probably also
need to use `--remap-path-prefix`?

> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
>  rust/Makefile          | 2 +-
>  scripts/Makefile.build | 8 ++++----
>  2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/rust/Makefile b/rust/Makefile
> index 7c9d9f11aec5..552f023099c8 100644
> --- a/rust/Makefile
> +++ b/rust/Makefile
> @@ -369,7 +369,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
>  		--emit=dep-info=$(depfile) --emit=obj=$@ \
>  		--emit=metadata=$(dir $@)$(patsubst %.o,lib%.rmeta,$(notdir $@)) \
>  		--crate-type rlib -L$(objtree)/$(obj) \
> -		--crate-name $(patsubst %.o,%,$(notdir $@)) $< \
> +		--crate-name $(patsubst %.o,%,$(notdir $@)) $(abspath $<) \
>  	$(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@)
>  
>  rust-analyzer:
> diff --git a/scripts/Makefile.build b/scripts/Makefile.build
> index 6413342a03f4..c925b90ebd80 100644
> --- a/scripts/Makefile.build
> +++ b/scripts/Makefile.build
> @@ -283,27 +283,27 @@ rust_common_cmd = \
>  # would not match each other.
>  
>  quiet_cmd_rustc_o_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
> -      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $<
> +      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $(abspath $<)
>  
>  $(obj)/%.o: $(src)/%.rs FORCE
>  	$(call if_changed_dep,rustc_o_rs)
>  
>  quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
>        cmd_rustc_rsi_rs = \
> -	$(rust_common_cmd) -Zunpretty=expanded $< >$@; \
> +	$(rust_common_cmd) -Zunpretty=expanded $(abspath $<) >$@; \
>  	command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) $@
>  
>  $(obj)/%.rsi: $(src)/%.rs FORCE
>  	$(call if_changed_dep,rustc_rsi_rs)
>  
>  quiet_cmd_rustc_s_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
> -      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $<
> +      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $(abspath $<)
>  
>  $(obj)/%.s: $(src)/%.rs FORCE
>  	$(call if_changed_dep,rustc_s_rs)
>  
>  quiet_cmd_rustc_ll_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
> -      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $<
> +      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $(abspath $<)
>  
>  $(obj)/%.ll: $(src)/%.rs FORCE
>  	$(call if_changed_dep,rustc_ll_rs)
> 



* Re: [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction
  2023-07-14  9:13 ` [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction Asahi Lina
  2023-07-14 14:28   ` Martin Rodriguez Reboredo
@ 2023-07-15 14:52   ` Gary Guo
  1 sibling, 0 replies; 31+ messages in thread
From: Gary Guo @ 2023-07-15 14:52 UTC (permalink / raw)
  To: Asahi Lina
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On Fri, 14 Jul 2023 18:13:56 +0900
Asahi Lina <lina@asahilina.net> wrote:

> This simple wrapper allows Rust code to use the Hasher interface with
> the kernel siphash implementation. No fancy features supported for now,
> just basic bag-of-bytes hashing. No guarantee that hash outputs will
> remain stable in the future either.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
>  rust/bindings/bindings_helper.h |  1 +
>  rust/helpers.c                  |  8 ++++++++
>  rust/kernel/lib.rs              |  1 +
>  rust/kernel/siphash.rs          | 39 +++++++++++++++++++++++++++++++++++++++
>  4 files changed, 49 insertions(+)
> 
> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
> index 3e601ce2548d..52f32e423b04 100644
> --- a/rust/bindings/bindings_helper.h
> +++ b/rust/bindings/bindings_helper.h
> @@ -10,6 +10,7 @@
>  #include <linux/slab.h>
>  #include <linux/refcount.h>
>  #include <linux/wait.h>
> +#include <linux/siphash.h>
>  #include <linux/sched.h>
>  
>  /* `bindgen` gets confused at certain things. */
> diff --git a/rust/helpers.c b/rust/helpers.c
> index bb594da56137..1ed71315d1eb 100644
> --- a/rust/helpers.c
> +++ b/rust/helpers.c
> @@ -24,6 +24,7 @@
>  #include <linux/errname.h>
>  #include <linux/refcount.h>
>  #include <linux/mutex.h>
> +#include <linux/siphash.h>
>  #include <linux/spinlock.h>
>  #include <linux/sched/signal.h>
>  #include <linux/wait.h>
> @@ -135,6 +136,13 @@ void rust_helper_put_task_struct(struct task_struct *t)
>  }
>  EXPORT_SYMBOL_GPL(rust_helper_put_task_struct);
>  
> +u64 rust_helper_siphash(const void *data, size_t len,
> +			const siphash_key_t *key)
> +{
> +	return siphash(data, len, key);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_siphash);
> +
>  /*
>   * We use `bindgen`'s `--size_t-is-usize` option to bind the C `size_t` type
>   * as the Rust `usize` type, so we can use it in contexts where Rust
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index 85b261209977..8fb39078b85c 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -36,6 +36,7 @@
>  pub mod ioctl;
>  pub mod prelude;
>  pub mod print;
> +pub mod siphash;
>  mod static_assert;
>  #[doc(hidden)]
>  pub mod std_vendor;
> diff --git a/rust/kernel/siphash.rs b/rust/kernel/siphash.rs
> new file mode 100644
> index 000000000000..e13a17cd5a93
> --- /dev/null
> +++ b/rust/kernel/siphash.rs
> @@ -0,0 +1,39 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! A core::hash::Hasher wrapper for the kernel siphash implementation.
> +//!
> +//! This module allows Rust code to use the kernel's siphash implementation
> +//! to hash Rust objects.
> +
> +use core::hash::Hasher;
> +
> +/// A Hasher implementation that uses the kernel siphash implementation.
> +#[derive(Default)]
> +pub struct SipHasher {
> +    // SipHash state is 4xu64, but the Linux implementation
> +    // doesn't expose incremental hashing so let's just chain
> +    // individual SipHash calls for now, which return a u64
> +    // hash.

This is actually quite a big difference, which makes me think that this
hasher probably shouldn't be called `SipHasher`.

Actually, do we need a strong hash? Given that lockdep is only for
debugging purposes, I think we could use FNV, or even just FxHash?
They're all simple enough to be implemented in a couple of lines of
Rust and wouldn't need to call into FFI.
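For concreteness, a minimal FNV-1a sketch of the "couple of lines, no
FFI" alternative suggested above (the constants are the standard 64-bit
FNV offset basis and prime; this is not proposed patch code):

```rust
use core::hash::Hasher;

/// Minimal 64-bit FNV-1a implementation of `core::hash::Hasher`.
struct FnvHasher(u64);

impl FnvHasher {
    fn new() -> Self {
        // 64-bit FNV offset basis.
        FnvHasher(0xcbf2_9ce4_8422_2325)
    }
}

impl Hasher for FnvHasher {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            // FNV-1a: xor the byte, then multiply by the 64-bit FNV prime.
            self.0 ^= b as u64;
            self.0 = self.0.wrapping_mul(0x100_0000_01b3);
        }
    }
}

fn main() {
    // Hypothetical location string, as the lock class map would hash.
    let mut h = FnvHasher::new();
    h.write(b"rust/kernel/sync/lock.rs:114:9");
    let first = h.finish();

    let mut h2 = FnvHasher::new();
    h2.write(b"rust/kernel/sync/lock.rs:114:9");
    // Deterministic: the same location always maps to the same key.
    assert_eq!(first, h2.finish());
    println!("{:#x}", first);
}
```

Since collisions here only degrade lockdep's debugging output rather
than correctness, a non-cryptographic hash like this seems sufficient.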

> +    state: u64,
> +}
> +
> +impl SipHasher {
> +    /// Create a new SipHasher with zeroed state.
> +    pub fn new() -> Self {
> +        SipHasher { state: 0 }
> +    }
> +}
> +
> +impl Hasher for SipHasher {
> +    fn finish(&self) -> u64 {
> +        self.state
> +    }
> +
> +    fn write(&mut self, bytes: &[u8]) {
> +        let key = bindings::siphash_key_t {
> +            key: [self.state, 0],
> +        };
> +
> +        self.state = unsafe { bindings::siphash(bytes.as_ptr() as *const _, bytes.len(), &key) };
> +    }
> +}
> 



* Re: [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation
  2023-07-14  9:13 ` [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation Asahi Lina
  2023-07-14 19:56   ` Martin Rodriguez Reboredo
@ 2023-07-15 15:47   ` Gary Guo
  1 sibling, 0 replies; 31+ messages in thread
From: Gary Guo @ 2023-07-15 15:47 UTC (permalink / raw)
  To: Asahi Lina
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On Fri, 14 Jul 2023 18:13:59 +0900
Asahi Lina <lina@asahilina.net> wrote:

> Using macros to create lock classes all over the place is unergonomic,
> and makes it impossible to add new features that require lock classes to
> code such as Arc<> without changing all callers.
> 
> Rust has the ability to track the caller's identity by file/line/column
> number, and we can use that to dynamically generate lock classes
> instead.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
>  rust/kernel/sync/lockdep.rs    | 147 ++++++++++++++++++++++++++++++++++++++++-
>  rust/kernel/sync/no_lockdep.rs |   8 +++
>  2 files changed, 154 insertions(+), 1 deletion(-)
> 
> diff --git a/rust/kernel/sync/lockdep.rs b/rust/kernel/sync/lockdep.rs
> index d8328f4275fb..fbf9f6ed403d 100644
> --- a/rust/kernel/sync/lockdep.rs
> +++ b/rust/kernel/sync/lockdep.rs
> @@ -5,7 +5,19 @@
>  //! This module abstracts the parts of the kernel lockdep API relevant to Rust
>  //! modules, including lock classes.
>  
> -use crate::types::Opaque;
> +use crate::{
> +    c_str, fmt,
> +    init::InPlaceInit,
> +    new_mutex,
> +    prelude::{Box, Result, Vec},
> +    str::{CStr, CString},
> +    sync::Mutex,
> +    types::Opaque,
> +};
> +
> +use core::hash::{Hash, Hasher};
> +use core::pin::Pin;
> +use core::sync::atomic::{AtomicPtr, Ordering};
>  
>  /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
>  #[repr(transparent)]
> @@ -42,3 +54,136 @@ pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
>  // actually dereferenced.
>  unsafe impl Send for LockClassKey {}
>  unsafe impl Sync for LockClassKey {}
> +
> +// Location is 'static but not really, since module unloads will
> +// invalidate existing static Locations within that module.
> +// To avoid breakage, we maintain our own location struct which is
> +// dynamically allocated on first reference. We store a hash of the
> +// whole location (including the filename string), as well as the
> +// line and column separately. The assumption is that this whole
> +// struct is highly unlikely to ever collide with a reasonable
> +// hash (this saves us from having to check the filename string
> +// itself).
> +#[derive(PartialEq, Debug)]
> +struct LocationKey {
> +    hash: u64,
> +    line: u32,
> +    column: u32,
> +}
> +
> +struct DynLockClassKey {
> +    key: Opaque<bindings::lock_class_key>,
> +    loc: LocationKey,
> +    name: CString,
> +}
> +
> +impl LocationKey {
> +    fn new(loc: &'static core::panic::Location<'static>) -> Self {
> +        let mut hasher = crate::siphash::SipHasher::new();
> +        loc.hash(&mut hasher);
> +
> +        LocationKey {
> +            hash: hasher.finish(),
> +            line: loc.line(),
> +            column: loc.column(),
> +        }
> +    }
> +}
> +
> +impl DynLockClassKey {
> +    fn key(&'static self) -> LockClassKey {
> +        LockClassKey(self.key.get())
> +    }

I don't understand why PATCH 06 is needed. If we keep the current
`LockClassKey` definition, this could just return `&'static
LockClassKey`, which is a simple `&self.key`.

> +
> +    fn name(&'static self) -> &CStr {
> +        &self.name
> +    }
> +}
> +
> +const LOCK_CLASS_BUCKETS: usize = 1024;
> +
> +#[track_caller]
> +fn caller_lock_class_inner() -> Result<&'static DynLockClassKey> {
> +    // This is just a hack to make the below static array initialization work.
> +    #[allow(clippy::declare_interior_mutable_const)]
> +    const ATOMIC_PTR: AtomicPtr<Mutex<Vec<&'static DynLockClassKey>>> =
> +        AtomicPtr::new(core::ptr::null_mut());
> +
> +    #[allow(clippy::complexity)]
> +    static LOCK_CLASSES: [AtomicPtr<Mutex<Vec<&'static DynLockClassKey>>>; LOCK_CLASS_BUCKETS] =
> +        [ATOMIC_PTR; LOCK_CLASS_BUCKETS];
> +
> +    let loc = core::panic::Location::caller();
> +    let loc_key = LocationKey::new(loc);
> +
> +    let index = (loc_key.hash % (LOCK_CLASS_BUCKETS as u64)) as usize;
> +    let slot = &LOCK_CLASSES[index];
> +
> +    let mut ptr = slot.load(Ordering::Relaxed);
> +    if ptr.is_null() {
> +        let new_element = Box::pin_init(new_mutex!(Vec::new()))?;
> +
> +        if let Err(e) = slot.compare_exchange(
> +            core::ptr::null_mut(),
> +            // SAFETY: We never move out of this Box
> +            Box::into_raw(unsafe { Pin::into_inner_unchecked(new_element) }),
> +            Ordering::Relaxed,
> +            Ordering::Relaxed,
> +        ) {
> +            // SAFETY: We just got this pointer from `into_raw()`
> +            unsafe { Box::from_raw(e) };
> +        }
> +
> +        ptr = slot.load(Ordering::Relaxed);
> +        assert!(!ptr.is_null());
> +    }
> +
> +    // SAFETY: This mutex was either just created above or previously allocated,
> +    // and we never free these objects so the pointer is guaranteed to be valid.
> +    let mut guard = unsafe { (*ptr).lock() };
> +
> +    for i in guard.iter() {
> +        if i.loc == loc_key {
> +            return Ok(i);
> +        }
> +    }
> +
> +    // We immediately leak the class, so it becomes 'static
> +    let new_class = Box::leak(Box::try_new(DynLockClassKey {
> +        key: Opaque::zeroed(),
> +        loc: loc_key,
> +        name: CString::try_from_fmt(fmt!("{}:{}:{}", loc.file(), loc.line(), loc.column()))?,
> +    })?);
> +
> +    // SAFETY: This is safe to call with a pointer to a dynamically allocated lockdep key,
> +    // and we never free the objects so it is safe to never unregister the key.
> +    unsafe { bindings::lockdep_register_key(new_class.key.get()) };
> +
> +    guard.try_push(new_class)?;
> +
> +    Ok(new_class)
> +}
> +
> +#[track_caller]
> +pub(crate) fn caller_lock_class() -> (LockClassKey, &'static CStr) {
> +    match caller_lock_class_inner() {
> +        Ok(a) => (a.key(), a.name()),
> +        Err(_) => {
> +            crate::pr_err!(
> +                "Failed to dynamically allocate lock class, lockdep may be unreliable.\n"
> +            );
> +
> +            let loc = core::panic::Location::caller();
> +            // SAFETY: LockClassKey is opaque and the lockdep implementation only needs
> +            // unique addresses for statically allocated keys, so it is safe to just cast
> +            // the Location reference directly into a LockClassKey. However, this will
> +            // result in multiple keys for the same callsite due to monomorphization,
> +            // as well as spuriously destroyed keys when the static key is allocated in
> +            // the wrong module, which is what makes this unreliable.

If the only purpose of introducing `StaticLockClassKey` and changing
`LockClassKey` is to make this fallback path work, then I don't think
that change is worth it. I don't really see an issue with forging a
`&'static LockClassKey` from a `&'static Location`, especially since
you can't really do any memory access with `LockClassKey`.
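The forging in question can be sketched in userspace (this is an
analogue, not the kernel types): the fallback only uses the address of
a `&'static Location` as an opaque key and never dereferences it.

```rust
// Userspace analogue: treat the address of the caller's `Location`
// constant as an opaque key value, as the fallback path does.
#[track_caller]
fn fallback_key() -> *const () {
    let loc: &'static core::panic::Location<'static> =
        core::panic::Location::caller();
    loc as *const _ as *const ()
}

fn main() {
    let a = fallback_key();
    let b = fallback_key(); // distinct call site, distinct Location value

    // Distinct live constants cannot alias, so distinct call sites give
    // distinct keys here; the unreliability is the converse case, where
    // one call site yields *multiple* addresses across inlined copies.
    assert_ne!(a, b);
    println!("{:p} {:p}", a, b);
}
```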

> +            (
> +                LockClassKey(loc as *const _ as *mut _),
> +                c_str!("fallback_lock_class"),
> +            )
> +        }
> +    }
> +}
> diff --git a/rust/kernel/sync/no_lockdep.rs b/rust/kernel/sync/no_lockdep.rs
> index 518ec0bf9a7d..de53c4de7fbe 100644
> --- a/rust/kernel/sync/no_lockdep.rs
> +++ b/rust/kernel/sync/no_lockdep.rs
> @@ -4,6 +4,8 @@
>  //!
>  //! Takes the place of the `lockdep` module when lockdep is disabled.
>  
> +use crate::{c_str, str::CStr};
> +
>  /// A dummy, zero-sized lock class.
>  pub struct StaticLockClassKey();
>  
> @@ -28,3 +30,9 @@ pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
>          core::ptr::null_mut()
>      }
>  }
> +
> +pub(crate) fn caller_lock_class() -> (LockClassKey, &'static CStr) {
> +    static DUMMY_LOCK_CLASS: StaticLockClassKey = StaticLockClassKey::new();
> +
> +    (DUMMY_LOCK_CLASS.key(), c_str!("dummy"))
> +}
> 



* Re: [PATCH RFC 11/11] rust: sync: arc: Add lockdep integration
  2023-07-14  9:14 ` [PATCH RFC 11/11] rust: sync: arc: Add lockdep integration Asahi Lina
@ 2023-07-15 16:00   ` Gary Guo
  0 siblings, 0 replies; 31+ messages in thread
From: Gary Guo @ 2023-07-15 16:00 UTC (permalink / raw)
  To: Asahi Lina
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On Fri, 14 Jul 2023 18:14:03 +0900
Asahi Lina <lina@asahilina.net> wrote:

> Now that we have magic lock class support and a LockdepMap that can be
> hooked up into arbitrary Rust types, we can integrate lockdep support
> directly into the Rust Arc<T> type. This means we can catch potential
> Drop codepaths that could result in a locking error, even if those
> codepaths never actually execute due to the reference count being
> nonzero at that point.
> 
> Signed-off-by: Asahi Lina <lina@asahilina.net>
> ---
>  lib/Kconfig.debug       |  8 ++++++
>  rust/kernel/init.rs     |  6 +++++
>  rust/kernel/sync/arc.rs | 71 ++++++++++++++++++++++++++++++++++++++++++++++---
>  3 files changed, 82 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index fbc89baf7de6..ff4f06df88ee 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -3010,6 +3010,14 @@ config RUST_BUILD_ASSERT_ALLOW
>  
>  	  If unsure, say N.
>  
> +config RUST_EXTRA_LOCKDEP
> +	bool "Extra lockdep checking"
> +	depends on RUST && PROVE_LOCKING
> +	help
> +	  Enabled additional lockdep integration with certain Rust types.
> +
> +	  If unsure, say N.
> +
>  endmenu # "Rust"
>  
>  endmenu # Kernel hacking
> diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
> index f190bbd0bab1..b64a507f0a34 100644
> --- a/rust/kernel/init.rs
> +++ b/rust/kernel/init.rs
> @@ -1208,6 +1208,7 @@ pub trait InPlaceInit<T>: Sized {
>      /// type.
>      ///
>      /// If `T: !Unpin` it will not be able to move afterwards.
> +    #[track_caller]
>      fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
>      where
>          E: From<AllocError>;
> @@ -1216,6 +1217,7 @@ fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
>      /// type.
>      ///
>      /// If `T: !Unpin` it will not be able to move afterwards.
> +    #[track_caller]
>      fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Pin<Self>>
>      where
>          Error: From<E>,
> @@ -1228,11 +1230,13 @@ fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Pin<Self>>
>      }
>  
>      /// Use the given initializer to in-place initialize a `T`.
> +    #[track_caller]
>      fn try_init<E>(init: impl Init<T, E>) -> Result<Self, E>
>      where
>          E: From<AllocError>;
>  
>      /// Use the given initializer to in-place initialize a `T`.
> +    #[track_caller]
>      fn init<E>(init: impl Init<T, E>) -> error::Result<Self>
>      where
>          Error: From<E>,
> @@ -1277,6 +1281,7 @@ fn try_init<E>(init: impl Init<T, E>) -> Result<Self, E>
>  
>  impl<T> InPlaceInit<T> for UniqueArc<T> {
>      #[inline]
> +    #[track_caller]
>      fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
>      where
>          E: From<AllocError>,
> @@ -1291,6 +1296,7 @@ fn try_pin_init<E>(init: impl PinInit<T, E>) -> Result<Pin<Self>, E>
>      }
>  
>      #[inline]
> +    #[track_caller]
>      fn try_init<E>(init: impl Init<T, E>) -> Result<Self, E>
>      where
>          E: From<AllocError>,
> diff --git a/rust/kernel/sync/arc.rs b/rust/kernel/sync/arc.rs
> index a89843cacaad..3bb73b614fd1 100644
> --- a/rust/kernel/sync/arc.rs
> +++ b/rust/kernel/sync/arc.rs
> @@ -34,6 +34,9 @@
>  };
>  use macros::pin_data;
>  
> +#[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +use crate::sync::lockdep::LockdepMap;
> +
>  mod std_vendor;
>  
>  /// A reference-counted pointer to an instance of `T`.
> @@ -127,6 +130,17 @@ pub struct Arc<T: ?Sized> {
>      _p: PhantomData<ArcInner<T>>,
>  }
>  
> +#[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +#[pin_data]
> +#[repr(C)]
> +struct ArcInner<T: ?Sized> {
> +    refcount: Opaque<bindings::refcount_t>,
> +    lockdep_map: LockdepMap,
> +    data: T,
> +}
> +
> +// FIXME: pin_data does not work well with cfg attributes within the struct definition.
> +#[cfg(not(CONFIG_RUST_EXTRA_LOCKDEP))]
>  #[pin_data]
>  #[repr(C)]
>  struct ArcInner<T: ?Sized> {
> @@ -159,11 +173,14 @@ unsafe impl<T: ?Sized + Sync + Send> Sync for Arc<T> {}
>  
>  impl<T> Arc<T> {
>      /// Constructs a new reference counted instance of `T`.
> +    #[track_caller]
>      pub fn try_new(contents: T) -> Result<Self, AllocError> {
>          // INVARIANT: The refcount is initialised to a non-zero value.
>          let value = ArcInner {
>              // SAFETY: There are no safety requirements for this FFI call.
>              refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
> +            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +            lockdep_map: LockdepMap::new(),
>              data: contents,
>          };
>  
> @@ -178,6 +195,7 @@ pub fn try_new(contents: T) -> Result<Self, AllocError> {
>      ///
>      /// If `T: !Unpin` it will not be able to move afterwards.
>      #[inline]
> +    #[track_caller]
>      pub fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Self>
>      where
>          Error: From<E>,
> @@ -189,6 +207,7 @@ pub fn pin_init<E>(init: impl PinInit<T, E>) -> error::Result<Self>
>      ///
>      /// This is equivalent to [`Arc<T>::pin_init`], since an [`Arc`] is always pinned.
>      #[inline]
> +    #[track_caller]
>      pub fn init<E>(init: impl Init<T, E>) -> error::Result<Self>
>      where
>          Error: From<E>,
> @@ -292,15 +311,46 @@ fn drop(&mut self) {
>          // freed/invalid memory as long as it is never dereferenced.
>          let refcount = unsafe { self.ptr.as_ref() }.refcount.get();
>  
> +        // SAFETY: By the type invariant, there is necessarily a reference to the object.
> +        // We cannot hold the map lock across the reference decrement, as we might race
> +        // another thread. Therefore, we lock and immediately drop the guard here. This
> +        // only serves to inform lockdep of the dependency up the call stack.
> +        #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +        unsafe { self.ptr.as_ref() }.lockdep_map.lock();

Make the intention explicit with

	drop(unsafe { self.ptr.as_ref() }.lockdep_map.lock());

and mark the `lock` function `#[must_use]`.
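
The suggested shape can be sketched in plain Rust (a userspace toy, not the kernel's LockdepMap; the names and the release counter are stand-ins for illustration):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts guard drops so the example can be checked; the real map would
// call lock_acquire()/lock_release() instead.
static RELEASES: AtomicUsize = AtomicUsize::new(0);

struct LockdepMap;

#[must_use = "dropping the guard releases the map; make it explicit with drop(..)"]
struct Guard<'a>(&'a LockdepMap);

impl LockdepMap {
    // `#[must_use]` on `lock()` itself, as the review suggests, so a bare
    // `map.lock();` statement produces a compiler warning.
    #[must_use]
    fn lock(&self) -> Guard<'_> {
        Guard(self)
    }
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        RELEASES.fetch_add(1, Ordering::Relaxed);
    }
}

fn main() {
    let map = LockdepMap;
    // The explicit lock-then-immediately-drop pattern the review asks for:
    drop(map.lock());
    assert_eq!(RELEASES.load(Ordering::Relaxed), 1);
}
```

With `#[must_use]` in place, writing `map.lock();` without the `drop(..)` would warn, which is exactly the intent: the acquire/release pair becomes visible at the call site.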

> +
>          // INVARIANT: If the refcount reaches zero, there are no other instances of `Arc`, and
>          // this instance is being dropped, so the broken invariant is not observable.
>          // SAFETY: Also by the type invariant, we are allowed to decrement the refcount.
>          let is_zero = unsafe { bindings::refcount_dec_and_test(refcount) };
> +
>          if is_zero {
>              // The count reached zero, we must free the memory.
> -            //
> -            // SAFETY: The pointer was initialised from the result of `Box::leak`.
> -            unsafe { Box::from_raw(self.ptr.as_ptr()) };
> +
> +            // SAFETY: If we get this far, we had the last reference to the object.
> +            // That means we are responsible for freeing it, so we can safely lock
> +            // the fake lock again. This wraps dropping the inner object, which
> +            // informs lockdep of the dependencies down the call stack.
> +            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +            let guard = unsafe { self.ptr.as_ref() }.lockdep_map.lock();
> +
> +            // SAFETY: The pointer was initialised from the result of `Box::leak`,
> +            // and the value is valid.
> +            unsafe { core::ptr::drop_in_place(&mut self.ptr.as_mut().data) };
> +
> +            // We need to drop the lock guard before freeing the LockdepMap itself
> +            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +            core::mem::drop(guard);
> +
> +            // SAFETY: The pointer was initialised from the result of `Box::leak`,
> +            // and the lockdep map is valid.
> +            #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +            unsafe {
> +                core::ptr::drop_in_place(&mut self.ptr.as_mut().lockdep_map)
> +            };
> +
> +            // SAFETY: The pointer was initialised from the result of `Box::leak`, and
> +            // a ManuallyDrop<T> is compatible. We already dropped the contents above.
> +            unsafe { Box::from_raw(self.ptr.as_ptr() as *mut ManuallyDrop<ArcInner<T>>) };

I feel there are more `as_ref`/`as_mut` calls here than there need to be.
Could you refactor the code to make a single `as_ref()` call for the
non-zero path and a single `as_mut()` call for the zero path?
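
One possible shape for that refactor, sketched as a userspace toy (not the actual kernel `Arc`; `Inner`, `dec_and_maybe_free`, and the plain integer refcount are illustrative stand-ins):

```rust
use std::mem::ManuallyDrop;
use std::ptr::NonNull;

struct Inner {
    refcount: i32,
    data: ManuallyDrop<String>,
}

// Decrements the refcount; frees the value and its allocation when it
// reaches zero. Takes a mutable borrow exactly once and routes every
// access through it, instead of repeated as_ref()/as_mut() calls.
unsafe fn dec_and_maybe_free(mut ptr: NonNull<Inner>) -> bool {
    let inner = unsafe { ptr.as_mut() };
    inner.refcount -= 1;
    if inner.refcount != 0 {
        return false;
    }
    // Zero path: the lockdep guard and the field drops would all reuse
    // the single `inner` borrow here.
    unsafe { ManuallyDrop::drop(&mut inner.data) };
    // Free the allocation without re-running the field destructors.
    drop(unsafe { Box::from_raw(ptr.as_ptr().cast::<ManuallyDrop<Inner>>()) });
    true
}

fn main() {
    let leaked = Box::leak(Box::new(Inner {
        refcount: 2,
        data: ManuallyDrop::new(String::from("payload")),
    }));
    let ptr = NonNull::from(leaked);
    unsafe {
        assert!(!dec_and_maybe_free(ptr)); // 2 -> 1: still alive
        assert!(dec_and_maybe_free(ptr)); // 1 -> 0: freed
    }
}
```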

>          }
>      }
>  }
> @@ -512,6 +562,7 @@ pub struct UniqueArc<T: ?Sized> {
>  
>  impl<T> UniqueArc<T> {
>      /// Tries to allocate a new [`UniqueArc`] instance.
> +    #[track_caller]
>      pub fn try_new(value: T) -> Result<Self, AllocError> {
>          Ok(Self {
>              // INVARIANT: The newly-created object has a ref-count of 1.
> @@ -520,13 +571,27 @@ pub fn try_new(value: T) -> Result<Self, AllocError> {
>      }
>  
>      /// Tries to allocate a new [`UniqueArc`] instance whose contents are not initialised yet.
> +    #[track_caller]
>      pub fn try_new_uninit() -> Result<UniqueArc<MaybeUninit<T>>, AllocError> {
>          // INVARIANT: The refcount is initialised to a non-zero value.
> +        #[cfg(CONFIG_RUST_EXTRA_LOCKDEP)]
> +        let inner = {
> +            let map = LockdepMap::new();
> +            Box::try_init::<AllocError>(try_init!(ArcInner {
> +                // SAFETY: There are no safety requirements for this FFI call.
> +                refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
> +                lockdep_map: map,
> +                data <- init::uninit::<T, AllocError>(),
> +            }? AllocError))?
> +        };
> +        // FIXME: try_init!() does not work with cfg attributes.
> +        #[cfg(not(CONFIG_RUST_EXTRA_LOCKDEP))]
>          let inner = Box::try_init::<AllocError>(try_init!(ArcInner {
>              // SAFETY: There are no safety requirements for this FFI call.
>              refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
>              data <- init::uninit::<T, AllocError>(),
>          }? AllocError))?;
> +
>          Ok(UniqueArc {
>              // INVARIANT: The newly-created object has a ref-count of 1.
>              // SAFETY: The pointer from the `Box` is valid.
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-14 15:21       ` Boqun Feng
@ 2023-07-16  6:56         ` Asahi Lina
  0 siblings, 0 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-16  6:56 UTC (permalink / raw)
  To: Boqun Feng, Alice Ryhl
  Cc: alex.gaynor, alyssa, asahi, benno.lossin, bjorn3_gh, daniel,
	gary, linux-kbuild, linux-kernel, llvm, marcan, masahiroy,
	nathan, ndesaulniers, nicolas, ojeda, rust-for-linux, sven, trix,
	wedsonaf

On 15/07/2023 00.21, Boqun Feng wrote:
> On Fri, Jul 14, 2023 at 01:59:26PM +0000, Alice Ryhl wrote:
>> Asahi Lina <lina@asahilina.net> writes:
>>> On 14/07/2023 19.13, Alice Ryhl wrote:
>>>> Asahi Lina <lina@asahilina.net> writes:
>>>>> Begone, lock classes!
>>>>>
>>>>> As discussed in meetings/etc, we would really like to support implicit
>>>>> lock class creation for Rust code. Right now, lock classes are created
> 
> Thanks for looking into this! Could you also copy locking maintainers in
> the next version?

Sure! Sorry, I totally forgot that I needed to do that manually since b4 
doesn't know about rust->C relations...

> 
>>>>> using macros and passed around (similar to C). Unfortunately, Rust
>>>>> macros don't look like Rust functions, which means adding lockdep to a
>>>>> type is a breaking API change. This makes Rust mutex creation rather
>>>>> ugly, with the new_mutex!() macro and friends.
>>>>>
>>>>> Implicit lock classes have to be unique per instantiation code site.
>>>>> Notably, with Rust generics and monomorphization, this is not the same
>>>>> as unique per generated code instance. If this weren't the case, we
>>>>> could use inline functions and asm!() magic to try to create lock
>>>>> classes that have the right uniqueness semantics. But that doesn't work,
>>>>> since it would create too many lock classes for the same actual lock
>>>>> creation in the source code.
>>>>>
>>>>> But Rust does have one trick we can use: it can track the caller
>>>>> location (as file:line:column), across multiple functions. This works
>>>>> using an implicit argument that gets passed around, which is exactly the
>>>>> thing we do for lock classes. The tricky bit is that, while the value of
>>>>> these Location objects has the semantics we want (unique value per
>>>>> source code location), there is no guarantee that they are deduplicated
>>>>> in memory.
>>>>>
>>>>> So we use a hash table, and map Location values to lock classes. Et
>>>>> voila, implicit lock class support!
>>>>>
>>>>> This lets us clean up the Mutex & co APIs and make them look a lot more
>>>>> Rust-like, but it also means we can now throw Lockdep into more APIs
>>>>> without breaking the API. And so we can pull a neat trick: adding
>>>>> Lockdep support into Arc<T>. This catches cases where the Arc Drop
>>>>> implementation could create a locking correctness violation only when
>>>>> the reference count drops to 0 at that particular drop site, which is
>>>>> otherwise not detectable unless that condition actually happens at
>>>>> runtime. Since Drop is "magic" in Rust and Drop codepaths very difficult
>>>>> to audit, this helps a lot.
>>>>>
>>>>> For the initial RFC, this implements the new API only for Mutex. If this
>>>>> looks good, I can extend it to CondVar & friends in the next version.
>>>>> This series also folds in a few related minor dependencies / changes
>>>>> (like the pin_init mutex stuff).
>>>>
>>>> I'm not convinced that this is the right compromise. Moving lockdep
>>>> class creation to runtime sounds unfortunate, especially since this
>>>> makes them fallible due to memory allocations (I think?).
>>>>
>>>> I would be inclined to keep using macros for this.
>>>
>>> Most people were very enthusiastic about this change in the meetings...
>>> it wasn't even my own idea ^^
>>
>> I don't think I was in that meeting. Anyway,
>>   
>>> I don't think the fallibility is an issue. Lockdep is a debugging tool,
>>> and it doesn't have to handle all possible circumstances perfectly. If
>>> you are debugging normal lock issues you probably shouldn't be running
>>> out of RAM, and if you are debugging OOM situations the lock keys would
>>> normally have been created long before you reach an OOM situation, since
>>> they would be created the first time a relevant lock class is used. More
>>> objects of the same class don't cause any more allocations. And the code
>>> has a fallback for the OOM case, where it just uses the Location object
>>> as a static lock class. That's not ideal and degrades the quality of the
>>> lockdep results, but it shouldn't completely break anything.
>>
>> If you have a fallback when the allocation fails, that helps ...
>>
>> You say that Location objects are not necessarily unique per file
>> location. In practice, how often are they not unique? Always just using
>> the Location object as a static lock class seems like it would
>> significantly simplify this proposal.

If a generic type is instantiated from different crates (e.g. kernel 
crate and a driver), it creates separate Location objects. But we also 
have a bigger problem: this breaks module unload, since that leaves lock 
classes dangling. Though that is yet another discussion to have (Rust's 
lifetime semantics kind of break down when you can unload modules!).
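
The value-vs-address distinction can be demonstrated in plain userspace Rust (`class_key` is an illustrative name, not the series' actual API):

```rust
use std::panic::Location;

// The implicit caller-location argument propagates through
// `#[track_caller]` functions, which is what the series relies on.
#[track_caller]
fn class_key() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    // Two calls from the *same* source location (the closure body):
    let same_site: Vec<_> = (0..2).map(|_| class_key()).collect();
    // Equal by value (file:line:column)...
    assert_eq!(same_site[0], same_site[1]);
    // ...but the addresses are not guaranteed to be deduplicated,
    // which is why the series keys its hash table on the value.

    // A call from a different line gets a different value:
    let other_site = class_key();
    assert_ne!(same_site[0], other_site);
}
```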

>>
> 
> Agreed. For example, `caller_lock_class_inner` has a Mutex critical
> section in it (for the hash table synchronization), that makes it
> impossible to be called in preemption disabled contexts, which limits
> the usage.

Maybe we can just make it a spinlock? The critical section is very short 
for lock classes that already exist (just iterating over the hash 
bucket, which will almost always be length 1), so it's probably more 
efficient to do that than use a mutex anyway. Lockdep itself uses a 
single global spinlock for a bunch of stuff too.

For the new class case it does do an allocation, but I think code 
probably shouldn't be creating locks and things like that with 
preemption disabled / in atomic context? That just seems like a recipe 
for trouble... though this ties into the whole execution context story 
for Rust, which we don't have a terribly good answer for yet, so I think 
it shouldn't block this approach. The macro style lock creation 
primitives still exist for code that really needs the static behavior.
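
A userspace model of the lookup being discussed (toy code: std's `HashMap` and `Mutex` stand in for the kernel hash table and lock, and `caller_lock_class` is an illustrative name):

```rust
use std::collections::HashMap;
use std::panic::Location;
use std::sync::Mutex;

// Maps the *value* of a caller Location to one canonical class id, so
// every call from the same source location shares a class.
static CLASSES: Mutex<Option<HashMap<(&'static str, u32, u32), usize>>> =
    Mutex::new(None);

#[track_caller]
fn caller_lock_class() -> usize {
    let loc = Location::caller();
    let key = (loc.file(), loc.line(), loc.column());
    // The critical section is a short hash lookup for classes that
    // already exist, which is why a spinlock would do in the kernel.
    let mut guard = CLASSES.lock().unwrap();
    let map = guard.get_or_insert_with(HashMap::new);
    let next_id = map.len();
    *map.entry(key).or_insert(next_id)
}

fn main() {
    // Same call site (the closure) -> same class, however often it runs:
    let ids: Vec<_> = (0..3).map(|_| caller_lock_class()).collect();
    assert!(ids.iter().all(|&id| id == ids[0]));
    // A different call site gets a different class:
    assert_ne!(caller_lock_class(), ids[0]);
}
```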

~~ Lina



* Re: [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects
  2023-07-15 14:35   ` Gary Guo
@ 2023-07-16  7:53     ` Asahi Lina
  0 siblings, 0 replies; 31+ messages in thread
From: Asahi Lina @ 2023-07-16  7:53 UTC (permalink / raw)
  To: Gary Guo
  Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Björn Roy Baron, Benno Lossin, Masahiro Yamada,
	Nathan Chancellor, Nick Desaulniers, Nicolas Schier, Tom Rix,
	Daniel Vetter, Hector Martin, Sven Peter, Alyssa Rosenzweig,
	asahi, rust-for-linux, linux-kernel, linux-kbuild, llvm

On 15/07/2023 23.35, Gary Guo wrote:
> On Fri, 14 Jul 2023 18:13:55 +0900
> Asahi Lina <lina@asahilina.net> wrote:
> 
>> We want to use caller_location to uniquely identify callsites, to
>> automatically create lockdep classes without macros. The location
>> filename in local code uses the relative path passed to the compiler,
>> but if that code is generic and instantiated from another crate, the
>> path becomes absolute.
>>
>> To make this work and keep the paths consistent, always pass an absolute
>> path to the compiler. Then the Location path is always identical
>> regardless of how the code is being compiled.
> 
> I wonder if this can have some reproducible build implications. We
> probably also need to use remap-path-prefix?

We already end up with absolute paths in objects anyway, just not 
consistently. Consistently relative paths would be fine too, but Rust 
likes to internally absolutize some paths, which is why I wrote this 
patch to make them absolute everywhere.

TIL about remap-path-prefix, that looks very useful! I'll give it a try.

> 
>>
>> Signed-off-by: Asahi Lina <lina@asahilina.net>
>> ---
>>   rust/Makefile          | 2 +-
>>   scripts/Makefile.build | 8 ++++----
>>   2 files changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/rust/Makefile b/rust/Makefile
>> index 7c9d9f11aec5..552f023099c8 100644
>> --- a/rust/Makefile
>> +++ b/rust/Makefile
>> @@ -369,7 +369,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
>>   		--emit=dep-info=$(depfile) --emit=obj=$@ \
>>   		--emit=metadata=$(dir $@)$(patsubst %.o,lib%.rmeta,$(notdir $@)) \
>>   		--crate-type rlib -L$(objtree)/$(obj) \
>> -		--crate-name $(patsubst %.o,%,$(notdir $@)) $< \
>> +		--crate-name $(patsubst %.o,%,$(notdir $@)) $(abspath $<) \
>>   	$(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@)
>>   
>>   rust-analyzer:
>> diff --git a/scripts/Makefile.build b/scripts/Makefile.build
>> index 6413342a03f4..c925b90ebd80 100644
>> --- a/scripts/Makefile.build
>> +++ b/scripts/Makefile.build
>> @@ -283,27 +283,27 @@ rust_common_cmd = \
>>   # would not match each other.
>>   
>>   quiet_cmd_rustc_o_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
>> -      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $<
>> +      cmd_rustc_o_rs = $(rust_common_cmd) --emit=obj=$@ $(abspath $<)
>>   
>>   $(obj)/%.o: $(src)/%.rs FORCE
>>   	$(call if_changed_dep,rustc_o_rs)
>>   
>>   quiet_cmd_rustc_rsi_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
>>         cmd_rustc_rsi_rs = \
>> -	$(rust_common_cmd) -Zunpretty=expanded $< >$@; \
>> +	$(rust_common_cmd) -Zunpretty=expanded $(abspath $<) >$@; \
>>   	command -v $(RUSTFMT) >/dev/null && $(RUSTFMT) $@
>>   
>>   $(obj)/%.rsi: $(src)/%.rs FORCE
>>   	$(call if_changed_dep,rustc_rsi_rs)
>>   
>>   quiet_cmd_rustc_s_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
>> -      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $<
>> +      cmd_rustc_s_rs = $(rust_common_cmd) --emit=asm=$@ $(abspath $<)
>>   
>>   $(obj)/%.s: $(src)/%.rs FORCE
>>   	$(call if_changed_dep,rustc_s_rs)
>>   
>>   quiet_cmd_rustc_ll_rs = $(RUSTC_OR_CLIPPY_QUIET) $(quiet_modtag) $@
>> -      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $<
>> +      cmd_rustc_ll_rs = $(rust_common_cmd) --emit=llvm-ir=$@ $(abspath $<)
>>   
>>   $(obj)/%.ll: $(src)/%.rs FORCE
>>   	$(call if_changed_dep,rustc_ll_rs)
>>
> 
> 

~~ Lina



* Re: [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration
  2023-07-15 14:25       ` Gary Guo
@ 2023-07-18 16:48         ` Boqun Feng
  0 siblings, 0 replies; 31+ messages in thread
From: Boqun Feng @ 2023-07-18 16:48 UTC (permalink / raw)
  To: Gary Guo
  Cc: Alice Ryhl, lina, alex.gaynor, alyssa, asahi, benno.lossin,
	bjorn3_gh, daniel, linux-kbuild, linux-kernel, llvm, marcan,
	masahiroy, nathan, ndesaulniers, nicolas, ojeda, rust-for-linux,
	sven, trix, wedsonaf

On Sat, Jul 15, 2023 at 03:25:54PM +0100, Gary Guo wrote:
[...]
> > > I don't think the fallibility is an issue. Lockdep is a debugging tool, 
> > > and it doesn't have to handle all possible circumstances perfectly. If 
> > > you are debugging normal lock issues you probably shouldn't be running 
> > > out of RAM, and if you are debugging OOM situations the lock keys would 
> > > normally have been created long before you reach an OOM situation, since 
> > > they would be created the first time a relevant lock class is used. More 
> > > objects of the same class don't cause any more allocations. And the code 
> > > has a fallback for the OOM case, where it just uses the Location object 
> > > as a static lock class. That's not ideal and degrades the quality of the 
> > > lockdep results, but it shouldn't completely break anything.  
> > 
> > If you have a fallback when the allocation fails, that helps ...
> 
> I am pretty sure lockdep needs to do some internal allocation anyway
> because only address matters for lock class keys. So some extra
> allocation probably is fine...
> 

Lockdep uses a few static arrays for its own allocation, and doesn't use
"external" allocation (i.e. kmalloc() and its friends). IIUC, originally
it had to be done this way to avoid recursive calls like
lockdep->slab->lockdep, but now lockdep has a recursion counter, so that's
not a problem any more. However, it's still better that lockdep can work
on its own without relying on other components.

Regards,
Boqun


end of thread, other threads:[~2023-07-18 16:48 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
2023-07-14  9:13 [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Asahi Lina
2023-07-14  9:13 ` [PATCH RFC 01/11] rust: types: Add Opaque::zeroed() Asahi Lina
2023-07-14 10:15   ` Alice Ryhl
2023-07-15 14:27   ` Gary Guo
2023-07-14  9:13 ` [PATCH RFC 02/11] rust: lock: Add Lock::pin_init() Asahi Lina
2023-07-15 14:29   ` Gary Guo
2023-07-14  9:13 ` [PATCH RFC 03/11] rust: Use absolute paths to build Rust objects Asahi Lina
2023-07-15 14:35   ` Gary Guo
2023-07-16  7:53     ` Asahi Lina
2023-07-14  9:13 ` [PATCH RFC 04/11] rust: siphash: Add a simple siphash abstraction Asahi Lina
2023-07-14 14:28   ` Martin Rodriguez Reboredo
2023-07-15 14:52   ` Gary Guo
2023-07-14  9:13 ` [PATCH RFC 05/11] rust: sync: Add dummy LockClassKey implementation for !CONFIG_LOCKDEP Asahi Lina
2023-07-14 14:57   ` Martin Rodriguez Reboredo
2023-07-14  9:13 ` [PATCH RFC 06/11] rust: sync: Replace static LockClassKey refs with a pointer wrapper Asahi Lina
2023-07-14 15:10   ` Martin Rodriguez Reboredo
2023-07-14  9:13 ` [PATCH RFC 07/11] rust: sync: Implement dynamic lockdep class creation Asahi Lina
2023-07-14 19:56   ` Martin Rodriguez Reboredo
2023-07-15 15:47   ` Gary Guo
2023-07-14  9:14 ` [PATCH RFC 08/11] rust: sync: Classless Lock::new() and pin_init() Asahi Lina
2023-07-14  9:14 ` [PATCH RFC 09/11] rust: init: Update documentation for new mutex init style Asahi Lina
2023-07-14  9:14 ` [PATCH RFC 10/11] rust: sync: Add LockdepMap abstraction Asahi Lina
2023-07-14  9:14 ` [PATCH RFC 11/11] rust: sync: arc: Add lockdep integration Asahi Lina
2023-07-15 16:00   ` Gary Guo
2023-07-14 10:13 ` [PATCH RFC 00/11] rust: Implicit lock class creation & Arc Lockdep integration Alice Ryhl
2023-07-14 12:20   ` Asahi Lina
2023-07-14 13:59     ` Alice Ryhl
2023-07-14 15:21       ` Boqun Feng
2023-07-16  6:56         ` Asahi Lina
2023-07-15 14:25       ` Gary Guo
2023-07-18 16:48         ` Boqun Feng
