* [PATCH 01/13] rust: sync: introduce `LockClassKey`
@ 2023-03-30  4:39 Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 02/13] rust: sync: introduce `Lock` and `Guard` Wedson Almeida Filho
                   ` (13 more replies)
  0 siblings, 14 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

From: Wedson Almeida Filho <walmeida@microsoft.com>

It is a wrapper around C's `lock_class_key`, which is used by the
synchronisation primitives that are checked with lockdep. This is in
preparation for introducing Rust abstractions for these primitives.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/kernel/sync.rs | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 33da23e3076d..84a4b560828c 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -5,6 +5,51 @@
 //! This module contains the kernel APIs related to synchronisation that have been ported or
 //! wrapped for usage by Rust code in the kernel.
 
+use crate::types::Opaque;
+
 mod arc;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
+
+/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
+#[repr(transparent)]
+pub struct LockClassKey(Opaque<bindings::lock_class_key>);
+
+// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
+// provides its own synchronization.
+unsafe impl Sync for LockClassKey {}
+
+impl LockClassKey {
+    /// Creates a new lock class key.
+    pub const fn new() -> Self {
+        Self(Opaque::uninit())
+    }
+
+    #[allow(dead_code)]
+    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+        self.0.get()
+    }
+}
+
+/// Defines a new static lock class and returns a pointer to it.
+#[doc(hidden)]
+#[macro_export]
+macro_rules! static_lock_class {
+    () => {{
+        static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new();
+        &CLASS
+    }};
+}
+
+/// Returns the given string, if one is provided, otherwise generates one based on the source code
+/// location.
+#[doc(hidden)]
+#[macro_export]
+macro_rules! optional_name {
+    () => {
+        $crate::c_str!(core::concat!(core::file!(), ":", core::line!()))
+    };
+    ($name:literal) => {
+        $crate::c_str!($name)
+    };
+}
-- 
2.34.1
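
For illustration (a sketch, not part of the patch): composed with the `Mutex::new` constructor that patch 03 introduces, `new_mutex!(data)` expands to roughly the following; the file/line string is whatever `optional_name!()` generates at the call site.

```
// Each expansion of `static_lock_class!()` defines its own static, so
// every call site gets a distinct lockdep class.
let init = kernel::sync::Mutex::new(
    0u32,
    kernel::c_str!("rust/kernel/example.rs:10"), // from `optional_name!()`
    {
        static CLASS: kernel::sync::LockClassKey =
            kernel::sync::LockClassKey::new();
        &CLASS // from `static_lock_class!()`
    },
);
```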



* [PATCH 02/13] rust: sync: introduce `Lock` and `Guard`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 03/13] rust: lock: introduce `Mutex` Wedson Almeida Filho
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

From: Wedson Almeida Filho <walmeida@microsoft.com>

They are generic Rust implementations of a lock and a lock guard that
contain code that is common to all locks. Different backends will be
introduced in subsequent commits.

Suggested-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/kernel/sync.rs      |   2 +-
 rust/kernel/sync/lock.rs | 160 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 161 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/lock.rs

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 84a4b560828c..bf088b324af4 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -8,6 +8,7 @@
 use crate::types::Opaque;
 
 mod arc;
+pub mod lock;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
 
@@ -25,7 +26,6 @@ impl LockClassKey {
         Self(Opaque::uninit())
     }
 
-    #[allow(dead_code)]
     pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
         self.0.get()
     }
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
new file mode 100644
index 000000000000..f5614bed2a78
--- /dev/null
+++ b/rust/kernel/sync/lock.rs
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic kernel lock and guard.
+//!
+//! It contains a generic Rust lock and guard that allow for different backends (e.g., mutexes,
+//! spinlocks, raw spinlocks) to be provided with minimal effort.
+
+use super::LockClassKey;
+use crate::{bindings, init::PinInit, pin_init, str::CStr, types::Opaque};
+use core::{cell::UnsafeCell, marker::PhantomData, marker::PhantomPinned};
+use macros::pin_data;
+
+/// The "backend" of a lock.
+///
+/// It is the actual implementation of the lock, without the need to repeat patterns used in all
+/// locks.
+///
+/// # Safety
+///
+/// - Implementers must ensure that only one thread/CPU may access the protected data once the lock
+/// is owned, that is, between calls to `lock` and `unlock`.
+pub unsafe trait Backend {
+    /// The state required by the lock.
+    type State;
+
+    /// The state required to be kept between lock and unlock.
+    type GuardState;
+
+    /// Initialises the lock.
+    ///
+    /// # Safety
+    ///
+    /// `ptr` must be valid for writes for the duration of the call, while `name` and `key` must
+    /// remain valid for read indefinitely.
+    unsafe fn init(
+        ptr: *mut Self::State,
+        name: *const core::ffi::c_char,
+        key: *mut bindings::lock_class_key,
+    );
+
+    /// Acquires the lock, making the caller its owner.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that [`Backend::init`] has been previously called.
+    #[must_use]
+    unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState;
+
+    /// Releases the lock, giving up its ownership.
+    ///
+    /// # Safety
+    ///
+    /// It must only be called by the current owner of the lock.
+    unsafe fn unlock(ptr: *mut Self::State, guard_state: &Self::GuardState);
+}
+
+/// A mutual exclusion primitive.
+///
+/// Exposes one of the kernel locking primitives. Which one is exposed depends on the lock backend
+/// specified as the generic parameter `T`.
+#[pin_data]
+pub struct Lock<T: ?Sized, B: Backend> {
+    /// The kernel lock object.
+    #[pin]
+    state: Opaque<B::State>,
+
+    /// Some locks are known to be self-referential (e.g., mutexes), while others are architecture
+    /// or config defined (e.g., spinlocks). So we conservatively require them to be pinned in case
+    /// some architecture uses self-references now or in the future.
+    #[pin]
+    _pin: PhantomPinned,
+
+    /// The data protected by the lock.
+    data: UnsafeCell<T>,
+}
+
+// SAFETY: `Lock` can be transferred across thread boundaries iff the data it protects can.
+unsafe impl<T: ?Sized + Send, B: Backend> Send for Lock<T, B> {}
+
+// SAFETY: `Lock` serialises the interior mutability it provides, so it is `Sync` as long as the
+// data it protects is `Send`.
+unsafe impl<T: ?Sized + Send, B: Backend> Sync for Lock<T, B> {}
+
+impl<T, B: Backend> Lock<T, B> {
+    /// Constructs a new lock initialiser.
+    #[allow(clippy::new_ret_no_self)]
+    pub fn new(t: T, name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> {
+        pin_init!(Self {
+            data: UnsafeCell::new(t),
+            _pin: PhantomPinned,
+            // SAFETY: `B::init` initialises the lock state, and both `name` and `key` have static
+            // lifetimes so they live indefinitely.
+            state <- unsafe { Opaque::manual_init2(B::init, name.as_char_ptr(), key.as_ptr()) },
+        })
+    }
+}
+
+impl<T: ?Sized, B: Backend> Lock<T, B> {
+    /// Acquires the lock and gives the caller access to the data protected by it.
+    pub fn lock(&self) -> Guard<'_, T, B> {
+        // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
+        // that `init` was called.
+        let state = unsafe { B::lock(self.state.get()) };
+        // SAFETY: The lock was just acquired.
+        unsafe { Guard::new(self, state) }
+    }
+}
+
+/// A lock guard.
+///
+/// Allows mutual exclusion primitives that implement the `Backend` trait to automatically unlock
+/// when a guard goes out of scope. It also provides a safe and convenient way to access the data
+/// protected by the lock.
+#[must_use = "the lock unlocks immediately when the guard is unused"]
+pub struct Guard<'a, T: ?Sized, B: Backend> {
+    pub(crate) lock: &'a Lock<T, B>,
+    pub(crate) state: B::GuardState,
+    _not_send: PhantomData<*mut ()>,
+}
+
+// SAFETY: `Guard` is sync when the data protected by the lock is also sync.
+unsafe impl<T: Sync + ?Sized, B: Backend> Sync for Guard<'_, T, B> {}
+
+impl<T: ?Sized, B: Backend> core::ops::Deref for Guard<'_, T, B> {
+    type Target = T;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: The caller owns the lock, so it is safe to deref the protected data.
+        unsafe { &*self.lock.data.get() }
+    }
+}
+
+impl<T: ?Sized, B: Backend> core::ops::DerefMut for Guard<'_, T, B> {
+    fn deref_mut(&mut self) -> &mut Self::Target {
+        // SAFETY: The caller owns the lock, so it is safe to deref the protected data.
+        unsafe { &mut *self.lock.data.get() }
+    }
+}
+
+impl<T: ?Sized, B: Backend> Drop for Guard<'_, T, B> {
+    fn drop(&mut self) {
+        // SAFETY: The caller owns the lock, so it is safe to unlock it.
+        unsafe { B::unlock(self.lock.state.get(), &self.state) };
+    }
+}
+
+impl<'a, T: ?Sized, B: Backend> Guard<'a, T, B> {
+    /// Constructs a new immutable lock guard.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that it owns the lock.
+    pub(crate) unsafe fn new(lock: &'a Lock<T, B>, state: B::GuardState) -> Self {
+        Self {
+            lock,
+            state,
+            _not_send: PhantomData,
+        }
+    }
+}
-- 
2.34.1
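
As a sketch of the API this yields once a backend exists (patch 03 provides `Mutex`, i.e. `Lock<T, MutexBackend>`): `lock` returns a `Guard`, data access goes through `Deref`/`DerefMut`, and the lock is released when the guard is dropped.

```
use kernel::sync::Mutex;

fn increment(m: &Mutex<u32>) -> u32 {
    let mut guard = m.lock(); // `Backend::lock` runs here.
    *guard += 1;              // Mutable access via `DerefMut`.
    *guard                    // Shared access via `Deref`.
}                             // `guard` dropped here; `Backend::unlock` runs.
```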



* [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 02/13] rust: sync: introduce `Lock` and `Guard` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30 13:01   ` Peter Zijlstra
  2023-03-30  4:39 ` [PATCH 04/13] locking/spinlock: introduce spin_lock_init_with_key Wedson Almeida Filho
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

From: Wedson Almeida Filho <walmeida@microsoft.com>

This is the `struct mutex` lock backend and allows Rust code to use the
kernel mutex idiomatically.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/helpers.c                 |   7 ++
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/lock.rs       |   2 +
 rust/kernel/sync/lock/mutex.rs | 118 +++++++++++++++++++++++++++++++++
 4 files changed, 128 insertions(+)
 create mode 100644 rust/kernel/sync/lock/mutex.rs

diff --git a/rust/helpers.c b/rust/helpers.c
index 09a4d93f9d62..3010a2ec26e2 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -21,6 +21,7 @@
 #include <linux/bug.h>
 #include <linux/build_bug.h>
 #include <linux/refcount.h>
+#include <linux/mutex.h>
 
 __noreturn void rust_helper_BUG(void)
 {
@@ -28,6 +29,12 @@ __noreturn void rust_helper_BUG(void)
 }
 EXPORT_SYMBOL_GPL(rust_helper_BUG);
 
+void rust_helper_mutex_lock(struct mutex *lock)
+{
+	mutex_lock(lock);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mutex_lock);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index bf088b324af4..9ff116b2eebe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@ mod arc;
 pub mod lock;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
+pub use lock::mutex::Mutex;
 
 /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
 #[repr(transparent)]
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index f5614bed2a78..cec1d68bab86 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -10,6 +10,8 @@ use crate::{bindings, init::PinInit, pin_init, str::CStr, types::Opaque};
 use core::{cell::UnsafeCell, marker::PhantomData, marker::PhantomPinned};
 use macros::pin_data;
 
+pub mod mutex;
+
 /// The "backend" of a lock.
 ///
 /// It is the actual implementation of the lock, without the need to repeat patterns used in all
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
new file mode 100644
index 000000000000..923472f04af4
--- /dev/null
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A kernel mutex.
+//!
+//! This module allows Rust code to use the kernel's `struct mutex`.
+
+use crate::bindings;
+
+/// Creates a [`Mutex`] initialiser with the given name and a newly-created lock class.
+///
+/// It uses the name if one is given, otherwise it generates one based on the file name and line
+/// number.
+#[macro_export]
+macro_rules! new_mutex {
+    ($inner:expr $(, $name:literal)? $(,)?) => {
+        $crate::sync::Mutex::new(
+            $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
+    };
+}
+
+/// A mutual exclusion primitive.
+///
+/// Exposes the kernel's [`struct mutex`]. When multiple threads attempt to lock the same mutex,
+/// only one at a time is allowed to progress; the others will block (sleep) until the mutex is
+/// unlocked, at which point another thread will be allowed to wake up and make progress.
+///
+/// Since it may block, [`Mutex`] needs to be used with care in atomic contexts.
+///
+/// Instances of [`Mutex`] need a lock class and to be pinned. The recommended way to create such
+/// instances is with the [`pin_init`](crate::pin_init) and [`new_mutex`] macros.
+///
+/// # Examples
+///
+/// The following example shows how to declare, allocate and initialise a struct (`Example`) that
+/// contains an inner struct (`Inner`) that is protected by a mutex.
+///
+/// ```
+/// use kernel::{init::InPlaceInit, init::PinInit, new_mutex, pin_init, sync::Mutex};
+///
+/// struct Inner {
+///     a: u32,
+///     b: u32,
+/// }
+///
+/// #[pin_data]
+/// struct Example {
+///     c: u32,
+///     #[pin]
+///     d: Mutex<Inner>,
+/// }
+///
+/// impl Example {
+///     fn new() -> impl PinInit<Self> {
+///         pin_init!(Self {
+///             c: 10,
+///             d <- new_mutex!(Inner { a: 20, b: 30 }),
+///         })
+///     }
+/// }
+///
+/// // Allocate a boxed `Example`.
+/// let e = Box::pin_init(Example::new())?;
+/// assert_eq!(e.c, 10);
+/// assert_eq!(e.d.lock().a, 20);
+/// assert_eq!(e.d.lock().b, 30);
+/// ```
+///
+/// The following example shows how to use interior mutability to modify the contents of a struct
+/// protected by a mutex despite only having a shared reference:
+///
+/// ```
+/// use kernel::sync::Mutex;
+///
+/// struct Example {
+///     a: u32,
+///     b: u32,
+/// }
+///
+/// fn example(m: &Mutex<Example>) {
+///     let mut guard = m.lock();
+///     guard.a += 10;
+///     guard.b += 20;
+/// }
+/// ```
+///
+/// [`struct mutex`]: ../../../../include/linux/mutex.h
+pub type Mutex<T> = super::Lock<T, MutexBackend>;
+
+/// A kernel `struct mutex` lock backend.
+pub struct MutexBackend;
+
+// SAFETY: The underlying kernel `struct mutex` object ensures mutual exclusion.
+unsafe impl super::Backend for MutexBackend {
+    type State = bindings::mutex;
+    type GuardState = ();
+
+    unsafe fn init(
+        ptr: *mut Self::State,
+        name: *const core::ffi::c_char,
+        key: *mut bindings::lock_class_key,
+    ) {
+        // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and
+        // `key` are valid for read indefinitely.
+        unsafe { bindings::__mutex_init(ptr, name, key) }
+    }
+
+    unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
+        // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
+        // memory, and that it has been initialised before.
+        unsafe { bindings::mutex_lock(ptr) };
+    }
+
+    unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
+        // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
+        // caller is the owner of the mutex.
+        unsafe { bindings::mutex_unlock(ptr) };
+    }
+}
-- 
2.34.1
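
The doc examples above rely on the generated "file:line" lockdep name; as a small additional sketch, the optional literal arm of `new_mutex!` names the class explicitly (same `Box::pin_init` pattern as above):

```
use kernel::{init::InPlaceInit, new_mutex};

let named = Box::pin_init(new_mutex!(42u32, "state_lock"))?;
assert_eq!(*named.lock(), 42);
```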



* [PATCH 04/13] locking/spinlock: introduce spin_lock_init_with_key
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 02/13] rust: sync: introduce `Lock` and `Guard` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 03/13] rust: lock: introduce `Mutex` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 05/13] rust: lock: introduce `SpinLock` Wedson Almeida Filho
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

From: Wedson Almeida Filho <walmeida@microsoft.com>

Rust cannot call C macros, so it has its own macro to create a new lock
class when a spin lock is initialised. This new function allows Rust
code to pass the lock class it generates to the C implementation.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 include/linux/spinlock.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index be48f1cb1878..cdc92d095133 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -327,12 +327,17 @@ static __always_inline raw_spinlock_t *spinlock_check(spinlock_t *lock)
 
 #ifdef CONFIG_DEBUG_SPINLOCK
 
+static inline void spin_lock_init_with_key(spinlock_t *lock, const char *name,
+					   struct lock_class_key *key)
+{
+	__raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
+}
+
 # define spin_lock_init(lock)					\
 do {								\
 	static struct lock_class_key __key;			\
 								\
-	__raw_spin_lock_init(spinlock_check(lock),		\
-			     #lock, &__key, LD_WAIT_CONFIG);	\
+	spin_lock_init_with_key(lock, #lock, &__key);		\
 } while (0)
 
 #else
-- 
2.34.1
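
For context, a sketch of the call chain this function enables once the Rust helper from patch 05 is in place (the helper and binding names are taken from that patch):

```
// Inside the `kernel` crate; mirrors `SpinLockBackend::init` (patch 05).
use crate::bindings;

// Safety: callers uphold the same requirements as `Backend::init`.
unsafe fn init_spinlock(
    ptr: *mut bindings::spinlock_t,
    name: *const core::ffi::c_char,
    key: *mut bindings::lock_class_key,
) {
    // `bindings::__spin_lock_init` resolves to rust_helper___spin_lock_init(),
    // which calls spin_lock_init_with_key() under CONFIG_DEBUG_SPINLOCK and
    // plain spin_lock_init() (discarding `name` and `key`) otherwise.
    unsafe { bindings::__spin_lock_init(ptr, name, key) }
}
```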



* [PATCH 05/13] rust: lock: introduce `SpinLock`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (2 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 04/13] locking/spinlock: introduce spin_lock_init_with_key Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 06/13] rust: lock: add support for `Lock::lock_irqsave` Wedson Almeida Filho
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

From: Wedson Almeida Filho <walmeida@microsoft.com>

This is the `spinlock_t` lock backend and allows Rust code to use the
kernel spinlock idiomatically.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/helpers.c                    |  24 +++++++
 rust/kernel/sync.rs               |   2 +-
 rust/kernel/sync/lock.rs          |   1 +
 rust/kernel/sync/lock/spinlock.rs | 116 ++++++++++++++++++++++++++++++
 4 files changed, 142 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/lock/spinlock.rs

diff --git a/rust/helpers.c b/rust/helpers.c
index 3010a2ec26e2..05694e3f8f70 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -22,6 +22,7 @@
 #include <linux/build_bug.h>
 #include <linux/refcount.h>
 #include <linux/mutex.h>
+#include <linux/spinlock.h>
 
 __noreturn void rust_helper_BUG(void)
 {
@@ -35,6 +36,29 @@ void rust_helper_mutex_lock(struct mutex *lock)
 }
 EXPORT_SYMBOL_GPL(rust_helper_mutex_lock);
 
+void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
+				  struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_SPINLOCK
+	spin_lock_init_with_key(lock, name, key);
+#else
+	spin_lock_init(lock);
+#endif
+}
+EXPORT_SYMBOL_GPL(rust_helper___spin_lock_init);
+
+void rust_helper_spin_lock(spinlock_t *lock)
+{
+	spin_lock(lock);
+}
+EXPORT_SYMBOL_GPL(rust_helper_spin_lock);
+
+void rust_helper_spin_unlock(spinlock_t *lock)
+{
+	spin_unlock(lock);
+}
+EXPORT_SYMBOL_GPL(rust_helper_spin_unlock);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 9ff116b2eebe..ed07437d7d55 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,7 +11,7 @@ mod arc;
 pub mod lock;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
-pub use lock::mutex::Mutex;
+pub use lock::{mutex::Mutex, spinlock::SpinLock};
 
 /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
 #[repr(transparent)]
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index cec1d68bab86..bca9af2a9a5a 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -11,6 +11,7 @@ use core::{cell::UnsafeCell, marker::PhantomData, marker::PhantomPinned};
 use macros::pin_data;
 
 pub mod mutex;
+pub mod spinlock;
 
 /// The "backend" of a lock.
 ///
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
new file mode 100644
index 000000000000..a52d20fc9755
--- /dev/null
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -0,0 +1,116 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A kernel spinlock.
+//!
+//! This module allows Rust code to use the kernel's `spinlock_t`.
+
+use crate::bindings;
+
+/// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
+///
+/// It uses the name if one is given, otherwise it generates one based on the file name and line
+/// number.
+#[macro_export]
+macro_rules! new_spinlock {
+    ($inner:expr $(, $name:literal)? $(,)?) => {
+        $crate::sync::SpinLock::new(
+            $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
+    };
+}
+
+/// A spinlock.
+///
+/// Exposes the kernel's [`spinlock_t`]. When multiple CPUs attempt to lock the same spinlock, only
+/// one at a time is allowed to progress; the others will block (spinning) until the spinlock is
+/// unlocked, at which point another CPU will be allowed to make progress.
+///
+/// Instances of [`SpinLock`] need a lock class and to be pinned. The recommended way to create such
+/// instances is with the [`pin_init`](crate::pin_init) and [`new_spinlock`] macros.
+///
+/// # Examples
+///
+/// The following example shows how to declare, allocate and initialise a struct (`Example`) that
+/// contains an inner struct (`Inner`) that is protected by a spinlock.
+///
+/// ```
+/// use kernel::{init::InPlaceInit, init::PinInit, new_spinlock, pin_init, sync::SpinLock};
+///
+/// struct Inner {
+///     a: u32,
+///     b: u32,
+/// }
+///
+/// #[pin_data]
+/// struct Example {
+///     c: u32,
+///     #[pin]
+///     d: SpinLock<Inner>,
+/// }
+///
+/// impl Example {
+///     fn new() -> impl PinInit<Self> {
+///         pin_init!(Self {
+///             c: 10,
+///             d <- new_spinlock!(Inner { a: 20, b: 30 }),
+///         })
+///     }
+/// }
+///
+/// // Allocate a boxed `Example`.
+/// let e = Box::pin_init(Example::new())?;
+/// assert_eq!(e.c, 10);
+/// assert_eq!(e.d.lock().a, 20);
+/// assert_eq!(e.d.lock().b, 30);
+/// ```
+///
+/// The following example shows how to use interior mutability to modify the contents of a struct
+/// protected by a spinlock despite only having a shared reference:
+///
+/// ```
+/// use kernel::sync::SpinLock;
+///
+/// struct Example {
+///     a: u32,
+///     b: u32,
+/// }
+///
+/// fn example(m: &SpinLock<Example>) {
+///     let mut guard = m.lock();
+///     guard.a += 10;
+///     guard.b += 20;
+/// }
+/// ```
+///
+/// [`spinlock_t`]: ../../../../include/linux/spinlock.h
+pub type SpinLock<T> = super::Lock<T, SpinLockBackend>;
+
+/// A kernel `spinlock_t` lock backend.
+pub struct SpinLockBackend;
+
+// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion.
+unsafe impl super::Backend for SpinLockBackend {
+    type State = bindings::spinlock_t;
+    type GuardState = ();
+
+    unsafe fn init(
+        ptr: *mut Self::State,
+        name: *const core::ffi::c_char,
+        key: *mut bindings::lock_class_key,
+    ) {
+        // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and
+        // `key` are valid for read indefinitely.
+        unsafe { bindings::__spin_lock_init(ptr, name, key) }
+    }
+
+    unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
+        // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
+        // memory, and that it has been initialised before.
+        unsafe { bindings::spin_lock(ptr) }
+    }
+
+    unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
+        // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
+        // caller is the owner of the spinlock.
+        unsafe { bindings::spin_unlock(ptr) }
+    }
+}
-- 
2.34.1
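
Since contending CPUs busy-wait rather than sleep, a brief sketch of the usage pattern spinlocks call for: keep the critical section short and non-sleeping.

```
use kernel::sync::SpinLock;

fn set_value(v: &SpinLock<u32>, new: u32) {
    // Other CPUs spin while the guard is held, so do only the minimum
    // work under the lock and let the guard drop promptly.
    *v.lock() = new;
}
```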



* [PATCH 06/13] rust: lock: add support for `Lock::lock_irqsave`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (3 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 05/13] rust: lock: introduce `SpinLock` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 07/13] rust: lock: implement `IrqSaveBackend` for `SpinLock` Wedson Almeida Filho
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

From: Wedson Almeida Filho <walmeida@microsoft.com>

This allows locks like spinlocks and raw spinlocks to expose a
`lock_irqsave` variant in Rust that corresponds to the C version.

Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/kernel/sync/lock.rs | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index bca9af2a9a5a..491446c3a074 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -57,6 +57,29 @@ pub unsafe trait Backend {
     unsafe fn unlock(ptr: *mut Self::State, guard_state: &Self::GuardState);
 }
 
+/// The "backend" of a lock that supports the irq-save variant.
+///
+/// # Safety
+///
+/// The same requirements wrt mutual exclusion in [`Backend`] apply for acquiring the lock via
+/// [`IrqSaveBackend::lock_irqsave`].
+///
+/// Additionally, when [`IrqSaveBackend::lock_irqsave`] is used to acquire the lock, implementers
+/// must disable interrupts on lock, and restore interrupt state on unlock. Implementers may use
+/// [`Backend::GuardState`] to store state needed to keep track of the interrupt state.
+pub unsafe trait IrqSaveBackend: Backend {
+    /// Acquires the lock, making the caller its owner.
+    ///
+    /// Before acquiring the lock, it disables interrupts, and returns the previous interrupt state
+    /// as its guard state so that the guard can restore it when it is dropped.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that [`Backend::init`] has been previously called.
+    #[must_use]
+    unsafe fn lock_irqsave(ptr: *mut Self::State) -> Self::GuardState;
+}
+
 /// A mutual exclusion primitive.
 ///
 /// Exposes one of the kernel locking primitives. Which one is exposed depends on the lock backend
@@ -109,6 +132,21 @@ impl<T: ?Sized, B: Backend> Lock<T, B> {
     }
 }
 
+impl<T: ?Sized, B: IrqSaveBackend> Lock<T, B> {
+    /// Acquires the lock and gives the caller access to the data protected by it.
+    ///
+    /// Before acquiring the lock, it disables interrupts. When the guard is dropped, the interrupt
+    /// state (either enabled or disabled) is restored to its state before
+    /// [`lock_irqsave`](Self::lock_irqsave) was called.
+    pub fn lock_irqsave(&self) -> Guard<'_, T, B> {
+        // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
+        // that `init` was called.
+        let state = unsafe { B::lock_irqsave(self.state.get()) };
+        // SAFETY: The lock was just acquired.
+        unsafe { Guard::new(self, state) }
+    }
+}
+
 /// A lock guard.
 ///
 /// Allows mutual exclusion primitives that implement the `Backend` trait to automatically unlock
-- 
2.34.1
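
A sketch of the caller-side pattern this enables, assuming a backend that implements `IrqSaveBackend` (patch 07 provides one for `SpinLock`):

```
use kernel::sync::SpinLock;

fn update(s: &SpinLock<u32>) {
    // Interrupts are disabled (and the previous interrupt state saved in
    // the guard) before the lock is acquired...
    let mut guard = s.lock_irqsave();
    *guard += 1;
} // ...and the saved state is restored when the guard is dropped.
```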



* [PATCH 07/13] rust: lock: implement `IrqSaveBackend` for `SpinLock`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (4 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 06/13] rust: lock: add support for `Lock::lock_irqsave` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 08/13] rust: introduce `ARef` Wedson Almeida Filho
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

From: Wedson Almeida Filho <walmeida@microsoft.com>

This allows Rust code to use the `lock_irqsave` variant of spinlocks.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/helpers.c                    | 16 +++++++++++++
 rust/kernel/sync/lock/spinlock.rs | 38 ++++++++++++++++++++++++++-----
 2 files changed, 48 insertions(+), 6 deletions(-)

diff --git a/rust/helpers.c b/rust/helpers.c
index 05694e3f8f70..e42f5b446f92 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -59,6 +59,22 @@ void rust_helper_spin_unlock(spinlock_t *lock)
 }
 EXPORT_SYMBOL_GPL(rust_helper_spin_unlock);
 
+unsigned long rust_helper_spin_lock_irqsave(spinlock_t *lock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(lock, flags);
+
+	return flags;
+}
+EXPORT_SYMBOL_GPL(rust_helper_spin_lock_irqsave);
+
+void rust_helper_spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
+{
+	spin_unlock_irqrestore(lock, flags);
+}
+EXPORT_SYMBOL_GPL(rust_helper_spin_unlock_irqrestore);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index a52d20fc9755..34dec09a97c0 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -61,6 +61,8 @@ macro_rules! new_spinlock {
 /// assert_eq!(e.c, 10);
 /// assert_eq!(e.d.lock().a, 20);
 /// assert_eq!(e.d.lock().b, 30);
+/// assert_eq!(e.d.lock_irqsave().a, 20);
+/// assert_eq!(e.d.lock_irqsave().b, 30);
 /// ```
 ///
 /// The following example shows how to use interior mutability to modify the contents of a struct
@@ -79,6 +81,12 @@ macro_rules! new_spinlock {
 ///     guard.a += 10;
 ///     guard.b += 20;
 /// }
+///
+/// fn example2(m: &SpinLock<Example>) {
+///     let mut guard = m.lock_irqsave();
+///     guard.a += 10;
+///     guard.b += 20;
+/// }
 /// ```
 ///
 /// [`spinlock_t`]: ../../../../include/linux/spinlock.h
@@ -90,7 +98,7 @@ pub struct SpinLockBackend;
 // SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion.
 unsafe impl super::Backend for SpinLockBackend {
     type State = bindings::spinlock_t;
-    type GuardState = ();
+    type GuardState = Option<core::ffi::c_ulong>;
 
     unsafe fn init(
         ptr: *mut Self::State,
@@ -105,12 +113,30 @@ unsafe impl super::Backend for SpinLockBackend {
     unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
         // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
         // memory, and that it has been initialised before.
-        unsafe { bindings::spin_lock(ptr) }
+        unsafe { bindings::spin_lock(ptr) };
+        None
     }
 
-    unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
-        // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
-        // caller is the owner of the spinlock.
-        unsafe { bindings::spin_unlock(ptr) }
+    unsafe fn unlock(ptr: *mut Self::State, guard_state: &Self::GuardState) {
+        match guard_state {
+            // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that
+            // the caller is the owner of the spinlock.
+            Some(flags) => unsafe { bindings::spin_unlock_irqrestore(ptr, *flags) },
+            // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that
+            // the caller is the owner of the spinlock.
+            None => unsafe { bindings::spin_unlock(ptr) },
+        }
+    }
+}
+
+// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion. We use the `irqsave`
+// variant of the C lock acquisition functions to disable interrupts and retrieve the original
+// interrupt state, and the `irqrestore` variant of the lock release functions to restore the state
+// in `unlock` -- we use the guard context to determine which method was used to acquire the lock.
+unsafe impl super::IrqSaveBackend for SpinLockBackend {
+    unsafe fn lock_irqsave(ptr: *mut Self::State) -> Self::GuardState {
+        // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
+        // memory, and that it has been initialised before.
+        Some(unsafe { bindings::spin_lock_irqsave(ptr) })
     }
 }
-- 
2.34.1
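
As the SAFETY comment above explains, each guard records how it acquired the lock; a sketch of both modes coexisting on the same spinlock:

```
use kernel::sync::SpinLock;

fn both_modes(s: &SpinLock<u32>) {
    let a = s.lock();         // GuardState is None: drop calls spin_unlock().
    drop(a);
    let b = s.lock_irqsave(); // GuardState is Some(flags): drop calls
    drop(b);                  // spin_unlock_irqrestore(flags).
}
```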



* [PATCH 08/13] rust: introduce `ARef`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (5 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 07/13] rust: lock: implement `IrqSaveBackend` for `SpinLock` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30 14:17   ` Gary Guo
  2023-03-30  4:39 ` [PATCH 09/13] rust: add basic `Task` Wedson Almeida Filho
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

From: Wedson Almeida Filho <walmeida@microsoft.com>

This is an owned reference to an object that is always ref-counted. This
is meant to be used in wrappers for C types that have their own ref
counting functions, for example, tasks, files, inodes, dentries, etc.

Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/kernel/types.rs | 107 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index dbfae9bb97ce..b071730253c7 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -6,8 +6,10 @@ use crate::init::{self, PinInit};
 use alloc::boxed::Box;
 use core::{
     cell::UnsafeCell,
+    marker::PhantomData,
     mem::MaybeUninit,
     ops::{Deref, DerefMut},
+    ptr::NonNull,
 };
 
 /// Used to transfer ownership to and from foreign (non-Rust) languages.
@@ -295,6 +297,111 @@ opaque_init_funcs! {
     "Rust" manual_init4(arg1: A1, arg2: A2, arg3: A3, arg4: A4);
 }
 
+/// Types that are _always_ reference counted.
+///
+/// It allows such types to define their own custom ref increment and decrement functions.
+/// Additionally, it allows users to convert from a shared reference `&T` to an owned reference
+/// [`ARef<T>`].
+///
+/// This is usually implemented by wrappers to existing structures on the C side of the code. For
+/// Rust code, the recommendation is to use [`Arc`](crate::sync::Arc) to create reference-counted
+/// instances of a type.
+///
+/// # Safety
+///
+/// Implementers must ensure that increments to the reference count keep the object alive in memory
+/// at least until matching decrements are performed.
+///
+/// Implementers must also ensure that all instances are reference-counted. (Otherwise they
+/// won't be able to honour the requirement that [`AlwaysRefCounted::inc_ref`] keep the object
+/// alive.)
+pub unsafe trait AlwaysRefCounted {
+    /// Increments the reference count on the object.
+    fn inc_ref(&self);
+
+    /// Decrements the reference count on the object.
+    ///
+    /// Frees the object when the count reaches zero.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that there was a previous matching increment to the reference count,
+    /// and that the object is no longer used after its reference count is decremented (as it may
+    /// result in the object being freed), unless the caller owns another increment on the refcount
+    /// (e.g., it calls [`AlwaysRefCounted::inc_ref`] twice, then calls
+    /// [`AlwaysRefCounted::dec_ref`] once).
+    unsafe fn dec_ref(obj: NonNull<Self>);
+}
+
+/// An owned reference to an always-reference-counted object.
+///
+/// The object's reference count is automatically decremented when an instance of [`ARef`] is
+/// dropped. It is also automatically incremented when a new instance is created via
+/// [`ARef::clone`].
+///
+/// # Invariants
+///
+/// The pointer stored in `ptr` is non-null and valid for the lifetime of the [`ARef`] instance. In
+/// particular, the [`ARef`] instance owns an increment on the underlying object's reference count.
+pub struct ARef<T: AlwaysRefCounted> {
+    ptr: NonNull<T>,
+    _p: PhantomData<T>,
+}
+
+impl<T: AlwaysRefCounted> ARef<T> {
+    /// Creates a new instance of [`ARef`].
+    ///
+    /// It takes over an increment of the reference count on the underlying object.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that the reference count was incremented at least once, and that they
+    /// are properly relinquishing one increment. That is, if there is only one increment, callers
+    /// must not use the underlying object anymore -- it is only safe to do so via the newly
+    /// created [`ARef`].
+    pub unsafe fn from_raw(ptr: NonNull<T>) -> Self {
+        // INVARIANT: The safety requirements guarantee that the new instance now owns the
+        // increment on the refcount.
+        Self {
+            ptr,
+            _p: PhantomData,
+        }
+    }
+}
+
+impl<T: AlwaysRefCounted> Clone for ARef<T> {
+    fn clone(&self) -> Self {
+        self.inc_ref();
+        // SAFETY: We just incremented the refcount above.
+        unsafe { Self::from_raw(self.ptr) }
+    }
+}
+
+impl<T: AlwaysRefCounted> Deref for ARef<T> {
+    type Target = T;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: The type invariants guarantee that the object is valid.
+        unsafe { self.ptr.as_ref() }
+    }
+}
+
+impl<T: AlwaysRefCounted> From<&T> for ARef<T> {
+    fn from(b: &T) -> Self {
+        b.inc_ref();
+        // SAFETY: We just incremented the refcount above.
+        unsafe { Self::from_raw(NonNull::from(b)) }
+    }
+}
+
+impl<T: AlwaysRefCounted> Drop for ARef<T> {
+    fn drop(&mut self) {
+        // SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to
+        // decrement.
+        unsafe { T::dec_ref(self.ptr) };
+    }
+}
+
 /// A sum type that always holds either a value of type `L` or `R`.
 pub enum Either<L, R> {
     /// Constructs an instance of [`Either`] containing a value of type `L`.
-- 
2.34.1
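
A sketch of the refcounting behaviour the trait and `ARef` combine to give, generic over any implementer (patch 09's `Task` is the first concrete one):

```
use kernel::types::{ARef, AlwaysRefCounted};

fn demo<T: AlwaysRefCounted>(obj: &T) {
    let owned: ARef<T> = obj.into(); // `From<&T>`: calls `inc_ref`.
    let second = owned.clone();      // `Clone`: another `inc_ref`.
    drop(second);                    // `Drop`: `dec_ref`.
    drop(owned);                     // `Drop`: `dec_ref`; back where we started.
}
```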



* [PATCH 09/13] rust: add basic `Task`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (6 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 08/13] rust: introduce `ARef` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 10/13] rust: introduce `Task::current` Wedson Almeida Filho
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Peter Zijlstra

From: Wedson Almeida Filho <walmeida@microsoft.com>

It is an abstraction for C's `struct task_struct`. It implements
`AlwaysRefCounted`, so the refcount of the wrapped object is managed
safely on the Rust side.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/bindings/bindings_helper.h |  1 +
 rust/helpers.c                  | 19 +++++++++
 rust/kernel/lib.rs              |  1 +
 rust/kernel/task.rs             | 71 +++++++++++++++++++++++++++++++++
 4 files changed, 92 insertions(+)
 create mode 100644 rust/kernel/task.rs

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 75d85bd6c592..03656a44a83f 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -8,6 +8,7 @@
 
 #include <linux/slab.h>
 #include <linux/refcount.h>
+#include <linux/sched.h>
 
 /* `bindgen` gets confused at certain things. */
 const gfp_t BINDINGS_GFP_KERNEL = GFP_KERNEL;
diff --git a/rust/helpers.c b/rust/helpers.c
index e42f5b446f92..58a194042c86 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -23,6 +23,7 @@
 #include <linux/refcount.h>
 #include <linux/mutex.h>
 #include <linux/spinlock.h>
+#include <linux/sched/signal.h>
 
 __noreturn void rust_helper_BUG(void)
 {
@@ -75,6 +76,12 @@ void rust_helper_spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 }
 EXPORT_SYMBOL_GPL(rust_helper_spin_unlock_irqrestore);
 
+int rust_helper_signal_pending(struct task_struct *t)
+{
+	return signal_pending(t);
+}
+EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
+
 refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
@@ -93,6 +100,18 @@ bool rust_helper_refcount_dec_and_test(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);
 
+void rust_helper_get_task_struct(struct task_struct *t)
+{
+	get_task_struct(t);
+}
+EXPORT_SYMBOL_GPL(rust_helper_get_task_struct);
+
+void rust_helper_put_task_struct(struct task_struct *t)
+{
+	put_task_struct(t);
+}
+EXPORT_SYMBOL_GPL(rust_helper_put_task_struct);
+
 /*
  * We use `bindgen`'s `--size_t-is-usize` option to bind the C `size_t` type
  * as the Rust `usize` type, so we can use it in contexts where Rust
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index d9df77132fa2..4e1d5ba2e241 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -43,6 +43,7 @@ mod static_assert;
 pub mod std_vendor;
 pub mod str;
 pub mod sync;
+pub mod task;
 pub mod types;
 
 #[doc(hidden)]
diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
new file mode 100644
index 000000000000..8d7a8222990f
--- /dev/null
+++ b/rust/kernel/task.rs
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Tasks (threads and processes).
+//!
+//! C header: [`include/linux/sched.h`](../../../../include/linux/sched.h).
+
+use crate::bindings;
+use core::{cell::UnsafeCell, ptr};
+
+/// Wraps the kernel's `struct task_struct`.
+///
+/// # Invariants
+///
+/// Instances of this type are always ref-counted, that is, a call to `get_task_struct` ensures
+/// that the allocation remains valid at least until the matching call to `put_task_struct`.
+#[repr(transparent)]
+pub struct Task(pub(crate) UnsafeCell<bindings::task_struct>);
+
+// SAFETY: It's OK to access `Task` through references from other threads because we're either
+// accessing properties that don't change (e.g., `pid`, `group_leader`) or that are properly
+// synchronised by C code (e.g., `signal_pending`).
+unsafe impl Sync for Task {}
+
+/// The type of process identifiers (PIDs).
+type Pid = bindings::pid_t;
+
+impl Task {
+    /// Returns the group leader of the given task.
+    pub fn group_leader(&self) -> &Task {
+        // SAFETY: By the type invariant, we know that `self.0` is valid.
+        let ptr = unsafe { *ptr::addr_of!((*self.0.get()).group_leader) };
+
+        // SAFETY: The lifetime of the returned task reference is tied to the lifetime of `self`,
+        // and given that a task has a reference to its group leader, we know it must be valid for
+        // the lifetime of the returned task reference.
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Returns the PID of the given task.
+    pub fn pid(&self) -> Pid {
+        // SAFETY: By the type invariant, we know that `self.0` is valid.
+        unsafe { *ptr::addr_of!((*self.0.get()).pid) }
+    }
+
+    /// Determines whether the given task has pending signals.
+    pub fn signal_pending(&self) -> bool {
+        // SAFETY: By the type invariant, we know that `self.0` is valid.
+        unsafe { bindings::signal_pending(self.0.get()) != 0 }
+    }
+
+    /// Wakes up the task.
+    pub fn wake_up(&self) {
+        // SAFETY: By the type invariant, we know that `self.0.get()` is non-null and valid.
+        // And `wake_up_process` is safe to be called for any valid task, even if the task is
+        // running.
+        unsafe { bindings::wake_up_process(self.0.get()) };
+    }
+}
+
+// SAFETY: The type invariants guarantee that `Task` is always ref-counted.
+unsafe impl crate::types::AlwaysRefCounted for Task {
+    fn inc_ref(&self) {
+        // SAFETY: The existence of a shared reference means that the refcount is nonzero.
+        unsafe { bindings::get_task_struct(self.0.get()) };
+    }
+
+    unsafe fn dec_ref(obj: ptr::NonNull<Self>) {
+        // SAFETY: The safety requirements guarantee that the refcount is nonzero.
+        unsafe { bindings::put_task_struct(obj.cast().as_ptr()) }
+    }
+}
-- 
2.34.1
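
A sketch of how the accessors compose, given some `&Task` (patch 10 adds `Task::current` as a way to obtain one):

```
use kernel::task::Task;

fn leader_has_pending_signal(t: &Task) -> bool {
    // Both calls read `task_struct` state directly; no extra locking or
    // refcounting is needed for the lifetime of the `&Task` borrow.
    t.group_leader().signal_pending()
}
```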



* [PATCH 10/13] rust: introduce `Task::current`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (7 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 09/13] rust: add basic `Task` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-31  2:47   ` Gary Guo
  2023-03-30  4:39 ` [PATCH 11/13] rust: lock: add `Guard::do_unlocked` Wedson Almeida Filho
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Peter Zijlstra

From: Wedson Almeida Filho <walmeida@microsoft.com>

This allows Rust code to get a reference to the current task without
having to increment the refcount, but still guaranteeing memory safety.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/helpers.c      |  6 ++++
 rust/kernel/task.rs | 83 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/rust/helpers.c b/rust/helpers.c
index 58a194042c86..96441744030e 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -100,6 +100,12 @@ bool rust_helper_refcount_dec_and_test(refcount_t *r)
 }
 EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);
 
+struct task_struct *rust_helper_get_current(void)
+{
+	return current;
+}
+EXPORT_SYMBOL_GPL(rust_helper_get_current);
+
 void rust_helper_get_task_struct(struct task_struct *t)
 {
 	get_task_struct(t);
diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
index 8d7a8222990f..8b2b56ba9c6d 100644
--- a/rust/kernel/task.rs
+++ b/rust/kernel/task.rs
@@ -5,7 +5,7 @@
 //! C header: [`include/linux/sched.h`](../../../../include/linux/sched.h).
 
 use crate::bindings;
-use core::{cell::UnsafeCell, ptr};
+use core::{cell::UnsafeCell, marker::PhantomData, ops::Deref, ptr};
 
 /// Wraps the kernel's `struct task_struct`.
 ///
@@ -13,6 +13,46 @@ use core::{cell::UnsafeCell, ptr};
 ///
 /// Instances of this type are always ref-counted, that is, a call to `get_task_struct` ensures
 /// that the allocation remains valid at least until the matching call to `put_task_struct`.
+///
+/// # Examples
+///
+/// The following is an example of getting the PID of the current thread with zero additional cost
+/// when compared to the C version:
+///
+/// ```
+/// use kernel::task::Task;
+///
+/// let pid = Task::current().pid();
+/// ```
+///
+/// Getting the PID of the current process, also zero additional cost:
+///
+/// ```
+/// use kernel::task::Task;
+///
+/// let pid = Task::current().group_leader().pid();
+/// ```
+///
+/// Getting the current task and storing it in some struct. The reference count is automatically
+/// incremented when creating `State` and decremented when it is dropped:
+///
+/// ```
+/// use kernel::{task::Task, types::ARef};
+///
+/// struct State {
+///     creator: ARef<Task>,
+///     index: u32,
+/// }
+///
+/// impl State {
+///     fn new() -> Self {
+///         Self {
+///             creator: Task::current().into(),
+///             index: 0,
+///         }
+///     }
+/// }
+/// ```
 #[repr(transparent)]
 pub struct Task(pub(crate) UnsafeCell<bindings::task_struct>);
 
@@ -25,6 +65,20 @@ unsafe impl Sync for Task {}
 type Pid = bindings::pid_t;
 
 impl Task {
+    /// Returns a task reference for the currently executing task/thread.
+    pub fn current<'a>() -> TaskRef<'a> {
+        // SAFETY: Just an FFI call with no additional safety requirements.
+        let ptr = unsafe { bindings::get_current() };
+
+        TaskRef {
+            // SAFETY: If the current thread is still running, the current task is valid. Given
+            // that `TaskRef` is not `Send`, we know it cannot be transferred to another thread
+            // (where it could potentially outlive the caller).
+            task: unsafe { &*ptr.cast() },
+            _not_send: PhantomData,
+        }
+    }
+
     /// Returns the group leader of the given task.
     pub fn group_leader(&self) -> &Task {
         // SAFETY: By the type invariant, we know that `self.0` is valid.
@@ -69,3 +123,30 @@ unsafe impl crate::types::AlwaysRefCounted for Task {
         unsafe { bindings::put_task_struct(obj.cast().as_ptr()) }
     }
 }
+
+/// A wrapper for a shared reference to [`Task`] that isn't [`Send`].
+///
+/// We make this explicitly not [`Send`] so that we can use it to represent the current thread
+/// without having to increment/decrement the task's reference count.
+///
+/// # Invariants
+///
+/// The wrapped [`Task`] remains valid for the lifetime of the object.
+pub struct TaskRef<'a> {
+    task: &'a Task,
+    _not_send: PhantomData<*mut ()>,
+}
+
+impl Deref for TaskRef<'_> {
+    type Target = Task;
+
+    fn deref(&self) -> &Self::Target {
+        self.task
+    }
+}
+
+impl From<TaskRef<'_>> for crate::types::ARef<Task> {
+    fn from(t: TaskRef<'_>) -> Self {
+        t.deref().into()
+    }
+}
-- 
2.34.1
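
A sketch contrasting the two ways of holding the current task that this patch enables:

```
use kernel::{task::Task, types::ARef};

fn current_demo() {
    // Borrowed: no refcount change. `TaskRef` is `!Send`, so it cannot
    // escape the thread it refers to.
    let current = Task::current();
    let _pid = current.pid();

    // Owned: the conversion increments the refcount, so the result can be
    // stored beyond the current context (as in the `State` example above).
    let owned: ARef<Task> = current.into();
    drop(owned); // Decrements the refcount.
}
```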



* [PATCH 11/13] rust: lock: add `Guard::do_unlocked`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (8 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 10/13] rust: introduce `Task::current` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30  4:39 ` [PATCH 12/13] rust: sync: introduce `CondVar` Wedson Almeida Filho
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

From: Wedson Almeida Filho <walmeida@microsoft.com>

It releases the lock, executes some function provided by the caller,
then reacquires the lock. This is preparation for the implementation of
condvars, which will sleep between unlocking and relocking.

We need an explicit `relock` method for primitives like `SpinLock` that
have an irqsave variant: we use the guard state to determine if the lock
was originally acquired with the regular `lock` function or
`lock_irqsave`.

Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/kernel/sync/lock.rs          | 25 +++++++++++++++++++++++++
 rust/kernel/sync/lock/spinlock.rs | 17 +++++++++++++++--
 2 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index 491446c3a074..ae20277c39c8 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -22,6 +22,9 @@ pub mod spinlock;
 ///
 /// - Implementers must ensure that only one thread/CPU may access the protected data once the lock
 /// is owned, that is, between calls to `lock` and `unlock`.
+/// - Implementers must also ensure that `relock` uses the same locking method as the original
+/// lock operation. For example, it should disable interrupts if [`IrqSaveBackend::lock_irqsave`]
+/// is used.
 pub unsafe trait Backend {
     /// The state required by the lock.
     type State;
@@ -55,6 +58,17 @@ pub unsafe trait Backend {
     ///
     /// It must only be called by the current owner of the lock.
     unsafe fn unlock(ptr: *mut Self::State, guard_state: &Self::GuardState);
+
+    /// Reacquires the lock, making the caller its owner.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `state` comes from a previous call to [`Backend::lock`] (or
+    /// variant) that has been unlocked with [`Backend::unlock`] and will be relocked now.
+    unsafe fn relock(ptr: *mut Self::State, guard_state: &mut Self::GuardState) {
+        // SAFETY: The safety requirements ensure that the lock is initialised.
+        *guard_state = unsafe { Self::lock(ptr) };
+    }
 }
 
 /// The "backend" of a lock that supports the irq-save variant.
@@ -162,6 +176,17 @@ pub struct Guard<'a, T: ?Sized, B: Backend> {
 // SAFETY: `Guard` is sync when the data protected by the lock is also sync.
 unsafe impl<T: Sync + ?Sized, B: Backend> Sync for Guard<'_, T, B> {}
 
+impl<T: ?Sized, B: Backend> Guard<'_, T, B> {
+    #[allow(dead_code)]
+    pub(crate) fn do_unlocked(&mut self, cb: impl FnOnce()) {
+        // SAFETY: The caller owns the lock, so it is safe to unlock it.
+        unsafe { B::unlock(self.lock.state.get(), &self.state) };
+        cb();
+        // SAFETY: The lock was just unlocked above and is being relocked now.
+        unsafe { B::relock(self.lock.state.get(), &mut self.state) };
+    }
+}
+
 impl<T: ?Sized, B: Backend> core::ops::Deref for Guard<'_, T, B> {
     type Target = T;
 
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index 34dec09a97c0..e2a2f68e6d93 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -4,6 +4,7 @@
 //!
 //! This module allows Rust code to use the kernel's `spinlock_t`.
 
+use super::IrqSaveBackend;
 use crate::bindings;
 
 /// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
@@ -95,7 +96,8 @@ pub type SpinLock<T> = super::Lock<T, SpinLockBackend>;
 /// A kernel `spinlock_t` lock backend.
 pub struct SpinLockBackend;
 
-// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion.
+// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion. `relock` uses the
+// same scheme as `unlock` to figure out which locking method was used originally.
 unsafe impl super::Backend for SpinLockBackend {
     type State = bindings::spinlock_t;
     type GuardState = Option<core::ffi::c_ulong>;
@@ -127,13 +129,24 @@ unsafe impl super::Backend for SpinLockBackend {
             None => unsafe { bindings::spin_unlock(ptr) },
         }
     }
+
+    unsafe fn relock(ptr: *mut Self::State, guard_state: &mut Self::GuardState) {
+        let _ = match guard_state {
+            // SAFETY: The safety requirements of this function ensure that `ptr` has been
+            // initialised.
+            None => unsafe { Self::lock(ptr) },
+            // SAFETY: The safety requirements of this function ensure that `ptr` has been
+            // initialised.
+            Some(_) => unsafe { Self::lock_irqsave(ptr) },
+        };
+    }
 }
 
 // SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion. We use the `irqsave`
 // variant of the C lock acquisition functions to disable interrupts and retrieve the original
 // interrupt state, and the `irqrestore` variant of the lock release functions to restore the state
 // in `unlock` -- we use the guard context to determine which method was used to acquire the lock.
-unsafe impl super::IrqSaveBackend for SpinLockBackend {
+unsafe impl IrqSaveBackend for SpinLockBackend {
     unsafe fn lock_irqsave(ptr: *mut Self::State) -> Self::GuardState {
         // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
         // memory, and that it has been initialised before.
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (9 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 11/13] rust: lock: add `Guard::do_unlocked` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30 12:52   ` Peter Zijlstra
  2023-03-30 12:59   ` Peter Zijlstra
  2023-03-30  4:39 ` [PATCH 13/13] rust: sync: introduce `LockedBy` Wedson Almeida Filho
                   ` (2 subsequent siblings)
  13 siblings, 2 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

From: Wedson Almeida Filho <walmeida@microsoft.com>

This is the traditional condition variable or monitor synchronisation
primitive. It is implemented with C's `wait_queue_head_t`.

It allows users to release a lock and go to sleep while guaranteeing
that notifications won't be missed. This is achieved by enqueuing a wait
entry before releasing the lock.
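
On the caller side this pairs with the usual condvar wait loop; a minimal
sketch (illustrative only, using the `value`/`value_changed` fields from
the example in the documentation below):

    let mut guard = e.value.lock();
    while *guard != v {
        // Releases the lock, sleeps, and reacquires the lock on wake-up.
        // Because the wait entry is enqueued before the lock is released,
        // a concurrent notification cannot be missed.
        e.value_changed.wait_uninterruptible(&mut guard);
    }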

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/bindings/bindings_helper.h |   1 +
 rust/helpers.c                  |   7 ++
 rust/kernel/sync.rs             |   2 +
 rust/kernel/sync/condvar.rs     | 178 ++++++++++++++++++++++++++++++++
 rust/kernel/sync/lock.rs        |   1 -
 5 files changed, 188 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/condvar.rs

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 03656a44a83f..50e7a76d5455 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -8,6 +8,7 @@
 
 #include <linux/slab.h>
 #include <linux/refcount.h>
+#include <linux/wait.h>
 #include <linux/sched.h>
 
 /* `bindgen` gets confused at certain things. */
diff --git a/rust/helpers.c b/rust/helpers.c
index 96441744030e..8ff2559c1572 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -24,6 +24,7 @@
 #include <linux/mutex.h>
 #include <linux/spinlock.h>
 #include <linux/sched/signal.h>
+#include <linux/wait.h>
 
 __noreturn void rust_helper_BUG(void)
 {
@@ -76,6 +77,12 @@ void rust_helper_spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 }
 EXPORT_SYMBOL_GPL(rust_helper_spin_unlock_irqrestore);
 
+void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
+{
+	init_wait(wq_entry);
+}
+EXPORT_SYMBOL_GPL(rust_helper_init_wait);
+
 int rust_helper_signal_pending(struct task_struct *t)
 {
 	return signal_pending(t);
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index ed07437d7d55..d6dd0e2c1678 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -8,9 +8,11 @@
 use crate::types::Opaque;
 
 mod arc;
+mod condvar;
 pub mod lock;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
+pub use condvar::CondVar;
 pub use lock::{mutex::Mutex, spinlock::SpinLock};
 
 /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
new file mode 100644
index 000000000000..3f528fc7fa48
--- /dev/null
+++ b/rust/kernel/sync/condvar.rs
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A condition variable.
+//!
+//! This module allows Rust code to use the kernel's [`struct wait_queue_head`] as a condition
+//! variable.
+
+use super::{lock::Backend, lock::Guard, LockClassKey};
+use crate::{bindings, init::PinInit, pin_init, str::CStr, task::Task, types::Opaque};
+use core::marker::PhantomPinned;
+use macros::pin_data;
+
+/// Creates a [`CondVar`] initialiser with the given name and a newly-created lock class.
+#[macro_export]
+macro_rules! new_condvar {
+    ($($name:literal)?) => {
+        $crate::sync::CondVar::new($crate::optional_name!($($name)?), $crate::static_lock_class!())
+    };
+}
+
+/// A conditional variable.
+///
+/// Exposes the kernel's [`struct wait_queue_head`] as a condition variable. It allows the caller to
+/// atomically release the given lock and go to sleep. It reacquires the lock when it wakes up. And
+/// it wakes up when notified by another thread (via [`CondVar::notify_one`] or
+/// [`CondVar::notify_all`]) or because the thread received a signal. It may also wake up
+/// spuriously.
+///
+/// Instances of [`CondVar`] need a lock class and to be pinned. The recommended way to create such
+/// instances is with the [`pin_init`](crate::pin_init) and [`new_condvar`] macros.
+///
+/// # Examples
+///
+/// The following is an example of using a condvar with a mutex:
+///
+/// ```
+/// use kernel::sync::{CondVar, Mutex};
+/// use kernel::{new_condvar, new_mutex};
+///
+/// #[pin_data]
+/// pub struct Example {
+///     #[pin]
+///     value: Mutex<u32>,
+///
+///     #[pin]
+///     value_changed: CondVar,
+/// }
+///
+/// /// Waits for `e.value` to become `v`.
+/// fn wait_for_value(e: &Example, v: u32) {
+///     let mut guard = e.value.lock();
+///     while *guard != v {
+///         e.value_changed.wait_uninterruptible(&mut guard);
+///     }
+/// }
+///
+/// /// Increments `e.value` and notifies all potential waiters.
+/// fn increment(e: &Example) {
+///     *e.value.lock() += 1;
+///     e.value_changed.notify_all();
+/// }
+///
+/// /// Allocates a new boxed `Example`.
+/// fn new_example() -> Result<Pin<Box<Example>>> {
+///     Box::pin_init(pin_init!(Example {
+///         value <- new_mutex!(0),
+///         value_changed <- new_condvar!(),
+///     }))
+/// }
+/// ```
+///
+/// [`struct wait_queue_head`]: ../../../include/linux/wait.h
+#[pin_data]
+pub struct CondVar {
+    #[pin]
+    pub(crate) wait_list: Opaque<bindings::wait_queue_head>,
+
+    /// A condvar needs to be pinned because it contains a [`struct list_head`] that is
+    /// self-referential, so it cannot be safely moved once it is initialised.
+    #[pin]
+    _pin: PhantomPinned,
+}
+
+// SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on any thread.
+#[allow(clippy::non_send_fields_in_send_ty)]
+unsafe impl Send for CondVar {}
+
+// SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on multiple threads
+// concurrently.
+unsafe impl Sync for CondVar {}
+
+impl CondVar {
+    /// Constructs a new condvar initialiser.
+    #[allow(clippy::new_ret_no_self)]
+    pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> {
+        pin_init!(Self {
+            _pin: PhantomPinned,
+            // SAFETY: `__init_waitqueue_head` initialises the waitqueue head, and both `name` and
+            // `key` have static lifetimes so they live indefinitely.
+            wait_list <- unsafe {
+                Opaque::ffi_init2(
+                    bindings::__init_waitqueue_head,
+                    name.as_char_ptr(),
+                    key.as_ptr(),
+                )
+            },
+        })
+    }
+
+    fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guard<'_, T, B>) {
+        let wait = Opaque::<bindings::wait_queue_entry>::uninit();
+
+        // SAFETY: `wait` points to valid memory.
+        unsafe { bindings::init_wait(wait.get()) };
+
+        // SAFETY: Both `wait` and `wait_list` point to valid memory.
+        unsafe {
+            bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
+        };
+
+        // SAFETY: No arguments, switches to another thread.
+        guard.do_unlocked(|| unsafe { bindings::schedule() });
+
+        // SAFETY: Both `wait` and `wait_list` point to valid memory.
+        unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
+    }
+
+    /// Releases the lock and waits for a notification in interruptible mode.
+    ///
+    /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
+    /// thread to sleep, reacquiring the lock on wake up. It wakes up when notified by
+    /// [`CondVar::notify_one`] or [`CondVar::notify_all`], or when the thread receives a signal.
+    /// It may also wake up spuriously.
+    ///
+    /// Returns whether there is a signal pending.
+    #[must_use = "wait returns if a signal is pending, so the caller must check the return value"]
+    pub fn wait<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) -> bool {
+        self.wait_internal(bindings::TASK_INTERRUPTIBLE, guard);
+        Task::current().signal_pending()
+    }
+
+    /// Releases the lock and waits for a notification in uninterruptible mode.
+    ///
+    /// Similar to [`CondVar::wait`], except that the wait is not interruptible. That is, the
+    /// thread won't wake up due to signals. It may, however, wake up spuriously.
+    pub fn wait_uninterruptible<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) {
+        self.wait_internal(bindings::TASK_UNINTERRUPTIBLE, guard)
+    }
+
+    /// Calls the kernel function to notify the appropriate number of threads with the given flags.
+    fn notify(&self, count: i32, flags: u32) {
+        // SAFETY: `wait_list` points to valid memory.
+        unsafe {
+            bindings::__wake_up(
+                self.wait_list.get(),
+                bindings::TASK_NORMAL,
+                count,
+                flags as _,
+            )
+        };
+    }
+
+    /// Wakes a single waiter up, if any.
+    ///
+    /// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
+    /// completely (as opposed to automatically waking up the next waiter).
+    pub fn notify_one(&self) {
+        self.notify(1, 0);
+    }
+
+    /// Wakes all waiters up, if any.
+    ///
+    /// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
+    /// completely (as opposed to automatically waking up the next waiter).
+    pub fn notify_all(&self) {
+        self.notify(0, 0);
+    }
+}
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index ae20277c39c8..f52ba9ab1b70 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -177,7 +177,6 @@ pub struct Guard<'a, T: ?Sized, B: Backend> {
 unsafe impl<T: Sync + ?Sized, B: Backend> Sync for Guard<'_, T, B> {}
 
 impl<T: ?Sized, B: Backend> Guard<'_, T, B> {
-    #[allow(dead_code)]
     pub(crate) fn do_unlocked(&mut self, cb: impl FnOnce()) {
         // SAFETY: The caller owns the lock, so it is safe to unlock it.
         unsafe { B::unlock(self.lock.state.get(), &self.state) };
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 13/13] rust: sync: introduce `LockedBy`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (10 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 12/13] rust: sync: introduce `CondVar` Wedson Almeida Filho
@ 2023-03-30  4:39 ` Wedson Almeida Filho
  2023-03-30 11:28   ` Benno Lossin
  2023-03-30 11:10 ` [PATCH 01/13] rust: sync: introduce `LockClassKey` Gary Guo
  2023-03-31  7:28 ` Alice Ryhl
  13 siblings, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30  4:39 UTC (permalink / raw)
  To: rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

From: Wedson Almeida Filho <walmeida@microsoft.com>

This allows us to have data protected by a lock despite not being
wrapped by it. Access is granted by providing evidence that the lock is
held by the caller.
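
A minimal sketch of the resulting access pattern (illustrative only; `dir`
and `file` match the example in the documentation added below):

    let guard = dir.inner.lock();           // lock the owner
    let inner = file.inner.access(&guard);  // the guard is the evidence
    pr_info!("{}", inner.bytes_used);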

Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
---
 rust/kernel/sync.rs           |   2 +
 rust/kernel/sync/lock.rs      |   2 +-
 rust/kernel/sync/locked_by.rs | 126 ++++++++++++++++++++++++++++++++++
 3 files changed, 129 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/locked_by.rs

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index d6dd0e2c1678..f8edb6d0d794 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,10 +10,12 @@ use crate::types::Opaque;
 mod arc;
 mod condvar;
 pub mod lock;
+mod locked_by;
 
 pub use arc::{Arc, ArcBorrow, UniqueArc};
 pub use condvar::CondVar;
 pub use lock::{mutex::Mutex, spinlock::SpinLock};
+pub use locked_by::LockedBy;
 
 /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
 #[repr(transparent)]
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index f52ba9ab1b70..51c996ca2109 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -111,7 +111,7 @@ pub struct Lock<T: ?Sized, B: Backend> {
     _pin: PhantomPinned,
 
     /// The data protected by the lock.
-    data: UnsafeCell<T>,
+    pub(crate) data: UnsafeCell<T>,
 }
 
 // SAFETY: `Lock` can be transferred across thread boundaries iff the data it protects can.
diff --git a/rust/kernel/sync/locked_by.rs b/rust/kernel/sync/locked_by.rs
new file mode 100644
index 000000000000..cbfd4e84b770
--- /dev/null
+++ b/rust/kernel/sync/locked_by.rs
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A wrapper for data protected by a lock that does not wrap it.
+
+use super::{lock::Backend, lock::Lock};
+use core::{cell::UnsafeCell, ptr};
+
+/// Allows access to some data to be serialised by a lock that does not wrap it.
+///
+/// In most cases, data protected by a lock is wrapped by the appropriate lock type, e.g.,
+/// [`super::Mutex`] or [`super::SpinLock`]. [`LockedBy`] is meant for cases when this is not
+/// possible. For example, if a container has a lock and some data in the contained elements needs
+/// to be protected by the same lock.
+///
+/// [`LockedBy`] wraps the data in lieu of another locking primitive, and only allows access to it
+/// when the caller shows evidence that the 'external' lock is locked.
+///
+/// # Examples
+///
+/// The following is an example for illustrative purposes: `InnerDirectory::bytes_used` is an
+/// aggregate of all `InnerFile::bytes_used` and must be kept consistent; so we wrap `InnerFile` in
+/// a `LockedBy` so that it shares a lock with `InnerDirectory`. This allows us to enforce at
+/// compile-time that access to `InnerFile` is only granted when an `InnerDirectory` is also
+/// locked; we enforce at run time that the right `InnerDirectory` is locked.
+///
+/// ```
+/// use kernel::sync::{LockedBy, Mutex};
+///
+/// struct InnerFile {
+///     bytes_used: u64,
+/// }
+///
+/// struct File {
+///     _ino: u32,
+///     inner: LockedBy<InnerFile, InnerDirectory>,
+/// }
+///
+/// struct InnerDirectory {
+///     /// The sum of the bytes used by all files.
+///     bytes_used: u64,
+///     _files: Vec<File>,
+/// }
+///
+/// struct Directory {
+///     _ino: u32,
+///     inner: Mutex<InnerDirectory>,
+/// }
+///
+/// /// Prints `bytes_used` from both the directory and file.
+/// fn print_bytes_used(dir: &Directory, file: &File) {
+///     let guard = dir.inner.lock();
+///     let inner_file = file.inner.access(&guard);
+///     pr_info!("{} {}", guard.bytes_used, inner_file.bytes_used);
+/// }
+///
+/// /// Increments `bytes_used` for both the directory and file.
+/// fn inc_bytes_used(dir: &Directory, file: &File) {
+///     let mut guard = dir.inner.lock();
+///     guard.bytes_used += 10;
+///
+///     let file_inner = file.inner.access_mut(&mut guard);
+///     file_inner.bytes_used += 10;
+/// }
+///
+/// /// Creates a new file.
+/// fn new_file(ino: u32, dir: &Directory) -> File {
+///     File {
+///         _ino: ino,
+///         inner: LockedBy::new(&dir.inner, InnerFile { bytes_used: 0 }),
+///     }
+/// }
+/// ```
+pub struct LockedBy<T: ?Sized, U: ?Sized> {
+    owner: *const U,
+    data: UnsafeCell<T>,
+}
+
+// SAFETY: `LockedBy` can be transferred across thread boundaries iff the data it protects can.
+unsafe impl<T: ?Sized + Send, U: ?Sized> Send for LockedBy<T, U> {}
+
+// SAFETY: `LockedBy` serialises the interior mutability it provides, so it is `Sync` as long as the
+// data it protects is `Send`.
+unsafe impl<T: ?Sized + Send, U: ?Sized> Sync for LockedBy<T, U> {}
+
+impl<T, U: ?Sized> LockedBy<T, U> {
+    /// Constructs a new instance of [`LockedBy`].
+    ///
+    /// It stores a raw pointer to the owner that is never dereferenced. It is only used to ensure
+    /// that the right owner is being used to access the protected data. If the owner is freed, the
+    /// data becomes inaccessible; if another instance of the owner is allocated *on the same
+    /// memory location*, the data becomes accessible again: none of this affects memory safety
+    /// because in any case at most one thread (or CPU) can access the protected data at a time.
+    pub fn new(owner: &Lock<U, impl Backend>, data: T) -> Self {
+        Self {
+            owner: owner.data.get(),
+            data: UnsafeCell::new(data),
+        }
+    }
+}
+
+impl<T: ?Sized, U: ?Sized> LockedBy<T, U> {
+    /// Returns a reference to the protected data when the caller provides evidence (via a
+    /// reference) that the owner is locked.
+    pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
+        if !ptr::eq(owner, self.owner) {
+            panic!("mismatched owners");
+        }
+
+        // SAFETY: `owner` is evidence that the owner is locked.
+        unsafe { &*self.data.get() }
+    }
+
+    /// Returns a mutable reference to the protected data when the caller provides evidence (via a
+    /// mutable owner) that the owner is locked mutably.
+    ///
+    /// Showing a mutable reference to the owner is sufficient because we know no other references
+    /// can exist to it.
+    pub fn access_mut<'a>(&'a self, owner: &'a mut U) -> &'a mut T {
+        if !ptr::eq(owner, self.owner) {
+            panic!("mismatched owners");
+        }
+
+        // SAFETY: `owner` is evidence that there is only one reference to the owner.
+        unsafe { &mut *self.data.get() }
+    }
+}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/13] rust: sync: introduce `LockClassKey`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (11 preceding siblings ...)
  2023-03-30  4:39 ` [PATCH 13/13] rust: sync: introduce `LockedBy` Wedson Almeida Filho
@ 2023-03-30 11:10 ` Gary Guo
  2023-03-31  7:28 ` Alice Ryhl
  13 siblings, 0 replies; 42+ messages in thread
From: Gary Guo @ 2023-03-30 11:10 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

On Thu, 30 Mar 2023 01:39:42 -0300
Wedson Almeida Filho <wedsonaf@gmail.com> wrote:

> From: Wedson Almeida Filho <walmeida@microsoft.com>
> 
> It is a wrapper around C's `lock_class_key`, which is used by the
> synchronisation primitives that are checked with lockdep. This is in
> preparation for introducing Rust abstractions for these primitives.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
> ---
>  rust/kernel/sync.rs | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 45 insertions(+)
> 
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index 33da23e3076d..84a4b560828c 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -5,6 +5,51 @@
>  //! This module contains the kernel APIs related to synchronisation that have been ported or
>  //! wrapped for usage by Rust code in the kernel.
>  
> +use crate::types::Opaque;
> +
>  mod arc;
>  
>  pub use arc::{Arc, ArcBorrow, UniqueArc};
> +
> +/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
> +#[repr(transparent)]
> +pub struct LockClassKey(Opaque<bindings::lock_class_key>);
> +
> +// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
> +// provides its own synchronization.
> +unsafe impl Sync for LockClassKey {}
> +
> +impl LockClassKey {
> +    /// Creates a new lock class key.
> +    pub const fn new() -> Self {
> +        Self(Opaque::uninit())
> +    }
> +
> +    #[allow(dead_code)]
> +    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
> +        self.0.get()
> +    }
> +}
> +
> +/// Defines a new static lock class and returns a pointer to it.
> +#[doc(hidden)]
> +#[macro_export]
> +macro_rules! static_lock_class {
> +    () => {{
> +        static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new();
> +        &CLASS
> +    }};
> +}
> +
> +/// Returns the given string, if one is provided, otherwise generateis one based on the source code

Typo.

> +/// location.
> +#[doc(hidden)]
> +#[macro_export]
> +macro_rules! optional_name {
> +    () => {
> +        $crate::c_str!(core::concat!(core::file!(), ":", core::line!()))
> +    };
> +    ($name:literal) => {
> +        $crate::c_str!($name)
> +    };
> +}


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 13/13] rust: sync: introduce `LockedBy`
  2023-03-30  4:39 ` [PATCH 13/13] rust: sync: introduce `LockedBy` Wedson Almeida Filho
@ 2023-03-30 11:28   ` Benno Lossin
  2023-03-30 11:45     ` Benno Lossin
  2023-03-30 20:44     ` Wedson Almeida Filho
  0 siblings, 2 replies; 42+ messages in thread
From: Benno Lossin @ 2023-03-30 11:28 UTC (permalink / raw)
  To: Wedson Almeida Filho, rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

On 30.03.23 06:39, Wedson Almeida Filho wrote:
> From: Wedson Almeida Filho <walmeida@microsoft.com>
>
> This allows us to have data protected by a lock despite not being
> wrapped by it. Access is granted by providing evidence that the lock is
> held by the caller.
>
> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
> ---
>   rust/kernel/sync.rs           |   2 +
>   rust/kernel/sync/lock.rs      |   2 +-
>   rust/kernel/sync/locked_by.rs | 126 ++++++++++++++++++++++++++++++++++
>   3 files changed, 129 insertions(+), 1 deletion(-)
>   create mode 100644 rust/kernel/sync/locked_by.rs
>
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index d6dd0e2c1678..f8edb6d0d794 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -10,10 +10,12 @@ use crate::types::Opaque;
>   mod arc;
>   mod condvar;
>   pub mod lock;
> +mod locked_by;
>
>   pub use arc::{Arc, ArcBorrow, UniqueArc};
>   pub use condvar::CondVar;
>   pub use lock::{mutex::Mutex, spinlock::SpinLock};
> +pub use locked_by::LockedBy;
>
>   /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
>   #[repr(transparent)]
> diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
> index f52ba9ab1b70..51c996ca2109 100644
> --- a/rust/kernel/sync/lock.rs
> +++ b/rust/kernel/sync/lock.rs
> @@ -111,7 +111,7 @@ pub struct Lock<T: ?Sized, B: Backend> {
>       _pin: PhantomPinned,
>
>       /// The data protected by the lock.
> -    data: UnsafeCell<T>,
> +    pub(crate) data: UnsafeCell<T>,
>   }
>
>   // SAFETY: `Lock` can be transferred across thread boundaries iff the data it protects can.
> diff --git a/rust/kernel/sync/locked_by.rs b/rust/kernel/sync/locked_by.rs
> new file mode 100644
> index 000000000000..cbfd4e84b770
> --- /dev/null
> +++ b/rust/kernel/sync/locked_by.rs
> @@ -0,0 +1,126 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! A wrapper for data protected by a lock that does not wrap it.
> +
> +use super::{lock::Backend, lock::Lock};
> +use core::{cell::UnsafeCell, ptr};
> +
> +/// Allows access to some data to be serialised by a lock that does not wrap it.
> +///
> +/// In most cases, data protected by a lock is wrapped by the appropriate lock type, e.g.,
> +/// [`super::Mutex`] or [`super::SpinLock`]. [`LockedBy`] is meant for cases when this is not
> +/// possible. For example, if a container has a lock and some data in the contained elements needs
> +/// to be protected by the same lock.
> +///
> +/// [`LockedBy`] wraps the data in lieu of another locking primitive, and only allows access to it
> +/// when the caller shows evidence that the 'external' lock is locked.
> +///
> +/// # Examples
> +///
> +/// The following is an example for illustrative purposes: `InnerDirectory::bytes_used` is an
> +/// aggregate of all `InnerFile::bytes_used` and must be kept consistent; so we wrap `InnerFile` in
> +/// a `LockedBy` so that it shares a lock with `InnerDirectory`. This allows us to enforce at
> +/// compile-time that access to `InnerFile` is only granted when an `InnerDirectory` is also
> +/// locked; we enforce at run time that the right `InnerDirectory` is locked.
> +///
> +/// ```
> +/// use kernel::sync::{LockedBy, Mutex};
> +///
> +/// struct InnerFile {
> +///     bytes_used: u64,
> +/// }
> +///
> +/// struct File {
> +///     _ino: u32,
> +///     inner: LockedBy<InnerFile, InnerDirectory>,
> +/// }
> +///
> +/// struct InnerDirectory {
> +///     /// The sum of the bytes used by all files.
> +///     bytes_used: u64,
> +///     _files: Vec<File>,
> +/// }
> +///
> +/// struct Directory {
> +///     _ino: u32,
> +///     inner: Mutex<InnerDirectory>,
> +/// }
> +///
> +/// /// Prints `bytes_used` from both the directory and file.
> +/// fn print_bytes_used(dir: &Directory, file: &File) {
> +///     let guard = dir.inner.lock();
> +///     let inner_file = file.inner.access(&guard);
> +///     pr_info!("{} {}", guard.bytes_used, inner_file.bytes_used);
> +/// }
> +///
> +/// /// Increments `bytes_used` for both the directory and file.
> +/// fn inc_bytes_used(dir: &Directory, file: &File) {
> +///     let mut guard = dir.inner.lock();
> +///     guard.bytes_used += 10;
> +///
> +///     let file_inner = file.inner.access_mut(&mut guard);

Missing deref (`*`) in front of `guard`.

> +///     file_inner.bytes_used += 10;
> +/// }
> +///
> +/// /// Creates a new file.
> +/// fn new_file(ino: u32, dir: &Directory) -> File {
> +///     File {
> +///         _ino: ino,
> +///         inner: LockedBy::new(&dir.inner, InnerFile { bytes_used: 0 }),
> +///     }
> +/// }
> +/// ```
> +pub struct LockedBy<T: ?Sized, U: ?Sized> {
> +    owner: *const U,
> +    data: UnsafeCell<T>,
> +}
> +
> +// SAFETY: `LockedBy` can be transferred across thread boundaries iff the data it protects can.
> +unsafe impl<T: ?Sized + Send, U: ?Sized> Send for LockedBy<T, U> {}
> +
> +// SAFETY: `LockedBy` serialises the interior mutability it provides, so it is `Sync` as long as the
> +// data it protects is `Send`.
> +unsafe impl<T: ?Sized + Send, U: ?Sized> Sync for LockedBy<T, U> {}
> +
> +impl<T, U: ?Sized> LockedBy<T, U> {
> +    /// Constructs a new instance of [`LockedBy`].
> +    ///
> +    /// It stores a raw pointer to the owner that is never dereferenced. It is only used to ensure
> +    /// that the right owner is being used to access the protected data. If the owner is freed, the
> +    /// data becomes inaccessible; if another instance of the owner is allocated *on the same
> +    /// memory location*, the data becomes accessible again: none of this affects memory safety
> +    /// because in any case at most one thread (or CPU) can access the protected data at a time.
> +    pub fn new(owner: &Lock<U, impl Backend>, data: T) -> Self {
> +        Self {
> +            owner: owner.data.get(),
> +            data: UnsafeCell::new(data),
> +        }
> +    }
> +}
> +
> +impl<T: ?Sized, U: ?Sized> LockedBy<T, U> {
> +    /// Returns a reference to the protected data when the caller provides evidence (via a
> +    /// reference) that the owner is locked.
> +    pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
> +        if !ptr::eq(owner, self.owner) {
> +            panic!("mismatched owners");
> +        }
> +
> +        // SAFETY: `owner` is evidence that the owner is locked.
> +        unsafe { &*self.data.get() }
> +    }
> +
> +    /// Returns a mutable reference to the protected data when the caller provides evidence (via a
> +    /// mutable owner) that the owner is locked mutably.
> +    ///
> +    /// Showing a mutable reference to the owner is sufficient because we know no other references
> +    /// can exist to it.
> +    pub fn access_mut<'a>(&'a self, owner: &'a mut U) -> &'a mut T {
> +        if !ptr::eq(owner, self.owner) {
> +            panic!("mismatched owners");
> +        }
> +
> +        // SAFETY: `owner` is evidence that there is only one reference to the owner.
> +        unsafe { &mut *self.data.get() }
> +    }
> +}
> --
> 2.34.1
>

What happens if the owner type `U` is a ZST? Then the address
comparison will not work, since references to ZSTs can share the same address.
For example:

     struct Outer {
         mtx: Mutex<()>,
         inners: Vec<Inner>,
     }

     struct Inner {
         count: LockedBy<usize, ()>,
     }

     fn evil(inner: &Inner) {
         // can create two mutable references at the same time:
         let a = inner.count.access_mut(&mut ());
         let b = inner.count.access_mut(&mut ());
         core::mem::swap(a, b);
     }

Maybe prevent this by checking for `assert!(mem::size_of::<U>() != 0);`
in the `new` function? Though I am not sure if a ZST is the only way for
values to share addresses.
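
A sketch of that check (note: this assumes tightening the owner bound to
`U: Sized` in `new`, since `mem::size_of` does not apply to `U: ?Sized`):

     impl<T, U> LockedBy<T, U> {
         pub fn new(owner: &Lock<U, impl Backend>, data: T) -> Self {
             // Reject zero-sized owner types: references to distinct ZST
             // values may share an address, defeating the identity check.
             assert!(core::mem::size_of::<U>() != 0);
             Self {
                 owner: owner.data.get(),
                 data: UnsafeCell::new(data),
             }
         }
     }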

--
Cheers,
Benno



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 13/13] rust: sync: introduce `LockedBy`
  2023-03-30 11:28   ` Benno Lossin
@ 2023-03-30 11:45     ` Benno Lossin
  2023-03-30 21:04       ` Wedson Almeida Filho
  2023-03-30 20:44     ` Wedson Almeida Filho
  1 sibling, 1 reply; 42+ messages in thread
From: Benno Lossin @ 2023-03-30 11:45 UTC (permalink / raw)
  To: Wedson Almeida Filho, rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

On 30.03.23 13:28, Benno Lossin wrote:
> What happens if the the protected data `U` is a ZST? Then the address
> comparing will not work, since all ZST references have the same address.
> For example:
>
>       struct Outer {
>           mtx: Mutex<()>,
>           inners: Vec<Inner>,
>       }
>
>       struct Inner {
>           count: LockedBy<usize, ()>,
>       }
>
>       fn evil(inner: &Inner) {
>           // can create two mutable references at the same time:
>           let a = inner.count.access_mut(&mut ());
>           let b = inner.count.access_mut(&mut ());
>           core::mem::swap(a, b);
>       }

Sorry, the example I provided does not actually work, since `&mut ()`
refers to a place on the stack. I found a new example that shows ZSTs
are still problematic:

     struct Outer {
         mtx1: Mutex<()>,
         mtx2: Mutex<()>,
         inners: Vec<Inner>,
     }

     struct Inner {
         count: LockedBy<usize, ()>,
     }

     fn new_inner(outer: &Outer) -> Inner {
         Inner { count: LockedBy::new(&outer.mtx1, 0) }
     }

     fn evil(outer: &Outer) {
         let inner = outer.inners.get(0).unwrap();
         let mut guard1 = outer.mtx1.lock();
         let mut guard2 = outer.mtx2.lock();
         // The pointees of `guard1` and `guard2` have the same address.
         let ref1 = inner.count.access_mut(&mut *guard1);
         let ref2 = inner.count.access_mut(&mut *guard2);
         mem::swap(ref1, ref2);
     }

--
Cheers,
Benno



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-03-30  4:39 ` [PATCH 12/13] rust: sync: introduce `CondVar` Wedson Almeida Filho
@ 2023-03-30 12:52   ` Peter Zijlstra
  2023-03-30 14:43     ` Wedson Almeida Filho
  2023-03-30 12:59   ` Peter Zijlstra
  1 sibling, 1 reply; 42+ messages in thread
From: Peter Zijlstra @ 2023-03-30 12:52 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 01:39:53AM -0300, Wedson Almeida Filho wrote:
> From: Wedson Almeida Filho <walmeida@microsoft.com>
> 
> This is the traditional condition variable or monitor synchronisation
> primitive. It is implemented with C's `wait_queue_head_t`.
> 
> It allows users to release a lock and go to sleep while guaranteeing
> that notifications won't be missed. This is achieved by enqueuing a wait
> entry before releasing the lock.
> 

> +/// A conditional variable.
> +///
> +/// Exposes the kernel's [`struct wait_queue_head`] as a condition variable. It allows the caller to
> +/// atomically release the given lock and go to sleep. It reacquires the lock when it wakes up. And
> +/// it wakes up when notified by another thread (via [`CondVar::notify_one`] or
> +/// [`CondVar::notify_all`]) or because the thread received a signal. It may also wake up
> +/// spuriously.

Urgh so wide :-/

But no, threads can *always* and for any reason, have spurious wakeups.

Also, is this hard tied to mutex? If so, you should probably use swait
instead of wait.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-03-30  4:39 ` [PATCH 12/13] rust: sync: introduce `CondVar` Wedson Almeida Filho
  2023-03-30 12:52   ` Peter Zijlstra
@ 2023-03-30 12:59   ` Peter Zijlstra
  2023-03-30 14:56     ` Wedson Almeida Filho
  1 sibling, 1 reply; 42+ messages in thread
From: Peter Zijlstra @ 2023-03-30 12:59 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 01:39:53AM -0300, Wedson Almeida Filho wrote:

> +impl CondVar {
> +    /// Constructs a new condvar initialiser.
> +    #[allow(clippy::new_ret_no_self)]
> +    pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> {
> +        pin_init!(Self {
> +            _pin: PhantomPinned,
> +            // SAFETY: `__init_waitqueue_head` initialises the waitqueue head, and both `name` and
> +            // `key` have static lifetimes so they live indefinitely.
> +            wait_list <- unsafe {
> +                Opaque::ffi_init2(
> +                    bindings::__init_waitqueue_head,
> +                    name.as_char_ptr(),
> +                    key.as_ptr(),
> +                )
> +            },
> +        })
> +    }
> +
> +    fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guard<'_, T, B>) {
> +        let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> +
> +        // SAFETY: `wait` points to valid memory.
> +        unsafe { bindings::init_wait(wait.get()) };
> +
> +        // SAFETY: Both `wait` and `wait_list` point to valid memory.
> +        unsafe {
> +            bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
> +        };

I can't read this rust gunk, but where is the condition test gone?

Also, where is the loop gone to?

> +
> +        // SAFETY: No arguments, switches to another thread.
> +        guard.do_unlocked(|| unsafe { bindings::schedule() });
> +
> +        // SAFETY: Both `wait` and `wait_list` point to valid memory.
> +        unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
> +    }
> +
> +    /// Releases the lock and waits for a notification in interruptible mode.
> +    ///
> +    /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> +    /// thread to sleep, reacquiring the lock on wake up. It wakes up when notified by
> +    /// [`CondVar::notify_one`] or [`CondVar::notify_all`], or when the thread receives a signal.
> +    /// It may also wake up spuriously.
> +    ///
> +    /// Returns whether there is a signal pending.
> +    #[must_use = "wait returns if a signal is pending, so the caller must check the return value"]
> +    pub fn wait<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) -> bool {
> +        self.wait_internal(bindings::TASK_INTERRUPTIBLE, guard);
> +        Task::current().signal_pending()
> +    }
> +
> +    /// Releases the lock and waits for a notification in uninterruptible mode.
> +    ///
> +    /// Similar to [`CondVar::wait`], except that the wait is not interruptible. That is, the
> > +    /// thread won't wake up due to signals. It may, however, wake up spuriously.
> +    pub fn wait_uninterruptible<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) {
> +        self.wait_internal(bindings::TASK_UNINTERRUPTIBLE, guard)
> +    }
> +
> +    /// Calls the kernel function to notify the appropriate number of threads with the given flags.
> +    fn notify(&self, count: i32, flags: u32) {
> +        // SAFETY: `wait_list` points to valid memory.
> +        unsafe {
> +            bindings::__wake_up(
> +                self.wait_list.get(),
> +                bindings::TASK_NORMAL,
> +                count,
> +                flags as _,
> +            )
> +        };
> +    }
> +
> +    /// Wakes a single waiter up, if any.
> +    ///
> +    /// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
> +    /// completely (as opposed to automatically waking up the next waiter).
> +    pub fn notify_one(&self) {
> +        self.notify(1, 0);
> +    }
> +
> +    /// Wakes all waiters up, if any.
> +    ///
> +    /// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
> +    /// completely (as opposed to automatically waking up the next waiter).
> +    pub fn notify_all(&self) {
> +        self.notify(0, 0);
> +    }
> +}

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-03-30  4:39 ` [PATCH 03/13] rust: lock: introduce `Mutex` Wedson Almeida Filho
@ 2023-03-30 13:01   ` Peter Zijlstra
  2023-03-30 18:47     ` Boqun Feng
  0 siblings, 1 reply; 42+ messages in thread
From: Peter Zijlstra @ 2023-03-30 13:01 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> From: Wedson Almeida Filho <walmeida@microsoft.com>
> 
> This is the `struct mutex` lock backend and allows Rust code to use the
> kernel mutex idiomatically.

What, if anything, are the plans to support the various lockdep
annotations? Idem for the spinlock thing in the other patch I suppose.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 08/13] rust: introduce `ARef`
  2023-03-30  4:39 ` [PATCH 08/13] rust: introduce `ARef` Wedson Almeida Filho
@ 2023-03-30 14:17   ` Gary Guo
  0 siblings, 0 replies; 42+ messages in thread
From: Gary Guo @ 2023-03-30 14:17 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

On Thu, 30 Mar 2023 01:39:49 -0300
Wedson Almeida Filho <wedsonaf@gmail.com> wrote:

> From: Wedson Almeida Filho <walmeida@microsoft.com>
> 
> This is an owned reference to an object that is always ref-counted. This
> is meant to be used in wrappers for C types that have their own ref
> counting functions, for example, tasks, files, inodes, dentries, etc.
> 
> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>

Reviewed-by: Gary Guo <gary@garyguo.net>

> ---
>  rust/kernel/types.rs | 107 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 107 insertions(+)
> 
> diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
> index dbfae9bb97ce..b071730253c7 100644
> --- a/rust/kernel/types.rs
> +++ b/rust/kernel/types.rs
> @@ -6,8 +6,10 @@ use crate::init::{self, PinInit};
>  use alloc::boxed::Box;
>  use core::{
>      cell::UnsafeCell,
> +    marker::PhantomData,
>      mem::MaybeUninit,
>      ops::{Deref, DerefMut},
> +    ptr::NonNull,
>  };
>  
>  /// Used to transfer ownership to and from foreign (non-Rust) languages.
> @@ -295,6 +297,111 @@ opaque_init_funcs! {
>      "Rust" manual_init4(arg1: A1, arg2: A2, arg3: A3, arg4: A4);
>  }
>  
> +/// Types that are _always_ reference counted.
> +///
> +/// It allows such types to define their own custom ref increment and decrement functions.
> +/// Additionally, it allows users to convert from a shared reference `&T` to an owned reference
> +/// [`ARef<T>`].
> +///
> +/// This is usually implemented by wrappers to existing structures on the C side of the code. For
> +/// Rust code, the recommendation is to use [`Arc`](crate::sync::Arc) to create reference-counted
> +/// instances of a type.
> +///
> +/// # Safety
> +///
> +/// Implementers must ensure that increments to the reference count keep the object alive in memory
> +/// at least until matching decrements are performed.
> +///
> +/// Implementers must also ensure that all instances are reference-counted. (Otherwise they
> +/// won't be able to honour the requirement that [`AlwaysRefCounted::inc_ref`] keep the object
> +/// alive.)
> +pub unsafe trait AlwaysRefCounted {
> +    /// Increments the reference count on the object.
> +    fn inc_ref(&self);
> +
> +    /// Decrements the reference count on the object.
> +    ///
> +    /// Frees the object when the count reaches zero.
> +    ///
> +    /// # Safety
> +    ///
> +    /// Callers must ensure that there was a previous matching increment to the reference count,
> +    /// and that the object is no longer used after its reference count is decremented (as it may
> +    /// result in the object being freed), unless the caller owns another increment on the refcount
> +    /// (e.g., it calls [`AlwaysRefCounted::inc_ref`] twice, then calls
> +    /// [`AlwaysRefCounted::dec_ref`] once).
> +    unsafe fn dec_ref(obj: NonNull<Self>);
> +}
> +
> +/// An owned reference to an always-reference-counted object.
> +///
> +/// The object's reference count is automatically decremented when an instance of [`ARef`] is
> +/// dropped. It is also automatically incremented when a new instance is created via
> +/// [`ARef::clone`].
> +///
> +/// # Invariants
> +///
> +/// The pointer stored in `ptr` is non-null and valid for the lifetime of the [`ARef`] instance. In
> +/// particular, the [`ARef`] instance owns an increment on the underlying object's reference count.
> +pub struct ARef<T: AlwaysRefCounted> {
> +    ptr: NonNull<T>,
> +    _p: PhantomData<T>,
> +}
> +
> +impl<T: AlwaysRefCounted> ARef<T> {
> +    /// Creates a new instance of [`ARef`].
> +    ///
> +    /// It takes over an increment of the reference count on the underlying object.
> +    ///
> +    /// # Safety
> +    ///
> +    /// Callers must ensure that the reference count was incremented at least once, and that they
> +    /// are properly relinquishing one increment. That is, if there is only one increment, callers
> +    /// must not use the underlying object anymore -- it is only safe to do so via the newly
> +    /// created [`ARef`].
> +    pub unsafe fn from_raw(ptr: NonNull<T>) -> Self {
> +        // INVARIANT: The safety requirements guarantee that the new instance now owns the
> +        // increment on the refcount.
> +        Self {
> +            ptr,
> +            _p: PhantomData,
> +        }
> +    }
> +}
> +
> +impl<T: AlwaysRefCounted> Clone for ARef<T> {
> +    fn clone(&self) -> Self {
> +        self.inc_ref();
> +        // SAFETY: We just incremented the refcount above.
> +        unsafe { Self::from_raw(self.ptr) }
> +    }
> +}
> +
> +impl<T: AlwaysRefCounted> Deref for ARef<T> {
> +    type Target = T;
> +
> +    fn deref(&self) -> &Self::Target {
> +        // SAFETY: The type invariants guarantee that the object is valid.
> +        unsafe { self.ptr.as_ref() }
> +    }
> +}
> +
> +impl<T: AlwaysRefCounted> From<&T> for ARef<T> {
> +    fn from(b: &T) -> Self {
> +        b.inc_ref();
> +        // SAFETY: We just incremented the refcount above.
> +        unsafe { Self::from_raw(NonNull::from(b)) }
> +    }
> +}
> +
> +impl<T: AlwaysRefCounted> Drop for ARef<T> {
> +    fn drop(&mut self) {
> +        // SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to
> +        // decrement.
> +        unsafe { T::dec_ref(self.ptr) };
> +    }
> +}
> +
>  /// A sum type that always holds either a value of type `L` or `R`.
>  pub enum Either<L, R> {
>      /// Constructs an instance of [`Either`] containing a value of type `L`.


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-03-30 12:52   ` Peter Zijlstra
@ 2023-03-30 14:43     ` Wedson Almeida Filho
  0 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30 14:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 02:52:23PM +0200, Peter Zijlstra wrote:
> On Thu, Mar 30, 2023 at 01:39:53AM -0300, Wedson Almeida Filho wrote:
> > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > 
> > This is the traditional condition variable or monitor synchronisation
> > primitive. It is implemented with C's `wait_queue_head_t`.
> > 
> > It allows users to release a lock and go to sleep while guaranteeing
> > that notifications won't be missed. This is achieved by enqueuing a wait
> > entry before releasing the lock.
> > 
> 
> > +/// A conditional variable.
> > +///
> > +/// Exposes the kernel's [`struct wait_queue_head`] as a condition variable. It allows the caller to
> > +/// atomically release the given lock and go to sleep. It reacquires the lock when it wakes up. And
> > +/// it wakes up when notified by another thread (via [`CondVar::notify_one`] or
> > +/// [`CondVar::notify_all`]) or because the thread received a signal. It may also wake up
> > +/// spuriously.
> 
> Urgh so wide :-/

Thanks for reviewing :)

> But no, threads can *always* and for any reason, have spurious wakeups.

I don't believe I said otherwise. Is there anything in the text above you'd like to see changed?

> Also, is this hard tied to mutex? If so, you should probably use swait
> instead of wait.

This is not tied to mutex; it works with any lock.

Cheers,
-Wedson

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-03-30 12:59   ` Peter Zijlstra
@ 2023-03-30 14:56     ` Wedson Almeida Filho
  2023-04-03  8:59       ` Peter Zijlstra
  0 siblings, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30 14:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 02:59:27PM +0200, Peter Zijlstra wrote:
> On Thu, Mar 30, 2023 at 01:39:53AM -0300, Wedson Almeida Filho wrote:
> 
> > +    fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guard<'_, T, B>) {
> > +        let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> > +
> > +        // SAFETY: `wait` points to valid memory.
> > +        unsafe { bindings::init_wait(wait.get()) };
> > +
> > +        // SAFETY: Both `wait` and `wait_list` point to valid memory.
> > +        unsafe {
> > +            bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
> > +        };
> 
> I can't read this rust gunk, but where is the condition test gone?
> 
> Also, where is the loop gone to?

They're both at the caller. The usage of condition variables is something like:

while guard.value != v {
    condvar.wait_uninterruptible(&mut guard);
}

(Note that this is not specific to the kernel or to Rust: this is how condvars
work in general. You'll find this in any textbook on the topic.)

In the implementation of wait_internal(), we add the local wait entry to the
wait queue _before_ releasing the lock (i.e., before the test result can
change), so we guarantee that we don't miss wake up attempts.
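
Condensed, the ordering inside wait_internal() is:

    // Enqueue the wait entry while the lock is still held:
    unsafe { bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _) };
    // Release the lock, sleep, then reacquire the lock:
    guard.do_unlocked(|| unsafe { bindings::schedule() });
    // Dequeue once we're running again:
    unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };

Since the condition is only ever changed with the lock held, a notifier
that changes it and then calls notify_one()/notify_all() is guaranteed to
find the wait entry already queued, so the wake-up cannot be lost.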

Thanks,
-Wedson

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-03-30 13:01   ` Peter Zijlstra
@ 2023-03-30 18:47     ` Boqun Feng
  2023-03-30 18:51       ` [DRAFT 1/2] locking/selftest: Add test infrastructure for Rust locking APIs Boqun Feng
                         ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Boqun Feng @ 2023-03-30 18:47 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wedson Almeida Filho, rust-for-linux, Miguel Ojeda, Alex Gaynor,
	Gary Guo, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:
> On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > 
> > This is the `struct mutex` lock backend and allows Rust code to use the
> > kernel mutex idiomatically.
> 
> What, if anything, are the plans to support the various lockdep
> annotations? Idem for the spinlock thing in the other patch I suppose.

FWIW:

*	At the init stage, SpinLock and Mutex in Rust use initializers
	that are aware of the lockdep, so everything (lockdep_map and
	lock_class) is all set up.

*	At acquire or release time, Rust locks just use ffi to call C
	functions that have lockdep annotations in them, so lockdep
	should just work.

In fact, I shared the same worry as you, so I have already been working
on adding lockdep selftests for the Rust lock APIs. I will send them
shortly, although they are just drafts.

Regards,
Boqun

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [DRAFT 1/2] locking/selftest: Add test infrastructure for Rust locking APIs
  2023-03-30 18:47     ` Boqun Feng
@ 2023-03-30 18:51       ` Boqun Feng
  2023-03-30 18:51         ` [DRAFT 2/2] locking/selftest: Add AA deadlock selftest for Mutex and SpinLock Boqun Feng
  2023-03-30 18:56       ` [PATCH 03/13] rust: lock: introduce `Mutex` Boqun Feng
  2023-04-03  8:20       ` Peter Zijlstra
  2 siblings, 1 reply; 42+ messages in thread
From: Boqun Feng @ 2023-03-30 18:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wedson Almeida Filho, rust-for-linux, Miguel Ojeda, Alex Gaynor,
	Boqun Feng, Gary Guo, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 MAINTAINERS                  |  1 +
 lib/Makefile                 |  3 +++
 lib/locking-selftest.c       |  9 +++++++++
 lib/rust_locking_selftest.rs | 12 ++++++++++++
 4 files changed, 25 insertions(+)
 create mode 100644 lib/rust_locking_selftest.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index 8d5bc223f305..c1878e18f98a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12060,6 +12060,7 @@ F:	include/linux/seqlock.h
 F:	include/linux/spinlock*.h
 F:	kernel/locking/
 F:	lib/locking*.[ch]
+F:	lib/rust_locking_selftest.rs
 X:	kernel/locking/locktorture.c
 
 LOGICAL DISK MANAGER SUPPORT (LDM, Windows 2000/XP/Vista Dynamic Disks)
diff --git a/lib/Makefile b/lib/Makefile
index baf2821f7a00..940374c08edd 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -148,6 +148,9 @@ obj-$(CONFIG_GENERIC_PCI_IOMAP) += pci_iomap.o
 obj-$(CONFIG_HAS_IOMEM) += iomap_copy.o devres.o
 obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o
 obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
+ifdef CONFIG_RUST
+obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += rust_locking_selftest.o
+endif
 
 lib-y += logic_pio.o
 
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 8d24279fad05..9ef3ad92bc47 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -2854,6 +2854,11 @@ static void hardirq_deadlock_softirq_not_deadlock(void)
 	HARDIRQ_ENABLE();
 }
 
+#ifdef CONFIG_RUST
+void rust_locking_test(void);
+#else
+#define rust_locking_test()
+#endif
 void locking_selftest(void)
 {
 	/*
@@ -3010,6 +3015,10 @@ void locking_selftest(void)
 		printk("---------------------------------\n");
 		debug_locks = 1;
 	}
+
+	/* Rust locking API tests */
+	rust_locking_test();
+
 	lockdep_set_selftest_task(NULL);
 	debug_locks_silent = 0;
 }
diff --git a/lib/rust_locking_selftest.rs b/lib/rust_locking_selftest.rs
new file mode 100644
index 000000000000..61560a2f3c6b
--- /dev/null
+++ b/lib/rust_locking_selftest.rs
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Selftests for Rust locking APIs.
+
+use kernel::prelude::*;
+const __LOG_PREFIX: &[u8] = b"locking selftest\0";
+
+/// Entry point for tests.
+#[no_mangle]
+pub extern "C" fn rust_locking_test() {
+    pr_info!("Selftests for Rust locking APIs");
+}
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [DRAFT 2/2] locking/selftest: Add AA deadlock selftest for Mutex and SpinLock
  2023-03-30 18:51       ` [DRAFT 1/2] locking/selftest: Add test infrastructure for Rust locking APIs Boqun Feng
@ 2023-03-30 18:51         ` Boqun Feng
  0 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2023-03-30 18:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wedson Almeida Filho, rust-for-linux, Miguel Ojeda, Alex Gaynor,
	Boqun Feng, Gary Guo, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 lib/locking-selftest.c       |  3 +-
 lib/rust_locking_selftest.rs | 99 ++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 9ef3ad92bc47..a4830e3cc998 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -60,6 +60,7 @@ __setup("debug_locks_verbose=", setup_debug_locks_verbose);
 #define LOCKTYPE_RTMUTEX 0x20
 #define LOCKTYPE_LL	0x40
 #define LOCKTYPE_SPECIAL 0x80
+#define LOCKTYPE_RUST	0x100
 
 static struct ww_acquire_ctx t, t2;
 static struct ww_mutex o, o2, o3;
@@ -1427,7 +1428,7 @@ static int testcase_successes;
 static int expected_testcase_failures;
 static int unexpected_testcase_failures;
 
-static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
+void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
 {
 	int saved_preempt_count = preempt_count();
 #ifdef CONFIG_PREEMPT_RT
diff --git a/lib/rust_locking_selftest.rs b/lib/rust_locking_selftest.rs
index 61560a2f3c6b..c050edf2ac9a 100644
--- a/lib/rust_locking_selftest.rs
+++ b/lib/rust_locking_selftest.rs
@@ -2,11 +2,110 @@
 
 //! Selftests for Rust locking APIs.
 
+use kernel::pr_cont;
 use kernel::prelude::*;
 const __LOG_PREFIX: &[u8] = b"locking selftest\0";
 
+extern "C" {
+    fn dotest(
+        testcase_fn: extern "C" fn(),
+        expected: core::ffi::c_int,
+        lockclass_mask: core::ffi::c_int,
+    );
+}
+
+/// Same as the definition in lib/locking-selftest.c
+#[allow(dead_code)]
+enum Expectation {
+    Failure = 0,
+    Success = 1,
+    Timeout = 2,
+}
+
+trait LockTest {
+    const EXPECTED: Expectation;
+    const MASK: i32;
+
+    fn test();
+}
+
+extern "C" fn bridge<T: LockTest>() {
+    T::test();
+}
+
+fn test<T: LockTest>() {
+    pr_cont!("\n");
+    pr_cont!("{}: ", core::any::type_name::<T>());
+    unsafe {
+        dotest(bridge::<T>, T::EXPECTED as core::ffi::c_int, T::MASK);
+    }
+    pr_cont!("\n");
+}
+
+struct SpinLockAATest;
+
+impl LockTest for SpinLockAATest {
+    const EXPECTED: Expectation = Expectation::Failure;
+    const MASK: i32 = 0x100; // TODO
+
+    fn test() {
+        use kernel::static_lock_class;
+        use kernel::sync::SpinLock;
+        use kernel::{c_str, stack_pin_init};
+
+        let key = static_lock_class!();
+        let name = c_str!("A1");
+
+        stack_pin_init!(
+            let a1 = SpinLock::new(0, name, key)
+        );
+
+        stack_pin_init!(
+            let a2 = SpinLock::new(0, name, key)
+        );
+
+        let a1 = a1.unwrap();
+        let a2 = a2.unwrap();
+
+        let _x = a1.lock();
+        let _y = a2.lock();
+    }
+}
+
+struct MutexAATest;
+
+impl LockTest for MutexAATest {
+    const EXPECTED: Expectation = Expectation::Failure;
+    const MASK: i32 = 0x100; // TODO
+
+    fn test() {
+        use kernel::static_lock_class;
+        use kernel::sync::Mutex;
+        use kernel::{c_str, stack_pin_init};
+
+        let key = static_lock_class!();
+        let name = c_str!("A1");
+
+        stack_pin_init!(
+            let a1 = Mutex::new(0, name, key)
+        );
+
+        stack_pin_init!(
+            let a2 = Mutex::new(0, name, key)
+        );
+
+        let a1 = a1.unwrap();
+        let a2 = a2.unwrap();
+
+        let _x = a1.lock();
+        let _y = a2.lock();
+    }
+}
+
 /// Entry point for tests.
 #[no_mangle]
 pub extern "C" fn rust_locking_test() {
     pr_info!("Selftests for Rust locking APIs");
+    test::<SpinLockAATest>();
+    test::<MutexAATest>();
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-03-30 18:47     ` Boqun Feng
  2023-03-30 18:51       ` [DRAFT 1/2] locking/selftest: Add test infrastructure for Rust locking APIs Boqun Feng
@ 2023-03-30 18:56       ` Boqun Feng
  2023-04-03  8:20       ` Peter Zijlstra
  2 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2023-03-30 18:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wedson Almeida Filho, rust-for-linux, Miguel Ojeda, Alex Gaynor,
	Gary Guo, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:
> On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:
> > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> > > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > > 
> > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > kernel mutex idiomatically.
> > 
> > What, if anything, are the plans to support the various lockdep
> > annotations? Idem for the spinlock thing in the other patch I suppose.
> 
> FWIW:
> 
> *	At the init stage, SpinLock and Mutex in Rust use initializers
> 	that are aware of the lockdep, so everything (lockdep_map and
> 	lock_class) is all set up.
> 
> *	At acquire or release time, Rust locks just use ffi to call C
> 	functions that have lockdep annotations in them, so lockdep
> 	should just work.
> 
> In fact, I shared some same worry as you, so I already work on adding
> lockdep selftests for Rust lock APIs, will send them shortly, although
> they are just draft.
> 

Needless to say, the tests show that lockdep works for deadlock
detection (although currently they cover only simple cases):

	[...] locking selftest: Selftests for Rust locking APIs
	[...] rust_locking_selftest::SpinLockAATest: 
	[...] 
	[...] ============================================
	[...] WARNING: possible recursive locking detected
	[...] 6.3.0-rc1-00049-gee35790bd43e-dirty #99 Not tainted
	[...] --------------------------------------------
	[...] swapper/0/0 is trying to acquire lock:
	[...] ffffffff8c603e30 (A1){+.+.}-{2:2}, at: _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...] 
	[...] but task is already holding lock:
	[...] ffffffff8c603de0 (A1){+.+.}-{2:2}, at: _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...] 
	[...] other info that might help us debug this:
	[...]  Possible unsafe locking scenario:
	[...] 
	[...]        CPU0
	[...]        ----
	[...]   lock(A1);
	[...]   lock(A1);
	[...] 
	[...]  *** DEADLOCK ***
	[...] 
	[...]  May be due to missing lock nesting notation
	[...] 
	[...] 1 lock held by swapper/0/0:
	[...]  #0: ffffffff8c603de0 (A1){+.+.}-{2:2}, at: _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...] 
	[...] stack backtrace:
	[...] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.3.0-rc1-00049-gee35790bd43e-dirty #99
	[...] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.1-1-1 04/01/2014
	[...] Call Trace:
	[...]  <TASK>
	[...]  dump_stack_lvl+0x6d/0xa0
	[...]  __lock_acquire+0x825/0x2e20
	[...]  ? __lock_acquire+0x626/0x2e20
	[...]  ? prb_read_valid+0x24/0x50
	[...]  ? printk_get_next_message+0xf6/0x380
	[...]  ? _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...]  lock_acquire+0x109/0x2c0
	[...]  ? _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...]  _raw_spin_lock+0x2e/0x40
	[...]  ? _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...]  _RNvXNtNtNtCs1t6xtuX2C8s_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend4lock+0x6/0x10
	[...]  _RNvXCsaDWbe1gW6fC_21rust_locking_selftestNtB2_14SpinLockAATestNtB2_8LockTest4test+0xa5/0xe0
	[...]  ? prb_read_valid+0x24/0x50
	[...]  dotest+0x5a/0x8d0
	[...]  rust_locking_test+0xf8/0x210
	[...]  ? _printk+0x58/0x80
	[...]  ? local_lock_release+0x60/0x60
	[...]  locking_selftest+0x328f/0x32b0
	[...]  start_kernel+0x285/0x420
	[...]  secondary_startup_64_no_verify+0xe1/0xeb
	[...]  </TASK>
	[...]   ok  | lockclass mask: 100, debug_locks: 0, expected: 0

Regards,
Boqun

> Regards,
> Boqun

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 13/13] rust: sync: introduce `LockedBy`
  2023-03-30 11:28   ` Benno Lossin
  2023-03-30 11:45     ` Benno Lossin
@ 2023-03-30 20:44     ` Wedson Almeida Filho
  1 sibling, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30 20:44 UTC (permalink / raw)
  To: Benno Lossin
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

Hey Benno,

Thanks for reviewing!

On Thu, 30 Mar 2023 at 08:29, Benno Lossin <y86-dev@protonmail.com> wrote:
>
> On 30.03.23 06:39, Wedson Almeida Filho wrote:
> > From: Wedson Almeida Filho <walmeida@microsoft.com>
> >
> > This allows us to have data protected by a lock despite not being
> > wrapped by it. Access is granted by providing evidence that the lock is
> > held by the caller.
> >
> > Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
> > ---
> >   rust/kernel/sync.rs           |   2 +
> >   rust/kernel/sync/lock.rs      |   2 +-
> >   rust/kernel/sync/locked_by.rs | 126 ++++++++++++++++++++++++++++++++++
> >   3 files changed, 129 insertions(+), 1 deletion(-)
> >   create mode 100644 rust/kernel/sync/locked_by.rs
> >
> > diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> > index d6dd0e2c1678..f8edb6d0d794 100644
> > --- a/rust/kernel/sync.rs
> > +++ b/rust/kernel/sync.rs
> > @@ -10,10 +10,12 @@ use crate::types::Opaque;
> >   mod arc;
> >   mod condvar;
> >   pub mod lock;
> > +mod locked_by;
> >
> >   pub use arc::{Arc, ArcBorrow, UniqueArc};
> >   pub use condvar::CondVar;
> >   pub use lock::{mutex::Mutex, spinlock::SpinLock};
> > +pub use locked_by::LockedBy;
> >
> >   /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
> >   #[repr(transparent)]
> > diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
> > index f52ba9ab1b70..51c996ca2109 100644
> > --- a/rust/kernel/sync/lock.rs
> > +++ b/rust/kernel/sync/lock.rs
> > @@ -111,7 +111,7 @@ pub struct Lock<T: ?Sized, B: Backend> {
> >       _pin: PhantomPinned,
> >
> >       /// The data protected by the lock.
> > -    data: UnsafeCell<T>,
> > +    pub(crate) data: UnsafeCell<T>,
> >   }
> >
> >   // SAFETY: `Lock` can be transferred across thread boundaries iff the data it protects can.
> > diff --git a/rust/kernel/sync/locked_by.rs b/rust/kernel/sync/locked_by.rs
> > new file mode 100644
> > index 000000000000..cbfd4e84b770
> > --- /dev/null
> > +++ b/rust/kernel/sync/locked_by.rs
> > @@ -0,0 +1,126 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! A wrapper for data protected by a lock that does not wrap it.
> > +
> > +use super::{lock::Backend, lock::Lock};
> > +use core::{cell::UnsafeCell, ptr};
> > +
> > +/// Allows access to some data to be serialised by a lock that does not wrap it.
> > +///
> > +/// In most cases, data protected by a lock is wrapped by the appropriate lock type, e.g.,
> > +/// [`super::Mutex`] or [`super::SpinLock`]. [`LockedBy`] is meant for cases when this is not
> > +/// possible. For example, if a container has a lock and some data in the contained elements needs
> > +/// to be protected by the same lock.
> > +///
> > +/// [`LockedBy`] wraps the data in lieu of another locking primitive, and only allows access to it
> > +/// when the caller shows evidence that the 'external' lock is locked.
> > +///
> > +/// # Examples
> > +///
> > +/// The following is an example for illustrative purposes: `InnerDirectory::bytes_used` is an
> > +/// aggregate of all `InnerFile::bytes_used` and must be kept consistent; so we wrap `InnerFile` in
> > +/// a `LockedBy` so that it shares a lock with `InnerDirectory`. This allows us to enforce at
> > +/// compile-time that access to `InnerFile` is only granted when an `InnerDirectory` is also
> > +/// locked; we enforce at run time that the right `InnerDirectory` is locked.
> > +///
> > +/// ```
> > +/// use kernel::sync::{LockedBy, Mutex};
> > +///
> > +/// struct InnerFile {
> > +///     bytes_used: u64,
> > +/// }
> > +///
> > +/// struct File {
> > +///     _ino: u32,
> > +///     inner: LockedBy<InnerFile, InnerDirectory>,
> > +/// }
> > +///
> > +/// struct InnerDirectory {
> > +///     /// The sum of the bytes used by all files.
> > +///     bytes_used: u64,
> > +///     _files: Vec<File>,
> > +/// }
> > +///
> > +/// struct Directory {
> > +///     _ino: u32,
> > +///     inner: Mutex<InnerDirectory>,
> > +/// }
> > +///
> > +/// /// Prints `bytes_used` from both the directory and file.
> > +/// fn print_bytes_used(dir: &Directory, file: &File) {
> > +///     let guard = dir.inner.lock();
> > +///     let inner_file = file.inner.access(&guard);
> > +///     pr_info!("{} {}", guard.bytes_used, inner_file.bytes_used);
> > +/// }
> > +///
> > +/// /// Increments `bytes_used` for both the directory and file.
> > +/// fn inc_bytes_used(dir: &Directory, file: &File) {
> > +///     let mut guard = dir.inner.lock();
> > +///     guard.bytes_used += 10;
> > +///
> > +///     let file_inner = file.inner.access_mut(&mut guard);
>
> Missing deref (`*`) in front of `guard`.

`Deref` coercion obviates the need for an explicit dereference. This
works as is.

> > +///     file_inner.bytes_used += 10;
> > +/// }
> > +///
> > +/// /// Creates a new file.
> > +/// fn new_file(ino: u32, dir: &Directory) -> File {
> > +///     File {
> > +///         _ino: ino,
> > +///         inner: LockedBy::new(&dir.inner, InnerFile { bytes_used: 0 }),
> > +///     }
> > +/// }
> > +/// ```
> > +pub struct LockedBy<T: ?Sized, U: ?Sized> {
> > +    owner: *const U,
> > +    data: UnsafeCell<T>,
> > +}
> > +
> > +// SAFETY: `LockedBy` can be transferred across thread boundaries iff the data it protects can.
> > +unsafe impl<T: ?Sized + Send, U: ?Sized> Send for LockedBy<T, U> {}
> > +
> > +// SAFETY: `LockedBy` serialises the interior mutability it provides, so it is `Sync` as long as the
> > +// data it protects is `Send`.
> > +unsafe impl<T: ?Sized + Send, U: ?Sized> Sync for LockedBy<T, U> {}
> > +
> > +impl<T, U: ?Sized> LockedBy<T, U> {
> > +    /// Constructs a new instance of [`LockedBy`].
> > +    ///
> > +    /// It stores a raw pointer to the owner that is never dereferenced. It is only used to ensure
> > +    /// that the right owner is being used to access the protected data. If the owner is freed, the
> > +    /// data becomes inaccessible; if another instance of the owner is allocated *on the same
> > +    /// memory location*, the data becomes accessible again: none of this affects memory safety
> > +    /// because in any case at most one thread (or CPU) can access the protected data at a time.
> > +    pub fn new(owner: &Lock<U, impl Backend>, data: T) -> Self {
> > +        Self {
> > +            owner: owner.data.get(),
> > +            data: UnsafeCell::new(data),
> > +        }
> > +    }
> > +}
> > +
> > +impl<T: ?Sized, U: ?Sized> LockedBy<T, U> {
> > +    /// Returns a reference to the protected data when the caller provides evidence (via a
> > +    /// reference) that the owner is locked.
> > +    pub fn access<'a>(&'a self, owner: &'a U) -> &'a T {
> > +        if !ptr::eq(owner, self.owner) {
> > +            panic!("mismatched owners");
> > +        }
> > +
> > +        // SAFETY: `owner` is evidence that the owner is locked.
> > +        unsafe { &*self.data.get() }
> > +    }
> > +
> > +    /// Returns a mutable reference to the protected data when the caller provides evidence (via a
> > +    /// mutable owner) that the owner is locked mutably.
> > +    ///
> > +    /// Showing a mutable reference to the owner is sufficient because we know no other references
> > +    /// can exist to it.
> > +    pub fn access_mut<'a>(&'a self, owner: &'a mut U) -> &'a mut T {
> > +        if !ptr::eq(owner, self.owner) {
> > +            panic!("mismatched owners");
> > +        }
> > +
> > +        // SAFETY: `owner` is evidence that there is only one reference to the owner.
> > +        unsafe { &mut *self.data.get() }
> > +    }
> > +}
> > --
> > 2.34.1
> >
>
> What happens if the protected data `U` is a ZST? Then the address
> comparison will not work, since references to ZSTs can share the same
> address.

Indeed, ZSTs are problematic. I'll add a restriction to rule them out.

> For example:
>
>      struct Outer {
>          mtx: Mutex<()>,
>          inners: Vec<Inner>,
>      }
>
>      struct Inner {
>          count: LockedBy<usize, ()>,
>      }
>
>      fn evil(inner: &Inner) {
>          // can create two mutable references at the same time:
>          let a = inner.count.access_mut(&mut ());
>          let b = inner.count.access_mut(&mut ());
>          core::mem::swap(a, b);
>      }
>
> Maybe prevent this by checking for `assert!(mem::size_of::<U>() != 0);`
> in the `new` function? Though I am not sure if a ZST is the only way for
> values to share addresses.

I'll add such an assert as part of a `const` inside an impl block so
that it fails at compile time if misused.
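
A minimal sketch of that shape (the constant's name is illustrative, `U` is
constrained to `Sized` here so `size_of` applies, and it relies on `assert!`
being usable in const context):

    impl<T, U> LockedBy<T, U> {
        // Evaluated per instantiation; a ZST owner fails the build instead
        // of panicking at run time.
        const OWNER_IS_NOT_ZST: () =
            assert!(core::mem::size_of::<U>() != 0, "`U` must not be a ZST");

        pub fn new(owner: &Lock<U, impl Backend>, data: T) -> Self {
            // Referencing the constant forces its evaluation at compile time.
            let () = Self::OWNER_IS_NOT_ZST;
            Self {
                owner: owner.data.get(),
                data: UnsafeCell::new(data),
            }
        }
    }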

>
> --
> Cheers,
> Benno
>
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 13/13] rust: sync: introduce `LockedBy`
  2023-03-30 11:45     ` Benno Lossin
@ 2023-03-30 21:04       ` Wedson Almeida Filho
  2023-03-30 21:10         ` Benno Lossin
  0 siblings, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-03-30 21:04 UTC (permalink / raw)
  To: Benno Lossin
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

On Thu, 30 Mar 2023 at 08:45, Benno Lossin <y86-dev@protonmail.com> wrote:
>
> On 30.03.23 13:28, Benno Lossin wrote:
>      struct Outer {
>          mtx1: Mutex<()>,
>          mtx2: Mutex<()>,
>          inners: Vec<Inner>,
>      }
>
>      struct Inner {
>          count: LockedBy<usize, ()>,
>      }
>
>      fn new_inner(outer: &Outer) -> Inner {
>          Inner { count: LockedBy::new(&outer.mtx1, 0) }
>      }
>
>      fn evil(outer: &Outer) {
>          let inner = outer.inners.get(0).unwrap();
>          let mut guard1 = outer.mtx1.lock();
>          let mut guard2 = outer.mtx2.lock();
>         // The pointees of `guard1` and `guard2` have the same address.
>          let ref1 = inner.count.access_mut(&mut *guard1);
>          let ref2 = inner.count.access_mut(&mut *guard2);
>          mem::swap(ref1, ref2);
>      }

This doesn't reproduce the issue because `mtx2` itself is not a ZST
(it contains a `struct mutex` before the data it protects).

Something like the following should reproduce it though:

    struct Outer {
         mtx1: Mutex<()>,
         zst: (),
     }

     fn evil(outer: &Outer) {
         let lb = LockedBy::new(&outer.mtx1, 0u8);
         let value = lb.access(&outer.zst);
         // Accessing "value" without holding `mtx1`.
         pr_info!("{}", *value);
     }

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 13/13] rust: sync: introduce `LockedBy`
  2023-03-30 21:04       ` Wedson Almeida Filho
@ 2023-03-30 21:10         ` Benno Lossin
  0 siblings, 0 replies; 42+ messages in thread
From: Benno Lossin @ 2023-03-30 21:10 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho

On 30.03.23 23:04, Wedson Almeida Filho wrote:
> On Thu, 30 Mar 2023 at 08:45, Benno Lossin <y86-dev@protonmail.com> wrote:
>>
>> On 30.03.23 13:28, Benno Lossin wrote:
>>       struct Outer {
>>           mtx1: Mutex<()>,
>>           mtx2: Mutex<()>,
>>           inners: Vec<Inner>,
>>       }
>>
>>       struct Inner {
>>           count: LockedBy<usize, ()>,
>>       }
>>
>>       fn new_inner(outer: &Outer) -> Inner {
>>           Inner { count: LockedBy::new(&outer.mtx1, 0) }
>>       }
>>
>>       fn evil(outer: &Outer) {
>>           let inner = outer.inners.get(0).unwrap();
>>           let mut guard1 = outer.mtx1.lock();
>>           let mut guard2 = outer.mtx2.lock();
>>          // The pointees of `guard1` and `guard2` have the same address.
>>           let ref1 = inner.count.access_mut(&mut *guard1);
>>           let ref2 = inner.count.access_mut(&mut *guard2);
>>           mem::swap(ref1, ref2);
>>       }
>
> This doesn't reproduce the issue because `mtx2` itself is not a ZST
> (it contains a `struct mutex` before the data it protects).
>
> Something like the following should reproduce it though:
>
>      struct Outer {
>           mtx1: Mutex<()>,
>           zst: (),
>       }
>
>       fn evil(outer: &Outer) {
>           let lb = LockedBy::new(&outer.mtx1, 0u8);
>           let value = lb.access(&outer.zst);
>           // Accessing "value" without holding `mtx1`.
>           pr_info!("{}", *value);
>       }

You are correct, but in your example you also cannot be sure that it
works, since the layout of `Mutex` and `Outer` is `repr(Rust)`,
so you cannot be sure that `zst` has the same address as `value`
inside of the `Mutex` (since the `struct mutex` could be in between).
But regardless, let's just deny ZSTs in `LockedBy`, since the fix is
easy and it would be weird to put a ZST in a lock in the first place.
(Not that you have argued against it.)

--
Cheers,
Benno



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 10/13] rust: introduce `Task::current`
  2023-03-30  4:39 ` [PATCH 10/13] rust: introduce `Task::current` Wedson Almeida Filho
@ 2023-03-31  2:47   ` Gary Guo
  2023-03-31  7:32     ` Alice Ryhl
  2023-04-01  4:09     ` Wedson Almeida Filho
  0 siblings, 2 replies; 42+ messages in thread
From: Gary Guo @ 2023-03-31  2:47 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Peter Zijlstra

On Thu, 30 Mar 2023 01:39:51 -0300
Wedson Almeida Filho <wedsonaf@gmail.com> wrote:

> From: Wedson Almeida Filho <walmeida@microsoft.com>
> 
> This allows Rust code to get a reference to the current task without
> having to increment the refcount, but still guaranteeing memory safety.
> 
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
> ---
>  rust/helpers.c      |  6 ++++
>  rust/kernel/task.rs | 83 ++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 88 insertions(+), 1 deletion(-)
> 
> diff --git a/rust/helpers.c b/rust/helpers.c
> index 58a194042c86..96441744030e 100644
> --- a/rust/helpers.c
> +++ b/rust/helpers.c
> @@ -100,6 +100,12 @@ bool rust_helper_refcount_dec_and_test(refcount_t *r)
>  }
>  EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);
>  
> +struct task_struct *rust_helper_get_current(void)
> +{
> +	return current;
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_get_current);
> +
>  void rust_helper_get_task_struct(struct task_struct *t)
>  {
>  	get_task_struct(t);
> diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
> index 8d7a8222990f..8b2b56ba9c6d 100644
> --- a/rust/kernel/task.rs
> +++ b/rust/kernel/task.rs
> @@ -5,7 +5,7 @@
>  //! C header: [`include/linux/sched.h`](../../../../include/linux/sched.h).
>  
>  use crate::bindings;
> -use core::{cell::UnsafeCell, ptr};
> +use core::{cell::UnsafeCell, marker::PhantomData, ops::Deref, ptr};
>  
>  /// Wraps the kernel's `struct task_struct`.
>  ///
> @@ -13,6 +13,46 @@ use core::{cell::UnsafeCell, ptr};
>  ///
>  /// Instances of this type are always ref-counted, that is, a call to `get_task_struct` ensures
>  /// that the allocation remains valid at least until the matching call to `put_task_struct`.
> +///
> +/// # Examples
> +///
> +/// The following is an example of getting the PID of the current thread with zero additional cost
> +/// when compared to the C version:
> +///
> +/// ```
> +/// use kernel::task::Task;
> +///
> +/// let pid = Task::current().pid();
> +/// ```
> +///
> +/// Getting the PID of the current process, also zero additional cost:
> +///
> +/// ```
> +/// use kernel::task::Task;
> +///
> +/// let pid = Task::current().group_leader().pid();
> +/// ```
> +///
> +/// Getting the current task and storing it in some struct. The reference count is automatically
> +/// incremented when creating `State` and decremented when it is dropped:
> +///
> +/// ```
> +/// use kernel::{task::Task, ARef};
> +///
> +/// struct State {
> +///     creator: ARef<Task>,
> +///     index: u32,
> +/// }
> +///
> +/// impl State {
> +///     fn new() -> Self {
> +///         Self {
> +///             creator: Task::current().into(),
> +///             index: 0,
> +///         }
> +///     }
> +/// }
> +/// ```
>  #[repr(transparent)]
>  pub struct Task(pub(crate) UnsafeCell<bindings::task_struct>);
>  
> @@ -25,6 +65,20 @@ unsafe impl Sync for Task {}
>  type Pid = bindings::pid_t;
>  
>  impl Task {
> +    /// Returns a task reference for the currently executing task/thread.
> +    pub fn current<'a>() -> TaskRef<'a> {
> +        // SAFETY: Just an FFI call with no additional safety requirements.
> +        let ptr = unsafe { bindings::get_current() };
> +
> +        TaskRef {
> +            // SAFETY: If the current thread is still running, the current task is valid. Given
> +            // that `TaskRef` is not `Send`, we know it cannot be transferred to another thread
> +            // (where it could potentially outlive the caller).
> +            task: unsafe { &*ptr.cast() },
> +            _not_send: PhantomData,
> +        }
> +    }
> +

I don't think this API is sound, as you can do `&*Task::current()` and
get a `&'static Task`, which is very problematic.

A sound API would be

	pub fn with_current<R>(f: impl FnOnce(&Task) -> R) -> R { ... }

(which also is how thread local works in Rust)

You would have to write `Task::with_current(|cur| cur.pid())` though,
which unfortunately is a bit less ergonomic.

Best,
Gary
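
A fuller sketch of the shape Gary suggests, assuming it lives in
`kernel::task` next to `Task::current` so that `bindings::get_current` from
the patch is in scope; the higher-ranked lifetime on the closure argument is
what keeps the `&Task` from escaping:

    pub fn with_current<R>(f: impl FnOnce(&Task) -> R) -> R {
        // SAFETY: `current` is valid for the duration of this call.
        let ptr = unsafe { bindings::get_current() };
        // SAFETY: The cast is valid because `Task` is a transparent wrapper
        // around `bindings::task_struct`. The closure receives the reference
        // with a fresh lifetime, so `R` cannot capture it and the reference
        // cannot outlive this function.
        f(unsafe { &*ptr.cast::<Task>() })
    }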

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/13] rust: sync: introduce `LockClassKey`
  2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
                   ` (12 preceding siblings ...)
  2023-03-30 11:10 ` [PATCH 01/13] rust: sync: introduce `LockClassKey` Gary Guo
@ 2023-03-31  7:28 ` Alice Ryhl
  2023-04-05 17:42   ` Wedson Almeida Filho
  13 siblings, 1 reply; 42+ messages in thread
From: Alice Ryhl @ 2023-03-31  7:28 UTC (permalink / raw)
  To: Wedson Almeida Filho, rust-for-linux
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

On 3/30/23 06:39, Wedson Almeida Filho wrote:
> From: Wedson Almeida Filho <walmeida@microsoft.com>
> 
> It is a wrapper around C's `lock_class_key`, which is used by the
> synchronisation primitives that are checked with lockdep. This is in
> preparation for introducing Rust abstractions for these primitives.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
> ---
> +// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
> +// provides its own synchronization.
> +unsafe impl Sync for LockClassKey {}

No Send?

> +
> +impl LockClassKey {
> +    /// Creates a new lock class key.
> +    pub const fn new() -> Self {
> +        Self(Opaque::uninit())
> +    }
> +
> +    #[allow(dead_code)]
> +    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
> +        self.0.get()
> +    }

I would just make this pub and drop the `#[allow(dead_code)]`. I think 
it is often useful to have methods like this available publicly.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 10/13] rust: introduce `Task::current`
  2023-03-31  2:47   ` Gary Guo
@ 2023-03-31  7:32     ` Alice Ryhl
  2023-04-01  4:09     ` Wedson Almeida Filho
  1 sibling, 0 replies; 42+ messages in thread
From: Alice Ryhl @ 2023-03-31  7:32 UTC (permalink / raw)
  To: Gary Guo, Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Peter Zijlstra

On 3/31/23 04:47, Gary Guo wrote:
> On Thu, 30 Mar 2023 01:39:51 -0300
> Wedson Almeida Filho <wedsonaf@gmail.com> wrote:
> 
>> From: Wedson Almeida Filho <walmeida@microsoft.com>
>>
>> This allows Rust code to get a reference to the current task without
>> having to increment the refcount, but still guaranteeing memory safety.
>>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
>> ---
>>   rust/helpers.c      |  6 ++++
>>   rust/kernel/task.rs | 83 ++++++++++++++++++++++++++++++++++++++++++++-
>>   2 files changed, 88 insertions(+), 1 deletion(-)
>>
>> diff --git a/rust/helpers.c b/rust/helpers.c
>> index 58a194042c86..96441744030e 100644
>> --- a/rust/helpers.c
>> +++ b/rust/helpers.c
>> @@ -100,6 +100,12 @@ bool rust_helper_refcount_dec_and_test(refcount_t *r)
>>   }
>>   EXPORT_SYMBOL_GPL(rust_helper_refcount_dec_and_test);
>>   
>> +struct task_struct *rust_helper_get_current(void)
>> +{
>> +	return current;
>> +}
>> +EXPORT_SYMBOL_GPL(rust_helper_get_current);
>> +
>>   void rust_helper_get_task_struct(struct task_struct *t)
>>   {
>>   	get_task_struct(t);
>> diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
>> index 8d7a8222990f..8b2b56ba9c6d 100644
>> --- a/rust/kernel/task.rs
>> +++ b/rust/kernel/task.rs
>> @@ -5,7 +5,7 @@
>>   //! C header: [`include/linux/sched.h`](../../../../include/linux/sched.h).
>>   
>>   use crate::bindings;
>> -use core::{cell::UnsafeCell, ptr};
>> +use core::{cell::UnsafeCell, marker::PhantomData, ops::Deref, ptr};
>>   
>>   /// Wraps the kernel's `struct task_struct`.
>>   ///
>> @@ -13,6 +13,46 @@ use core::{cell::UnsafeCell, ptr};
>>   ///
>>   /// Instances of this type are always ref-counted, that is, a call to `get_task_struct` ensures
>>   /// that the allocation remains valid at least until the matching call to `put_task_struct`.
>> +///
>> +/// # Examples
>> +///
>> +/// The following is an example of getting the PID of the current thread with zero additional cost
>> +/// when compared to the C version:
>> +///
>> +/// ```
>> +/// use kernel::task::Task;
>> +///
>> +/// let pid = Task::current().pid();
>> +/// ```
>> +///
>> +/// Getting the PID of the current process, also zero additional cost:
>> +///
>> +/// ```
>> +/// use kernel::task::Task;
>> +///
>> +/// let pid = Task::current().group_leader().pid();
>> +/// ```
>> +///
>> +/// Getting the current task and storing it in some struct. The reference count is automatically
>> +/// incremented when creating `State` and decremented when it is dropped:
>> +///
>> +/// ```
>> +/// use kernel::{task::Task, ARef};
>> +///
>> +/// struct State {
>> +///     creator: ARef<Task>,
>> +///     index: u32,
>> +/// }
>> +///
>> +/// impl State {
>> +///     fn new() -> Self {
>> +///         Self {
>> +///             creator: Task::current().into(),
>> +///             index: 0,
>> +///         }
>> +///     }
>> +/// }
>> +/// ```
>>   #[repr(transparent)]
>>   pub struct Task(pub(crate) UnsafeCell<bindings::task_struct>);
>>   
>> @@ -25,6 +65,20 @@ unsafe impl Sync for Task {}
>>   type Pid = bindings::pid_t;
>>   
>>   impl Task {
>> +    /// Returns a task reference for the currently executing task/thread.
>> +    pub fn current<'a>() -> TaskRef<'a> {
>> +        // SAFETY: Just an FFI call with no additional safety requirements.
>> +        let ptr = unsafe { bindings::get_current() };
>> +
>> +        TaskRef {
>> +            // SAFETY: If the current thread is still running, the current task is valid. Given
>> +            // that `TaskRef` is not `Send`, we know it cannot be transferred to another thread
>> +            // (where it could potentially outlive the caller).
>> +            task: unsafe { &*ptr.cast() },
>> +            _not_send: PhantomData,
>> +        }
>> +    }
>> +
> 
> I don't think this API is sound, as you can do `&*Task::current()` and
> get a `&'static Task`, which is very problematic.
> 
> A sound API would be
> 
> 	pub fn with_current<R>(f: imp FnOnce(&Task) -> R) -> R { ... }
> 
> (which also is how thread local works in Rust)
> 
> You would have to write `Task::with_current(|cur| cur.pid())` though,
> which unfortunately is a bit less ergonomic.
> 
> Best,
> Gary

This is true, unfortunately. It would be possible to write a macro with
an API more similar to the current implementation.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 10/13] rust: introduce `Task::current`
  2023-03-31  2:47   ` Gary Guo
  2023-03-31  7:32     ` Alice Ryhl
@ 2023-04-01  4:09     ` Wedson Almeida Filho
  2023-04-01  7:01       ` Gary Guo
  1 sibling, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-04-01  4:09 UTC (permalink / raw)
  To: Gary Guo
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Peter Zijlstra

Gary, thanks for reviewing!

On Fri, Mar 31, 2023 at 03:47:01AM +0100, Gary Guo wrote:
> 
> I don't think this API is sound, as you can do `&*Task::current()` and
> get a `&'static Task`, which is very problematic.

One thing that isn't clear to me is: how do you get a 'static lifetime in the
example above?

Although `TaskRef` does have an arbitrary lifetime param, that's not the lifetime
that the returned `Task` reference gets. For illustration, I've explicitly added
a lifetime 'a in the impl below:

impl Deref for TaskRef<'_> {
    type Target = Task;
    fn deref(&'a self) -> &'a Self::Target {
        self.task
    }
}

This means that the borrow of the `TaskRef` you use to call `deref` must
outlive the returned `Task` reference.

So how do you get a `TaskRef` with a static lifetime to begin with? Or is there
another trick to get the `&'static Task` that I can't see?

Thanks,
-Wedson

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 10/13] rust: introduce `Task::current`
  2023-04-01  4:09     ` Wedson Almeida Filho
@ 2023-04-01  7:01       ` Gary Guo
  0 siblings, 0 replies; 42+ messages in thread
From: Gary Guo @ 2023-04-01  7:01 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Peter Zijlstra

On Sat, 1 Apr 2023 01:09:18 -0300
Wedson Almeida Filho <wedsonaf@gmail.com> wrote:

> Gary, thanks for reviewing!
> 
> On Fri, Mar 31, 2023 at 03:47:01AM +0100, Gary Guo wrote:
> > 
> > I don't think this API is sound, as you can do `&*Task::current()` and
> > get a `&'static Task`, which is very problematic.  
> 
> One thing that isn't clear to me is: how do you get a 'static lifetime in the
> example above?
> 
> Altough `TaskRef` does have an arbitrary lifetime param, that's not the lifetime
> that the returned `Task` reference gets. For illustration, I've explicitly added
> a lifetime 'a in the impl below:
> 
> impl Deref for TaskRef<'_> {
>     type Target = Task;
>     fn deref(&'a self) -> &'a Self::Target {
>         self.task
>     }
> }
> 
> Which means that the borrow of the `TaskRef` you use to call `deref` must
> outlive the returned `Task`.
> 
> So how do you get a `TaskRef` with a static lifetime to begin with? Or is there
> another trick to get the `&'static Task` that I can't see?
> 
> Thanks,
> -Wedson

Hi Wedson,

My apologies for the confusion. `&*Task::current()` is not
sufficient. I typed too quickly without double-checking.

However it is still true that `TaskRef<'static>` is unsound, and it can
be retrieved from `current()`. The missing step is `&'static
TaskRef<'static>`.

So you can write `&*Box::leak(Box::try_new(Task::current()).unwrap())`
and get `&'static Task`.

Best,
Gary

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-03-30 18:47     ` Boqun Feng
  2023-03-30 18:51       ` [DRAFT 1/2] locking/selftest: Add test infrastructure for Rust locking APIs Boqun Feng
  2023-03-30 18:56       ` [PATCH 03/13] rust: lock: introduce `Mutex` Boqun Feng
@ 2023-04-03  8:20       ` Peter Zijlstra
  2023-04-03 13:50         ` Wedson Almeida Filho
  2023-04-03 14:04         ` Boqun Feng
  2 siblings, 2 replies; 42+ messages in thread
From: Peter Zijlstra @ 2023-04-03  8:20 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Wedson Almeida Filho, rust-for-linux, Miguel Ojeda, Alex Gaynor,
	Gary Guo, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:
> On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:
> > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> > > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > > 
> > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > kernel mutex idiomatically.
> > 
> > What, if anything, are the plans to support the various lockdep
> > annotations? Idem for the spinlock thing in the other patch I suppose.
> 
> FWIW:
> 
> *	At the init stage, SpinLock and Mutex in Rust use initializers
> 	that are aware of the lockdep, so everything (lockdep_map and
> 	lock_class) is all set up.
> 
> *	At acquire or release time, Rust locks just use ffi to call C
> 	functions that have lockdep annotations in them, so lockdep
> 	should just work.
> 

ffi is what the C++ world calls RAII ?

But yes, I got that far, but I wondered about things like
spin_lock_nested(&foo, SINGLE_DEPTH_NESTING) and other such 'advanced'
annotations.

Surely we're going to be needing them at some point. I suppose you can
do the single depth nesting one with a special guard type (or whatever
you call that in the rust world) ?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-03-30 14:56     ` Wedson Almeida Filho
@ 2023-04-03  8:59       ` Peter Zijlstra
  2023-04-03 13:35         ` Wedson Almeida Filho
  0 siblings, 1 reply; 42+ messages in thread
From: Peter Zijlstra @ 2023-04-03  8:59 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, Mar 30, 2023 at 11:56:33AM -0300, Wedson Almeida Filho wrote:
> On Thu, Mar 30, 2023 at 02:59:27PM +0200, Peter Zijlstra wrote:
> > On Thu, Mar 30, 2023 at 01:39:53AM -0300, Wedson Almeida Filho wrote:
> > 
> > > +    fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guard<'_, T, B>) {
> > > +        let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> > > +
> > > +        // SAFETY: `wait` points to valid memory.
> > > +        unsafe { bindings::init_wait(wait.get()) };
> > > +
> > > +        // SAFETY: Both `wait` and `wait_list` point to valid memory.
> > > +        unsafe {
> > > +            bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
> > > +        };
> > 
> > I can't read this rust gunk, but where is the condition test gone?
> > 
> > Also, where is the loop gone to?
> 
> They're both at the caller. The usage of condition variables is something like:
> 
> while guard.value != v {
>     condvar.wait_uninterruptible(&mut guard);
> }
> 
> (Note that this is not specific to the kernel or to Rust: this is how condvars
> work in general. You'll find this in any textbook on the topic.)
> 
> In the implementation of wait_internal(), we add the local wait entry to the
> wait queue _before_ releasing the lock (i.e., before the test result can
> change), so we guarantee that we don't miss wake up attempts.

Ah, so you've not yet been exposed to the wonderful 'feature' where
pthread_cond_timedwait() gets called with .mutex=NULL and people expect
things to just work :/ (luckily not accepted by the majority of
implementations)

Or a little more devious, calling signal and not holding the same mutex.

But then yes, I suppose it should work. I just got alarm bells going off
because I see prepare_to_wait without an obvious loop around it.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] rust: sync: introduce `CondVar`
  2023-04-03  8:59       ` Peter Zijlstra
@ 2023-04-03 13:35         ` Wedson Almeida Filho
  0 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-04-03 13:35 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Mon, Apr 03, 2023 at 10:59:59AM +0200, Peter Zijlstra wrote:
> On Thu, Mar 30, 2023 at 11:56:33AM -0300, Wedson Almeida Filho wrote:
> > On Thu, Mar 30, 2023 at 02:59:27PM +0200, Peter Zijlstra wrote:
> > > On Thu, Mar 30, 2023 at 01:39:53AM -0300, Wedson Almeida Filho wrote:
> > > 
> > > > +    fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guard<'_, T, B>) {
> > > > +        let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> > > > +
> > > > +        // SAFETY: `wait` points to valid memory.
> > > > +        unsafe { bindings::init_wait(wait.get()) };
> > > > +
> > > > +        // SAFETY: Both `wait` and `wait_list` point to valid memory.
> > > > +        unsafe {
> > > > +            bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
> > > > +        };
> > > 
> > > I can't read this rust gunk, but where is the condition test gone?
> > > 
> > > Also, where is the loop gone to?
> > 
> > They're both at the caller. The usage of condition variables is something like:
> > 
> > while guard.value != v {
> >     condvar.wait_uninterruptible(&mut guard);
> > }
> > 
> > (Note that this is not specific to the kernel or to Rust: this is how condvars
> > work in general. You'll find this in any textbook on the topic.)
> > 
> > In the implementation of wait_internal(), we add the local wait entry to the
> > wait queue _before_ releasing the lock (i.e., before the test result can
> > change), so we guarantee that we don't miss wake up attempts.
> 
> Ah, so you've not yet been exposed to the wonderful 'feature' where
> pthread_cond_timedwait() gets called with .mutex=NULL and people expect
> things to just work :/ (luckily not accepted by the majority of
> implementations)

Rust doesn't have this problem: a `Guard` cannot exist without a lock, and one
cannot call `wait` or `wait_uninterruptible` without a `Guard`.

> Or a little more devious, calling signal and not holding the same mutex.

We don't require that callers hold the lock while signalling. If they signal when
the condition isn't satisfied (with or without the lock held, it doesn't
matter), it will just look like a spurious signal to the waiting thread.

> I just got alarm bells going off because I see prepare_to_wait without an
> obvious loop around it.

Fair enough, we do need a loop.
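
For reference, a minimal sketch of the caller-side loop with the API from
this series (the `State`/`ready` names are illustrative):

    use kernel::sync::{CondVar, Mutex};

    struct State {
        ready: bool,
    }

    fn wait_until_ready(cv: &CondVar, state: &Mutex<State>) {
        let mut guard = state.lock();
        // The condition is re-tested after every wakeup, so early or
        // spurious signals are harmless; the lock is released while asleep.
        while !guard.ready {
            cv.wait_uninterruptible(&mut guard);
        }
    }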

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-04-03  8:20       ` Peter Zijlstra
@ 2023-04-03 13:50         ` Wedson Almeida Filho
  2023-04-03 15:25           ` Gary Guo
  2023-04-03 14:04         ` Boqun Feng
  1 sibling, 1 reply; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-04-03 13:50 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, rust-for-linux, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Ingo Molnar, Will Deacon, Waiman Long

On Mon, Apr 03, 2023 at 10:20:52AM +0200, Peter Zijlstra wrote:
> On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:
> > On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:
> > > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> > > > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > > > 
> > > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > > kernel mutex idiomatically.
> > > 
> > > What, if anything, are the plans to support the various lockdep
> > > annotations? Idem for the spinlock thing in the other patch I suppose.
> > 
> > FWIW:
> > 
> > *	At the init stage, SpinLock and Mutex in Rust use initializers
> > 	that are aware of the lockdep, so everything (lockdep_map and
> > 	lock_class) is all set up.
> > 
> > *	At acquire or release time, Rust locks just use ffi to call C
> > 	functions that have lockdep annotations in them, so lockdep
> > 	should just work.
> > 
> 
> ffi is what the C++ world calls RAII ?

No, ffi is 'foreign function interface', it just means that the caller will use
the same ABI as the callee.

> But yes, I got that far, but I wondered about things like
> spin_lock_nested(&foo, SINGLE_DEPTH_NESTING) and other such 'advanced'
> annotations.
> 
> Surely we're going to be needing them at some point. I suppose you can
> do the single depth nesting one with a special guard type (or whatever
> you call that in the rust world) ?

I haven't looked at all the advanced annotations, but something like
spin_lock_nested/mutex_lock_nested can be exposed as a lock_nested() associated
function of the `Lock` type, so one would do:

  let guard = my_mutex.lock_nested(SINGLE_DEPTH_NESTING);
  // Do something with data protected by my_mutex.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-04-03  8:20       ` Peter Zijlstra
  2023-04-03 13:50         ` Wedson Almeida Filho
@ 2023-04-03 14:04         ` Boqun Feng
  1 sibling, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2023-04-03 14:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wedson Almeida Filho, rust-for-linux, Miguel Ojeda, Alex Gaynor,
	Gary Guo, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

On Mon, Apr 03, 2023 at 10:20:52AM +0200, Peter Zijlstra wrote:
> On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:
> > On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:
> > > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:
> > > > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > > > 
> > > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > > kernel mutex idiomatically.
> > > 
> > > What, if anything, are the plans to support the various lockdep
> > > annotations? Idem for the spinlock thing in the other patch I suppose.
> > 
> > FWIW:
> > 
> > *	At the init stage, SpinLock and Mutex in Rust use initializers
> > 	that are aware of the lockdep, so everything (lockdep_map and
> > 	lock_class) is all set up.
> > 
> > *	At acquire or release time, Rust locks just use ffi to call C
> > 	functions that have lockdep annotations in them, so lockdep
> > 	should just work.
> > 
> 
> ffi is what the C++ world calls RAII ?
> 

ffi is 'foreign function interface'; it means calling a C function from
Rust. Sorry if I made things confusing ;-)
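
For concreteness, the acquire path of the mutex backend in this series is
roughly the following (simplified; `init` and `unlock` are elided):

    unsafe impl Backend for MutexBackend {
        type State = bindings::mutex;
        type GuardState = ();

        unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
            // SAFETY: The caller guarantees that `ptr` points to a valid,
            // initialised `struct mutex`. The C function carries the lockdep
            // annotations, so this acquisition is tracked like any C-side
            // mutex_lock().
            unsafe { bindings::mutex_lock(ptr) };
        }
    }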

> But yes, I got that far, but I wondered about things like
> spin_lock_nested(&foo, SINGLE_DEPTH_NESTING) and other such 'advanced'
> annotations.
> 

Right, I haven't really thought through them, but I think it's easy to
add them later (famous later words).

> Surely we're going to be needing them at some point. I suppose you can
> do the single depth nesting one with a special guard type (or whatever
> you call that in the rust world) ?

or a different method for Lock:

	impl Lock { // implementation block for type `Lock`
	//                 v function is called via a.lock_nested(SINGLE_DEPTH_NESTING), a is a Lock
	    fn lock_nested(&self, level: i32) -> Guard<..> {
	//  ^ defines a function           ^ returns a guard

	        ..  
	    }
	}

Since the Rust side just uses the same unlock function as the C side, a
normal Guard type suffices, because we don't treat nested locks
differently at unlock time. But if we were to add some more checking at
compile time, we could have a slightly different Guard or something.
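
A compilable version of the sketch above; note that `Backend::lock_nested`
and the `level` plumbing are assumptions here, not something the series
already provides:

    impl<T: ?Sized, B: Backend> Lock<T, B> {
        pub fn lock_nested(&self, level: i32) -> Guard<'_, T, B> {
            // SAFETY: Same contract as `lock`, with the nesting level
            // forwarded so that the C side records it with lockdep.
            let state = unsafe { B::lock_nested(self.state.get(), level) };
            // SAFETY: The lock was acquired just above.
            unsafe { Guard::new(self, state) }
        }
    }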

Regards,
Boqun

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-04-03 13:50         ` Wedson Almeida Filho
@ 2023-04-03 15:25           ` Gary Guo
  2023-04-03 15:44             ` Boqun Feng
  0 siblings, 1 reply; 42+ messages in thread
From: Gary Guo @ 2023-04-03 15:25 UTC (permalink / raw)
  To: Wedson Almeida Filho
  Cc: Peter Zijlstra, Boqun Feng, rust-for-linux, Miguel Ojeda,
	Alex Gaynor, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

On Mon, 3 Apr 2023 10:50:09 -0300
Wedson Almeida Filho <wedsonaf@gmail.com> wrote:

> On Mon, Apr 03, 2023 at 10:20:52AM +0200, Peter Zijlstra wrote:
> > On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:  
> > > On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:  
> > > > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:  
> > > > > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > > > > 
> > > > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > > > kernel mutex idiomatically.  
> > > > 
> > > > What, if anything, are the plans to support the various lockdep
> > > > annotations? Idem for the spinlock thing in the other patch I suppose.  
> > > 
> > > FWIW:
> > > 
> > > *	At the init stage, SpinLock and Mutex in Rust use initializers
> > > 	that are aware of the lockdep, so everything (lockdep_map and
> > > 	lock_class) is all set up.
> > > 
> > > *	At acquire or release time, Rust locks just use ffi to call C
> > > 	functions that have lockdep annotations in them, so lockdep
> > > 	should just work.
> > >   
> > 
> > ffi is what the C++ world calls RAII ?  
> 
> No, ffi is 'foreign function interface', it just means that the caller will use
> the same ABI as the callee.
> 
> > But yes, I got that far, but I wondered about things like
> > spin_lock_nested(&foo, SINGLE_DEPTH_NESTING) and other such 'advanced'
> > annotations.
> > 
> > Surely we're going to be needing them at some point. I suppose you can
> > do the single depth nesting one with a special guard type (or whatever
> > you call that in the rust world) ?  
> 
> I haven't looked at all the advanced annotations, but something like
> spin_lock_nested/mutex_lock_nested can be exposed as a lock_nested() associated
> function of the `Lock` type, so one would do:
> 
>   let guard = my_mutex.lock_nested(SINGLE_DEPTH_NESTING);
>   // Do something with data protected by my_mutex.

I don't think an additional function would work. It's not okay to
perform both nested locking and non-nested locking on the same lock
because non-nested locking will give you a mutable reference, and
getting another reference from a nested lock guard would violate aliasing
rules.

A new lock type would be needed for nested locking, and its guard can
only hand out immutable references.

Best,
Gary
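
A sketch of what Gary describes (the names are illustrative and
unlock-on-drop is elided):

    pub struct NestedGuard<'a, T: ?Sized, B: Backend> {
        lock: &'a Lock<T, B>,
        // Kept so the backend can unlock on drop (not shown here).
        state: B::GuardState,
    }

    impl<T: ?Sized, B: Backend> core::ops::Deref for NestedGuard<'_, T, B> {
        type Target = T;

        fn deref(&self) -> &T {
            // A lock type meant for nested acquisition hands out only shared
            // references from every guard, so two live guards never produce
            // an aliased `&mut T`.
            // SAFETY: The guard's existence proves the lock is held.
            unsafe { &*self.lock.data.get() }
        }
    }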

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/13] rust: lock: introduce `Mutex`
  2023-04-03 15:25           ` Gary Guo
@ 2023-04-03 15:44             ` Boqun Feng
  0 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2023-04-03 15:44 UTC (permalink / raw)
  To: Gary Guo
  Cc: Wedson Almeida Filho, Peter Zijlstra, rust-for-linux,
	Miguel Ojeda, Alex Gaynor, Björn Roy Baron, linux-kernel,
	Wedson Almeida Filho, Ingo Molnar, Will Deacon, Waiman Long

On Mon, Apr 03, 2023 at 04:25:29PM +0100, Gary Guo wrote:
> On Mon, 3 Apr 2023 10:50:09 -0300
> Wedson Almeida Filho <wedsonaf@gmail.com> wrote:
> 
> > On Mon, Apr 03, 2023 at 10:20:52AM +0200, Peter Zijlstra wrote:
> > > On Thu, Mar 30, 2023 at 11:47:12AM -0700, Boqun Feng wrote:  
> > > > On Thu, Mar 30, 2023 at 03:01:08PM +0200, Peter Zijlstra wrote:  
> > > > > On Thu, Mar 30, 2023 at 01:39:44AM -0300, Wedson Almeida Filho wrote:  
> > > > > > From: Wedson Almeida Filho <walmeida@microsoft.com>
> > > > > > 
> > > > > > This is the `struct mutex` lock backend and allows Rust code to use the
> > > > > > kernel mutex idiomatically.  
> > > > > 
> > > > > What, if anything, are the plans to support the various lockdep
> > > > > annotations? Idem for the spinlock thing in the other patch I suppose.  
> > > > 
> > > > FWIW:
> > > > 
> > > > *	At the init stage, SpinLock and Mutex in Rust use initializers
> > > > 	that are aware of the lockdep, so everything (lockdep_map and
> > > > 	lock_class) is all set up.
> > > > 
> > > > *	At acquire or release time, Rust locks just use ffi to call C
> > > > 	functions that have lockdep annotations in them, so lockdep
> > > > 	should just work.
> > > >   
> > > 
> > > ffi is what the C++ world calls RAII ?  
> > 
> > No, ffi is 'foreign function interface', it just means that the caller will use
> > the same ABI as the callee.
> > 
> > > But yes, I got that far, but I wondered about things like
> > > spin_lock_nested(&foo, SINGLE_DEPTH_NESTING) and other such 'advanced'
> > > annotations.
> > > 
> > > Surely we're going to be needing them at some point. I suppose you can
> > > do the single depth nesting one with a special guard type (or whatever
> > > you call that in the rust world) ?  
> > 
> > I haven't looked at all the advanced annotations, but something like
> > spin_lock_nested/mutex_lock_nested can be exposed as a lock_nested() associated
> > function of the `Lock` type, so one would do:
> > 
> >   let guard = my_mutex.lock_nested(SINGLE_DEPTH_NESTING);
> >   // Do something with data protected by my_mutex.
> 
> I don't think an additional function would work. It's not okay to
> perform both nested locking and non-nested locking on the same lock

Note that lock_nested() here is simply a lockdep concept: it means
locking nested under the same lock class (key), not the same lock
instance. For example:

	spinlock_t X1;
	spinlock_t X2;

	// X1 and X2 are of the same lock class X
	spin_lock(&X1);
	spin_lock(&X2); // lockdep will report a deadlock.

	// However, if we know that X1 and X2 has some ordering to lock,
	// e.g. X1 is the lock for a directory and X2 is the lock for
	// the file in the directory, we can
	spin_lock(&X1);
	spin_lock_nested(&X2, SINGLE_DEPTH_NESTING);

	// and lockdep won't complain.

There is some design space here for Rust, since we may be able to put
the nested information in the type.

Regards,
Boqun

> because non-nested locking will give you a mutable reference, and
> getting another reference from nested lock guard would violate aliasing
> rules.
> 
> A new lock type would be needed for nested locking, and guard of it can
> only hand out immutable reference.
> 
> Best,
> Gary

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/13] rust: sync: introduce `LockClassKey`
  2023-03-31  7:28 ` Alice Ryhl
@ 2023-04-05 17:42   ` Wedson Almeida Filho
  0 siblings, 0 replies; 42+ messages in thread
From: Wedson Almeida Filho @ 2023-04-05 17:42 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: rust-for-linux, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, linux-kernel, Wedson Almeida Filho,
	Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long

On Fri, 31 Mar 2023 at 04:28, Alice Ryhl <alice@ryhl.io> wrote:
>
> On 3/30/23 06:39, Wedson Almeida Filho wrote:
> > From: Wedson Almeida Filho <walmeida@microsoft.com>
> >
> > It is a wrapper around C's `lock_class_key`, which is used by the
> > synchronisation primitives that are checked with lockdep. This is in
> > preparation for introducing Rust abstractions for these primitives.
> >
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Waiman Long <longman@redhat.com>
> > Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com>
> > ---
> > +// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
> > +// provides its own synchronization.
> > +unsafe impl Sync for LockClassKey {}
>
> No Send?

We haven't needed it. We can add it when needed.

> > +
> > +impl LockClassKey {
> > +    /// Creates a new lock class key.
> > +    pub const fn new() -> Self {
> > +        Self(Opaque::uninit())
> > +    }
> > +
> > +    #[allow(dead_code)]
> > +    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
> > +        self.0.get()
> > +    }
>
> I would just make this pub and drop the `#[allow(dead_code)]`. I think
> it is often useful to have methods like this available publicly.

The `allow(dead_code)` is removed in the next patch; it's here just to
make this patch compile when applied alone.

This isn't public because the return type refers to a type from
`bindings`, which we intend to eventually hide from drivers; making it
public now would make our lives harder in the future.

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2023-04-05 17:42 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-30  4:39 [PATCH 01/13] rust: sync: introduce `LockClassKey` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 02/13] rust: sync: introduce `Lock` and `Guard` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 03/13] rust: lock: introduce `Mutex` Wedson Almeida Filho
2023-03-30 13:01   ` Peter Zijlstra
2023-03-30 18:47     ` Boqun Feng
2023-03-30 18:51       ` [DRAFT 1/2] locking/selftest: Add test infrastructure for Rust locking APIs Boqun Feng
2023-03-30 18:51         ` [DRAFT 2/2] locking/selftest: Add AA deadlock selftest for Mutex and SpinLock Boqun Feng
2023-03-30 18:56       ` [PATCH 03/13] rust: lock: introduce `Mutex` Boqun Feng
2023-04-03  8:20       ` Peter Zijlstra
2023-04-03 13:50         ` Wedson Almeida Filho
2023-04-03 15:25           ` Gary Guo
2023-04-03 15:44             ` Boqun Feng
2023-04-03 14:04         ` Boqun Feng
2023-03-30  4:39 ` [PATCH 04/13] locking/spinlock: introduce spin_lock_init_with_key Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 05/13] rust: lock: introduce `SpinLock` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 06/13] rust: lock: add support for `Lock::lock_irqsave` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 07/13] rust: lock: implement `IrqSaveBackend` for `SpinLock` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 08/13] rust: introduce `ARef` Wedson Almeida Filho
2023-03-30 14:17   ` Gary Guo
2023-03-30  4:39 ` [PATCH 09/13] rust: add basic `Task` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 10/13] rust: introduce `Task::current` Wedson Almeida Filho
2023-03-31  2:47   ` Gary Guo
2023-03-31  7:32     ` Alice Ryhl
2023-04-01  4:09     ` Wedson Almeida Filho
2023-04-01  7:01       ` Gary Guo
2023-03-30  4:39 ` [PATCH 11/13] rust: lock: add `Guard::do_unlocked` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 12/13] rust: sync: introduce `CondVar` Wedson Almeida Filho
2023-03-30 12:52   ` Peter Zijlstra
2023-03-30 14:43     ` Wedson Almeida Filho
2023-03-30 12:59   ` Peter Zijlstra
2023-03-30 14:56     ` Wedson Almeida Filho
2023-04-03  8:59       ` Peter Zijlstra
2023-04-03 13:35         ` Wedson Almeida Filho
2023-03-30  4:39 ` [PATCH 13/13] rust: sync: introduce `LockedBy` Wedson Almeida Filho
2023-03-30 11:28   ` Benno Lossin
2023-03-30 11:45     ` Benno Lossin
2023-03-30 21:04       ` Wedson Almeida Filho
2023-03-30 21:10         ` Benno Lossin
2023-03-30 20:44     ` Wedson Almeida Filho
2023-03-30 11:10 ` [PATCH 01/13] rust: sync: introduce `LockClassKey` Gary Guo
2023-03-31  7:28 ` Alice Ryhl
2023-04-05 17:42   ` Wedson Almeida Filho
