From: Alice Ryhl <aliceryhl@google.com>
To: "Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	"Arve Hjønnevåg" <arve@android.com>,
	"Todd Kjos" <tkjos@android.com>,
	"Martijn Coenen" <maco@android.com>,
	"Joel Fernandes" <joel@joelfernandes.org>,
	"Christian Brauner" <brauner@kernel.org>,
	"Carlos Llamas" <cmllamas@google.com>,
	"Suren Baghdasaryan" <surenb@google.com>,
	"Miguel Ojeda" <ojeda@kernel.org>,
	"Alex Gaynor" <alex.gaynor@gmail.com>,
	"Wedson Almeida Filho" <wedsonaf@gmail.com>
Cc: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	"Boqun Feng" <boqun.feng@gmail.com>,
	"Gary Guo" <gary@garyguo.net>,
	"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
	"Benno Lossin" <benno.lossin@proton.me>,
	"Andreas Hindborg" <a.hindborg@samsung.com>,
	"Matt Gilbride" <mattgilbride@google.com>,
	"Jeffrey Vander Stoep" <jeffv@google.com>,
	"Matthew Maurer" <mmaurer@google.com>,
	"Alice Ryhl" <aliceryhl@google.com>
Subject: [PATCH RFC 17/20] rust_binder: add oneway spam detection
Date: Wed, 01 Nov 2023 18:01:47 +0000
Message-ID: <20231101-rust-binder-v1-17-08ba9197f637@google.com>
In-Reply-To: <20231101-rust-binder-v1-0-08ba9197f637@google.com>

From: Matt Gilbride <mattgilbride@google.com>

The idea is that once free async space drops below a certain threshold
(less than 20% of the async space, i.e. less than 10% of the total
buffer), whoever is responsible for the low async space is likely to
try to send another async transaction, and at that point the driver can
catch them in the act and flag the allocation as suspected oneway spam.

This change allows servers to turn on oneway spam detection with the
BINDER_ENABLE_ONEWAY_SPAM_DETECTION ioctl; when spam is detected, the
offending oneway transaction is completed with BR_ONEWAY_SPAM_SUSPECT
instead of BR_TRANSACTION_COMPLETE.

Signed-off-by: Matt Gilbride <mattgilbride@google.com>
Co-developed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
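Reviewer note: the detection logic added to range_alloc.rs boils down to
the arithmetic sketched below. This is only a minimal, self-contained
illustration for review; the function name and parameters
(oneway_spam_suspected, buffers_from_pid, ...) are made up here and are
not part of the driver.

    // Decide whether a oneway reservation should be flagged as suspected
    // spam. Mirrors the two-stage check in reserve_new()/low_oneway_space():
    //   1. cheap check: less than 20% of async space (10% of the whole
    //      buffer) would remain after this reservation,
    //   2. expensive check: walk the buffers owned by the calling pid.
    fn oneway_spam_suspected(
        total_size: usize,          // size of the whole mmap'ed buffer
        new_oneway_space: usize,    // async space left after this reservation
        buffers_from_pid: &[usize], // sizes of live oneway buffers from the caller
    ) -> bool {
        // Async space starts at total_size / 2, so total_size / 10 is 20% of it.
        if new_oneway_space >= total_size / 10 {
            return false;
        }
        let num_buffers = buffers_from_pid.len();
        let total_alloc_size: usize = buffers_from_pid.iter().sum();
        // Flag the caller once it holds more than 50 oneway buffers or more
        // than 50% of the async space (25% of the whole buffer).
        num_buffers > 50 || total_alloc_size > total_size / 4
    }

    fn main() {
        // With a 128 KiB buffer the async space is 64 KiB; 12 KiB left is
        // below the ~12.8 KiB trigger, and a caller holding 40 KiB (> 32 KiB)
        // of it trips the detector.
        assert!(oneway_spam_suspected(128 * 1024, 12 * 1024, &[8 * 1024; 5]));
        // The same caller is fine while plenty of async space remains.
        assert!(!oneway_spam_suspected(128 * 1024, 40 * 1024, &[8 * 1024; 5]));
    }
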
 drivers/android/allocation.rs  |  3 ++
 drivers/android/defs.rs        |  1 +
 drivers/android/process.rs     | 39 ++++++++++++++++++++++++--
 drivers/android/range_alloc.rs | 62 ++++++++++++++++++++++++++++++++++++++++--
 drivers/android/rust_binder.rs |  1 -
 drivers/android/thread.rs      | 11 ++++++--
 drivers/android/transaction.rs |  5 ++++
 rust/kernel/task.rs            |  2 +-
 8 files changed, 115 insertions(+), 9 deletions(-)
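
One more tiny illustration of the thread.rs side of this: the new
BR_ONEWAY_SPAM_SUSPECT completion code is only delivered when the process
sending the oneway transaction has opted in via
BINDER_ENABLE_ONEWAY_SPAM_DETECTION *and* the target's allocator flagged
the new buffer; otherwise the sender keeps getting the usual
BR_TRANSACTION_COMPLETE. (Again just a sketch, with plain booleans
standing in for the process and transaction state.)

    // Which completion code the sender of a oneway transaction gets back.
    fn completion_code(detection_enabled: bool, oneway_spam_detected: bool) -> &'static str {
        if detection_enabled && oneway_spam_detected {
            "BR_ONEWAY_SPAM_SUSPECT"
        } else {
            "BR_TRANSACTION_COMPLETE"
        }
    }

    fn main() {
        assert_eq!(completion_code(true, true), "BR_ONEWAY_SPAM_SUSPECT");
        assert_eq!(completion_code(false, true), "BR_TRANSACTION_COMPLETE");
    }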

diff --git a/drivers/android/allocation.rs b/drivers/android/allocation.rs
index c7f44a54b79b..7b64e7fcce4d 100644
--- a/drivers/android/allocation.rs
+++ b/drivers/android/allocation.rs
@@ -49,6 +49,7 @@ pub(crate) struct Allocation {
     pub(crate) process: Arc<Process>,
     allocation_info: Option<AllocationInfo>,
     free_on_drop: bool,
+    pub(crate) oneway_spam_detected: bool,
 }
 
 impl Allocation {
@@ -58,6 +59,7 @@ pub(crate) fn new(
         size: usize,
         ptr: usize,
         pages: Arc<Vec<Pages<0>>>,
+        oneway_spam_detected: bool,
     ) -> Self {
         Self {
             process,
@@ -65,6 +67,7 @@ pub(crate) fn new(
             size,
             ptr,
             pages,
+            oneway_spam_detected,
             allocation_info: None,
             free_on_drop: true,
         }
diff --git a/drivers/android/defs.rs b/drivers/android/defs.rs
index b1a54f85b365..e345b6ea45cc 100644
--- a/drivers/android/defs.rs
+++ b/drivers/android/defs.rs
@@ -24,6 +24,7 @@ macro_rules! pub_no_prefix {
     BR_NOOP,
     BR_SPAWN_LOOPER,
     BR_TRANSACTION_COMPLETE,
+    BR_ONEWAY_SPAM_SUSPECT,
     BR_OK,
     BR_ERROR,
     BR_INCREFS,
diff --git a/drivers/android/process.rs b/drivers/android/process.rs
index 44baf9e3f998..4ac5d09041a4 100644
--- a/drivers/android/process.rs
+++ b/drivers/android/process.rs
@@ -92,6 +92,8 @@ pub(crate) struct ProcessInner {
     pub(crate) sync_recv: bool,
     /// Process received async transactions since last frozen.
     pub(crate) async_recv: bool,
+    /// Whether oneway spam detection is enabled for this process.
+    oneway_spam_detection_enabled: bool,
 }
 
 impl ProcessInner {
@@ -113,6 +115,7 @@ fn new() -> Self {
             is_frozen: false,
             sync_recv: false,
             async_recv: false,
+            oneway_spam_detection_enabled: false,
         }
     }
 
@@ -658,17 +661,21 @@ pub(crate) fn buffer_alloc(
         self: &Arc<Self>,
         size: usize,
         is_oneway: bool,
+        from_pid: i32,
     ) -> BinderResult<Allocation> {
         let alloc = range_alloc::ReserveNewBox::try_new()?;
         let mut inner = self.inner.lock();
         let mapping = inner.mapping.as_mut().ok_or_else(BinderError::new_dead)?;
-        let offset = mapping.alloc.reserve_new(size, is_oneway, alloc)?;
+        let offset = mapping
+            .alloc
+            .reserve_new(size, is_oneway, from_pid, alloc)?;
         Ok(Allocation::new(
             self.clone(),
             offset,
             size,
             mapping.address + offset,
             mapping.pages.clone(),
+            mapping.alloc.oneway_spam_detected,
         ))
     }
 
@@ -677,7 +684,14 @@ pub(crate) fn buffer_get(self: &Arc<Self>, ptr: usize) -> Option<Allocation> {
         let mapping = inner.mapping.as_mut()?;
         let offset = ptr.checked_sub(mapping.address)?;
         let (size, odata) = mapping.alloc.reserve_existing(offset).ok()?;
-        let mut alloc = Allocation::new(self.clone(), offset, size, ptr, mapping.pages.clone());
+        let mut alloc = Allocation::new(
+            self.clone(),
+            offset,
+            size,
+            ptr,
+            mapping.pages.clone(),
+            mapping.alloc.oneway_spam_detected,
+        );
         if let Some(data) = odata {
             alloc.set_info(data);
         }
@@ -762,6 +776,14 @@ fn set_max_threads(&self, max: u32) {
         self.inner.lock().max_threads = max;
     }
 
+    fn set_oneway_spam_detection_enabled(&self, enabled: u32) {
+        self.inner.lock().oneway_spam_detection_enabled = enabled != 0;
+    }
+
+    pub(crate) fn is_oneway_spam_detection_enabled(&self) -> bool {
+        self.inner.lock().oneway_spam_detection_enabled
+    }
+
     fn get_node_debug_info(&self, data: UserSlicePtr) -> Result {
         let (mut reader, mut writer) = data.reader_writer();
 
@@ -948,9 +970,17 @@ fn deferred_release(self: Arc<Self>) {
         if let Some(mut mapping) = omapping {
             let address = mapping.address;
             let pages = mapping.pages.clone();
+            let oneway_spam_detected = mapping.alloc.oneway_spam_detected;
             mapping.alloc.take_for_each(|offset, size, odata| {
                 let ptr = offset + address;
-                let mut alloc = Allocation::new(self.clone(), offset, size, ptr, pages.clone());
+                let mut alloc = Allocation::new(
+                    self.clone(),
+                    offset,
+                    size,
+                    ptr,
+                    pages.clone(),
+                    oneway_spam_detected,
+                );
                 if let Some(data) = odata {
                     alloc.set_info(data);
                 }
@@ -1144,6 +1174,9 @@ fn write(
             bindings::BINDER_SET_CONTEXT_MGR_EXT => {
                 this.set_as_manager(Some(reader.read()?), &thread)?
             }
+            bindings::BINDER_ENABLE_ONEWAY_SPAM_DETECTION => {
+                this.set_oneway_spam_detection_enabled(reader.read()?)
+            }
             bindings::BINDER_FREEZE => ioctl_freeze(reader)?,
             _ => return Err(EINVAL),
         }
diff --git a/drivers/android/range_alloc.rs b/drivers/android/range_alloc.rs
index e757129613cf..c1d47115e54d 100644
--- a/drivers/android/range_alloc.rs
+++ b/drivers/android/range_alloc.rs
@@ -3,6 +3,7 @@
 use kernel::{
     prelude::*,
     rbtree::{RBTree, RBTreeNode, RBTreeNodeReservation},
+    task::Pid,
 };
 
 /// Keeps track of allocations in a process' mmap.
@@ -13,7 +14,9 @@
 pub(crate) struct RangeAllocator<T> {
     tree: RBTree<usize, Descriptor<T>>,
     free_tree: RBTree<FreeKey, ()>,
+    size: usize,
     free_oneway_space: usize,
+    pub(crate) oneway_spam_detected: bool,
 }
 
 impl<T> RangeAllocator<T> {
@@ -26,6 +29,8 @@ pub(crate) fn new(size: usize) -> Result<Self> {
             free_oneway_space: size / 2,
             tree,
             free_tree,
+            oneway_spam_detected: false,
+            size,
         })
     }
 
@@ -40,6 +45,7 @@ pub(crate) fn reserve_new(
         &mut self,
         size: usize,
         is_oneway: bool,
+        pid: Pid,
         alloc: ReserveNewBox<T>,
     ) -> Result<usize> {
         // Compute new value of free_oneway_space, which is set only on success.
@@ -52,6 +58,15 @@ pub(crate) fn reserve_new(
             self.free_oneway_space
         };
 
+        // Start detecting spammers once we have less than 20%
+        // of async space left (which is less than 10% of total
+        // buffer size).
+        //
+        // (This will short-circuit, so `low_oneway_space` is
+        // only called when necessary.)
+        self.oneway_spam_detected =
+            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
+
         let (found_size, found_off, tree_node, free_tree_node) = match self.find_best_match(size) {
             None => {
                 pr_warn!("ENOSPC from range_alloc.reserve_new - size: {}", size);
@@ -65,7 +80,7 @@ pub(crate) fn reserve_new(
                 let new_desc = Descriptor::new(found_offset + size, found_size - size);
                 let (tree_node, free_tree_node, desc_node_res) = alloc.initialize(new_desc);
 
-                desc.state = Some(DescriptorState::new(is_oneway, desc_node_res));
+                desc.state = Some(DescriptorState::new(is_oneway, pid, desc_node_res));
                 desc.size = size;
 
                 (found_size, found_offset, tree_node, free_tree_node)
@@ -224,6 +239,30 @@ pub(crate) fn take_for_each<F: Fn(usize, usize, Option<T>)>(&mut self, callback:
             }
         }
     }
+
+    /// Find the number and total size of buffers allocated by the calling pid.
+    ///
+    /// The idea is that once we cross the threshold, whoever is responsible
+    /// for the low async space is likely to try to send another async transaction,
+    /// and at some point we'll catch them in the act.  This is more efficient
+    /// than keeping a map per pid.
+    fn low_oneway_space(&self, calling_pid: Pid) -> bool {
+        let mut total_alloc_size = 0;
+        let mut num_buffers = 0;
+        for (_, desc) in self.tree.iter() {
+            if let Some(state) = &desc.state {
+                if state.is_oneway() && state.pid() == calling_pid {
+                    total_alloc_size += desc.size;
+                    num_buffers += 1;
+                }
+            }
+        }
+
+        // Warn if this pid has more than 50 transactions, or more than 50% of
+        // async space (which is 25% of total buffer size). Oneway spam is only
+        // detected when the threshold is exceeded.
+        num_buffers > 50 || total_alloc_size > self.size / 4
+    }
 }
 
 struct Descriptor<T> {
@@ -257,16 +296,32 @@ enum DescriptorState<T> {
 }
 
 impl<T> DescriptorState<T> {
-    fn new(is_oneway: bool, free_res: FreeNodeRes) -> Self {
+    fn new(is_oneway: bool, pid: Pid, free_res: FreeNodeRes) -> Self {
         DescriptorState::Reserved(Reservation {
             is_oneway,
+            pid,
             free_res,
         })
     }
+
+    fn pid(&self) -> Pid {
+        match self {
+            DescriptorState::Reserved(inner) => inner.pid,
+            DescriptorState::Allocated(inner) => inner.pid,
+        }
+    }
+
+    fn is_oneway(&self) -> bool {
+        match self {
+            DescriptorState::Reserved(inner) => inner.is_oneway,
+            DescriptorState::Allocated(inner) => inner.is_oneway,
+        }
+    }
 }
 
 struct Reservation {
     is_oneway: bool,
+    pid: Pid,
     free_res: FreeNodeRes,
 }
 
@@ -275,6 +330,7 @@ fn allocate<T>(self, data: Option<T>) -> Allocation<T> {
         Allocation {
             data,
             is_oneway: self.is_oneway,
+            pid: self.pid,
             free_res: self.free_res,
         }
     }
@@ -282,6 +338,7 @@ fn allocate<T>(self, data: Option<T>) -> Allocation<T> {
 
 struct Allocation<T> {
     is_oneway: bool,
+    pid: Pid,
     free_res: FreeNodeRes,
     data: Option<T>,
 }
@@ -291,6 +348,7 @@ fn deallocate(self) -> (Reservation, Option<T>) {
         (
             Reservation {
                 is_oneway: self.is_oneway,
+                pid: self.pid,
                 free_res: self.free_res,
             },
             self.data,
diff --git a/drivers/android/rust_binder.rs b/drivers/android/rust_binder.rs
index 04477ff7e5a0..adf542872f36 100644
--- a/drivers/android/rust_binder.rs
+++ b/drivers/android/rust_binder.rs
@@ -107,7 +107,6 @@ fn new(val: impl PinInit<T>) -> impl PinInit<Self> {
         })
     }
 
-    #[allow(dead_code)]
     fn arc_try_new(val: T) -> Result<DLArc<T>, alloc::alloc::AllocError> {
         ListArc::pin_init(pin_init!(Self {
             links <- ListLinksSelfPtr::new(),
diff --git a/drivers/android/thread.rs b/drivers/android/thread.rs
index 0238c15604f6..414ffb1387a0 100644
--- a/drivers/android/thread.rs
+++ b/drivers/android/thread.rs
@@ -909,7 +909,7 @@ pub(crate) fn copy_transaction_data(
             size_of::<usize>(),
         );
         let secctx_off = adata_size + aoffsets_size + abuffers_size;
-        let mut alloc = match to_process.buffer_alloc(len, is_oneway) {
+        let mut alloc = match to_process.buffer_alloc(len, is_oneway, self.process.task.pid()) {
             Ok(alloc) => alloc,
             Err(err) => {
                 pr_warn!(
@@ -1191,8 +1191,15 @@ fn oneway_transaction_inner(self: &Arc<Self>, tr: &BinderTransactionDataSg) -> B
         let handle = unsafe { tr.transaction_data.target.handle };
         let node_ref = self.process.get_transaction_node(handle)?;
         security::binder_transaction(&self.process.cred, &node_ref.node.owner.cred)?;
-        let list_completion = DTRWrap::arc_try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
         let transaction = Transaction::new(node_ref, None, self, tr)?;
+        let code = if self.process.is_oneway_spam_detection_enabled()
+            && transaction.oneway_spam_detected
+        {
+            BR_ONEWAY_SPAM_SUSPECT
+        } else {
+            BR_TRANSACTION_COMPLETE
+        };
+        let list_completion = DTRWrap::arc_try_new(DeliverCode::new(code))?;
         let completion = list_completion.clone_arc();
         self.inner.lock().push_work(list_completion);
         match transaction.submit() {
diff --git a/drivers/android/transaction.rs b/drivers/android/transaction.rs
index 7028c504ef8c..84b9fe58fe3e 100644
--- a/drivers/android/transaction.rs
+++ b/drivers/android/transaction.rs
@@ -38,6 +38,7 @@ pub(crate) struct Transaction {
     data_address: usize,
     sender_euid: Kuid,
     txn_security_ctx_off: Option<usize>,
+    pub(crate) oneway_spam_detected: bool,
 }
 
 kernel::list::impl_list_arc_safe! {
@@ -70,6 +71,7 @@ pub(crate) fn new(
                 return Err(err);
             }
         };
+        let oneway_spam_detected = alloc.oneway_spam_detected;
         if trd.flags & TF_ONE_WAY != 0 {
             if stack_next.is_some() {
                 pr_warn!("Oneway transaction should not be in a transaction stack.");
@@ -98,6 +100,7 @@ pub(crate) fn new(
             allocation <- kernel::new_spinlock!(Some(alloc), "Transaction::new"),
             is_outstanding: AtomicBool::new(false),
             txn_security_ctx_off,
+            oneway_spam_detected,
         }))?)
     }
 
@@ -115,6 +118,7 @@ pub(crate) fn new_reply(
                 return Err(err);
             }
         };
+        let oneway_spam_detected = alloc.oneway_spam_detected;
         if trd.flags & TF_CLEAR_BUF != 0 {
             alloc.set_info_clear_on_drop();
         }
@@ -132,6 +136,7 @@ pub(crate) fn new_reply(
             allocation <- kernel::new_spinlock!(Some(alloc), "Transaction::new"),
             is_outstanding: AtomicBool::new(false),
             txn_security_ctx_off: None,
+            oneway_spam_detected,
         }))?)
     }
 
diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
index 1a27b968a907..81649f12758b 100644
--- a/rust/kernel/task.rs
+++ b/rust/kernel/task.rs
@@ -81,7 +81,7 @@ unsafe impl Send for Task {}
 unsafe impl Sync for Task {}
 
 /// The type of process identifiers (PIDs).
-type Pid = bindings::pid_t;
+pub type Pid = bindings::pid_t;
 
 /// The type of user identifiers (UIDs).
 #[derive(Copy, Clone)]

-- 
2.42.0.820.g83a721a137-goog



Thread overview: 38+ messages
2023-11-01 18:01 [PATCH RFC 00/20] Setting up Binder for the future Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 01/20] rust_binder: define a Rust binder driver Alice Ryhl
2023-11-01 18:09   ` Greg Kroah-Hartman
2023-11-08 10:38     ` Alice Ryhl
2023-11-01 18:25   ` Boqun Feng
2023-11-02 10:27     ` Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 02/20] rust_binder: add binderfs support to Rust binder Alice Ryhl
2023-11-01 18:10   ` Greg Kroah-Hartman
2023-11-08 10:42     ` Alice Ryhl
2023-11-03 10:11   ` Finn Behrens
2023-11-08 10:31     ` Alice Ryhl
2023-11-03 16:30   ` Benno Lossin
2023-11-03 17:34     ` Boqun Feng
2023-11-08 10:25     ` Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 03/20] rust_binder: add threading support Alice Ryhl
2023-11-03 10:51   ` Finn Behrens
2023-11-08 10:27     ` Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 04/20] rust_binder: add work lists Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 05/20] rust_binder: add nodes and context managers Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 06/20] rust_binder: add oneway transactions Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 07/20] rust_binder: add epoll support Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 08/20] rust_binder: add non-oneway transactions Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 09/20] rust_binder: serialize oneway transactions Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 10/20] rust_binder: add death notifications Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 11/20] rust_binder: send nodes in transactions Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 12/20] rust_binder: add BINDER_TYPE_PTR support Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 13/20] rust_binder: add BINDER_TYPE_FD support Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 14/20] rust_binder: add BINDER_TYPE_FDA support Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 15/20] rust_binder: add process freezing Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 16/20] rust_binder: add TF_UPDATE_TXN support Alice Ryhl
2023-11-01 18:01 ` Alice Ryhl [this message]
2023-11-01 18:01 ` [PATCH RFC 18/20] rust_binder: add binder_logs/state Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 19/20] rust_binder: add vma shrinker Alice Ryhl
2023-11-01 18:01 ` [PATCH RFC 20/20] binder: delete the C implementation Alice Ryhl
2023-11-01 18:15   ` Greg Kroah-Hartman
2023-11-01 18:39   ` Carlos Llamas
2023-11-01 18:34 ` [PATCH RFC 00/20] Setting up Binder for the future Carlos Llamas
2023-11-02 13:33   ` Alice Ryhl
