* [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock
From: Sebastian Andrzej Siewior @ 2020-05-24 21:57 UTC
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Evgeniy Polyakov, netdev,
	Sebastian Andrzej Siewior

From: Mike Galbraith <umgwanakikbuti@gmail.com>

send_msg() disables preemption to avoid out-of-order messages. The code
inside the preempt-disabled section acquires regular spinlocks, which
are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel, and
eventually calls into a memory allocator. This conflicts with the RT
semantics.

Convert it to a local_lock, which allows RT kernels to substitute it
with a real per-CPU lock. On non-RT kernels this maps to
preempt_disable() as before. No functional change.

[bigeasy: Patch description]

Cc: Evgeniy Polyakov <zbr@ioremap.net>
Cc: netdev@vger.kernel.org
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/connector/cn_proc.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index d58ce664da843..d424d1f469136 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -18,6 +18,7 @@
 #include <linux/pid_namespace.h>
 
 #include <linux/cn_proc.h>
+#include <linux/locallock.h>
 
 /*
  * Size of a cn_msg followed by a proc_event structure.  Since the
@@ -38,25 +39,32 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
 static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
 static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
 
-/* proc_event_counts is used as the sequence number of the netlink message */
-static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
+/* local_evt.counts is used as the sequence number of the netlink message */
+struct local_evt {
+	__u32 counts;
+	struct local_lock lock;
+};
+static DEFINE_PER_CPU(struct local_evt, local_evt) = {
+	.counts = 0,
+	.lock = INIT_LOCAL_LOCK(lock),
+};
 
 static inline void send_msg(struct cn_msg *msg)
 {
-	preempt_disable();
+	local_lock(&local_evt.lock);
 
-	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
+	msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;
 	((struct proc_event *)msg->data)->cpu = smp_processor_id();
 
 	/*
-	 * Preemption remains disabled during send to ensure the messages are
-	 * ordered according to their sequence numbers.
+	 * local_lock() disables preemption during send to ensure the messages
+	 * are ordered according to their sequence numbers.
 	 *
 	 * If cn_netlink_send() fails, the data is not sent.
 	 */
 	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
 
-	preempt_enable();
+	local_unlock(&local_evt.lock);
 }
 
 void proc_fork_connector(struct task_struct *task)
-- 
2.27.0.rc0
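
[Editor's note: for readers unfamiliar with the local_lock primitive the
patch relies on, below is a minimal, self-contained sketch of the same
pattern, written against the local_lock interface as it later landed in
mainline (<linux/local_lock.h>, local_lock_t) rather than the
<linux/locallock.h> spelling used in this posting. All identifiers here
(example_event, example_next_seq) are illustrative only.]

#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/types.h>

struct example_event {
	local_lock_t lock;	/* preempt_disable() on !RT, per-CPU lock on RT */
	u32 count;		/* per-CPU sequence number */
};

static DEFINE_PER_CPU(struct example_event, example_event) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static u32 example_next_seq(void)
{
	u32 seq;

	/* Keep the counter update and its use on the same CPU. */
	local_lock(&example_event.lock);
	seq = __this_cpu_inc_return(example_event.count) - 1;
	local_unlock(&example_event.lock);

	return seq;
}

On a non-RT kernel, local_lock()/local_unlock() essentially map to
preempt_disable()/preempt_enable(); on PREEMPT_RT they take a per-CPU
spinlock, so the protected section may sleep without breaking RT
guarantees.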



* Re: [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock
From: Ingo Molnar @ 2020-05-25  7:18 UTC
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Evgeniy Polyakov, netdev


* Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> From: Mike Galbraith <umgwanakikbuti@gmail.com>
> 
> send_msg() disables preemption to avoid out-of-order messages. The code
> inside the preempt-disabled section acquires regular spinlocks, which
> are converted to 'sleeping' spinlocks on a PREEMPT_RT kernel, and
> eventually calls into a memory allocator. This conflicts with the RT
> semantics.
> 
> Convert it to a local_lock, which allows RT kernels to substitute it
> with a real per-CPU lock. On non-RT kernels this maps to
> preempt_disable() as before. No functional change.
> 
> [bigeasy: Patch description]
> 
> Cc: Evgeniy Polyakov <zbr@ioremap.net>
> Cc: netdev@vger.kernel.org
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  drivers/connector/cn_proc.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
> index d58ce664da843..d424d1f469136 100644
> --- a/drivers/connector/cn_proc.c
> +++ b/drivers/connector/cn_proc.c
> @@ -18,6 +18,7 @@
>  #include <linux/pid_namespace.h>
>  
>  #include <linux/cn_proc.h>
> +#include <linux/locallock.h>
>  
>  /*
>   * Size of a cn_msg followed by a proc_event structure.  Since the
> @@ -38,25 +39,32 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
>  static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
>  static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
>  
> -/* proc_event_counts is used as the sequence number of the netlink message */
> -static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
> +/* local_evt.counts is used as the sequence number of the netlink message */
> +struct local_evt {
> +	__u32 counts;
> +	struct local_lock lock;
> +};
> +static DEFINE_PER_CPU(struct local_evt, local_evt) = {
> +	.counts = 0,

I don't think zero initializations need to be written out explicitly.

> +	.lock = INIT_LOCAL_LOCK(lock),
> +};
>  
>  static inline void send_msg(struct cn_msg *msg)
>  {
> -	preempt_disable();
> +	local_lock(&local_evt.lock);
>  
> -	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
> +	msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;

Naming nit: renaming this from 'proc_event_counts' to 
'local_evt.counts' is a step back IMO - what's an 'evt',
did we run out of e's? ;-)

Should be something like local_event.count? (Singular.)

Thanks,

	Ingo


* Re: [PATCH v2 5/7] connector/cn_proc: Protect send_msg() with a local lock
From: Sebastian Andrzej Siewior @ 2020-05-25 14:51 UTC
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Steven Rostedt, Will Deacon,
	Thomas Gleixner, Paul E . McKenney, Linus Torvalds,
	Matthew Wilcox, Mike Galbraith, Evgeniy Polyakov, netdev

On 2020-05-25 09:18:19 [+0200], Ingo Molnar wrote:
> > +static DEFINE_PER_CPU(struct local_evt, local_evt) = {
> > +	.counts = 0,
> 
> I don't think zero initializations need to be written out explicitly.
yes.

> > +	.lock = INIT_LOCAL_LOCK(lock),
> > +};
> >  
> >  static inline void send_msg(struct cn_msg *msg)
> >  {
> > -	preempt_disable();
> > +	local_lock(&local_evt.lock);
> >  
> > -	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
> > +	msg->seq = __this_cpu_inc_return(local_evt.counts) - 1;
> 
> Naming nit: renaming this from 'proc_event_counts' to 
> 'local_evt.counts' is a step back IMO - what's an 'evt',
> did we run out of e's? ;-)
> 
> Should be something like local_event.count? (Singular.)

okay.

> Thanks,
> 
> 	Ingo

Sebastian
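
[Editor's note: below is a sketch of what the data structure might look
like with both review comments applied: the explicit zero initialization
dropped, and 'local_evt'/'counts' renamed to 'local_event'/'count'. It
keeps the <linux/locallock.h> spelling used in this posting and is an
illustration only, not the follow-up patch that was actually posted.]

struct local_event {
	__u32 count;
	struct local_lock lock;
};
static DEFINE_PER_CPU(struct local_event, local_event) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static inline void send_msg(struct cn_msg *msg)
{
	local_lock(&local_event.lock);

	msg->seq = __this_cpu_inc_return(local_event.count) - 1;
	((struct proc_event *)msg->data)->cpu = smp_processor_id();

	/*
	 * local_lock() disables preemption during send to ensure the messages
	 * are ordered according to their sequence numbers.
	 *
	 * If cn_netlink_send() fails, the data is not sent.
	 */
	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);

	local_unlock(&local_event.lock);
}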

