All of lore.kernel.org
* [RFC 0/3] sparse: Introduce __private to privatize members of structs
@ 2015-12-12  2:55 Boqun Feng
  2015-12-12  2:56 ` [RFC 1/3] sparse: Add " Boqun Feng
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Boqun Feng @ 2015-12-12  2:55 UTC (permalink / raw)
  To: linux-sparse, linux-kernel
  Cc: Christopher Li, Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Thomas Gleixner, Jiang Liu, Marc Zyngier, Mika Westerberg,
	Russell King, Brian Norris, Peter Zijlstra, Boqun Feng

Hi all,

As I proposed:

http://marc.info/?l=linux-arm-kernel&m=144944469805753

we can define a __private modifier for sparse to detect misuses of private
members of structs. This could make maintenance a little easier and prevent
some potential bugs.

This patchset serves as a POC and consists of three patches:

1.	Introduce __private and the related ACCESS_PRIVATE() macro, and
	improve compiler.h a little

2.	Privatize rcu_node::lock

3.	Privatize irq_common_data::state_use_accessors

This patchset is against

	-rcu/rcu/next	5f343fc7f0ce7edfe344e7cc5afdfe145a18f802

because this depends on commits:
	
	"rcu: Create transitive rnp->lock acquisition functions"

and

	"rcu: Add transitivity to remaining rcu_node ->lock acquisitions


I tried to test this with 0day, but email delivery from 0day to my mailbox
has been very slow recently, so I haven't received a result yet. That said,
since this patchset makes no functional changes, 0day may not find anything
interesting ;-)


Looking forward to any suggestions, questions and comments ;-)

Regards,
Boqun

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [RFC 1/3] sparse: Add __private to privatize members of structs
  2015-12-12  2:55 [RFC 0/3] sparse: Introduce __private to privatize members of structs Boqun Feng
@ 2015-12-12  2:56 ` Boqun Feng
  2015-12-12  2:56 ` [RFC 2/3] RCU: Privatize rcu_node::lock Boqun Feng
  2015-12-12  2:56 ` [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors Boqun Feng
  2 siblings, 0 replies; 10+ messages in thread
From: Boqun Feng @ 2015-12-12  2:56 UTC (permalink / raw)
  To: linux-sparse, linux-kernel
  Cc: Christopher Li, Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Thomas Gleixner, Jiang Liu, Marc Zyngier, Mika Westerberg,
	Russell King, Brian Norris, Peter Zijlstra, Boqun Feng

The C programming language has no easy way to privatize a member of a
structure. However, in the kernel there is sometimes a need to privatize
a member to prevent potential bugs or misuses.

Fortunately, sparse's noderef attribute provides a way to privatize a
member: if a member is declared noderef, the address-of operator on that
member produces a noderef pointer, and if anyone dereferences that kind
of pointer to read or modify the member, sparse will yell.

Based on this, the __private modifier and the related accessor
ACCESS_PRIVATE() are introduced, which help detect unintended public
uses of private members of structs. Here is an example of sparse's
output when it detects such a use:

| kernel/rcu/tree.c:4453:25: warning: incorrect type in argument 1 (different modifiers)
| kernel/rcu/tree.c:4453:25:    expected struct raw_spinlock [usertype] *lock
| kernel/rcu/tree.c:4453:25:    got struct raw_spinlock [noderef] *<noident>

Also, this patch improves compiler.h a little bit by adding comments for
"#else" and "#endif".

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/compiler.h | 12 ++++++++----
 scripts/checkpatch.pl    |  3 ++-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 4dac103..e4a4fb1 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -20,12 +20,14 @@
 # define __pmem		__attribute__((noderef, address_space(5)))
 #ifdef CONFIG_SPARSE_RCU_POINTER
 # define __rcu		__attribute__((noderef, address_space(4)))
-#else
+#else /* CONFIG_SPARSE_RCU_POINTER */
 # define __rcu
-#endif
+#endif /* CONFIG_SPARSE_RCU_POINTER */
+# define __private	__attribute__((noderef))
 extern void __chk_user_ptr(const volatile void __user *);
 extern void __chk_io_ptr(const volatile void __iomem *);
-#else
+# define ACCESS_PRIVATE(p, member) (*((typeof((p)->member) __force *) &(p)->member))
+#else /* __CHECKER__ */
 # define __user
 # define __kernel
 # define __safe
@@ -44,7 +46,9 @@ extern void __chk_io_ptr(const volatile void __iomem *);
 # define __percpu
 # define __rcu
 # define __pmem
-#endif
+# define __private
+# define ACCESS_PRIVATE(p, member) ((p)->member)
+#endif /* __CHECKER__ */
 
 /* Indirect macros required for expanded argument pasting, eg. __LINE__. */
 #define ___PASTE(a,b) a##b
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 2b3c228..fda1c61 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -269,7 +269,8 @@ our $Sparse	= qr{
 			__init_refok|
 			__kprobes|
 			__ref|
-			__rcu
+			__rcu|
+			__private
 		}x;
 our $InitAttributePrefix = qr{__(?:mem|cpu|dev|net_|)};
 our $InitAttributeData = qr{$InitAttributePrefix(?:initdata\b)};
-- 
2.6.4



* [RFC 2/3] RCU: Privatize rcu_node::lock
  2015-12-12  2:55 [RFC 0/3] sparse: Introduce __private to privatize members of structs Boqun Feng
  2015-12-12  2:56 ` [RFC 1/3] sparse: Add " Boqun Feng
@ 2015-12-12  2:56 ` Boqun Feng
  2015-12-12  2:56 ` [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors Boqun Feng
  2 siblings, 0 replies; 10+ messages in thread
From: Boqun Feng @ 2015-12-12  2:56 UTC (permalink / raw)
  To: linux-sparse, linux-kernel
  Cc: Christopher Li, Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Thomas Gleixner, Jiang Liu, Marc Zyngier, Mika Westerberg,
	Russell King, Brian Norris, Peter Zijlstra, Boqun Feng

In the patch:

"rcu: Add transitivity to remaining rcu_node ->lock acquisitions"

all locking operations on rcu_node::lock were replaced with wrappers
because of the need for transitivity, which means we should never use
LOCK primitives alone (i.e. without a proper barrier following them) on
rcu_node::lock outside those wrappers. By adding the __private modifier
to rcu_node::lock, we can detect this kind of misuse in the future.

To privatize rcu_node::lock, unlock wrappers are also needed. Replacing
spinlock unlocks with these wrappers not only privatizes rcu_node::lock
but also makes it easier to identify rcu_node critical sections.

This patch adds the __private modifier to rcu_node::lock and wraps every
access to it in ACCESS_PRIVATE(). In addition, unlock wrappers are added,
and raw_spin_unlock(&rnp->lock) and its friends are replaced with those
wrappers.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/rcu/tree.c        | 103 ++++++++++++++++++++++++-----------------------
 kernel/rcu/tree.h        |  42 ++++++++++++++-----
 kernel/rcu/tree_plugin.h |  26 ++++++------
 3 files changed, 96 insertions(+), 75 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e41dd41..1646efd 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1246,7 +1246,7 @@ static void rcu_dump_cpu_stacks(struct rcu_state *rsp)
 				if (rnp->qsmask & (1UL << cpu))
 					dump_cpu_task(rnp->grplo + cpu);
 		}
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	}
 }
 
@@ -1266,12 +1266,12 @@ static void print_other_cpu_stall(struct rcu_state *rsp, unsigned long gpnum)
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	delta = jiffies - READ_ONCE(rsp->jiffies_stall);
 	if (delta < RCU_STALL_RAT_DELAY || !rcu_gp_in_progress(rsp)) {
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return;
 	}
 	WRITE_ONCE(rsp->jiffies_stall,
 		   jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 	/*
 	 * OK, time to rat on our buddy...
@@ -1292,7 +1292,7 @@ static void print_other_cpu_stall(struct rcu_state *rsp, unsigned long gpnum)
 					ndetected++;
 				}
 		}
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	}
 
 	print_cpu_stall_info_end();
@@ -1357,7 +1357,7 @@ static void print_cpu_stall(struct rcu_state *rsp)
 	if (ULONG_CMP_GE(jiffies, READ_ONCE(rsp->jiffies_stall)))
 		WRITE_ONCE(rsp->jiffies_stall,
 			   jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 	/*
 	 * Attempt to revive the RCU machinery by forcing a context switch.
@@ -1595,7 +1595,7 @@ rcu_start_future_gp(struct rcu_node *rnp, struct rcu_data *rdp,
 	}
 unlock_out:
 	if (rnp != rnp_root)
-		raw_spin_unlock(&rnp_root->lock);
+		raw_spin_unlock_rcu_node(rnp_root);
 out:
 	if (c_out != NULL)
 		*c_out = c;
@@ -1815,7 +1815,7 @@ static void note_gp_changes(struct rcu_state *rsp, struct rcu_data *rdp)
 		return;
 	}
 	needwake = __note_gp_changes(rsp, rnp, rdp);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	if (needwake)
 		rcu_gp_kthread_wake(rsp);
 }
@@ -1840,7 +1840,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 	raw_spin_lock_irq_rcu_node(rnp);
 	if (!READ_ONCE(rsp->gp_flags)) {
 		/* Spurious wakeup, tell caller to go back to sleep.  */
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irq_rcu_node(rnp);
 		return false;
 	}
 	WRITE_ONCE(rsp->gp_flags, 0); /* Clear all flags: New grace period. */
@@ -1850,7 +1850,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 		 * Grace period already in progress, don't start another.
 		 * Not supposed to be able to happen.
 		 */
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irq_rcu_node(rnp);
 		return false;
 	}
 
@@ -1859,7 +1859,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 	/* Record GP times before starting GP, hence smp_store_release(). */
 	smp_store_release(&rsp->gpnum, rsp->gpnum + 1);
 	trace_rcu_grace_period(rsp->name, rsp->gpnum, TPS("start"));
-	raw_spin_unlock_irq(&rnp->lock);
+	raw_spin_unlock_irq_rcu_node(rnp);
 
 	/*
 	 * Apply per-leaf buffered online and offline operations to the
@@ -1873,7 +1873,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 		if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
 		    !rnp->wait_blkd_tasks) {
 			/* Nothing to do on this leaf rcu_node structure. */
-			raw_spin_unlock_irq(&rnp->lock);
+			raw_spin_unlock_irq_rcu_node(rnp);
 			continue;
 		}
 
@@ -1907,7 +1907,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 			rcu_cleanup_dead_rnp(rnp);
 		}
 
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irq_rcu_node(rnp);
 	}
 
 	/*
@@ -1938,7 +1938,7 @@ static bool rcu_gp_init(struct rcu_state *rsp)
 		trace_rcu_grace_period_init(rsp->name, rnp->gpnum,
 					    rnp->level, rnp->grplo,
 					    rnp->grphi, rnp->qsmask);
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irq_rcu_node(rnp);
 		cond_resched_rcu_qs();
 		WRITE_ONCE(rsp->gp_activity, jiffies);
 	}
@@ -1996,7 +1996,7 @@ static void rcu_gp_fqs(struct rcu_state *rsp, bool first_time)
 		raw_spin_lock_irq_rcu_node(rnp);
 		WRITE_ONCE(rsp->gp_flags,
 			   READ_ONCE(rsp->gp_flags) & ~RCU_GP_FLAG_FQS);
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irq_rcu_node(rnp);
 	}
 }
 
@@ -2025,7 +2025,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 	 * safe for us to drop the lock in order to mark the grace
 	 * period as completed in all of the rcu_node structures.
 	 */
-	raw_spin_unlock_irq(&rnp->lock);
+	raw_spin_unlock_irq_rcu_node(rnp);
 
 	/*
 	 * Propagate new ->completed value to rcu_node structures so
@@ -2046,7 +2046,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 			needgp = __note_gp_changes(rsp, rnp, rdp) || needgp;
 		/* smp_mb() provided by prior unlock-lock pair. */
 		nocb += rcu_future_gp_cleanup(rsp, rnp);
-		raw_spin_unlock_irq(&rnp->lock);
+		raw_spin_unlock_irq_rcu_node(rnp);
 		cond_resched_rcu_qs();
 		WRITE_ONCE(rsp->gp_activity, jiffies);
 		rcu_gp_slow(rsp, gp_cleanup_delay);
@@ -2068,7 +2068,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
 				       READ_ONCE(rsp->gpnum),
 				       TPS("newreq"));
 	}
-	raw_spin_unlock_irq(&rnp->lock);
+	raw_spin_unlock_irq_rcu_node(rnp);
 }
 
 /*
@@ -2245,7 +2245,7 @@ static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags)
 {
 	WARN_ON_ONCE(!rcu_gp_in_progress(rsp));
 	WRITE_ONCE(rsp->gp_flags, READ_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS);
-	raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rcu_get_root(rsp), flags);
 	rcu_gp_kthread_wake(rsp);
 }
 
@@ -2275,7 +2275,7 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
 			 * Our bit has already been cleared, or the
 			 * relevant grace period is already over, so done.
 			 */
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 			return;
 		}
 		WARN_ON_ONCE(oldmask); /* Any child must be all zeroed! */
@@ -2287,7 +2287,7 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
 		if (rnp->qsmask != 0 || rcu_preempt_blocked_readers_cgp(rnp)) {
 
 			/* Other bits still set at this level, so done. */
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 			return;
 		}
 		mask = rnp->grpmask;
@@ -2297,7 +2297,7 @@ rcu_report_qs_rnp(unsigned long mask, struct rcu_state *rsp,
 
 			break;
 		}
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		rnp_c = rnp;
 		rnp = rnp->parent;
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
@@ -2329,7 +2329,7 @@ static void rcu_report_unblock_qs_rnp(struct rcu_state *rsp,
 
 	if (rcu_state_p == &rcu_sched_state || rsp != rcu_state_p ||
 	    rnp->qsmask != 0 || rcu_preempt_blocked_readers_cgp(rnp)) {
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return;  /* Still need more quiescent states! */
 	}
 
@@ -2346,7 +2346,7 @@ static void rcu_report_unblock_qs_rnp(struct rcu_state *rsp,
 	/* Report up the rest of the hierarchy, tracking current ->gpnum. */
 	gps = rnp->gpnum;
 	mask = rnp->grpmask;
-	raw_spin_unlock(&rnp->lock);	/* irqs remain disabled. */
+	raw_spin_unlock_rcu_node(rnp);	/* irqs remain disabled. */
 	raw_spin_lock_rcu_node(rnp_p);	/* irqs already disabled. */
 	rcu_report_qs_rnp(mask, rsp, rnp_p, gps, flags);
 }
@@ -2383,12 +2383,12 @@ rcu_report_qs_rdp(int cpu, struct rcu_state *rsp, struct rcu_data *rdp)
 		 */
 		rdp->cpu_no_qs.b.norm = true;	/* need qs for new gp. */
 		rdp->rcu_qs_ctr_snap = __this_cpu_read(rcu_qs_ctr);
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return;
 	}
 	mask = rdp->grpmask;
 	if ((rnp->qsmask & mask) == 0) {
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	} else {
 		rdp->core_needs_qs = 0;
 
@@ -2599,10 +2599,11 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
 		rnp->qsmaskinit &= ~mask;
 		rnp->qsmask &= ~mask;
 		if (rnp->qsmaskinit) {
-			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
+			raw_spin_unlock_rcu_node(rnp);
+			/* irqs remain disabled. */
 			return;
 		}
-		raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
+		raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
 	}
 }
 
@@ -2625,7 +2626,7 @@ static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp)
 	mask = rdp->grpmask;
 	raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
 	rnp->qsmaskinitnext &= ~mask;
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
 /*
@@ -2859,7 +2860,7 @@ static void force_qs_rnp(struct rcu_state *rsp,
 			rcu_report_qs_rnp(mask, rsp, rnp, rnp->gpnum, flags);
 		} else {
 			/* Nothing to do here, so just drop the lock. */
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		}
 	}
 }
@@ -2895,11 +2896,11 @@ static void force_quiescent_state(struct rcu_state *rsp)
 	raw_spin_unlock(&rnp_old->fqslock);
 	if (READ_ONCE(rsp->gp_flags) & RCU_GP_FLAG_FQS) {
 		rsp->n_force_qs_lh++;
-		raw_spin_unlock_irqrestore(&rnp_old->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp_old, flags);
 		return;  /* Someone beat us to it. */
 	}
 	WRITE_ONCE(rsp->gp_flags, READ_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS);
-	raw_spin_unlock_irqrestore(&rnp_old->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp_old, flags);
 	rcu_gp_kthread_wake(rsp);
 }
 
@@ -2925,7 +2926,7 @@ __rcu_process_callbacks(struct rcu_state *rsp)
 	if (cpu_needs_another_gp(rsp, rdp)) {
 		raw_spin_lock_rcu_node(rcu_get_root(rsp)); /* irqs disabled. */
 		needwake = rcu_start_gp(rsp);
-		raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rcu_get_root(rsp), flags);
 		if (needwake)
 			rcu_gp_kthread_wake(rsp);
 	} else {
@@ -3016,7 +3017,7 @@ static void __call_rcu_core(struct rcu_state *rsp, struct rcu_data *rdp,
 
 			raw_spin_lock_rcu_node(rnp_root);
 			needwake = rcu_start_gp(rsp);
-			raw_spin_unlock(&rnp_root->lock);
+			raw_spin_unlock_rcu_node(rnp_root);
 			if (needwake)
 				rcu_gp_kthread_wake(rsp);
 		} else {
@@ -3436,14 +3437,14 @@ static void sync_exp_reset_tree_hotplug(struct rcu_state *rsp)
 	rcu_for_each_leaf_node(rsp, rnp) {
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 		if (rnp->expmaskinit == rnp->expmaskinitnext) {
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 			continue;  /* No new CPUs, nothing to do. */
 		}
 
 		/* Update this node's mask, track old value for propagation. */
 		oldmask = rnp->expmaskinit;
 		rnp->expmaskinit = rnp->expmaskinitnext;
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 		/* If was already nonzero, nothing to propagate. */
 		if (oldmask)
@@ -3458,7 +3459,7 @@ static void sync_exp_reset_tree_hotplug(struct rcu_state *rsp)
 			if (rnp_up->expmaskinit)
 				done = true;
 			rnp_up->expmaskinit |= mask;
-			raw_spin_unlock_irqrestore(&rnp_up->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp_up, flags);
 			if (done)
 				break;
 			mask = rnp_up->grpmask;
@@ -3481,7 +3482,7 @@ static void __maybe_unused sync_exp_reset_tree(struct rcu_state *rsp)
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 		WARN_ON_ONCE(rnp->expmask);
 		rnp->expmask = rnp->expmaskinit;
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	}
 }
 
@@ -3522,11 +3523,11 @@ static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
 			if (!rnp->expmask)
 				rcu_initiate_boost(rnp, flags);
 			else
-				raw_spin_unlock_irqrestore(&rnp->lock, flags);
+				raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 			break;
 		}
 		if (rnp->parent == NULL) {
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 			if (wake) {
 				smp_mb(); /* EGP done before wake_up(). */
 				wake_up(&rsp->expedited_wq);
@@ -3534,7 +3535,7 @@ static void __rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
 			break;
 		}
 		mask = rnp->grpmask;
-		raw_spin_unlock(&rnp->lock); /* irqs remain disabled */
+		raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled */
 		rnp = rnp->parent;
 		raw_spin_lock_rcu_node(rnp); /* irqs already disabled */
 		WARN_ON_ONCE(!(rnp->expmask & mask));
@@ -3569,7 +3570,7 @@ static void rcu_report_exp_cpu_mult(struct rcu_state *rsp, struct rcu_node *rnp,
 
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	if (!(rnp->expmask & mask)) {
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return;
 	}
 	rnp->expmask &= ~mask;
@@ -3730,7 +3731,7 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 		 */
 		if (rcu_preempt_has_tasks(rnp))
 			rnp->exp_tasks = rnp->blkd_tasks.next;
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 		/* IPI the remaining CPUs for expedited quiescent state. */
 		mask = 1;
@@ -3747,7 +3748,7 @@ retry_ipi:
 			raw_spin_lock_irqsave_rcu_node(rnp, flags);
 			if (cpu_online(cpu) &&
 			    (rnp->expmask & mask)) {
-				raw_spin_unlock_irqrestore(&rnp->lock, flags);
+				raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 				schedule_timeout_uninterruptible(1);
 				if (cpu_online(cpu) &&
 				    (rnp->expmask & mask))
@@ -3756,7 +3757,7 @@ retry_ipi:
 			}
 			if (!(rnp->expmask & mask))
 				mask_ofl_ipi &= ~mask;
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		}
 		/* Report quiescent states for those that went offline. */
 		mask_ofl_test |= mask_ofl_ipi;
@@ -4163,7 +4164,7 @@ static void rcu_init_new_rnp(struct rcu_node *rnp_leaf)
 			return;
 		raw_spin_lock_rcu_node(rnp); /* Interrupts already disabled. */
 		rnp->qsmaskinit |= mask;
-		raw_spin_unlock(&rnp->lock); /* Interrupts remain disabled. */
+		raw_spin_unlock_rcu_node(rnp); /* Interrupts remain disabled. */
 	}
 }
 
@@ -4187,7 +4188,7 @@ rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
 	rdp->rsp = rsp;
 	mutex_init(&rdp->exp_funnel_mutex);
 	rcu_boot_init_nocb_percpu_data(rdp);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
 /*
@@ -4215,7 +4216,7 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp)
 	rcu_sysidle_init_percpu_data(rdp->dynticks);
 	atomic_set(&rdp->dynticks->dynticks,
 		   (atomic_read(&rdp->dynticks->dynticks) & ~0x1) + 1);
-	raw_spin_unlock(&rnp->lock);		/* irqs remain disabled. */
+	raw_spin_unlock_rcu_node(rnp);		/* irqs remain disabled. */
 
 	/*
 	 * Add CPU to leaf rcu_node pending-online bitmask.  Any needed
@@ -4236,7 +4237,7 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp)
 	rdp->rcu_qs_ctr_snap = per_cpu(rcu_qs_ctr, cpu);
 	rdp->core_needs_qs = false;
 	trace_rcu_grace_period(rsp->name, rdp->gpnum, TPS("cpuonl"));
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
 static void rcu_prepare_cpu(int cpu)
@@ -4358,7 +4359,7 @@ static int __init rcu_spawn_gp_kthread(void)
 			sp.sched_priority = kthread_prio;
 			sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
 		}
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		wake_up_process(t);
 	}
 	rcu_spawn_nocb_kthreads();
@@ -4449,8 +4450,8 @@ static void __init rcu_init_one(struct rcu_state *rsp)
 		cpustride *= levelspread[i];
 		rnp = rsp->level[i];
 		for (j = 0; j < levelcnt[i]; j++, rnp++) {
-			raw_spin_lock_init(&rnp->lock);
-			lockdep_set_class_and_name(&rnp->lock,
+			raw_spin_lock_init(&ACCESS_PRIVATE(rnp, lock));
+			lockdep_set_class_and_name(&ACCESS_PRIVATE(rnp, lock),
 						   &rcu_node_class[i], buf[i]);
 			raw_spin_lock_init(&rnp->fqslock);
 			lockdep_set_class_and_name(&rnp->fqslock,
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 83360b4..4886d6a 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -149,8 +149,9 @@ struct rcu_dynticks {
  * Definition for node within the RCU grace-period-detection hierarchy.
  */
 struct rcu_node {
-	raw_spinlock_t lock;	/* Root rcu_node's lock protects some */
-				/*  rcu_state fields as well as following. */
+	raw_spinlock_t __private lock;	/* Root rcu_node's lock protects */
+					/*  some rcu_state fields as well as */
+					/*  following. */
 	unsigned long gpnum;	/* Current grace period for this node. */
 				/*  This will either be equal to or one */
 				/*  behind the root rcu_node's gpnum. */
@@ -680,7 +681,7 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
 #endif /* #else #ifdef CONFIG_PPC */
 
 /*
- * Wrappers for the rcu_node::lock acquire.
+ * Wrappers for the rcu_node::lock acquire and release.
  *
  * Because the rcu_nodes form a tree, the tree traversal locking will observe
  * different lock values, this in turn means that an UNLOCK of one level
@@ -689,29 +690,48 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
  *
  * In order to restore full ordering between tree levels, augment the regular
  * lock acquire functions with smp_mb__after_unlock_lock().
+ *
+ * Because ->lock of struct rcu_node is a __private field, one should use
+ * these wrappers rather than directly calling raw_spin_{lock,unlock}* on ->lock.
  */
 static inline void raw_spin_lock_rcu_node(struct rcu_node *rnp)
 {
-	raw_spin_lock(&rnp->lock);
+	raw_spin_lock(&ACCESS_PRIVATE(rnp, lock));
 	smp_mb__after_unlock_lock();
 }
 
+static inline void raw_spin_unlock_rcu_node(struct rcu_node *rnp)
+{
+	raw_spin_unlock(&ACCESS_PRIVATE(rnp, lock));
+}
+
 static inline void raw_spin_lock_irq_rcu_node(struct rcu_node *rnp)
 {
-	raw_spin_lock_irq(&rnp->lock);
+	raw_spin_lock_irq(&ACCESS_PRIVATE(rnp, lock));
 	smp_mb__after_unlock_lock();
 }
 
-#define raw_spin_lock_irqsave_rcu_node(rnp, flags)	\
-do {							\
-	typecheck(unsigned long, flags);		\
-	raw_spin_lock_irqsave(&(rnp)->lock, flags);	\
-	smp_mb__after_unlock_lock();			\
+static inline void raw_spin_unlock_irq_rcu_node(struct rcu_node *rnp)
+{
+	raw_spin_unlock_irq(&ACCESS_PRIVATE(rnp, lock));
+}
+
+#define raw_spin_lock_irqsave_rcu_node(rnp, flags)			\
+do {									\
+	typecheck(unsigned long, flags);				\
+	raw_spin_lock_irqsave(&ACCESS_PRIVATE(rnp, lock), flags);	\
+	smp_mb__after_unlock_lock();					\
+} while (0)
+
+#define raw_spin_unlock_irqrestore_rcu_node(rnp, flags)			\
+do {									\
+	typecheck(unsigned long, flags);				\
+	raw_spin_unlock_irqrestore(&ACCESS_PRIVATE(rnp, lock), flags);	\
 } while (0)
 
 static inline bool raw_spin_trylock_rcu_node(struct rcu_node *rnp)
 {
-	bool locked = raw_spin_trylock(&rnp->lock);
+	bool locked = raw_spin_trylock(&ACCESS_PRIVATE(rnp, lock));
 
 	if (locked)
 		smp_mb__after_unlock_lock();
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 9467a8b..0d51a67 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -235,7 +235,7 @@ static void rcu_preempt_ctxt_queue(struct rcu_node *rnp, struct rcu_data *rdp)
 		rnp->gp_tasks = &t->rcu_node_entry;
 	if (!rnp->exp_tasks && (blkd_state & RCU_EXP_BLKD))
 		rnp->exp_tasks = &t->rcu_node_entry;
-	raw_spin_unlock(&rnp->lock); /* rrupts remain disabled. */
+	raw_spin_unlock_rcu_node(rnp); /* interrupts remain disabled. */
 
 	/*
 	 * Report the quiescent state for the expedited GP.  This expedited
@@ -489,7 +489,7 @@ void rcu_read_unlock_special(struct task_struct *t)
 							 !!rnp->gp_tasks);
 			rcu_report_unblock_qs_rnp(rcu_state_p, rnp, flags);
 		} else {
-			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		}
 
 		/* Unboost if we were boosted. */
@@ -518,14 +518,14 @@ static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
 
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	if (!rcu_preempt_blocked_readers_cgp(rnp)) {
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return;
 	}
 	t = list_entry(rnp->gp_tasks->prev,
 		       struct task_struct, rcu_node_entry);
 	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry)
 		sched_show_task(t);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
 /*
@@ -991,7 +991,7 @@ static int rcu_boost(struct rcu_node *rnp)
 	 * might exit their RCU read-side critical sections on their own.
 	 */
 	if (rnp->exp_tasks == NULL && rnp->boost_tasks == NULL) {
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return 0;
 	}
 
@@ -1028,7 +1028,7 @@ static int rcu_boost(struct rcu_node *rnp)
 	 */
 	t = container_of(tb, struct task_struct, rcu_node_entry);
 	rt_mutex_init_proxy_locked(&rnp->boost_mtx, t);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	/* Lock only for side effect: boosts task t's priority. */
 	rt_mutex_lock(&rnp->boost_mtx);
 	rt_mutex_unlock(&rnp->boost_mtx);  /* Then keep lockdep happy. */
@@ -1088,7 +1088,7 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
 
 	if (!rcu_preempt_blocked_readers_cgp(rnp) && rnp->exp_tasks == NULL) {
 		rnp->n_balk_exp_gp_tasks++;
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		return;
 	}
 	if (rnp->exp_tasks != NULL ||
@@ -1098,13 +1098,13 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
 	     ULONG_CMP_GE(jiffies, rnp->boost_time))) {
 		if (rnp->exp_tasks == NULL)
 			rnp->boost_tasks = rnp->gp_tasks;
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 		t = rnp->boost_kthread_task;
 		if (t)
 			rcu_wake_cond(t, rnp->boost_kthread_status);
 	} else {
 		rcu_initiate_boost_trace(rnp);
-		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	}
 }
 
@@ -1172,7 +1172,7 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
 		return PTR_ERR(t);
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	rnp->boost_kthread_task = t;
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	sp.sched_priority = kthread_prio;
 	sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
 	wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */
@@ -1308,7 +1308,7 @@ static void rcu_prepare_kthreads(int cpu)
 static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
 	__releases(rnp->lock)
 {
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
 static void invoke_rcu_callbacks_kthread(void)
@@ -1559,7 +1559,7 @@ static void rcu_prepare_for_idle(void)
 		rnp = rdp->mynode;
 		raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
 		needwake = rcu_accelerate_cbs(rsp, rnp, rdp);
-		raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
+		raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
 		if (needwake)
 			rcu_gp_kthread_wake(rsp);
 	}
@@ -2059,7 +2059,7 @@ static void rcu_nocb_wait_gp(struct rcu_data *rdp)
 
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	needwake = rcu_start_future_gp(rnp, rdp, &c);
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	if (needwake)
 		rcu_gp_kthread_wake(rdp->rsp);
 
-- 
2.6.4



* [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors
  2015-12-12  2:55 [RFC 0/3] sparse: Introduce __private to privatize members of structs Boqun Feng
  2015-12-12  2:56 ` [RFC 1/3] sparse: Add " Boqun Feng
  2015-12-12  2:56 ` [RFC 2/3] RCU: Privatize rcu_node::lock Boqun Feng
@ 2015-12-12  2:56 ` Boqun Feng
  2015-12-14  9:50   ` Peter Zijlstra
  2 siblings, 1 reply; 10+ messages in thread
From: Boqun Feng @ 2015-12-12  2:56 UTC (permalink / raw)
  To: linux-sparse, linux-kernel
  Cc: Christopher Li, Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Thomas Gleixner, Jiang Liu, Marc Zyngier, Mika Westerberg,
	Russell King, Brian Norris, Peter Zijlstra, Boqun Feng

According to Peter Zijlstra, irq_common_data::state_use_accessors is not
designed for public use. Therefore make it private so that people who
write code accessing it directly will get blamed by sparse.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/irq.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 3c1c967..0b8f273 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -137,7 +137,7 @@ struct irq_domain;
  * @msi_desc:		MSI descriptor
  */
 struct irq_common_data {
-	unsigned int		state_use_accessors;
+	unsigned int		__private state_use_accessors;
 #ifdef CONFIG_NUMA
 	unsigned int		node;
 #endif
@@ -208,7 +208,7 @@ enum {
 	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
 };
 
-#define __irqd_to_state(d)		((d)->common->state_use_accessors)
+#define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
 
 static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
 {
-- 
2.6.4



* Re: [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors
  2015-12-12  2:56 ` [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors Boqun Feng
@ 2015-12-14  9:50   ` Peter Zijlstra
  2015-12-14 10:03     ` Thomas Gleixner
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2015-12-14  9:50 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-sparse, linux-kernel, Christopher Li, Paul E . McKenney,
	Josh Triplett, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	Andy Whitcroft, Joe Perches, Thomas Gleixner, Jiang Liu,
	Marc Zyngier, Mika Westerberg, Russell King, Brian Norris

On Sat, Dec 12, 2015 at 10:56:02AM +0800, Boqun Feng wrote:
> According to Peter Zijlstra, irq_common_data::state_use_accessors is not
> designed for public use. Therefore make it private so that people who
> write code accessing it directly will get blamed by sparse.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  include/linux/irq.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/irq.h b/include/linux/irq.h
> index 3c1c967..0b8f273 100644
> --- a/include/linux/irq.h
> +++ b/include/linux/irq.h
> @@ -137,7 +137,7 @@ struct irq_domain;
>   * @msi_desc:		MSI descriptor
>   */
>  struct irq_common_data {
> -	unsigned int		state_use_accessors;
> +	unsigned int		__private state_use_accessors;
>  #ifdef CONFIG_NUMA
>  	unsigned int		node;
>  #endif
> @@ -208,7 +208,7 @@ enum {
>  	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
>  };
>  
> -#define __irqd_to_state(d)		((d)->common->state_use_accessors)
> +#define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
>  
>  static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
>  {

We should probably #undef that one once we're done with it.. Thomas?


* Re: [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors
  2015-12-14  9:50   ` Peter Zijlstra
@ 2015-12-14 10:03     ` Thomas Gleixner
  2015-12-14 12:00         ` Boqun Feng
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2015-12-14 10:03 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, linux-sparse, linux-kernel, Christopher Li,
	Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Jiang Liu, Marc Zyngier, Mika Westerberg, Russell King,
	Brian Norris

On Mon, 14 Dec 2015, Peter Zijlstra wrote:
> On Sat, Dec 12, 2015 at 10:56:02AM +0800, Boqun Feng wrote:
> > According to Peter Zijlstra, irq_common_data::state_use_accessors is not
> > designed for public use. Therefore make it private so that people who
> > write code accessing it directly will get blamed by sparse.
> > 
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> >  include/linux/irq.h | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/irq.h b/include/linux/irq.h
> > index 3c1c967..0b8f273 100644
> > --- a/include/linux/irq.h
> > +++ b/include/linux/irq.h
> > @@ -137,7 +137,7 @@ struct irq_domain;
> >   * @msi_desc:		MSI descriptor
> >   */
> >  struct irq_common_data {
> > -	unsigned int		state_use_accessors;
> > +	unsigned int		__private state_use_accessors;
> >  #ifdef CONFIG_NUMA
> >  	unsigned int		node;
> >  #endif
> > @@ -208,7 +208,7 @@ enum {
> >  	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
> >  };
> >  
> > -#define __irqd_to_state(d)		((d)->common->state_use_accessors)
> > +#define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
> >  
> >  static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
> >  {
> 
> We should probably #undef that one once we're done with it.. Thomas?

Probably.
 


* Re: [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors
  2015-12-14 10:03     ` Thomas Gleixner
@ 2015-12-14 12:00         ` Boqun Feng
  0 siblings, 0 replies; 10+ messages in thread
From: Boqun Feng @ 2015-12-14 12:00 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Peter Zijlstra, linux-sparse, linux-kernel, Christopher Li,
	Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Jiang Liu, Marc Zyngier, Mika Westerberg, Russell King,
	Brian Norris

On Mon, Dec 14, 2015 at 11:03:03AM +0100, Thomas Gleixner wrote:
> On Mon, 14 Dec 2015, Peter Zijlstra wrote:
> > On Sat, Dec 12, 2015 at 10:56:02AM +0800, Boqun Feng wrote:
> > > According to Peter Zijlstra, irq_common_data::state_use_accessors is not
> > > designed for public use. Therefore make it private so that people who
> > > write code accessing it directly will get blamed by sparse.
> > > 
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > >  include/linux/irq.h | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/include/linux/irq.h b/include/linux/irq.h
> > > index 3c1c967..0b8f273 100644
> > > --- a/include/linux/irq.h
> > > +++ b/include/linux/irq.h
> > > @@ -137,7 +137,7 @@ struct irq_domain;
> > >   * @msi_desc:		MSI descriptor
> > >   */
> > >  struct irq_common_data {
> > > -	unsigned int		state_use_accessors;
> > > +	unsigned int		__private state_use_accessors;
> > >  #ifdef CONFIG_NUMA
> > >  	unsigned int		node;
> > >  #endif
> > > @@ -208,7 +208,7 @@ enum {
> > >  	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
> > >  };
> > >  
> > > -#define __irqd_to_state(d)		((d)->common->state_use_accessors)
> > > +#define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
> > >  
> > >  static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
> > >  {
> > 
> > We should probably #undef that one once we're done with it.. Thomas?
> 
> Probably.
>  

Probably something like this (untested; I have only run "make kernel/irq/")?

Subject: [RFC v2 3/3] irq: Privatize irq_common_data::state_use_accessors

According to Peter Zijlstra, irq_common_data::state_use_accessors is not
designed for public use. Therefore make it private so that people who
write code accessing it directly will get blamed by sparse.

Also, the macro __irqd_to_state() is meant only for _designed_ accesses to
irq_data's state, so it is better to limit its scope: put all its callers
together and #undef it after use.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
v1-->v2:
	* move all callers of __irqd_to_state() together so that we can
	  #undef it after use.

 include/linux/irq.h    | 35 +++++++++++++++++++++++++++++++++--
 kernel/irq/internals.h | 28 ----------------------------
 2 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 3c1c967..9b20df6 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -137,7 +137,7 @@ struct irq_domain;
  * @msi_desc:		MSI descriptor
  */
 struct irq_common_data {
-	unsigned int		state_use_accessors;
+	unsigned int		__private state_use_accessors;
 #ifdef CONFIG_NUMA
 	unsigned int		node;
 #endif
@@ -208,7 +208,11 @@ enum {
 	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
 };
 
-#define __irqd_to_state(d)		((d)->common->state_use_accessors)
+#define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
+
+/*
+ * Manipulation functions for irq_data's state
+ */
 
 static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
 {
@@ -299,6 +303,33 @@ static inline void irqd_clr_forwarded_to_vcpu(struct irq_data *d)
 	__irqd_to_state(d) &= ~IRQD_FORWARDED_TO_VCPU;
 }
 
+static inline void irqd_set_move_pending(struct irq_data *d)
+{
+	__irqd_to_state(d) |= IRQD_SETAFFINITY_PENDING;
+}
+
+static inline void irqd_clr_move_pending(struct irq_data *d)
+{
+	__irqd_to_state(d) &= ~IRQD_SETAFFINITY_PENDING;
+}
+
+static inline void irqd_clear(struct irq_data *d, unsigned int mask)
+{
+	__irqd_to_state(d) &= ~mask;
+}
+
+static inline void irqd_set(struct irq_data *d, unsigned int mask)
+{
+	__irqd_to_state(d) |= mask;
+}
+
+static inline bool irqd_has_set(struct irq_data *d, unsigned int mask)
+{
+	return __irqd_to_state(d) & mask;
+}
+
+#undef __irqd_to_state
+
 static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
 {
 	return d->hwirq;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index fcab63c..307fff8 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -160,34 +160,6 @@ irq_put_desc_unlock(struct irq_desc *desc, unsigned long flags)
 	__irq_put_desc_unlock(desc, flags, false);
 }
 
-/*
- * Manipulation functions for irq_data.state
- */
-static inline void irqd_set_move_pending(struct irq_data *d)
-{
-	__irqd_to_state(d) |= IRQD_SETAFFINITY_PENDING;
-}
-
-static inline void irqd_clr_move_pending(struct irq_data *d)
-{
-	__irqd_to_state(d) &= ~IRQD_SETAFFINITY_PENDING;
-}
-
-static inline void irqd_clear(struct irq_data *d, unsigned int mask)
-{
-	__irqd_to_state(d) &= ~mask;
-}
-
-static inline void irqd_set(struct irq_data *d, unsigned int mask)
-{
-	__irqd_to_state(d) |= mask;
-}
-
-static inline bool irqd_has_set(struct irq_data *d, unsigned int mask)
-{
-	return __irqd_to_state(d) & mask;
-}
-
 static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
 {
 	__this_cpu_inc(*desc->kstat_irqs);
-- 
2.6.4



* Re: [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors
  2015-12-14 12:00         ` Boqun Feng
  (?)
@ 2015-12-14 15:59         ` Thomas Gleixner
  2015-12-15  0:56           ` Boqun Feng
  -1 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2015-12-14 15:59 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Peter Zijlstra, linux-sparse, linux-kernel, Christopher Li,
	Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Jiang Liu, Marc Zyngier, Mika Westerberg, Russell King,
	Brian Norris

On Mon, 14 Dec 2015, Boqun Feng wrote:
> Probably something like this(untested, only run "make kernel/irq/")?
> 
> Subject: [RFC v2 3/3] irq: Privatize irq_common_data::state_use_accessors
> 
> According to Peter Zijlstra, irq_common_data::state_use_accessors is not
> designed for public use. Therefore make it private so that people who
> write code accessing it directly will get blamed by sparse.
> 
> Also macro __irqd_to_state() is for _designed_ accesses to irq_data's
> state only, it's better to limit its scope, therefore put all its
> callers together and #undef it after use.

That exposes the set/clr functions to the global header file, while
today those are restricted to the core internals header. There is a
reason why I did not make them public ....

Thanks,

	tglx


* Re: [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors
  2015-12-14 15:59         ` Thomas Gleixner
@ 2015-12-15  0:56           ` Boqun Feng
  0 siblings, 0 replies; 10+ messages in thread
From: Boqun Feng @ 2015-12-15  0:56 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Peter Zijlstra, linux-sparse, linux-kernel, Christopher Li,
	Paul E . McKenney, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Andy Whitcroft, Joe Perches,
	Jiang Liu, Marc Zyngier, Mika Westerberg, Russell King,
	Brian Norris

On Mon, Dec 14, 2015 at 04:59:45PM +0100, Thomas Gleixner wrote:
> On Mon, 14 Dec 2015, Boqun Feng wrote:
> > Probably something like this(untested, only run "make kernel/irq/")?
> > 
> > Subject: [RFC v2 3/3] irq: Privatize irq_common_data::state_use_accessors
> > 
> > According to Peter Zijlstra, irq_common_data::state_use_accessors is not
> > designed for public use. Therefore make it private so that people who
> > write code accessing it directly will get blamed by sparse.
> > 
> > Also macro __irqd_to_state() is for _designed_ accesses to irq_data's
> > state only, it's better to limit its scope, therefore put all its
> > callers together and #undef it after use.
> 
> That exposes the set/clr functions to the global header file, while
> today those are restricted to the core internals header. There is a
> reason why I did not make them public ....
> 

Oops, I wasn't aware of that... then I guess we should #define and #undef
__irqd_to_state twice, once in include/linux/irq.h and once in
kernel/irq/internals.h, right?

Regards,
Boqun



end of thread, other threads:[~2015-12-15  0:57 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-12  2:55 [RFC 0/3] sparse: Introduce __private to privatize members of structs Boqun Feng
2015-12-12  2:56 ` [RFC 1/3] sparse: Add " Boqun Feng
2015-12-12  2:56 ` [RFC 2/3] RCU: Privatize rcu_node::lock Boqun Feng
2015-12-12  2:56 ` [RFC 3/3] irq: Privatize irq_common_data::state_use_accessors Boqun Feng
2015-12-14  9:50   ` Peter Zijlstra
2015-12-14 10:03     ` Thomas Gleixner
2015-12-14 12:00       ` Boqun Feng
2015-12-14 12:00         ` Boqun Feng
2015-12-14 15:59         ` Thomas Gleixner
2015-12-15  0:56           ` Boqun Feng
