From: Alex Kogan <alex.kogan@oracle.com>
To: kbuild test robot <lkp@intel.com>, linux-sparse@vger.kernel.org
Cc: kbuild-all@lists.01.org, linux@armlinux.org.uk,
	Peter Zijlstra <peterz@infradead.org>,
	mingo@redhat.com, will.deacon@arm.com, arnd@arndb.de,
	longman@redhat.com, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, tglx@linutronix.de, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org, guohanjun@huawei.com,
	jglauber@marvell.com, steven.sistare@oracle.com,
	daniel.m.jordan@oracle.com, dave.dice@oracle.com,
	rahul.x.yadav@oracle.com
Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock
Date: Thu, 14 Nov 2019 15:57:34 -0500	[thread overview]
Message-ID: <58623E4A-973B-46CC-8FA8-29E68DB5EFF4@oracle.com> (raw)
In-Reply-To: <201911110540.8p3UoQAR%lkp@intel.com>

+ linux-sparse mailing list

It seems like a bug in the way sparse handles “pure” functions that return
a pointer.

One of the pure functions in question is defined as follows:
	static inline __pure
	struct mcs_spinlock *grab_mcs_node(struct mcs_spinlock *base, int idx)
	{
        	return &((struct qnode *)base + idx)->mcs;
	}

and the corresponding variable definition and the assignment statement that
produce a warning (in kernel/locking/qspinlock.c) are:
	struct mcs_spinlock *prev, *next, *node;
	…
	node = grab_mcs_node(node, idx);
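
For what it's worth, the pattern seems to boil down to a __pure function
returning a pointer; a minimal standalone sketch of the same shape (made-up
names, not kernel code, untested beyond the reasoning above) would be:

	/* pure_repro.c -- hypothetical minimal example, not kernel code */
	#define __pure __attribute__((pure))	/* mirrors the kernel's __pure */

	struct node {
		struct node *next;
	};

	/* same shape as grab_mcs_node(): a pure function returning a pointer */
	static inline __pure
	struct node *pick_node(struct node *base, int idx)
	{
		return base + idx;
	}

	int main(void)
	{
		struct node nodes[4];
		struct node *n = nodes;

		/* sparse presumably warns here with
		 * "incorrect type in assignment (different modifiers)" */
		n = pick_node(n, 1);

		return n == nodes;
	}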

The issue can be reproduced without my patch with:
	# sparse version: v0.6.1
	make ARCH=x86_64 allmodconfig
	make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' kernel/locking/qspinlock.o
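
If the standalone sketch above is saved as pure_repro.c (again, a made-up
name), the same check can presumably be run on it directly, without a kernel
build:

	sparse pure_repro.c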


The warnings can be eliminated by adding an explicit cast, e.g.:

	node = (struct mcs_spinlock *)grab_mcs_node(node, idx);

but this seems wrong (unnecessary) to me.

Regards,
— Alex

> On Nov 10, 2019, at 4:30 PM, kbuild test robot <lkp@intel.com> wrote:
> 
> Hi Alex,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on linus/master]
> [cannot apply to v5.4-rc6 next-20191108]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see https://stackoverflow.com/a/37406982 ]
> 
> url:    https://github.com/0day-ci/linux/commits/Alex-Kogan/locking-qspinlock-Rename-mcs-lock-unlock-macros-and-make-them-more-generic/20191109-180535
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 0058b0a506e40d9a2c62015fe92eb64a44d78cd9
> reproduce:
>        # apt-get install sparse
>        # sparse version: v0.6.1-21-gb31adac-dirty
>        make ARCH=x86_64 allmodconfig
>        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
> 
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
> 
> 
> sparse warnings: (new ones prefixed by >>)
> 
>   kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *[assigned] node @@    got ct mcs_spinlock *[assigned] node @@
>   kernel/locking/qspinlock.c:450:14: sparse:    expected struct mcs_spinlock *[assigned] node
>   kernel/locking/qspinlock.c:450:14: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *prev @@    got struct struct mcs_spinlock *prev @@
>   kernel/locking/qspinlock.c:498:22: sparse:    expected struct mcs_spinlock *prev
>   kernel/locking/qspinlock.c:498:22: sparse:    got struct mcs_spinlock [pure] *
>>> kernel/locking/qspinlock_cna.h:141:60: sparse: sparse: incorrect type in initializer (different modifiers) @@    expected struct mcs_spinlock *tail_2nd @@    got struct struct mcs_spinlock *tail_2nd @@
>>> kernel/locking/qspinlock_cna.h:141:60: sparse:    expected struct mcs_spinlock *tail_2nd
>>> kernel/locking/qspinlock_cna.h:141:60: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *[assigned] node @@    got ct mcs_spinlock *[assigned] node @@
>   kernel/locking/qspinlock.c:450:14: sparse:    expected struct mcs_spinlock *[assigned] node
>   kernel/locking/qspinlock.c:450:14: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *prev @@    got struct struct mcs_spinlock *prev @@
>   kernel/locking/qspinlock.c:498:22: sparse:    expected struct mcs_spinlock *prev
>   kernel/locking/qspinlock.c:498:22: sparse:    got struct mcs_spinlock [pure] *
>>> kernel/locking/qspinlock_cna.h:107:18: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *tail_2nd @@    got struct struct mcs_spinlock *tail_2nd @@
>   kernel/locking/qspinlock_cna.h:107:18: sparse:    expected struct mcs_spinlock *tail_2nd
>   kernel/locking/qspinlock_cna.h:107:18: sparse:    got struct mcs_spinlock [pure] *
>>> kernel/locking/qspinlock_cna.h:240:61: sparse: sparse: incorrect type in argument 2 (different modifiers) @@    expected struct mcs_spinlock *pred_start @@    got struct struct mcs_spinlock *pred_start @@
>>> kernel/locking/qspinlock_cna.h:240:61: sparse:    expected struct mcs_spinlock *pred_start
>   kernel/locking/qspinlock_cna.h:240:61: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock_cna.h:252:26: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *tail_2nd @@    got struct struct mcs_spinlock *tail_2nd @@
>   kernel/locking/qspinlock_cna.h:252:26: sparse:    expected struct mcs_spinlock *tail_2nd
>   kernel/locking/qspinlock_cna.h:252:26: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *[assigned] node @@    got ct mcs_spinlock *[assigned] node @@
>   kernel/locking/qspinlock.c:450:14: sparse:    expected struct mcs_spinlock *[assigned] node
>   kernel/locking/qspinlock.c:450:14: sparse:    got struct mcs_spinlock [pure] *
>   kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@    expected struct mcs_spinlock *prev @@    got struct struct mcs_spinlock *prev @@
>   kernel/locking/qspinlock.c:498:22: sparse:    expected struct mcs_spinlock *prev
>   kernel/locking/qspinlock.c:498:22: sparse:    got struct mcs_spinlock [pure] *
> 
> vim +141 kernel/locking/qspinlock_cna.h
> 
>    90	
>    91	static inline bool cna_try_change_tail(struct qspinlock *lock, u32 val,
>    92					       struct mcs_spinlock *node)
>    93	{
>    94		struct mcs_spinlock *head_2nd, *tail_2nd;
>    95		u32 new;
>    96	
>    97		/* If the secondary queue is empty, do what MCS does. */
>    98		if (node->locked <= 1)
>    99			return __try_clear_tail(lock, val, node);
>   100	
>   101		/*
>   102		 * Try to update the tail value to the last node in the secondary queue.
>   103		 * If successful, pass the lock to the first thread in the secondary
>   104		 * queue. Doing those two actions effectively moves all nodes from the
>   105		 * secondary queue into the main one.
>   106		 */
>> 107		tail_2nd = decode_tail(node->locked);
>   108		head_2nd = tail_2nd->next;
>   109		new = ((struct cna_node *)tail_2nd)->encoded_tail + _Q_LOCKED_VAL;
>   110	
>   111		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, new)) {
>   112			/*
>   113			 * Try to reset @next in tail_2nd to NULL, but no need to check
>   114			 * the result - if failed, a new successor has updated it.
>   115			 */
>   116			cmpxchg_relaxed(&tail_2nd->next, head_2nd, NULL);
>   117			arch_mcs_pass_lock(&head_2nd->locked, 1);
>   118			return true;
>   119		}
>   120	
>   121		return false;
>   122	}
>   123	
>   124	/*
>   125	 * cna_splice_tail -- splice nodes in the main queue between [first, last]
>   126	 * onto the secondary queue.
>   127	 */
>   128	static void cna_splice_tail(struct mcs_spinlock *node,
>   129				    struct mcs_spinlock *first,
>   130				    struct mcs_spinlock *last)
>   131	{
>   132		/* remove [first,last] */
>   133		node->next = last->next;
>   134	
>   135		/* stick [first,last] on the secondary queue tail */
>   136		if (node->locked <= 1) { /* if secondary queue is empty */
>   137			/* create secondary queue */
>   138			last->next = first;
>   139		} else {
>   140			/* add to the tail of the secondary queue */
>> 141			struct mcs_spinlock *tail_2nd = decode_tail(node->locked);
>   142			struct mcs_spinlock *head_2nd = tail_2nd->next;
>   143	
>   144			tail_2nd->next = first;
>   145			last->next = head_2nd;
>   146		}
>   147	
>   148		node->locked = ((struct cna_node *)last)->encoded_tail;
>   149	}
>   150	
>   151	/*
>   152	 * cna_scan_main_queue - scan the main waiting queue looking for the first
>   153	 * thread running on the same NUMA node as the lock holder. If found (call it
>   154	 * thread T), move all threads in the main queue between the lock holder and
>   155	 * T to the end of the secondary queue and return 0; otherwise, return the
>   156	 * encoded pointer of the last scanned node in the primary queue (so a
>   157	 * subsequent scan can be resumed from that node)
>   158	 *
>   159	 * Schematically, this may look like the following (nn stands for numa_node and
>   160	 * et stands for encoded_tail).
>   161	 *
>   162	 *   when cna_scan_main_queue() is called (the secondary queue is empty):
>   163	 *
>   164	 *  A+------------+   B+--------+   C+--------+   T+--------+
>   165	 *   |mcs:next    | -> |mcs:next| -> |mcs:next| -> |mcs:next| -> NULL
>   166	 *   |mcs:locked=1|    |cna:nn=0|    |cna:nn=2|    |cna:nn=1|
>   167	 *   |cna:nn=1    |    +--------+    +--------+    +--------+
>   168	 *   +----------- +
>   169	 *
>   170	 *   when cna_scan_main_queue() returns (the secondary queue contains B and C):
>   171	 *
>   172	 *  A+----------------+    T+--------+
>   173	 *   |mcs:next        | ->  |mcs:next| -> NULL
>   174	 *   |mcs:locked=C.et | -+  |cna:nn=1|
>   175	 *   |cna:nn=1        |  |  +--------+
>   176	 *   +--------------- +  +-----+
>   177	 *                             \/
>   178	 *          B+--------+   C+--------+
>   179	 *           |mcs:next| -> |mcs:next| -+
>   180	 *           |cna:nn=0|    |cna:nn=2|  |
>   181	 *           +--------+    +--------+  |
>   182	 *               ^                     |
>   183	 *               +---------------------+
>   184	 *
>   185	 * The worst case complexity of the scan is O(n), where n is the number
>   186	 * of current waiters. However, the amortized complexity is close to O(1),
>   187	 * as the immediate successor is likely to be running on the same node once
>   188	 * threads from other nodes are moved to the secondary queue.
>   189	 */
>   190	static u32 cna_scan_main_queue(struct mcs_spinlock *node,
>   191				       struct mcs_spinlock *pred_start)
>   192	{
>   193		struct cna_node *cn = (struct cna_node *)node;
>   194		struct cna_node *cni = (struct cna_node *)READ_ONCE(pred_start->next);
>   195		struct cna_node *last;
>   196		int my_numa_node = cn->numa_node;
>   197	
>   198		/* find any next waiter on 'our' NUMA node */
>   199		for (last = cn;
>   200		     cni && cni->numa_node != my_numa_node;
>   201		     last = cni, cni = (struct cna_node *)READ_ONCE(cni->mcs.next))
>   202			;
>   203	
>   204		/* if found, splice any skipped waiters onto the secondary queue */
>   205		if (cni) {
>   206			if (last != cn)	/* did we skip any waiters? */
>   207				cna_splice_tail(node, node->next,
>   208						(struct mcs_spinlock *)last);
>   209			return 0;
>   210		}
>   211	
>   212		return last->encoded_tail;
>   213	}
>   214	
>   215	__always_inline u32 cna_pre_scan(struct qspinlock *lock,
>   216					  struct mcs_spinlock *node)
>   217	{
>   218		struct cna_node *cn = (struct cna_node *)node;
>   219	
>   220		cn->pre_scan_result = cna_scan_main_queue(node, node);
>   221	
>   222		return 0;
>   223	}
>   224	
>   225	static inline void cna_pass_lock(struct mcs_spinlock *node,
>   226					 struct mcs_spinlock *next)
>   227	{
>   228		struct cna_node *cn = (struct cna_node *)node;
>   229		struct mcs_spinlock *next_holder = next, *tail_2nd;
>   230		u32 val = 1;
>   231	
>   232		u32 scan = cn->pre_scan_result;
>   233	
>   234		/*
>   235		 * check if a successor from the same numa node has not been found in
>   236		 * pre-scan, and if so, try to find it in post-scan starting from the
>   237		 * node where pre-scan stopped (stored in @pre_scan_result)
>   238		 */
>   239		if (scan > 0)
>> 240			scan = cna_scan_main_queue(node, decode_tail(scan));
> 
> ---
> 0-DAY kernel test infrastructure                 Open Source Technology Center
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org                   Intel Corporation


Thread overview: 41+ messages
2019-11-07 17:46 [PATCH v6 0/5] Add NUMA-awareness to qspinlock Alex Kogan
2019-11-07 17:46 ` [PATCH v6 1/5] locking/qspinlock: Rename mcs lock/unlock macros and make them more generic Alex Kogan
2019-11-07 17:46 ` [PATCH v6 2/5] locking/qspinlock: Refactor the qspinlock slow path Alex Kogan
2019-11-07 17:46 ` [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock Alex Kogan
2019-11-10 21:30   ` kbuild test robot
2019-11-14 20:57     ` Alex Kogan [this message]
2019-11-15  0:38       ` Luc Van Oostenryck
2019-11-19 14:48   ` [locking/qspinlock] ad3836e30e: will-it-scale.per_thread_ops 73.5% improvement kernel test robot
2019-11-20 15:16   ` [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock kbuild test robot
2019-11-22 18:28     ` Alex Kogan
2019-11-22 19:29       ` Waiman Long
2019-11-22 19:52         ` Alex Kogan
2019-11-07 17:46 ` [PATCH v6 4/5] locking/qspinlock: Introduce starvation avoidance into CNA Alex Kogan
2019-11-07 17:46 ` [PATCH v6 5/5] locking/qspinlock: Introduce the shuffle reduction optimization " Alex Kogan
