From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <longman@redhat.com>
Cc: Alex Kogan <alex.kogan@oracle.com>,
	linux@armlinux.org.uk, mingo@redhat.com, will.deacon@arm.com,
	arnd@arndb.de, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, tglx@linutronix.de, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org, steven.sistare@oracle.com,
	daniel.m.jordan@oracle.com, dave.dice@oracle.com,
	rahul.x.yadav@oracle.com
Subject: Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock
Date: Tue, 2 Apr 2019 11:43:20 +0200	[thread overview]
Message-ID: <20190402094320.GM11158@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <60a3a2d8-d222-73aa-2df1-64c9d3fa3241@redhat.com>

On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
> On 03/29/2019 11:20 AM, Alex Kogan wrote:
> > +config NUMA_AWARE_SPINLOCKS
> > +	bool "Numa-aware spinlocks"
> > +	depends on NUMA
> > +	default y
> > +	help
> > +	  Introduce NUMA (Non Uniform Memory Access) awareness into
> > +	  the slow path of spinlocks.
> > +
> > +	  The kernel will try to keep the lock on the same node,
> > +	  thus reducing the number of remote cache misses, while
> > +	  trading some of the short term fairness for better performance.
> > +
> > +	  Say N if you want absolute first come first serve fairness.
> > +
> 
> The patch that I am looking for is to have a separate
> numa_queued_spinlock_slowpath() that coexists with
> native_queued_spinlock_slowpath() and
> paravirt_queued_spinlock_slowpath(). At boot time, we select the most
> appropriate one for the system at hand.
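
A minimal sketch of the shape described above (the selector function and the
function-pointer indirection are purely illustrative, not from the patch; the
slowpath names are the ones mentioned in the paragraph above):

  /* default to the native slow path */
  static void (*queued_slowpath)(struct qspinlock *lock, u32 val) =
  	native_queued_spinlock_slowpath;

  void __init select_queued_spinlock_slowpath(void)
  {
  	/* use the NUMA-aware slow path only on multi-node machines */
  	if (IS_ENABLED(CONFIG_NUMA_AWARE_SPINLOCKS) && nr_node_ids > 1)
  		queued_slowpath = numa_queued_spinlock_slowpath;
  	/* a paravirt guest would pick paravirt_queued_spinlock_slowpath() instead */
  }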

Agreed; and until we have static_call, I think we can abuse the paravirt
stuff for this.

By the time we patch the paravirt stuff:

  check_bugs()
    alternative_instructions()
      apply_paravirt()

we should already have enumerated the NODE topology and so nr_node_ids
should be set.

So if we frob pv_ops.lock.queued_spin_lock_slowpath to
numa_queued_spin_lock_slowpath before that, it should all get patched
just right.
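
In code, that frobbing would be roughly the following, run from some
early-boot hook before apply_paravirt() (the hook name here is made up;
numa_queued_spin_lock_slowpath is the proposed CNA slow path):

  /* hypothetical hook, called before alternative_instructions() runs */
  void __init numa_spinlock_init(void)
  {
  	/* only worth doing on machines with more than one node */
  	if (nr_node_ids > 1)
  		pv_ops.lock.queued_spin_lock_slowpath =
  			numa_queued_spin_lock_slowpath;
  }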

That of course means the whole NUMA_AWARE_SPINLOCKS thing depends on
PARAVIRT_SPINLOCKS, which is a bit awkward...

Thread overview: 86+ messages
2019-03-29 15:20 [PATCH v2 0/5] Add NUMA-awareness to qspinlock Alex Kogan
2019-03-29 15:20 ` [PATCH v2 1/5] locking/qspinlock: Make arch_mcs_spin_unlock_contended more generic Alex Kogan
2019-03-29 15:20 ` [PATCH v2 2/5] locking/qspinlock: Refactor the qspinlock slow path Alex Kogan
2019-03-29 15:20 ` [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock Alex Kogan
2019-04-01  9:06   ` Peter Zijlstra
2019-04-01  9:33     ` Peter Zijlstra
2019-04-03 15:53       ` Alex Kogan
2019-04-03 16:10         ` Peter Zijlstra
2019-04-01  9:21   ` Peter Zijlstra
2019-04-01 14:36   ` Waiman Long
2019-04-02  9:43     ` Peter Zijlstra [this message]
2019-04-03 15:39       ` Alex Kogan
2019-04-03 15:48         ` Waiman Long
2019-04-03 16:01         ` Peter Zijlstra
2019-04-04  5:05           ` Juergen Gross
2019-04-04  9:38             ` Peter Zijlstra
2019-04-04 18:03               ` Waiman Long
2019-06-04 23:21           ` Alex Kogan
2019-06-05 20:40             ` Peter Zijlstra
2019-06-06 15:21               ` Alex Kogan
2019-06-06 15:32                 ` Waiman Long
2019-06-06 15:42                   ` Waiman Long
2019-04-03 16:33       ` Waiman Long
2019-04-03 17:16         ` Peter Zijlstra
2019-04-03 17:40           ` Waiman Long
2019-04-04  2:02   ` Hanjun Guo
2019-04-04  3:14     ` Alex Kogan
2019-06-11  4:22   ` liwei (GF)
2019-06-12  4:38     ` Alex Kogan
2019-06-12 15:05       ` Waiman Long
2019-03-29 15:20 ` [PATCH v2 4/5] locking/qspinlock: Introduce starvation avoidance into CNA Alex Kogan
2019-04-02 10:37   ` Peter Zijlstra
2019-04-03 17:06     ` Alex Kogan
2019-03-29 15:20 ` [PATCH v2 5/5] locking/qspinlock: Introduce the shuffle reduction optimization " Alex Kogan
2019-04-01  9:09 ` [PATCH v2 0/5] Add NUMA-awareness to qspinlock Peter Zijlstra
2019-04-03 17:13   ` Alex Kogan
2019-07-03 11:57 ` Jan Glauber
2019-07-03 11:58   ` Jan Glauber
2019-07-12  8:12   ` Hanjun Guo
