* [PATCH] x86: Avoid contention on cpa_lock if possible
@ 2012-04-06 12:40 Ido Yariv
  2012-04-19 22:11 ` [PATCH RESEND] " Ido Yariv
  0 siblings, 1 reply; 11+ messages in thread
From: Ido Yariv @ 2012-04-06 12:40 UTC (permalink / raw)
  To: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Shai Fultheim, Ido Yariv

From: Shai Fultheim <shai@scalemp.com>

Some architectures (e.g. vSMP) do not require cpa to be serialized, for
instance because they guarantee that the most recent TLB entry will
always be used.

To avoid needless contention on cpa_lock, do not lock/unlock it on such
architectures.

Signed-off-by: Shai Fultheim <shai@scalemp.com>
[ido@wizery.com: added should_serialize_cpa, handled potential race, and
                 reworded the commit message]
Signed-off-by: Ido Yariv <ido@wizery.com>
---
 arch/x86/mm/pageattr.c |   36 ++++++++++++++++++++++++++++--------
 1 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e1ebde3..4d606ee 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -501,7 +501,7 @@ out_unlock:
 	return do_split;
 }
 
-static int split_large_page(pte_t *kpte, unsigned long address)
+static int split_large_page(pte_t *kpte, unsigned long address, bool cpa_locked)
 {
 	unsigned long pfn, pfninc = 1;
 	unsigned int i, level;
@@ -509,10 +509,10 @@ static int split_large_page(pte_t *kpte, unsigned long address)
 	pgprot_t ref_prot;
 	struct page *base;
 
-	if (!debug_pagealloc)
+	if (cpa_locked)
 		spin_unlock(&cpa_lock);
 	base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
-	if (!debug_pagealloc)
+	if (cpa_locked)
 		spin_lock(&cpa_lock);
 	if (!base)
 		return -ENOMEM;
@@ -624,7 +624,8 @@ static int __cpa_process_fault(struct cpa_data *cpa, unsigned long vaddr,
 	}
 }
 
-static int __change_page_attr(struct cpa_data *cpa, int primary)
+static int __change_page_attr(struct cpa_data *cpa, int primary,
+			      bool cpa_locked)
 {
 	unsigned long address;
 	int do_split, err;
@@ -693,7 +694,7 @@ repeat:
 	/*
 	 * We have to split the large page:
 	 */
-	err = split_large_page(kpte, address);
+	err = split_large_page(kpte, address, cpa_locked);
 	if (!err) {
 		/*
 	 	 * Do a global flush tlb after splitting the large page
@@ -787,9 +788,20 @@ static int cpa_process_alias(struct cpa_data *cpa)
 	return 0;
 }
 
+static inline bool should_serialize_cpa(void)
+{
+	/*
+	 * Some architectures do not require cpa() to be serialized, for
+	 * instance, by guaranteeing that the most recent TLB entry will be
+	 * used.
+	 */
+	return !debug_pagealloc && !is_vsmp_box();
+}
+
 static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 {
 	int ret, numpages = cpa->numpages;
+	bool cpa_locked = false;
 
 	while (numpages) {
 		/*
@@ -801,10 +813,18 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 		if (cpa->flags & (CPA_ARRAY | CPA_PAGES_ARRAY))
 			cpa->numpages = 1;
 
-		if (!debug_pagealloc)
+		if (should_serialize_cpa()) {
 			spin_lock(&cpa_lock);
-		ret = __change_page_attr(cpa, checkalias);
-		if (!debug_pagealloc)
+			/*
+			 * In order to avoid any race conditions in which
+			 * should_serialize_cpa() returns a different value
+			 * after the lock was acquired, make sure locking is
+			 * consistent and never leave the lock acquired.
+			 */
+			cpa_locked = true;
+		}
+		ret = __change_page_attr(cpa, checkalias, cpa_locked);
+		if (cpa_locked)
 			spin_unlock(&cpa_lock);
 		if (ret)
 			return ret;
-- 
1.7.7.6



* [PATCH RESEND] x86: Avoid contention on cpa_lock if possible
  2012-04-06 12:40 [PATCH] x86: Avoid contention on cpa_lock if possible Ido Yariv
@ 2012-04-19 22:11 ` Ido Yariv
  2012-05-05 23:49   ` Ido Yariv
                     ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Ido Yariv @ 2012-04-19 22:11 UTC (permalink / raw)
  To: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Shai Fultheim, Ido Yariv

From: Shai Fultheim <shai@scalemp.com>

Some architectures (e.g. vSMP) do not require cpa to be serialized, for
instance because they guarantee that the most recent TLB entry will
always be used.

To avoid needless contention on cpa_lock, do not lock/unlock it on such
architectures.

Signed-off-by: Shai Fultheim <shai@scalemp.com>
[ido@wizery.com: added should_serialize_cpa, handled potential race, and
                 reworded the commit message]
Signed-off-by: Ido Yariv <ido@wizery.com>
---
 arch/x86/mm/pageattr.c |   36 ++++++++++++++++++++++++++++--------
 1 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e1ebde3..4d606ee 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -501,7 +501,7 @@ out_unlock:
 	return do_split;
 }
 
-static int split_large_page(pte_t *kpte, unsigned long address)
+static int split_large_page(pte_t *kpte, unsigned long address, bool cpa_locked)
 {
 	unsigned long pfn, pfninc = 1;
 	unsigned int i, level;
@@ -509,10 +509,10 @@ static int split_large_page(pte_t *kpte, unsigned long address)
 	pgprot_t ref_prot;
 	struct page *base;
 
-	if (!debug_pagealloc)
+	if (cpa_locked)
 		spin_unlock(&cpa_lock);
 	base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
-	if (!debug_pagealloc)
+	if (cpa_locked)
 		spin_lock(&cpa_lock);
 	if (!base)
 		return -ENOMEM;
@@ -624,7 +624,8 @@ static int __cpa_process_fault(struct cpa_data *cpa, unsigned long vaddr,
 	}
 }
 
-static int __change_page_attr(struct cpa_data *cpa, int primary)
+static int __change_page_attr(struct cpa_data *cpa, int primary,
+			      bool cpa_locked)
 {
 	unsigned long address;
 	int do_split, err;
@@ -693,7 +694,7 @@ repeat:
 	/*
 	 * We have to split the large page:
 	 */
-	err = split_large_page(kpte, address);
+	err = split_large_page(kpte, address, cpa_locked);
 	if (!err) {
 		/*
 	 	 * Do a global flush tlb after splitting the large page
@@ -787,9 +788,20 @@ static int cpa_process_alias(struct cpa_data *cpa)
 	return 0;
 }
 
+static inline bool should_serialize_cpa(void)
+{
+	/*
+	 * Some architectures do not require cpa() to be serialized, for
+	 * instance, by guaranteeing that the most recent TLB entry will be
+	 * used.
+	 */
+	return !debug_pagealloc && !is_vsmp_box();
+}
+
 static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 {
 	int ret, numpages = cpa->numpages;
+	bool cpa_locked = false;
 
 	while (numpages) {
 		/*
@@ -801,10 +813,18 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 		if (cpa->flags & (CPA_ARRAY | CPA_PAGES_ARRAY))
 			cpa->numpages = 1;
 
-		if (!debug_pagealloc)
+		if (should_serialize_cpa()) {
 			spin_lock(&cpa_lock);
-		ret = __change_page_attr(cpa, checkalias);
-		if (!debug_pagealloc)
+			/*
+			 * In order to avoid any race conditions in which
+			 * should_serialize_cpa() returns a different value
+			 * after the lock was acquired, make sure locking is
+			 * consistent and never leave the lock acquired.
+			 */
+			cpa_locked = true;
+		}
+		ret = __change_page_attr(cpa, checkalias, cpa_locked);
+		if (cpa_locked)
 			spin_unlock(&cpa_lock);
 		if (ret)
 			return ret;
-- 
1.7.7.6



* Re: [PATCH RESEND] x86: Avoid contention on cpa_lock if possible
  2012-04-19 22:11 ` [PATCH RESEND] " Ido Yariv
@ 2012-05-05 23:49   ` Ido Yariv
  2012-06-02 18:56   ` Ido Yariv
  2012-06-06 16:18   ` [tip:x86/mm] x86/pat: " tip-bot for Shai Fultheim
  2 siblings, 0 replies; 11+ messages in thread
From: Ido Yariv @ 2012-05-05 23:49 UTC (permalink / raw)
  To: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Shai Fultheim, Ido Yariv

On Fri, Apr 20, 2012 at 1:11 AM, Ido Yariv <ido@wizery.com> wrote:
> From: Shai Fultheim <shai@scalemp.com>
>
> Some architectures (e.g. vSMP) do not require cpa to be serialized, for
> instance because they guarantee that the most recent TLB entry will
> always be used.
>
> To avoid needless contention on cpa_lock, do not lock/unlock it on such
> architectures.
>
> Signed-off-by: Shai Fultheim <shai@scalemp.com>
> [ido@wizery.com: added should_serialize_cpa, handled potential race, and
>                 reworded the commit message]
> Signed-off-by: Ido Yariv <ido@wizery.com>
> ---

Ping?


* Re: [PATCH RESEND] x86: Avoid contention on cpa_lock if possible
  2012-04-19 22:11 ` [PATCH RESEND] " Ido Yariv
  2012-05-05 23:49   ` Ido Yariv
@ 2012-06-02 18:56   ` Ido Yariv
  2012-06-06 16:18   ` [tip:x86/mm] x86/pat: " tip-bot for Shai Fultheim
  2 siblings, 0 replies; 11+ messages in thread
From: Ido Yariv @ 2012-06-02 18:56 UTC (permalink / raw)
  To: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin; +Cc: Shai Fultheim

On Fri, Apr 20, 2012 at 01:11:32AM +0300, Ido Yariv wrote:
> From: Shai Fultheim <shai@scalemp.com>
> 
> Some architectures (e.g. vSMP) do not require cpa to be serialized, for
> instance because they guarantee that the most recent TLB entry will
> always be used.
> 
> To avoid needless contention on cpa_lock, do not lock/unlock it on such
> architectures.
> 
> Signed-off-by: Shai Fultheim <shai@scalemp.com>
> [ido@wizery.com: added should_serialize_cpa, handled potential race, and
>                  reworded the commit message]
> Signed-off-by: Ido Yariv <ido@wizery.com>
> ---

Ping?

Thanks,
Ido.


* [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-04-19 22:11 ` [PATCH RESEND] " Ido Yariv
  2012-05-05 23:49   ` Ido Yariv
  2012-06-02 18:56   ` Ido Yariv
@ 2012-06-06 16:18   ` tip-bot for Shai Fultheim
  2012-06-06 17:24     ` Peter Zijlstra
  2 siblings, 1 reply; 11+ messages in thread
From: tip-bot for Shai Fultheim @ 2012-06-06 16:18 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, torvalds, shai, akpm, tglx, ido

Commit-ID:  d2956406fb61e15ad2b47eda823288db4996acf1
Gitweb:     http://git.kernel.org/tip/d2956406fb61e15ad2b47eda823288db4996acf1
Author:     Shai Fultheim <shai@scalemp.com>
AuthorDate: Fri, 20 Apr 2012 01:11:32 +0300
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Jun 2012 17:42:59 +0200

x86/pat: Avoid contention on cpa_lock if possible

Some architectures (e.g. vSMP) do not require cpa to be
serialized, for instance because they guarantee that the most
recent TLB entry will always be used.

To avoid needless contention on cpa_lock, do not lock/unlock it
on such architectures.

Signed-off-by: Shai Fultheim <shai@scalemp.com>
[ Added should_serialize_cpa, handled potential race, and reworded the commit message ]
Signed-off-by: Ido Yariv <ido@wizery.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1334873492-31255-1-git-send-email-ido@wizery.com
[ I absolutely hate these locking patterns ... yet I have no better idea. Maybe the gents on Cc: ... ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/mm/pageattr.c |   36 ++++++++++++++++++++++++++++--------
 1 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index e1ebde3..4d606ee 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -501,7 +501,7 @@ out_unlock:
 	return do_split;
 }
 
-static int split_large_page(pte_t *kpte, unsigned long address)
+static int split_large_page(pte_t *kpte, unsigned long address, bool cpa_locked)
 {
 	unsigned long pfn, pfninc = 1;
 	unsigned int i, level;
@@ -509,10 +509,10 @@ static int split_large_page(pte_t *kpte, unsigned long address)
 	pgprot_t ref_prot;
 	struct page *base;
 
-	if (!debug_pagealloc)
+	if (cpa_locked)
 		spin_unlock(&cpa_lock);
 	base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
-	if (!debug_pagealloc)
+	if (cpa_locked)
 		spin_lock(&cpa_lock);
 	if (!base)
 		return -ENOMEM;
@@ -624,7 +624,8 @@ static int __cpa_process_fault(struct cpa_data *cpa, unsigned long vaddr,
 	}
 }
 
-static int __change_page_attr(struct cpa_data *cpa, int primary)
+static int __change_page_attr(struct cpa_data *cpa, int primary,
+			      bool cpa_locked)
 {
 	unsigned long address;
 	int do_split, err;
@@ -693,7 +694,7 @@ repeat:
 	/*
 	 * We have to split the large page:
 	 */
-	err = split_large_page(kpte, address);
+	err = split_large_page(kpte, address, cpa_locked);
 	if (!err) {
 		/*
 	 	 * Do a global flush tlb after splitting the large page
@@ -787,9 +788,20 @@ static int cpa_process_alias(struct cpa_data *cpa)
 	return 0;
 }
 
+static inline bool should_serialize_cpa(void)
+{
+	/*
+	 * Some architectures do not require cpa() to be serialized, for
+	 * instance, by guaranteeing that the most recent TLB entry will be
+	 * used.
+	 */
+	return !debug_pagealloc && !is_vsmp_box();
+}
+
 static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 {
 	int ret, numpages = cpa->numpages;
+	bool cpa_locked = false;
 
 	while (numpages) {
 		/*
@@ -801,10 +813,18 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 		if (cpa->flags & (CPA_ARRAY | CPA_PAGES_ARRAY))
 			cpa->numpages = 1;
 
-		if (!debug_pagealloc)
+		if (should_serialize_cpa()) {
 			spin_lock(&cpa_lock);
-		ret = __change_page_attr(cpa, checkalias);
-		if (!debug_pagealloc)
+			/*
+			 * In order to avoid any race conditions in which
+			 * should_serialize_cpa() returns a different value
+			 * after the lock was acquired, make sure locking is
+			 * consistent and never leave the lock acquired.
+			 */
+			cpa_locked = true;
+		}
+		ret = __change_page_attr(cpa, checkalias, cpa_locked);
+		if (cpa_locked)
 			spin_unlock(&cpa_lock);
 		if (ret)
 			return ret;


* Re: [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-06-06 16:18   ` [tip:x86/mm] x86/pat: " tip-bot for Shai Fultheim
@ 2012-06-06 17:24     ` Peter Zijlstra
  2012-06-06 17:41       ` Ingo Molnar
                         ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Peter Zijlstra @ 2012-06-06 17:24 UTC (permalink / raw)
  To: mingo, hpa, linux-kernel, torvalds, shai, akpm, tglx, ido
  Cc: linux-tip-commits

On Wed, 2012-06-06 at 09:18 -0700, tip-bot for Shai Fultheim wrote:

> [ I absolutely hate these locking patterns ... yet I have no better idea. Maybe the gents on Cc: ... ]
> Signed-off-by: Ingo Molnar <mingo@kernel.org>

Oh yuck, this is vile..

static struct static_key scale_mp_trainwreck = STATIC_KEY_INIT_FALSE;

static DEFINE_SPINLOCK(_cpa_lock);

static inline void cpa_lock(void)
{
	if (static_key_false(&scale_mp_trainwreck))
		return;

	spin_lock(&_cpa_lock);
}

static inline void cpa_unlock(void)
{
	if (static_key_false(&scale_mp_trainwreck))
		return;

	spin_unlock(&_cpa_lock);
}

And then use cpa_{,un}lock(), and the scale-mp guys can
static_key_slow_inc(&scale_mp_trainwreck).

[ and yes I hate those jump_label names ... but I'm not wanting
  to go through another round of bike-shed painting. ]
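
[ For illustration only: a minimal sketch of the vSMP-side enablement
  implied above.  The function name, hook placement and the plumbing
  needed to make the key visible outside pageattr.c are assumptions,
  not something posted in this thread. ]

/* pageattr.c, per the sketch above (no longer static if set elsewhere): */
struct static_key scale_mp_trainwreck = STATIC_KEY_INIT_FALSE;

/* vSMP platform init, e.g. somewhere in vsmp_64.c (exact hook assumed): */
static void __init vsmp_skip_cpa_lock(void)
{
	if (is_vsmp_box())
		static_key_slow_inc(&scale_mp_trainwreck);
}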


* Re: [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-06-06 17:24     ` Peter Zijlstra
@ 2012-06-06 17:41       ` Ingo Molnar
  2012-06-06 18:58       ` H. Peter Anvin
  2012-06-06 22:18       ` Shai Fultheim (Shai@ScaleMP.com)
  2 siblings, 0 replies; 11+ messages in thread
From: Ingo Molnar @ 2012-06-06 17:41 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: hpa, linux-kernel, torvalds, shai, akpm, tglx, ido, linux-tip-commits


* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:

> On Wed, 2012-06-06 at 09:18 -0700, tip-bot for Shai Fultheim wrote:
> 
> > [ I absolutely hate these locking patterns ... yet I have no better idea. Maybe the gents on Cc: ... ]
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> Oh yuck, this is vile..
> 
> static struct static_key scale_mp_trainwreck = STATIC_KEY_INIT_FALSE;
> 
> static DEFINE_SPINLOCK(_cpa_lock);
> 
> static inline void cpa_lock(void)
> {
> 	if (static_key_false(&scale_mp_trainwreck))
> 		return;
> 
> 	spin_lock(&_cpa_lock);
> }
> 
> static inline void cpa_unlock(void)
> {
> 	if (static_key_false(&scale_mp_trainwreck))
> 		return;
> 
> 	spin_unlock(&_cpa_lock);
> }
> 
> And then use cpa_{,un}lock(), and the scale-mp guys can
> static_key_slow_inc(&scale_mp_trainwreck).
> 
> [ and yes I hate those jump_label names ... but I'm not wanting
>   to go through another round of bike-shed painting. ]

ok.

Another problem this patch has is inadequate testing:

arch/x86/mm/pageattr.c:798:2: error: implicit declaration of 
function ‘is_vsmp_box’ [-Werror=implicit-function-declaration]

So I'm removing it from tip:x86/mm for now.

Thanks,

	Ingo


* Re: [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-06-06 17:24     ` Peter Zijlstra
  2012-06-06 17:41       ` Ingo Molnar
@ 2012-06-06 18:58       ` H. Peter Anvin
  2012-06-06 22:18       ` Shai Fultheim (Shai@ScaleMP.com)
  2 siblings, 0 replies; 11+ messages in thread
From: H. Peter Anvin @ 2012-06-06 18:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, linux-kernel, torvalds, shai, akpm, tglx, ido, linux-tip-commits

On 06/06/2012 10:24 AM, Peter Zijlstra wrote:
> On Wed, 2012-06-06 at 09:18 -0700, tip-bot for Shai Fultheim wrote:
> 
>> [ I absolutely hate these locking patterns ... yet I have no better idea. Maybe the gents on Cc: ... ]
>> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> Oh yuck, this is vile..
> 
> static struct static_key scale_mp_trainwreck = STATIC_KEY_INIT_FALSE;
> 
> static DEFINE_SPINLOCK(_cpa_lock);
> 
> static inline void cpa_lock(void)
> {
> 	if (static_key_false(&scale_mp_trainwreck))
> 		return;
> 
> 	spin_lock(&_cpa_lock);
> }
> 
> static inline void cpa_unlock(void)
> {
> 	if (static_key_false(&scale_mp_trainwreck))
> 		return;
> 
> 	spin_unlock(&_cpa_lock);
> }
> 
> And then use cpa_{,un}lock(), and the scale-mp guys can
> static_key_slow_inc(&scale_mp_trainwreck).
> 

Actually, for this particular subcase I would use a synthetic CPUID bit
and use static_cpu_has().
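
[ For illustration, a rough sketch of that approach.  The feature name,
  its bit position in the Linux-defined capability word and the init-time
  hook are assumptions, not something posted in this thread. ]

/* Hypothetical synthetic bit, e.g. in <asm/cpufeature.h>: */
#define X86_FEATURE_CPA_NO_LOCK	(3*32+31) /* cpa() needs no serialization */

/*
 * The platform (vSMP here) would force the bit on during early setup;
 * pageattr.c would then test it with static_cpu_has():
 */
static inline void cpa_lock(void)
{
	if (static_cpu_has(X86_FEATURE_CPA_NO_LOCK))
		return;
	spin_lock(&_cpa_lock);
}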

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



* RE: [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-06-06 17:24     ` Peter Zijlstra
  2012-06-06 17:41       ` Ingo Molnar
  2012-06-06 18:58       ` H. Peter Anvin
@ 2012-06-06 22:18       ` Shai Fultheim (Shai@ScaleMP.com)
  2012-06-06 23:05         ` Ido Yariv
  2 siblings, 1 reply; 11+ messages in thread
From: Shai Fultheim (Shai@ScaleMP.com) @ 2012-06-06 22:18 UTC (permalink / raw)
  To: Peter Zijlstra, mingo, hpa, linux-kernel, torvalds, akpm, tglx, ido
  Cc: linux-tip-commits

Peter Zijlstra [mailto:a.p.zijlstra@chello.nl] wrote
> On Wed, 2012-06-06 at 09:18 -0700, tip-bot for Shai Fultheim wrote:
> 
> > [ I absolutely hate these locking patterns ... yet I have no better idea.  Maybe the gents on Cc: ... ]
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> Oh yuck, this is vile..
> 
> static struct static_key scale_mp_trainwreck = STATIC_KEY_INIT_FALSE;
> 
> static DEFINE_SPINLOCK(_cpa_lock);
> 
> static inline void cpa_lock(void)
> {
> 	if (static_key_false(&scale_mp_trainwreck))
> 		return;
> 
> 	spin_lock(&_cpa_lock);
> }
> 
> static inline void cpa_unlock(void)
> {
> 	if (static_key_false(&scale_mp_trainwreck))
> 		return;
> 
> 	spin_unlock(&_cpa_lock);
> }
> 
> And then use cpa_{,un}lock(), and the scale-mp guys can
> static_key_slow_inc(&scale_mp_trainwreck).
> 
> [ and yes I hate those jump_label names ... but I'm not wanting
>   to go through another round of bike-shed painting. ]

Looks pretty straightforward to do.
We will try this route, as I'm concerned that a synthetic CPUID bit would be kind of a global change for a pretty local consideration.

Comments?

(and we will also fix the other error pointed out by Ingo - we are missing an include in this patch)
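
For reference, the undeclared symbol is is_vsmp_box(); assuming the
declaration in <asm/setup.h> is the relevant one, the fix would presumably
be just an extra include in pageattr.c:

	#include <asm/setup.h>	/* for is_vsmp_box(); assumed, not the posted fix */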

Regards,
Shai.


* Re: [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-06-06 22:18       ` Shai Fultheim (Shai@ScaleMP.com)
@ 2012-06-06 23:05         ` Ido Yariv
  2012-06-06 23:07           ` Shai Fultheim (Shai@ScaleMP.com)
  0 siblings, 1 reply; 11+ messages in thread
From: Ido Yariv @ 2012-06-06 23:05 UTC (permalink / raw)
  To: Shai Fultheim (Shai@ScaleMP.com)
  Cc: Peter Zijlstra, mingo, hpa, linux-kernel, torvalds, akpm, tglx,
	linux-tip-commits

Hi,

On Wed, Jun 06, 2012 at 06:18:12PM -0400, Shai Fultheim (Shai@ScaleMP.com) wrote:
> Peter Zijlstra [mailto:a.p.zijlstra@chello.nl] wrote
> > On Wed, 2012-06-06 at 09:18 -0700, tip-bot for Shai Fultheim wrote:
> > 
> > > [ I absolutely hate these locking patterns ... yet I have no better idea.  Maybe the gents on Cc: ... ]
> > > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > 
> > Oh yuck, this is vile..
> > 
> > static struct static_key scale_mp_trainwreck = STATIC_KEY_INIT_FALSE;
> > 
> > static DEFINE_SPINLOCK(_cpa_lock);
> > 
> > static inline void cpa_lock(void)
> > {
> > 	if (static_key_false(&scale_mp_trainwreck))
> > 		return;
> > 
> > 	spin_lock(&_cpa_lock);
> > }
> > 
> > static inline void cpa_unlock(void)
> > {
> > 	if (static_key_false(&scale_mp_trainwreck))
> > 		return;
> > 
> > 	spin_unlock(&_cpa_lock);
> > }
> > 
> > And then use cpa_{,un}lock(), and the scale-mp guys can
> > static_key_slow_inc(&scale_mp_trainwreck).
> > 
> > [ and yes I hate those jump_label names ... but I'm not wanting
> >   to go through another round of bike-shed painting. ]
> 
> Looks pretty straight forward to do.  
> We will try this route, as I'm concerned that synthetic CPUID bit will be kind of a global change for a pretty local consideration.
> 
> Comments?

The synthetic CPUID bit approach has the advantage that the bit can be
set independently from the platform initialization code (in this case,
vsmp_64.c), instead of calling platform-specific functions such as
is_vsmp_box() from pageattr.c or exporting the static_key variable.
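
(A minimal sketch of what the vsmp_64.c side might then look like; the
function name, the hook and the feature-bit name below are illustrative
assumptions only:)

static void __init vsmp_cap_cpus(void)
{
	/* Tell cpa() that this platform needs no serialization. */
	setup_force_cpu_cap(X86_FEATURE_CPA_NO_LOCK);	/* assumed bit name */
}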

We'll give this a shot and send a revised patch after testing it.

Sounds good?

Thanks,
Ido.


* RE: [tip:x86/mm] x86/pat: Avoid contention on cpa_lock if possible
  2012-06-06 23:05         ` Ido Yariv
@ 2012-06-06 23:07           ` Shai Fultheim (Shai@ScaleMP.com)
  0 siblings, 0 replies; 11+ messages in thread
From: Shai Fultheim (Shai@ScaleMP.com) @ 2012-06-06 23:07 UTC (permalink / raw)
  To: Ido Yariv
  Cc: Peter Zijlstra, mingo, hpa, linux-kernel, torvalds, akpm, tglx,
	linux-tip-commits

Ido Yariv [mailto:ido@wizery.com] wrote:

> The synthetic CPUID bit approach has the advantage that the bit can be
> set independently from the platform initialization code (in this case,
> vsmp_64.c), instead of calling platform-specific functions such as
> is_vsmp_box() from pageattr.c or exporting the static_key variable.
> 
> We'll give this a shot and send a revised patch after testing it.
> 
> Sounds good?

I can support that.  Let's get this ready.


