linux-mm.kvack.org archive mirror
* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
       [not found] ` <1f5d9380418ad8bb90c6bbdac34716c650b917a0.camel@linux.ibm.com>
@ 2019-09-20 21:24   ` John Hubbard
  2019-09-23 20:51   ` John Hubbard
  1 sibling, 0 replies; 19+ messages in thread
From: John Hubbard @ 2019-09-20 21:24 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

On 9/20/19 1:12 PM, Leonardo Bras wrote:
> If a process (qemu) with a lot of CPUs (128) tries to munmap() a large
> chunk of memory (496GB) mapped with THP, it takes an average of 275
> seconds, which can cause a lot of problems for the workload (in the
> qemu case, the guest will freeze for that long).
> 
> Trying to find the source of this bug, I found out that most of this
> time is spent in serialize_against_pte_lookup(). This function takes a
> lot of time in smp_call_function_many() if there is more than a couple
> of CPUs running the user process. Since it has to happen for every
> mapped THP, it takes a very long time for large amounts of memory.
> 
> According to the docs, serialize_against_pte_lookup() is needed in
> order to prevent the pmd_t to pte_t casting inside
> find_current_mm_pte(), or any other lockless pagetable walk, from
> happening concurrently with THP splitting/collapsing.
> 
> It does so by calling do_nothing() on each CPU in mm->cpu_bitmap[],
> which can only run after interrupts are re-enabled there. Since
> interrupts are (usually) disabled during a lockless pagetable walk,
> and serialize_against_pte_lookup() only returns after they are
> re-enabled, the walk is protected.
> 
> So, as far as I can tell, if there is no lockless pagetable walk
> running, there is no need to call serialize_against_pte_lookup().
> 
> So, to avoid the cost of running serialize_against_pte_lookup(), I
> propose a counter that keeps track of how many find_current_mm_pte()
> calls are currently running, and if there are none, just skips
> smp_call_function_many().

Just noticed that this really should also include linux-mm; maybe
it's best to repost the patchset with that list included?

In particular, there is likely to be some feedback about adding more
calls, in addition to local_irq_disable/enable, around the gup_fast() path,
separately from my questions about the synchronization cases in ppc.

thanks,
-- 
John Hubbard
NVIDIA

> 
> The related functions are:
> start_lockless_pgtbl_walk(mm)
> 	Insert before starting any lockless pgtable walk
> end_lockless_pgtbl_walk(mm)
> 	Insert after the end of any lockless pgtable walk
> 	(Mostly after the ptep is last used)
> running_lockless_pgtbl_walk(mm)
> 	Returns the number of lockless pgtable walks running
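
For concreteness, the intended bracketing, as the mm/gup patch later in
this series applies it, looks roughly like this (a sketch only;
do_lockless_walk() is a hypothetical wrapper made up for illustration):

static int do_lockless_walk(unsigned long addr, unsigned long end,
			    unsigned int gup_flags, struct page **pages)
{
	struct mm_struct *mm = current->mm;
	int nr = 0;

	start_lockless_pgtbl_walk(mm);	/* atomic_inc of a per-mm counter */
	local_irq_disable();
	gup_pgd_range(addr, end, gup_flags, pages, &nr);
	local_irq_enable();
	end_lockless_pgtbl_walk(mm);	/* atomic_dec of the same counter */

	return nr;
}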
> 
> 
> On my workload (qemu), I could see munmap's time reduction from 275
> seconds to 418ms.
> 
>> Leonardo Bras (11):
>>   powerpc/mm: Adds counting method to monitor lockless pgtable walks
>>   asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
>>     walks
>>   mm/gup: Applies counting method to monitor gup_pgd_range
>>   powerpc/mce_power: Applies counting method to monitor lockless pgtbl
>>     walks
>>   powerpc/perf: Applies counting method to monitor lockless pgtbl walks
>>   powerpc/mm/book3s64/hash: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl
>>     walks
>>   powerpc/kvm/book3s_hv: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/kvm/book3s_64: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/book3s_64: Enables counting method to monitor lockless pgtbl
>>     walk
>>   powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
>>
>>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>>  arch/powerpc/perf/callchain.c                |  5 ++++-
>>  include/asm-generic/pgtable.h                |  9 +++++++++
>>  mm/gup.c                                     |  4 ++++
>>  16 files changed, 108 insertions(+), 8 deletions(-)
>>



* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
       [not found]     ` <24863d8904c6e05e5dd48cab57db4274675ae654.camel@linux.ibm.com>
@ 2019-09-21  0:48       ` John Hubbard
  2019-09-23 17:25         ` Leonardo Bras
  0 siblings, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-21  0:48 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skip the slow part of serialize_against_pte_lookup() if there is no
>>> lockless pagetable walk running.
>>>
>>> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
>>> ---
>>>  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>> index 13239b17a22c..41ca30269fa3 100644
>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>>>  void serialize_against_pte_lookup(struct mm_struct *mm)
>>>  {
>>>  	smp_mb();
>>> -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>> +	if (running_lockless_pgtbl_walk(mm))
>>> +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>
>> Hi,
>>
>> If you do this, then you are left without any synchronization. So it will
>> have race conditions: a page table walk could begin right after the above
>> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
>> will continue on right away, under the false assumption that it has let
>> all the current page table walks complete.
>>
>> The current code uses either interrupts or RCU to synchronize, and in
>> either case, you end up scheduling something on each CPU. If you remove
>> that entirely, I don't see anything left. ("Pure" atomic counting is not
>> a synchronization technique all by itself.)
>>
>> thanks,
> 
> Hello John,
> Thanks for the fast feedback.
> 
> See, before calling serialize_against_pte_lookup(), there is always an
> update or clear on the pmd. So, if a page table walk begins right after
> the check returns "false", there is no problem, since it will use the
> updated pmd.
> 
> Think about serialize on a process with a bunch of cpus: after you
> check the last processor (the wait part), there is no guarantee that
> the first one is not starting a new lockless pagetable walk.
> 
> The same mechanism protects both methods.
> 
> Does it make sense?
> 

Yes, after working through this with Mark Hairgrove, I think I finally 
realize that the new code will allow existing gup_fast() readers to
drain before proceeding. So it technically works (ignoring questions
such as whether this approach is desirable vs., for example, batching
the THP updates), I agree.

(And please ignore my other response that asked if the counting was 
helping at all--I see that it does.)

However, Mark pointed out a pre-existing question, which neither of us
could figure out: are the irq disable/enable calls effective, given that
they are (apparently) not memory barriers? 

Given this claim from Documentation/memory-barriers.txt:

INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.

...and given both the preexisting code, and the code you've added:

mm/gup.c:

	atomic_inc(readers); /* new code */
	local_irq_disable();
	gup_pgd_range();
		...read page tables
	local_irq_enable();
	atomic_dec(readers); /* new code */
 
...if the page table reads are allowed to speculatively happen *outside*
of the irq enable/disable calls (which could happen if there are no run-time
memory barriers in the above code), then nothing works anymore. 

So it seems that full memory barriers (not just compiler barriers) are required.
If the irq enable/disable somehow provides that, then your new code just goes
along for the ride and Just Works. (You don't have any memory barriers in
start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
barriers provided by the atomic inc/dec.)

So it's really a pre-existing question about the correctness of the gup_fast()
irq disabling approach.
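
(To illustrate the point, here is one shape such barriers could take;
smp_mb__after_atomic()/smp_mb__before_atomic() are the usual kernel
primitives for ordering around non-value-returning atomics, but whether
this, or something pre-existing, provides the ordering is exactly the
open question:

	atomic_inc(readers);
	smp_mb__after_atomic();  /* order the inc before the page table reads */
	local_irq_disable();
	gup_pgd_range();
		...read page tables
	local_irq_enable();
	smp_mb__before_atomic(); /* order the page table reads before the dec */
	atomic_dec(readers);
)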

+CC linux-mm


thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-21  0:48       ` [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing John Hubbard
@ 2019-09-23 17:25         ` Leonardo Bras
  2019-09-23 18:14           ` John Hubbard
  0 siblings, 1 reply; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 17:25 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal


On Fri, 2019-09-20 at 17:48 -0700, John Hubbard wrote:
> 
[...]
> So it seems that full memory barriers (not just compiler barriers) are required.
> If the irq enable/disable somehow provides that, then your new code just goes
> along for the ride and Just Works. (You don't have any memory barriers in
> start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
> barriers provided by the atomic inc/dec.)
> 
> So it's really a pre-existing question about the correctness of the gup_fast()
> irq disabling approach.

I am not experienced in other archs, and I am still pretty new to
Power, but from what I could understand, this behavior is best
explained by serialize_against_pte_lookup().

What happens here is that, before doing a THP split/collapse, the
function updates the pmd and then calls serialize_against_pte_lookup(),
in order to avoid invalid output from a lockless pagetable walk.

Serialize basically runs do_nothing() on every cpu related to the
process, and waits for it to return.

Running it depends on interrupts being enabled, so disabling them
before gup_pgd_range() and re-enabling them after the end makes the THP
split/collapse wait for gup_pgd_range() completion on every cpu before
continuing. (This is where the serialization happens.)

(As said before, every gup_pgd_range() that starts after it uses the
updated pmd, so there is no problem.)

I suppose other archs have a similar mechanism using
local_irq_{disable,enable}.

Does that answer your questions?

Best regards,

Leonardo Bras



* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 17:25         ` Leonardo Bras
@ 2019-09-23 18:14           ` John Hubbard
  2019-09-23 19:40             ` Leonardo Bras
  0 siblings, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-23 18:14 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/23/19 10:25 AM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 17:48 -0700, John Hubbard wrote:
>>
> [...]
>> So it seems that full memory barriers (not just compiler barriers) are required.
>> If the irq enable/disable somehow provides that, then your new code just goes
>> along for the ride and Just Works. (You don't have any memory barriers in
>> start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
>> barriers provided by the atomic inc/dec.)
>>
>> So it's really a pre-existing question about the correctness of the gup_fast()
>> irq disabling approach.
> 
> I am not experienced in other archs, and I am still pretty new to
> Power, but from what I could understand, this behavior is best
> explained by serialize_against_pte_lookup().
> 
> What happens here is that, before doing a THP split/collapse, the
> function updates the pmd and then calls serialize_against_pte_lookup(),
> in order to avoid invalid output from a lockless pagetable walk.
> 
> Serialize basically runs do_nothing() on every cpu related to the
> process, and waits for it to return.
> 
> Running it depends on interrupts being enabled, so disabling them
> before gup_pgd_range() and re-enabling them after the end makes the THP
> split/collapse wait for gup_pgd_range() completion on every cpu before
> continuing. (This is where the serialization happens.)
> 

That part is all fine, but there are no run-time memory barriers in the 
atomic_inc() and atomic_dec() additions, which means that this is not
safe, because memory operations on CPU 1 can be reordered. It's safe
as shown *if* there are memory barriers to keep the order as shown:

CPU 0                            CPU 1
------                         --------------
                               atomic_inc(val) (no run-time memory barrier!)
pmd_clear(pte)
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() (also not a mem barrier)

                               READ(pte)
                               if(pte)
                                  walk page tables

                               local_irq_enable() (still not a barrier)
                               atomic_dec(val)

free(pte)

thanks,
-- 
John Hubbard
NVIDIA

> (As said before, every gup_pgd_range() that starts after it uses the
> updated pmd, so there is no problem.)
> 
> I suppose other archs have a similar mechanism using
> local_irq_{disable,enable}.
> 
> Does that answer your questions?
> 
> Best regards,
> 
> Leonardo Bras
> 



* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 18:14           ` John Hubbard
@ 2019-09-23 19:40             ` Leonardo Bras
  2019-09-23 19:58               ` John Hubbard
  0 siblings, 1 reply; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 19:40 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Arnd Bergmann, Richard Fontana, Greg Kroah-Hartman, YueHaibing,
	Nicholas Piggin, Mike Rapoport, Keith Busch, Jason Gunthorpe,
	Paul Mackerras, Aneesh Kumar K.V, Allison Randal,
	Mahesh Salgaonkar, Ganesh Goudar, Thomas Gleixner, Ira Weiny,
	Andrew Morton, Dan Williams


On Mon, 2019-09-23 at 11:14 -0700, John Hubbard wrote:
> On 9/23/19 10:25 AM, Leonardo Bras wrote:
> [...]
> That part is all fine, but there are no run-time memory barriers in the 
> atomic_inc() and atomic_dec() additions, which means that this is not
> safe, because memory operations on CPU 1 can be reordered. It's safe
> as shown *if* there are memory barriers to keep the order as shown:
> 
> CPU 0                            CPU 1
> ------                         --------------
>                                atomic_inc(val) (no run-time memory barrier!)
> pmd_clear(pte)
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() (also not a mem barrier)
> 
>                                READ(pte)
>                                if(pte)
>                                   walk page tables
> 
>                                local_irq_enable() (still not a barrier)
>                                atomic_dec(val)
> 
> free(pte)
> 
> thanks,

This is serialize:

void serialize_against_pte_lookup(struct mm_struct *mm)
{
	smp_mb();
	if (running_lockless_pgtbl_walk(mm))
		smp_call_function_many(mm_cpumask(mm), do_nothing,
				       NULL, 1);
}

That would mean:

CPU 0                            CPU 1
------                         --------------
                               atomic_inc(val) 
pmd_clear(pte)
smp_mb()
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() 

                               READ(pte)
                               if(pte)
                                  walk page tables

                               local_irq_enable() (still not a barrier)
                               atomic_dec(val)

By https://www.kernel.org/doc/Documentation/memory-barriers.txt :
'If you need all the CPUs to see a given store at the same time, use
smp_mb().'

Is it not enough? 
Do you suggest adding 'smp_mb()' after atomic_{inc,dec} ?



* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 19:40             ` Leonardo Bras
@ 2019-09-23 19:58               ` John Hubbard
  2019-09-23 20:23                 ` Leonardo Bras
  0 siblings, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-23 19:58 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Arnd Bergmann, Richard Fontana, Greg Kroah-Hartman, YueHaibing,
	Nicholas Piggin, Mike Rapoport, Keith Busch, Jason Gunthorpe,
	Paul Mackerras, Aneesh Kumar K.V, Allison Randal,
	Mahesh Salgaonkar, Ganesh Goudar, Thomas Gleixner, Ira Weiny,
	Andrew Morton, Dan Williams

On 9/23/19 12:40 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 11:14 -0700, John Hubbard wrote:
>> On 9/23/19 10:25 AM, Leonardo Bras wrote:
>> [...]
>> That part is all fine, but there are no run-time memory barriers in the 
>> atomic_inc() and atomic_dec() additions, which means that this is not
>> safe, because memory operations on CPU 1 can be reordered. It's safe
>> as shown *if* there are memory barriers to keep the order as shown:
>>
>> CPU 0                            CPU 1
>> ------                         --------------
>>                                atomic_inc(val) (no run-time memory barrier!)
>> pmd_clear(pte)
>> if (val)
>>     run_on_all_cpus(): IPI
>>                                local_irq_disable() (also not a mem barrier)
>>
>>                                READ(pte)
>>                                if(pte)
>>                                   walk page tables
>>
>>                                local_irq_enable() (still not a barrier)
>>                                atomic_dec(val)
>>
>> free(pte)
>>
>> thanks,
> 
> This is serialize:
> 
> void serialize_against_pte_lookup(struct mm_struct *mm)
> {
> 	smp_mb();
> 	if (running_lockless_pgtbl_walk(mm))
> 		smp_call_function_many(mm_cpumask(mm), do_nothing,
> 				       NULL, 1);
> }
> 
> That would mean:
> 
> CPU 0                            CPU 1
> ------                         --------------
>                                atomic_inc(val) 
> pmd_clear(pte)
> smp_mb()
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() 
> 
>                                READ(pte)
>                                if(pte)
>                                   walk page tables
> 
>                                local_irq_enable() (still not a barrier)
>                                atomic_dec(val)
> 
> By https://www.kernel.org/doc/Documentation/memory-barriers.txt :
> 'If you need all the CPUs to see a given store at the same time, use
> smp_mb().'
> 
> Is it not enough? 

Nope. CPU 1 memory accesses could be re-ordered, as I said above:


CPU 0                            CPU 1
------                         --------------
                               READ(pte) (re-ordered at run time)
                               atomic_inc(val) (no run-time memory barrier!)
                           
pmd_clear(pte)
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() (also not a mem barrier)

                               if(pte)
                                  walk page tables
...

> Do you suggest adding 'smp_mb()' after atomic_{inc,dec} ?
> 

Yes (approximately: I'd have to look closer to see which barrier call is really
required). Unless there is something else that is providing the barrier, which
is why I called this a pre-existing question: it seems like the interrupt
interlock in the current gup_fast() might not have what it needs.

In other words, if your code needs a barrier, then the pre-existing gup_fast()
code probably does, too.
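
For example, a hypothetical (untested) variant of the patch 01 helpers
with explicit barriers could look like this:

void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
	/* Pairs with the smp_mb() in serialize_against_pte_lookup(). */
	smp_mb__after_atomic();
}

void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* Order the page table reads before the counter decrement. */
	smp_mb__before_atomic();
	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
}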

thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 19:58               ` John Hubbard
@ 2019-09-23 20:23                 ` Leonardo Bras
  2019-09-23 20:26                   ` John Hubbard
  0 siblings, 1 reply; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:23 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Dan Williams, Arnd Bergmann, Jason Gunthorpe, Greg Kroah-Hartman,
	Mahesh Salgaonkar, YueHaibing, Nicholas Piggin, Mike Rapoport,
	Keith Busch, Richard Fontana, Paul Mackerras, Aneesh Kumar K.V,
	Ganesh Goudar, Thomas Gleixner, Ira Weiny, Andrew Morton,
	Allison Randal


On Mon, 2019-09-23 at 12:58 -0700, John Hubbard wrote:
> 
> CPU 0                            CPU 1
> ------                         --------------
>                                READ(pte) (re-ordered at run time)
>                                atomic_inc(val) (no run-time memory barrier!)
>                            
> pmd_clear(pte)
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() (also not a mem barrier)
> 
>                                if(pte)
>                                   walk page tables

Let me see if I can understand.
In most of the patches, it would be:

CPU 0                            CPU 1
------                           --------------
                                 ptep = __find_linux_pte()
                                 (re-ordered at run time)
                                 atomic_inc(val)
pmd_clear(pte)
smp_mb()
if (val)
    run_on_all_cpus(): IPI
                                 local_irq_disable()

                                 if (ptep)
                                     pte = *ptep;

Is that what you meant?





* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 20:23                 ` Leonardo Bras
@ 2019-09-23 20:26                   ` John Hubbard
  0 siblings, 0 replies; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:26 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Dan Williams, Arnd Bergmann, Jason Gunthorpe, Greg Kroah-Hartman,
	Mahesh Salgaonkar, YueHaibing, Nicholas Piggin, Mike Rapoport,
	Keith Busch, Richard Fontana, Paul Mackerras, Aneesh Kumar K.V,
	Ganesh Goudar, Thomas Gleixner, Ira Weiny, Andrew Morton,
	Allison Randal

On 9/23/19 1:23 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 12:58 -0700, John Hubbard wrote:
>>
>> CPU 0                            CPU 1
>> ------                         --------------
>>                                READ(pte) (re-ordered at run time)
>>                                atomic_inc(val) (no run-time memory barrier!)
>>                            
>> pmd_clear(pte)
>> if (val)
>>     run_on_all_cpus(): IPI
>>                                local_irq_disable() (also not a mem barrier)
>>
>>                                if(pte)
>>                                   walk page tables
> 
> Let me see if I can understand.
> In most of the patches, it would be:
> 
> CPU 0                            CPU 1
> ------                           --------------
>                                  ptep = __find_linux_pte()
>                                  (re-ordered at run time)
>                                  atomic_inc(val)
> pmd_clear(pte)
> smp_mb()
> if (val)
>     run_on_all_cpus(): IPI
>                                  local_irq_disable()
> 
>                                  if (ptep)
>                                      pte = *ptep;
> 
> Is that what you meant?
> 
> 

Yes.

thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
       [not found] ` <20190920195047.7703-4-leonardo@linux.ibm.com>
@ 2019-09-23 20:27   ` John Hubbard
  2019-09-23 21:01     ` Leonardo Bras
  0 siblings, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:27 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> As described, gup_pgd_range() is a lockless pagetable walk. So, in
> order to monitor it against THP split/collapse with the counting
> method, it's necessary to bracket it with
> {start,end}_lockless_pgtbl_walk.
> 
> There are dummy functions, so this is not going to add any overhead on
> archs that don't use this method.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  mm/gup.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 98f13ab37bac..675e4be27082 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2404,6 +2404,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  			unsigned int gup_flags, struct page **pages)
>  {
>  	unsigned long addr, len, end;
> +	struct mm_struct *mm;
>  	int nr = 0, ret = 0;
>  
>  	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM)))
> @@ -2421,9 +2422,12 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  
>  	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
>  	    gup_fast_permitted(start, end)) {
> +		mm = current->mm;
> +		start_lockless_pgtbl_walk(mm);
>  		local_irq_disable();

I'd also like a second opinion from the "core" -mm maintainers, but it seems like
there is now too much code around the gup_pgd_range() call. Especially since there
are two places where it's called--did you forget the other one in 
__get_user_pages_fast(), btw??

Maybe the irq handling and atomic counting should be moved into start/finish
calls, like this:

     start_gup_fast_walk()
     gup_pgd_range()
     finish_gup_fast_walk()
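
A sketch of what such wrappers might look like (the names are only the
ones suggested above, not an existing API; they just combine the
series' counting helpers with the irq toggling):

static void start_gup_fast_walk(struct mm_struct *mm)
{
	start_lockless_pgtbl_walk(mm);
	local_irq_disable();
}

static void finish_gup_fast_walk(struct mm_struct *mm)
{
	local_irq_enable();
	end_lockless_pgtbl_walk(mm);
}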

>  		gup_pgd_range(addr, end, gup_flags, pages, &nr);
>  		local_irq_enable();
> +		end_lockless_pgtbl_walk(mm);
>  		ret = nr;
>  	}
>  
> 



thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
       [not found] ` <20190920195047.7703-3-leonardo@linux.ibm.com>
@ 2019-09-23 20:39   ` John Hubbard
  2019-09-23 20:48     ` Leonardo Bras
  0 siblings, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:39 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> There is a need to monitor lockless pagetable walks, in order to avoid
> doing THP splitting/collapsing during them.
> 
> Some methods rely on local_irq_{save,restore}, but that can be slow in
> cases where a lot of cpus are used by the process.
> 
> In order to speed up these cases, I propose a refcount-based approach
> that counts the number of lockless pagetable walks happening on the
> process.
> 
> Given that there are lockless pagetable walks in generic code, it's
> necessary to create dummy functions for archs that won't use the
> approach.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  include/asm-generic/pgtable.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 75d9d68a6de7..6eb4fabb5595 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -1172,6 +1172,15 @@ static inline bool arch_has_pfn_modify_check(void)
>  #endif
>  #endif
>  
> +#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
> +static inline void start_lockless_pgtbl_walk(struct mm_struct *mm) { }
> +static inline void end_lockless_pgtbl_walk(struct mm_struct *mm) { }
> +static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	return 0;
> +}
> +#endif
> +

Please remember to include linux-mm if there is a v2.

Nit: seems like it would be nicer to just put it all in one place, and use
positive logic, and also I think people normally don't compress the empty
functions quite that much. So like this:

#ifdef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
void start_lockless_pgtbl_walk(struct mm_struct *mm); 
void end_lockless_pgtbl_walk(struct mm_struct *mm); 
int running_lockless_pgtbl_walk(struct mm_struct *mm); 

#else
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
}
static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
}
static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
{
	return 0;
}
#endif

thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks
       [not found] ` <20190920195047.7703-2-leonardo@linux.ibm.com>
@ 2019-09-23 20:42   ` John Hubbard
  2019-09-23 20:50     ` Leonardo Bras
  0 siblings, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:42 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> It's necessary to monitor lockless pagetable walks, in order to avoid doing
> THP splitting/collapsing during them.
> 
> Some methods rely on local_irq_{save,restore}, but that can be slow in
> cases where a lot of cpus are used by the process.
> 
> In order to speed up some cases, I propose a refcount-based approach
> that counts the number of lockless pagetable walks happening on the
> process.
> 
> This method does not exclude the current irq-oriented method. It works as a
> complement to skip unnecessary waiting.
> 
> start_lockless_pgtbl_walk(mm)
> 	Insert before starting any lockless pgtable walk
> end_lockless_pgtbl_walk(mm)
> 	Insert after the end of any lockless pgtable walk
> 	(Mostly after the ptep is last used)
> running_lockless_pgtbl_walk(mm)
> 	Returns the number of lockless pgtable walks running
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/mmu.h |  3 +++
>  arch/powerpc/mm/book3s64/mmu_context.c   |  1 +
>  arch/powerpc/mm/book3s64/pgtable.c       | 17 +++++++++++++++++
>  3 files changed, 21 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 23b83d3593e2..13b006e7dde4 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -116,6 +116,9 @@ typedef struct {
>  	/* Number of users of the external (Nest) MMU */
>  	atomic_t copros;
>  
> +	/* Number of running instances of lockless pagetable walk */
> +	atomic_t lockless_pgtbl_walk_count;
> +
>  	struct hash_mm_context *hash_context;
>  
>  	unsigned long vdso_base;
> diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
> index 2d0cb5ba9a47..3dd01c0ca5be 100644
> --- a/arch/powerpc/mm/book3s64/mmu_context.c
> +++ b/arch/powerpc/mm/book3s64/mmu_context.c
> @@ -200,6 +200,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>  #endif
>  	atomic_set(&mm->context.active_cpus, 0);
>  	atomic_set(&mm->context.copros, 0);
> +	atomic_set(&mm->context.lockless_pgtbl_walk_count, 0);
>  
>  	return 0;
>  }
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 7d0e0d0d22c4..13239b17a22c 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -98,6 +98,23 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
>  	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>  }
>  

Somewhere, there should be a short comment that explains how the following functions
are meant to be used. And it should include the interaction with irqs, so maybe
if you end up adding that combined wrapper function that does both, that's 
where the documentation would go. If not, then here is probably where it goes.


> +void start_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
> +}
> +EXPORT_SYMBOL(start_lockless_pgtbl_walk);
> +
> +void end_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
> +}
> +EXPORT_SYMBOL(end_lockless_pgtbl_walk);
> +
> +int running_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	return atomic_read(&mm->context.lockless_pgtbl_walk_count);
> +}
> +
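
For instance, a comment along these lines (a sketch assembled from the
explanations elsewhere in this thread, not wording from the patch)
could sit above these helpers:

/*
 * Bracket every lockless pagetable walk with start_lockless_pgtbl_walk()
 * and end_lockless_pgtbl_walk(), with interrupts disabled in between.
 * serialize_against_pte_lookup() issues smp_mb() and then IPIs every CPU
 * in mm_cpumask(mm) only while this counter is non-zero, so a THP
 * split/collapse waits for in-flight walkers to drain; walkers that
 * start afterwards see the already-updated pmd.
 */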


thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 08/11] powerpc/kvm/book3s_hv: Applies counting method to monitor lockless pgtbl walks
       [not found] ` <20190920195047.7703-9-leonardo@linux.ibm.com>
@ 2019-09-23 20:47   ` John Hubbard
  0 siblings, 0 replies; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:47 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> Applies the counting-based method for monitoring all book3s_hv related
> functions that do lockless pagetable walks.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  arch/powerpc/kvm/book3s_hv_nested.c | 8 ++++++++
>  arch/powerpc/kvm/book3s_hv_rm_mmu.c | 9 ++++++++-
>  2 files changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index 735e0ac6f5b2..ed68e57af3a3 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -804,6 +804,7 @@ static void kvmhv_update_nest_rmap_rc(struct kvm *kvm, u64 n_rmap,
>  		return;
>  
>  	/* Find the pte */
> +	start_lockless_pgtbl_walk(kvm->mm);

Here, it appears that there is no disabling of interrupts, so now I'm not
sure anymore if this is going to work properly. Definitely needs some documentation
somewhere as to why this is OK, since the previous discussions don't apply
if the interrupts are left on.


thanks,
-- 
John Hubbard
NVIDIA

>  	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
>  	/*
>  	 * If the pte is present and the pfn is still the same, update the pte.
> @@ -815,6 +816,7 @@ static void kvmhv_update_nest_rmap_rc(struct kvm *kvm, u64 n_rmap,
>  		__radix_pte_update(ptep, clr, set);
>  		kvmppc_radix_tlbie_page(kvm, gpa, shift, lpid);
>  	}
> +	end_lockless_pgtbl_walk(kvm->mm);
>  }
>  
>  /*
> @@ -854,10 +856,12 @@ static void kvmhv_remove_nest_rmap(struct kvm *kvm, u64 n_rmap,
>  		return;
>  
>  	/* Find and invalidate the pte */
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
>  	/* Don't spuriously invalidate ptes if the pfn has changed */
>  	if (ptep && pte_present(*ptep) && ((pte_val(*ptep) & mask) == hpa))
>  		kvmppc_unmap_pte(kvm, ptep, gpa, shift, NULL, gp->shadow_lpid);
> +	end_lockless_pgtbl_walk(kvm->mm);
>  }
>  
>  static void kvmhv_remove_nest_rmap_list(struct kvm *kvm, unsigned long *rmapp,
> @@ -921,6 +925,7 @@ static bool kvmhv_invalidate_shadow_pte(struct kvm_vcpu *vcpu,
>  	int shift;
>  
>  	spin_lock(&kvm->mmu_lock);
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
>  	if (!shift)
>  		shift = PAGE_SHIFT;
> @@ -928,6 +933,7 @@ static bool kvmhv_invalidate_shadow_pte(struct kvm_vcpu *vcpu,
>  		kvmppc_unmap_pte(kvm, ptep, gpa, shift, NULL, gp->shadow_lpid);
>  		ret = true;
>  	}
> +	end_lockless_pgtbl_walk(kvm->mm);
>  	spin_unlock(&kvm->mmu_lock);
>  
>  	if (shift_ret)
> @@ -1362,11 +1368,13 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
>  	/* See if can find translation in our partition scoped tables for L1 */
>  	pte = __pte(0);
>  	spin_lock(&kvm->mmu_lock);
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	pte_p = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
>  	if (!shift)
>  		shift = PAGE_SHIFT;
>  	if (pte_p)
>  		pte = *pte_p;
> +	end_lockless_pgtbl_walk(kvm->mm);
>  	spin_unlock(&kvm->mmu_lock);
>  
>  	if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
> diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> index 63e0ce91e29d..53ca67492211 100644
> --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> @@ -258,6 +258,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
>  	 * If called in real mode we have MSR_EE = 0. Otherwise
>  	 * we disable irq above.
>  	 */
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(pgdir, hva, NULL, &hpage_shift);
>  	if (ptep) {
>  		pte_t pte;
> @@ -311,6 +312,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
>  		ptel &= ~(HPTE_R_W|HPTE_R_I|HPTE_R_G);
>  		ptel |= HPTE_R_M;
>  	}
> +	end_lockless_pgtbl_walk(kvm->mm);
>  
>  	/* Find and lock the HPTEG slot to use */
>   do_insert:
> @@ -886,10 +888,15 @@ static int kvmppc_get_hpa(struct kvm_vcpu *vcpu, unsigned long gpa,
>  	hva = __gfn_to_hva_memslot(memslot, gfn);
>  
>  	/* Try to find the host pte for that virtual address */
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
> -	if (!ptep)
> +	if (!ptep) {
> +		end_lockless_pgtbl_walk(kvm->mm);
>  		return H_TOO_HARD;
> +	}
>  	pte = kvmppc_read_update_linux_pte(ptep, writing);
> +	end_lockless_pgtbl_walk(kvm->mm);
> +
>  	if (!pte_present(pte))
>  		return H_TOO_HARD;
>  
> 



* Re: [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
  2019-09-23 20:39   ` [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks John Hubbard
@ 2019-09-23 20:48     ` Leonardo Bras
  2019-09-23 20:53       ` John Hubbard
  0 siblings, 1 reply; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:48 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM


Thanks for the feedback,

On Mon, 2019-09-23 at 13:39 -0700, John Hubbard wrote:
> Please remember to include linux-mm if there is a v2.

Sure, I will include it in v3.

> Nit: seems like it would be nicer to just put it all in one place, and use
> positive logic, and also I think people normally don't compress the empty
> functions quite that much. So like this:

I did this by following the convention used in the rest of this file.
As you can see, all the other features use the pattern
#ifndef SOMETHING
dummy/generic functions
#endif

Declaring the functions becomes the responsibility of the arch.

> #ifdef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
> void start_lockless_pgtbl_walk(struct mm_struct *mm); 
> void end_lockless_pgtbl_walk(struct mm_struct *mm); 
> int running_lockless_pgtbl_walk(struct mm_struct *mm); 
> 
> #else
> static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
> {
> }
> static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
> {
> }
> static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
> {
>         return 0;
> }
> #endif
> 
> thanks,



* Re: [PATCH v2 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks
  2019-09-23 20:42   ` [PATCH v2 01/11] powerpc/mm: Adds counting method " John Hubbard
@ 2019-09-23 20:50     ` Leonardo Bras
  0 siblings, 0 replies; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:50 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch


On Mon, 2019-09-23 at 13:42 -0700, John Hubbard wrote:
> Somewhere, there should be a short comment that explains how the following functions
> are meant to be used. And it should include the interaction with irqs, so maybe
> if you end up adding that combined wrapper function that does both, that's 
> where the documentation would go. If not, then here is probably where it goes.

Thanks for the feedback.
I will make sure to add comments here for v3.



* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
       [not found] ` <1f5d9380418ad8bb90c6bbdac34716c650b917a0.camel@linux.ibm.com>
  2019-09-20 21:24   ` [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks John Hubbard
@ 2019-09-23 20:51   ` John Hubbard
  2019-09-23 20:58     ` Leonardo Bras
  1 sibling, 1 reply; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:51 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 1:12 PM, Leonardo Bras wrote:
...
>>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>>  arch/powerpc/perf/callchain.c                |  5 ++++-
>>  include/asm-generic/pgtable.h                |  9 +++++++++
>>  mm/gup.c                                     |  4 ++++
>>  16 files changed, 108 insertions(+), 8 deletions(-)
>>

Also, which tree do these patches apply to, please? 

thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
  2019-09-23 20:48     ` Leonardo Bras
@ 2019-09-23 20:53       ` John Hubbard
  0 siblings, 0 replies; 19+ messages in thread
From: John Hubbard @ 2019-09-23 20:53 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

On 9/23/19 1:48 PM, Leonardo Bras wrote:
> Thanks for the feedback,
> 
> On Mon, 2019-09-23 at 13:39 -0700, John Hubbard wrote:
>> Please remember to include linux-mm if there is a v2.
> 
> Sure, I will include it in v3.
> 
>> Nit: seems like it would be nicer to just put it all in one place, and use
>> positive logic, and also I think people normally don't compress the empty
>> functions quite that much. So like this:
> 
> I did this by following the convention used in the rest of this file.
> As you can see, all the other features use the pattern
> #ifndef SOMETHING
> dummy/generic functions
> #endif
> 
> Declaring the functions becomes the responsibility of the arch.

I noticed that, but... well, I guess you're right. Keep it the way
it is. :)

thanks,
-- 
John Hubbard
NVIDIA

> 
>> #ifdef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
>> void start_lockless_pgtbl_walk(struct mm_struct *mm); 
>> void end_lockless_pgtbl_walk(struct mm_struct *mm); 
>> int running_lockless_pgtbl_walk(struct mm_struct *mm); 
>>
>> #else
>> static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
>> {
>> }
>> static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
>> {
>> }
>> static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
>> {
>>         return 0;
>> }
>> #endif
>>
>> thanks,



* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable wakls
  2019-09-23 20:51   ` John Hubbard
@ 2019-09-23 20:58     ` Leonardo Bras
  0 siblings, 0 replies; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:58 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch


On Mon, 2019-09-23 at 13:51 -0700, John Hubbard wrote:
> Also, which tree do these patches apply to, please? 
> 
> thanks,

They should apply on top of v5.3 + one patch: 
https://patchwork.ozlabs.org/patch/1164925/

I was working on top of this patch because I thought it would be
merged quickly. But since I got no feedback, it was not merged, and the
present patchset no longer applies cleanly. :(

But I will rebase v3 on top of plain v5.3.

Thanks,
Leonardo Bras






* Re: [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
  2019-09-23 20:27   ` [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range John Hubbard
@ 2019-09-23 21:01     ` Leonardo Bras
  2019-09-23 21:09       ` John Hubbard
  0 siblings, 1 reply; 19+ messages in thread
From: Leonardo Bras @ 2019-09-23 21:01 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal


On Mon, 2019-09-23 at 13:27 -0700, John Hubbard wrote:
> I'd also like a second opinion from the "core" -mm maintainers, but it seems like
> there is now too much code around the gup_pgd_range() call. Especially since there
> are two places where it's called--did you forget the other one in 
> __get_user_pages_fast(), btw??
> 
Oh, sorry, I missed this one. I will put it on v3.
(Also I will make sure to include linux-mm on v3.)

> Maybe the irq handling and atomic counting should be moved into start/finish
> calls, like this:
> 
>      start_gup_fast_walk()
>      gup_pgd_range()
>      finish_gup_fast_walk()

There are cases where interrupt disable/enable is not done around the
lockless pagetable walk; it may be done by functions higher up the call
stack. That's why I opted to have the helpers do only the atomic
operation.



* Re: [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
  2019-09-23 21:01     ` Leonardo Bras
@ 2019-09-23 21:09       ` John Hubbard
  0 siblings, 0 replies; 19+ messages in thread
From: John Hubbard @ 2019-09-23 21:09 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/23/19 2:01 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 13:27 -0700, John Hubbard wrote:
>> I'd also like a second opinion from the "core" -mm maintainers, but it seems like
>> there is now too much code around the gup_pgd_range() call. Especially since there
>> are two places where it's called--did you forget the other one in 
>> __get_user_pages_fast(), btw??
>>
> Oh, sorry, I missed this one. I will put it on v3.
> (Also I will make sure to include linux-mm on v3.)
> 
>> Maybe the irq handling and atomic counting should be moved into start/finish
>> calls, like this:
>>
>>      start_gup_fast_walk()
>>      gup_pgd_range()
>>      finish_gup_fast_walk()
> 
> There are cases where interrupt disable/enable is not done around the
> lockless pagetable walk; it may be done by functions higher up the call
> stack. That's why I opted to have the helpers do only the atomic
> operation.
> 

That doesn't prevent you from writing the above as shown, though, for mm/gup.c.

(Also, let's see on the other thread if it is even valid to be indicating
a lockless walk, without also disabling interrupts.)

thanks,
-- 
John Hubbard
NVIDIA



Thread overview: 19+ messages
     [not found] <20190920195047.7703-1-leonardo@linux.ibm.com>
     [not found] ` <20190920195047.7703-12-leonardo@linux.ibm.com>
     [not found]   ` <1b39eaa7-751d-40bc-d3d7-41aaa15be42a@nvidia.com>
     [not found]     ` <24863d8904c6e05e5dd48cab57db4274675ae654.camel@linux.ibm.com>
2019-09-21  0:48       ` [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing John Hubbard
2019-09-23 17:25         ` Leonardo Bras
2019-09-23 18:14           ` John Hubbard
2019-09-23 19:40             ` Leonardo Bras
2019-09-23 19:58               ` John Hubbard
2019-09-23 20:23                 ` Leonardo Bras
2019-09-23 20:26                   ` John Hubbard
     [not found] ` <20190920195047.7703-4-leonardo@linux.ibm.com>
2019-09-23 20:27   ` [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range John Hubbard
2019-09-23 21:01     ` Leonardo Bras
2019-09-23 21:09       ` John Hubbard
     [not found] ` <20190920195047.7703-3-leonardo@linux.ibm.com>
2019-09-23 20:39   ` [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks John Hubbard
2019-09-23 20:48     ` Leonardo Bras
2019-09-23 20:53       ` John Hubbard
     [not found] ` <20190920195047.7703-2-leonardo@linux.ibm.com>
2019-09-23 20:42   ` [PATCH v2 01/11] powerpc/mm: Adds counting method " John Hubbard
2019-09-23 20:50     ` Leonardo Bras
     [not found] ` <20190920195047.7703-9-leonardo@linux.ibm.com>
2019-09-23 20:47   ` [PATCH v2 08/11] powerpc/kvm/book3s_hv: Applies counting method to monitor lockless pgtbl walks John Hubbard
     [not found] ` <1f5d9380418ad8bb90c6bbdac34716c650b917a0.camel@linux.ibm.com>
2019-09-20 21:24   ` [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks John Hubbard
2019-09-23 20:51   ` John Hubbard
2019-09-23 20:58     ` Leonardo Bras
