* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
@ 2019-09-20 20:28 ` Leonardo Bras
0 siblings, 0 replies; 22+ messages in thread
From: Leonardo Bras @ 2019-09-20 20:28 UTC (permalink / raw)
To: John Hubbard, linuxppc-dev, linux-kernel
Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
Ira Weiny, Dan Williams, Allison Randal
[-- Attachment #1: Type: text/plain, Size: 2179 bytes --]
On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
> On 9/20/19 12:50 PM, Leonardo Bras wrote:
> > Skips slow part of serialize_against_pte_lookup if there is no running
> > lockless pagetable walk.
> >
> > Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> > ---
> > arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> > index 13239b17a22c..41ca30269fa3 100644
> > --- a/arch/powerpc/mm/book3s64/pgtable.c
> > +++ b/arch/powerpc/mm/book3s64/pgtable.c
> > @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
> > void serialize_against_pte_lookup(struct mm_struct *mm)
> > {
> > smp_mb();
> > - smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
> > + if (running_lockless_pgtbl_walk(mm))
> > + smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>
> Hi,
>
> If you do this, then you are left without any synchronization. So it will
> have race conditions: a page table walk could begin right after the above
> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
> will continue on right away, under the false assumption that it has let
> all the current page table walks complete.
>
> The current code uses either interrupts or RCU to synchronize, and in
> either case, you end up scheduling something on each CPU. If you remove
> that entirely, I don't see anything left. ("Pure" atomic counting is not
> a synchronization technique all by itself.)
>
> thanks,
Hello John,
Thanks for the fast feedback.
See, before calling serialize_against_pte_lookup(), there is always an
update or clear on the pmd. So, if a page table walk begins right after
the check returns "false", there is no problem, since it will use the
updated pmd.

Think about serialize on a process with many CPUs: even after you have
checked the last one (the wait part), there is no guarantee that the
first one is not already starting a new lockless pagetable walk.

The same mechanism protects both methods.
Does it make sense?
Best regards,
Leonardo Bras
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-20 20:28 ` Leonardo Bras
@ 2019-09-20 21:15 ` John Hubbard
-1 siblings, 0 replies; 22+ messages in thread
From: John Hubbard @ 2019-09-20 21:15 UTC (permalink / raw)
To: Leonardo Bras, linuxppc-dev, linux-kernel
Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
Ira Weiny, Dan Williams, Allison Randal
On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skips slow part of serialize_against_pte_lookup if there is no running
>>> lockless pagetable walk.
>>>
>>> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
>>> ---
>>> arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>> index 13239b17a22c..41ca30269fa3 100644
>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>>> void serialize_against_pte_lookup(struct mm_struct *mm)
>>> {
>>> smp_mb();
>>> - smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>> + if (running_lockless_pgtbl_walk(mm))
>>> + smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>
>> Hi,
>>
>> If you do this, then you are left without any synchronization. So it will
>> have race conditions: a page table walk could begin right after the above
>> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
>> will continue on right away, under the false assumption that it has let
>> all the current page table walks complete.
>>
>> The current code uses either interrupts or RCU to synchronize, and in
>> either case, you end up scheduling something on each CPU. If you remove
>> that entirely, I don't see anything left. ("Pure" atomic counting is not
>> a synchronization technique all by itself.)
(Actually, correction: the protection is really for deleting the page tables,
so maybe that doesn't apply directly here.)
>>
>
> Hello John,
> Thanks for the fast feedback.
>
> See, before calling serialize_against_pte_lookup(), there is always an
> update or clear on the pmd. So, if a page table walk begin right after
> the check returns "false", there is no problem, since it will use the
> updated pmd.
>
> Think about serialize, on a process with a bunch of cpus. After you
> check the last processor (wait part), there is no guarantee that the
> first one is not starting a lockless pagetable walk.
>
Then why even try to count whether a page table walk is happening at all, if
a) it can't actually be done accurately with just counting, and
b) you are claiming that it's safe even if the count is wrong?
In other words, by your reasoning (which I'm not saying is wrong--I don't
fully get this ppc model yet), you can probably just drop all of the counting
code from this patchset.
btw, once this is sorted out, I think all of those large copy-pasted comment
blocks before each call to serialize_against_pte_lookup() should
be updated to reflect what the actual synchronization mechanism is.
This is pretty hard to figure out. :)
thanks,
--
John Hubbard
NVIDIA
> The same mechanism protect both methods.
>
> Does it make sense?
>
> Best regards,
> Leonardo Bras
>
>
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-20 20:28 ` Leonardo Bras
@ 2019-09-21 0:48 ` John Hubbard
-1 siblings, 0 replies; 22+ messages in thread
From: John Hubbard @ 2019-09-21 0:48 UTC (permalink / raw)
To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
Jason Gunthorpe, Keith Busch
On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skips slow part of serialize_against_pte_lookup if there is no running
>>> lockless pagetable walk.
>>>
>>> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
>>> ---
>>> arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>> index 13239b17a22c..41ca30269fa3 100644
>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>>> void serialize_against_pte_lookup(struct mm_struct *mm)
>>> {
>>> smp_mb();
>>> - smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>> + if (running_lockless_pgtbl_walk(mm))
>>> + smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>
>> Hi,
>>
>> If you do this, then you are left without any synchronization. So it will
>> have race conditions: a page table walk could begin right after the above
>> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
>> will continue on right away, under the false assumption that it has let
>> all the current page table walks complete.
>>
>> The current code uses either interrupts or RCU to synchronize, and in
>> either case, you end up scheduling something on each CPU. If you remove
>> that entirely, I don't see anything left. ("Pure" atomic counting is not
>> a synchronization technique all by itself.)
>>
>> thanks,
>
> Hello John,
> Thanks for the fast feedback.
>
> See, before calling serialize_against_pte_lookup(), there is always an
> update or clear on the pmd. So, if a page table walk begin right after
> the check returns "false", there is no problem, since it will use the
> updated pmd.
>
> Think about serialize, on a process with a bunch of cpus. After you
> check the last processor (wait part), there is no guarantee that the
> first one is not starting a lockless pagetable walk.
>
> The same mechanism protect both methods.
>
> Does it make sense?
>
Yes, after working through this with Mark Hairgrove, I think I finally
realize that the new code will allow existing gup_fast() readers to drain,
before proceeding. So that technically works (ignoring issues such as
whether it's desirable to use this approach, vs. for example batching
the THP updates, etc), I agree.
(And please ignore my other response that asked if the counting was
helping at all--I see that it does.)
However, Mark pointed out a pre-existing question, which neither of us
could figure out: are the irq disable/enable calls effective, given that
they are (apparently) not memory barriers?
Given this claim from Documentation/memory-barriers.txt:
INTERRUPT DISABLING FUNCTIONS
-----------------------------
Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.
...and given both the preexisting code, and the code you've added:
mm/gup.c:
	atomic_inc(readers);   /* new code */
	local_irq_disable();
	gup_pgd_range();
	...read page tables
	local_irq_enable();
	atomic_dec(readers);   /* new code */
...if the page table reads are allowed to speculatively happen *outside*
of the irq enable/disable calls (which could happen if there are no run-time
memory barriers in the above code), then nothing works anymore.
So it seems that full memory barriers (not just compiler barriers) are required.
If the irq enable/disable somehow provides that, then your new code just goes
along for the ride and Just Works. (You don't have any memory barriers in
start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
barriers provided by the atomic inc/dec.)
So it's really a pre-existing question about the correctness of the gup_fast()
irq disabling approach.
+CC linux-mm
thanks,
--
John Hubbard
NVIDIA
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-21 0:48 ` John Hubbard
@ 2019-09-23 17:25 ` Leonardo Bras
-1 siblings, 0 replies; 22+ messages in thread
From: Leonardo Bras @ 2019-09-23 17:25 UTC (permalink / raw)
To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
Ira Weiny, Dan Williams, Allison Randal
[-- Attachment #1: Type: text/plain, Size: 1538 bytes --]
On Fri, 2019-09-20 at 17:48 -0700, John Hubbard wrote:
>
[...]
> So it seems that full memory barriers (not just compiler barriers) are required.
> If the irq enable/disable somehow provides that, then your new code just goes
> along for the ride and Just Works. (You don't have any memory barriers in
> start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
> barriers provided by the atomic inc/dec.)
>
> So it's really a pre-existing question about the correctness of the gup_fast()
> irq disabling approach.
I am not experienced in other archs, and I am still pretty new to
Power, but from what I could understand, this behavior is best
explained by serialize_against_pte_lookup() itself.

What happens here is that, before doing a THP split/collapse, the
function updates the pmd and then calls serialize_against_pte_lookup(),
in order to avoid invalid output from a lockless pagetable walk.

Serialize basically runs do_nothing() on every cpu related to the
process, and waits for it to return.

Running it depends on interrupts being enabled, so disabling them before
gup_pgd_range() and re-enabling them at the end makes the THP
split/collapse wait for gup_pgd_range() to complete on every cpu before
continuing. (This is where the locking happens.)

(As said before, every gup_pgd_range() that starts after that uses the
updated pmd, so there is no problem.)
I suppose other archs may have a similar mechanism using
local_irq_{disable,enable}.

Does that answer your questions?
Best regards,
Leonardo Bras
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-23 17:25 ` Leonardo Bras
@ 2019-09-23 18:14 ` John Hubbard
-1 siblings, 0 replies; 22+ messages in thread
From: John Hubbard @ 2019-09-23 18:14 UTC (permalink / raw)
To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
Ira Weiny, Dan Williams, Allison Randal
On 9/23/19 10:25 AM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 17:48 -0700, John Hubbard wrote:
>>
> [...]
>> So it seems that full memory barriers (not just compiler barriers) are required.
>> If the irq enable/disable somehow provides that, then your new code just goes
>> along for the ride and Just Works. (You don't have any memory barriers in
>> start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
>> barriers provided by the atomic inc/dec.)
>>
>> So it's really a pre-existing question about the correctness of the gup_fast()
>> irq disabling approach.
>
> I am not experienced in other archs, and I am still pretty new to
> Power, but by what I could understand, this behavior is better
> explained in serialize_against_pte_lookup.
>
> What happens here is that, before doing a THP split/collapse, the
> function does a update of the pmd and a serialize_against_pte_lookup,
> in order do avoid a invalid output on a lockless pagetable walk.
>
> Serialize basically runs a do_nothing in every cpu related to the
> process, and wait for it to return.
>
> This running depends on interrupt being enabled, so disabling it before
> gup_pgd_range() and re-enabling after the end, makes the THP
> split/collapse wait for gup_pgd_range() completion in every cpu before
> continuing. (here happens the lock)
>
That part is all fine, but there are no run-time memory barriers in the
atomic_inc() and atomic_dec() additions, which means that this is not
safe, because memory operations on CPU 1 can be reordered. It's safe
as shown *if* there are memory barriers to keep the order as shown:
CPU 0                             CPU 1
------                            --------------
                                  atomic_inc(val)   (no run-time memory barrier!)
pmd_clear(pte)
if (val)
    run_on_all_cpus(): IPI
                                  local_irq_disable()   (also not a mem barrier)

                                  READ(pte)
                                  if (pte)
                                      walk page tables

                                  local_irq_enable()   (still not a barrier)
                                  atomic_dec(val)

free(pte)
thanks,
--
John Hubbard
NVIDIA
> (As told before, every gup_pgd_range() that occurs after it uses a
> updated pmd, so no problem.)
>
> I am sure other archs may have a similar mechanism using
> local_irq_{disable,enable}.
>
> Did it answer your questions?
>
> Best regards,
>
> Leonardo Bras
>
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-23 18:14 ` John Hubbard
@ 2019-09-23 19:40 ` Leonardo Bras
-1 siblings, 0 replies; 22+ messages in thread
From: Leonardo Bras @ 2019-09-23 19:40 UTC (permalink / raw)
To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
Cc: Arnd Bergmann, Richard Fontana, Greg Kroah-Hartman, YueHaibing,
Nicholas Piggin, Mike Rapoport, Keith Busch, Jason Gunthorpe,
Paul Mackerras, Aneesh Kumar K.V, Allison Randal,
Mahesh Salgaonkar, Ganesh Goudar, Thomas Gleixner, Ira Weiny,
Andrew Morton, Dan Williams
[-- Attachment #1: Type: text/plain, Size: 1990 bytes --]
On Mon, 2019-09-23 at 11:14 -0700, John Hubbard wrote:
> On 9/23/19 10:25 AM, Leonardo Bras wrote:
> [...]
> That part is all fine, but there are no run-time memory barriers in the
> atomic_inc() and atomic_dec() additions, which means that this is not
> safe, because memory operations on CPU 1 can be reordered. It's safe
> as shown *if* there are memory barriers to keep the order as shown:
>
> CPU 0                             CPU 1
> ------                            --------------
>                                   atomic_inc(val)   (no run-time memory barrier!)
> pmd_clear(pte)
> if (val)
>     run_on_all_cpus(): IPI
>                                   local_irq_disable()   (also not a mem barrier)
>
>                                   READ(pte)
>                                   if (pte)
>                                       walk page tables
>
>                                   local_irq_enable()   (still not a barrier)
>                                   atomic_dec(val)
>
> free(pte)
>
> thanks,
This is serialize:
void serialize_against_pte_lookup(struct mm_struct *mm)
{
	smp_mb();
	if (running_lockless_pgtbl_walk(mm))
		smp_call_function_many(mm_cpumask(mm), do_nothing,
				       NULL, 1);
}
That would mean:
CPU 0                             CPU 1
------                            --------------
                                  atomic_inc(val)
pmd_clear(pte)
smp_mb()
if (val)
    run_on_all_cpus(): IPI
                                  local_irq_disable()

                                  READ(pte)
                                  if (pte)
                                      walk page tables

                                  local_irq_enable()   (still not a barrier)
                                  atomic_dec(val)
Per https://www.kernel.org/doc/Documentation/memory-barriers.txt:

'If you need all the CPUs to see a given store at the same time, use
smp_mb().'

Is that not enough?

Do you suggest adding smp_mb() after atomic_{inc,dec}?
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-23 19:40 ` Leonardo Bras
@ 2019-09-23 19:58 ` John Hubbard
-1 siblings, 0 replies; 22+ messages in thread
From: John Hubbard @ 2019-09-23 19:58 UTC (permalink / raw)
To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
Cc: Arnd Bergmann, Richard Fontana, Greg Kroah-Hartman, YueHaibing,
Nicholas Piggin, Mike Rapoport, Keith Busch, Jason Gunthorpe,
Paul Mackerras, Aneesh Kumar K.V, Allison Randal,
Mahesh Salgaonkar, Ganesh Goudar, Thomas Gleixner, Ira Weiny,
Andrew Morton, Dan Williams
On 9/23/19 12:40 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 11:14 -0700, John Hubbard wrote:
>> On 9/23/19 10:25 AM, Leonardo Bras wrote:
>> [...]
>> That part is all fine, but there are no run-time memory barriers in the
>> atomic_inc() and atomic_dec() additions, which means that this is not
>> safe, because memory operations on CPU 1 can be reordered. It's safe
>> as shown *if* there are memory barriers to keep the order as shown:
>>
>> CPU 0                             CPU 1
>> ------                            --------------
>>                                   atomic_inc(val)   (no run-time memory barrier!)
>> pmd_clear(pte)
>> if (val)
>>     run_on_all_cpus(): IPI
>>                                   local_irq_disable()   (also not a mem barrier)
>>
>>                                   READ(pte)
>>                                   if (pte)
>>                                       walk page tables
>>
>>                                   local_irq_enable()   (still not a barrier)
>>                                   atomic_dec(val)
>>
>> free(pte)
>>
>> thanks,
>
> This is serialize:
>
> void serialize_against_pte_lookup(struct mm_struct *mm)
> {
> 	smp_mb();
> 	if (running_lockless_pgtbl_walk(mm))
> 		smp_call_function_many(mm_cpumask(mm), do_nothing,
> 				       NULL, 1);
> }
>
> That would mean:
>
> CPU 0                         CPU 1
> ------                        --------------
>                               atomic_inc(val)
> pmd_clear(pte)
> smp_mb()
> if (val)
>         run_on_all_cpus(): IPI
>                               local_irq_disable()
>
>                               READ(pte)
>                               if (pte)
>                                       walk page tables
>
>                               local_irq_enable()  (still not a barrier)
>                               atomic_dec(val)
>
> By https://www.kernel.org/doc/Documentation/memory-barriers.txt :
> 'If you need all the CPUs to see a given store at the same time, use
> smp_mb().'
>
> Is it not enough?
Nope. CPU 1 memory accesses could be re-ordered, as I said above:
CPU 0                         CPU 1
------                        --------------
                              READ(pte)  (re-ordered at run time)
                              atomic_inc(val)  (no run-time memory barrier!)
pmd_clear(pte)
if (val)
        run_on_all_cpus(): IPI
                              local_irq_disable()  (also not a mem barrier)
                              if (pte)
                                      walk page tables
...
> Do you suggest adding 'smp_mb()' after atomic_{inc,dec} ?
>
Yes (approximately: I'd have to look closer to see which barrier call is really
required). Unless there is something else that is providing the barrier, which
is why I called this a pre-existing question: it seems like the interrupt
interlock in the current gup_fast() might not have what it needs.
In other words, if your code needs a barrier, then the pre-existing gup_fast()
code probably does, too.
thanks,
--
John Hubbard
NVIDIA
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-23 19:58 ` John Hubbard
@ 2019-09-23 20:23 ` Leonardo Bras
-1 siblings, 0 replies; 22+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:23 UTC (permalink / raw)
To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
Cc: Dan Williams, Arnd Bergmann, Jason Gunthorpe, Greg Kroah-Hartman,
Mahesh Salgaonkar, YueHaibing, Nicholas Piggin, Mike Rapoport,
Keith Busch, Richard Fontana, Paul Mackerras, Aneesh Kumar K.V,
Ganesh Goudar, Thomas Gleixner, Ira Weiny, Andrew Morton,
Allison Randal
[-- Attachment #1: Type: text/plain, Size: 1026 bytes --]
On Mon, 2019-09-23 at 12:58 -0700, John Hubbard wrote:
>
> CPU 0                         CPU 1
> ------                        --------------
>                               READ(pte)  (re-ordered at run time)
>                               atomic_inc(val)  (no run-time memory barrier!)
>
> pmd_clear(pte)
> if (val)
>         run_on_all_cpus(): IPI
>                               local_irq_disable()  (also not a mem barrier)
>
>                               if (pte)
>                                       walk page tables
Let me see if I understand.
In most of the patches, it would be:
CPU 0                         CPU 1
------                        --------------
                              ptep = __find_linux_pte
                                     (re-ordered at run time)
                              atomic_inc(val)
pmd_clear(pte)
smp_mb()
if (val)
        run_on_all_cpus(): IPI
                              local_irq_disable()

                              if (ptep)
                                      pte = *ptep;
Is that what you meant?
* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
2019-09-23 20:23 ` Leonardo Bras
@ 2019-09-23 20:26 ` John Hubbard
-1 siblings, 0 replies; 22+ messages in thread
From: John Hubbard @ 2019-09-23 20:26 UTC (permalink / raw)
To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
Cc: Dan Williams, Arnd Bergmann, Jason Gunthorpe, Greg Kroah-Hartman,
Mahesh Salgaonkar, YueHaibing, Nicholas Piggin, Mike Rapoport,
Keith Busch, Richard Fontana, Paul Mackerras, Aneesh Kumar K.V,
Ganesh Goudar, Thomas Gleixner, Ira Weiny, Andrew Morton,
Allison Randal
On 9/23/19 1:23 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 12:58 -0700, John Hubbard wrote:
>>
>> CPU 0                        CPU 1
>> ------                       --------------
>>                              READ(pte)  (re-ordered at run time)
>>                              atomic_inc(val)  (no run-time memory barrier!)
>>
>> pmd_clear(pte)
>> if (val)
>>         run_on_all_cpus(): IPI
>>                              local_irq_disable()  (also not a mem barrier)
>>
>>                              if (pte)
>>                                      walk page tables
>
> Let me see if I understand.
> In most of the patches, it would be:
>
> CPU 0                         CPU 1
> ------                        --------------
>                               ptep = __find_linux_pte
>                                      (re-ordered at run time)
>                               atomic_inc(val)
> pmd_clear(pte)
> smp_mb()
> if (val)
>         run_on_all_cpus(): IPI
>                               local_irq_disable()
>
>                               if (ptep)
>                                       pte = *ptep;
>
> Is that what you meant?
>
>
Yes.
thanks,
--
John Hubbard
NVIDIA