linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH] powerpc: Enhance pmem DMA bypass handling
@ 2021-10-21 17:44 Brian King
  2021-10-22 12:24 ` Alexey Kardashevskiy
From: Brian King @ 2021-10-21 17:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: aik, Brian King

If ibm,pmemory is installed in the system, it can appear anywhere
in the address space. This patch enhances how we handle DMA for devices when
ibm,pmemory is present. In the case where we have enough DMA space to
direct map all of RAM, but not ibm,pmemory, we use direct DMA for
I/O to RAM and use the default window to dynamically map ibm,pmemory.
In the case where we only have a single DMA window, this won't work,
so if the window is not big enough to map the entire address range,
we cannot direct map.

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 269f61d519c2..d9ae985d10a4 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
 	phys_addr_t max_addr = memory_hotplug_max();
 	struct device_node *memory;
 
-	/*
-	 * The "ibm,pmemory" can appear anywhere in the address space.
-	 * Assuming it is still backed by page structs, set the upper limit
-	 * for the huge DMA window as MAX_PHYSMEM_BITS.
-	 */
-	if (of_find_node_by_type(NULL, "ibm,pmemory"))
-		return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
-			(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
-
 	for_each_node_by_type(memory, "memory") {
 		unsigned long start, size;
 		int n_mem_addr_cells, n_mem_size_cells, len;
@@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
 	 */
 	len = max_ram_len;
 	if (pmem_present) {
+		if (default_win_removed) {
+			/*
+			 * If we only have one DMA window and have pmem present,
+			 * then we need to be able to map the entire address
+			 * range in order to be able to do direct DMA to RAM.
+			 */
+			len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
+					(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));
+		}
+
 		if (query.largest_available_block >=
 		    (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
 			len = MAX_PHYSMEM_BITS;
-- 
2.27.0



* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-21 17:44 [PATCH] powerpc: Enhance pmem DMA bypass handling Brian King
@ 2021-10-22 12:24 ` Alexey Kardashevskiy
  2021-10-22 20:18   ` Brian King
From: Alexey Kardashevskiy @ 2021-10-22 12:24 UTC (permalink / raw)
  To: Brian King, linuxppc-dev



On 22/10/2021 04:44, Brian King wrote:
> If ibm,pmemory is installed in the system, it can appear anywhere
> in the address space. This patch enhances how we handle DMA for devices when
> ibm,pmemory is present. In the case where we have enough DMA space to
> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
> I/O to RAM and use the default window to dynamically map ibm,pmemory.
> In the case where we only have a single DMA window, this won't work,
> so if the window is not big enough to map the entire address range,
> we cannot direct map.

but we want the pmem range to be mapped into the huge DMA window too if 
we can, why skip it?


> 
> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
> ---
>   arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>   1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
> index 269f61d519c2..d9ae985d10a4 100644
> --- a/arch/powerpc/platforms/pseries/iommu.c
> +++ b/arch/powerpc/platforms/pseries/iommu.c
> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>   	phys_addr_t max_addr = memory_hotplug_max();
>   	struct device_node *memory;
>   
> -	/*
> -	 * The "ibm,pmemory" can appear anywhere in the address space.
> -	 * Assuming it is still backed by page structs, set the upper limit
> -	 * for the huge DMA window as MAX_PHYSMEM_BITS.
> -	 */
> -	if (of_find_node_by_type(NULL, "ibm,pmemory"))
> -		return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
> -			(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
> -
>   	for_each_node_by_type(memory, "memory") {
>   		unsigned long start, size;
>   		int n_mem_addr_cells, n_mem_size_cells, len;
> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>   	 */
>   	len = max_ram_len;
>   	if (pmem_present) {
> +		if (default_win_removed) {
> +			/*
> +			 * If we only have one DMA window and have pmem present,
> +			 * then we need to be able to map the entire address
> +			 * range in order to be able to do direct DMA to RAM.
> +			 */
> +			len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
> +					(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));
> +		}
> +
>   		if (query.largest_available_block >=
>   		    (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>   			len = MAX_PHYSMEM_BITS;
> 

-- 
Alexey


* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-22 12:24 ` Alexey Kardashevskiy
@ 2021-10-22 20:18   ` Brian King
  2021-10-23 12:18     ` Alexey Kardashevskiy
From: Brian King @ 2021-10-22 20:18 UTC (permalink / raw)
  To: Alexey Kardashevskiy, linuxppc-dev

On 10/22/21 7:24 AM, Alexey Kardashevskiy wrote:
> 
> 
> On 22/10/2021 04:44, Brian King wrote:
>> If ibm,pmemory is installed in the system, it can appear anywhere
>> in the address space. This patch enhances how we handle DMA for devices when
>> ibm,pmemory is present. In the case where we have enough DMA space to
>> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
>> I/O to RAM and use the default window to dynamically map ibm,pmemory.
>> In the case where we only have a single DMA window, this won't work,
>> so if the window is not big enough to map the entire address range,
>> we cannot direct map.
> 
> but we want the pmem range to be mapped into the huge DMA window too if we can, why skip it?

This patch should simply do what the comment in this commit mentioned below suggests, which says that
ibm,pmemory can appear anywhere in the address space. If the DMA window is large enough
to map all of MAX_PHYSMEM_BITS, we will indeed simply do direct DMA for everything,
including the pmem. If we do not have a big enough window to do that, we will do
direct DMA for DRAM and dynamic mapping for pmem.
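
Roughly, the intended policy looks like this (just a sketch with a made-up
window_covers() helper, not the literal enable_ddw() code):

	if (window_covers(1ULL << MAX_PHYSMEM_BITS)) {
		/* Window spans the whole address space: direct map RAM and pmem. */
		direct_mapping = true;
	} else if (window_covers(max_ram_len) && !default_win_removed) {
		/*
		 * Direct map RAM only; pmem keeps getting dynamically mapped
		 * through the default window, so the direct DMA limit is capped
		 * at the top of RAM.
		 */
		direct_mapping = true;
	} else {
		/* A single window that cannot cover everything: no direct mapping. */
		direct_mapping = false;
	}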


https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/platforms/pseries/iommu.c?id=bf6e2d562bbc4d115cf322b0bca57fe5bbd26f48


Thanks,

Brian


> 
> 
>>
>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>>   1 file changed, 10 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>> index 269f61d519c2..d9ae985d10a4 100644
>> --- a/arch/powerpc/platforms/pseries/iommu.c
>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>>       phys_addr_t max_addr = memory_hotplug_max();
>>       struct device_node *memory;
>>   -    /*
>> -     * The "ibm,pmemory" can appear anywhere in the address space.
>> -     * Assuming it is still backed by page structs, set the upper limit
>> -     * for the huge DMA window as MAX_PHYSMEM_BITS.
>> -     */
>> -    if (of_find_node_by_type(NULL, "ibm,pmemory"))
>> -        return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>> -            (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
>> -
>>       for_each_node_by_type(memory, "memory") {
>>           unsigned long start, size;
>>           int n_mem_addr_cells, n_mem_size_cells, len;
>> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>        */
>>       len = max_ram_len;
>>       if (pmem_present) {
>> +        if (default_win_removed) {
>> +            /*
>> +             * If we only have one DMA window and have pmem present,
>> +             * then we need to be able to map the entire address
>> +             * range in order to be able to do direct DMA to RAM.
>> +             */
>> +            len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>> +                    (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));
>> +        }
>> +
>>           if (query.largest_available_block >=
>>               (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>>               len = MAX_PHYSMEM_BITS;
>>
> 


-- 
Brian King
Power Linux I/O
IBM Linux Technology Center



* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-22 20:18   ` Brian King
@ 2021-10-23 12:18     ` Alexey Kardashevskiy
  2021-10-25 14:40       ` Brian King
From: Alexey Kardashevskiy @ 2021-10-23 12:18 UTC (permalink / raw)
  To: Brian King, linuxppc-dev



On 23/10/2021 07:18, Brian King wrote:
> On 10/22/21 7:24 AM, Alexey Kardashevskiy wrote:
>>
>>
>> On 22/10/2021 04:44, Brian King wrote:
>>> If ibm,pmemory is installed in the system, it can appear anywhere
>>> in the address space. This patch enhances how we handle DMA for devices when
>>> ibm,pmemory is present. In the case where we have enough DMA space to
>>> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
>>> I/O to RAM and use the default window to dynamically map ibm,pmemory.
>>> In the case where we only have a single DMA window, this won't work,
>>> so if the window is not big enough to map the entire address range,
>>> we cannot direct map.
>>
>> but we want the pmem range to be mapped into the huge DMA window too if we can, why skip it?
> 
> This patch should simply do what the comment in this commit mentioned below suggests, which says that
> ibm,pmemory can appear anywhere in the address space. If the DMA window is large enough
> to map all of MAX_PHYSMEM_BITS, we will indeed simply do direct DMA for everything,
> including the pmem. If we do not have a big enough window to do that, we will do
> direct DMA for DRAM and dynamic mapping for pmem.


Right, and this is what we do already, do we not? Am I missing something here?

> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/platforms/pseries/iommu.c?id=bf6e2d562bbc4d115cf322b0bca57fe5bbd26f48
> 
> 
> Thanks,
> 
> Brian
> 
> 
>>
>>
>>>
>>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>>> ---
>>>    arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>>>    1 file changed, 10 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>>> index 269f61d519c2..d9ae985d10a4 100644
>>> --- a/arch/powerpc/platforms/pseries/iommu.c
>>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>>> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>>>        phys_addr_t max_addr = memory_hotplug_max();
>>>        struct device_node *memory;
>>>    -    /*
>>> -     * The "ibm,pmemory" can appear anywhere in the address space.
>>> -     * Assuming it is still backed by page structs, set the upper limit
>>> -     * for the huge DMA window as MAX_PHYSMEM_BITS.
>>> -     */
>>> -    if (of_find_node_by_type(NULL, "ibm,pmemory"))
>>> -        return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>> -            (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
>>> -
>>>        for_each_node_by_type(memory, "memory") {
>>>            unsigned long start, size;
>>>            int n_mem_addr_cells, n_mem_size_cells, len;
>>> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>>         */
>>>        len = max_ram_len;
>>>        if (pmem_present) {
>>> +        if (default_win_removed) {
>>> +            /*
>>> +             * If we only have one DMA window and have pmem present,
>>> +             * then we need to be able to map the entire address
>>> +             * range in order to be able to do direct DMA to RAM.
>>> +             */
>>> +            len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>> +                    (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));
>>> +        }
>>> +
>>>            if (query.largest_available_block >=
>>>                (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>>>                len = MAX_PHYSMEM_BITS;
>>>
>>
> 
> 

-- 
Alexey


* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-23 12:18     ` Alexey Kardashevskiy
@ 2021-10-25 14:40       ` Brian King
  2021-10-26  5:39         ` Alexey Kardashevskiy
From: Brian King @ 2021-10-25 14:40 UTC (permalink / raw)
  To: Alexey Kardashevskiy, linuxppc-dev

On 10/23/21 7:18 AM, Alexey Kardashevskiy wrote:
> 
> 
> On 23/10/2021 07:18, Brian King wrote:
>> On 10/22/21 7:24 AM, Alexey Kardashevskiy wrote:
>>>
>>>
>>> On 22/10/2021 04:44, Brian King wrote:
>>>> If ibm,pmemory is installed in the system, it can appear anywhere
>>>> in the address space. This patch enhances how we handle DMA for devices when
>>>> ibm,pmemory is present. In the case where we have enough DMA space to
>>>> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
>>>> I/O to RAM and use the default window to dynamically map ibm,pmemory.
>>>> In the case where we only have a single DMA window, this won't work,
>>>> so if the window is not big enough to map the entire address range,
>>>> we cannot direct map.
>>>
>>> but we want the pmem range to be mapped into the huge DMA window too if we can, why skip it?
>>
>> This patch should simply do what the comment in this commit mentioned below suggests, which says that
>> ibm,pmemory can appear anywhere in the address space. If the DMA window is large enough
>> to map all of MAX_PHYSMEM_BITS, we will indeed simply do direct DMA for everything,
>> including the pmem. If we do not have a big enough window to do that, we will do
>> direct DMA for DRAM and dynamic mapping for pmem.
> 
> 
> Right, and this is what we do already, do we not? Am I missing something here?

The upstream code does not work correctly that I can see. If I boot an upstream kernel
with an nvme device and vpmem assigned to the LPAR, and enable dev_dbg in arch/powerpc/platforms/pseries/iommu.c,
I see the following in the logs:

[    2.157549] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
[    2.157561] nvme 0121:50:00.0: Skipping ibm,pmemory
[    2.157567] nvme 0121:50:00.0: can't map partition max 0x8000000000000 with 16777216 65536-sized pages
[    2.170150] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 28 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
[    2.170170] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
[    2.356260] nvme 0121:50:00.0: node is /pci@800000020000121/pci1014,683@0

This means we are heading down the leg in enable_ddw where we do not set direct_mapping to true. We do
create the DDW window, but don't do any direct DMA. This is because the window is not large enough to
map 2PB of memory, which is what ddw_memory_hotplug_max returns without my patch. 
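
To put rough numbers on the failure above (assuming MAX_PHYSMEM_BITS is 51 here,
which matches the 2PB figure): the largest available block of 16777216 TCEs of
65536 bytes each can only map 2^24 * 2^16 = 2^40 bytes (1TB), while the partition
max of 0x8000000000000 is 2^51 bytes (2PB), so the direct mapping check fails.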

With my patch applied, I get this in the logs:

[    2.204866] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
[    2.204875] nvme 0121:50:00.0: Skipping ibm,pmemory
[    2.205058] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 21 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
[    2.205068] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
[    2.215898] nvme 0121:50:00.0: iommu: 64-bit OK but direct DMA is limited by 800000200000000


Thanks,

Brian


> 
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/platforms/pseries/iommu.c?id=bf6e2d562bbc4d115cf322b0bca57fe5bbd26f48
>>
>>
>> Thanks,
>>
>> Brian
>>
>>
>>>
>>>
>>>>
>>>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>>>> ---
>>>>    arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>>>>    1 file changed, 10 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>>>> index 269f61d519c2..d9ae985d10a4 100644
>>>> --- a/arch/powerpc/platforms/pseries/iommu.c
>>>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>>>> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>>>>        phys_addr_t max_addr = memory_hotplug_max();
>>>>        struct device_node *memory;
>>>>    -    /*
>>>> -     * The "ibm,pmemory" can appear anywhere in the address space.
>>>> -     * Assuming it is still backed by page structs, set the upper limit
>>>> -     * for the huge DMA window as MAX_PHYSMEM_BITS.
>>>> -     */
>>>> -    if (of_find_node_by_type(NULL, "ibm,pmemory"))
>>>> -        return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>> -            (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
>>>> -
>>>>        for_each_node_by_type(memory, "memory") {
>>>>            unsigned long start, size;
>>>>            int n_mem_addr_cells, n_mem_size_cells, len;
>>>> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>>>         */
>>>>        len = max_ram_len;
>>>>        if (pmem_present) {
>>>> +        if (default_win_removed) {
>>>> +            /*
>>>> +             * If we only have one DMA window and have pmem present,
>>>> +             * then we need to be able to map the entire address
>>>> +             * range in order to be able to do direct DMA to RAM.
>>>> +             */
>>>> +            len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>> +                    (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));
>>>> +        }
>>>> +
>>>>            if (query.largest_available_block >=
>>>>                (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>>>>                len = MAX_PHYSMEM_BITS;
>>>>
>>>
>>
>>
> 


-- 
Brian King
Power Linux I/O
IBM Linux Technology Center



* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-25 14:40       ` Brian King
@ 2021-10-26  5:39         ` Alexey Kardashevskiy
  2021-10-27 21:30           ` Brian King
From: Alexey Kardashevskiy @ 2021-10-26  5:39 UTC (permalink / raw)
  To: Brian King, linuxppc-dev



On 10/26/21 01:40, Brian King wrote:
> On 10/23/21 7:18 AM, Alexey Kardashevskiy wrote:
>>
>>
>> On 23/10/2021 07:18, Brian King wrote:
>>> On 10/22/21 7:24 AM, Alexey Kardashevskiy wrote:
>>>>
>>>>
>>>> On 22/10/2021 04:44, Brian King wrote:
>>>>> If ibm,pmemory is installed in the system, it can appear anywhere
>>>>> in the address space. This patch enhances how we handle DMA for devices when
>>>>> ibm,pmemory is present. In the case where we have enough DMA space to
>>>>> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
>>>>> I/O to RAM and use the default window to dynamically map ibm,pmemory.
>>>>> In the case where we only have a single DMA window, this won't work,
>>>>> so if the window is not big enough to map the entire address range,
>>>>> we cannot direct map.
>>>>
>>>> but we want the pmem range to be mapped into the huge DMA window too if we can, why skip it?
>>>
>>> This patch should simply do what the comment in this commit mentioned below suggests, which says that
>>> ibm,pmemory can appear anywhere in the address space. If the DMA window is large enough
>>> to map all of MAX_PHYSMEM_BITS, we will indeed simply do direct DMA for everything,
>>> including the pmem. If we do not have a big enough window to do that, we will do
>>> direct DMA for DRAM and dynamic mapping for pmem.
>>
>>
>> Right, and this is what we do already, do we not? Am I missing something here?
> 
> The upstream code does not work correctly that I can see. If I boot an upstream kernel
> with an nvme device and vpmem assigned to the LPAR, and enable dev_dbg in arch/powerpc/platforms/pseries/iommu.c,
> I see the following in the logs:
> 
> [    2.157549] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
> [    2.157561] nvme 0121:50:00.0: Skipping ibm,pmemory
> [    2.157567] nvme 0121:50:00.0: can't map partition max 0x8000000000000 with 16777216 65536-sized pages
> [    2.170150] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 28 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
> [    2.170170] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
> [    2.356260] nvme 0121:50:00.0: node is /pci@800000020000121/pci1014,683@0
> 
>> This means we are heading down the leg in enable_ddw where we do not set direct_mapping to true. We do
> create the DDW window, but don't do any direct DMA. This is because the window is not large enough to
> map 2PB of memory, which is what ddw_memory_hotplug_max returns without my patch. 
> 
> With my patch applied, I get this in the logs:
> 
> [    2.204866] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
> [    2.204875] nvme 0121:50:00.0: Skipping ibm,pmemory
> [    2.205058] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 21 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
> [    2.205068] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
> [    2.215898] nvme 0121:50:00.0: iommu: 64-bit OK but direct DMA is limited by 800000200000000
> 


ah I see. then...


> 
> Thanks,
> 
> Brian
> 
> 
>>
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/platforms/pseries/iommu.c?id=bf6e2d562bbc4d115cf322b0bca57fe5bbd26f48
>>>
>>>
>>> Thanks,
>>>
>>> Brian
>>>
>>>
>>>>
>>>>
>>>>>
>>>>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>>>>> ---
>>>>>    arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>>>>>    1 file changed, 10 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>>>>> index 269f61d519c2..d9ae985d10a4 100644
>>>>> --- a/arch/powerpc/platforms/pseries/iommu.c
>>>>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>>>>> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>>>>>        phys_addr_t max_addr = memory_hotplug_max();
>>>>>        struct device_node *memory;
>>>>>    -    /*
>>>>> -     * The "ibm,pmemory" can appear anywhere in the address space.
>>>>> -     * Assuming it is still backed by page structs, set the upper limit
>>>>> -     * for the huge DMA window as MAX_PHYSMEM_BITS.
>>>>> -     */
>>>>> -    if (of_find_node_by_type(NULL, "ibm,pmemory"))
>>>>> -        return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>>> -            (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
>>>>> -
>>>>>        for_each_node_by_type(memory, "memory") {
>>>>>            unsigned long start, size;
>>>>>            int n_mem_addr_cells, n_mem_size_cells, len;
>>>>> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>>>>         */
>>>>>        len = max_ram_len;
>>>>>        if (pmem_present) {
>>>>> +        if (default_win_removed) {
>>>>> +            /*
>>>>> +             * If we only have one DMA window and have pmem present,
>>>>> +             * then we need to be able to map the entire address
>>>>> +             * range in order to be able to do direct DMA to RAM.
>>>>> +             */
>>>>> +            len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>>> +                    (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));


... len = (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ? 31 :
MAX_PHYSMEM_BITS  ?

Or actually simply drop this hunk and only leave the first one and add
this instead:


diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 591ec9e94edb..68bfcd2227d9 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -1518,7 +1518,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
         * as RAM, then we failed to create a window to cover persistent
         * memory and need to set the DMA limit.
         */
-       if (pmem_present && ddw_enabled && direct_mapping && len == max_ram_len)
+       if (pmem_present && ddw_enabled && direct_mapping)

?

Thanks,



>>>>> +        }
>>>>> +
>>>>>            if (query.largest_available_block >=
>>>>>                (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>>>>>                len = MAX_PHYSMEM_BITS;
>>>>>
>>>>
>>>
>>>
>>
> 
> 

-- 
Alexey


* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-26  5:39         ` Alexey Kardashevskiy
@ 2021-10-27 21:30           ` Brian King
  2021-10-29  5:57             ` Alexey Kardashevskiy
From: Brian King @ 2021-10-27 21:30 UTC (permalink / raw)
  To: Alexey Kardashevskiy, linuxppc-dev

On 10/26/21 12:39 AM, Alexey Kardashevskiy wrote:
> 
> 
> On 10/26/21 01:40, Brian King wrote:
>> On 10/23/21 7:18 AM, Alexey Kardashevskiy wrote:
>>>
>>>
>>> On 23/10/2021 07:18, Brian King wrote:
>>>> On 10/22/21 7:24 AM, Alexey Kardashevskiy wrote:
>>>>>
>>>>>
>>>>> On 22/10/2021 04:44, Brian King wrote:
>>>>>> If ibm,pmemory is installed in the system, it can appear anywhere
>>>>>> in the address space. This patch enhances how we handle DMA for devices when
>>>>>> ibm,pmemory is present. In the case where we have enough DMA space to
>>>>>> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
>>>>>> I/O to RAM and use the default window to dynamically map ibm,pmemory.
>>>>>> In the case where we only have a single DMA window, this won't work,
>>>>>> so if the window is not big enough to map the entire address range,
>>>>>> we cannot direct map.
>>>>>
>>>>> but we want the pmem range to be mapped into the huge DMA window too if we can, why skip it?
>>>>
>>>> This patch should simply do what the comment in this commit mentioned below suggests, which says that
>>>> ibm,pmemory can appear anywhere in the address space. If the DMA window is large enough
>>>> to map all of MAX_PHYSMEM_BITS, we will indeed simply do direct DMA for everything,
>>>> including the pmem. If we do not have a big enough window to do that, we will do
>>>> direct DMA for DRAM and dynamic mapping for pmem.
>>>
>>>
>>> Right, and this is what we do already, do we not? Am I missing something here?
>>
>> The upstream code does not work correctly that I can see. If I boot an upstream kernel
>> with an nvme device and vpmem assigned to the LPAR, and enable dev_dbg in arch/powerpc/platforms/pseries/iommu.c,
>> I see the following in the logs:
>>
>> [    2.157549] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
>> [    2.157561] nvme 0121:50:00.0: Skipping ibm,pmemory
>> [    2.157567] nvme 0121:50:00.0: can't map partition max 0x8000000000000 with 16777216 65536-sized pages
>> [    2.170150] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 28 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
>> [    2.170170] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
>> [    2.356260] nvme 0121:50:00.0: node is /pci@800000020000121/pci1014,683@0
>>
>>> This means we are heading down the leg in enable_ddw where we do not set direct_mapping to true. We do
>> create the DDW window, but don't do any direct DMA. This is because the window is not large enough to
>> map 2PB of memory, which is what ddw_memory_hotplug_max returns without my patch. 
>>
>> With my patch applied, I get this in the logs:
>>
>> [    2.204866] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
>> [    2.204875] nvme 0121:50:00.0: Skipping ibm,pmemory
>> [    2.205058] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 21 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
>> [    2.205068] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
>> [    2.215898] nvme 0121:50:00.0: iommu: 64-bit OK but direct DMA is limited by 800000200000000
>>
> 
> 
> ah I see. then...
> 
> 
>>
>> Thanks,
>>
>> Brian
>>
>>
>>>
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/platforms/pseries/iommu.c?id=bf6e2d562bbc4d115cf322b0bca57fe5bbd26f48
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Brian
>>>>
>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>>>>>> ---
>>>>>>    arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>>>>>>    1 file changed, 10 insertions(+), 9 deletions(-)
>>>>>>
>>>>>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>>>>>> index 269f61d519c2..d9ae985d10a4 100644
>>>>>> --- a/arch/powerpc/platforms/pseries/iommu.c
>>>>>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>>>>>> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>>>>>>        phys_addr_t max_addr = memory_hotplug_max();
>>>>>>        struct device_node *memory;
>>>>>>    -    /*
>>>>>> -     * The "ibm,pmemory" can appear anywhere in the address space.
>>>>>> -     * Assuming it is still backed by page structs, set the upper limit
>>>>>> -     * for the huge DMA window as MAX_PHYSMEM_BITS.
>>>>>> -     */
>>>>>> -    if (of_find_node_by_type(NULL, "ibm,pmemory"))
>>>>>> -        return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>>>> -            (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
>>>>>> -
>>>>>>        for_each_node_by_type(memory, "memory") {
>>>>>>            unsigned long start, size;
>>>>>>            int n_mem_addr_cells, n_mem_size_cells, len;
>>>>>> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>>>>>         */
>>>>>>        len = max_ram_len;
>>>>>>        if (pmem_present) {
>>>>>> +        if (default_win_removed) {
>>>>>> +            /*
>>>>>> +             * If we only have one DMA window and have pmem present,
>>>>>> +             * then we need to be able to map the entire address
>>>>>> +             * range in order to be able to do direct DMA to RAM.
>>>>>> +             */
>>>>>> +            len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>>>> +                    (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));
> 
> 
> ... len = (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ? 31 :
> MAX_PHYSMEM_BITS  ?
> 
> Or actually simply drop this hunk and only leave the first one and add
> this instead:
> 
> 
> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
> index 591ec9e94edb..68bfcd2227d9 100644
> --- a/arch/powerpc/platforms/pseries/iommu.c
> +++ b/arch/powerpc/platforms/pseries/iommu.c
> @@ -1518,7 +1518,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>          * as RAM, then we failed to create a window to cover persistent
>          * memory and need to set the DMA limit.
>          */
> -       if (pmem_present && ddw_enabled && direct_mapping && len == max_ram_len)
> +       if (pmem_present && ddw_enabled && direct_mapping)
> 
> ?


So, this would change the handling of devices that have a single window when pmem
is present. With your proposed change, we would then direct map for DRAM
and attempt to use whatever TCE space is left to do the dynamic mapping
when DMA'ing to the pmem, all from a single window. We don't account for this
in the code from what I can see, so we could get into the scenario where we have
a DMA window just large enough to map all of DRAM, we direct map that, and then
have nothing left over for the pmem.

I would actually like to get this working, as it would be helpful for the performance
of SR-IOV devices when pmem is present. However, I think we'd need to ensure we
have at least a certain amount of reserved DMA space for the dynamic mapping
before we do. There might be other things to consider as well...
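
Something along these lines, purely as a sketch (the win_size/ram_size names and
the MIN_DYNAMIC_DMA_SPACE threshold are made up, not existing code):

	/*
	 * Only direct map RAM out of a single window if enough TCE space
	 * would be left over for dynamically mapping pmem I/O.
	 */
	if (default_win_removed && pmem_present &&
	    win_size - ram_size < MIN_DYNAMIC_DMA_SPACE)
		direct_mapping = false;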

Should we handle that as a further enhancement in a future patch, and move forward with this
as a bug fix?

Thanks,

Brian

> 
> Thanks,
> 
> 
> 
>>>>>> +        }
>>>>>> +
>>>>>>            if (query.largest_available_block >=
>>>>>>                (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>>>>>>                len = MAX_PHYSMEM_BITS;
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
> 


-- 
Brian King
Power Linux I/O
IBM Linux Technology Center



* Re: [PATCH] powerpc: Enhance pmem DMA bypass handling
  2021-10-27 21:30           ` Brian King
@ 2021-10-29  5:57             ` Alexey Kardashevskiy
From: Alexey Kardashevskiy @ 2021-10-29  5:57 UTC (permalink / raw)
  To: Brian King, linuxppc-dev



On 28/10/2021 08:30, Brian King wrote:
> On 10/26/21 12:39 AM, Alexey Kardashevskiy wrote:
>>
>>
>> On 10/26/21 01:40, Brian King wrote:
>>> On 10/23/21 7:18 AM, Alexey Kardashevskiy wrote:
>>>>
>>>>
>>>> On 23/10/2021 07:18, Brian King wrote:
>>>>> On 10/22/21 7:24 AM, Alexey Kardashevskiy wrote:
>>>>>>
>>>>>>
>>>>>> On 22/10/2021 04:44, Brian King wrote:
>>>>>>> If ibm,pmemory is installed in the system, it can appear anywhere
>>>>>>> in the address space. This patch enhances how we handle DMA for devices when
>>>>>>> ibm,pmemory is present. In the case where we have enough DMA space to
>>>>>>> direct map all of RAM, but not ibm,pmemory, we use direct DMA for
>>>>>>> I/O to RAM and use the default window to dynamically map ibm,pmemory.
>>>>>>> In the case where we only have a single DMA window, this won't work,
>>>>>>> so if the window is not big enough to map the entire address range,
>>>>>>> we cannot direct map.
>>>>>>
>>>>>> but we want the pmem range to be mapped into the huge DMA window too if we can, why skip it?
>>>>>
>>>>> This patch should simply do what the comment in this commit mentioned below suggests, which says that
>>>>> ibm,pmemory can appear anywhere in the address space. If the DMA window is large enough
>>>>> to map all of MAX_PHYSMEM_BITS, we will indeed simply do direct DMA for everything,
>>>>> including the pmem. If we do not have a big enough window to do that, we will do
>>>>> direct DMA for DRAM and dynamic mapping for pmem.
>>>>
>>>>
>>>> Right, and this is what we do already, do we not? Am I missing something here?
>>>
>>> The upstream code does not work correctly that I can see. If I boot an upstream kernel
>>> with an nvme device and vpmem assigned to the LPAR, and enable dev_dbg in arch/powerpc/platforms/pseries/iommu.c,
>>> I see the following in the logs:
>>>
>>> [    2.157549] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
>>> [    2.157561] nvme 0121:50:00.0: Skipping ibm,pmemory
>>> [    2.157567] nvme 0121:50:00.0: can't map partition max 0x8000000000000 with 16777216 65536-sized pages
>>> [    2.170150] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 28 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
>>> [    2.170170] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
>>> [    2.356260] nvme 0121:50:00.0: node is /pci@800000020000121/pci1014,683@0
>>>
>>>> This means we are heading down the leg in enable_ddw where we do not set direct_mapping to true. We do
>>> create the DDW window, but don't do any direct DMA. This is because the window is not large enough to
>>> map 2PB of memory, which is what ddw_memory_hotplug_max returns without my patch.
>>>
>>> With my patch applied, I get this in the logs:
>>>
>>> [    2.204866] nvme 0121:50:00.0: ibm,query-pe-dma-windows(53) 500000 8000000 20000121 returned 0
>>> [    2.204875] nvme 0121:50:00.0: Skipping ibm,pmemory
>>> [    2.205058] nvme 0121:50:00.0: ibm,create-pe-dma-window(54) 500000 8000000 20000121 10 21 returned 0 (liobn = 0x70000121 starting addr = 8000000 0)
>>> [    2.205068] nvme 0121:50:00.0: created tce table LIOBN 0x70000121 for /pci@800000020000121/pci1014,683@0
>>> [    2.215898] nvme 0121:50:00.0: iommu: 64-bit OK but direct DMA is limited by 800000200000000
>>>
>>
>>
>> ah I see. then...
>>
>>
>>>
>>> Thanks,
>>>
>>> Brian
>>>
>>>
>>>>
>>>>>
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/platforms/pseries/iommu.c?id=bf6e2d562bbc4d115cf322b0bca57fe5bbd26f48
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Brian
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
>>>>>>> ---
>>>>>>>     arch/powerpc/platforms/pseries/iommu.c | 19 ++++++++++---------
>>>>>>>     1 file changed, 10 insertions(+), 9 deletions(-)
>>>>>>>
>>>>>>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>>>>>>> index 269f61d519c2..d9ae985d10a4 100644
>>>>>>> --- a/arch/powerpc/platforms/pseries/iommu.c
>>>>>>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>>>>>>> @@ -1092,15 +1092,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
>>>>>>>         phys_addr_t max_addr = memory_hotplug_max();
>>>>>>>         struct device_node *memory;
>>>>>>>     -    /*
>>>>>>> -     * The "ibm,pmemory" can appear anywhere in the address space.
>>>>>>> -     * Assuming it is still backed by page structs, set the upper limit
>>>>>>> -     * for the huge DMA window as MAX_PHYSMEM_BITS.
>>>>>>> -     */
>>>>>>> -    if (of_find_node_by_type(NULL, "ibm,pmemory"))
>>>>>>> -        return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>>>>> -            (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
>>>>>>> -
>>>>>>>         for_each_node_by_type(memory, "memory") {
>>>>>>>             unsigned long start, size;
>>>>>>>             int n_mem_addr_cells, n_mem_size_cells, len;
>>>>>>> @@ -1341,6 +1332,16 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>>>>>>          */
>>>>>>>         len = max_ram_len;
>>>>>>>         if (pmem_present) {
>>>>>>> +        if (default_win_removed) {
>>>>>>> +            /*
>>>>>>> +             * If we only have one DMA window and have pmem present,
>>>>>>> +             * then we need to be able to map the entire address
>>>>>>> +             * range in order to be able to do direct DMA to RAM.
>>>>>>> +             */
>>>>>>> +            len = order_base_2((sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
>>>>>>> +                    (phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS));


Sorry, I am still not following. If pmem is present, this new chunk will
extend @len to MAX_PHYSMEM_BITS, but the code right after this new chunk
will do the same. How does removing the default window alter any of this?

If the only window is removed and we can only have one window which does
not cover MAX_PHYSMEM_BITS, then we do not try 1:1 at all. Reverting
54fc3c681ded9437e4548e250 (which added pmem to phys_addr_t
ddw_memory_hotplug_max(void)) should be enough, no?


>>
>>
>> ... len = (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ? 31 :
>> MAX_PHYSMEM_BITS  ?
>>
>> Or actually simply drop this hunk and only leave the first one and add
>> this instead:
>>
>>
>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>> index 591ec9e94edb..68bfcd2227d9 100644
>> --- a/arch/powerpc/platforms/pseries/iommu.c
>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>> @@ -1518,7 +1518,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
>>          * as RAM, then we failed to create a window to cover persistent
>>          * memory and need to set the DMA limit.
>>          */
>> -       if (pmem_present && ddw_enabled && direct_mapping && len == max_ram_len)
>> +       if (pmem_present && ddw_enabled && direct_mapping)
>>
>> ?
> 
> 
> So, this would change the handling of devices that have a single window when pmem
> is present. 

Yeah, that was not right, never mind.

> With your proposed change, we would then direct map for DRAM
> and attempt to use whatever TCE space is left to do the dynamic mapping
> when DMA'ing to the pmem, all from a single window. We don't account for this
> in the code from what I can see, so we could get into the scenario where we have
> a DMA window just large enough to map all of DRAM, we direct map that, and then
> have nothing left over for the pmem.
> 
> I would actually like to get this working, as it would be helpful for the performance
> of SR-IOV devices when pmem is present. However, I think we'd need to ensure we
> have at least a certain amount of reserved DMA space for the dynamic mapping
> before we do. There might be other things to consider as well...
> 
> Should we handle that as a further enhancement in a future patch, and move forward with this
> as a bug fix?

I am still struggling to see what the second hunk fixes exactly. Thanks,


> 
> Thanks,
> 
> Brian
> 
>>
>> Thanks,
>>
>>
>>
>>>>>>> +        }
>>>>>>> +
>>>>>>>             if (query.largest_available_block >=
>>>>>>>                 (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
>>>>>>>                 len = MAX_PHYSMEM_BITS;
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
> 
> 

-- 
Alexey

