* [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed
@ 2022-10-27  5:26 Alexey Kardashevskiy
  2022-10-27 16:46 ` Thadeu Lima de Souza Cascardo
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Alexey Kardashevskiy @ 2022-10-27  5:26 UTC (permalink / raw)
  To: kvm
  Cc: Alexey Kardashevskiy, linux-kernel, iommu, Ashish Kalra,
	Pankaj Gupta, Tom Lendacky, Robin Murphy, Marek Szyprowski,
	Christoph Hellwig

At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB,
or 1GB, whichever is less. However, it is possible that there is no block
big enough in low memory, which makes the SWIOTLB allocation fail and
leaves the kernel without a bounce buffer. In such a case the VM hangs
on DMA.

This halves the size and tries again, reusing the existing remapping
retry logic.

This also updates default_nslabs on a successful allocation: not
updating it looks like an oversight, since a stale value would break
callers of swiotlb_size_or_default().

Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
--

I hit the problem with QEMU's "-m 16819M", where SWIOTLB was adjusted to
0x7e200 slabs == 1,058,013,184 bytes (slightly less than 1GB), while
0x7e180 slabs still worked.

With guest errors enabled, there are many unassigned accesses from
virtio.

---
 kernel/dma/swiotlb.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 339a990554e7..d28c294320fd 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	else
 		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
 	if (!tlb) {
-		pr_warn("%s: failed to allocate tlb structure\n", __func__);
-		return;
-	}
-
-	if (remap && remap(tlb, nslabs) < 0) {
+		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
+			__func__, bytes);
+	} else if (remap && remap(tlb, nslabs) < 0) {
 		memblock_free(tlb, PAGE_ALIGN(bytes));
+		tlb = NULL;
+		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
+	}
 
+	if (!tlb) {
 		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
 		if (nslabs >= IO_TLB_MIN_SLABS)
 			goto retry;
-
-		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
 		return;
 	}
 
+	if (default_nslabs != nslabs) {
+		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
+			default_nslabs, nslabs);
+		default_nslabs = nslabs;
+	}
+
 	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
 	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
 	if (!mem->slots) {
-- 
2.37.3


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed
  2022-10-27  5:26 [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed Alexey Kardashevskiy
@ 2022-10-27 16:46 ` Thadeu Lima de Souza Cascardo
  2022-10-28  1:14   ` Alexey Kardashevskiy
  2022-10-28 14:13 ` Tom Lendacky
  2022-10-30  8:05 ` Christoph Hellwig
  2 siblings, 1 reply; 5+ messages in thread
From: Thadeu Lima de Souza Cascardo @ 2022-10-27 16:46 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: kvm, linux-kernel, iommu, Ashish Kalra, Pankaj Gupta,
	Tom Lendacky, Robin Murphy, Marek Szyprowski, Christoph Hellwig

On Thu, Oct 27, 2022 at 04:26:07PM +1100, Alexey Kardashevskiy wrote:
> At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB
> or 1GB, whichever is less. However it is possible that there is no block
> big enough in the low memory which make SWIOTLB allocation fail and
> the kernel continues without DMA. In such case a VM hangs on DMA.
> 
> This divides the size in half and tries again reusing the existing
> remapping logic.
> 
> This updates default_nslabs on successful allocation which looks like
> an oversight as not doing so should have broken callers of
> swiotlb_size_or_default().
> 
> Signed-off-by: Alexey Kardashevskiy <aik@amd.com>

Should this have a
Fixes: e998879d4fb7 ("x86,swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests")
?

Cascardo.

> --
> 
> I hit the problem with
> QEMU's "-m 16819M" where SWIOTLB was adjusted to
> 0x7e200 == 1,058,013,184 (slightly less than 1GB) while
> 0x7e180 still worked.
> 
> With guest errors enabled, there are many unassigned accesses from
> virtio.
> 
> ---
>  kernel/dma/swiotlb.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 339a990554e7..d28c294320fd 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>  	else
>  		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>  	if (!tlb) {
> -		pr_warn("%s: failed to allocate tlb structure\n", __func__);
> -		return;
> -	}
> -
> -	if (remap && remap(tlb, nslabs) < 0) {
> +		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
> +			__func__, bytes);
> +	} else if (remap && remap(tlb, nslabs) < 0) {
>  		memblock_free(tlb, PAGE_ALIGN(bytes));
> +		tlb = NULL;
> +		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
> +	}
>  
> +	if (!tlb) {
>  		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
>  		if (nslabs >= IO_TLB_MIN_SLABS)
>  			goto retry;
> -
> -		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
>  		return;
>  	}
>  
> +	if (default_nslabs != nslabs) {
> +		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
> +			default_nslabs, nslabs);
> +		default_nslabs = nslabs;
> +	}
> +
>  	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
>  	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
>  	if (!mem->slots) {
> -- 
> 2.37.3
> 


* Re: [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed
  2022-10-27 16:46 ` Thadeu Lima de Souza Cascardo
@ 2022-10-28  1:14   ` Alexey Kardashevskiy
  0 siblings, 0 replies; 5+ messages in thread
From: Alexey Kardashevskiy @ 2022-10-28  1:14 UTC (permalink / raw)
  To: Thadeu Lima de Souza Cascardo
  Cc: kvm, linux-kernel, iommu, Ashish Kalra, Pankaj Gupta,
	Tom Lendacky, Robin Murphy, Marek Szyprowski, Christoph Hellwig



On 28/10/22 03:46, Thadeu Lima de Souza Cascardo wrote:
> On Thu, Oct 27, 2022 at 04:26:07PM +1100, Alexey Kardashevskiy wrote:
>> At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB
>> or 1GB, whichever is less. However it is possible that there is no block
>> big enough in the low memory which make SWIOTLB allocation fail and
>> the kernel continues without DMA. In such case a VM hangs on DMA.
>>
>> This divides the size in half and tries again reusing the existing
>> remapping logic.
>>
>> This updates default_nslabs on successful allocation which looks like
>> an oversight as not doing so should have broken callers of
>> swiotlb_size_or_default().
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
> 
> Should this have a
> Fixes: e998879d4fb7 ("x86,swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests")
> ?


Well, the problem was there before that patch: the allocation failure
was not handled, while a remap failure was. e998879d4fb7 just made it
easier to see. But it is still worth mentioning, I guess. Thanks,


> 
> Cascardo.
> 
>> --
>>
>> I hit the problem with
>> QEMU's "-m 16819M" where SWIOTLB was adjusted to
>> 0x7e200 == 1,058,013,184 (slightly less than 1GB) while
>> 0x7e180 still worked.
>>
>> With guest errors enabled, there are many unassigned accesses from
>> virtio.
>>
>> ---
>>   kernel/dma/swiotlb.c | 20 +++++++++++++-------
>>   1 file changed, 13 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>> index 339a990554e7..d28c294320fd 100644
>> --- a/kernel/dma/swiotlb.c
>> +++ b/kernel/dma/swiotlb.c
>> @@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>>   	else
>>   		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>>   	if (!tlb) {
>> -		pr_warn("%s: failed to allocate tlb structure\n", __func__);
>> -		return;
>> -	}
>> -
>> -	if (remap && remap(tlb, nslabs) < 0) {
>> +		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
>> +			__func__, bytes);
>> +	} else if (remap && remap(tlb, nslabs) < 0) {
>>   		memblock_free(tlb, PAGE_ALIGN(bytes));
>> +		tlb = NULL;
>> +		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
>> +	}
>>   
>> +	if (!tlb) {
>>   		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
>>   		if (nslabs >= IO_TLB_MIN_SLABS)
>>   			goto retry;
>> -
>> -		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
>>   		return;
>>   	}
>>   
>> +	if (default_nslabs != nslabs) {
>> +		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
>> +			default_nslabs, nslabs);
>> +		default_nslabs = nslabs;
>> +	}
>> +
>>   	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
>>   	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
>>   	if (!mem->slots) {
>> -- 
>> 2.37.3
>>

-- 
Alexey


* Re: [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed
  2022-10-27  5:26 [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed Alexey Kardashevskiy
  2022-10-27 16:46 ` Thadeu Lima de Souza Cascardo
@ 2022-10-28 14:13 ` Tom Lendacky
  2022-10-30  8:05 ` Christoph Hellwig
  2 siblings, 0 replies; 5+ messages in thread
From: Tom Lendacky @ 2022-10-28 14:13 UTC (permalink / raw)
  To: Alexey Kardashevskiy, kvm
  Cc: linux-kernel, iommu, Ashish Kalra, Pankaj Gupta, Robin Murphy,
	Marek Szyprowski, Christoph Hellwig

On 10/27/22 00:26, Alexey Kardashevskiy wrote:
> At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB
> or 1GB, whichever is less. However it is possible that there is no block
> big enough in the low memory which make SWIOTLB allocation fail and
> the kernel continues without DMA. In such case a VM hangs on DMA.
> 
> This divides the size in half and tries again reusing the existing
> remapping logic.
> 
> This updates default_nslabs on successful allocation which looks like
> an oversight as not doing so should have broken callers of
> swiotlb_size_or_default().
> 
> Signed-off-by: Alexey Kardashevskiy <aik@amd.com>

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> --
> 
> I hit the problem with
> QEMU's "-m 16819M" where SWIOTLB was adjusted to
> 0x7e200 == 1,058,013,184 (slightly less than 1GB) while
> 0x7e180 still worked.
> 
> With guest errors enabled, there are many unassigned accesses from
> virtio.
> 
> ---
>   kernel/dma/swiotlb.c | 20 +++++++++++++-------
>   1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 339a990554e7..d28c294320fd 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -338,21 +338,27 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>   	else
>   		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>   	if (!tlb) {
> -		pr_warn("%s: failed to allocate tlb structure\n", __func__);
> -		return;
> -	}
> -
> -	if (remap && remap(tlb, nslabs) < 0) {
> +		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
> +			__func__, bytes);
> +	} else if (remap && remap(tlb, nslabs) < 0) {
>   		memblock_free(tlb, PAGE_ALIGN(bytes));
> +		tlb = NULL;
> +		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
> +	}
>   
> +	if (!tlb) {
>   		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
>   		if (nslabs >= IO_TLB_MIN_SLABS)
>   			goto retry;
> -
> -		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
>   		return;
>   	}
>   
> +	if (default_nslabs != nslabs) {
> +		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
> +			default_nslabs, nslabs);
> +		default_nslabs = nslabs;
> +	}
> +
>   	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
>   	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
>   	if (!mem->slots) {


* Re: [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed
  2022-10-27  5:26 [PATCH kernel] x86/swiotlb/amd: Half the size if allocation failed Alexey Kardashevskiy
  2022-10-27 16:46 ` Thadeu Lima de Souza Cascardo
  2022-10-28 14:13 ` Tom Lendacky
@ 2022-10-30  8:05 ` Christoph Hellwig
  2 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2022-10-30  8:05 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: kvm, linux-kernel, iommu, Ashish Kalra, Pankaj Gupta,
	Tom Lendacky, Robin Murphy, Marek Szyprowski, Christoph Hellwig

The subject looks wrong - this only touches the swiotlb code, nothing
x86- or AMD-specific.

The code flow now looks a little confusing.  Can you just add a little
swiotlb_memblock_alloc helper that does the two variants of memblock
allocation and the remap, so that we can keep a simple loop around them
in the caller?

