From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux-foundation.org, x86@kernel.org,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>, Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Robin Murphy <robin.murphy@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, linux-ia64@vger.kernel.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-hyperv@vger.kernel.org, tboot-devel@lists.sourceforge.net,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH 12/15] swiotlb: provide swiotlb_init variants that remap the buffer
Date: Tue, 15 Mar 2022 20:39:29 -0400	[thread overview]
Message-ID: <3a8cc553-4b60-b6bb-a2d8-2b33c4c1cf23@oracle.com> (raw)
In-Reply-To: <20220315063618.GA1244@lst.de>



On 3/15/22 2:36 AM, Christoph Hellwig wrote:

> @@ -271,12 +273,23 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags)
>   	 * allow to pick a location everywhere for hypervisors with guest
>   	 * memory encryption.
>   	 */
> +retry:
> +	bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT);
>   	if (flags & SWIOTLB_ANY)
>   		tlb = memblock_alloc(bytes, PAGE_SIZE);
>   	else
>   		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>   	if (!tlb)
>   		goto fail;
> +	if (remap && remap(tlb, nslabs) < 0) {
> +		memblock_free(tlb, PAGE_ALIGN(bytes));
> +
> +		if (nslabs <= IO_TLB_MIN_SLABS)
> +			panic("%s: Failed to remap %zu bytes\n",
> +			      __func__, bytes);
> +		nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE));


I spoke with Konrad (who wrote the original patch --- f4b2f07b2ed9b469ead87e06fc2fc3d12663a725), and apparently the 2MB floor was there to optimize for Xen's slab allocator; it had nothing to do with IO_TLB_MIN_SLABS. Since this is now common code we should not expose Xen-specific optimizations here, and smaller values will still work, so IO_TLB_MIN_SLABS is fine.

I think this should be mentioned in the commit message though, probably best in the next patch, where you switch to this code.

As for the hunk above, I don't think we need the max() here: with IO_TLB_MIN_SLABS being 512, max(1024UL, ...) keeps nslabs from ever dropping to IO_TLB_MIN_SLABS, so the panic() is unreachable and we may get stuck in an infinite loop if remap() keeps failing. Something like

	nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
	if (nslabs <= IO_TLB_MIN_SLABS)
		panic()

should be sufficient.


> +		goto retry;
> +	}
>   	if (swiotlb_init_with_tbl(tlb, default_nslabs, flags))
>   		goto fail_free_mem;
>   	return;
> @@ -287,12 +300,18 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags)
>   	pr_warn("Cannot allocate buffer");
>   }
>   
> +void __init swiotlb_init(bool addressing_limit, unsigned int flags)
> +{
> +	return swiotlb_init_remap(addressing_limit, flags, NULL);
> +}
> +
>   /*
>    * Systems with larger DMA zones (those that don't support ISA) can
>    * initialize the swiotlb later using the slab allocator if needed.
>    * This should be just like above, but with some error catching.
>    */
> -int swiotlb_init_late(size_t size, gfp_t gfp_mask)
> +int swiotlb_init_late(size_t size, gfp_t gfp_mask,
> +		int (*remap)(void *tlb, unsigned long nslabs))
>   {
>   	unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
>   	unsigned long bytes;
> @@ -303,6 +322,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask)
>   	if (swiotlb_force_disable)
>   		return 0;
>   
> +retry:
>   	order = get_order(nslabs << IO_TLB_SHIFT);
>   	nslabs = SLABS_PER_PAGE << order;
>   	bytes = nslabs << IO_TLB_SHIFT;
> @@ -317,6 +337,16 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask)
>   
>   	if (!vstart)
>   		return -ENOMEM;
> +	if (remap)
> +		rc = remap(vstart, nslabs);
> +	if (rc) {
> +		free_pages((unsigned long)vstart, order);
> +
> +		if (IO_TLB_MIN_SLABS <= 1024)
> +			return rc;
> +		nslabs = max(1024UL, ALIGN(nslabs >> 1, IO_TLB_SEGSIZE));


Same here. (The 'if' check above is wrong anyway: it compares the constant IO_TLB_MIN_SLABS against 1024 instead of testing nslabs.)

Patches 13 and 14 look good.


-boris



> +		goto retry;
> +	}
>   
>   	if (order != get_order(bytes)) {
>   		pr_warn("only able to allocate %ld MB\n",
