From: Michael Kelley <mikelley@microsoft.com>
To: Tianyu Lan <ltykernel@gmail.com>, KY Srinivasan <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger <sthemmin@microsoft.com>, "wei.liu@kernel.org" <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>, "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "x86@kernel.org" <x86@kernel.org>, "hpa@zytor.com" <hpa@zytor.com>, "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>, "luto@kernel.org" <luto@kernel.org>, "peterz@infradead.org" <peterz@infradead.org>, "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>, "jgross@suse.com" <jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>, "joro@8bytes.org" <joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "davem@davemloft.net" <davem@davemloft.net>, "kuba@kernel.org" <kuba@kernel.org>, "jejb@linux.ibm.com" <jejb@linux.ibm.com>, "martin.petersen@oracle.com" <martin.petersen@oracle.com>, "arnd@arndb.de" <arnd@arndb.de>, "hch@lst.de" <hch@lst.de>, "m.szyprowski@samsung.com" <m.szyprowski@samsung.com>, "robin.murphy@arm.com" <robin.murphy@arm.com>, "thomas.lendacky@amd.com" <thomas.lendacky@amd.com>, "brijesh.singh@amd.com" <brijesh.singh@amd.com>, "ardb@kernel.org" <ardb@kernel.org>, Tianyu Lan <Tianyu.Lan@microsoft.com>, "pgonda@google.com" <pgonda@google.com>, "martin.b.radev@gmail.com" <martin.b.radev@gmail.com>, "akpm@linux-foundation.org" <akpm@linux-foundation.org>, "kirill.shutemov@linux.intel.com" <kirill.shutemov@linux.intel.com>, "rppt@kernel.org" <rppt@kernel.org>, "sfr@canb.auug.org.au" <sfr@canb.auug.org.au>, "saravanand@fb.com" <saravanand@fb.com>, "krish.sadhukhan@oracle.com" <krish.sadhukhan@oracle.com>, "aneesh.kumar@linux.ibm.com" <aneesh.kumar@linux.ibm.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "rientjes@google.com" <rientjes@google.com>, "hannes@cmpxchg.org" <hannes@cmpxchg.org>, "tj@kernel.org" <tj@kernel.org>
Cc: "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>, "linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>, "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>, "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>, "netdev@vger.kernel.org" <netdev@vger.kernel.org>, vkuznets <vkuznets@redhat.com>, "parri.andrea@gmail.com" <parri.andrea@gmail.com>, "dave.hansen@intel.com" <dave.hansen@intel.com>
Subject: RE: [PATCH V3 08/13] HV/Vmbus: Initialize VMbus ring buffer for Isolation VM
Date: Mon, 16 Aug 2021 17:28:30 +0000
Message-ID: <MWHPR21MB1593FFD7F3402753751F433CD7FD9@MWHPR21MB1593.namprd21.prod.outlook.com>
In-Reply-To: <20210809175620.720923-9-ltykernel@gmail.com>

From: Tianyu Lan <ltykernel@gmail.com> Sent: Monday, August 9, 2021 10:56 AM
>
> VMbus ring buffer are shared with host and it's need to

s/it's need/it needs/

> be accessed via extra address space of Isolation VM with
> SNP support. This patch is to map the ring buffer
> address in extra address space via ioremap(). HV host

It's actually using vmap_pfn(), not ioremap().

> visibility hvcall smears data in the ring buffer and
> so reset the ring buffer memory to zero after calling
> visibility hvcall.
>
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
>  drivers/hv/Kconfig        |  1 +
>  drivers/hv/channel.c      | 10 +++++
>  drivers/hv/hyperv_vmbus.h |  2 +
>  drivers/hv/ring_buffer.c  | 84 ++++++++++++++++++++++++++++++---------
>  4 files changed, 79 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
> index d1123ceb38f3..dd12af20e467 100644
> --- a/drivers/hv/Kconfig
> +++ b/drivers/hv/Kconfig
> @@ -8,6 +8,7 @@ config HYPERV
>  		|| (ARM64 && !CPU_BIG_ENDIAN))
>  	select PARAVIRT
>  	select X86_HV_CALLBACK_VECTOR if X86
> +	select VMAP_PFN
>  	help
>  	  Select this option to run Linux as a Hyper-V client operating
>  	  system.
> diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
> index 4c4717c26240..60ef881a700c 100644
> --- a/drivers/hv/channel.c
> +++ b/drivers/hv/channel.c
> @@ -712,6 +712,16 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
>  	if (err)
>  		goto error_clean_ring;
>
> +	err = hv_ringbuffer_post_init(&newchannel->outbound,
> +				      page, send_pages);
> +	if (err)
> +		goto error_free_gpadl;
> +
> +	err = hv_ringbuffer_post_init(&newchannel->inbound,
> +				      &page[send_pages], recv_pages);
> +	if (err)
> +		goto error_free_gpadl;
> +
>  	/* Create and init the channel open message */
>  	open_info = kzalloc(sizeof(*open_info) +
>  			    sizeof(struct vmbus_channel_open_channel),
> diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
> index 40bc0eff6665..15cd23a561f3 100644
> --- a/drivers/hv/hyperv_vmbus.h
> +++ b/drivers/hv/hyperv_vmbus.h
> @@ -172,6 +172,8 @@ extern int hv_synic_cleanup(unsigned int cpu);
>  /* Interface */
>
>  void hv_ringbuffer_pre_init(struct vmbus_channel *channel);
> +int hv_ringbuffer_post_init(struct hv_ring_buffer_info *ring_info,
> +		struct page *pages, u32 page_cnt);
>
>  int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
>  		       struct page *pages, u32 pagecnt, u32 max_pkt_size);
> diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
> index 2aee356840a2..d4f93fca1108 100644
> --- a/drivers/hv/ring_buffer.c
> +++ b/drivers/hv/ring_buffer.c
> @@ -17,6 +17,8 @@
>  #include <linux/vmalloc.h>
>  #include <linux/slab.h>
>  #include <linux/prefetch.h>
> +#include <linux/io.h>
> +#include <asm/mshyperv.h>
>
>  #include "hyperv_vmbus.h"
>
> @@ -179,43 +181,89 @@ void hv_ringbuffer_pre_init(struct vmbus_channel *channel)
>  	mutex_init(&channel->outbound.ring_buffer_mutex);
>  }
>
> -/* Initialize the ring buffer. */
> -int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
> -		       struct page *pages, u32 page_cnt, u32 max_pkt_size)
> +int hv_ringbuffer_post_init(struct hv_ring_buffer_info *ring_info,
> +		struct page *pages, u32 page_cnt)
>  {
> +	u64 physic_addr = page_to_pfn(pages) << PAGE_SHIFT;
> +	unsigned long *pfns_wraparound;
> +	void *vaddr;
>  	int i;
> -	struct page **pages_wraparound;
>
> -	BUILD_BUG_ON((sizeof(struct hv_ring_buffer) != PAGE_SIZE));
> +	if (!hv_isolation_type_snp())
> +		return 0;
> +
> +	physic_addr += ms_hyperv.shared_gpa_boundary;
>
>  	/*
>  	 * First page holds struct hv_ring_buffer, do wraparound mapping for
>  	 * the rest.
>  	 */
> -	pages_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(struct page *),
> +	pfns_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(unsigned long),
>  				   GFP_KERNEL);
> -	if (!pages_wraparound)
> +	if (!pfns_wraparound)
>  		return -ENOMEM;
>
> -	pages_wraparound[0] = pages;
> +	pfns_wraparound[0] = physic_addr >> PAGE_SHIFT;
>  	for (i = 0; i < 2 * (page_cnt - 1); i++)
> -		pages_wraparound[i + 1] = &pages[i % (page_cnt - 1) + 1];
> -
> -	ring_info->ring_buffer = (struct hv_ring_buffer *)
> -		vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, PAGE_KERNEL);
> -
> -	kfree(pages_wraparound);
> +		pfns_wraparound[i + 1] = (physic_addr >> PAGE_SHIFT) +
> +			i % (page_cnt - 1) + 1;
>
> -
> -	if (!ring_info->ring_buffer)
> +	vaddr = vmap_pfn(pfns_wraparound, page_cnt * 2 - 1, PAGE_KERNEL_IO);
> +	kfree(pfns_wraparound);
> +	if (!vaddr)
>  		return -ENOMEM;
>
> -	ring_info->ring_buffer->read_index =
> -		ring_info->ring_buffer->write_index = 0;
> +	/* Clean memory after setting host visibility. */
> +	memset((void *)vaddr, 0x00, page_cnt * PAGE_SIZE);
> +
> +	ring_info->ring_buffer = (struct hv_ring_buffer *)vaddr;
> +	ring_info->ring_buffer->read_index = 0;
> +	ring_info->ring_buffer->write_index = 0;
>
>  	/* Set the feature bit for enabling flow control. */
>  	ring_info->ring_buffer->feature_bits.value = 1;
>
> +	return 0;
> +}
> +
> +/* Initialize the ring buffer. */
> +int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
> +		       struct page *pages, u32 page_cnt, u32 max_pkt_size)
> +{
> +	int i;
> +	struct page **pages_wraparound;
> +
> +	BUILD_BUG_ON((sizeof(struct hv_ring_buffer) != PAGE_SIZE));
> +
> +	if (!hv_isolation_type_snp()) {
> +		/*
> +		 * First page holds struct hv_ring_buffer, do wraparound mapping for
> +		 * the rest.
> +		 */
> +		pages_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(struct page *),
> +					   GFP_KERNEL);
> +		if (!pages_wraparound)
> +			return -ENOMEM;
> +
> +		pages_wraparound[0] = pages;
> +		for (i = 0; i < 2 * (page_cnt - 1); i++)
> +			pages_wraparound[i + 1] = &pages[i % (page_cnt - 1) + 1];
> +
> +		ring_info->ring_buffer = (struct hv_ring_buffer *)
> +			vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, PAGE_KERNEL);
> +
> +		kfree(pages_wraparound);
> +
> +		if (!ring_info->ring_buffer)
> +			return -ENOMEM;
> +
> +		ring_info->ring_buffer->read_index =
> +			ring_info->ring_buffer->write_index = 0;
> +
> +		/* Set the feature bit for enabling flow control. */
> +		ring_info->ring_buffer->feature_bits.value = 1;
> +	}
> +
>  	ring_info->ring_size = page_cnt << PAGE_SHIFT;
>  	ring_info->ring_size_div10_reciprocal =
>  		reciprocal_value(ring_info->ring_size / 10);
> --
> 2.25.1

This patch does the following:

1)  The existing ring buffer wrap-around mapping functionality is still
executed in hv_ringbuffer_init() when not doing SNP isolation. This
mapping is based on an array of struct page's that describe the
contiguous physical memory.

2)  New ring buffer wrap-around mapping functionality is added in
hv_ringbuffer_post_init() for the SNP isolation case. The case is
handled in hv_ringbuffer_post_init() because it must be done after the
GPADL is established, since that's where the host visibility is set.

What's interesting is that this case is exactly the same as #1 above,
except that the mapping is based on physical memory addresses instead
of struct page's. We have to use physical addresses because of applying
the GPA boundary, and there are no struct page's for those physical
addresses. Unfortunately, this duplicates a lot of logic in #1 and #2,
except for the struct page vs. physical address difference.

Proposal: Couldn't we always do #2, even for the normal case where SNP
isolation is not being used? The difference would only be in whether
the GPA boundary is added.
And it looks like the normal case could be done after the GPADL is
established, as setting up the GPADL doesn't have any dependencies on
having the ring buffer mapped. This approach would remove a lot of
duplication. Just move the calls to hv_ringbuffer_init() to after the
GPADL is established, and do all the work there for both cases.

Michael