From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ashish Kalra <ashish.kalra@amd.com>
Cc: hch@lst.de, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org, luto@kernel.org, peterz@infradead.org, dave.hansen@linux-intel.com, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, brijesh.singh@amd.com, Thomas.Lendacky@amd.com, ssg.sos.patches@amd.com, jon.grimm@amd.com, rientjes@google.com
Subject: Re: [PATCH v3] swiotlb: Adjust SWIOTBL bounce buffer size for SEV guests.
Date: Tue, 17 Nov 2020 12:00:03 -0500
Message-ID: <20201117170003.GF10360@char.us.oracle.com>
In-Reply-To: <20201117153302.GA29293@ashkalra_ubuntu_server>

..snip..
> > > > Let's break this down:
> > > >
> > > > How does the performance improve for one single device if you increase the SWIOTLB?
> > > > Is there a specific device/driver that you can talk about that improves with this patch?
> > > >
> > >
> > > Yes, these are mainly for multi-queue devices such as NICs or even
> > > multi-queue virtio.
> > >
> > > This basically improves performance with concurrent DMA, hence,
> > > basically multi-queue devices.
> >
> > OK, and for a _1GB_ guest - how many CPUs do the "internal
> > teams/external customers" use? Please let's use real use-cases.
>
> >> I am sure you will understand we cannot share any external customer
> >> data, as all that customer information is proprietary.
> >>
> >> In a similar situation, if you had to share Oracle data, you would
> >> surely have the same concerns, and I don't think you would be able
> >> to share any such information externally, i.e., outside Oracle.
> >>
> > I am asking a simple question - how many CPUs does a 1GB
> > guest have? The reason for this should be fairly obvious - if
> > it is 1 vCPU, then there is no multi-queue and the existing
> > SWIOTLB pool size is OK as it is.
> >
> > If however there are say 2 and multiqueue is enabled, that
> > gives me an idea of how many you use, and I can find out what
> > the maximum pool size usage of virtio is with that configuration.
>
> Again, we cannot share any customer data.
>
> Also, I don't think there can be a definitive answer to how many vCPUs a
> 1GB guest will have; it will depend on what kind of configuration we are
> testing.
>
> For example, I usually set up 4-16 vCPUs for as low as 512M of configured
> guest memory.

Sure, but you are not the normal user.

That is, I don't like that for 1GB guests your patch ends up doubling
the SWIOTLB memory pool. It seems to me we are trying to solve a
problem that normal users will not hit. That is why I want 'here is the
customer bug'.

Here is what I am going to do - I will take the 1GB and 4GB cases out
of your patch and call it a day. If there are customers who start
reporting issues, we can revisit that. Nothing wrong with a
'Reported-by: XYZ' (we often ask the customer if he or she would like
to be recognized on upstream bugs).

And in the meantime I am going to look at adding ..

> I have also been testing a 16 vCPU configuration for 512M-1G guest
> memory with Mellanox SR-IOV NICs, and this is a multi-queue NIC
> device environment.

.. late SWIOTLB expansion to stitch the DMA pools together, as both
Mellanox and VirtIO are not 32-bit DMA bound.

> So we might have less configured guest memory, but we still might
> be using that configuration with I/O-intensive workloads.
>
> Thanks,
> Ashish