From: Ashish Kalra <ashish.kalra@amd.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: hch@lst.de, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
hpa@zytor.com, x86@kernel.org, luto@kernel.org,
peterz@infradead.org, dave.hansen@linux-intel.com,
iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
brijesh.singh@amd.com, Thomas.Lendacky@amd.com,
ssg.sos.patches@amd.com, jon.grimm@amd.com, rientjes@google.com
Subject: Re: [PATCH v3] swiotlb: Adjust SWIOTBL bounce buffer size for SEV guests.
Date: Tue, 17 Nov 2020 17:38:29 +0000 [thread overview]
Message-ID: <20201117173829.GA29387@ashkalra_ubuntu_server> (raw)
In-Reply-To: <20201117170003.GF10360@char.us.oracle.com>
Hello Konrad,
On Tue, Nov 17, 2020 at 12:00:03PM -0500, Konrad Rzeszutek Wilk wrote:
> .snip..
> > > > > Lets break this down:
> > > > >
> > > > > How does the performance improve for one single device if you increase the SWIOTLB?
> > > > > Is there a specific device/driver that you can talk about that improve with this patch?
> > > > >
> > > > >
> > > >
> > > > Yes, these are mainly for multi-queue devices such as NICs or even
> > > > multi-queue virtio.
> > > >
> > > > This basically improves performance with concurrent DMA, hence,
> > > > basically multi-queue devices.
> > >
> > > OK, and for a _1GB_ guest - what amount of CPUs do the "internal
> > > teams/external customers" use? Please let's use real use-cases.
> >
> > >> I am sure you will understand we cannot share any external customer
> > >> data as all that customer information is proprietary.
> > >>
> > >> In a similar situation, if you have to share Oracle data, you will
> > >> surely have the same concerns, and I don't think you will be able
> > >> to share any such information externally, i.e., outside Oracle.
> > >>
> > >I am asking a simple question - what amount of CPUs does a 1GB
> > >guest have? The reason for this should be fairly obvious - if
> > >it is 1 vCPU, then there is no multi-queue and the existing
> > >SWIOTLB pool size is OK as it is.
> > >
> > >If, however, there are say 2 and multiqueue is enabled, that
> > >gives me an idea of how many you use, and I can find out what
> > >the maximum pool size usage of virtio is with that configuration.
> >
> > Again we cannot share any customer data.
> >
> > Also, I don't think there can be a definitive answer to how many vCPUs a
> > 1GB guest will have; it will depend on what kind of configuration we are
> > testing.
> >
> > For example, I usually set up 4-16 vCPUs for as low as 512M of configured
> > guest memory.
>
> Sure, but you are not the normal user.
>
> That is, I don't like that for 1GB guests your patch ends up doubling the
> SWIOTLB memory pool. It seems to me we are trying to solve a problem
> that normal users will not hit. That is why I want 'here is the customer
> bug'.
>
> Here is what I am going to do - I will take the 1GB and 4GB cases out of
> your patch and call it a day. If there are customers who start reporting issues,
> we can revisit that. Nothing wrong with 'Reported-by: XYZ' (we often ask the
> customer if he or she would like to be recognized on upstream bugs).
>
Ok.
> And in the meantime I am going to look into adding ..
> >
> > I have also been testing with a 16 vCPU configuration for 512M-1G guest
> > memory with Mellanox SR-IOV NICs, and this will be a multi-queue NIC
> > device environment.
>
> .. late SWIOTLB expansion to stitch the DMA pools together, as both
> Mellanox and VirtIO are not 32-bit DMA bound.
>
> >
> > So we might have less configured guest memory, but we might still
> > be using that configuration with I/O-intensive workloads.
> >
I am going to submit v4 of my current patch-set, which uses max() instead
of clamp() and also replaces the constants defined in this patch with the
pre-defined ones in sizes.h.
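
As a rough sketch of the sizing logic (the function name and the scaling
factor below are placeholders for illustration only, not the final patch),
the idea is to derive the bounce buffer size from guest memory using max()
and the SZ_* helpers from <linux/sizes.h>:

	#include <linux/init.h>
	#include <linux/minmax.h>
	#include <linux/sizes.h>

	/*
	 * Illustrative sketch: scale the SEV SWIOTLB bounce buffer with the
	 * amount of guest memory, but never drop below the 64MB default pool.
	 */
	static unsigned long __init sev_swiotlb_size(unsigned long guest_mem_bytes)
	{
		/* Placeholder scaling factor: 1/16th of guest memory. */
		unsigned long sz = guest_mem_bytes / 16;

		return max(sz, (unsigned long)SZ_64M);
	}
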
Thanks,
Ashish
Thread overview: 17+ messages
2020-11-04 22:08 [PATCH v3] swiotlb: Adjust SWIOTBL bounce buffer size for SEV guests Ashish Kalra
2020-11-04 22:14 ` Konrad Rzeszutek Wilk
2020-11-04 22:39 ` Ashish Kalra
2020-11-05 17:43 ` Konrad Rzeszutek Wilk
2020-11-05 18:41 ` Ashish Kalra
2020-11-05 19:06 ` Konrad Rzeszutek Wilk
2020-11-05 19:38 ` Ashish Kalra
2020-11-05 20:20 ` Konrad Rzeszutek Wilk
2020-11-05 21:20 ` Ashish Kalra
2020-11-13 21:19 ` Konrad Rzeszutek Wilk
2020-11-13 22:10 ` Ashish Kalra
2020-11-17 15:33 ` Ashish Kalra
2020-11-17 17:00 ` Konrad Rzeszutek Wilk
2020-11-17 17:38 ` Ashish Kalra [this message]
[not found] ` <7EAA7A38-50B7-4291-9A4E-34668455B59D@amd.com>
2020-11-17 20:31 ` Konrad Rzeszutek Wilk
2020-11-06 18:24 ` Christoph Hellwig
2020-11-23 15:31 ` Guilherme Piccoli