From: Scott Wood <scottwood@freescale.com>
To: Purcareata Bogdan-B43198 <bogdan.purcareata@freescale.com>
Cc: "kvm-ppc@vger.kernel.org" <kvm-ppc@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Caraman Mihai Claudiu-B02008 <mihai.caraman@freescale.com>,
	Tudor Laurentiu-B10716 <Laurentiu.Tudor@freescale.com>
Subject: Re: [PATCH] KVM: PPC: Convert openpic lock to raw_spinlock
Date: Fri, 12 Sep 2014 12:50:17 -0500
Message-ID: <1410544217.24184.397.camel@snotra.buserror.net>
In-Reply-To: <337352d340114a34a32f71445b496a74@BY2PR03MB189.namprd03.prod.outlook.com>

On Fri, 2014-09-12 at 09:12 -0500, Purcareata Bogdan-B43198 wrote:
> > -----Original Message-----
> > From: Wood Scott-B07421
> > Sent: Thursday, September 11, 2014 9:19 PM
> > To: Purcareata Bogdan-B43198
> > Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org
> > Subject: Re: [PATCH] KVM: PPC: Convert openpic lock to raw_spinlock
> > 
> > On Thu, 2014-09-11 at 15:25 -0400, Bogdan Purcareata wrote:
> > > This patch enables running intensive I/O workloads, e.g. netperf, in a guest
> > > deployed on an RT host. No change for !RT kernels.
> > >
> > > The openpic spinlock becomes a sleeping mutex on an RT system. This no longer
> > > guarantees that EPR is atomic with exception delivery. The guest VCPU thread
> > > fails due to a BUG_ON(preemptible()) when running netperf.
> > >
> > > In order to make the kvmppc_mpic_set_epr() call safe on RT from non-atomic
> > > context, convert the openpic lock to a raw_spinlock. A similar approach can
> > > be seen for x86 platforms in the following commit [1].
> > >
> > > Here are some comparative cyclictest measurements run inside a high
> > > priority RT guest run on an RT host. The guest has 1 VCPU and the test
> > > has been run for 15 minutes. The guest runs ~750 hackbench processes as
> > > background stress.
> > 
> > Does hackbench involve triggering interrupts that would go through the
> > MPIC?  You may want to try an I/O-heavy benchmark to stress the MPIC
> > code (the more interrupt sources are active at once, the "better").
> 
> Before this patch, running netperf/iperf in the guest always resulted
> in hitting the aforementioned BUG_ON when the host was RT. This is
> why I can't provide comparative cyclictest measurements before and after
> the patch with heavy I/O stress. Since I had no problem running
> hackbench before, I'm assuming it doesn't involve interrupts passing
> through the MPIC. The measurements were posted just to show that the
> patch doesn't break anything elsewhere.

I know you can't provide before/after, but it would be nice to see what
the after numbers are with heavy MPIC activity.
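
For reference, the conversion itself is mechanical: the lock's type and
each of its lock/unlock sites move to the raw_ variants, which remain
spinning locks on PREEMPT_RT instead of being turned into sleeping
rt_mutexes. A minimal sketch of the shape of the change (the struct
layout and the accessor name below are illustrative, not the literal
arch/powerpc/kvm/mpic.c code):

	struct openpic {
		struct kvm *kvm;
		/* ... */
		raw_spinlock_t lock;	/* was: spinlock_t lock; */
	};

	void kvmppc_mpic_set_epr(struct kvm_vcpu *vcpu)
	{
		/* accessor name is made up for the sketch */
		struct openpic *opp = vcpu_to_openpic(vcpu);
		unsigned long flags;

		/* was: spin_lock_irqsave(&opp->lock, flags); */
		raw_spin_lock_irqsave(&opp->lock, flags);

		/* ... update the EPR atomically with exception delivery ... */

		raw_spin_unlock_irqrestore(&opp->lock, flags);
	}

On !RT kernels spinlock_t is a thin wrapper around raw_spinlock_t, so
the change compiles to the same code there; that is what the commit
message means by "no change for !RT kernels".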

> > Also try a guest with many vcpus.
> 
> AFAIK, without the MSI affinity patches [1], all vfio interrupts will
> go to core 0 in the guest. In this case, I guess there won't be
> contention-induced latencies due to multiple VCPUs expecting to have
> their interrupts delivered. Am I getting it wrong?

It's not about contention, but about loops in the MPIC code that iterate
over the entire set of vcpus.
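
The shape is roughly the following (a sketch, not the literal mpic.c
code; the loop bound and mask names are illustrative). Interrupt
delivery walks every potential destination while the lock is held, and
with a raw lock that critical section is non-preemptible even on RT, so
its length grows with the number of vcpus:

	unsigned long flags;
	int i;

	raw_spin_lock_irqsave(&opp->lock, flags);
	for (i = 0; i < opp->nb_cpus; i++) {
		/* one pass per possible destination (v)cpu */
		if (!(dst_mask & (1ULL << i)))
			continue;
		/* raise or lower the IRQ for destination i ... */
	}
	raw_spin_unlock_irqrestore(&opp->lock, flags);

That's why cyclictest numbers from a single-vcpu guest say little about
the worst case.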

-Scott
