From: Jingyi Wang <wangjingyi11@huawei.com>
To: Marc Zyngier <maz@kernel.org>
Cc: <kvm@vger.kernel.org>, <kvmarm@lists.cs.columbia.edu>,
	<linux-arm-kernel@lists.infradead.org>, <will@kernel.org>,
	<catalin.marinas@arm.com>, <james.morse@arm.com>,
	<julien.thierry.kdev@gmail.com>, <suzuki.poulose@arm.com>,
	<wanghaibin.wang@huawei.com>, <yezengruan@huawei.com>,
	<shameerali.kolothum.thodi@huawei.com>, <fanhenglong@huawei.com>,
	<prime.zeng@hisilicon.com>
Subject: Re: [RFC PATCH 0/4] Add support for ARMv8.6 TWED feature
Date: Thu, 26 Nov 2020 10:31:03 +0800	[thread overview]
Message-ID: <b084262b-5563-2d80-3065-cf563d978ea3@huawei.com> (raw)
In-Reply-To: <10463cb9a0ae167a89300185c1de348c@kernel.org>

Hi Marc,

I will consider more scenarios in later tests. Thanks for the
advice.

Thanks,
Jingyi


On 11/24/2020 7:02 PM, Marc Zyngier wrote:
> On 2020-11-13 07:54, Jingyi Wang wrote:
>> Hi all,
>>
>> Sorry for the delay. I have been testing the TWED feature's performance
>> lately. We chose UnixBench as the benchmark because some of its items are
>> lock-intensive (fstime/fsbuffer/fsdisk). We ran UnixBench on a 4-vCPU VM,
>> binding every two vCPUs to one pCPU. A fixed TWED value was used, and
>> here are the results.
> 
> How representative is this?
> 
> TBH, I only know of two real world configurations: one where
> the vCPUs are pinned to different physical CPUs (and in this
> case your patch has absolutely no effect as long as there are
> no concurrent tasks), and one where there is oversubscription,
> and the scheduler moves things around as it sees fit, depending
> on the load.
> 
> Having two vCPUs pinned per CPU feels like a test that has been
> picked to give the result you wanted. I'd like to see the full
> picture, including the case that matters for current use cases.
> I'm especially interested in the cases where the system is
> oversubscribed, because TWED is definitely going to screw with
> the scheduler latency.
> 
> Thanks,
> 
>          M.
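
For context, here is a minimal sketch (not taken from the patch series) of how
a fixed TWED value maps onto HCR_EL2, assuming the FEAT_TWED layout described
in the Arm ARM: TWEDEn at bit 59, TWEDEL at bits [63:60], and a minimum delay
of 2^(TWEDEL + 8) cycles before a trapped WFE actually takes the trap. The
helper name and the standalone program form are illustrative only.

/*
 * Illustrative sketch: encode a fixed TWED value into an HCR_EL2 image
 * and report the minimum WFE trap delay. Field positions assume the
 * FEAT_TWED description in the Arm ARM:
 *   HCR_EL2.TWEDEn = bit 59, HCR_EL2.TWEDEL = bits [63:60],
 *   minimum delay  = 2^(TWEDEL + 8) processor cycles.
 */
#include <inttypes.h>
#include <stdio.h>

#define HCR_TWEDEN       (UINT64_C(1) << 59)    /* enable the TWE delay   */
#define HCR_TWEDEL_SHIFT 60                     /* 4-bit delay exponent   */
#define HCR_TWEDEL_MASK  (UINT64_C(0xf) << HCR_TWEDEL_SHIFT)

/* Fold a fixed TWED value (0..15) into an HCR_EL2 image. */
static uint64_t hcr_set_twed(uint64_t hcr, unsigned int twedel)
{
	hcr &= ~HCR_TWEDEL_MASK;
	hcr |= ((uint64_t)twedel << HCR_TWEDEL_SHIFT) & HCR_TWEDEL_MASK;
	return hcr | HCR_TWEDEN;
}

int main(void)
{
	unsigned int twedel = 3;                /* example fixed value     */
	uint64_t hcr = hcr_set_twed(0, twedel);
	uint64_t cycles = UINT64_C(1) << (twedel + 8);

	printf("HCR_EL2 image: %#018" PRIx64 ", minimum WFE delay: %" PRIu64 " cycles\n",
	       hcr, cycles);
	return 0;
}

At 2 GHz, TWEDEL = 3 gives 2^(3+8) = 2048 cycles, roughly one microsecond
during which an oversubscribed pCPU keeps spinning in the guest instead of
trapping to the host scheduler, which is the latency cost raised above.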

Thread overview: 33+ messages

2020-09-29  9:17 [RFC PATCH 0/4] Add support for ARMv8.6 TWED feature Jingyi Wang
2020-09-29  9:17 ` [RFC PATCH 1/4] arm64: cpufeature: TWED support detection Jingyi Wang
2020-09-29  9:17 ` [RFC PATCH 2/4] KVM: arm64: Make use of TWED feature Jingyi Wang
2020-09-29  9:17 ` [RFC PATCH 3/4] KVM: arm64: Use dynamic TWE Delay value Jingyi Wang
2020-09-29  9:17 ` [RFC PATCH 4/4] KVM: arm64: Add trace for TWED update Jingyi Wang
2020-09-29 10:50 ` [RFC PATCH 0/4] Add support for ARMv8.6 TWED feature Marc Zyngier
2020-09-30  1:21   ` Jingyi Wang
2020-11-13  7:54 ` Jingyi Wang
2020-11-24  3:19   ` Jingyi Wang
2020-11-24 11:02   ` Marc Zyngier
2020-11-26  2:31     ` Jingyi Wang [this message]
