From: Shivaprasad G Bhat <sbhat@linux.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: Greg Kurz <groug@kaod.org>,
	xiaoguangrong.eric@gmail.com, mst@redhat.com,
	imammedo@redhat.com, qemu-devel@nongnu.org, qemu-ppc@nongnu.org,
	linux-nvdimm@lists.01.org, aneesh.kumar@linux.ibm.com,
	kvm-ppc@vger.kernel.org, shivaprasadbhat@gmail.com,
	bharata@linux.vnet.ibm.com, linuxppc-dev@lists.ozlabs.org
Subject: Re: [RFC Qemu PATCH v2 1/2] spapr: drc: Add support for async hcalls at the drc level
Date: Tue, 23 Mar 2021 13:23:31 +0530	[thread overview]
Message-ID: <0c3591b8-e2bd-3a7a-112f-e410bca4434f@linux.ibm.com> (raw)
In-Reply-To: <20210208062122.GA40668@yekko.fritz.box>

Hi David,

Sorry about the delay.

On 2/8/21 11:51 AM, David Gibson wrote:
> On Tue, Jan 19, 2021 at 12:40:31PM +0530, Shivaprasad G Bhat wrote:
>> Thanks for the comments!
>>
>>
>> On 12/28/20 2:08 PM, David Gibson wrote:
>>
>>> On Mon, Dec 21, 2020 at 01:08:53PM +0100, Greg Kurz wrote:
>> ...
>>>> The overall idea looks good but I think you should consider using
>>>> a thread pool to implement it. See below.
>>> I am not convinced, however.  Specifically, attaching this to the DRC
>>> doesn't make sense to me.  We're adding exactly one DRC related async
>>> hcall, and I can't really see much call for another one.  We could
>>> have other async hcalls - indeed we already have one for HPT resizing
>>> - but attaching this to DRCs doesn't help for those.
>> The semantics of the hcall made me think about whether this would be
>> reusable in the future if implemented at the DRC level.
> It would only be re-usable for operations that are actually connected
> to DRCs.  It doesn't seem to me particularly likely that we'll ever
> have more asynchronous hcalls that are also associated with DRCs.
Okay

>> The other option
>> is to move the async-hcall-state/list into the NVDIMMState structure
>> in include/hw/mem/nvdimm.h and handle it with machine->nvdimms_state
>> at a global level.
> I'm ok with either of two options:
>
> A) Implement this ad-hoc for this specific case, making whatever
> simplifications you can based on this specific case.

I am simplifying it to the nvdimm use case alone and limiting the scope.
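
Something along these lines is what I have in mind. This is a rough
sketch only; SpaprNVDIMMFlushState and the list fields are hypothetical
names for illustration, not the actual patch:

/*
 * Sketch: keep the async-flush state with the global NVDIMM state
 * (include/hw/mem/nvdimm.h) instead of hanging it off the DRC.
 */
typedef struct SpaprNVDIMMFlushState {
    uint64_t continue_token;   /* token handed back to the guest       */
    int64_t hcall_ret;         /* H_SUCCESS/H_HARDWARE once completed  */
    int backend_fd;            /* fd of the backend file to fsync      */
    QLIST_ENTRY(SpaprNVDIMMFlushState) node;
} SpaprNVDIMMFlushState;

/* fields added to the NVDIMMState structure: */
QLIST_HEAD(, SpaprNVDIMMFlushState) pending_flush_states;
QLIST_HEAD(, SpaprNVDIMMFlushState) completed_flush_states;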


> B) Implement a general mechanism for async hcalls that is *not* tied
> to DRCs.  Then use that for the existing H_RESIZE_HPT_PREPARE call as
> well as this new one.
>
>> Hope you are okay with using the pool-based approach that Greg suggested.
> Honestly a thread pool seems like it might be overkill for this
> application.

I think it's appropriate here, as that is what virtio-pmem does for its
flush requests too. The AIO infrastructure simplifies a lot of the
thread handling. Please suggest if you think there are better ways.
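
For reference, the virtio-pmem-style pattern I mean looks roughly like
the following. It is a sketch only -- the spapr_* names and the state
struct are the hypothetical ones from above -- but
thread_pool_submit_aio() and aio_get_thread_pool() are the existing
QEMU thread-pool API that hw/virtio/virtio-pmem.c uses for its flush
requests:

#include "qemu/osdep.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "hw/boards.h"
#include "hw/mem/nvdimm.h"
#include "hw/ppc/spapr.h"

/* Runs in a worker thread: flush the backing file to stable storage. */
static int flush_worker_cb(void *opaque)
{
    SpaprNVDIMMFlushState *state = opaque;   /* struct sketched above */

    return fsync(state->backend_fd) == 0 ? H_SUCCESS : H_HARDWARE;
}

/* Runs back in the main loop once the worker has finished. */
static void spapr_nvdimm_flush_completion_cb(void *opaque, int hcall_ret)
{
    SpaprNVDIMMFlushState *state = opaque;
    NVDIMMState *s = MACHINE(qdev_get_machine())->nvdimms_state;

    state->hcall_ret = hcall_ret;
    /*
     * Move the request from the pending list to the completed list so
     * a later status hcall can report the result to the guest.
     */
    QLIST_REMOVE(state, node);
    QLIST_INSERT_HEAD(&s->completed_flush_states, state, node);
}

static void spapr_nvdimm_submit_flush(SpaprNVDIMMFlushState *state)
{
    NVDIMMState *s = MACHINE(qdev_get_machine())->nvdimms_state;
    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());

    QLIST_INSERT_HEAD(&s->pending_flush_states, state, node);
    thread_pool_submit_aio(pool, flush_worker_cb, state,
                           spapr_nvdimm_flush_completion_cb, state);
}

The fsync() runs off the vCPU/main-loop threads, and the completion
callback runs back in the main loop, so the list manipulation there
needs no extra locking.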


I am sending the next version addressing all the comments from you and Greg.


Thanks,

Shivaprasad