From: Shivaprasad G Bhat <email@example.com>
To: David Gibson <firstname.lastname@example.org>
Cc: Greg Kurz <email@example.com>,
firstname.lastname@example.org, email@example.com, firstname.lastname@example.org,
Subject: Re: [RFC Qemu PATCH v2 1/2] spapr: drc: Add support for async hcalls at the drc level
Date: Tue, 23 Mar 2021 13:23:31 +0530 [thread overview]
Message-ID: <email@example.com> (raw)
Sorry about the delay.
On 2/8/21 11:51 AM, David Gibson wrote:
> On Tue, Jan 19, 2021 at 12:40:31PM +0530, Shivaprasad G Bhat wrote:
>> Thanks for the comments!
>> On 12/28/20 2:08 PM, David Gibson wrote:
>>> On Mon, Dec 21, 2020 at 01:08:53PM +0100, Greg Kurz wrote:
>>>> The overall idea looks good but I think you should consider using
>>>> a thread pool to implement it. See below.
>>> I am not convinced, however. Specifically, attaching this to the DRC
>>> doesn't make sense to me. We're adding exactly one DRC related async
>>> hcall, and I can't really see much call for another one. We could
>>> have other async hcalls - indeed we already have one for HPT resizing
>>> - but attaching this to DRCs doesn't help for those.
>> The semantics of the hcall made me wonder whether it would be
>> re-usable in the future if implemented at the DRC level.
> It would only be re-usable for operations that are actually connected
> to DRCs. It doesn't seem to me particularly likely that we'll ever
> have more asynchronous hcalls that are also associated with DRCs.
>> The other option is to move the async-hcall-state/list into the
>> NVDIMMState structure in include/hw/mem/nvdimm.h and handle it with
>> machine->nvdimms_state at a global level.
> I'm ok with either of two options:
> A) Implement this ad-hoc for this specific case, making whatever
> simplifications you can based on this specific case.
I am simplifying it to the nvdimm use-case alone and limiting the scope.
> B) Implement a general mechanism for async hcalls that is *not* tied
> to DRCs. Then use that for the existing H_RESIZE_HPT_PREPARE call as
> well as this new one.
>> Hope you are okay with using the pool based approach that Greg
> Honestly a thread pool seems like it might be overkill for this
I think it's appropriate here, as that is what virtio-pmem does for its
flush requests too. The aio infrastructure simplifies a lot of the
thread handling. Please suggest if you think there are better ways.
I am sending the next version addressing all the comments from you and Greg.
Thread overview: 11+ messages
2020-11-30 15:16 [RFC Qemu PATCH v2 0/2] spapr: nvdimm: Asynchronus flush hcall support Shivaprasad G Bhat
2020-11-30 15:16 ` [RFC Qemu PATCH v2 1/2] spapr: drc: Add support for async hcalls at the drc level Shivaprasad G Bhat
2020-12-21 12:08 ` Greg Kurz
2020-12-21 14:37 ` Greg Kurz
2020-12-28 8:38 ` David Gibson
2021-01-19 7:10 ` Shivaprasad G Bhat
2021-02-08 6:21 ` David Gibson
2021-03-23 7:53 ` Shivaprasad G Bhat [this message]
2020-11-30 15:17 ` [RFC Qemu PATCH v2 2/2] spapr: nvdimm: Implement async flush hcalls Shivaprasad G Bhat
2020-12-21 13:07 ` Greg Kurz
2020-12-21 13:07 ` [RFC Qemu PATCH v2 0/2] spapr: nvdimm: Asynchronus flush hcall support Greg Kurz