From: Ben Widawsky <ben.widawsky@intel.com>
To: "Alex Bennée" <alex.bennee@linaro.org>
Cc: "Shreyas Shah" <shreyas.shah@elastics.cloud>,
	"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
	"Saransh Gupta1" <saransh@ibm.com>,
	"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
	qemu-devel@nongnu.org, "Peter Maydell" <peter.maydell@linaro.org>,
	"Philippe Mathieu-Daudé" <f4bug@amsat.org>
Subject: Re: Follow-up on the CXL discussion at OFTC
Date: Mon, 29 Nov 2021 09:16:31 -0800
Message-ID: <20211129171631.ytixckw2gz3rya25@intel.com>
In-Reply-To: <8735njf6f7.fsf@linaro.org>

On 21-11-26 12:08:08, Alex Bennée wrote:
> 
> Ben Widawsky <ben.widawsky@intel.com> writes:
> 
> > On 21-11-19 02:29:51, Shreyas Shah wrote:
> >> Hi Ben
> >> 
> >> Are you planning to add the CXL2.0 switch inside QEMU or already added in one of the version? 
> >>  
> >
> > From me, there are no plans for QEMU anything until/unless upstream thinks it
> > will merge the existing patches, or provide feedback as to what it would take to
> > get them merged. If upstream doesn't see a point in these patches, then I really
> > don't see much value in continuing to further them. Once hardware comes out, the
> > value proposition is certainly less.
> 
> I take it:
> 
>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
>   Date: Mon,  1 Feb 2021 16:59:17 -0800
>   Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>
> 
> is the current state of the support? I saw there was a fair amount of
> discussion on the thread so assumed there would be a v4 forthcoming at
> some point.

Hi Alex,

There is a v4; however, we never really had a solid plan for the primary issue,
which was how to properly handle CXL memory expander devices (both from an
interleaving standpoint and for devices that host multiple memory capacities,
persistent and volatile). I didn't feel it was worth sending a v4 unless
someone could say:
1. we will merge what's there and fix it later, or
2. you must have a more complete emulation in place first, or
3. we want to see real guest usage

I had hoped we could merge what was there mostly as-is and fix it up as we go.
It's useful in its current state, and as time goes on we're finding more use
cases for it in a VMM, not just for driver development.
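
To give a sense of the interleaving problem: every host physical address in an
interleaved range has to be remapped to a device physical address by factoring
out which device owns each granule. A rough sketch of the math (a
simplification of the spec's decode, assuming power-of-two ways; not code from
the series):

    /* Map an HPA inside an interleaved range to a DPA on one device.
     * Chunks of 'granularity' bytes are dealt round-robin across 'ways'
     * devices, so each device sees every ways-th chunk. Illustrative only. */
    static uint64_t hpa_to_dpa(uint64_t hpa, uint64_t range_base,
                               unsigned ways, unsigned granularity)
    {
        uint64_t offset = hpa - range_base;
        uint64_t chunk  = offset / granularity;  /* global chunk index */
        uint64_t within = offset % granularity;  /* offset inside the chunk */

        /* chunk / ways is the chunk index local to this device. */
        return (chunk / ways) * granularity + within;
    }

Doing that efficiently on guest memory accesses, while also modeling devices
that expose both volatile and persistent capacity, is where we never settled
on an approach.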

> 
> Adding new subsystems to QEMU does seem to be a pain point for new
> contributors. Patches tend to fall through the cracks of existing
> maintainers who spend most of their time looking at stuff that directly
> touches their files. There is also a reluctance to merge large chunks of
> functionality without an identified maintainer (and maybe reviewers) who
> can be the contact point for new patches. So in short you need:
> 
>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems

This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
hw/mem are the two) in the past, but I think their interest is lacking (and
reasonably so; it is an entirely different subsystem).

>  - Reviewed-by tags on the new sub-system patches from anyone who understands CXL

I have/had those from Jonathan.

>  - Some* in-tree testing (so it doesn't quietly bitrot)

We had this, but it's stale now. We can bring this back up.

>  - A patch adding the sub-system to MAINTAINERS with identified people

That was there too, and going beyond the original posting, I'd also be happy
to sign Jonathan up for it, if he's willing.
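
For reference, the entry would be along these lines (the file paths follow the
layout of the RFC series and are illustrative, not final):

    CXL (Compute Express Link)
    M: Ben Widawsky <ben.widawsky@intel.com>
    R: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    S: Supported
    F: hw/cxl/
    F: include/hw/cxl/
    F: tests/qtest/cxl-test.c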

> 
> * Some means at least ensuring qtest can instantiate the device and not
>   fall over. Obviously more testing is better but it can always be
>   expanded on in later series.

This was in the patch series. It could use more testing for sure, but I had
basic functional testing in place via qtest.
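
The basic shape of that test is small; a minimal sketch, assuming option and
device names that have shifted between revisions of the series:

    /* Smoke test: can QEMU instantiate a CXL host bridge (and a CXL-aware
     * PCIe expander bridge) without falling over? Names are illustrative. */
    #include "qemu/osdep.h"
    #include "libqtest-single.h"

    static void cxl_basic_hb(void)
    {
        qtest_start("-machine q35,cxl=on");
        qtest_end();
    }

    static void cxl_pxb(void)
    {
        qtest_start("-machine q35,cxl=on "
                    "-device pxb-cxl,bus=pcie.0,id=cxl.0,bus_nr=52");
        qtest_end();
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        qtest_add_func("/cxl/basic-hostbridge", cxl_basic_hb);
        qtest_add_func("/cxl/pxb", cxl_pxb);
        return g_test_run();
    }

Anything past instantiation (MMIO pokes, mailbox commands) can be layered on
in later series, as you say.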

> 
> Is that the feedback you were looking for?

You validated my assumptions as to what's needed, but your first bullet is the
one I can't seem to pin down.

Thanks.
Ben

