* Re: Follow-up on the CXL discussion at OFTC
       [not found] <OF255704A1.78FEF164-ON0025878E.00821084-0025878F.00015560@ibm.com>
@ 2021-11-17 16:57 ` Ben Widawsky
  2021-11-17 17:32     ` Jonathan Cameron
  0 siblings, 1 reply; 35+ messages in thread
From: Ben Widawsky @ 2021-11-17 16:57 UTC (permalink / raw)
  To: Saransh Gupta1, linux-cxl

Hi Saransh. Please add the list for these kinds of questions. I've converted your
HTML mail, but going forward the list will drop HTML, so please use plain text only.

On 21-11-16 00:14:33, Saransh Gupta1 wrote:
>    Hi Ben,
> 
>    This is Saransh from IBM. Sorry to have (unintentionally) dropped out
>    of the conversation on OFTC; I'm new to IRC.
>    Just wanted to follow up on the discussion there. We discussed helping
>    with Linux patch reviews. On that front, I have identified some
>    colleague(s) who can help me with this. Let me know if/how you
>    want to proceed with that.

Currently the ball is in my court to re-roll the RFC v2 patches [1] based on
feedback from Dan. I've implemented all/most of it, but I'm still debugging some
issues with the result.

> 
>    Maybe not urgently, but my team would also like to get an understanding
>    of the missing pieces in QEMU. Initially our focus is on type3 memory
>    access and hotplug support. Most of the work that my team does is
>    open-source, so contributing to the QEMU effort is another possible
>    line of collaboration.

If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
could use a lot of love. Mostly, I have little motivation to work on them until
upstream shows an interest, because I don't currently have time to make sure I
don't break things vs. upstream. If you want more details here, I can provide
them and Cc the qemu-devel mailing list; the end of the LPC talk [2] has a list
of the missing pieces.

> 
>    Thanks for your help and guidance!
> 
>    Best,
>    Saransh Gupta
>    Research Staff Member, IBM Research

[1]: https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widawsky@intel.com/T/#t
[2]: https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49


* Re: Follow-up on the CXL discussion at OFTC
  2021-11-17 16:57 ` Follow-up on the CXL discussion at OFTC Ben Widawsky
@ 2021-11-17 17:32     ` Jonathan Cameron
  0 siblings, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2021-11-17 17:32 UTC (permalink / raw)
  To: Ben Widawsky; +Cc: Saransh Gupta1, linux-cxl, qemu-devel

On Wed, 17 Nov 2021 08:57:19 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

[snip]

> If you haven't seen it already, check out my LPC talk [2]. The QEMU patches
> could use a lot of love. Mostly, I have little motivation to work on them until
> upstream shows an interest, because I don't currently have time to make sure I
> don't break things vs. upstream. If you want more details here, I can provide
> them and Cc the qemu-devel mailing list; the end of the LPC talk [2] has a list
> of the missing pieces.

Hi Ben, Saransh

I have a lightly tested forward port of the series + DOE etc. to near-current
QEMU, and can look to push that out publicly later this week.

I'd also like to push the QEMU support forward, start getting it upstream in
QEMU, and fill in some of the missing parts.

I was aiming to make progress on this a few weeks ago, but as ever other stuff
got in the way.

+CC qemu-devel in case anyone else is also looking at this.

Jonathan



* RE: Follow-up on the CXL discussion at OFTC
  2021-11-17 17:32     ` Jonathan Cameron
@ 2021-11-18 22:20       ` Saransh Gupta1
  -1 siblings, 0 replies; 35+ messages in thread
From: Saransh Gupta1 @ 2021-11-18 22:20 UTC (permalink / raw)
  To: Jonathan Cameron, Ben Widawsky; +Cc: linux-cxl, qemu-devel

Hi Ben and Jonathan,

Thanks for your replies. I'm looking forward to the patches.

For QEMU, I see hotplug support as an item on the list and would like to
start working on it. It would be great if you could provide some pointers
on how I should go about it.
Also, which versions of the kernel and QEMU (maybe Jonathan's upcoming tree)
would be a good starting point?

Thanks,
Saransh



From:   "Jonathan Cameron" <Jonathan.Cameron@Huawei.com>
To:     "Ben Widawsky" <ben.widawsky@intel.com>
Cc:     "Saransh Gupta1" <saransh@ibm.com>, <linux-cxl@vger.kernel.org>,
        <qemu-devel@nongnu.org>
Date:   11/17/2021 09:32 AM
Subject:        [EXTERNAL] Re: Follow-up on the CXL discussion at OFTC



[snip]

* RE: Follow-up on the CXL discussion at OFTC
  2021-11-18 22:20       ` Saransh Gupta1
@ 2021-11-18 22:52         ` Shreyas Shah via
  -1 siblings, 0 replies; 35+ messages in thread
From: Shreyas Shah @ 2021-11-18 22:52 UTC (permalink / raw)
  To: Saransh Gupta1, Jonathan Cameron, Ben Widawsky; +Cc: linux-cxl, qemu-devel

Hello Folks,

Any plan to add CXL 2.0 switch ports in QEMU?

Regards,
Shreyas

-----Original Message-----
From: Saransh Gupta1 <saransh@ibm.com> 
Sent: Thursday, November 18, 2021 2:21 PM
To: Jonathan Cameron <Jonathan.Cameron@Huawei.com>; Ben Widawsky <ben.widawsky@intel.com>
Cc: linux-cxl@vger.kernel.org; qemu-devel@nongnu.org
Subject: RE: Follow-up on the CXL discussion at OFTC

[snip]

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-18 22:52         ` Shreyas Shah via
@ 2021-11-19  1:48           ` Ben Widawsky
  -1 siblings, 0 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-11-19  1:48 UTC (permalink / raw)
  To: Shreyas Shah; +Cc: Saransh Gupta1, Jonathan Cameron, linux-cxl, qemu-devel

On 21-11-18 22:52:56, Shreyas Shah wrote:
> Hello Folks,
> 
> Any plan to add CXL 2.0 switch ports in QEMU?

What's your definition of plan?

> 
> Regards,
> Shreyas

[snip]


* Re: Follow-up on the CXL discussion at OFTC
  2021-11-18 22:20       ` Saransh Gupta1
@ 2021-11-19  1:52         ` Ben Widawsky
  -1 siblings, 0 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-11-19  1:52 UTC (permalink / raw)
  To: Saransh Gupta1; +Cc: Jonathan Cameron, linux-cxl, qemu-devel

On 21-11-18 15:20:34, Saransh Gupta1 wrote:
> Hi Ben and Jonathan,
> 
> Thanks for your replies. I'm looking forward to the patches.
> 
> For QEMU, I see hotplug support as an item on the list and would like to
> start working on it. It would be great if you could provide some pointers
> on how I should go about it.

It's been a while, so I can't recall what's actually missing. I think it should
mostly behave like a normal PCIe endpoint.
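
To sketch that (object_add/device_add are standard QEMU monitor commands, but
the cxl-type3 device and its properties here are only assumptions based on the
emulation patches, not a tested hotplug recipe):

  (qemu) object_add memory-backend-file,id=cxl-mem1,mem-path=/tmp/cxl1.raw,size=256M,share=on
  (qemu) device_add cxl-type3,bus=root_port0,memdev=cxl-mem1,id=cxl-pmem1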

> Also, which versions of the kernel and QEMU (maybe Jonathan's upcoming tree)
> would be a good starting point?

If he rebased and claims it works I have no reason to doubt it :-). I have a
small fix on my v4 branch if you want to use the latest port patches.

> 
> Thanks,
> Saransh

* RE: Follow-up on the CXL discussion at OFTC
  2021-11-19  1:48           ` Ben Widawsky
@ 2021-11-19  2:29             ` Shreyas Shah via
  -1 siblings, 0 replies; 35+ messages in thread
From: Shreyas Shah @ 2021-11-19  2:29 UTC (permalink / raw)
  To: Ben Widawsky; +Cc: Saransh Gupta1, Jonathan Cameron, linux-cxl, qemu-devel

Hi Ben

Are you planning to add the CXL 2.0 switch inside QEMU, or has it already been
added in one of the versions?

Regards,
Shreyas

-----Original Message-----
From: Ben Widawsky <ben.widawsky@intel.com> 
Sent: Thursday, November 18, 2021 5:48 PM
To: Shreyas Shah <shreyas.shah@elastics.cloud>
Cc: Saransh Gupta1 <saransh@ibm.com>; Jonathan Cameron <Jonathan.Cameron@huawei.com>; linux-cxl@vger.kernel.org; qemu-devel@nongnu.org
Subject: Re: Follow-up on the CXL discussion at OFTC

On 21-11-18 22:52:56, Shreyas Shah wrote:
> Hello Folks,
> 
> Any plan to add CXL 2.0 switch ports in QEMU?

What's your definition of plan?

> 
> Regards,
> Shreyas

[snip]


* Re: Follow-up on the CXL discussion at OFTC
  2021-11-19  2:29             ` Shreyas Shah via
@ 2021-11-19  3:25               ` Ben Widawsky
  -1 siblings, 0 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-11-19  3:25 UTC (permalink / raw)
  To: Shreyas Shah; +Cc: Saransh Gupta1, Jonathan Cameron, linux-cxl, qemu-devel

On 21-11-19 02:29:51, Shreyas Shah wrote:
> Hi Ben
> 
> Are you planning to add the CXL 2.0 switch inside QEMU, or has it already
> been added in one of the versions?

From me, there are no plans for anything QEMU until/unless upstream indicates
it will merge the existing patches, or provides feedback as to what it would
take to get them merged. If upstream doesn't see a point in these patches, then
I really don't see much value in continuing to develop them. Once hardware
comes out, the value proposition is certainly less.

Having said that, once I get the port/region patches merged for the Linux
driver, I do intend to go back and try to implement a basic switch so that we
can test those flows.
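
Topology-wise, a basic switch would sit between a root port and the type 3
device. As a sketch only (the cxl-upstream/cxl-downstream device names and
properties below are hypothetical; nothing of that shape exists in the
patches yet):

  -device cxl-rp,id=rp0,bus=cxl.0,port=0,chassis=0,slot=0 \
  -device cxl-upstream,id=us0,bus=rp0 \
  -device cxl-downstream,id=ds0,bus=us0,port=0,chassis=0,slot=1 \
  -device cxl-type3,bus=ds0,memdev=cxl-mem0,id=cxl-pmem0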

I admit, I'm curious why you're interested in switches.

> Regards,
> Shreyas
> 
> [snip]


* Re: Follow-up on the CXL discussion at OFTC
  2021-11-19  1:52         ` Ben Widawsky
@ 2021-11-19 18:53           ` Jonathan Cameron
  -1 siblings, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2021-11-19 18:53 UTC (permalink / raw)
  To: Ben Widawsky; +Cc: Saransh Gupta1, linux-cxl, qemu-devel

On Thu, 18 Nov 2021 17:52:07 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-11-18 15:20:34, Saransh Gupta1 wrote:
> > Hi Ben and Jonathan,
> > 
> > Thanks for your replies. I'm looking forward to the patches.
> > 
> > For QEMU, I see hotplug support as an item on the list and would like to
> > start working on it. It would be great if you could provide some pointers
> > on how I should go about it.
> 
> It's been a while, so I can't recall what's actually missing. I think it should
> mostly behave like a normal PCIe endpoint.
> 
> > Also, which versions of the kernel and QEMU (maybe Jonathan's upcoming tree)
> > would be a good starting point?
> 
> If he rebased and claims it works I have no reason to doubt it :-). I have a
> small fix on my v4 branch if you want to use the latest port patches.

Thanks. I'd missed that one. Now pushed down into the original patch.

It occurred to me that technically I only know my rebase works on Arm64...
Fingers crossed for x86.

Anyhow, I'll run more tests on it next week (possibly even including x86).

Available at: 
https://github.com/hisilicon/qemu/tree/cxl-hacks

For arm64, the description at https://people.kernel.org/jic23/ will almost
work with this. There is, however, a bug I still need to track down which
currently means you need to set the pxb uid to the same as the bus number
(add uid=0x80 to the options for pxb-cxl). Shouldn't take long to fix, but
it's Friday evening...
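
To make that concrete, an invocation along these lines should exercise the
emulation (a sketch only: the pxb-cxl/cxl-rp/cxl-type3 device names, their
properties, and the uid workaround are taken from the series under discussion
rather than upstream QEMU, and the usual machine/boot options plus the fixed
memory window configuration from the series are elided as "..."):

  qemu-system-aarch64 -M virt ... \
    -object memory-backend-file,id=cxl-mem0,mem-path=/tmp/cxl.raw,size=256M,share=on \
    -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=0x80,uid=0x80 \
    -device cxl-rp,id=cxl_rp0,bus=cxl.0,port=0,chassis=0,slot=0 \
    -device cxl-type3,bus=cxl_rp0,memdev=cxl-mem0,id=cxl-pmem0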

I dropped the CMA patch from Avery from this tree, as I need to improve the
way it gets hold of some parts of libSPDM and move to the current version of
that library (rather than the old openSPDM).

Ben, if you don't mind me trying to push this forwards, I'll do a bit of
cleanup and reordering, then make use of the QEMU folks we have/know and try
to start getting your hard work upstream.

Whilst I've not poked the various interfaces yet, this is working with a
kernel tree that is current cxl/next + Ira's DOE series and Ben's region
series + (for fun) my SPDM series. That tree's a franken-monster, so I'm not
planning to share it unless anyone has particular need of it. Hopefully the
various parts will move forwards this cycle anyway, so I can stop having to
spend as much time on rebases!

Jonathan 

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-19 18:53           ` Jonathan Cameron
@ 2021-11-19 20:21             ` Ben Widawsky
  -1 siblings, 0 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-11-19 20:21 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: Saransh Gupta1, linux-cxl, qemu-devel

On 21-11-19 18:53:43, Jonathan Cameron wrote:
> [snip]
> 
> Thanks. I'd missed that one. Now pushed down into the original patch.
> 
> It occurred to me that technically I only know my rebase works on Arm64...
> Fingers crossed for x86.
> 
> Anyhow, I'll run more tests on it next week (possibly even including x86).
> 
> Available at: 
> https://github.com/hisilicon/qemu/tree/cxl-hacks
> 
> For arm64, the description at https://people.kernel.org/jic23/ will almost
> work with this. There is, however, a bug I still need to track down which
> currently means you need to set the pxb uid to the same as the bus number
> (add uid=0x80 to the options for pxb-cxl). Shouldn't take long to fix, but
> it's Friday evening...
> 
> I dropped the CMA patch from Avery from this tree, as I need to improve the
> way it gets hold of some parts of libSPDM and move to the current version of
> that library (rather than the old openSPDM).
> 
> Ben, if you don't mind me trying to push this forwards, I'll do a bit of
> cleanup and reordering, then make use of the QEMU folks we have/know and try
> to start getting your hard work upstream.

I don't mind at all.

> 
> Whilst I've not poked the various interfaces yet, this is working with
> a kernel tree that is current cxl/next + Ira's DOE series and Ben's region series
> + (for fun) my SPDM series.  That tree's a franken monster so I'm not planning
> to share it unless anyone has particular need of it.  Hopefully the various
> parts will move forwards this cycle anyway so I can stop having to spend
> as much time on rebases!
> 
> Jonathan 
> 
> > 
> > > 
> > > Thanks,
> > > Saransh
> > > 
> > > 
> > > 
> > > From:   "Jonathan Cameron" <Jonathan.Cameron@Huawei.com>
> > > To:     "Ben Widawsky" <ben.widawsky@intel.com>
> > > Cc:     "Saransh Gupta1" <saransh@ibm.com>, <linux-cxl@vger.kernel.org>, 
> > > <qemu-devel@nongnu.org>
> > > Date:   11/17/2021 09:32 AM
> > > Subject:        [EXTERNAL] Re: Follow-up on the CXL discussion at OFTC
> > > 
> > > 
> > > 
> > > On Wed, 17 Nov 2021 08:57:19 -0800
> > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > >   
> > > > Hi Saransh. Please add the list for these kind of questions. I've   
> > > converted your  
> > > > HTML mail, but going forward, the list will eat it, so please use text   
> > > only.  
> > > > 
> > > > On 21-11-16 00:14:33, Saransh Gupta1 wrote:  
> > > > >    Hi Ben,
> > > > > 
> > > > >    This is Saransh from IBM. Sorry to have (unintentionally) dropped   
> > > out  
> > > > >    of the conversion on OFTC, I'm new to IRC.
> > > > >    Just wanted to follow-up on the discussion there. We discussed   
> > > about  
> > > > >    helping with linux patches reviews. On that front, I have   
> > > identified  
> > > > >    some colleague(s) who can help me with this. Let me know if/how you
> > > > >    want to proceed with that.   
> > > > 
> > > > Currently the ball is in my court to re-roll the RFC v2 patches [1]   
> > > based on  
> > > > feedback from Dan. I've implemented all/most of it, but I'm still   
> > > debugging some  
> > > > issues with the result.
> > > >   
> > > > > 
> > > > >    Maybe not urgently, but my team would also like to get an   
> > > understanding  
> > > > >    of the missing pieces in QEMU. Initially our focus is on type3   
> > > memory  
> > > > >    access and hotplug support. Most of the work that my team does is
> > > > >    open-source, so contributing to the QEMU effort is another possible
> > > > >    line of collaboration.   
> > > > 
> > > > If you haven't seen it already, check out my LPC talk [2]. The QEMU   
> > > patches  
> > > > could use a lot of love. Mostly, I have little/no motivation until   
> > > upstream  
> > > > shows an interest because I don't have time currently to make sure I   
> > > don't break  
> > > > vs. upstream. If you want more details here, I can provide them, and I   
> > > will Cc  
> > > > the qemu-devel mailing list; the end of the LPC talk [2] does have a   
> > > list.
> > > Hi Ben, Saransh
> > > 
> > > I have a forward port of the series + DOE etc to near current QEMU that is 
> > > lightly tested,
> > > and can look to push that out publicly later this week.
> > > 
> > > I'd also like to push QEMU support forwards and to start getting this 
> > > upstream in QEMU
> > > + fill in some of the missing parts.
> > > 
> > > Was aiming to make progress on this a few weeks ago, but as ever other 
> > > stuff
> > > got in the way.
> > > 
> > > +CC qemu-devel in case anyone else also looking at this.
> > > 
> > > Jonathan
> > > 
> > > 
> > >   
> > > >   
> > > > > 
> > > > >    Thanks for your help and guidance!
> > > > > 
> > > > >    Best,
> > > > >    Saransh Gupta
> > > > >    Research Staff Member, IBM Research   
> > > > 
> > > > [1]:   
> > > https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widawsky@intel.com/T/#t 
> > >   
> > > > [2]:   
> > > https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49 
> > > 
> > > 
> > > 
> > > 
> > > 
> > >   
> 

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-19 18:53           ` Jonathan Cameron
@ 2021-11-26 10:59             ` Jonathan Cameron
  -1 siblings, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2021-11-26 10:59 UTC (permalink / raw)
  To: Ben Widawsky; +Cc: Saransh Gupta1, linux-cxl, qemu-devel

On Fri, 19 Nov 2021 18:53:43 +0000
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:

> On Thu, 18 Nov 2021 17:52:07 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
> 
> > On 21-11-18 15:20:34, Saransh Gupta1 wrote:  
> > > Hi Ben and Jonathan,
> > > 
> > > Thanks for your replies. I'm looking forward to the patches.
> > > 
> > > For QEMU, I see hotplug support as an item on the list and would like to 
> > > start working on it. It would be great if you can provide some pointers 
> > > about how I should go about it.    
> > 
> > It's been a while, so I can't recall what's actually missing. I think it should
> > mostly behave like a normal PCIe endpoint.
> >   
> > > Also, which version of kernel and QEMU (maybe Jonathan's upcoming version) 
> > > would be a good starting point for it?    
> > 
> > If he rebased and claims it works I have no reason to doubt it :-). I have a
> > small fix on my v4 branch if you want to use the latest port patches.  
> 
> Thanks. I'd missed that one. Now pushed down into the original patch.
> 
> It occurred to me that technically I only know my rebase works on Arm64...
> Fingers crossed for x86.
> 
> Anyhow, I'll run more tests on it next week (possibly even including x86),

x86 tests throw up an issue with a 2-byte write to the mailbox registers.
For now I've papered over that by explicitly adding support; it's obvious how
to do it if you look at mailbox_reg_read.  I want to understand the source of
that access, though, before deciding whether this fix is correct, and that
might take a little bit of tracking down.
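
For the curious, the paper-over is roughly the following (an untested
sketch, not the actual diff; CXLDeviceState / mbox_reg_state32 are the
names from Ben's RFC series):

  /* Fold a 16-bit write into the containing 32-bit mailbox register
   * rather than rejecting it; modelled on mailbox_reg_read. */
  static void mailbox_reg_write(void *opaque, hwaddr offset,
                                uint64_t value, unsigned size)
  {
      CXLDeviceState *cxl_dstate = opaque;
      uint32_t *reg = &cxl_dstate->mbox_reg_state32[offset / 4];

      switch (size) {
      case 4:
          *reg = value;
          break;
      case 2: {
          unsigned shift = (offset & 2) * 8;

          *reg = (*reg & ~(0xffffu << shift)) |
                 (((uint32_t)value & 0xffff) << shift);
          break;
      }
      default:
          break; /* other sizes: source still to be tracked down */
      }
  }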

Jonathan

> 
> Available at: 
> https://github.com/hisilicon/qemu/tree/cxl-hacks
> 
> For arm64 the description at
> https://people.kernel.org/jic23/ will almost work with this. 
> There is a bug however that I need to track down which currently means you
> need to set the pxb uid to the same as the bus number.   Shouldn't take
> long to fix but it's Friday evening...
> (add uid=0x80 to the options for pxb-cxl)
> 
> I dropped the CMA patch from Avery from this tree as need to improve
> the way it's getting hold of some parts of libSPDM and move to the current
> version of that library (rather than the old openSPDM)
> 
> Ben, if you don't mind me trying to push this forwards, I'll do a bit
> of cleanup and reordering then make use of the QEMU folks we have / know and
> try and start getting your hard work upstream.
> 
> Whilst I've not poked the various interfaces yet, this is working with
> a kernel tree that is current cxl/next + Ira's DOE series and Ben's region series
> + (for fun) my SPDM series.  That tree's a franken monster so I'm not planning
> to share it unless anyone has particular need of it.  Hopefully the various
> parts will move forwards this cycle anyway so I can stop having to spend
> as much time on rebases!
> 
> Jonathan 
> 
> >   
> > > 
> > > Thanks,
> > > Saransh
> > > 
> > > 
> > > 
> > > From:   "Jonathan Cameron" <Jonathan.Cameron@Huawei.com>
> > > To:     "Ben Widawsky" <ben.widawsky@intel.com>
> > > Cc:     "Saransh Gupta1" <saransh@ibm.com>, <linux-cxl@vger.kernel.org>, 
> > > <qemu-devel@nongnu.org>
> > > Date:   11/17/2021 09:32 AM
> > > Subject:        [EXTERNAL] Re: Follow-up on the CXL discussion at OFTC
> > > 
> > > 
> > > 
> > > On Wed, 17 Nov 2021 08:57:19 -0800
> > > Ben Widawsky <ben.widawsky@intel.com> wrote:
> > >     
> > > > Hi Saransh. Please add the list for these kind of questions. I've     
> > > converted your    
> > > > HTML mail, but going forward, the list will eat it, so please use text     
> > > only.    
> > > > 
> > > > On 21-11-16 00:14:33, Saransh Gupta1 wrote:    
> > > > >    Hi Ben,
> > > > > 
> > > > >    This is Saransh from IBM. Sorry to have (unintentionally) dropped     
> > > out    
> > > > >    of the conversion on OFTC, I'm new to IRC.
> > > > >    Just wanted to follow-up on the discussion there. We discussed     
> > > about    
> > > > >    helping with linux patches reviews. On that front, I have     
> > > identified    
> > > > >    some colleague(s) who can help me with this. Let me know if/how you
> > > > >    want to proceed with that.     
> > > > 
> > > > Currently the ball is in my court to re-roll the RFC v2 patches [1]     
> > > based on    
> > > > feedback from Dan. I've implemented all/most of it, but I'm still     
> > > debugging some    
> > > > issues with the result.
> > > >     
> > > > > 
> > > > >    Maybe not urgently, but my team would also like to get an     
> > > understanding    
> > > > >    of the missing pieces in QEMU. Initially our focus is on type3     
> > > memory    
> > > > >    access and hotplug support. Most of the work that my team does is
> > > > >    open-source, so contributing to the QEMU effort is another possible
> > > > >    line of collaboration.     
> > > > 
> > > > If you haven't seen it already, check out my LPC talk [2]. The QEMU     
> > > patches    
> > > > could use a lot of love. Mostly, I have little/no motivation until     
> > > upstream    
> > > > shows an interest because I don't have time currently to make sure I     
> > > don't break    
> > > > vs. upstream. If you want more details here, I can provide them, and I     
> > > will Cc    
> > > > the qemu-devel mailing list; the end of the LPC talk [2] does have a     
> > > list.
> > > Hi Ben, Saransh
> > > 
> > > I have a forward port of the series + DOE etc to near current QEMU that is 
> > > lightly tested,
> > > and can look to push that out publicly later this week.
> > > 
> > > I'd also like to push QEMU support forwards and to start getting this 
> > > upstream in QEMU
> > > + fill in some of the missing parts.
> > > 
> > > Was aiming to make progress on this a few weeks ago, but as ever other 
> > > stuff
> > > got in the way.
> > > 
> > > +CC qemu-devel in case anyone else also looking at this.
> > > 
> > > Jonathan
> > > 
> > > 
> > >     
> > > >     
> > > > > 
> > > > >    Thanks for your help and guidance!
> > > > > 
> > > > >    Best,
> > > > >    Saransh Gupta
> > > > >    Research Staff Member, IBM Research     
> > > > 
> > > > [1]:     
> > > https://lore.kernel.org/linux-cxl/20211022183709.1199701-1-ben.widawsky@intel.com/T/#t 
> > >     
> > > > [2]:     
> > > https://www.youtube.com/watch?v=g89SLjt5Bd4&list=PLVsQ_xZBEyN3wA8Ej4BUjudXFbXuxhnfc&index=49 
> > > 
> > > 
> > > 
> > > 
> > > 
> > >     
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-19  3:25               ` Ben Widawsky
@ 2021-11-26 12:08                 ` Alex Bennée
  -1 siblings, 0 replies; 35+ messages in thread
From: Alex Bennée @ 2021-11-26 12:08 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Peter Maydell, qemu-devel, Saransh Gupta1,
	Philippe Mathieu-Daudé,
	Shreyas Shah, linux-cxl, Jonathan Cameron


Ben Widawsky <ben.widawsky@intel.com> writes:

> On 21-11-19 02:29:51, Shreyas Shah wrote:
>> Hi Ben
>> 
>> Are you planning to add the CXL2.0 switch inside QEMU or already added in one of the version? 
>>  
>
> From me, there are no plans for QEMU anything until/unless upstream thinks it
> will merge the existing patches, or provide feedback as to what it would take to
> get them merged. If upstream doesn't see a point in these patches, then I really
> don't see much value in continuing to further them. Once hardware comes out, the
> value proposition is certainly less.

I take it:

  Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
  Date: Mon,  1 Feb 2021 16:59:17 -0800
  Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>

is the current state of the support? I saw there was a fair amount of
discussion on the thread so assumed there would be a v4 forthcoming at
some point.

Adding new subsystems to QEMU does seem to be a pain point for new
contributors. Patches tend to fall through the cracks of existing
maintainers who spend most of their time looking at stuff that directly
touches their files. There is also a reluctance to merge large chunks of
functionality without an identified maintainer (and maybe reviewers) who
can be the contact point for new patches. So in short you need:

 - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems
 - Reviewed-by tags on the new sub-system patches from anyone who understands CXL
 - Some* in-tree testing (so it doesn't quietly bitrot)
 - A patch adding the sub-system to MAINTAINERS with identified people

* Some means at least ensuring qtest can instantiate the device and not
  fall over. Obviously more testing is better but it can always be
  expanded on in later series.

Is that the feedback you were looking for?

-- 
Alex Bennée


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-26 12:08                 ` Alex Bennée
@ 2021-11-29 17:16                   ` Ben Widawsky
  -1 siblings, 0 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-11-29 17:16 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Shreyas Shah, linux-cxl, Saransh Gupta1, Jonathan Cameron,
	qemu-devel, Peter Maydell, Philippe Mathieu-Daudé

On 21-11-26 12:08:08, Alex Bennée wrote:
> 
> Ben Widawsky <ben.widawsky@intel.com> writes:
> 
> > On 21-11-19 02:29:51, Shreyas Shah wrote:
> >> Hi Ben
> >> 
> >> Are you planning to add the CXL2.0 switch inside QEMU or already added in one of the version? 
> >>  
> >
> > From me, there are no plans for QEMU anything until/unless upstream thinks it
> > will merge the existing patches, or provide feedback as to what it would take to
> > get them merged. If upstream doesn't see a point in these patches, then I really
> > don't see much value in continuing to further them. Once hardware comes out, the
> > value proposition is certainly less.
> 
> I take it:
> 
>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
>   Date: Mon,  1 Feb 2021 16:59:17 -0800
>   Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>
> 
> is the current state of the support? I saw there was a fair amount of
> discussion on the thread so assumed there would be a v4 forthcoming at
> some point.

Hi Alex,

There is a v4; however, we never really had a solid plan for the primary issue,
which was how to handle CXL memory expander devices properly (both from an
interleaving standpoint and for a device which hosts multiple memory
capacities, persistent and volatile). I didn't feel it was worth sending a v4
unless someone could say:
1. we will merge what's there and fix later, or
2. you must have a more perfect emulation in place, or
3. we want to see usages for a real guest

I had hoped we could merge what was there mostly as-is and fix it up as we go.
It's useful in its current state, and as time goes on we'll find more use cases
for it in a VMM, not just driver development.

> 
> Adding new subsystems to QEMU does seem to be a pain point for new
> contributors. Patches tend to fall through the cracks of existing
> maintainers who spend most of their time looking at stuff that directly
> touches their files. There is also a reluctance to merge large chunks of
> functionality without an identified maintainer (and maybe reviewers) who
> can be the contact point for new patches. So in short you need:
> 
>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems

This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
hw/mem are the two) in the past, but I think their interest is lacking (and
reasonably so; it is an entirely different subsystem).

>  - Reviewed-by tags on the new sub-system patches from anyone who understands CXL

I have/had those from Jonathan.

>  - Some* in-tree testing (so it doesn't quietly bitrot)

We had this, but it's stale now. We can bring this back up.

>  - A patch adding the sub-system to MAINTAINERS with identified people

That was there too. Since the original posting, I'd be happy to sign Jonathan up
to this if he's willing.

> 
> * Some means at least ensuring qtest can instantiate the device and not
>   fall over. Obviously more testing is better but it can always be
>   expanded on in later series.

This was in the patch series. It could use more testing for sure, but I had
basic functional testing in place via qtest.
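
For reference, it was shaped roughly like this (a sketch from memory, not
the literal test from the series; machine and device names follow the RFC
patches and may have drifted):

  /* Minimal qtest smoke test: bring up a CXL topology and make sure
   * QEMU doesn't fall over.  Device and property names here are
   * assumptions taken from the RFC series, not upstream QEMU. */
  #include "qemu/osdep.h"
  #include "libqtest.h"

  static void cxl_instantiate(void)
  {
      QTestState *qts;

      qts = qtest_init("-machine q35,cxl=on "
                       "-object memory-backend-file,id=mem0,"
                       "mem-path=/tmp/cxl-test,size=256M,share=on "
                       "-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 "
                       "-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 "
                       "-device cxl-type3,bus=rp0,memdev=mem0,id=cxl-mem0");
      qtest_quit(qts);
  }

  int main(int argc, char **argv)
  {
      g_test_init(&argc, &argv, NULL);
      qtest_add_func("/cxl/instantiate", cxl_instantiate);
      return g_test_run();
  }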

> 
> Is that the feedback you were looking for?

You validated my assumptions as to what's needed, but your first bullet is the
one I can't seem to pin down.

Thanks.
Ben

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-29 17:16                   ` Ben Widawsky
@ 2021-11-29 18:28                     ` Alex Bennée
  -1 siblings, 0 replies; 35+ messages in thread
From: Alex Bennée @ 2021-11-29 18:28 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Peter Maydell, qemu-devel, Saransh Gupta1,
	Philippe Mathieu-Daudé,
	Shreyas Shah, linux-cxl, Jonathan Cameron


Ben Widawsky <ben.widawsky@intel.com> writes:

> On 21-11-26 12:08:08, Alex Bennée wrote:
>> 
>> Ben Widawsky <ben.widawsky@intel.com> writes:
>> 
>> > On 21-11-19 02:29:51, Shreyas Shah wrote:
>> >> Hi Ben
>> >> 
>> >> Are you planning to add the CXL2.0 switch inside QEMU or already added in one of the version? 
>> >>  
>> >
>> > From me, there are no plans for QEMU anything until/unless upstream thinks it
>> > will merge the existing patches, or provide feedback as to what it would take to
>> > get them merged. If upstream doesn't see a point in these patches, then I really
>> > don't see much value in continuing to further them. Once hardware comes out, the
>> > value proposition is certainly less.
>> 
>> I take it:
>> 
>>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
>>   Date: Mon,  1 Feb 2021 16:59:17 -0800
>>   Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>
>> 
>> is the current state of the support? I saw there was a fair amount of
>> discussion on the thread so assumed there would be a v4 forthcoming at
>> some point.
>
> Hi Alex,
>
> There is a v4, however, we never really had a solid plan for the primary issue
> which was around handling CXL memory expander devices properly (both from an
> interleaving standpoint as well as having a device which hosts multiple memory
> capacities, persistent and volatile). I didn't feel it was worth sending a v4
> unless someone could say
>
> 1. we will merge what's there and fix later, or
> 2. you must have a more perfect emulation in place, or
> 3. we want to see usages for a real guest

I think 1. is acceptable if the community is happy there will be ongoing
development and it's not just a code dump. Given it will have a
MAINTAINERS entry I think that is demonstrated.

What's the current use case? Testing drivers before real HW comes out?
Will it still be useful after real HW comes out for people wanting to
debug things without HW?

>
> I had hoped we could merge what was there mostly as is and fix it up as we go.
> It's useful in the state it is now, and as time goes on, we find more usecases
> for it in a VMM, and not just driver development.
>
>> 
>> Adding new subsystems to QEMU does seem to be a pain point for new
>> contributors. Patches tend to fall through the cracks of existing
>> maintainers who spend most of their time looking at stuff that directly
>> touches their files. There is also a reluctance to merge large chunks of
>> functionality without an identified maintainer (and maybe reviewers) who
>> can be the contact point for new patches. So in short you need:
>> 
>>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems
>
> This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
> hw/mem are the two) in the past, but I think there interest is lacking (and
> reasonably so, it is an entirely different subsystem).

So the best approach to that is to leave a Cc: tag in the patch itself
on your next posting so we can see the maintainer did see it but didn't
contribute a review tag. This is also a good reason to keep Message-Id
tags in patches so we can go back to the original threads.

So in my latest PR you'll see:

  Signed-off-by: Willian Rampazzo <willianr@redhat.com>
  Reviewed-by: Beraldo Leal <bleal@redhat.com>
  Message-Id: <20211122191124.31620-1-willianr@redhat.com>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Message-Id: <20211129140932.4115115-7-alex.bennee@linaro.org>

which shows the Message-Id from Willian's original posting and the
latest Message-Id from my posting of the maintainer tree (I trim off my
old ones).

>>  - Reviewed-by tags on the new sub-system patches from anyone who understands CXL
>
> I have/had those from Jonathan.
>
>>  - Some* in-tree testing (so it doesn't quietly bitrot)
>
> We had this, but it's stale now. We can bring this back up.
>
>>  - A patch adding the sub-system to MAINTAINERS with identified people
>
> That was there too. Since the original posting, I'd be happy to sign Jonathan up
> to this if he's willing.

Sounds good to me.

>> * Some means at least ensuring qtest can instantiate the device and not
>>   fall over. Obviously more testing is better but it can always be
>>   expanded on in later series.
>
> This was in the patch series. It could use more testing for sure, but I had
> basic functional testing in place via qtest.

More is always better but the basic qtest does ensure a device doesn't
segfault if it's instantiated.

>
>> 
>> Is that the feedback you were looking for?
>
> You validated my assumptions as to what's needed, but your first bullet is the
> one I can't seem to pin down.
>
> Thanks.
> Ben


-- 
Alex Bennée


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-29 18:28                     ` Alex Bennée
@ 2021-11-30 13:09                       ` Jonathan Cameron
  -1 siblings, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2021-11-30 13:09 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Ben Widawsky, Shreyas Shah, linux-cxl, Saransh Gupta1,
	qemu-devel, Peter Maydell, Philippe Mathieu-Daudé,
	shameerali.kolothum.thodi

On Mon, 29 Nov 2021 18:28:43 +0000
Alex Bennée <alex.bennee@linaro.org> wrote:

> Ben Widawsky <ben.widawsky@intel.com> writes:
> 
> > On 21-11-26 12:08:08, Alex Bennée wrote:  
> >> 
> >> Ben Widawsky <ben.widawsky@intel.com> writes:
> >>   
> >> > On 21-11-19 02:29:51, Shreyas Shah wrote:  
> >> >> Hi Ben
> >> >> 
> >> >> Are you planning to add the CXL2.0 switch inside QEMU or already added in one of the version? 
> >> >>    
> >> >
> >> > From me, there are no plans for QEMU anything until/unless upstream thinks it
> >> > will merge the existing patches, or provide feedback as to what it would take to
> >> > get them merged. If upstream doesn't see a point in these patches, then I really
> >> > don't see much value in continuing to further them. Once hardware comes out, the
> >> > value proposition is certainly less.  
> >> 
> >> I take it:
> >> 
> >>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
> >>   Date: Mon,  1 Feb 2021 16:59:17 -0800
> >>   Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>
> >> 
> >> is the current state of the support? I saw there was a fair amount of
> >> discussion on the thread so assumed there would be a v4 forthcoming at
> >> some point.  
> >
> > Hi Alex,
> >
> > There is a v4, however, we never really had a solid plan for the primary issue
> > which was around handling CXL memory expander devices properly (both from an
> > interleaving standpoint as well as having a device which hosts multiple memory
> > capacities, persistent and volatile). I didn't feel it was worth sending a v4
> > unless someone could say
> >
> > 1. we will merge what's there and fix later, or
> > 2. you must have a more perfect emulation in place, or
> > 3. we want to see usages for a real guest  
> 
> I think 1. is acceptable if the community is happy there will be ongoing
> development and it's not just a code dump. Given it will have a
> MAINTAINERS entry I think that is demonstrated.

My thought is also 1.  There are a few hacks we need to clean out, but
nothing that should take too long.  I'm sure it'll take a rev or two more.
Right now, for example, I've added support to arm-virt and may need to
move that over to a different machine model...
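
For anyone who wants to poke at it, the shape of invocation I'm testing
with is roughly the following (illustrative only; machine and device
options are from my cxl-hacks tree, not upstream, and include the pxb
uid workaround from my earlier mail):

  qemu-system-aarch64 -M virt,cxl=on -m 4g,maxmem=8g,slots=4 \
      -object memory-backend-file,id=cxl-mem0,mem-path=/tmp/cxl,size=256M,share=on \
      -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=128,uid=0x80 \
      -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,slot=0 \
      -device cxl-type3,bus=rp0,memdev=cxl-mem0,id=cxl-pmem0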

> 
> What's the current use case? Testing drivers before real HW comes out?
> Will it still be useful after real HW comes out for people wanting to
> debug things without HW?

CXL is continuing to expand in scope and capabilities, and I don't see that
slowing down any time soon (my guess is 3+ years just to catch up with what is
under discussion today).  So I see two long-term use cases:

1) Automated verification that we haven't broken things.  I suspect no
one person is going to have a test farm covering all the corner cases.
So we'll need emulation + firmware + kernel based testing.

2) New feature prove-out.  We have already used it for some features that
will appear in the next spec version. Obviously I can't say what they are or
send that code out yet.  It's very useful, and the spec draft has changed
in various ways as a result.  I can't commit for others, but Huawei will be
doing more of this going forwards.  For that we need a stable base to
which we add the new stuff once spec publication allows it.

> 
> >
> > I had hoped we could merge what was there mostly as is and fix it up as we go.
> > It's useful in the state it is now, and as time goes on, we find more usecases
> > for it in a VMM, and not just driver development.
> >  
> >> 
> >> Adding new subsystems to QEMU does seem to be a pain point for new
> >> contributors. Patches tend to fall through the cracks of existing
> >> maintainers who spend most of their time looking at stuff that directly
> >> touches their files. There is also a reluctance to merge large chunks of
> >> functionality without an identified maintainer (and maybe reviewers) who
> >> can be the contact point for new patches. So in short you need:
> >> 
> >>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems  
> >
> > This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
> > hw/mem are the two) in the past, but I think there interest is lacking (and
> > reasonably so, it is an entirely different subsystem).  
> 
> So the best approach to that is to leave a Cc: tag in the patch itself
> on your next posting so we can see the maintainer did see it but didn't
> contribute a review tag. This is also a good reason to keep Message-Id
> tags in patches so we can go back to the original threads.
> 
> So in my latest PR you'll see:
> 
>   Signed-off-by: Willian Rampazzo <willianr@redhat.com>
>   Reviewed-by: Beraldo Leal <bleal@redhat.com>
>   Message-Id: <20211122191124.31620-1-willianr@redhat.com>
>   Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>   Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>   Message-Id: <20211129140932.4115115-7-alex.bennee@linaro.org>
> 
> which shows the Message-Id from Willian's original posting and the
> latest Message-Id from my posting of the maintainer tree (I trim off my
> old ones).
> 
> >>  - Reviewed-by tags on the new sub-system patches from anyone who understands CXL  
> >
> > I have/had those from Jonathan.
> >  
> >>  - Some* in-tree testing (so it doesn't quietly bitrot)  
> >
> > We had this, but it's stale now. We can bring this back up.
> >  
> >>  - A patch adding the sub-system to MAINTAINERS with identified people  
> >
> > That was there too. Since the original posting, I'd be happy to sign Jonathan up
> > to this if he's willing.  
> 
> Sounds good to me.

Sure that's fine with me.  Ben, I'm assuming you are fine with being joint maintainer?

> 
> >> * Some means at least ensuring qtest can instantiate the device and not
> >>   fall over. Obviously more testing is better but it can always be
> >>   expanded on in later series.  
> >
> > This was in the patch series. It could use more testing for sure, but I had
> > basic functional testing in place via qtest.  
> 
> More is always better but the basic qtest does ensure a device doesn't
> segfault if it's instantiated.

I'll confess this is a bit I haven't looked at yet. Will get Shameer to give
me a hand.

Thanks

Jonathan


> 
> >  
> >> 
> >> Is that the feedback you were looking for?  
> >
> > You validated my assumptions as to what's needed, but your first bullet is the
> > one I can't seem to pin down.
> >
> > Thanks.
> > Ben  
> 
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-30 13:09                       ` Jonathan Cameron via
@ 2021-11-30 17:21                         ` Ben Widawsky
  -1 siblings, 0 replies; 35+ messages in thread
From: Ben Widawsky @ 2021-11-30 17:21 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Peter Maydell, qemu-devel, Saransh Gupta1,
	Philippe Mathieu-Daudé,
	Shreyas Shah, linux-cxl, Alex Bennée,
	shameerali.kolothum.thodi

On 21-11-30 13:09:56, Jonathan Cameron wrote:
> On Mon, 29 Nov 2021 18:28:43 +0000
> Alex Bennée <alex.bennee@linaro.org> wrote:
> 
> > Ben Widawsky <ben.widawsky@intel.com> writes:
> > 
> > > On 21-11-26 12:08:08, Alex Bennée wrote:  
> > >> 
> > >> Ben Widawsky <ben.widawsky@intel.com> writes:
> > >>   
> > >> > On 21-11-19 02:29:51, Shreyas Shah wrote:  
> > >> >> Hi Ben
> > >> >> 
> > >> >> Are you planning to add the CXL2.0 switch inside QEMU or already added in one of the version? 
> > >> >>    
> > >> >
> > >> > From me, there are no plans for QEMU anything until/unless upstream thinks it
> > >> > will merge the existing patches, or provide feedback as to what it would take to
> > >> > get them merged. If upstream doesn't see a point in these patches, then I really
> > >> > don't see much value in continuing to further them. Once hardware comes out, the
> > >> > value proposition is certainly less.  
> > >> 
> > >> I take it:
> > >> 
> > >>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
> > >>   Date: Mon,  1 Feb 2021 16:59:17 -0800
> > >>   Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>
> > >> 
> > >> is the current state of the support? I saw there was a fair amount of
> > >> discussion on the thread so assumed there would be a v4 forthcoming at
> > >> some point.  
> > >
> > > Hi Alex,
> > >
> > > There is a v4, however, we never really had a solid plan for the primary issue
> > > which was around handling CXL memory expander devices properly (both from an
> > > interleaving standpoint as well as having a device which hosts multiple memory
> > > capacities, persistent and volatile). I didn't feel it was worth sending a v4
> > > unless someone could say
> > >
> > > 1. we will merge what's there and fix later, or
> > > 2. you must have a more perfect emulation in place, or
> > > 3. we want to see usages for a real guest  
> > 
> > I think 1. is acceptable if the community is happy there will be ongoing
> > development and it's not just a code dump. Given it will have a
> > MAINTAINERS entry I think that is demonstrated.
> 
> My thought is also 1.  There are a few hacks we need to clean out but
> nothing that should take too long.  I'm sure it'll take a rev or two more.
> Right now for example, I've added support to arm-virt and maybe need to
> move that over to a different machine model...
> 

The most annoying thing about rebasing it is passing the ACPI tests, which
keep changing upstream. Being able to merge at least up to that point would be
huge.

> > 
> > What's the current use case? Testing drivers before real HW comes out?
> > Will it still be useful after real HW comes out for people wanting to
> > debug things without HW?
> 
> CXL is continuing to expand in scope and capabilities and I don't see that
> reducing any time soon (My guess is 3 years+ to just catch up with what is
> under discussion today).  So I see two long term use cases:
> 
> 1) Automated verification that we haven't broken things.  I suspect no
> one person is going to have a test farm covering all the corner cases.
> So we'll need emulation + firmware + kernel based testing.
> 

Does this exist in other forms? AFAICT for x86, there aren't many examples of
this.

> 2) New feature prove-out. We have already used it for some features that
> will appear in the next spec version. Obviously I can't say what or
> send that code out yet. It's very useful, and the spec draft has changed
> in various ways as a result. I can't commit for others, but Huawei will be
> doing more of this going forwards. For that we need a stable base to
> which we add the new stuff once spec publication allows it.
> 

I can't commit for Intel, but I will say there's more latitude now to work on
projects like this compared to when I first wrote the patches. I have an
interest in continuing to develop this as well. I'm very interested in
supporting interleave and hotplug specifically.
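
To make "interleave" concrete: the basic modulo decode implied by CXL 2.0
looks roughly like the sketch below. This is my own illustration, not code
from the series; it ignores the XOR-based variants and assumes power-of-two
ways and granularity.

  /* Illustration only: decode a host physical address (HPA) across an
   * interleave set. Not code from the series. */
  #include <stdint.h>

  typedef struct {
      unsigned ways;        /* interleave ways, e.g. 2, 4, 8 */
      uint64_t granularity; /* bytes per contiguous chunk, e.g. 256 */
  } InterleaveCfg;

  /* Which target in the set services this address. */
  static unsigned interleave_target(InterleaveCfg c, uint64_t hpa)
  {
      return (hpa / c.granularity) % c.ways;
  }

  /* Offset of the access within that target's contribution. */
  static uint64_t interleave_offset(InterleaveCfg c, uint64_t hpa)
  {
      return (hpa / (c.granularity * c.ways)) * c.granularity
             + hpa % c.granularity;
  }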

> > 
> > >
> > > I had hoped we could merge what was there mostly as is and fix it up as we go.
> > > It's useful in the state it is now, and as time goes on, we find more use cases
> > > for it in a VMM, and not just driver development.
> > >  
> > >> 
> > >> Adding new subsystems to QEMU does seem to be a pain point for new
> > >> contributors. Patches tend to fall through the cracks of existing
> > >> maintainers who spend most of their time looking at stuff that directly
> > >> touches their files. There is also a reluctance to merge large chunks of
> > >> functionality without an identified maintainer (and maybe reviewers) who
> > >> can be the contact point for new patches. So in short you need:
> > >> 
> > >>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems  
> > >
> > > This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
> > > hw/mem are the two) in the past, but I think their interest is lacking (and
> > > reasonably so, it is an entirely different subsystem).  
> > 
> > So the best approach to that is to leave a Cc: tag in the patch itself
> > on your next posting so we can see the maintainer did see it but didn't
> > contribute a review tag. This is also a good reason to keep Message-Id
> > tags in patches so we can go back to the original threads.
> > 
> > So in my latest PR you'll see:
> > 
> >   Signed-off-by: Willian Rampazzo <willianr@redhat.com>
> >   Reviewed-by: Beraldo Leal <bleal@redhat.com>
> >   Message-Id: <20211122191124.31620-1-willianr@redhat.com>
> >   Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> >   Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >   Message-Id: <20211129140932.4115115-7-alex.bennee@linaro.org>
> > 
> > which shows the Message-Id from Willian's original posting and the
> > latest Message-Id from my posting of the maintainer tree (I trim off my
> > old ones).
> > 
> > >>  - Reviewed-by tags on the new sub-system patches from anyone who understands CXL  
> > >
> > > I have/had those from Jonathan.
> > >  
> > >>  - Some* in-tree testing (so it doesn't quietly bitrot)  
> > >
> > > We had this, but it's stale now. We can bring this back up.
> > >  
> > >>  - A patch adding the sub-system to MAINTAINERS with identified people  
> > >
> > > That was there too. Since the original posting, I'd be happy to sign Jonathan up
> > > to this if he's willing.  
> > 
> > Sounds good to me.
> 
> Sure that's fine with me.  Ben, I'm assuming you are fine with being joint maintainer?
> 

Yes, I brought it up :D. Once I land the region creation patches I should have
a bit more time to circle back to this, which I'd like to do. The FOSDEM CFP is
out again; perhaps I should advertise there.
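
For reference, the MAINTAINERS stanza would look something like the sketch
below; the names reflect this discussion, but the F: globs are my guess at
where the code lands, not the entry as posted in the series.

  Compute Express Link (CXL)
  M: Ben Widawsky <ben.widawsky@intel.com>
  M: Jonathan Cameron <jonathan.cameron@huawei.com>
  S: Maintained
  F: hw/cxl/
  F: include/hw/cxl/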

> > 
> > >> * Some means at least ensuring qtest can instantiate the device and not
> > >>   fall over. Obviously more testing is better but it can always be
> > >>   expanded on in later series.  
> > >
> > > This was in the patch series. It could use more testing for sure, but I had
> > > basic functional testing in place via qtest.  
> > 
> > More is always better but the basic qtest does ensure a device doesn't
> > segfault if it's instantiated.
> 
> I'll confess this is a bit I haven't looked at yet. Will get Shameer to give
> me a hand.
> 
> Thanks

I'd certainly feel better if we had more tests. I also suspect the qtest I wrote
originally no longer works. The biggest challenge I had was getting gitlab CI
working for me.
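
For the curious, the shape of what I had is roughly the sketch below:
reconstructed from memory rather than taken from the series, and the
machine/device options are assumptions, not the real command line.

  /* Minimal "does it instantiate" smoke test, libqtest style.
   * Sketch only: the options below are assumptions. */
  #include "qemu/osdep.h"
  #include "libqtest.h"

  static void cxl_instantiate(void)
  {
      /* Passes if QEMU boots and answers QMP instead of crashing. */
      QTestState *qts = qtest_initf(
          "-machine q35,cxl=on "
          "-object memory-backend-ram,id=mem0,size=256M "
          "-device cxl-type3,id=cxl0,memdev=mem0");
      qtest_quit(qts);
  }

  int main(int argc, char **argv)
  {
      g_test_init(&argc, &argv, NULL);
      qtest_add_func("/cxl/instantiate", cxl_instantiate);
      return g_test_run();
  }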

> 
> Jonathan
> 
> 
> > 
> > >  
> > >> 
> > >> Is that the feedback you were looking for?  
> > >
> > > You validated my assumptions as to what's needed, but your first bullet is the
> > > one I can't seem to pin down.
> > >
> > > Thanks.
> > > Ben  
> > 
> > 
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-11-30 17:21                         ` Ben Widawsky
@ 2021-12-01  9:55                           ` Jonathan Cameron via
  -1 siblings, 0 replies; 35+ messages in thread
From: Jonathan Cameron @ 2021-12-01  9:55 UTC (permalink / raw)
  To: Ben Widawsky
  Cc: Peter Maydell, qemu-devel, Saransh Gupta1,
	Philippe Mathieu-Daudé,
	Shreyas Shah, linux-cxl, Alex Bennée,
	shameerali.kolothum.thodi

On Tue, 30 Nov 2021 09:21:58 -0800
Ben Widawsky <ben.widawsky@intel.com> wrote:

> On 21-11-30 13:09:56, Jonathan Cameron wrote:
> > On Mon, 29 Nov 2021 18:28:43 +0000
> > Alex Bennée <alex.bennee@linaro.org> wrote:
> >   
> > > Ben Widawsky <ben.widawsky@intel.com> writes:
> > >   
> > > > On 21-11-26 12:08:08, Alex Bennée wrote:    
> > > >> 
> > > >> Ben Widawsky <ben.widawsky@intel.com> writes:
> > > >>     
> > > >> > On 21-11-19 02:29:51, Shreyas Shah wrote:    
> > > >> >> Hi Ben
> > > >> >> 
> > > >> >> Are you planning to add the CXL 2.0 switch inside QEMU, or has it already been added in one of the versions?
> > > >> >>      
> > > >> >
> > > >> > From me, there are no plans for QEMU anything until/unless upstream thinks it
> > > >> > will merge the existing patches, or provide feedback as to what it would take to
> > > >> > get them merged. If upstream doesn't see a point in these patches, then I really
> > > >> > don't see much value in continuing to further them. Once hardware comes out, the
> > > >> > value proposition is certainly less.    
> > > >> 
> > > >> I take it:
> > > >> 
> > > >>   Subject: [RFC PATCH v3 00/31] CXL 2.0 Support
> > > >>   Date: Mon,  1 Feb 2021 16:59:17 -0800
> > > >>   Message-Id: <20210202005948.241655-1-ben.widawsky@intel.com>
> > > >> 
> > > >> is the current state of the support? I saw there was a fair amount of
> > > >> discussion on the thread so assumed there would be a v4 forthcoming at
> > > >> some point.    
> > > >
> > > > Hi Alex,
> > > >
> > > > There is a v4; however, we never really had a solid plan for the primary issue
> > > > which was around handling CXL memory expander devices properly (both from an
> > > > interleaving standpoint as well as having a device which hosts multiple memory
> > > > capacities, persistent and volatile). I didn't feel it was worth sending a v4
> > > > unless someone could say
> > > >
> > > > 1. we will merge what's there and fix later, or
> > > > 2. you must have a more perfect emulation in place, or
> > > > 3. we want to see usages for a real guest    
> > > 
> > > I think 1. is acceptable if the community is happy there will be ongoing
> > > development and it's not just a code dump. Given it will have a
> > > MAINTAINERS entry I think that is demonstrated.  
> > 
> > My thought is also 1.  There are a few hacks we need to clean out but
> > nothing that should take too long.  I'm sure it'll take a rev or two more.
> > Right now for example, I've added support to arm-virt and maybe need to
> > move that over to a different machine model...
> >   
> 
> The most annoying thing about rebasing it is passing the ACPI tests. They keep
> changing upstream. Being able to at least merge up to there would be huge.

Guess I really need to take a look at the tests :)  It went in clean, so
I didn't poke them. Maybe we were just lucky!  A bunch of ACPI infrastructure
had changed, which was the biggest update needed; amusingly, x86 kernel code
now triggers the issue around writes smaller than what the mailbox
implementation supports.  For now I've just added those implementations, as
that removes a blocker on this going upstream.
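
For anyone following along, the usual QEMU idiom for that mismatch is to let
the .valid and .impl access sizes differ, so the memory core widens small
guest accesses via read-modify-write before the device sees them. Roughly
(struct and callback names invented for illustration):

  #include "qemu/osdep.h"
  #include "exec/memory.h"

  /* Sketch only: these declarations stand in for the real mailbox
   * register handlers. */
  static uint64_t mailbox_reg_read(void *opaque, hwaddr off, unsigned size);
  static void mailbox_reg_write(void *opaque, hwaddr off, uint64_t val,
                                unsigned size);

  static const MemoryRegionOps mailbox_ops = {
      .read = mailbox_reg_read,
      .write = mailbox_reg_write,
      .endianness = DEVICE_LITTLE_ENDIAN,
      .valid = {
          .min_access_size = 1,   /* what the guest may issue */
          .max_access_size = 8,
      },
      .impl = {
          .min_access_size = 4,   /* what the callbacks implement */
          .max_access_size = 8,   /* core widens 1- and 2-byte accesses */
      },
  };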

> 
> > > 
> > > What's the current use case? Testing drivers before real HW comes out?
> > > Will it still be useful after real HW comes out for people wanting to
> > > debug things without HW?  
> > 
> > CXL is continuing to expand in scope and capabilities and I don't see that
> > reducing any time soon (My guess is 3 years+ to just catch up with what is
> > under discussion today).  So I see two long term use cases:
> > 
> > 1) Automated verification that we haven't broken things.  I suspect no
> > one person is going to have a test farm covering all the corner cases.
> > So we'll need emulation + firmware + kernel based testing.
> >   
> 
> Does this exist in other forms? AFAICT for x86, there aren't many examples of
> this.

We run a bunch of stuff internally on a CI farm, targeting various trees,
though CXL is a complex case because it involves more elements than most
hardware tests.  Our friends in openEuler run a bunch more as well, on a
mixture of physical and emulated machines across various architectures.  The
other distros have similar setups, though they perhaps don't provide as much
public info as our folks do.  We are a bit early for CXL support, so I don't
think we have yet moved beyond manual testing.  It'll come, though, as it's
vital once customers start caring about the hardware they bought.

Otherwise, if we contribute the resources, there are various other orgs who
run tests on stable, mainline, and -next, plus various vendor trees. That is
a mixture of real and virtual hardware and is used to verify stable releases
very quickly before Greg pushes them out.

Emulation-based testing is obviously easier, and we do some of that; I know
others do too. Once the CXL support is upstream, we'll need to add the tuning
parameters to QEMU that let us start exercising corner cases.
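
Concretely, I'd expect those knobs to land as qdev properties, so tests can
set them per device from the command line; a sketch, with every property and
field name hypothetical:

  #include "hw/qdev-properties.h"

  static Property ct3_tuning_props[] = {
      /* All names below are hypothetical, purely for illustration. */
      DEFINE_PROP_UINT32("mbox-latency-ms", CXLType3Dev, mbox_latency_ms, 0),
      DEFINE_PROP_UINT8("interleave-ways", CXLType3Dev, interleave_ways, 1),
      DEFINE_PROP_END_OF_LIST(),
  };

  /* e.g.: -device cxl-type3,mbox-latency-ms=10,interleave-ways=4 */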

> 
> > 2) New feature prove-out. We have already used it for some features that
> > will appear in the next spec version. Obviously I can't say what or
> > send that code out yet. It's very useful, and the spec draft has changed
> > in various ways as a result. I can't commit for others, but Huawei will be
> > doing more of this going forwards. For that we need a stable base to
> > which we add the new stuff once spec publication allows it.
> >   
> 
> I can't commit for Intel, but I will say there's more latitude now to work on
> projects like this compared to when I first wrote the patches. I have an
> interest in continuing to develop this as well. I'm very interested in
> supporting interleave and hotplug specifically.

Great. 

> 
> > >   
> > > >
> > > > I had hoped we could merge what was there mostly as is and fix it up as we go.
> > > > It's useful in the state it is now, and as time goes on, we find more use cases
> > > > for it in a VMM, and not just driver development.
> > > >    
> > > >> 
> > > >> Adding new subsystems to QEMU does seem to be a pain point for new
> > > >> contributors. Patches tend to fall through the cracks of existing
> > > >> maintainers who spend most of their time looking at stuff that directly
> > > >> touches their files. There is also a reluctance to merge large chunks of
> > > >> functionality without an identified maintainer (and maybe reviewers) who
> > > >> can be the contact point for new patches. So in short you need:
> > > >> 
> > > >>  - Maintainer Reviewed-by/Acked-by on patches that touch other sub-systems    
> > > >
> > > > This is the challenging one. I have Cc'd the relevant maintainers (hw/pci and
> > > > hw/mem are the two) in the past, but I think their interest is lacking (and
> > > > reasonably so, it is an entirely different subsystem).    
> > > 
> > > So the best approach to that is to leave a Cc: tag in the patch itself
> > > on your next posting so we can see the maintainer did see it but didn't
> > > contribute a review tag. This is also a good reason to keep Message-Id
> > > tags in patches so we can go back to the original threads.
> > > 
> > > So in my latest PR you'll see:
> > > 
> > >   Signed-off-by: Willian Rampazzo <willianr@redhat.com>
> > >   Reviewed-by: Beraldo Leal <bleal@redhat.com>
> > >   Message-Id: <20211122191124.31620-1-willianr@redhat.com>
> > >   Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> > >   Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > >   Message-Id: <20211129140932.4115115-7-alex.bennee@linaro.org>
> > > 
> > > which shows the Message-Id from Willian's original posting and the
> > > latest Message-Id from my posting of the maintainer tree (I trim off my
> > > old ones).
> > >   
> > > >>  - Reviewed-by tags on the new sub-system patches from anyone who understands CXL    
> > > >
> > > > I have/had those from Jonathan.
> > > >    
> > > >>  - Some* in-tree testing (so it doesn't quietly bitrot)    
> > > >
> > > > We had this, but it's stale now. We can bring this back up.
> > > >    
> > > >>  - A patch adding the sub-system to MAINTAINERS with identified people    
> > > >
> > > > That was there too. Since the original posting, I'd be happy to sign Jonathan up
> > > > to this if he's willing.    
> > > 
> > > Sounds good to me.  
> > 
> > Sure that's fine with me.  Ben, I'm assuming you are fine with being joint maintainer?
> >   
> 
> Yes, I brought it up :D. Once I land the region creation patches I should have
> a bit more time to circle back to this, which I'd like to do. The FOSDEM CFP is
> out again; perhaps I should advertise there.

Great!

> 
> > >   
> > > >> * Some means at least ensuring qtest can instantiate the device and not
> > > >>   fall over. Obviously more testing is better but it can always be
> > > >>   expanded on in later series.    
> > > >
> > > > This was in the patch series. It could use more testing for sure, but I had
> > > > basic functional testing in place via qtest.    
> > > 
> > > More is always better but the basic qtest does ensure a device doesn't
> > > segfault if it's instantiated.  
> > 
> > I'll confess this is a bit I haven't looked at yet. Will get Shameer to give
> > me a hand.
> > 
> > Thanks  
> 
> I'd certainly feel better if we had more tests. I also suspect the qtest I wrote
> originally no longer works. The biggest challenge I had was getting gitlab CI
> working for me.

Looks like it'll be tests that slow things down. *sigh*.

Why are there not enough days in the week?

Jonathan

> 
> > 
> > Jonathan
> > 
> >   
> > >   
> > > >    
> > > >> 
> > > >> Is that the feedback you were looking for?    
> > > >
> > > > You validated my assumptions as to what's needed, but your first bullet is the
> > > > one I can't seem to pin down.
> > > >
> > > > Thanks.
> > > > Ben    
> > > 
> > >   
> >   
> 


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Follow-up on the CXL discussion at OFTC
  2021-12-01  9:55                           ` Jonathan Cameron via
@ 2021-12-01 10:29                             ` Alex Bennée
  -1 siblings, 0 replies; 35+ messages in thread
From: Alex Bennée @ 2021-12-01 10:29 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Ben Widawsky, Peter Maydell, qemu-devel, Saransh Gupta1,
	Philippe Mathieu-Daudé,
	Shreyas Shah, linux-cxl, shameerali.kolothum.thodi


Jonathan Cameron <Jonathan.Cameron@Huawei.com> writes:

> On Tue, 30 Nov 2021 09:21:58 -0800
> Ben Widawsky <ben.widawsky@intel.com> wrote:
>
>> On 21-11-30 13:09:56, Jonathan Cameron wrote:
>> > On Mon, 29 Nov 2021 18:28:43 +0000
>> > Alex Bennée <alex.bennee@linaro.org> wrote:
>> >   
>> > > Ben Widawsky <ben.widawsky@intel.com> writes:
>> > >   
>> > > > On 21-11-26 12:08:08, Alex Bennée wrote:    
>> > > >> 
>> > > >> Ben Widawsky <ben.widawsky@intel.com> writes:
>> > > >>     
<snip>
>> > >   
>> > > >> * Some means at least ensuring qtest can instantiate the device and not
>> > > >>   fall over. Obviously more testing is better but it can always be
>> > > >>   expanded on in later series.    
>> > > >
>> > > > This was in the patch series. It could use more testing for sure, but I had
>> > > > basic functional testing in place via qtest.    
>> > > 
>> > > More is always better but the basic qtest does ensure a device doesn't
>> > > segfault if it's instantiated.  
>> > 
>> > I'll confess this is a bit I haven't looked at yet. Will get Shameer to give
>> > me a hand.
>> > 
>> > Thanks  
>> 
>> I'd certainly feel better if we had more tests. I also suspect the qtest I wrote
>> originally no longer works. The biggest challenge I had was getting gitlab CI
>> working for me.
>
> Looks like it'll be tests that slow things down. *sigh*.

Hopefully the GitLab stuff has stabilised over the last year, as we've
aggressively pushed out stuff that times out and also limited some tests
to only run on upstream staging branches.

The biggest hole is properly exercising KVM stuff (due to the
limitations of GitLab runners). As a result you fall back to TCG, which
can get slow if you're booting full distros with it.
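
One mitigation for jobs that may or may not have /dev/kvm: QEMU accepts
multiple -accel options and tries them in order, so the same invocation can
prefer KVM and fall back to TCG (an illustrative command line, not a specific
CI job):

  qemu-system-aarch64 -M virt -cpu max -accel kvm -accel tcg ...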

> Why are there not enough days in the week?

"oh it's softfreeze already?" - a regular occurrence for me ;-)

>
> Jonathan

-- 
Alex Bennée

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2021-12-01 11:29 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <OF255704A1.78FEF164-ON0025878E.00821084-0025878F.00015560@ibm.com>
2021-11-17 16:57 ` Follow-up on the CXL discussion at OFTC Ben Widawsky
2021-11-17 17:32   ` Jonathan Cameron
2021-11-17 17:32     ` Jonathan Cameron
2021-11-18 22:20     ` Saransh Gupta1
2021-11-18 22:20       ` Saransh Gupta1
2021-11-18 22:52       ` Shreyas Shah
2021-11-18 22:52         ` Shreyas Shah via
2021-11-19  1:48         ` Ben Widawsky
2021-11-19  1:48           ` Ben Widawsky
2021-11-19  2:29           ` Shreyas Shah
2021-11-19  2:29             ` Shreyas Shah via
2021-11-19  3:25             ` Ben Widawsky
2021-11-19  3:25               ` Ben Widawsky
2021-11-26 12:08               ` Alex Bennée
2021-11-26 12:08                 ` Alex Bennée
2021-11-29 17:16                 ` Ben Widawsky
2021-11-29 17:16                   ` Ben Widawsky
2021-11-29 18:28                   ` Alex Bennée
2021-11-29 18:28                     ` Alex Bennée
2021-11-30 13:09                     ` Jonathan Cameron
2021-11-30 13:09                       ` Jonathan Cameron via
2021-11-30 17:21                       ` Ben Widawsky
2021-11-30 17:21                         ` Ben Widawsky
2021-12-01  9:55                         ` Jonathan Cameron
2021-12-01  9:55                           ` Jonathan Cameron via
2021-12-01 10:29                           ` Alex Bennée
2021-12-01 10:29                             ` Alex Bennée
2021-11-19  1:52       ` Ben Widawsky
2021-11-19  1:52         ` Ben Widawsky
2021-11-19 18:53         ` Jonathan Cameron
2021-11-19 18:53           ` Jonathan Cameron
2021-11-19 20:21           ` Ben Widawsky
2021-11-19 20:21             ` Ben Widawsky
2021-11-26 10:59           ` Jonathan Cameron via
2021-11-26 10:59             ` Jonathan Cameron
