* Support for CXL v3.0 spec with QEMU
@ 2023-08-10 16:12 Ravi Kanth
  2023-08-10 16:33 ` Jonathan Cameron
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-08-10 16:12 UTC (permalink / raw)
  To: linux-cxl

Hello,

I am writing to ask whether the latest QEMU has support for the CXL
v3.0 spec. I understand from older tutorials that QEMU supports CXL
v2.0. Is there any update to QEMU to support CXL v3.0? I would
specifically like to test the CCI mailbox functionality that was added
in the CXL v3.0 spec.

Thank you in advance for your assistance.

Ravi

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Support for CXL v3.0 spec with QEMU
  2023-08-10 16:12 Support for CXL v3.0 spec with QEMU Ravi Kanth
@ 2023-08-10 16:33 ` Jonathan Cameron
  2023-08-10 16:40   ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Jonathan Cameron @ 2023-08-10 16:33 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: linux-cxl

On Thu, 10 Aug 2023 21:42:12 +0530
Ravi Kanth <mvrravikanth@gmail.com> wrote:

> Hello,

Hi Ravi,

> 
> I am writing this mail to know if the latest QEMU has support for CXL
> spec v3.0? I understand from the older tutorials that QEMU has support
> for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> would like to specifically test the CCI mailbox functionality which
> has been added to CXL v3.0 spec.

The nature of the CXL specification and the QEMU support is that, for
devices, there isn't a clear divide between different generations of
the specification. By that I mean that a device based on the CXL 2.0
specification is compatible with the CXL 3.0 specification. As such,
the QEMU emulation has focused on features of interest - it is a far
from complete implementation of all the options in the CXL 2.0
specification but, conversely, some parts of CXL 3.0 are supported.

Note that the particular CCI functionality is not yet ready for CXL upstream,
but we do have some support in the staging tree I maintain.

https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
specifically
https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c

Note that at this stage we only support a subset of commands.
Contributions of further support are welcome!

The intent of the current support, and of the related MCTP CCI access
(over I2C), is to enable ecosystem development, particularly of fabric
managers. For that, a tiny subset of commands was sufficient to confirm
that the architecture inside QEMU worked and that the proposed kernel
code functions correctly.

Jonathan

> 
> Thank you in advance for your assistance.
> 
> Ravi



* Re: Support for CXL v3.0 spec with QEMU
  2023-08-10 16:33 ` Jonathan Cameron
@ 2023-08-10 16:40   ` Ravi Kanth
  2023-08-11 12:04     ` Ravi Kanth
  2023-08-11 13:49     ` Jonathan Cameron
  0 siblings, 2 replies; 21+ messages in thread
From: Ravi Kanth @ 2023-08-10 16:40 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: linux-cxl

Thanks, Jonathan, for pointing this out. I will definitely take a
look. Is the current support for simulating the CCI mailbox only via
MCTP messages, or also via CXL driver IOCTLs?

Ravi

On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Thu, 10 Aug 2023 21:42:12 +0530
> Ravi Kanth <mvrravikanth@gmail.com> wrote:
>
> > Hello,
>
> Hi Ravi,
>
> >
> > I am writing this mail to know if the latest QEMU has support for CXL
> > spec v3.0? I understand from the older tutorials that QEMU has support
> > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > would like to specifically test the CCI mailbox functionality which
> > has been added to CXL v3.0 spec.
>
> The nature of the CXL specification and the QEMU support is that for
> devices there isn't a clear divide between different generations of
> the specification. By that I mean, that a device based on the CXL 2.0
> specification is compatible with the CXL 3.0 specification. As such,
> the QEMU emulation has focused on features of interest - it is a far
> from complete implementation of all the options in the CXL 2.0
> specification but conversely there are some parts of CXL 3.0 are supported.
>
> Note that the particular CCI functionality is not yet ready for CXL upstream,
> but we do have some support in the staging tree I maintain.
>
> https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> specifically
> https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
>
> Note at this stage we only support subset of commands.
> Contributions of more support welcomed!
>
> Intent of the current support and related MCTP CCI access (over I2C) is
> to enable ecosystem development, particularly of fabric managers.
> For that a tiny subset of commands was sufficient to be sure the architecture
> inside QEMU worked and that the proposed kernel code functions correctly.
>
> Jonathan
>
> >
> > Thank you in advance for your assistance.
> >
> > Ravi
>


-- 
Regards
M.V.R.Ravi Kanth


* Re: Support for CXL v3.0 spec with QEMU
  2023-08-10 16:40   ` Ravi Kanth
@ 2023-08-11 12:04     ` Ravi Kanth
  2023-08-11 13:52       ` Jonathan Cameron
  2023-08-11 13:49     ` Jonathan Cameron
  1 sibling, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-08-11 12:04 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: linux-cxl, Sajjan Rao

Hi Jonathan,
We built QEMU from your repository code, but we do not see any changes
in the output of lspci. Are there any prerequisites or command-line
parameters for booting QEMU that are needed for your changes to take
effect? Please let us know.

Thanks for your help.

Thanks
Ravi

On Thu, Aug 10, 2023 at 10:10 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:
>
> Thanks Jonathan for pointing this out. I will definitely take a look
> at this. Current support for the simulation of CCI mailbox is only via
> MCTP messages or even via CXL driver IOCTLs?
>
> Ravi
>
> On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Thu, 10 Aug 2023 21:42:12 +0530
> > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> >
> > > Hello,
> >
> > Hi Ravi,
> >
> > >
> > > I am writing this mail to know if the latest QEMU has support for CXL
> > > spec v3.0? I understand from the older tutorials that QEMU has support
> > > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > > would like to specifically test the CCI mailbox functionality which
> > > has been added to CXL v3.0 spec.
> >
> > The nature of the CXL specification and the QEMU support is that for
> > devices there isn't a clear divide between different generations of
> > the specification. By that I mean, that a device based on the CXL 2.0
> > specification is compatible with the CXL 3.0 specification. As such,
> > the QEMU emulation has focused on features of interest - it is a far
> > from complete implementation of all the options in the CXL 2.0
> > specification but conversely there are some parts of CXL 3.0 are supported.
> >
> > Note that the particular CCI functionality is not yet ready for CXL upstream,
> > but we do have some support in the staging tree I maintain.
> >
> > https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> > specifically
> > https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
> >
> > Note at this stage we only support subset of commands.
> > Contributions of more support welcomed!
> >
> > Intent of the current support and related MCTP CCI access (over I2C) is
> > to enable ecosystem development, particularly of fabric managers.
> > For that a tiny subset of commands was sufficient to be sure the architecture
> > inside QEMU worked and that the proposed kernel code functions correctly.
> >
> > Jonathan
> >
> > >
> > > Thank you in advance for your assistance.
> > >
> > > Ravi
> >
>
>
> --
> Regards
> M.V.R.Ravi Kanth



-- 
Regards
M.V.R.Ravi Kanth


* Re: Support for CXL v3.0 spec with QEMU
  2023-08-10 16:40   ` Ravi Kanth
  2023-08-11 12:04     ` Ravi Kanth
@ 2023-08-11 13:49     ` Jonathan Cameron
  1 sibling, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2023-08-11 13:49 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: linux-cxl

On Thu, 10 Aug 2023 22:10:48 +0530
Ravi Kanth <mvrravikanth@gmail.com> wrote:

> Thanks Jonathan for pointing this out. I will definitely take a look
> at this. Current support for the simulation of CCI mailbox is only via
> MCTP messages or even via CXL driver IOCTLs?

For the switch CCI PCI function, it's via CXL driver IOCTLs.

You will need:
https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com/T/#t
which is an RFC for the kernel side of things. There are examples in the
cover letter (including the relevant QEMU command line interface).

For MCTP you need a bunch of other stuff, as today we can only use the
ASPEED I2C controller - it's the only one with MCTP support. The
upstream driver doesn't handle ACPI, so you need:
https://lore.kernel.org/linux-cxl/20230531100600.13543-1-Jonathan.Cameron@huawei.com/
which includes various things I should upstream, plus some hacks to deal
with the reset that are not suitable for upstream.


> 
> Ravi
> 
> On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Thu, 10 Aug 2023 21:42:12 +0530
> > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> >  
> > > Hello,  
> >
> > Hi Ravi,
> >  
> > >
> > > I am writing this mail to know if the latest QEMU has support for CXL
> > > spec v3.0? I understand from the older tutorials that QEMU has support
> > > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > > would like to specifically test the CCI mailbox functionality which
> > > has been added to CXL v3.0 spec.  
> >
> > The nature of the CXL specification and the QEMU support is that for
> > devices there isn't a clear divide between different generations of
> > the specification. By that I mean, that a device based on the CXL 2.0
> > specification is compatible with the CXL 3.0 specification. As such,
> > the QEMU emulation has focused on features of interest - it is a far
> > from complete implementation of all the options in the CXL 2.0
> > specification but conversely there are some parts of CXL 3.0 are supported.
> >
> > Note that the particular CCI functionality is not yet ready for CXL upstream,
> > but we do have some support in the staging tree I maintain.
> >
> > https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> > specifically
> > https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
> >
> > Note at this stage we only support subset of commands.
> > Contributions of more support welcomed!
> >
> > Intent of the current support and related MCTP CCI access (over I2C) is
> > to enable ecosystem development, particularly of fabric managers.
> > For that a tiny subset of commands was sufficient to be sure the architecture
> > inside QEMU worked and that the proposed kernel code functions correctly.
> >
> > Jonathan
> >  
> > >
> > > Thank you in advance for your assistance.
> > >
> > > Ravi  
> >  
> 
> 



* Re: Support for CXL v3.0 spec with QEMU
  2023-08-11 12:04     ` Ravi Kanth
@ 2023-08-11 13:52       ` Jonathan Cameron
  2023-08-16 11:41         ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Jonathan Cameron @ 2023-08-11 13:52 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: linux-cxl, Sajjan Rao

On Fri, 11 Aug 2023 17:34:33 +0530
Ravi Kanth <mvrravikanth@gmail.com> wrote:

> Hi Jonathan,
> We build qemu with your repository code but we do not see any changes
> in the output of lspci. Do we have any prerequisites / command
> parameters w.r.t on how we boot the qemu for your changes to take
> effect? Please let us know.

As the switch CCI is a separate PCI function, it needs to be explicitly
added. See the cover letter of the kernel switch-cci RFC:

https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com

The switch part of it is:

 -device cxl-upstream,bus=cxl_rp_port0,id=us0,addr=0.0,multifunction=on \
 -device cxl-switch-mailbox-cci,bus=cxl_rp_port0,addr=0.1,target=us0 \
 -device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
 -device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
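
For reference, those -device lines need the CXL topology underneath
them (a machine with cxl=on, a pxb-cxl host bridge, and a root port).
An untested sketch of a fuller invocation, loosely following the
patterns in QEMU's CXL documentation - the image paths, sizes, and ids
here are placeholders:

```shell
qemu-system-x86_64 -M q35,cxl=on -m 4G -nographic \
  -drive file=rootfs.img,format=raw,if=virtio \
  -object memory-backend-file,id=cxl-mem0,share=on,mem-path=/tmp/cxl-mem0,size=256M \
  -object memory-backend-file,id=cxl-lsa0,share=on,mem-path=/tmp/cxl-lsa0,size=256M \
  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
  -device cxl-rp,port=0,bus=cxl.1,id=cxl_rp_port0,chassis=0,slot=2 \
  -device cxl-upstream,bus=cxl_rp_port0,id=us0,addr=0.0,multifunction=on \
  -device cxl-switch-mailbox-cci,bus=cxl_rp_port0,addr=0.1,target=us0 \
  -device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
  -device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
  -device cxl-type3,bus=swport0,memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0 \
  -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G
```

The type-3 device on the downstream port is optional for exercising the
switch mailbox itself, but gives the guest something to enumerate.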

We don't yet have the additions for MHDs, though some discussion on how
to do it has happened.  Those will support some FMAPI commands via
tunnelling through the main mailbox.

I'm on vacation from tonight until the 22nd, so good luck. Feel free to
post questions in the meantime, but my reply will take a while!

Jonathan

> 
> Thanks for your help.
> 
> Thanks
> Ravi
> 
> On Thu, Aug 10, 2023 at 10:10 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:
> >
> > Thanks Jonathan for pointing this out. I will definitely take a look
> > at this. Current support for the simulation of CCI mailbox is only via
> > MCTP messages or even via CXL driver IOCTLs?
> >
> > Ravi
> >
> > On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:  
> > >
> > > On Thu, 10 Aug 2023 21:42:12 +0530
> > > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > >  
> > > > Hello,  
> > >
> > > Hi Ravi,
> > >  
> > > >
> > > > I am writing this mail to know if the latest QEMU has support for CXL
> > > > spec v3.0? I understand from the older tutorials that QEMU has support
> > > > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > > > would like to specifically test the CCI mailbox functionality which
> > > > has been added to CXL v3.0 spec.  
> > >
> > > The nature of the CXL specification and the QEMU support is that for
> > > devices there isn't a clear divide between different generations of
> > > the specification. By that I mean, that a device based on the CXL 2.0
> > > specification is compatible with the CXL 3.0 specification. As such,
> > > the QEMU emulation has focused on features of interest - it is a far
> > > from complete implementation of all the options in the CXL 2.0
> > > specification but conversely there are some parts of CXL 3.0 are supported.
> > >
> > > Note that the particular CCI functionality is not yet ready for CXL upstream,
> > > but we do have some support in the staging tree I maintain.
> > >
> > > https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> > > specifically
> > > https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
> > >
> > > Note at this stage we only support subset of commands.
> > > Contributions of more support welcomed!
> > >
> > > Intent of the current support and related MCTP CCI access (over I2C) is
> > > to enable ecosystem development, particularly of fabric managers.
> > > For that a tiny subset of commands was sufficient to be sure the architecture
> > > inside QEMU worked and that the proposed kernel code functions correctly.
> > >
> > > Jonathan
> > >  
> > > >
> > > > Thank you in advance for your assistance.
> > > >
> > > > Ravi  
> > >  
> >
> >
> > --
> > Regards
> > M.V.R.Ravi Kanth  
> 
> 
> 



* Re: Support for CXL v3.0 spec with QEMU
  2023-08-11 13:52       ` Jonathan Cameron
@ 2023-08-16 11:41         ` Ravi Kanth
  2023-08-23 17:03           ` Jonathan Cameron
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-08-16 11:41 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: linux-cxl, Sajjan Rao

Thanks, Jonathan. We are now able to see the CCI endpoint in the lspci
output.

However, we do not see the /dev/cxl/switch0 node. Should we be loading
the CXL driver with the changes you suggested in the link below? If
yes, we are not able to download the source code, as we do not see a
git repository for it. Is this part of mainline?

https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com/T/#t

Ravi

On Fri, Aug 11, 2023 at 7:22 PM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Fri, 11 Aug 2023 17:34:33 +0530
> Ravi Kanth <mvrravikanth@gmail.com> wrote:
>
> > Hi Jonathan,
> > We build qemu with your repository code but we do not see any changes
> > in the output of lspci. Do we have any prerequisites / command
> > parameters w.r.t on how we boot the qemu for your changes to take
> > effect? Please let us know.
>
> As the switch cci is a separate PCI function it needs to be explicitly
> added. See the cover letter of kernel switch-cci RFC:
>
> https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com
>
>  -device cxl-upstream,bus=cxl_rp_port0,id=us0,addr=0.0,multifunction=on, \
>  -device cxl-switch-mailbox-cci,bus=cxl_rp_port0,addr=0.1,target=us0 \
>  -device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
>  -device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
>
> Is the switch part of it.
>
> We don't yet have the additions for MHDs though some discussion on how to
> do it has happened.  Those will support some FMAPI command via tunnelling throuhg
> the main mailbox.
>
> I'm on vacation from tonight until the 22nd, so good luck. Feel free
> to post questions in meantime but my reply will take a while!
>
> Jonathan
>
> >
> > Thanks for your help.
> >
> > Thanks
> > Ravi
> >
> > On Thu, Aug 10, 2023 at 10:10 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > >
> > > Thanks Jonathan for pointing this out. I will definitely take a look
> > > at this. Current support for the simulation of CCI mailbox is only via
> > > MCTP messages or even via CXL driver IOCTLs?
> > >
> > > Ravi
> > >
> > > On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
> > > <Jonathan.Cameron@huawei.com> wrote:
> > > >
> > > > On Thu, 10 Aug 2023 21:42:12 +0530
> > > > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > > >
> > > > > Hello,
> > > >
> > > > Hi Ravi,
> > > >
> > > > >
> > > > > I am writing this mail to know if the latest QEMU has support for CXL
> > > > > spec v3.0? I understand from the older tutorials that QEMU has support
> > > > > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > > > > would like to specifically test the CCI mailbox functionality which
> > > > > has been added to CXL v3.0 spec.
> > > >
> > > > The nature of the CXL specification and the QEMU support is that for
> > > > devices there isn't a clear divide between different generations of
> > > > the specification. By that I mean, that a device based on the CXL 2.0
> > > > specification is compatible with the CXL 3.0 specification. As such,
> > > > the QEMU emulation has focused on features of interest - it is a far
> > > > from complete implementation of all the options in the CXL 2.0
> > > > specification but conversely there are some parts of CXL 3.0 are supported.
> > > >
> > > > Note that the particular CCI functionality is not yet ready for CXL upstream,
> > > > but we do have some support in the staging tree I maintain.
> > > >
> > > > https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> > > > specifically
> > > > https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
> > > >
> > > > Note at this stage we only support subset of commands.
> > > > Contributions of more support welcomed!
> > > >
> > > > Intent of the current support and related MCTP CCI access (over I2C) is
> > > > to enable ecosystem development, particularly of fabric managers.
> > > > For that a tiny subset of commands was sufficient to be sure the architecture
> > > > inside QEMU worked and that the proposed kernel code functions correctly.
> > > >
> > > > Jonathan
> > > >
> > > > >
> > > > > Thank you in advance for your assistance.
> > > > >
> > > > > Ravi
> > > >
> > >
> > >
> > > --
> > > Regards
> > > M.V.R.Ravi Kanth
> >
> >
> >
>


-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-08-16 11:41         ` Ravi Kanth
@ 2023-08-23 17:03           ` Jonathan Cameron
  2023-09-01 14:03             ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Jonathan Cameron @ 2023-08-23 17:03 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: linux-cxl, Sajjan Rao

On Wed, 16 Aug 2023 17:11:17 +0530
Ravi Kanth <mvrravikanth@gmail.com> wrote:

> Thanks, Jonathan. We were able to now see the CCI endpoint in the lspci output.
> 
> However, we are not able to see the /dev/cxl/switch0 node. Should we
> be loading the cxl driver with the changes suggested by you in the
> link below? If yes, We are not able to download the source code as we
> do not see the git repository for the same. Is this part of the
> mainline?

It's not yet part of mainline, so you will have to apply the patches
and build a custom kernel.
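
One way to apply them (a sketch, assuming the b4 tool is installed; the
message-id is taken from the lore link quoted below):

```shell
# Pull the RFC series from lore.kernel.org as an applicable mbox
b4 am 20230804115414.14391-1-Jonathan.Cameron@huawei.com

# On top of a recent mainline or cxl tree; b4 prints the exact mbox
# filename and a suggested git am command, so adjust the glob as needed
git checkout -b switch-cci-test
git am ./*.mbx
make olddefconfig && make -j"$(nproc)"
```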

Jonathan

> 
> https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com/T/#t
> 
> Ravi
> 
> On Fri, Aug 11, 2023 at 7:22 PM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Fri, 11 Aug 2023 17:34:33 +0530
> > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> >  
> > > Hi Jonathan,
> > > We build qemu with your repository code but we do not see any changes
> > > in the output of lspci. Do we have any prerequisites / command
> > > parameters w.r.t on how we boot the qemu for your changes to take
> > > effect? Please let us know.  
> >
> > As the switch cci is a separate PCI function it needs to be explicitly
> > added. See the cover letter of kernel switch-cci RFC:
> >
> > https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com
> >
> >  -device cxl-upstream,bus=cxl_rp_port0,id=us0,addr=0.0,multifunction=on, \
> >  -device cxl-switch-mailbox-cci,bus=cxl_rp_port0,addr=0.1,target=us0 \
> >  -device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
> >  -device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
> >
> > Is the switch part of it.
> >
> > We don't yet have the additions for MHDs though some discussion on how to
> > do it has happened.  Those will support some FMAPI command via tunnelling throuhg
> > the main mailbox.
> >
> > I'm on vacation from tonight until the 22nd, so good luck. Feel free
> > to post questions in meantime but my reply will take a while!
> >
> > Jonathan
> >  
> > >
> > > Thanks for your help.
> > >
> > > Thanks
> > > Ravi
> > >
> > > On Thu, Aug 10, 2023 at 10:10 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:  
> > > >
> > > > Thanks Jonathan for pointing this out. I will definitely take a look
> > > > at this. Current support for the simulation of CCI mailbox is only via
> > > > MCTP messages or even via CXL driver IOCTLs?
> > > >
> > > > Ravi
> > > >
> > > > On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
> > > > <Jonathan.Cameron@huawei.com> wrote:  
> > > > >
> > > > > On Thu, 10 Aug 2023 21:42:12 +0530
> > > > > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > > > >  
> > > > > > Hello,  
> > > > >
> > > > > Hi Ravi,
> > > > >  
> > > > > >
> > > > > > I am writing this mail to know if the latest QEMU has support for CXL
> > > > > > spec v3.0? I understand from the older tutorials that QEMU has support
> > > > > > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > > > > > would like to specifically test the CCI mailbox functionality which
> > > > > > has been added to CXL v3.0 spec.  
> > > > >
> > > > > The nature of the CXL specification and the QEMU support is that for
> > > > > devices there isn't a clear divide between different generations of
> > > > > the specification. By that I mean, that a device based on the CXL 2.0
> > > > > specification is compatible with the CXL 3.0 specification. As such,
> > > > > the QEMU emulation has focused on features of interest - it is a far
> > > > > from complete implementation of all the options in the CXL 2.0
> > > > > specification but conversely there are some parts of CXL 3.0 are supported.
> > > > >
> > > > > Note that the particular CCI functionality is not yet ready for CXL upstream,
> > > > > but we do have some support in the staging tree I maintain.
> > > > >
> > > > > https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> > > > > specifically
> > > > > https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
> > > > >
> > > > > Note at this stage we only support subset of commands.
> > > > > Contributions of more support welcomed!
> > > > >
> > > > > Intent of the current support and related MCTP CCI access (over I2C) is
> > > > > to enable ecosystem development, particularly of fabric managers.
> > > > > For that a tiny subset of commands was sufficient to be sure the architecture
> > > > > inside QEMU worked and that the proposed kernel code functions correctly.
> > > > >
> > > > > Jonathan
> > > > >  
> > > > > >
> > > > > > Thank you in advance for your assistance.
> > > > > >
> > > > > > Ravi  
> > > > >  
> > > >
> > > >
> > > > --
> > > > Regards
> > > > M.V.R.Ravi Kanth  
> > >
> > >
> > >  
> >  
> 
> 



* Re: Support for CXL v3.0 spec with QEMU
  2023-08-23 17:03           ` Jonathan Cameron
@ 2023-09-01 14:03             ` Ravi Kanth
  2023-09-01 14:18               ` Gregory Price
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-09-01 14:03 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: linux-cxl, Sajjan Rao

Thanks, Jonathan. We built the custom kernel and were able to run the
test tool successfully and get the data.

However, when we modify the code to send raw commands, they do not go
through to the device: we always get EPERM back from the driver. Below
is the code snippet. Should we enable some setting, or do changes need
to be made from a code perspective?

From the CXL driver code, we see that EPERM can only be returned via
cxl_mem_raw_command_allowed(). We are not really sure why the raw
command does not go through while the MEM command
(CXL_MEM_COMMAND_ID_INFO_STAT_IDENTIFY) runs successfully.

Can you please help us here?

Code snippet:
  cmd.id = CXL_MEM_COMMAND_ID_RAW;
  cmd.raw.opcode = 0x0001; /* Identify (Opcode 0001h) per the CXL spec */
  cmd.out.size = sizeof(is_identify);
  cmd.out.payload = (__u64)&is_identify;

  printf("sending identify command...\n");
  printf("cmd.id:%x cmd.raw.opcode:%x\n", cmd.id, cmd.raw.opcode);
  printf("cmd.raw.rsvd:%d\n", cmd.raw.rsvd);
  rc = ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);

Appreciate your help all along.

Ravi

On Wed, Aug 23, 2023 at 10:33 PM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Wed, 16 Aug 2023 17:11:17 +0530
> Ravi Kanth <mvrravikanth@gmail.com> wrote:
>
> > Thanks, Jonathan. We were able to now see the CCI endpoint in the lspci output.
> >
> > However, we are not able to see the /dev/cxl/switch0 node. Should we
> > be loading the cxl driver with the changes suggested by you in the
> > link below? If yes, We are not able to download the source code as we
> > do not see the git repository for the same. Is this part of the
> > mainline?
>
> Not yet part of mainline so you will have to apply the patches and build
> a custom kernel.
>
> Jonathan
>
> >
> > https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com/T/#t
> >
> > Ravi
> >
> > On Fri, Aug 11, 2023 at 7:22 PM Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:
> > >
> > > On Fri, 11 Aug 2023 17:34:33 +0530
> > > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > >
> > > > Hi Jonathan,
> > > > We build qemu with your repository code but we do not see any changes
> > > > in the output of lspci. Do we have any prerequisites / command
> > > > parameters w.r.t on how we boot the qemu for your changes to take
> > > > effect? Please let us know.
> > >
> > > As the switch cci is a separate PCI function it needs to be explicitly
> > > added. See the cover letter of kernel switch-cci RFC:
> > >
> > > https://lore.kernel.org/linux-cxl/20230804115414.14391-1-Jonathan.Cameron@huawei.com
> > >
> > >  -device cxl-upstream,bus=cxl_rp_port0,id=us0,addr=0.0,multifunction=on, \
> > >  -device cxl-switch-mailbox-cci,bus=cxl_rp_port0,addr=0.1,target=us0 \
> > >  -device cxl-downstream,port=0,bus=us0,id=swport0,chassis=0,slot=4 \
> > >  -device cxl-downstream,port=1,bus=us0,id=swport1,chassis=0,slot=5 \
> > >
> > > Is the switch part of it.
> > >
> > > We don't yet have the additions for MHDs though some discussion on how to
> > > do it has happened.  Those will support some FMAPI command via tunnelling throuhg
> > > the main mailbox.
> > >
> > > I'm on vacation from tonight until the 22nd, so good luck. Feel free
> > > to post questions in meantime but my reply will take a while!
> > >
> > > Jonathan
> > >
> > > >
> > > > Thanks for your help.
> > > >
> > > > Thanks
> > > > Ravi
> > > >
> > > > On Thu, Aug 10, 2023 at 10:10 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > > > >
> > > > > Thanks Jonathan for pointing this out. I will definitely take a look
> > > > > at this. Current support for the simulation of CCI mailbox is only via
> > > > > MCTP messages or even via CXL driver IOCTLs?
> > > > >
> > > > > Ravi
> > > > >
> > > > > On Thu, Aug 10, 2023 at 10:03 PM Jonathan Cameron
> > > > > <Jonathan.Cameron@huawei.com> wrote:
> > > > > >
> > > > > > On Thu, 10 Aug 2023 21:42:12 +0530
> > > > > > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> > > > > >
> > > > > > > Hello,
> > > > > >
> > > > > > Hi Ravi,
> > > > > >
> > > > > > >
> > > > > > > I am writing this mail to know if the latest QEMU has support for CXL
> > > > > > > spec v3.0? I understand from the older tutorials that QEMU has support
> > > > > > > for CXL v2.0. Is there any update to QEMU to support the CXL v3.0? I
> > > > > > > would like to specifically test the CCI mailbox functionality which
> > > > > > > has been added to CXL v3.0 spec.
> > > > > >
> > > > > > The nature of the CXL specification and the QEMU support is that for
> > > > > > devices there isn't a clear divide between different generations of
> > > > > > the specification. By that I mean, that a device based on the CXL 2.0
> > > > > > specification is compatible with the CXL 3.0 specification. As such,
> > > > > > the QEMU emulation has focused on features of interest - it is a far
> > > > > > from complete implementation of all the options in the CXL 2.0
> > > > > > specification but conversely there are some parts of CXL 3.0 are supported.
> > > > > >
> > > > > > Note that the particular CCI functionality is not yet ready for CXL upstream,
> > > > > > but we do have some support in the staging tree I maintain.
> > > > > >
> > > > > > https://gitlab.com/jic23/qemu/-/commits/cxl-2023-08-07/
> > > > > > specifically
> > > > > > https://gitlab.com/jic23/qemu/-/commit/2eb7e6402a45b359c304cea894a1d27625a4b80c
> > > > > >
> > > > > > Note at this stage we only support subset of commands.
> > > > > > Contributions of more support welcomed!
> > > > > >
> > > > > > Intent of the current support and related MCTP CCI access (over I2C) is
> > > > > > to enable ecosystem development, particularly of fabric managers.
> > > > > > For that a tiny subset of commands was sufficient to be sure the architecture
> > > > > > inside QEMU worked and that the proposed kernel code functions correctly.
> > > > > >
> > > > > > Jonathan
> > > > > >
> > > > > > >
> > > > > > > Thank you in advance for your assistance.
> > > > > > >
> > > > > > > Ravi
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Regards
> > > > > M.V.R.Ravi Kanth
> > > >
> > > >
> > > >
> > >
> >
> >
>


-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-09-01 14:03             ` Ravi Kanth
@ 2023-09-01 14:18               ` Gregory Price
  2023-09-01 14:37                 ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Gregory Price @ 2023-09-01 14:18 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

On Fri, Sep 01, 2023 at 07:33:57PM +0530, Ravi Kanth wrote:
> Thanks Jonathan. We built the custom kernel and were able to run the
> test tool successfully and get the data.
> 
> However, when we modify the code to send raw commands, they do not go
> through; the driver always returns EPERM. Below is the code snippet.
> Should we enable some settings, or should changes be made in the code?
> 
> From the CXL driver code, we see that EPERM can only be returned by
> cxl_mem_raw_command_allowed(). We are not sure why the raw command
> does not go through while the MEM command
> (CXL_MEM_COMMAND_ID_INFO_STAT_IDENTIFY) runs successfully.
> 
> Can you please help us here?
> 
> Code snippet:
>   cmd.id = CXL_MEM_COMMAND_ID_RAW;
>   cmd.raw.opcode = 0x0001; /* This is an Identify (Opcode 0001h) per the CXL spec */
>   cmd.out.size = sizeof(is_identify);
>   cmd.out.payload = (__u64)&is_identify;
> 
>   printf("sending identify command...\n");
>   printf("cmd.id:%x cmd.raw.opcode:%x\n",cmd.id,cmd.raw.opcode);
>   printf("cmd.raw.rsvd:%d\n",cmd.raw.rsvd);
>   rc = ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);
> 
> Appreciate your help all along.
> 
> Ravi
> 

Raw commands are not allowed by default; you must enable
CONFIG_CXL_MEM_RAW_COMMANDS in the kernel build config.
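For reference, a minimal way to flip that option in a kernel source tree
before rebuilding (a sketch, assuming the standard in-tree kbuild helper
script and an existing .config) is:

```shell
# Sketch: enable raw CXL mailbox commands in the build config, then
# rebuild. Assumes you are at the root of a kernel source tree.
scripts/config --enable CONFIG_CXL_MEM_RAW_COMMANDS
make olddefconfig
make -j"$(nproc)"
```

Note the raw command path is deliberately discouraged; the kernel logs a
"raw command path used" warning the first time it is exercised.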

~Gregory


* Re: Support for CXL v3.0 spec with QEMU
  2023-09-01 14:18               ` Gregory Price
@ 2023-09-01 14:37                 ` Ravi Kanth
  2023-09-01 16:19                   ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-09-01 14:37 UTC (permalink / raw)
  To: Gregory Price; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

Thanks Gregory. I will try enabling CONFIG_CXL_MEM_RAW_COMMANDS and check.

The reason we want to use the raw command interface is to send
necessary opcodes that are of interest to the application as defined
in the CXL spec. cxl_mem_commands has a very limited opcode set.

Ravi




-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-09-01 14:37                 ` Ravi Kanth
@ 2023-09-01 16:19                   ` Ravi Kanth
       [not found]                     ` <SJ0PR17MB5512449C5FFD76AD50B3D3AF83E4A@SJ0PR17MB5512.namprd17.prod.outlook.com>
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-09-01 16:19 UTC (permalink / raw)
  To: Gregory Price; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

We enabled CONFIG_CXL_MEM_RAW_COMMANDS in the kernel. Now the raw
commands go through and we are getting the responses. However, we saw
the below crash when we sent the command for the first time. We are
not able to recreate it now.

Stack trace -
[  257.746414] ------------[ cut here ]------------
[  257.746918] cxl_switchdev 0000:0d:00.1: raw command path used
[  257.747572] WARNING: CPU: 3 PID: 1013 at
drivers/cxl/core/mbox.c:673 cxl_send_cmd+0x43f/0x550 [cxl_core]
[  257.748589] Modules linked in: nft_fib_inet nft_fib_ipv4
nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6
nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6
nf_defrag_ig
[  257.755464] CPU: 3 PID: 1013 Comm: a.out Not tainted 6.4.11 #4
[  257.756080] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009),
BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
[  257.757264] RIP: 0010:cxl_send_cmd+0x43f/0x550 [cxl_core]
[  257.757857] Code: c6 05 4e 63 01 00 01 4c 8b 77 50 4d 85 f6 75 03
4c 8b 37 e8 43 b6 2a ce 4c 89 f2 48 c7 c7 39 12 84 c0 48 89 c6 e8 31
f5 8f cd <0f> 0b e9 a9 fe ff ff 41 bf ff ff ff ff e9 db fd ff ff 0fb
[  257.759817] RSP: 0018:ff3ad3ba01e23dd8 EFLAGS: 00010292
[  257.760374] RAX: 0000000000000000 RBX: ff2df715038ed758 RCX: 0000000000000000
[  257.761150] RDX: ff2df7153bbae580 RSI: ff2df7153bba1540 RDI: ff2df7153bba1540
[  257.761932] RBP: ff3ad3ba01e23e10 R08: 0000000000000000 R09: ff3ad3ba01e23c68
[  257.762702] R10: 0000000000000003 R11: ffffffff90146508 R12: 00007ffeaada2cf0
[  257.763478] R13: ff3ad3ba01e23de0 R14: ff2df714c106f3c0 R15: 00000000ffffffff
[  257.764232] FS:  00007f88913e3640(0000) GS:ff2df7153bb80000(0000)
knlGS:0000000000000000
[  257.765085] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  257.765704] CR2: 00007f889139afd8 CR3: 000000013e69a006 CR4: 0000000000771ee0
[  257.766468] PKRU: 55555554
[  257.766762] Call Trace:
[  257.767029]  <TASK>
[  257.767263]  ? cxl_send_cmd+0x43f/0x550 [cxl_core]
[  257.767785]  ? __warn+0x81/0x130
[  257.768140]  ? cxl_send_cmd+0x43f/0x550 [cxl_core]
[  257.768681]  ? report_bug+0x171/0x1a0
[  257.769080]  ? __x86_return_thunk+0x9/0x10
[  257.769543]  ? prb_read_valid+0x1b/0x30
[  257.769962]  ? handle_bug+0x3c/0x80
[  257.770349]  ? exc_invalid_op+0x17/0x70
[  257.770771]  ? asm_exc_invalid_op+0x1a/0x20
[  257.771228]  ? cxl_send_cmd+0x43f/0x550 [cxl_core]
[  257.771760]  ? cxl_send_cmd+0x43f/0x550 [cxl_core]
[  257.772289]  cxl_swdev_ioctl+0x4f/0x80 [cxl_core]
[  257.772809]  __x64_sys_ioctl+0x91/0xd0
[  257.773222]  do_syscall_64+0x5d/0x90
[  257.773620]  ? do_syscall_64+0x6c/0x90
[  257.774017]  ? do_syscall_64+0x6c/0x90
[  257.774438]  ? __x86_return_thunk+0x9/0x10
[  257.774873]  ? exc_page_fault+0x7f/0x180
[  257.775586]  entry_SYSCALL_64_after_hwframe+0x77/0xe1
[  257.776373] RIP: 0033:0x7f889130aedd
[  257.777025] Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10
c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00
00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 280
[  257.779555] RSP: 002b:00007ffeaada2c10 EFLAGS: 00000246 ORIG_RAX:
0000000000000010
[  257.780632] RAX: ffffffffffffffda RBX: 00007ffeaada2e88 RCX: 00007f889130aedd
[  257.781656] RDX: 00007ffeaada2cf0 RSI: 00000000c030ce02 RDI: 0000000000000003
[  257.782692] RBP: 00007ffeaada2c60 R08: 0000000000000064 R09: 0000000000000000
[  257.783714] R10: 00007f889121b2e0 R11: 0000000000000246 R12: 0000000000000001
[  257.784741] R13: 0000000000000000 R14: 00007f889141e000 R15: 0000000000403e00
[  257.785753]  </TASK>
[  257.786243] ---[ end trace 0000000000000000 ]---




-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
       [not found]                     ` <SJ0PR17MB5512449C5FFD76AD50B3D3AF83E4A@SJ0PR17MB5512.namprd17.prod.outlook.com>
@ 2023-09-02 14:58                       ` Ravi Kanth
  2023-10-18 10:30                         ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-09-02 14:58 UTC (permalink / raw)
  To: Gregory Price; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

Ok. Got it. Thanks Gregory.

Ravi

On Fri, Sep 1, 2023 at 10:07 PM Gregory Price
<gregory.price@memverge.com> wrote:
>
>
> > However we saw the below crash when we sent the command for the first time.
>
> That's expected; there's an explicit warning the first time a raw command is sent. This is intended behavior, no crash has occurred.



-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-09-02 14:58                       ` Ravi Kanth
@ 2023-10-18 10:30                         ` Ravi Kanth
  2023-10-19 17:08                           ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-10-18 10:30 UTC (permalink / raw)
  To: Gregory Price; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

Hi Jonathan / Gregory,

1. Do we have support in the CXL driver to read the "Device status
registers", and specifically the "Event status register", sections
8.2.8.3 and 8.2.8.3.1?
2. If an interrupt is posted by the device firmware, how will user
space applications be notified? Do we have an interface for the same
in the CXL driver?

If the above features are already supported in the CXL driver, can you
please point us to sample code snippets to achieve the same?

Also does the switch cci function change part of the mainline?

Thanks for your help in advance.

Thanks
Ravi




-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-10-18 10:30                         ` Ravi Kanth
@ 2023-10-19 17:08                           ` Ravi Kanth
  2023-10-20 20:38                             ` Ira Weiny
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-10-19 17:08 UTC (permalink / raw)
  To: Gregory Price; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

Hi Jonathan / Gregory,
Just wanted to touch base on the questions below, in case you have any
inputs. Thanks for your help.

Thanks
Ravi

On Wed, Oct 18, 2023 at 4:00 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:
>
> Hi Jonathan / Gregory,
>
> 1. Do we have the support in CXL driver to read the "Device status
> registers" and specifically "Event status register" section 8.2.8.3
> and 8.2.8.3.1?
> 2. If an interrupt is posted by the device firmware, how will user
> space applications be notified ? Do we have an interface for the same
> in CXL driver?
>
> If the above features are already supported in CXL driver, Can you
> please point us to the sample code snippets to achieve the same?
>
> Also does the switch cci function change part of the mainline?
>
> Thanks for your help in advance.
>
> Thanks
> Ravi
>



-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-10-19 17:08                           ` Ravi Kanth
@ 2023-10-20 20:38                             ` Ira Weiny
  2023-10-22  6:45                               ` Ravi Kanth
  2023-10-23 13:44                               ` Jonathan Cameron
  0 siblings, 2 replies; 21+ messages in thread
From: Ira Weiny @ 2023-10-20 20:38 UTC (permalink / raw)
  To: Ravi Kanth, Gregory Price; +Cc: Jonathan Cameron, linux-cxl, Sajjan Rao

Ravi Kanth wrote:
> Hi Jonathan / Gregory,
> Just wanted to touch base on below questions and if you have any
> inputs on the same. Thanks for your help.
> 
> Thanks
> Ravi
> 
> On Wed, Oct 18, 2023 at 4:00 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:
> >
> > Hi Jonathan / Gregory,
> >
> > 1. Do we have the support in CXL driver to read the "Device status
> > registers" and specifically "Event status register" section 8.2.8.3
> > and 8.2.8.3.1?

Yes, the upstream driver reads this when processing the event interrupt
from the device. It then uses the value to choose which logs to read.

See cxl_event_thread() in the kernel source.
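To illustrate what is in that register: the Event Status register is a
small bitmap indicating which event logs have records pending. A
hypothetical decoder (bit positions taken from the kernel's
CXLDEV_EVENT_STATUS_* definitions; verify them against the spec revision
you target) might look like:

```python
# Sketch: decode the CXL Event Status register (CXL r3.0, section 8.2.8.3.1).
# Bit assignments follow the kernel's CXLDEV_EVENT_STATUS_* macros and are
# illustrative only; check them against your target spec revision.

EVENT_STATUS_BITS = {
    0: "Informational Event Log",
    1: "Warning Event Log",
    2: "Failure Event Log",
    3: "Fatal Event Log",
}

def pending_event_logs(status: int) -> list[str]:
    """Return the names of the event logs that have records pending."""
    return [name for bit, name in EVENT_STATUS_BITS.items()
            if status & (1 << bit)]
```

For example, a status value of 0x5 would indicate the Informational and
Failure logs both have pending records.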

> > 2. If an interrupt is posted by the device firmware, how will user
> > space applications be notified ? Do we have an interface for the same
> > in CXL driver?

All events are reported through the trace infrastructure.
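Concretely, assuming a kernel built with the CXL event tracepoints and
tracefs mounted in the usual place, the events can be observed from user
space along these lines (the tracepoint group name is assumed to be
`cxl`; check what is present under events/ on your kernel):

```shell
# Sketch: watch CXL event tracepoints from user space (requires root).
echo 1 > /sys/kernel/tracing/events/cxl/enable
cat /sys/kernel/tracing/trace_pipe
```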

> >
> > If the above features are already supported in CXL driver, Can you
> > please point us to the sample code snippets to achieve the same?

ndctl has the ability to monitor these events, and there is example C code in there.

See .../cxl/event_trace.c in the ndctl project.[1]

[1] https://github.com/pmem/ndctl

> >
> > Also does the switch cci function change part of the mainline?

I'm not sure I parse this question but I'm also not super familiar with
switch cci.  So I'll let Jonathan answer this.

Ira

> >
> > Thanks for your help in advance.
> >
> > Thanks
> > Ravi
> >

[snip]


* Re: Support for CXL v3.0 spec with QEMU
  2023-10-20 20:38                             ` Ira Weiny
@ 2023-10-22  6:45                               ` Ravi Kanth
  2023-10-23 13:50                                 ` Jonathan Cameron
  2023-10-23 13:44                               ` Jonathan Cameron
  1 sibling, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-10-22  6:45 UTC (permalink / raw)
  To: Ira Weiny; +Cc: Gregory Price, Jonathan Cameron, linux-cxl, Sajjan Rao

Thanks Ira for providing the references.

I am specifically looking for event support for the switch CCI
endpoint (/dev/cxl/switch) via the IOCTL interface. I could see
cxl_swmb_setup_mailbox() in the switchdev.c file. However, I am not
able to understand how user space applications could make use of it.

The ndctl project also does not have references on how we could make
use of the switch CCI endpoint to get the interrupts, nor on how we
can read the "Device status registers", specifically the "Event status
register" (sections 8.2.8.3 and 8.2.8.3.1), via the IOCTL interface
from the switch CCI endpoint.

- Ravi






-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-10-20 20:38                             ` Ira Weiny
  2023-10-22  6:45                               ` Ravi Kanth
@ 2023-10-23 13:44                               ` Jonathan Cameron
  1 sibling, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2023-10-23 13:44 UTC (permalink / raw)
  To: Ira Weiny; +Cc: Ravi Kanth, Gregory Price, linux-cxl, Sajjan Rao

On Fri, 20 Oct 2023 13:38:22 -0700
Ira Weiny <ira.weiny@intel.com> wrote:

> Ravi Kanth wrote:
> > Hi Jonathan / Gregory,
> > Just wanted to touch base on below questions and if you have any
> > inputs on the same. Thanks for your help.
> > 
> > Thanks
> > Ravi
> > 
> > On Wed, Oct 18, 2023 at 4:00 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:  
> > >
> > > Hi Jonathan / Gregory,
> > >
> > > 1. Do we have the support in CXL driver to read the "Device status
> > > registers" and specifically "Event status register" section 8.2.8.3
> > > and 8.2.8.3.1?  
> 
> Yes the upstream driver reads this when processing the Event interrupt
> from the device.  Then it uses the value to choose which logs to read.
> 
> See cxl_event_thread() in the kernel source.
> 
> > > 2. If an interrupt is posted by the device firmware, how will user
> > > space applications be notified ? Do we have an interface for the same
> > > in CXL driver?  
> 
> All events are reported through the trace infrastructure.
> 
> > >
> > > If the above features are already supported in CXL driver, Can you
> > > please point us to the sample code snippets to achieve the same?  
> 
> ndctl has the ability to monitor these events and example C code in there.
> 
> See .../cxl/event_trace.c in the ndctl project.[1]
> 
> [1] https://github.com/pmem/ndctl
> 
> > >
> > > Also does the switch cci function change part of the mainline?  
> 
> I'm not sure I parse this question but I'm also not super familiar with
> switch cci.  So I'll let Jonathan answer this.

Patches posted for review only at the moment - so 6.8 at earliest given timing.
https://lore.kernel.org/all/20231016125323.18318-1-Jonathan.Cameron@huawei.com/

The main change to the existing handling is that a bunch of mailbox-related
code is factored out for reuse.  Otherwise, it's 'just another PCI endpoint driver'.

Jonathan

> 
> Ira
> 
> > >
> > > Thanks for your help in advance.
> > >
> > > Thanks
> > > Ravi
> > >  
> 
> [snip]
> 



* Re: Support for CXL v3.0 spec with QEMU
  2023-10-22  6:45                               ` Ravi Kanth
@ 2023-10-23 13:50                                 ` Jonathan Cameron
  2023-10-26  1:52                                   ` Ravi Kanth
  0 siblings, 1 reply; 21+ messages in thread
From: Jonathan Cameron @ 2023-10-23 13:50 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: Ira Weiny, Gregory Price, linux-cxl, Sajjan Rao

On Sun, 22 Oct 2023 12:15:13 +0530
Ravi Kanth <mvrravikanth@gmail.com> wrote:

> Thanks Ira for providing the references.
> 
> I am specifically looking for event support for the switch CCI
> endpoint(/dev/cxl/switch) via the IOCTL interface. I could see
> cxl_swmb_setup_mailbox() in the switchdev.c file. However I am not
> able to understand how user space applications could make use of it.

If you have all the moving parts (i.e. recent qemu + the kernel patches
I just linked to in another branch of this thread) then the example at:
https://gitlab.com/jic23/cxl-fmapi-tests

will let you interact with the emulated switch mailbox CCI
via the raw ioctl command.  So far it does a basic crawl out and
enumerates what it finds.
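For anyone wanting a feel for what those tests do, the rough shape of
one raw FM-API command through the switch CCI character device is
sketched below. The device node name, the response buffer size, and the
use of the Identify Switch Device opcode (5100h in the FM-API command
set) are my assumptions; treat the cxl-fmapi-tests repository as the
authoritative example.

```c
/* Sketch only: issue a raw FM-API command (assumed opcode 5100h,
 * Identify Switch Device) through the CXL raw mailbox ioctl on a
 * switch CCI device. Needs CONFIG_CXL_MEM_RAW_COMMANDS plus the
 * switch CCI kernel patches; the device node name is hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/cxl_mem.h>

int main(void)
{
	struct cxl_send_command cmd;
	unsigned char resp[128];	/* assumed big enough for the response */
	int fd, rc;

	fd = open("/dev/cxl/switch0", O_RDWR);	/* hypothetical node name */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&cmd, 0, sizeof(cmd));	/* reserved fields must be zero */
	cmd.id = CXL_MEM_COMMAND_ID_RAW;
	cmd.raw.opcode = 0x5100;	/* FM-API Identify Switch Device (assumed) */
	cmd.out.size = sizeof(resp);
	cmd.out.payload = (uint64_t)(uintptr_t)resp;

	rc = ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);
	if (rc < 0)
		perror("CXL_MEM_SEND_COMMAND");
	else
		printf("retval %u, %d response bytes\n", cmd.retval, cmd.out.size);

	close(fd);
	return rc < 0;
}
```

Whether retval and the response layout come back as expected will depend
on the QEMU and kernel patch revisions in use.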

> 
> The ndctl project does not also have references on how we could make
> use of the switch CCI endpoint to get the interrupts nor on how we can
> read "Device status registers" and specifically "Event status
> register" section 8.2.8.3 and 8.2.8.3.1 via IOCTL interface from the
> switch CCI endpoint.

Whilst we haven't really discussed it yet, I'd not expect ndctl (which
is focused on host interaction) to support much in the way of specific
features for fabric management.

The cxl-fmapi-tests are not intended to be used for production use cases
either. My expectation is that one of the projects more generally looking
at CXL fabric management will provide that functionality. I've not really
been keeping track of these but I gather there is work in various standards
orgs (outside of the CXL consortium) to define how it will be done at a
higher level.

Jonathan



* Re: Support for CXL v3.0 spec with QEMU
  2023-10-23 13:50                                 ` Jonathan Cameron
@ 2023-10-26  1:52                                   ` Ravi Kanth
  2023-10-26  8:59                                     ` Jonathan Cameron
  0 siblings, 1 reply; 21+ messages in thread
From: Ravi Kanth @ 2023-10-26  1:52 UTC (permalink / raw)
  To: Jonathan Cameron; +Cc: Ira Weiny, Gregory Price, linux-cxl, Sajjan Rao

Thanks Jonathan. For switch devices, do you mean that the CXL driver
will not have functionality to support or notify host applications
about async events/interrupts via IOCTL, or that this has not yet been
discussed?

Is it possible to provide an interface in the CXL driver to read the
"Device status registers", specifically the "Event status register"
(sections 8.2.8.3 and 8.2.8.3.1), via the IOCTL interface from the
switch CCI endpoint?

Ravi



-- 
Ravi


* Re: Support for CXL v3.0 spec with QEMU
  2023-10-26  1:52                                   ` Ravi Kanth
@ 2023-10-26  8:59                                     ` Jonathan Cameron
  0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2023-10-26  8:59 UTC (permalink / raw)
  To: Ravi Kanth; +Cc: Ira Weiny, Gregory Price, linux-cxl, Sajjan Rao

On Thu, 26 Oct 2023 07:22:16 +0530
Ravi Kanth <mvrravikanth@gmail.com> wrote:

> Thanks Jonathan. For switch devices, do you mean that the CXL driver
> will not have functionality to support or notify host applications
> about async events/interrupts via IOCTL, or that this has not yet been
> discussed?

No notifications yet. We'll figure it out; we've just not gotten there
yet. For events, we might just implement the tracepoint support from
the type3 driver.  Should be fairly easy to do, though I've not yet done
enough diving into the spec to be sure.

> 
> Is it possible to provide an interface in the CXL driver to read the
> "Device status registers", specifically the "Event status register"
> (sections 8.2.8.3 and 8.2.8.3.1), via the IOCTL interface from the
> switch CCI endpoint?

I'd rather avoid that if we can, as it will share even less infrastructure
with the main driver than we currently do.

Jonathan

> 
> Ravi
> 
> On Mon, Oct 23, 2023 at 7:20 PM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Sun, 22 Oct 2023 12:15:13 +0530
> > Ravi Kanth <mvrravikanth@gmail.com> wrote:
> >  
> > > Thanks Ira for providing the references.
> > >
> > > I am specifically looking for event support for the switch CCI
> > > endpoint(/dev/cxl/switch) via the IOCTL interface. I could see
> > > cxl_swmb_setup_mailbox() in the switchdev.c file. However I am not
> > > able to understand how user space applications could make use of it.  
> >
> > If you have all the moving parts (i.e. recent qemu + the kernel patches
> > I just linked to in another branch of this thread) then the example at:
> > https://gitlab.com/jic23/cxl-fmapi-tests
> >
> > will let you interact with the emulated switch mailbox CCI
> > via the raw ioctl command.  So far it does basic crawl out and enumerate
> > what it finds.
> >  
> > >
> > > The ndctl project does not also have references on how we could make
> > > use of the switch CCI endpoint to get the interrupts nor on how we can
> > > read "Device status registers" and specifically "Event status
> > > register" section 8.2.8.3 and 8.2.8.3.1 via IOCTL interface from the
> > > switch CCI endpoint.  
> >
> > Whilst we haven't really discussed it yet, I'd not expect ndctl (which
> > is focused on host interaction) to support much in the way of specific
> > features for fabric management.
> >
> > The cxl-fmapi-tests are not intended to be used for production use cases
> > either. My expectation is that one of the projects more generally looking
> > at CXL fabric management will provide that functionality. I've not really
> > been keeping track of these but I gather there is work in various standards
> > orgs (outside of the CXL consortium) to define how it will be done at a
> > higher level.
> >
> > Jonathan
> >  
> > >
> > > - Ravi
> > >
> > >
> > >
> > > On Sat, Oct 21, 2023 at 2:08 AM Ira Weiny <ira.weiny@intel.com> wrote:  
> > > >
> > > > Ravi Kanth wrote:  
> > > > > Hi Jonathan / Gregory,
> > > > > Just wanted to touch base on below questions and if you have any
> > > > > inputs on the same. Thanks for your help.
> > > > >
> > > > > Thanks
> > > > > Ravi
> > > > >
> > > > > On Wed, Oct 18, 2023 at 4:00 PM Ravi Kanth <mvrravikanth@gmail.com> wrote:  
> > > > > >
> > > > > > Hi Jonathan / Gregory,
> > > > > >
> > > > > > 1. Do we have support in the CXL driver to read the "Device Status
> > > > > > Registers", and specifically the "Event Status Register", sections 8.2.8.3
> > > > > > and 8.2.8.3.1?  
> > > >
Yes, the upstream driver reads this when processing the event interrupt
from the device.  Then it uses the value to choose which logs to read.
> > > >
> > > > See cxl_event_thread() in the kernel source.
> > > >  
> > > > > > 2. If an interrupt is posted by the device firmware, how will user
> > > > > > space applications be notified? Do we have an interface for this
> > > > > > in the CXL driver?  
> > > >
> > > > All events are reported through the trace infrastructure.
> > > >  
> > > > > >
> > > > > > If the above features are already supported in the CXL driver, can you
> > > > > > please point us to sample code snippets that achieve this?  
> > > >
> > > > ndctl has the ability to monitor these events and example C code in there.
> > > >
> > > > See .../cxl/event_trace.c in the ndctl project.[1]
> > > >
> > > > [1] https://github.com/pmem/ndctl
> > > >  
> > > > > >
> > > > > > Also does the switch cci function change part of the mainline?  
> > > >
> > > > I'm not sure I parse this question but I'm also not super familiar with
> > > > switch cci.  So I'll let Jonathan answer this.
> > > >
> > > > Ira
> > > >  
> > > > > >
> > > > > > Thanks for your help in advance.
> > > > > >
> > > > > > Thanks
> > > > > > Ravi
> > > > > >  
> > > >
> > > > [snip]  
> > >
> > >
> > >  
> >  
> 
> 




Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-10 16:12 Support for CXL v3.0 spec with QEMU Ravi Kanth
2023-08-10 16:33 ` Jonathan Cameron
2023-08-10 16:40   ` Ravi Kanth
2023-08-11 12:04     ` Ravi Kanth
2023-08-11 13:52       ` Jonathan Cameron
2023-08-16 11:41         ` Ravi Kanth
2023-08-23 17:03           ` Jonathan Cameron
2023-09-01 14:03             ` Ravi Kanth
2023-09-01 14:18               ` Gregory Price
2023-09-01 14:37                 ` Ravi Kanth
2023-09-01 16:19                   ` Ravi Kanth
     [not found]                     ` <SJ0PR17MB5512449C5FFD76AD50B3D3AF83E4A@SJ0PR17MB5512.namprd17.prod.outlook.com>
2023-09-02 14:58                       ` Ravi Kanth
2023-10-18 10:30                         ` Ravi Kanth
2023-10-19 17:08                           ` Ravi Kanth
2023-10-20 20:38                             ` Ira Weiny
2023-10-22  6:45                               ` Ravi Kanth
2023-10-23 13:50                                 ` Jonathan Cameron
2023-10-26  1:52                                   ` Ravi Kanth
2023-10-26  8:59                                     ` Jonathan Cameron
2023-10-23 13:44                               ` Jonathan Cameron
2023-08-11 13:49     ` Jonathan Cameron
