* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-08-19 18:32 Adnan Khaleel
  2010-08-20  5:22 ` [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Isaku Yamahata
  0 siblings, 1 reply; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-19 18:32 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: qemu-devel


Isaku,

I'm having some difficulty building the sources; I get the following message:

*akhaleel@yar95 qemu-q35 $ ./configure --help
: bad interpreter: No such file or directory

And I get a similar error while compiling seabios as well.

What shell are you using, or am I missing something? I'm compiling from a typical bash shell and using gcc v4.4.0.

In vgabios, there is a requirement for bcc. Is that the Borland C compiler?

Thanks

Adnan
  _____  

From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
To: Adnan Khaleel [mailto:adnan@khaleel.us]
Cc: qemu-devel@nongnu.org
Sent: Wed, 18 Aug 2010 22:19:04 -0500
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X

On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
  > Hello Qemu developers,
  >
  > I'm interested in developing a device model that plugs into Qemu that is based
  > on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
  > simulator to this PCIe interface and use the entire platform (Qemu + GPU
  > simulator) for studying CPU/GPU interactions.
  > 
  > I'm not terribly familiar with the Qemu device model and I'm looking for some
  > assistance, perhaps a starting template for pcie and msi-x that would offer the
  > basic functionality that I could then build upon.
  > 
  > I have looked at the various devices that are already modelled and included
  > with Qemu (v0.12.5 at least), and I've noticed a few pci devices, e.g.
  > ne2k and cirrus-pci; however, only one device truly seems to utilize both
  > of the technologies that I'm interested in, and that is virtio-pci.c.
  > 
  > I'm not sure what virtio-pci does so I'm not sure if that is a suitable
  > starting point for me.
  > 
  > Any help, suggestions etc would be extremely helpful and much appreciated.
  
  Qemu doesn't support pcie at the moment.
  Only partial patches have been merged; more patches still have to
  be merged for pcie to work fully. The following repos are available:
  
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  
  Note: patched seabios and vgabios are needed, and you have to pass the ACPI DSDT
  for q35.
  Example:
  qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
  
  These repos are for those who want to try/develop pcie support,
  not for upstream merge, so they include patches unsuitable for upstream.
  They include a pcie port/switch emulator which utilizes pcie and
  MSI (not MSI-X).
  
  The difference between a PCI device and a PCIe device is the configuration
  space size.
  By setting PCIDeviceInfo::is_express = 1, you'll get the 4K configuration
  space. Helper functions for pcie are found in qemu/hw/pcie.c.
  For msi-x, see qemu/hw/msix.c.
  
  Thanks,
  -- 
  yamahata
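
For orientation, a minimal sketch of how the pieces mentioned above fit together with this 0.12-era qdev API. The device name, vector count, capability offset and map callback are hypothetical, and the header names are assumed from the files mentioned above; this is illustrative only, not code from the q35 tree:

#include "hw/pci.h"
#include "hw/pcie.h"
#include "hw/msix.h"

/* Hypothetical skeletal PCIe endpoint with MSI-X. */
static int sample_pcie_initfn(PCIDevice *pci_dev)
{
    int rc;

    /* Put the MSI-X table in BAR 2, a plain 32-bit memory BAR. */
    rc = msix_init(pci_dev, 4 /* vectors */, 2 /* bar_nr */, 0);
    if (rc < 0) {
        return rc;
    }
    pci_register_bar(pci_dev, 2, msix_bar_size(pci_dev),
                     PCI_BASE_ADDRESS_SPACE_MEMORY, msix_mmio_map);

    /* Add the PCIe capability as an endpoint; 0x60 is an arbitrary free
       config-space offset, 0 a placeholder port number for this sketch. */
    rc = pci_pcie_cap_init(pci_dev, 0x60, PCI_EXP_TYPE_ENDPOINT, 0);
    if (rc < 0) {
        return rc;
    }
    return 0;
}

static PCIDeviceInfo sample_pcie_info = {
    .qdev.name  = "sample-pcie",        /* hypothetical device name */
    .qdev.size  = sizeof(PCIDevice),
    .init       = sample_pcie_initfn,
    .is_express = 1,                    /* PCIe: 4K config space instead of 256 bytes */
};

static void sample_pcie_register(void)
{
    pci_qdev_register(&sample_pcie_info);
}
device_init(sample_pcie_register);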
    


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
  2010-08-19 18:32 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
@ 2010-08-20  5:22 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-08-20  5:22 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: qemu-devel

On Thu, Aug 19, 2010 at 01:32:42PM -0500, Adnan Khaleel wrote:
> Isaku,
> 
> I'm having some difficulty building the sources; I get the following message:
> 
> *akhaleel@yar95 qemu-q35 $ ./configure --help
> : bad interpreter: No such file or directory
> 
> And I get a similar error while compiling seabios as well.
>
> What shell are you using, or am I missing something? I'm compiling from a
> typical bash shell and using gcc v4.4.0.

I'm not sure; the configure script isn't modified.
Can you compile normal qemu?
The first line of the script is #!/bin/sh, so I suppose you have /bin/sh.


> In vgabios, there is a requirement for bcc. Is that the Borland C compiler?

No. Most Linux distros have a bcc package; you just need to install it,
with yum install bcc or similar.

Thanks,


> 
> Thanks
> 
> Adnan
> 
>     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
>     From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
>     To: Adnan Khaleel [mailto:adnan@khaleel.us]
>     Cc: qemu-devel@nongnu.org
>     Sent: Wed, 18 Aug 2010 22:19:04 -0500
>     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
>     and MSI-X
> 
>     On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
>     > Hello Qemu developers,
>     >
>     > I'm interested in developing a device model that plugs into Qemu that is
>     based
>     > on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
>     > simulator to this PCIe interface and use the entire platfom (Qemu + GPU
>     > simulator) for studying cpu, gpu interactions.
>     >
>     > I'm not terribly familiar with the Qemu device model and I'm looking for
>     some
>     > assistance, perhaps a starting template for pcie and msi-x that would
>     offer the
>     > basic functionality that I could then build upon.
>     >
>     > I have looked at the various devices that already modelled that are
>     included
>     > with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
>     eg;
>     > ne2k and cirrus-pci etc, however only one device truly seems to utilize
>     both
>     > the technologies that I'm interested in and that is the virtio-pci.c
>     >
>     > I'm not sure what virtio-pci does so I'm not sure if that is a suitable
>     > starting point for me.
>     >
>     > Any help, suggestions etc would be extremely helpful and much
>     appreciated.
> 
>     Qemu doesn't support pcie at the moment.
>     Only partial patches have been merged, still more patches have to
>     be merged for pcie to fully work. The following repo is available.
> 
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
> 
>     Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
>     for q35.
>     example:
>     qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/
>     q35-acpi-dsdt.aml
> 
>     This repo is for those who want to try/develop pcie support,
>     not for upstream merge. So they include patches unsuitable for upstream.
>     The repo includes pcie port switch emulator which utilize pcie and
>     MSI(not MSI-X).
> 
>     The difference between PCI device and PCIe device is configuration
>     space size.
>     By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
>     space. Helper functions for pcie are found in qemu/hw/pcie.c
>     For msi-x, see qemu/hw/msix.c.
> 
>     Thanks,
>     --
>     yamahata
> 

-- 
yamahata


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
       [not found] <20100909190713.d8dc99ce@shadowfax.no-ip.com>
@ 2010-09-10  2:00 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-09-10  2:00 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: qemu-devel

http://www.seabios.org/pipermail/seabios/2010-July/000796.html

I haven't found time yet to respin it and check the PMM stuff.
If you could give it a try, it would be appreciated.

thanks,

On Thu, Sep 09, 2010 at 02:07:13PM -0500, Adnan Khaleel wrote:
> Can you point me to this patch? I found one for BAR overflow checking that you
> wrote, which isn't merged into the seabios git source I downloaded from you. I'm
> assuming this is not the one you're talking about, correct?
> 
> 
>     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
>     From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
>     To: Adnan Khaleel [mailto:adnan@khaleel.us]
>     Cc: Cam Macdonell [mailto:cam@cs.ualberta.ca], qemu-devel@nongnu.org
>     Sent: Thu, 02 Sep 2010 21:20:12 -0500
>     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe?
>     and MSI-X
> 
>     On Thu, Sep 02, 2010 at 12:42:42PM -0500, Adnan Khaleel wrote:
>     > I've tried everything you mentioned and I still get the same problem. The
>     only
>     > thing that seems to avoid that issue is if I reduce the aperture size
>     from
>     > 0x2000000000ull to 0x2000000ull.
> 
>     I suppose that Cam is seeing the same issue.
> 
>     Right now seabios can't handle too huge BAR due
>     to overflow.
>     There is a rejected patch floating around,
>     but I haven't created a revised patch yet.
> 
>     >
>     > Here is the relevant section of code:
>     >
>     > static const unsigned long long BAR_Regions[6][2] =
>     > {
>     > // len , type
>     > { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY |
>     > PCI_BASE_ADDRESS_MEM_TYPE_64} , //BAR0,
>     > { 0, 0} , // BAR1
>     > { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_IO } , //BAR2,
>     > { 0, 0} , // BAR3 for MSI-X
>     > { 0, 0} , // BAR4
>     > { 0, 0} , // BAR5
>     > };
>     >
>     > static int pcie_msix_initfn(PCIDevice *pci_dev)
>     > {
>     > PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
>     > PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
>     > PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
>     > int rc, i;
>     >
>     > PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);
>     >
>     > pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
>     > pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);
>     >
>     > memcpy(d->dev.config, g_cfg_init, sizeof(g_cfg_init[0x20]));
>     > d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn,
>     > pcie_msix_mem_write_fn, d);
>     >
>     > int msix_mem_bar = 0; // Since its a 64bit BAR, we take up BAR0 & BAR1
>     > int msix_io_bar = 2;
>     > int msix_mmio_bar = 3;
>     >
>     > pci_register_bar(&d->dev, msix_mem_bar, BAR_Regions[msix_mem_bar][0],
>     > BAR_Regions[msix_mem_bar][1], pcie_msix_mem_map);
>     > pci_register_bar(&d->dev, msix_io_bar, BAR_Regions[msix_io_bar][0],
>     > BAR_Regions[msix_io_bar][1], pcie_msix_io_map);
>     >
>     > rc = msix_init(&d->dev, d->vectors, msix_mmio_bar, 0);
>     >
>     > if (!rc) {
>     > PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__,
>     > msix_mmio_bar);
>     > pci_register_bar(&d->dev, msix_mmio_bar, msix_bar_size(&d->dev),
>     > PCI_BASE_ADDRESS_SPACE_MEMORY, msix_mmio_map);
>     > PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__, d->
>     > vectors);
>     > }
>     > else {
>     > PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
>     > return rc;
>     > }
>     >
>     > // Activate the vectors
>     > for (i = 0; i < d->vectors; i++) {
>     > msix_vector_use(&d->dev, i);
>     > }
>     >
>     > rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET,
>     > PCI_EXP_TYPE_ENDPOINT, p->port);
>     > if (rc < 0) {
>     > return rc;
>     > }
>     >
>     > pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
>     > pcie_cap_deverr_init(&d->dev);
>     > pcie_cap_ari_init(&d->dev);
>     > rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
>     > if (rc < 0) {
>     > return rc;
>     > }
>     >
>     > PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
>     > return 0;
>     > }
>     >
>     > Another question I have is why doesn't the device show up when I try a
>     cat /
>     > proc/interrupts.
>     >
>     > linux-an84:~/AriesKernelModules/gni/aries/ghal # cat /proc/interrupts
>     > CPU0
>     > 0: 694 IO-APIC-edge timer
>     > 1: 6 IO-APIC-edge i8042
>     > 4: 753 IO-APIC-edge serial
>     > 8: 1 IO-APIC-edge rtc0
>     > 9: 0 IO-APIC-fasteoi acpi
>     > 12: 89 IO-APIC-edge i8042
>     > 14: 3522 IO-APIC-edge ata_piix
>     > 15: 785 IO-APIC-edge ata_piix
>     > 16: 162 IO-APIC-fasteoi eth0
>     > 4344: 0 PCI-MSI-edge aerdrv
>     > 4345: 0 PCI-MSI-edge aerdrv
>     > 4346: 0 PCI-MSI-edge aerdrv
>     > 4347: 0 PCI-MSI-edge aerdrv
>     > 4348: 0 PCI-MSI-edge aerdrv
>     > 4349: 0 PCI-MSI-edge aerdrv
>     > 4350: 0 PCI-MSI-edge aerdrv
>     > 4351: 0 PCI-MSI-edge aerdrv
>     > NMI: 0 Non-maskable interrupts
>     > LOC: 107095 Local timer interrupts
>     > RES: 0 Rescheduling interrupts
>     > CAL: 0 function call interrupts
>     > TLB: 0 TLB shootdowns
>     > TRM: 0 Thermal event interrupts
>     > THR: 0 Threshold APIC interrupts
>     > SPU: 0 Spurious interrupts
>     > ERR: 0
>     >
>     > Shouldn't there be an entry for the MSI-X device?
>     >
>     > Thanks for all your input.
>     >
>     > AK
>     >
>     >
>     >
>     > ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
>     > Probably what you want is something like
>     >
>     > { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY |
>     > PCI_BASE_ADDRESS_MEM_TYPE_64} , //BAR0
>     > { 0, 0} , //BAR1
>     > // 64bit BAR occupies 2 BAR entries so that BAR1 can't be used.
>     > { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_IO } , //BAR2
>     > { 0, 0} , //BAR3
>     > // for MSI-X
>     > { 0, 0} , //BAR4
>     > { 0, 0} //BAR5
>     >
>     >
>     >
> 
>     --
>     yamahata
> 

-- 
yamahata


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
  2010-09-02 17:42 Adnan Khaleel
@ 2010-09-03  2:20 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-09-03  2:20 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: Cam Macdonell, qemu-devel

On Thu, Sep 02, 2010 at 12:42:42PM -0500, Adnan Khaleel wrote:
> I've tried everything you mentioned and I still get the same problem. The only
> thing that seems to avoid that issue is if I reduce the aperture size from
> 0x2000000000ull to 0x2000000ull.

I suppose that Cam is seeing the same issue.

Right now seabios can't handle such a huge BAR due
to overflow.
There is a rejected patch floating around,
but I haven't created a revised patch yet.

> 
> Here is the relevant section of code:
> 
> static const unsigned long long BAR_Regions[6][2] =
> {
>     // len , type
>     { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY |
> PCI_BASE_ADDRESS_MEM_TYPE_64} ,  //BAR0,       
>     { 0, 0} , // BAR1
>     { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR2,
>     { 0, 0} , // BAR3 for MSI-X
>     { 0, 0} , // BAR4
>     { 0, 0} , // BAR5   
> };
> 
> static int pcie_msix_initfn(PCIDevice *pci_dev)
> {
>     PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
>     PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
>     PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
>     int rc, i;
> 
>     PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);
> 
>     pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
>     pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);
> 
>     memcpy(d->dev.config, g_cfg_init, sizeof(g_cfg_init[0x20]));
>     d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn,
> pcie_msix_mem_write_fn, d);
> 
>     int msix_mem_bar  = 0; // Since its a 64bit BAR, we take up BAR0 & BAR1
>     int msix_io_bar   = 2;
>     int msix_mmio_bar = 3;
> 
>     pci_register_bar(&d->dev, msix_mem_bar, BAR_Regions[msix_mem_bar][0],
> BAR_Regions[msix_mem_bar][1], pcie_msix_mem_map);
>     pci_register_bar(&d->dev, msix_io_bar, BAR_Regions[msix_io_bar][0],
> BAR_Regions[msix_io_bar][1], pcie_msix_io_map);
> 
>     rc = msix_init(&d->dev, d->vectors, msix_mmio_bar, 0);
>    
>     if (!rc) {
>         PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__,
> msix_mmio_bar);
>         pci_register_bar(&d->dev, msix_mmio_bar, msix_bar_size(&d->dev),
> PCI_BASE_ADDRESS_SPACE_MEMORY, msix_mmio_map);
>         PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__, d->
> vectors);
>     }
>     else {
>         PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
>         return rc;
>     }
>    
>     // Activate the vectors
>     for (i = 0; i < d->vectors; i++) {
>         msix_vector_use(&d->dev, i);
>     }
> 
>     rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET,
> PCI_EXP_TYPE_ENDPOINT, p->port);
>     if (rc < 0) {
>         return rc;
>     }
> 
>     pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
>     pcie_cap_deverr_init(&d->dev);
>     pcie_cap_ari_init(&d->dev);
>     rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
>     if (rc < 0) {
>         return rc;
>     }
> 
>     PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
>     return 0;
> }
> 
> Another question I have is why the device doesn't show up when I do a
> cat /proc/interrupts.
> 
> linux-an84:~/AriesKernelModules/gni/aries/ghal # cat /proc/interrupts
>            CPU0
>   0:        694   IO-APIC-edge      timer
>   1:          6   IO-APIC-edge      i8042
>   4:        753   IO-APIC-edge      serial
>   8:          1   IO-APIC-edge      rtc0
>   9:          0   IO-APIC-fasteoi   acpi
>  12:         89   IO-APIC-edge      i8042
>  14:       3522   IO-APIC-edge      ata_piix
>  15:        785   IO-APIC-edge      ata_piix
>  16:        162   IO-APIC-fasteoi   eth0
> 4344:          0   PCI-MSI-edge      aerdrv
> 4345:          0   PCI-MSI-edge      aerdrv
> 4346:          0   PCI-MSI-edge      aerdrv
> 4347:          0   PCI-MSI-edge      aerdrv
> 4348:          0   PCI-MSI-edge      aerdrv
> 4349:          0   PCI-MSI-edge      aerdrv
> 4350:          0   PCI-MSI-edge      aerdrv
> 4351:          0   PCI-MSI-edge      aerdrv
> NMI:          0   Non-maskable interrupts
> LOC:     107095   Local timer interrupts
> RES:          0   Rescheduling interrupts
> CAL:          0   function call interrupts
> TLB:          0   TLB shootdowns
> TRM:          0   Thermal event interrupts
> THR:          0   Threshold APIC interrupts
> SPU:          0   Spurious interrupts
> ERR:          0
> 
> Shouldn't there be an entry for the MSI-X device?
> 
> Thanks for all your input.
> 
> AK
> 
> 
> 
>     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
>     Probably what you want is something like
> 
>     { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY |
>     PCI_BASE_ADDRESS_MEM_TYPE_64} , //BAR0
>     { 0, 0} , //BAR1
>     // 64bit BAR occupies 2 BAR entries so that BAR1 can't be used.
>     { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_IO } , //BAR2
>     { 0, 0} , //BAR3
>     // for MSI-X
>     { 0, 0} , //BAR4
>     { 0, 0} //BAR5
> 
> 
> 

-- 
yamahata


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-09-02 22:56 Adnan Khaleel
  0 siblings, 0 replies; 19+ messages in thread
From: Adnan Khaleel @ 2010-09-02 22:56 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: qemu-devel


During bootup, I also see the following error message:

pci 0000:40:00.0: BAR 0 bad alignment 2000000000: 0x00000000000000-0x00001fffffffff

Any idea what's causing this?

AK

  _____  

From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
To: Adnan Khaleel [mailto:adnan@khaleel.us]
Cc: Cam Macdonell [mailto:cam@cs.ualberta.ca], qemu-devel@nongnu.org
Sent: Wed, 01 Sep 2010 21:38:31 -0500
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X

On Wed, Sep 01, 2010 at 02:07:33PM -0500, Adnan Khaleel wrote:
  > Yamahata, Cam,
  > 
  > Thank you both very much for pointers about Qemu coding for PCIe and MSI-X.
  > 
  > I'm at a point where I can see my device when I do an lspci -t -v as shown
  > below.
  > 
  > linux-an84:~ # lspci -t -v
  > -[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
  >            +-01.0  Cirrus Logic GD 5446
  >            +-04.0-[0000:20]--
  >            +-18.0-[0000:21]--
  >            +-18.1-[0000:22]--
  >            +-18.2-[0000:23]--
  >            +-18.3-[0000:24]--
  >            +-18.4-[0000:25]--
  >            +-18.5-[0000:26]--
  >            +-19.0-[0000:36-bf]--+-00.0-[0000:37-47]--+-00.0-[0000:38]--
  >            |                    |                    +-01.0-[0000:39]--
  >            |                    |                    +-02.0-[0000:3a]--
  >            |                    |                    +-03.0-[0000:3b]--
  >            |                    |                    +-04.0-[0000:3c]--
  >            |                    |                    +-05.0-[0000:3d]--
  >            |                    |                    +-06.0-[0000:3e]--
  >            |                    |                    +-07.0-[0000:3f]--
  >            |                    |                    +-08.0-[0000:40]----00.0 
  > Cray Inc Device 0301  <- The device that I've included
  > 
  > However, I'm having a bit of an issue with the MSI-X.
  > 
  > I'm following the code examples in virtio-pci.c and ivshmem.c that Cam pointed
  > out to. I've got bar 0&1 already occupied so I assign the msix_mmio_map to bar
  > 2. However, when I do that, Qemu fails to boot and fails with the following
  > assertion fail:
  > 
  > unused outb: port=0x00f1 data=0x00
  > qemu: fatal: Trying to execute code outside RAM or ROM at 0x0000000000100000
  > 
  > Couple of thins I'm unsure about:
  > 1. Am I registering the 64bit bar addresses correctly?
  > pci_register_bar(&d->dev, i, BAR_Regions[i][0], PCI_BASE_ADDRESS_SPACE_IO |
  > PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_io_map);
  
  No. PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64 is invalid.
  PCI doesn't support 64bit io space, but 32bit. And x86 supports 64k
  io sparce.
  
  
  > 2. In the function int msix_init(PCIDevice *pdev, unsigned short nentries,
  > unsigned bar_nr, unsigned bar_size);
  > I'm not sure what bar_nr is, however setting it to 1 (as in code examples
  > above) or to 2 (bar that I want to register the msix_mmio_map to) both fail
  > with the same error.
  
  It should be 2. I'm not sure why it failed.
  
  
  > rc = msix_init(&d->dev, d->vectors, 2, 0);
  > :
  > :
  > pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev),
  > PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, msix_mmio_map);
  > 
  > 
  > Here is my init function code in its entirety:
  > 
  > static const unsigned long long BAR_Regions[6][2] =
  > {
  >     // len , type
  >     { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY} ,  //BAR0
  >     { 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR1
  >     { 0, 0} ,  //BAR2
  >     { 0, 0} ,  //BAR3
  >     { 0, 0} ,  //BAR4
  >     { 0, 0}    //BAR5
  > };
  
  Probably what you want is something like
  
  { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64} ,  //BAR0
  { 0, 0} ,  //BAR1 
   // 64bit BAR occupies 2 BAR entries so that BAR1 can't be used.
  { 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR2
  { 0, 0} ,  //BAR3
   // for MSI-X
  { 0, 0} ,  //BAR4
  { 0, 0}    //BAR5
  
  > 
  > static int pcie_msix_initfn(PCIDevice *pci_dev)
  > {
  >     PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
  >     PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
  >     PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
  >     int rc, i;
  > 
  >     PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);
  > 
  >     pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
  >     pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);
  >     d->dev.config[PCI_REVISION_ID] = PCIE_MSIX_VERSION;
  >     d->dev.config[PCI_SUBSYSTEM_VENDOR_ID]   = PCIE_MSIX_VID & 0xff;
  >     d->dev.config[PCI_SUBSYSTEM_VENDOR_ID+1] = (PCIE_MSIX_VID >> 8) & 0xff;
  >     d->dev.config[PCI_SUBSYSTEM_ID]   = PCIE_MSIX_SS_DID & 0xff;
  >     d->dev.config[PCI_SUBSYSTEM_ID+1] = (PCIE_MSIX_SS_DID >> 8) & 0xff;
  
  Use pci_set_word().
  
  >     d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn,
  > pcie_msix_mem_write_fn, d);
  >    
  >     for(i=0; i < PCI_NUM_REGIONS -1; i++) { //-1 for the Exp ROM BAR
  >         if(BAR_Regions[i][0] != 0)
  >         {
  >             if(BAR_Regions[i][1] == PCI_BASE_ADDRESS_SPACE_IO)
  >             {
  >                 //io region
  >                 PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n",
  > __FUNCTION__, i);
  >                 pci_register_bar(&d->dev, i, BAR_Regions[i][0],
  > PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_io_map);
  >             } else {
  >                 //mem region
  >                 PRINT_DEBUG("%s: Registering Bar %i as MEM BAR\n",
  > __FUNCTION__, i);
  >                 pci_register_bar(&d->dev, i, BAR_Regions[i][0],
  > PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
  > pcie_msix_mem_map);
  >             }
  >         }
  >     }
  >     d->dev.config[PCI_INTERRUPT_PIN] = 1;
  > 
  >     rc = msix_init(&d->dev, d->vectors, 2, 0);
  >    
  >     if (!rc) {
  >         PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__, i);
  >         pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev),
  > PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, msix_mmio_map);
  >         PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__, d->
  > vectors);
  >     }
  >     else {
  >         PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
  >         exit(1);
  >     }
  >    
  >     // Activate the vectors
  >     for (i = 0; i < d->vectors; i++) {
  >         msix_vector_use(&d->dev, i);
  >     }
  > 
  >     rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET,
  > PCI_EXP_TYPE_ENDPOINT, p->port);
  >     if (rc < 0) {
  >         return rc;
  >     }
  > 
  >     pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
  >     pcie_cap_deverr_init(&d->dev);
  >     pcie_cap_ari_init(&d->dev);
  >     rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
  >     if (rc < 0) {
  >         return rc;
  >     }
  > 
  >     PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
  >     return 0;
  > }
  > 
  > 
  > Thanks
  > 
  > AK
  > 
  > 
  > 
  > 
  >     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  >     From: Cam Macdonell [mailto:cam@cs.ualberta.ca]
  >     To: adnan@khaleel.us
  >     Cc: Isaku Yamahata [mailto:yamahata@valinux.co.jp], qemu-devel@nongnu.org
  >     Sent: Fri, 27 Aug 2010 10:48:48 -0500
  >     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
  >     and MSI-X
  > 
  >     On Wed, Aug 25, 2010 at 4:39 PM, Adnan Khaleel <adnan@khaleel.us> wrote:
  >     > Hi Isaku,
  >     >
  >     > I've made some progress in coding the device template but its no where
  >     near
  >     > complete.
  >     >
  >     > I've created some files and am attaching it to this note. Based on what I
  >     > could gather from the pcie source files I've made a stab at creating a
  >     > simple model. I've also attached a file for a simple pci device that
  >     works
  >     > under regular Qemu. I would like to duplicate its functionality in your
  >     pcie
  >     > environment for starters.
  >     >
  >     > Could you please take a look at the files I've created and tell me if
  >     I've
  >     > understood your pcie model correctly. Any help will be truly appreciated.
  >     >
  >     > Adnan
  > 
  >     Hi Adnan,
  > 
  >     There is a fairly simple device I've created called "ivshmem" that is
  >     in the qemu git tree. It is a regular PCI device that exports a
  >     shared memory object via a BAR and supports a few registers and
  >     optional MSI-X interrupts (I had to pick through the virtio code to
  >     get MSI-X working, so looking at ivshmem might save you some effort).
  >     My device is somewhat similar to a graphics card actually which I
  >     recall is your goal. The purpose of ivshmem is to support sharing
  >     memory between multiple guests running on the same host. It follows
  >     the qdev model which you will need to do.
  > 
  >     Cam
  > 
  >     >
  >     > The five files I've modified from your git repository are as follows
  >     >
  >     > hw/pci_ids.h                    // Added vendor id defines
  >     > hw/pc_q35.c                    // Device instantiation
  >     > hw/pcie_msix_template.h  // Device header file
  >     > hw/pcie_msix_template.c  // Device file
  >     > Makefile.objs                   // Added pcie_msix_template.o to list of
  >     > objects being built
  >     >
  >     > Everything should compile without any warnings or errors.
  >     >
  >     > The last file:
  >     > sc_link_pci.c
  >     > Is the original PCI device that I'm trying to convert into being PCIe and
  >     > MSI-X and is included merely for reference to help you understand what
  >     I'd
  >     > like to achieve in your environment.
  >     >
  >     >
  >     > ________________________________
  >     > From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
  >     > To: Adnan Khaleel [mailto:adnan@khaleel.us]
  >     > Cc: qemu-devel@nongnu.org
  >     > Sent: Wed, 18 Aug 2010 22:19:04 -0500
  >     > Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
  >     > and MSI-X
  >     >
  >     > On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
  >     >> Hello Qemu developers,
  >     >>
  >     >> I'm interested in developing a device model that plugs into Qemu that is
  >     >> based
  >     >> on a PCIe interface and uses MSI-X. My goal is to ultimately attach a
  >     GPU
  >     >> simulator to this PCIe interface and use the entire platfom (Qemu + GPU
  >     >> simulator) for studying cpu, gpu interactions.
  >     >>
  >     >> I'm not terribly familiar with the Qemu device model and I'm looking for
  >     >> some
  >     >> assistance, perhaps a starting template for pcie and msi-x that would
  >     >> offer the
  >     >> basic functionality that I could then build upon.
  >     >>
  >     >> I have looked at the various devices that already modelled that are
  >     >> included
  >     >> with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
  >     >> eg;
  >     >> ne2k and cirrus-pci etc, however only one device truly seems to utilize
  >     >> both
  >     >> the technologies that I'm interested in and that is the virtio-pci.c
  >     >>
  >     >> I'm not sure what virtio-pci does so I'm not sure if that is a suitable
  >     >> starting point for me.
  >     >>
  >     >> Any help, suggestions etc would be extremely helpful and much
  >     appreciated.
  >     >
  >     > Qemu doesn't support pcie at the moment.
  >     > Only partial patches have been merged, still more patches have to
  >     > be merged for pcie to fully work. The following repo is available.
  >     >
  >     > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  >     > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  >     > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  >     >
  >     > Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
  >     > for q35.
  >     > example:
  >     > qemu-system-x86_64 -M pc_q35 -acpitable
  >     > load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
  >     >
  >     > This repo is for those who want to try/develop pcie support,
  >     > not for upstream merge. So they include patches unsuitable for upstream.
  >     > The repo includes pcie port switch emulator which utilize pcie and
  >     > MSI(not MSI-X).
  >     >
  >     > The difference between PCI device and PCIe device is configuration
  >     > space size.
  >     > By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
  >     > space. Helper functions for pcie are found in qemu/hw/pcie.c
  >     > For msi-x, see qemu/hw/msix.c.
  >     >
  >     > Thanks,
  >     > --
  >     > yamahata
  >     >
  > 
  
  -- 
  yamahata
    


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-09-02 17:42 Adnan Khaleel
  2010-09-03  2:20 ` [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Isaku Yamahata
  0 siblings, 1 reply; 19+ messages in thread
From: Adnan Khaleel @ 2010-09-02 17:42 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: Cam Macdonell, qemu-devel


I've tried everything you mentioned and I still get the same problem. The only thing that seems to avoid the issue is reducing the aperture size from 0x2000000000ull to 0x2000000ull.

Here is the relevant section of code:

static const unsigned long long BAR_Regions[6][2] = 
{
    // len , type 
    { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64} ,  //BAR0,        
    { 0, 0} , // BAR1
    { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR2,
    { 0, 0} , // BAR3 for MSI-X
    { 0, 0} , // BAR4
    { 0, 0} , // BAR5    
};

static int pcie_msix_initfn(PCIDevice *pci_dev)
{
    PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
    PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
    PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
    int rc, i;

    PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);

    pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
    pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);

    memcpy(d->dev.config, g_cfg_init, sizeof(g_cfg_init[0x20]));
    d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn, pcie_msix_mem_write_fn, d);

    int msix_mem_bar  = 0; // Since it's a 64bit BAR, we take up BAR0 & BAR1
    int msix_io_bar   = 2;
    int msix_mmio_bar = 3;

    pci_register_bar(&d->dev, msix_mem_bar, BAR_Regions[msix_mem_bar][0], BAR_Regions[msix_mem_bar][1], pcie_msix_mem_map);
    pci_register_bar(&d->dev, msix_io_bar, BAR_Regions[msix_io_bar][0], BAR_Regions[msix_io_bar][1], pcie_msix_io_map);

    rc = msix_init(&d->dev, d->vectors, msix_mmio_bar, 0);
    
    if (!rc) {
        PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__, msix_mmio_bar);
        pci_register_bar(&d->dev, msix_mmio_bar, msix_bar_size(&d->dev), PCI_BASE_ADDRESS_SPACE_MEMORY, msix_mmio_map);
        PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__, d->vectors);
    }
    else {
        PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
        return rc;
    }
    
    // Activate the vectors
    for (i = 0; i < d->vectors; i++) {
        msix_vector_use(&d->dev, i);
    }

    rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET, PCI_EXP_TYPE_ENDPOINT, p->port);
    if (rc < 0) {
        return rc;
    }

    pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
    pcie_cap_deverr_init(&d->dev);
    pcie_cap_ari_init(&d->dev);
    rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
    if (rc < 0) {
        return rc;
    }

    PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
    return 0;
}

Another question I have is why the device doesn't show up when I do a cat /proc/interrupts.

linux-an84:~/AriesKernelModules/gni/aries/ghal # cat /proc/interrupts
           CPU0
  0:        694   IO-APIC-edge      timer
  1:          6   IO-APIC-edge      i8042
  4:        753   IO-APIC-edge      serial
  8:          1   IO-APIC-edge      rtc0
  9:          0   IO-APIC-fasteoi   acpi
 12:         89   IO-APIC-edge      i8042
 14:       3522   IO-APIC-edge      ata_piix
 15:        785   IO-APIC-edge      ata_piix
 16:        162   IO-APIC-fasteoi   eth0
4344:          0   PCI-MSI-edge      aerdrv
4345:          0   PCI-MSI-edge      aerdrv
4346:          0   PCI-MSI-edge      aerdrv
4347:          0   PCI-MSI-edge      aerdrv
4348:          0   PCI-MSI-edge      aerdrv
4349:          0   PCI-MSI-edge      aerdrv
4350:          0   PCI-MSI-edge      aerdrv
4351:          0   PCI-MSI-edge      aerdrv
NMI:          0   Non-maskable interrupts
LOC:     107095   Local timer interrupts
RES:          0   Rescheduling interrupts
CAL:          0   function call interrupts
TLB:          0   TLB shootdowns
TRM:          0   Thermal event interrupts
THR:          0   Threshold APIC interrupts
SPU:          0   Spurious interrupts
ERR:          0

Shouldn't there be an entry for the MSI-X device?

Thanks for all your input.

AK


  _____  

Probably what you want is something like
  
  { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64} ,  //BAR0
  { 0, 0} ,  //BAR1 
   // 64bit BAR occupies 2 BAR entries so that BAR1 can't be used.
  { 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR2
  { 0, 0} ,  //BAR3
   // for MSI-X
  { 0, 0} ,  //BAR4
  { 0, 0}    //BAR5
  
  
    


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
  2010-09-01 19:07 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
@ 2010-09-02  2:38 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-09-02  2:38 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: Cam Macdonell, qemu-devel

On Wed, Sep 01, 2010 at 02:07:33PM -0500, Adnan Khaleel wrote:
> Yamahata, Cam,
> 
> Thank you both very much for pointers about Qemu coding for PCIe and MSI-X.
> 
> I'm at a point where I can see my device when I do an lspci -t -v as shown
> below.
> 
> linux-an84:~ # lspci -t -v
> -[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
>            +-01.0  Cirrus Logic GD 5446
>            +-04.0-[0000:20]--
>            +-18.0-[0000:21]--
>            +-18.1-[0000:22]--
>            +-18.2-[0000:23]--
>            +-18.3-[0000:24]--
>            +-18.4-[0000:25]--
>            +-18.5-[0000:26]--
>            +-19.0-[0000:36-bf]--+-00.0-[0000:37-47]--+-00.0-[0000:38]--
>            |                    |                    +-01.0-[0000:39]--
>            |                    |                    +-02.0-[0000:3a]--
>            |                    |                    +-03.0-[0000:3b]--
>            |                    |                    +-04.0-[0000:3c]--
>            |                    |                    +-05.0-[0000:3d]--
>            |                    |                    +-06.0-[0000:3e]--
>            |                    |                    +-07.0-[0000:3f]--
>            |                    |                    +-08.0-[0000:40]----00.0 
> Cray Inc Device 0301  <- The device that I've included
> 
> However, I'm having a bit of an issue with the MSI-X.
> 
> I'm following the code examples in virtio-pci.c and ivshmem.c that Cam pointed
> out. I've got BARs 0 & 1 already occupied, so I assign the msix_mmio_map to BAR
> 2. However, when I do that, Qemu fails to boot with the following
> error:
> 
> unused outb: port=0x00f1 data=0x00
> qemu: fatal: Trying to execute code outside RAM or ROM at 0x0000000000100000
> 
> A couple of things I'm unsure about:
> 1. Am I registering the 64bit BAR addresses correctly?
> pci_register_bar(&d->dev, i, BAR_Regions[i][0], PCI_BASE_ADDRESS_SPACE_IO |
> PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_io_map);

No. PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64 is invalid.
PCI doesn't support a 64bit I/O space, only 32bit. And x86 only has a 64K
I/O space.
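
Concretely, the meaningful forms of that type argument look like this (sketch, reusing the sizes and map callbacks from the code above):

    /* 32-bit I/O BAR: no MEM_TYPE_64 flag */
    pci_register_bar(&d->dev, 2, 0x2000000ull,
                     PCI_BASE_ADDRESS_SPACE_IO, pcie_msix_io_map);

    /* 64-bit memory BAR: MEM_TYPE_64 only combines with SPACE_MEMORY */
    pci_register_bar(&d->dev, 0, 0x2000000000ull,
                     PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
                     pcie_msix_mem_map);

    /* PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64 has no meaning */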


> 2. In the function int msix_init(PCIDevice *pdev, unsigned short nentries,
> unsigned bar_nr, unsigned bar_size);
> I'm not sure what bar_nr is, however setting it to 1 (as in code examples
> above) or to 2 (bar that I want to register the msix_mmio_map to) both fail
> with the same error.

It should be 2. I'm not sure why it failed.


> rc = msix_init(&d->dev, d->vectors, 2, 0);
> :
> :
> pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev),
> PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, msix_mmio_map);
> 
> 
> Here is my init function code in its entirety:
> 
> static const unsigned long long BAR_Regions[6][2] =
> {
>     // len , type
>     { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY} ,  //BAR0
>     { 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR1
>     { 0, 0} ,  //BAR2
>     { 0, 0} ,  //BAR3
>     { 0, 0} ,  //BAR4
>     { 0, 0}    //BAR5
> };

Probably what you want is something like

{ 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64} ,  //BAR0
{ 0, 0} ,  //BAR1	
	// 64bit BAR occupies 2 BAR entries so that BAR1 can't be used.
{ 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR2
{ 0, 0} ,  //BAR3
	// for MSI-X
{ 0, 0} ,  //BAR4
{ 0, 0}    //BAR5
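
With that layout the MSI-X table goes in BAR3, and the bar_nr passed to msix_init() should match the BAR that is then registered for it, as a plain 32-bit memory BAR (sketch):

    rc = msix_init(&d->dev, d->vectors, 3 /* bar_nr = BAR3 */, 0);
    if (rc == 0) {
        /* same BAR index, registered without PCI_BASE_ADDRESS_MEM_TYPE_64 */
        pci_register_bar(&d->dev, 3, msix_bar_size(&d->dev),
                         PCI_BASE_ADDRESS_SPACE_MEMORY, msix_mmio_map);
    }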

> 
> static int pcie_msix_initfn(PCIDevice *pci_dev)
> {
>     PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
>     PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
>     PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
>     int rc, i;
> 
>     PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);
> 
>     pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
>     pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);
>     d->dev.config[PCI_REVISION_ID] = PCIE_MSIX_VERSION;
>     d->dev.config[PCI_SUBSYSTEM_VENDOR_ID]   = PCIE_MSIX_VID & 0xff;
>     d->dev.config[PCI_SUBSYSTEM_VENDOR_ID+1] = (PCIE_MSIX_VID >> 8) & 0xff;
>     d->dev.config[PCI_SUBSYSTEM_ID]   = PCIE_MSIX_SS_DID & 0xff;
>     d->dev.config[PCI_SUBSYSTEM_ID+1] = (PCIE_MSIX_SS_DID >> 8) & 0xff;

Use pci_set_word().
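
For the subsystem IDs above, that means something like (sketch):

    pci_set_word(d->dev.config + PCI_SUBSYSTEM_VENDOR_ID, PCIE_MSIX_VID);
    pci_set_word(d->dev.config + PCI_SUBSYSTEM_ID,        PCIE_MSIX_SS_DID);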

>     d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn,
> pcie_msix_mem_write_fn, d);
>    
>     for(i=0; i < PCI_NUM_REGIONS -1; i++) { //-1 for the Exp ROM BAR
>         if(BAR_Regions[i][0] != 0)
>         {
>             if(BAR_Regions[i][1] == PCI_BASE_ADDRESS_SPACE_IO)
>             {
>                 //io region
>                 PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n",
> __FUNCTION__, i);
>                 pci_register_bar(&d->dev, i, BAR_Regions[i][0],
> PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_io_map);
>             } else {
>                 //mem region
>                 PRINT_DEBUG("%s: Registering Bar %i as MEM BAR\n",
> __FUNCTION__, i);
>                 pci_register_bar(&d->dev, i, BAR_Regions[i][0],
> PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
> pcie_msix_mem_map);
>             }
>         }
>     }
>     d->dev.config[PCI_INTERRUPT_PIN] = 1;
> 
>     rc = msix_init(&d->dev, d->vectors, 2, 0);
>    
>     if (!rc) {
>         PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__, i);
>         pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev),
> PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, msix_mmio_map);
>         PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__, d->
> vectors);
>     }
>     else {
>         PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
>         exit(1);
>     }
>    
>     // Activate the vectors
>     for (i = 0; i < d->vectors; i++) {
>         msix_vector_use(&d->dev, i);
>     }
> 
>     rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET,
> PCI_EXP_TYPE_ENDPOINT, p->port);
>     if (rc < 0) {
>         return rc;
>     }
> 
>     pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
>     pcie_cap_deverr_init(&d->dev);
>     pcie_cap_ari_init(&d->dev);
>     rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
>     if (rc < 0) {
>         return rc;
>     }
> 
>     PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
>     return 0;
> }
> 
> 
> Thanks
> 
> AK
> 
> 
> 
> 
>     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
>     From: Cam Macdonell [mailto:cam@cs.ualberta.ca]
>     To: adnan@khaleel.us
>     Cc: Isaku Yamahata [mailto:yamahata@valinux.co.jp], qemu-devel@nongnu.org
>     Sent: Fri, 27 Aug 2010 10:48:48 -0500
>     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
>     and MSI-X
> 
>     On Wed, Aug 25, 2010 at 4:39 PM, Adnan Khaleel <adnan@khaleel.us> wrote:
>     > Hi Isaku,
>     >
>     > I've made some progress in coding the device template but its no where
>     near
>     > complete.
>     >
>     > I've created some files and am attaching it to this note. Based on what I
>     > could gather from the pcie source files I've made a stab at creating a
>     > simple model. I've also attached a file for a simple pci device that
>     works
>     > under regular Qemu. I would like to duplicate its functionality in your
>     pcie
>     > environment for starters.
>     >
>     > Could you please take a look at the files I've created and tell me if
>     I've
>     > understood your pcie model correctly. Any help will be truly appreciated.
>     >
>     > Adnan
> 
>     Hi Adnan,
> 
>     There is a fairly simple device I've created called "ivshmem" that is
>     in the qemu git tree. It is a regular PCI device that exports a
>     shared memory object via a BAR and supports a few registers and
>     optional MSI-X interrupts (I had to pick through the virtio code to
>     get MSI-X working, so looking at ivshmem might save you some effort).
>     My device is somewhat similar to a graphics card actually which I
>     recall is your goal. The purpose of ivshmem is to support sharing
>     memory between multiple guests running on the same host. It follows
>     the qdev model which you will need to do.
> 
>     Cam
> 
>     >
>     > The five files I've modified from your git repository are as follows
>     >
>     > hw/pci_ids.h                    // Added vendor id defines
>     > hw/pc_q35.c                    // Device instantiation
>     > hw/pcie_msix_template.h  // Device header file
>     > hw/pcie_msix_template.c  // Device file
>     > Makefile.objs                   // Added pcie_msix_template.o to list of
>     > objects being built
>     >
>     > Everything should compile without any warnings or errors.
>     >
>     > The last file:
>     > sc_link_pci.c
>     > Is the original PCI device that I'm trying to convert into being PCIe and
>     > MSI-X and is included merely for reference to help you understand what
>     I'd
>     > like to achieve in your environment.
>     >
>     >
>     > ________________________________
>     > From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
>     > To: Adnan Khaleel [mailto:adnan@khaleel.us]
>     > Cc: qemu-devel@nongnu.org
>     > Sent: Wed, 18 Aug 2010 22:19:04 -0500
>     > Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
>     > and MSI-X
>     >
>     > On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
>     >> Hello Qemu developers,
>     >>
>     >> I'm interested in developing a device model that plugs into Qemu that is
>     >> based
>     >> on a PCIe interface and uses MSI-X. My goal is to ultimately attach a
>     GPU
>     >> simulator to this PCIe interface and use the entire platfom (Qemu + GPU
>     >> simulator) for studying cpu, gpu interactions.
>     >>
>     >> I'm not terribly familiar with the Qemu device model and I'm looking for
>     >> some
>     >> assistance, perhaps a starting template for pcie and msi-x that would
>     >> offer the
>     >> basic functionality that I could then build upon.
>     >>
>     >> I have looked at the various devices that already modelled that are
>     >> included
>     >> with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
>     >> eg;
>     >> ne2k and cirrus-pci etc, however only one device truly seems to utilize
>     >> both
>     >> the technologies that I'm interested in and that is the virtio-pci.c
>     >>
>     >> I'm not sure what virtio-pci does so I'm not sure if that is a suitable
>     >> starting point for me.
>     >>
>     >> Any help, suggestions etc would be extremely helpful and much
>     appreciated.
>     >
>     > Qemu doesn't support pcie at the moment.
>     > Only partial patches have been merged, still more patches have to
>     > be merged for pcie to fully work. The following repo is available.
>     >
>     > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
>     > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
>     > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
>     >
>     > Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
>     > for q35.
>     > example:
>     > qemu-system-x86_64 -M pc_q35 -acpitable
>     > load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
>     >
>     > This repo is for those who want to try/develop pcie support,
>     > not for upstream merge. So they include patches unsuitable for upstream.
>     > The repo includes pcie port switch emulator which utilize pcie and
>     > MSI(not MSI-X).
>     >
>     > The difference between PCI device and PCIe device is configuration
>     > space size.
>     > By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
>     > space. Helper functions for pcie are found in qemu/hw/pcie.c
>     > For msi-x, see qemu/hw/msix.c.
>     >
>     > Thanks,
>     > --
>     > yamahata
>     >
> 

-- 
yamahata


* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-09-01 19:07 Adnan Khaleel
  2010-09-02  2:38 ` Isaku Yamahata
  0 siblings, 1 reply; 19+ messages in thread
From: Adnan Khaleel @ 2010-09-01 19:07 UTC (permalink / raw)
  To: Cam Macdonell; +Cc: Isaku Yamahata, qemu-devel


Yamahata, Cam,

Thank you both very much for pointers about Qemu coding for PCIe and MSI-X.

I'm at a point where I can see my device when I do an lspci -t -v as shown below.

linux-an84:~ # lspci -t -v
-[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
           +-01.0  Cirrus Logic GD 5446
           +-04.0-[0000:20]--
           +-18.0-[0000:21]--
           +-18.1-[0000:22]--
           +-18.2-[0000:23]--
           +-18.3-[0000:24]--
           +-18.4-[0000:25]--
           +-18.5-[0000:26]--
           +-19.0-[0000:36-bf]--+-00.0-[0000:37-47]--+-00.0-[0000:38]--
           |                    |                    +-01.0-[0000:39]--
           |                    |                    +-02.0-[0000:3a]--
           |                    |                    +-03.0-[0000:3b]--
           |                    |                    +-04.0-[0000:3c]--
           |                    |                    +-05.0-[0000:3d]--
           |                    |                    +-06.0-[0000:3e]--
           |                    |                    +-07.0-[0000:3f]--
           |                    |                    +-08.0-[0000:40]----00.0  Cray Inc Device 0301  <- The device that I've included

However, I'm having a bit of an issue with the MSI-X.

I'm following the code examples in virtio-pci.c and ivshmem.c that Cam pointed out. I've got BARs 0 & 1 already occupied, so I assign the msix_mmio_map to BAR 2. However, when I do that, Qemu fails to boot with the following error:

  unused outb: port=0x00f1 data=0x00
  qemu: fatal: Trying to execute code outside RAM or ROM at 0x0000000000100000

A couple of things I'm unsure about:
1. Am I registering the 64bit BAR addresses correctly?
pci_register_bar(&d->dev, i, BAR_Regions[i][0], PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_io_map);

2. In the function int msix_init(PCIDevice *pdev, unsigned short nentries, unsigned bar_nr, unsigned bar_size);
I'm not sure what bar_nr is; however, setting it to 1 (as in the code examples above) or to 2 (the BAR that I want to register the msix_mmio_map to) both fail with the same error.

rc = msix_init(&d->dev, d->vectors, 2, 0);
:
:
pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev), PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, msix_mmio_map);


Here is my init function code in its entirety:

static const unsigned long long BAR_Regions[6][2] = 
{
    // len , type 
    { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY} ,  //BAR0
    { 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO    } ,  //BAR1
    { 0, 0} ,  //BAR2
    { 0, 0} ,  //BAR3
    { 0, 0} ,  //BAR4
    { 0, 0}    //BAR5 
};

static int pcie_msix_initfn(PCIDevice *pci_dev)
{
    PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
    PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
    PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
    int rc, i;

    PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);

    pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
    pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);
    d->dev.config[PCI_REVISION_ID] = PCIE_MSIX_VERSION;
    d->dev.config[PCI_SUBSYSTEM_VENDOR_ID]   = PCIE_MSIX_VID & 0xff;
    d->dev.config[PCI_SUBSYSTEM_VENDOR_ID+1] = (PCIE_MSIX_VID >> 8) & 0xff;
    d->dev.config[PCI_SUBSYSTEM_ID]   = PCIE_MSIX_SS_DID & 0xff;
    d->dev.config[PCI_SUBSYSTEM_ID+1] = (PCIE_MSIX_SS_DID >> 8) & 0xff;

    d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn, pcie_msix_mem_write_fn, d);
    
    for(i=0; i < PCI_NUM_REGIONS -1; i++) { //-1 for the Exp ROM BAR
        if(BAR_Regions[i][0] != 0)
        { 
            if(BAR_Regions[i][1] == PCI_BASE_ADDRESS_SPACE_IO) 
            {
                //io region
                PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__, i);
                pci_register_bar(&d->dev, i, BAR_Regions[i][0], PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_io_map);
            } else { 
                //mem region
                PRINT_DEBUG("%s: Registering Bar %i as MEM BAR\n", __FUNCTION__, i);
                pci_register_bar(&d->dev, i, BAR_Regions[i][0], PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, pcie_msix_mem_map);
            }
        }
    }
    d->dev.config[PCI_INTERRUPT_PIN] = 1;

    rc = msix_init(&d->dev, d->vectors, 2, 0);
    
    if (!rc) {
        PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__, i);
        pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev), PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, msix_mmio_map);
        PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__, d->vectors);
    }
    else {
        PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
        exit(1);
    }
    
    // Activate the vectors
    for (i = 0; i < d->vectors; i++) {
        msix_vector_use(&d->dev, i);
    }

    rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET, PCI_EXP_TYPE_ENDPOINT, p->port);
    if (rc < 0) {
        return rc;
    }

    pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
    pcie_cap_deverr_init(&d->dev);
    pcie_cap_ari_init(&d->dev);
    rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
    if (rc < 0) {
        return rc;
    }

    PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
    return 0;
}


Thanks

AK



  _____  

From: Cam Macdonell [mailto:cam@cs.ualberta.ca]
To: adnan@khaleel.us
Cc: Isaku Yamahata [mailto:yamahata@valinux.co.jp], qemu-devel@nongnu.org
Sent: Fri, 27 Aug 2010 10:48:48 -0500
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X

On Wed, Aug 25, 2010 at 4:39 PM, Adnan Khaleel <adnan@khaleel.us> wrote:
  > Hi Isaku,
  >
  > I've made some progress in coding the device template but it's nowhere near
  > complete.
  >
  > I've created some files and am attaching it to this note. Based on what I
  > could gather from the pcie source files I've made a stab at creating a
  > simple model. I've also attached a file for a simple pci device that works
  > under regular Qemu. I would like to duplicate its functionality in your pcie
  > environment for starters.
  >
  > Could you please take a look at the files I've created and tell me if I've
  > understood your pcie model correctly. Any help will be truly appreciated.
  >
  > Adnan
  
  Hi Adnan,
  
  There is a fairly simple device I've created called "ivshmem" that is
  in the qemu git tree.  It is a regular PCI device that exports a
  shared memory object via a BAR and supports a few registers and
  optional MSI-X interrupts (I had to pick through the virtio code to
  get MSI-X working, so looking at ivshmem might save you some effort).
  My device is actually somewhat similar to a graphics card, which I
  recall is your goal. The purpose of ivshmem is to support sharing
  memory between multiple guests running on the same host. It follows
  the qdev model, which you will need to follow as well.
  
  Cam
  
  >
  > The five files I've modified from your git repository are as follows
  >
  > hw/pci_ids.h                    // Added vendor id defines
  > hw/pc_q35.c                    // Device instantiation
  > hw/pcie_msix_template.h  // Device header file
  > hw/pcie_msix_template.c  // Device file
  > Makefile.objs                   // Added pcie_msix_template.o to list of
  > objects being built
  >
  > Everything should compile without any warnings or errors.
  >
  > The last file:
  > sc_link_pci.c
  > Is the original PCI device that I'm trying to convert into being PCIe and
  > MSI-X and is included merely for reference to help you understand what I'd
  > like to achieve in your environment.
  >
  >
  > ________________________________
  > From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
  > To: Adnan Khaleel [mailto:adnan@khaleel.us]
  > Cc: qemu-devel@nongnu.org
  > Sent: Wed, 18 Aug 2010 22:19:04 -0500
  > Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
  > and MSI-X
  >
  > On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
  >> Hello Qemu developers,
  >>
  >> I'm interested in developing a device model that plugs into Qemu that is
  >> based
  >> on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
  >> simulator to this PCIe interface and use the entire platfom (Qemu + GPU
  >> simulator) for studying cpu, gpu interactions.
  >>
  >> I'm not terribly familiar with the Qemu device model and I'm looking for
  >> some
  >> assistance, perhaps a starting template for pcie and msi-x that would
  >> offer the
  >> basic functionality that I could then build upon.
  >>
  >> I have looked at the various devices that already modelled that are
  >> included
  >> with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
  >> eg;
  >> ne2k and cirrus-pci etc, however only one device truly seems to utilize
  >> both
  >> the technologies that I'm interested in and that is the virtio-pci.c
  >>
  >> I'm not sure what virtio-pci does so I'm not sure if that is a suitable
  >> starting point for me.
  >>
  >> Any help, suggestions etc would be extremely helpful and much appreciated.
  >
  > Qemu doesn't support pcie at the moment.
  > Only partial patches have been merged, still more patches have to
  > be merged for pcie to fully work. The following repo is available.
  >
  > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  >
  > Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
  > for q35.
  > example:
  > qemu-system-x86_64 -M pc_q35 -acpitable
  > load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
  >
  > This repo is for those who want to try/develop pcie support,
  > not for upstream merge. So they include patches unsuitable for upstream.
  > The repo includes pcie port switch emulator which utilize pcie and
  > MSI(not MSI-X).
  >
  > The difference between PCI device and PCIe device is configuration
  > space size.
  > By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
  > space. Helper functions for pcie are found in qemu/hw/pcie.c
  > For msi-x, see qemu/hw/msix.c.
  >
  > Thanks,
  > --
  > yamahata
  >
    

[-- Attachment #2: Type: text/html, Size: 24372 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
  2010-08-25 22:39 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
  2010-08-26  9:43 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
@ 2010-08-27 15:48 ` Cam Macdonell
  1 sibling, 0 replies; 19+ messages in thread
From: Cam Macdonell @ 2010-08-27 15:48 UTC (permalink / raw)
  To: adnan; +Cc: Isaku Yamahata, qemu-devel

On Wed, Aug 25, 2010 at 4:39 PM, Adnan Khaleel <adnan@khaleel.us> wrote:
> Hi Isaku,
>
> I've made some progress in coding the device template but its no where near
> complete.
>
> I've created some files and am attaching it to this note. Based on what I
> could gather from the pcie source files I've made a stab at creating a
> simple model. I've also attached a file for a simple pci device that works
> under regular Qemu. I would like to duplicate its functionality in your pcie
> environment for starters.
>
> Could you please take a look at the files I've created and tell me if I've
> understood your pcie model correctly. Any help will be truly appreciated.
>
> Adnan

Hi Adnan,

There is a fairly simple device I've created called "ivshmem" that is
in the qemu git tree.  It is a regular PCI device that exports a
shared memory object via a BAR and supports a few registers and
optional MSI-X interrupts (I had to pick through the virtio code to
get MSI-X working, so looking at ivshmem might save you some effort).
  My device is actually somewhat similar to a graphics card, which I
  recall is your goal.  The purpose of ivshmem is to support sharing
  memory between multiple guests running on the same host.  It follows
  the qdev model, which you will need to use as well.

Cam

>
> The five files I've modified from your git repository are as follows
>
> hw/pci_ids.h                    // Added vendor id defines
> hw/pc_q35.c                    // Device instantiation
> hw/pcie_msix_template.h  // Device header file
> hw/pcie_msix_template.c  // Device file
> Makefile.objs                   // Added pcie_msix_template.o to list of
> objects being built
>
> Everything should compile without any warnings or errors.
>
> The last file:
> sc_link_pci.c
> Is the original PCI device that I'm trying to convert into being PCIe and
> MSI-X and is included merely for reference to help you understand what I'd
> like to achieve in your environment.
>
>
> ________________________________
> From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
> To: Adnan Khaleel [mailto:adnan@khaleel.us]
> Cc: qemu-devel@nongnu.org
> Sent: Wed, 18 Aug 2010 22:19:04 -0500
> Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
> and MSI-X
>
> On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
>> Hello Qemu developers,
>>
>> I'm interested in developing a device model that plugs into Qemu that is
>> based
>> on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
>> simulator to this PCIe interface and use the entire platfom (Qemu + GPU
>> simulator) for studying cpu, gpu interactions.
>>
>> I'm not terribly familiar with the Qemu device model and I'm looking for
>> some
>> assistance, perhaps a starting template for pcie and msi-x that would
>> offer the
>> basic functionality that I could then build upon.
>>
>> I have looked at the various devices that already modelled that are
>> included
>> with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
>> eg;
>> ne2k and cirrus-pci etc, however only one device truly seems to utilize
>> both
>> the technologies that I'm interested in and that is the virtio-pci.c
>>
>> I'm not sure what virtio-pci does so I'm not sure if that is a suitable
>> starting point for me.
>>
>> Any help, suggestions etc would be extremely helpful and much appreciated.
>
> Qemu doesn't support pcie at the moment.
> Only partial patches have been merged, still more patches have to
> be merged for pcie to fully work. The following repo is available.
>
> git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
> git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
> git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
>
> Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
> for q35.
> example:
> qemu-system-x86_64 -M pc_q35 -acpitable
> load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
>
> This repo is for those who want to try/develop pcie support,
> not for upstream merge. So they include patches unsuitable for upstream.
> The repo includes pcie port switch emulator which utilize pcie and
> MSI(not MSI-X).
>
> The difference between PCI device and PCIe device is configuration
> space size.
> By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
> space. Helper functions for pcie are found in qemu/hw/pcie.c
> For msi-x, see qemu/hw/msix.c.
>
> Thanks,
> --
> yamahata
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with?PCIe?and MSI-X
  2010-08-26 18:17 [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Adnan Khaleel
@ 2010-08-27  7:57 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-08-27  7:57 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: qemu-devel

On Thu, Aug 26, 2010 at 01:17:38PM -0500, Adnan Khaleel wrote:
>     You also want to catch up pci api clean up.
>     pci_{set, get}_{byte, word, long, quad}(),
>     pci_config_set_vendor() ...
> 
> Are you referring to the setting up of the config registers where we pass on
> the vendor id and device id etc? Would you elaborate a little more.

Yes. There are now helper functions that take care of pci endianness,
so there is no need for the open-coded bit operations in pcie_msix_initfn().
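
For example, the id setup could look roughly like this (PCIE_MSIX_VENDOR_ID,
PCIE_MSIX_DEVICE_ID and PCIE_MSIX_SS_VID are placeholder names for whatever
your template header defines):

    uint8_t *pci_conf = d->dev.config;

    pci_config_set_vendor_id(pci_conf, PCIE_MSIX_VENDOR_ID);
    pci_config_set_device_id(pci_conf, PCIE_MSIX_DEVICE_ID);
    pci_set_word(pci_conf + PCI_SUBSYSTEM_VENDOR_ID, PCIE_MSIX_SS_VID);
    pci_set_word(pci_conf + PCI_SUBSYSTEM_ID, PCIE_MSIX_SS_DID);
    pci_set_byte(pci_conf + PCI_INTERRUPT_PIN, 1); /* INTA */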


> Also, I've got a bunch of questions but let me state my assumptions first so
> that you have a better idea of what I'm referring to.
> - The device template is a pcie endpoint
> - I want to be able to setup 64bit bar addresses, with large apertures

Use PCI_BASE_ADDRESS_MEM_TYPE_64.
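
Roughly (PCIE_MSIX_BAR_SIZE is a placeholder for whatever aperture size you want):

    pci_register_bar(&d->dev, 0, PCIE_MSIX_BAR_SIZE,
                     PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
                     pcie_msix_mem_map);

Note that PCI_BASE_ADDRESS_MEM_TYPE_64 only makes sense for memory BARs, not for I/O BARs.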


> - For now, I'd like to be able to do basic MMIO and regular IO reads and
> writes.
> 
> 
> PCIe questions.
> 1. What does the topology of the bridge with respect to the root look like? Is
> it
> 
> Root <---> PCIe Bridge

lspci -t would help.
Roughly the bus topology looks like

  root port - upstream port --- downstream port -> end node device
            |                |
            ...             ...


> 2. If so, where is the slot where I can insert the PCIe device? Is it off the
> Bridge or would it be better for it to be off the root?
> 
> Root <---> PCIe Bridge <---> PCIe/MSI-X device
> 
> Or
> 
> Root <---> PCIe Bridge
>      <---> PCIe/MSI-X Device

On downstream port switch. See above.


> And hence my confusion about how to do the following:
> static void pcie_msix_register(void)
> {  
>     pci_bridge_qdev_register(&pcie_msix_info);  // Is this what I should be
> doing?
>         OR
>     pci_qdev_register(&pcie_msix_info);         // Or this
> }

pci_qdev_register()
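
i.e. roughly:

    static void pcie_msix_register(void)
    {
        pci_qdev_register(&pcie_msix_info);  /* plain endpoint, not a bridge */
    }
    device_init(pcie_msix_register);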


> 3. I wasn't sure how to register the device how to do the initializing. Please
> see the following section of code:
> 
> void pcie_msix_init(PCIBus *bus)
> {
>     // Is this how we should be doing this?
>     pci_create_simple(bus, -1, "pcie_msix_device");
>         OR
>         pci_bridge_create(...);
> }
> Or if should I use pci_bridge_create(...) in place of the pci_create_simple
> (...)

pci_create_simple()
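
i.e. roughly:

    void pcie_msix_init(PCIBus *bus)
    {
        pci_create_simple(bus, -1, "pcie_msix_device");
    }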


> Also, this confusion led me to being unsure what the following device struct
> should look like
> 
> typedef struct PCIE_MSIX_DEVState_St {
>     PCIDevice dev;
>     int mmio_index;
> } PCIE_MSIX_DEVState;
> 
> For the simple device function that I've described above, what is the purpose
> of this struct? What other data should be captured?
> 
> Which include the initializing of the following static structs. Btw, can you
> tell me what VMStateDescrption is used for by Qemu? Also, what should the
> "fields" member contain? I couldn't quite make out.
> 
> static const VMStateDescription vmstate_pcie_msix = {
>     .name = "pcie-msix-device",
>     .version_id = 1,
>     .minimum_version_id = 1,
>     .minimum_version_id_old = 1,
>     .fields = (VMStateField[]) {
>         VMSTATE_PCIE_DEVICE(dev, PCIE_MSIX_DEVState),
>         VMSTATE_STRUCT(dev.aer_log, PCIE_MSIX_DEVState, 0,
> vmstate_pcie_aer_log, struct pcie_aer_log),
>         VMSTATE_END_OF_LIST()
>     }
> };
>
> 4. What is the qdev.props field used for?
> static PCIDeviceInfo pcie_msix_info = {
>     .qdev.name = PCIE_MSIX_DEVICE,
>     .qdev.desc = "PCIE MSIX device template",
>     .qdev.size = sizeof(PCIE_MSIX_DEVState),
>     .qdev.reset = pcie_msix_reset,
>     .qdev.vmsd = &vmstate_pcie_msix,
>     .is_express = 1,
>     .config_write = pcie_msix_write_config,
>     .init = pcie_msix_initfn,
>     .exit = pcie_msix_exitfn,
>     .qdev.props = (Property[]) {       
>         DEFINE_PROP_END_OF_LIST(),
>     }
> };

Perhaps at first it would be better to get an idea of what the qdev device model is.
The following slide would be a good start point.
http://www.linux-kvm.org/wiki/images/f/fe/2010-forum-armbru-qdev.pdf


> 5. Device instantiation
> I init the device in pc_q35_bridge_init() in pc_q35.c
> 
> pcie_msix_init(root_port_bus);
> 
> I know I'm doing this incorrectly since I'm not specifying several things.
> Again, is this the correct place to init the device?

Yes, if you really want the end-node device to sit directly on a root port.
Otherwise a downstream port is where the end-node device belongs.


> MSI/MSIX questions
> 1. How is an interrupt notification passed on to Qemu? In the regular case I'd
> use qemu_set_irq(..) to do so but what is the correct way of doing it in the
> MSIX paradigm? For example in the case of a DMA transfer.

Use msix_notify()/msi_notify().
virtio_pci_notify() in virtio-pci.c is an example.
pcie_notify() in pcie.c is also another example.
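
A minimal sketch of raising an interrupt from your device (the
pcie_msix_raise_irq name and the trigger are just examples):

    static void pcie_msix_raise_irq(PCIE_MSIX_DEVState *d, int vector)
    {
        if (msix_enabled(&d->dev)) {
            msix_notify(&d->dev, vector);     /* MSI-X message */
        } else {
            qemu_set_irq(d->dev.irq[0], 1);   /* fall back to legacy INTx */
        }
    }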
-- 
yamahata

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X
@ 2010-08-26 18:17 Adnan Khaleel
  2010-08-27  7:57 ` [Qemu-devel] Template for developing a Qemu device with?PCIe?and MSI-X Isaku Yamahata
  0 siblings, 1 reply; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-26 18:17 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 3910 bytes --]

Hi there. I should have sent most of these questions with my note yesterday, but I was in a hurry to get the files to you first.
See my comments below and thanks again.

AK
  
  pcie_msix_write_config() should call pci_default_write_config()
  unless you did it so intentionally.

I've made this change. Thanks for the pointer.

  
  You also want to catch up pci api clean up.
  pci_{set, get}_{byte, word, long, quad}(),
  pci_config_set_vendor() ...

Are you referring to the setting up of the config registers where we pass on the vendor id and device id etc.? Would you elaborate a little more?

Also, I've got a bunch of questions but let me state my assumptions first so that you have a better idea of what I'm referring to.
- The device template is a pcie endpoint
- I want to be able to setup 64bit bar addresses, with large apertures
- For now, I'd like to be able to do basic MMIO and regular IO reads and writes. 


PCIe questions.
1. What does the topology of the bridge with respect to the root look like? Is it

Root <---> PCIe Bridge

2. If so, where is the slot where I can insert the PCIe device? Is it off the Bridge or would it be better for it to be off the root?

Root <---> PCIe Bridge <---> PCIe/MSI-X device

Or

Root <---> PCIe Bridge
     <---> PCIe/MSI-X Device

And hence my confusion about how to do the following:
static void pcie_msix_register(void)
  {   
      pci_bridge_qdev_register(&pcie_msix_info);  // Is this what I should be doing?
        OR
      pci_qdev_register(&pcie_msix_info);         // Or this
  }

3. I wasn't sure how to register the device or how to do the initialization. Please see the following section of code:

void pcie_msix_init(PCIBus *bus)
{
    // Is this how we should be doing this?
    pci_create_simple(bus, -1, "pcie_msix_device");
        OR
         pci_bridge_create(...);
}

Or should I use pci_bridge_create(...) in place of pci_create_simple(...)?

Also, this confusion left me unsure what the following device struct should look like:

typedef struct PCIE_MSIX_DEVState_St {
    PCIDevice dev;
    int mmio_index;
} PCIE_MSIX_DEVState;

For the simple device function that I've described above, what is the purpose of this struct? What other data should be captured?

This includes the initialization of the following static structs. Btw, can you tell me what VMStateDescription is used for by Qemu? Also, what should the "fields" member contain? I couldn't quite make it out.

static const VMStateDescription vmstate_pcie_msix = {
    .name = "pcie-msix-device",
    .version_id = 1,
    .minimum_version_id = 1,
    .minimum_version_id_old = 1,
    .fields = (VMStateField[]) {
        VMSTATE_PCIE_DEVICE(dev, PCIE_MSIX_DEVState),
        VMSTATE_STRUCT(dev.aer_log, PCIE_MSIX_DEVState, 0, vmstate_pcie_aer_log, struct pcie_aer_log),
        VMSTATE_END_OF_LIST()
    }
};

4. What is the qdev.props field used for?
static PCIDeviceInfo pcie_msix_info = {
    .qdev.name = PCIE_MSIX_DEVICE,
    .qdev.desc = "PCIE MSIX device template",
    .qdev.size = sizeof(PCIE_MSIX_DEVState),
    .qdev.reset = pcie_msix_reset,
    .qdev.vmsd = &vmstate_pcie_msix,
    .is_express = 1,
    .config_write = pcie_msix_write_config,
    .init = pcie_msix_initfn,
    .exit = pcie_msix_exitfn,
    .qdev.props = (Property[]) {        
        DEFINE_PROP_END_OF_LIST(),
    }
};


5. Device instantiation
I init the device in pc_q35_bridge_init() in pc_q35.c

pcie_msix_init(root_port_bus);

I know I'm doing this incorrectly since I'm not specifying several things. Again, is this the correct place to init the device?


MSI/MSIX questions
1. How is an interrupt notification passed on to Qemu? In the regular case I'd use qemu_set_irq(..) to do so but what is the correct way of doing it in the MSIX paradigm? For example in the case of a DMA transfer.

[-- Attachment #2: Type: text/html, Size: 8643 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X
  2010-08-25 22:39 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
@ 2010-08-26  9:43 ` Isaku Yamahata
  2010-08-27 15:48 ` [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Cam Macdonell
  1 sibling, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-08-26  9:43 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: qemu-devel

On Wed, Aug 25, 2010 at 05:39:50PM -0500, Adnan Khaleel wrote:
> Hi Isaku,
> 
> I've made some progress in coding the device template but its no where near
> complete.
> 
> I've created some files and am attaching it to this note. Based on what I could
> gather from the pcie source files I've made a stab at creating a simple model.
> I've also attached a file for a simple pci device that works under regular
> Qemu. I would like to duplicate its functionality in your pcie environment for
> starters.
> 
> Could you please take a look at the files I've created and tell me if I've
> understood your pcie model correctly. Any help will be truly appreciated.

pcie_msix_write_config() should call pci_default_write_config()
unless you did it so intentionally.

You also want to catch up pci api clean up.
pci_{set, get}_{byte, word, long, quad}(),
pci_config_set_vendor() ...
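
A sketch of what that looks like (pcie_msix_write_config is the config_write
hook named in your PCIDeviceInfo; msix_write_config keeps the MSI-X state in
sync with config space writes):

    static void pcie_msix_write_config(PCIDevice *dev, uint32_t addr,
                                       uint32_t val, int len)
    {
        pci_default_write_config(dev, addr, val, len);
        msix_write_config(dev, addr, val, len);
    }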


> 
> Adnan
> 
> The five files I've modified from your git repository are as follows
> 
> hw/pci_ids.h                    // Added vendor id defines
> hw/pc_q35.c                    // Device instantiation
> hw/pcie_msix_template.h  // Device header file
> hw/pcie_msix_template.c  // Device file
> Makefile.objs                   // Added pcie_msix_template.o to list of
> objects being built
> 
> Everything should compile without any warnings or errors.
> 
> The last file:
> sc_link_pci.c
> Is the original PCI device that I'm trying to convert into being PCIe and MSI-X
> and is included merely for reference to help you understand what I'd like to
> achieve in your environment.
> 
> 
> 
>     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
>     From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
>     To: Adnan Khaleel [mailto:adnan@khaleel.us]
>     Cc: qemu-devel@nongnu.org
>     Sent: Wed, 18 Aug 2010 22:19:04 -0500
>     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
>     and MSI-X
> 
>     On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
>     > Hello Qemu developers,
>     >
>     > I'm interested in developing a device model that plugs into Qemu that is
>     based
>     > on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
>     > simulator to this PCIe interface and use the entire platfom (Qemu + GPU
>     > simulator) for studying cpu, gpu interactions.
>     >
>     > I'm not terribly familiar with the Qemu device model and I'm looking for
>     some
>     > assistance, perhaps a starting template for pcie and msi-x that would
>     offer the
>     > basic functionality that I could then build upon.
>     >
>     > I have looked at the various devices that already modelled that are
>     included
>     > with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
>     eg;
>     > ne2k and cirrus-pci etc, however only one device truly seems to utilize
>     both
>     > the technologies that I'm interested in and that is the virtio-pci.c
>     >
>     > I'm not sure what virtio-pci does so I'm not sure if that is a suitable
>     > starting point for me.
>     >
>     > Any help, suggestions etc would be extremely helpful and much
>     appreciated.
> 
>     Qemu doesn't support pcie at the moment.
>     Only partial patches have been merged, still more patches have to
>     be merged for pcie to fully work. The following repo is available.
> 
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
> 
>     Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
>     for q35.
>     example:
>     qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/
>     q35-acpi-dsdt.aml
> 
>     This repo is for those who want to try/develop pcie support,
>     not for upstream merge. So they include patches unsuitable for upstream.
>     The repo includes pcie port switch emulator which utilize pcie and
>     MSI(not MSI-X).
> 
>     The difference between PCI device and PCIe device is configuration
>     space size.
>     By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
>     space. Helper functions for pcie are found in qemu/hw/pcie.c
>     For msi-x, see qemu/hw/msix.c.
> 
>     Thanks,
>     --
>     yamahata
> 



-- 
yamahata

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-08-25 22:39 Adnan Khaleel
  2010-08-26  9:43 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
  2010-08-27 15:48 ` [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Cam Macdonell
  0 siblings, 2 replies; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-25 22:39 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: qemu-devel


[-- Attachment #1.1: Type: text/plain, Size: 3775 bytes --]

Hi Isaku,

I've made some progress in coding the device template, but it's nowhere near complete.

I've created some files and am attaching them to this note. Based on what I could gather from the pcie source files, I've made a stab at creating a simple model. I've also attached a file for a simple pci device that works under regular Qemu. I would like to duplicate its functionality in your pcie environment for starters.

Could you please take a look at the files I've created and tell me if I've understood your pcie model correctly. Any help will be truly appreciated.

Adnan

The five files I've modified from your git repository are as follows

hw/pci_ids.h              // Added vendor id defines
hw/pc_q35.c               // Device instantiation
hw/pcie_msix_template.h   // Device header file
hw/pcie_msix_template.c   // Device file
Makefile.objs             // Added pcie_msix_template.o to list of objects being built

Everything should compile without any warnings or errors.

The last file:
sc_link_pci.c
Is the original PCI device that I'm trying to convert into being PCIe and MSI-X and is included merely for reference to help you understand what I'd like to achieve in your environment.


  _____  

From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
To: Adnan Khaleel [mailto:adnan@khaleel.us]
Cc: qemu-devel@nongnu.org
Sent: Wed, 18 Aug 2010 22:19:04 -0500
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X

On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
  > Hello Qemu developers,
  >
  > I'm interested in developing a device model that plugs into Qemu that is based
  > on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
  > simulator to this PCIe interface and use the entire platfom (Qemu + GPU
  > simulator) for studying cpu, gpu interactions.
  > 
  > I'm not terribly familiar with the Qemu device model and I'm looking for some
  > assistance, perhaps a starting template for pcie and msi-x that would offer the
  > basic functionality that I could then build upon.
  > 
  > I have looked at the various devices that already modelled that are included
  > with Qemu (v0.12.5 at least) and I've noticed several a few pci devices, eg;
  > ne2k and cirrus-pci etc, however only one device truly seems to utilize both
  > the technologies that I'm interested in and that is the virtio-pci.c
  > 
  > I'm not sure what virtio-pci does so I'm not sure if that is a suitable
  > starting point for me.
  > 
  > Any help, suggestions etc would be extremely helpful and much appreciated.
  
  Qemu doesn't support pcie at the moment.
  Only partial patches have been merged, still more patches have to
  be merged for pcie to fully work. The following repo is available.
  
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  
  Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
  for q35.
  example:
  qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
  
  This repo is for those who want to try/develop pcie support,
  not for upstream merge. So they include patches unsuitable for upstream.
  The repo includes pcie port switch emulator which utilize pcie and
  MSI(not MSI-X).
  
  The difference between PCI device and PCIe device is configuration
  space size.
  By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
  space. Helper functions for pcie are found in qemu/hw/pcie.c
  For msi-x, see qemu/hw/msix.c.
  
  Thanks,
  -- 
  yamahata
    

[-- Attachment #1.2: Type: text/html, Size: 4997 bytes --]

[-- Attachment #2: pcie_msix_template.zip --]
[-- Type: application/zip, Size: 15634 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X
@ 2010-08-20 22:22 Adnan Khaleel
  0 siblings, 0 replies; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-20 22:22 UTC (permalink / raw)
  To: qemu-devel; +Cc: Isaku Yamahata


[-- Attachment #1.1: Type: text/plain, Size: 4683 bytes --]

After messing around with bcc, as86 and ld86 I finally got the vgabios to compile.

Everything works as it should, I'm guessing. I've attached the output of lspci from the guest. I'll spend some time looking at the device models and see how I can implement a model of what I'm interested in.

Is the overall architecture of a qemu pcie device still similar to that of a typical qemu pci device, or is there anything I should be aware of?

Thanks again Isaku.

AK
  _____  

From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
To: Adnan Khaleel [mailto:adnan@khaleel.us]
Cc: qemu-devel@nongnu.org
Sent: Fri, 20 Aug 2010 00:22:03 -0500
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X

On Thu, Aug 19, 2010 at 01:32:42PM -0500, Adnan Khaleel wrote:
  > Isaku,
  > 
  > I'm having some difficulties building the sources, I get the following message
  > 
  > *akhaleel@yar95 qemu-q35 $ ./configure --help
  > : bad interpreter: No such file or directory
  > 
  > And I get a similar error while compiling seabios as well.
  >
  > What shell are you using or am I missing something? I'm compiling from a
  > typical bash shell and using gcc v4.4.0.
  
  I'm not sure. configure script isn't modified.
  Can you compile normal qemu?
  The first line of the script is #!/bin/sh. I suppose you have /bin/sh.
  
  
  > In vgabios, there is a requirement for bcc. Is that borland C compiler?
  
  No. Most Linux distros have a bcc package. You just need to install it, e.g.
  yum install bcc or something similar.
  
  Thanks,
  
  
  > 
  > Thanks
  > 
  > Adnan
  > 
  >     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  >     From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
  >     To: Adnan Khaleel [mailto:adnan@khaleel.us]
  >     Cc: qemu-devel@nongnu.org
  >     Sent: Wed, 18 Aug 2010 22:19:04 -0500
  >     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
  >     and MSI-X
  > 
  >     On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
  >     > Hello Qemu developers,
  >     >
  >     > I'm interested in developing a device model that plugs into Qemu that is
  >     based
  >     > on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
  >     > simulator to this PCIe interface and use the entire platfom (Qemu + GPU
  >     > simulator) for studying cpu, gpu interactions.
  >     >
  >     > I'm not terribly familiar with the Qemu device model and I'm looking for
  >     some
  >     > assistance, perhaps a starting template for pcie and msi-x that would
  >     offer the
  >     > basic functionality that I could then build upon.
  >     >
  >     > I have looked at the various devices that already modelled that are
  >     included
  >     > with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
  >     eg;
  >     > ne2k and cirrus-pci etc, however only one device truly seems to utilize
  >     both
  >     > the technologies that I'm interested in and that is the virtio-pci.c
  >     >
  >     > I'm not sure what virtio-pci does so I'm not sure if that is a suitable
  >     > starting point for me.
  >     >
  >     > Any help, suggestions etc would be extremely helpful and much
  >     appreciated.
  > 
  >     Qemu doesn't support pcie at the moment.
  >     Only partial patches have been merged, still more patches have to
  >     be merged for pcie to fully work. The following repo is available.
  > 
  >     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  >     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  >     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  > 
  >     Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
  >     for q35.
  >     example:
  >     qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/
  >     q35-acpi-dsdt.aml
  > 
  >     This repo is for those who want to try/develop pcie support,
  >     not for upstream merge. So they include patches unsuitable for upstream.
  >     The repo includes pcie port switch emulator which utilize pcie and
  >     MSI(not MSI-X).
  > 
  >     The difference between PCI device and PCIe device is configuration
  >     space size.
  >     By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
  >     space. Helper functions for pcie are found in qemu/hw/pcie.c
  >     For msi-x, see qemu/hw/msix.c.
  > 
  >     Thanks,
  >     --
  >     yamahata
  > 
  
  -- 
  yamahata
    

[-- Attachment #1.2: Type: text/html, Size: 6092 bytes --]

[-- Attachment #2: pci.txt --]
[-- Type: text/plain, Size: 9671 bytes --]

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
00:01.0 VGA compatible controller: Cirrus Logic GD 5446
00:04.0 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:18.0 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:18.1 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:18.2 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:18.3 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:18.4 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:18.5 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:19.0 PCI bridge: Intel Corporation X58 I/O Hub PCI Express Root Port 0 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
36:00.0 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.1 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.2 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.3 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.4 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.5 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.6 PCI bridge: Texas Instruments Device 8232 (rev 02)
36:00.7 PCI bridge: Texas Instruments Device 8232 (rev 02)
37:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
37:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
48:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
59:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
6a:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
7b:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
8c:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
9d:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:00.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:01.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:02.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:03.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:04.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:05.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:06.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:07.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:08.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:09.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:0a.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:0b.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:0c.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:0d.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:0e.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
ae:0f.0 PCI bridge: Texas Instruments Device 8233 (rev 01)
c0:00.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
c0:01.0 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
c0:1c.0 PCI bridge: Digital Equipment Corporation DECchip 21154 (rev 05)
c0:1d.0 PCI bridge: Digital Equipment Corporation DECchip 21154 (rev 05)
c0:1e.0 PCI bridge: Digital Equipment Corporation DECchip 21154 (rev 05)
c0:1f.0 PCI bridge: Digital Equipment Corporation DECchip 21154 (rev 05)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X
@ 2010-08-20 20:13 Adnan Khaleel
  0 siblings, 0 replies; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-20 20:13 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 4649 bytes --]

I'm not sure what the problem was but I had checked out the code on a windows machine and then copied it over to a linux box. That was causing problems somehow.

I've managed to compile qemu and seabios, but vgabios is taking forever to compile. The problem seems to be bcc; I've tried a simple hello-world program and even that takes forever. Not sure if there is something wrong with my bcc install.

AK
  _____  

From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
To: Adnan Khaleel [mailto:adnan@khaleel.us]
Cc: qemu-devel@nongnu.org
Sent: Fri, 20 Aug 2010 00:22:03 -0500
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X

On Thu, Aug 19, 2010 at 01:32:42PM -0500, Adnan Khaleel wrote:
  > Isaku,
  > 
  > I'm having some difficulties building the sources, I get the following message
  > 
  > *akhaleel@yar95 qemu-q35 $ ./configure --help
  > : bad interpreter: No such file or directory
  > 
  > And I get a similar error while compiling seabios as well.
  >
  > What shell are you using or am I missing something? I'm compiling from a
  > typical bash shell and using gcc v4.4.0.
  
  I'm not sure. configure script isn't modified.
  Can you compile normal qemu?
  The first line of the script is #!/bin/sh. I suppose you have /bin/sh.
  
  
  > In vgabios, there is a requirement for bcc. Is that borland C compiler?
  
  No. Most Linux distros have a bcc package. You just need to install it, e.g.
  yum install bcc or something similar.
  
  Thanks,
  
  
  > 
  > Thanks
  > 
  > Adnan
  > 
  >     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  >     From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
  >     To: Adnan Khaleel [mailto:adnan@khaleel.us]
  >     Cc: qemu-devel@nongnu.org
  >     Sent: Wed, 18 Aug 2010 22:19:04 -0500
  >     Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe
  >     and MSI-X
  > 
  >     On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
  >     > Hello Qemu developers,
  >     >
  >     > I'm interested in developing a device model that plugs into Qemu that is
  >     based
  >     > on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
  >     > simulator to this PCIe interface and use the entire platfom (Qemu + GPU
  >     > simulator) for studying cpu, gpu interactions.
  >     >
  >     > I'm not terribly familiar with the Qemu device model and I'm looking for
  >     some
  >     > assistance, perhaps a starting template for pcie and msi-x that would
  >     offer the
  >     > basic functionality that I could then build upon.
  >     >
  >     > I have looked at the various devices that already modelled that are
  >     included
  >     > with Qemu (v0.12.5 at least) and I've noticed several a few pci devices,
  >     eg;
  >     > ne2k and cirrus-pci etc, however only one device truly seems to utilize
  >     both
  >     > the technologies that I'm interested in and that is the virtio-pci.c
  >     >
  >     > I'm not sure what virtio-pci does so I'm not sure if that is a suitable
  >     > starting point for me.
  >     >
  >     > Any help, suggestions etc would be extremely helpful and much
  >     appreciated.
  > 
  >     Qemu doesn't support pcie at the moment.
  >     Only partial patches have been merged, still more patches have to
  >     be merged for pcie to fully work. The following repo is available.
  > 
  >     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  >     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  >     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  > 
  >     Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
  >     for q35.
  >     example:
  >     qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/
  >     q35-acpi-dsdt.aml
  > 
  >     This repo is for those who want to try/develop pcie support,
  >     not for upstream merge. So they include patches unsuitable for upstream.
  >     The repo includes pcie port switch emulator which utilize pcie and
  >     MSI(not MSI-X).
  > 
  >     The difference between PCI device and PCIe device is configuration
  >     space size.
  >     By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
  >     space. Helper functions for pcie are found in qemu/hw/pcie.c
  >     For msi-x, see qemu/hw/msix.c.
  > 
  >     Thanks,
  >     --
  >     yamahata
  > 
  
  -- 
  yamahata
    

[-- Attachment #2: Type: text/html, Size: 6050 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X
  2010-08-19 17:01 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
@ 2010-08-20  5:15 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-08-20  5:15 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: qemu-devel

On Thu, Aug 19, 2010 at 12:01:40PM -0500, Adnan Khaleel wrote:
> Hi Isaku, thank you very much for your very detailed response. I have a few
> questions, see below.
> 
> Thanks again,
> 
> Adnan
> 
> 
>     Qemu doesn't support pcie at the moment.
>     Only partial patches have been merged, still more patches have to
>     be merged for pcie to fully work. The following repo is available.
> 
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
>     git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
> 
>     Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
>     for q35.
>     example:
>     qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/
>     q35-acpi-dsdt.aml
> 
>     This repo is for those who want to try/develop pcie support,
>     not for upstream merge. So they include patches unsuitable for upstream.
> 
> I'm looking at Qemu 0.12.3 and there are 2 files, pci_host.c and pcie_host.c.
> Can you explain what these do?

They are for configuration space.
pci_host.c abstracts the emulation of indirect access to configuration space.
On PC, ioport 0xcf8 and 0xcfc.
pcie_host abstracts the emulation of access to MMCONFIG space. There is no
user at the moment, though.
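
Roughly, what pci_host.c emulates is the classic CONFIG_ADDRESS/CONFIG_DATA
pair the guest uses (a sketch of the guest side, not qemu code):

    #include <sys/io.h>   /* outl/inl; guest-side illustration only */

    static uint32_t pci_cfg_readl(unsigned bus, unsigned devfn, unsigned reg)
    {
        /* bit 31 = enable, bits 23:16 bus, 15:8 devfn, 7:2 register */
        outl((1u << 31) | (bus << 16) | (devfn << 8) | (reg & 0xfcu), 0xcf8);
        return inl(0xcfc);
    }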

> Also, I see virtio_pci.c is the only device that uses msi-x in qemu. Can you
> explain what device this is trying to emulate?

virtio is a paravirtual IO framework for kvm. There is no corresponding
real hardware.


> Also, will the support for PCIe be merged with the mail Qemu at some point?

I've been trying it. Your help will be appreciated. 


>     The repo includes pcie port switch emulator which utilize pcie and
>     MSI(not MSI-X).
> 
> I guess I could use this as a template for my qemu device mode correct?

Half yes. You need to be aware that a port switch is a pci-to-pci bridge, which
is slightly different from a normal pci/pcie device.
There is no emulator of a normal pcie device at the moment.


>     The difference between PCI device and PCIe device is configuration
>     space size.
>     By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
>     space. Helper functions for pcie are found in qemu/hw/pcie.c
>     For msi-x, see qemu/hw/msix.c.
> 
> One last question, does the current implementation allow for 64bit BAR
> addresses?

Yes. 64bit BAR emulation was already merged. seabios is also ready for it.
-- 
yamahata

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-08-19 17:01 Adnan Khaleel
  2010-08-20  5:15 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
  0 siblings, 1 reply; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-19 17:01 UTC (permalink / raw)
  To: Isaku Yamahata; +Cc: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1715 bytes --]

Hi Isaku, thank you very much for your very detailed response. I have a few questions, see below.

Thanks again,

Adnan

  Qemu doesn't support pcie at the moment.
  Only partial patches have been merged, still more patches have to
  be merged for pcie to fully work. The following repo is available.
  
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
  git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
  
  Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
  for q35.
  example:
  qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
  
  This repo is for those who want to try/develop pcie support,
  not for upstream merge. So they include patches unsuitable for upstream.

I'm looking at Qemu 0.12.3 and there are 2 files, pci_host.c and pcie_host.c. Can you explain what these do?
Also, I see virtio_pci.c is the only device that uses msi-x in qemu. Can you explain what device this is trying to emulate?
Also, will the support for PCIe be merged with the mail Qemu at some point?
  
  The repo includes pcie port switch emulator which utilize pcie and
  MSI(not MSI-X).

I guess I could use this as a template for my qemu device model, correct?

  The difference between PCI device and PCIe device is configuration
  space size.
  By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
  space. Helper functions for pcie are found in qemu/hw/pcie.c
  For msi-x, see qemu/hw/msix.c.

One last question: does the current implementation allow for 64bit BAR addresses?

  Thanks,
  -- 
  yamahata
    

[-- Attachment #2: Type: text/html, Size: 2900 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
  2010-08-18 19:10 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
@ 2010-08-19  3:19 ` Isaku Yamahata
  0 siblings, 0 replies; 19+ messages in thread
From: Isaku Yamahata @ 2010-08-19  3:19 UTC (permalink / raw)
  To: Adnan Khaleel; +Cc: qemu-devel

On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
> Hello Qemu developers,
>
> I'm interested in developing a device model that plugs into Qemu that is based
> on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU
> simulator to this PCIe interface and use the entire platfom (Qemu + GPU
> simulator) for studying cpu, gpu interactions.
> 
> I'm not terribly familiar with the Qemu device model and I'm looking for some
> assistance, perhaps a starting template for pcie and msi-x that would offer the
> basic functionality that I could then build upon.
> 
> I have looked at the various devices that already modelled that are included
> with Qemu (v0.12.5 at least) and I've noticed several a few pci devices, eg;
> ne2k and cirrus-pci etc, however only one device truly seems to utilize both
> the technologies that I'm interested in and that is the virtio-pci.c
> 
> I'm not sure what virtio-pci does so I'm not sure if that is a suitable
> starting point for me.
> 
> Any help, suggestions etc would be extremely helpful and much appreciated.

Qemu doesn't support pcie at the moment.
Only partial patches have been merged, still more patches have to
be merged for pcie to fully work. The following repo is available.

git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios

Note: patched seabios and vgabios are needed, you have to pass ACPI DSDT
for q35.
example:
qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/q35-acpi-dsdt.aml

This repo is for those who want to try/develop pcie support,
not for upstream merge. So they include patches unsuitable for upstream.
The repo includes pcie port switch emulator which utilize pcie and
MSI(not MSI-X).

The difference between PCI device and PCIe device is configuration
space size.
By setting PCIDeviceInfo::is_express = 1, you'll get 4K configuration
space. Helper functions for pcie are found in qemu/hw/pcie.c
For msi-x, see qemu/hw/msix.c.

Thanks,
-- 
yamahata

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
@ 2010-08-18 19:10 Adnan Khaleel
  2010-08-19  3:19 ` Isaku Yamahata
  0 siblings, 1 reply; 19+ messages in thread
From: Adnan Khaleel @ 2010-08-18 19:10 UTC (permalink / raw)
  To: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1009 bytes --]

Hello Qemu developers,

I'm interested in developing a device model that plugs into Qemu that is based on a PCIe interface and uses MSI-X. My goal is to ultimately attach a GPU simulator to this PCIe interface and use the entire platfom (Qemu + GPU simulator) for studying cpu, gpu interactions.

I'm not terribly familiar with the Qemu device model and I'm looking for some assistance, perhaps a starting template for pcie and msi-x that would offer the basic functionality that I could then build upon. 

I have looked at the various devices that already modelled that are included with Qemu (v0.12.5 at least) and I've noticed several a few pci devices, eg; ne2k and cirrus-pci etc, however only one device truly seems to utilize both the technologies that I'm interested in and that is the virtio-pci.c

I'm not sure what virtio-pci does so I'm not sure if that is a suitable starting point for me. 

Any help, suggestions etc would be extremely helpful and much appreciated.

Sincerely,

AK

[-- Attachment #2: Type: text/html, Size: 1324 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2010-09-10  2:23 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-08-19 18:32 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
2010-08-20  5:22 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
     [not found] <20100909190713.d8dc99ce@shadowfax.no-ip.com>
2010-09-10  2:00 ` [Qemu-devel] Template for developing a Qemu device with?PCIe?and MSI-X Isaku Yamahata
  -- strict thread matches above, loose matches on Subject: below --
2010-09-02 22:56 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
2010-09-02 17:42 Adnan Khaleel
2010-09-03  2:20 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
2010-09-01 19:07 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
2010-09-02  2:38 ` Isaku Yamahata
2010-08-26 18:17 [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Adnan Khaleel
2010-08-27  7:57 ` [Qemu-devel] Template for developing a Qemu device with?PCIe?and MSI-X Isaku Yamahata
2010-08-25 22:39 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
2010-08-26  9:43 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
2010-08-27 15:48 ` [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Cam Macdonell
2010-08-20 22:22 [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Adnan Khaleel
2010-08-20 20:13 Adnan Khaleel
2010-08-19 17:01 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
2010-08-20  5:15 ` [Qemu-devel] Template for developing a Qemu device with PCIe?and MSI-X Isaku Yamahata
2010-08-18 19:10 [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X Adnan Khaleel
2010-08-19  3:19 ` Isaku Yamahata
