Date: Thu, 2 Sep 2010 11:38:31 +0900
From: Isaku Yamahata <yamahata@valinux.co.jp>
To: Adnan Khaleel <adnan@khaleel.us>
Cc: Cam Macdonell <cam@cs.ualberta.ca>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
Message-ID: <20100902023831.GA20106@valinux.co.jp>
In-Reply-To: <20100901190733.5ea9ca03@shadowfax.no-ip.com>
List-Id: qemu-devel.nongnu.org

On Wed, Sep 01, 2010 at 02:07:33PM -0500, Adnan Khaleel wrote:
> Yamahata, Cam,
>
> Thank you both very much for the pointers about Qemu coding for PCIe and
> MSI-X.
>
> I'm at a point where I can see my device when I do an lspci -t -v, as shown
> below.
>
> linux-an84:~ # lspci -t -v
> -[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
>            +-01.0  Cirrus Logic GD 5446
>            +-04.0-[0000:20]--
>            +-18.0-[0000:21]--
>            +-18.1-[0000:22]--
>            +-18.2-[0000:23]--
>            +-18.3-[0000:24]--
>            +-18.4-[0000:25]--
>            +-18.5-[0000:26]--
>            +-19.0-[0000:36-bf]--+-00.0-[0000:37-47]--+-00.0-[0000:38]--
>            |                    |                    +-01.0-[0000:39]--
>            |                    |                    +-02.0-[0000:3a]--
>            |                    |                    +-03.0-[0000:3b]--
>            |                    |                    +-04.0-[0000:3c]--
>            |                    |                    +-05.0-[0000:3d]--
>            |                    |                    +-06.0-[0000:3e]--
>            |                    |                    +-07.0-[0000:3f]--
>            |                    |                    +-08.0-[0000:40]----00.0  Cray Inc Device 0301   <- The device that I've included
>
> However, I'm having a bit of an issue with MSI-X.
>
> I'm following the code examples in virtio-pci.c and ivshmem.c that Cam
> pointed me to. I've got BARs 0 and 1 already occupied, so I assign the
> msix_mmio_map to BAR 2. However, when I do that, Qemu fails to boot with
> the following fatal error:
>
> unused outb: port=0x00f1 data=0x00
> qemu: fatal: Trying to execute code outside RAM or ROM at 0x0000000000100000
>
> A couple of things I'm unsure about:
> 1. Am I registering the 64-bit BAR addresses correctly?
>    pci_register_bar(&d->dev, i, BAR_Regions[i][0],
>                     PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64,
>                     pcie_msix_io_map);

No. PCI_BASE_ADDRESS_SPACE_IO | PCI_BASE_ADDRESS_MEM_TYPE_64 is invalid.
PCI doesn't support 64-bit I/O space, only 32-bit. And x86 decodes only
64K of I/O space.

> 2. In the function int msix_init(PCIDevice *pdev, unsigned short nentries,
>    unsigned bar_nr, unsigned bar_size);
>    I'm not sure what bar_nr is. However, setting it to 1 (as in the code
>    examples above) or to 2 (the BAR that I want to register the
>    msix_mmio_map to) both fail with the same error.

It should be 2. I'm not sure why it failed.
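To make the flag point above concrete, the two kinds of BAR would be
registered roughly like this. This is only an untested sketch: it uses the
pci_register_bar() signature and the map callbacks (pcie_msix_mem_map,
pcie_msix_io_map) from your code, and the sizes are purely illustrative.

    /* 64-bit memory BAR: memory space plus the 64-bit type flag.
     * It consumes two BAR slots (here BAR 0 and BAR 1). */
    pci_register_bar(&d->dev, 0, 0x2000000000ull,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_TYPE_64,
                     pcie_msix_mem_map);

    /* I/O BAR: I/O space only, never PCI_BASE_ADDRESS_MEM_TYPE_64.
     * Keep it small; x86 only decodes 64K of I/O ports. */
    pci_register_bar(&d->dev, 2, 0x100,
                     PCI_BASE_ADDRESS_SPACE_IO,
                     pcie_msix_io_map);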
> rc = msix_init(&d->dev, d->vectors, 2, 0);
>     :
>     :
> pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev),
>                  PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
>                  msix_mmio_map);
>
> Here is my init function code in its entirety:
>
> static const unsigned long long BAR_Regions[6][2] =
> {
>     //  len           , type
>     { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY },  // BAR0
>     { 0x2000000ull,    PCI_BASE_ADDRESS_SPACE_IO     },  // BAR1
>     { 0, 0 },                                            // BAR2
>     { 0, 0 },                                            // BAR3
>     { 0, 0 },                                            // BAR4
>     { 0, 0 }                                             // BAR5
> };

Probably what you want is something like:

    { 0x2000000000ull, PCI_BASE_ADDRESS_SPACE_MEMORY |
                       PCI_BASE_ADDRESS_MEM_TYPE_64 },   // BAR0
    { 0, 0 },        // BAR1: a 64-bit BAR occupies two BAR entries,
                     //       so BAR1 can't be used
    { 0x2000000ull, PCI_BASE_ADDRESS_SPACE_IO },          // BAR2
    { 0, 0 },        // BAR3: for MSI-X
    { 0, 0 },        // BAR4
    { 0, 0 }         // BAR5

> static int pcie_msix_initfn(PCIDevice *pci_dev)
> {
>     PCIE_MSIX_DEVState *d = DO_UPCAST(PCIE_MSIX_DEVState, dev, pci_dev);
>     PCIBridge *br = DO_UPCAST(PCIBridge, dev, pci_dev);
>     PCIEPort *p = DO_UPCAST(PCIEPort, br, br);
>     int rc, i;
>
>     PRINT_DEBUG("%s: PCIE MSIX Device init...\n", __FUNCTION__);
>
>     pci_config_set_vendor_id(d->dev.config, PCIE_MSIX_VID);
>     pci_config_set_device_id(d->dev.config, PCIE_MSIX_DID);
>     d->dev.config[PCI_REVISION_ID] = PCIE_MSIX_VERSION;
>     d->dev.config[PCI_SUBSYSTEM_VENDOR_ID]   = PCIE_MSIX_VID & 0xff;
>     d->dev.config[PCI_SUBSYSTEM_VENDOR_ID+1] = (PCIE_MSIX_VID >> 8) & 0xff;
>     d->dev.config[PCI_SUBSYSTEM_ID]   = PCIE_MSIX_SS_DID & 0xff;
>     d->dev.config[PCI_SUBSYSTEM_ID+1] = (PCIE_MSIX_SS_DID >> 8) & 0xff;

Use pci_set_word() for these (see the sketch after the quoted code below).

>     d->mmio_index = cpu_register_io_memory(pcie_msix_mem_read_fn,
>                                            pcie_msix_mem_write_fn, d);
>
>     for (i = 0; i < PCI_NUM_REGIONS - 1; i++) {   // -1 for the Exp ROM BAR
>         if (BAR_Regions[i][0] != 0)
>         {
>             if (BAR_Regions[i][1] == PCI_BASE_ADDRESS_SPACE_IO)
>             {
>                 // io region
>                 PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n",
>                             __FUNCTION__, i);
>                 pci_register_bar(&d->dev, i, BAR_Regions[i][0],
>                                  PCI_BASE_ADDRESS_SPACE_IO |
>                                  PCI_BASE_ADDRESS_MEM_TYPE_64,
>                                  pcie_msix_io_map);
>             } else {
>                 // mem region
>                 PRINT_DEBUG("%s: Registering Bar %i as MEM BAR\n",
>                             __FUNCTION__, i);
>                 pci_register_bar(&d->dev, i, BAR_Regions[i][0],
>                                  PCI_BASE_ADDRESS_SPACE_MEMORY |
>                                  PCI_BASE_ADDRESS_MEM_TYPE_64,
>                                  pcie_msix_mem_map);
>             }
>         }
>     }
>     d->dev.config[PCI_INTERRUPT_PIN] = 1;
>
>     rc = msix_init(&d->dev, d->vectors, 2, 0);
>
>     if (!rc) {
>         PRINT_DEBUG("%s: Registering Bar %i as I/O BAR\n", __FUNCTION__, i);
>         pci_register_bar(&d->dev, 2, msix_bar_size(&d->dev),
>                          PCI_BASE_ADDRESS_SPACE_MEMORY |
>                          PCI_BASE_ADDRESS_MEM_TYPE_64,
>                          msix_mmio_map);
>         PRINT_DEBUG("%s: MSI-X initialized (%d vectors)\n", __FUNCTION__,
>                     d->vectors);
>     }
>     else {
>         PRINT_DEBUG("%s: MSI-X initialization failed!\n", __FUNCTION__);
>         exit(1);
>     }
>
>     // Activate the vectors
>     for (i = 0; i < d->vectors; i++) {
>         msix_vector_use(&d->dev, i);
>     }
>
>     rc = pci_pcie_cap_init(&d->dev, PCIE_MSIX_EXP_OFFSET,
>                            PCI_EXP_TYPE_ENDPOINT, p->port);
>     if (rc < 0) {
>         return rc;
>     }
>
>     pcie_cap_flr_init(&d->dev, &pcie_msix_flr);
>     pcie_cap_deverr_init(&d->dev);
>     pcie_cap_ari_init(&d->dev);
>     rc = pcie_aer_init(&d->dev, PCIE_MSIX_AER_OFFSET);
>     if (rc < 0) {
>         return rc;
>     }
>
>     PRINT_DEBUG("%s: Init done\n", __FUNCTION__);
>     return 0;
> }
>
>
> Thanks
>
> AK
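To spell out the pci_set_word() suggestion above, the subsystem ID writes
would look roughly like this (a sketch only; PCIE_MSIX_VID and
PCIE_MSIX_SS_DID are the constants from your code):

    /* pci_set_word() stores a 16-bit value into config space in
     * little-endian byte order, replacing the hand-rolled byte writes. */
    pci_set_word(d->dev.config + PCI_SUBSYSTEM_VENDOR_ID, PCIE_MSIX_VID);
    pci_set_word(d->dev.config + PCI_SUBSYSTEM_ID, PCIE_MSIX_SS_DID);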
> ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
> From: Cam Macdonell [mailto:cam@cs.ualberta.ca]
> To: adnan@khaleel.us
> Cc: Isaku Yamahata [mailto:yamahata@valinux.co.jp], qemu-devel@nongnu.org
> Sent: Fri, 27 Aug 2010 10:48:48 -0500
> Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
>
> On Wed, Aug 25, 2010 at 4:39 PM, Adnan Khaleel wrote:
> > Hi Isaku,
> >
> > I've made some progress in coding the device template, but it's nowhere
> > near complete.
> >
> > I've created some files and am attaching them to this note. Based on what
> > I could gather from the pcie source files, I've made a stab at creating a
> > simple model. I've also attached a file for a simple pci device that works
> > under regular Qemu. I would like to duplicate its functionality in your
> > pcie environment for starters.
> >
> > Could you please take a look at the files I've created and tell me if I've
> > understood your pcie model correctly. Any help will be truly appreciated.
> >
> > Adnan
>
> Hi Adnan,
>
> There is a fairly simple device I've created called "ivshmem" that is
> in the qemu git tree. It is a regular PCI device that exports a
> shared memory object via a BAR and supports a few registers and
> optional MSI-X interrupts (I had to pick through the virtio code to
> get MSI-X working, so looking at ivshmem might save you some effort).
> My device is actually somewhat similar to a graphics card, which I
> recall is your goal. The purpose of ivshmem is to support sharing
> memory between multiple guests running on the same host. It follows
> the qdev model, which you will need to do as well.
>
> Cam
>
> > The five files I've modified from your git repository are as follows:
> >
> > hw/pci_ids.h              // Added vendor id defines
> > hw/pc_q35.c               // Device instantiation
> > hw/pcie_msix_template.h   // Device header file
> > hw/pcie_msix_template.c   // Device file
> > Makefile.objs             // Added pcie_msix_template.o to the list of
> >                           // objects being built
> >
> > Everything should compile without any warnings or errors.
> >
> > The last file, sc_link_pci.c, is the original PCI device that I'm trying
> > to convert to PCIe and MSI-X; it is included merely for reference, to
> > help you understand what I'd like to achieve in your environment.
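One more note on MSI-X, since Cam mentions it above: once msix_init() has
succeeded and the vectors are marked with msix_vector_use() (as your init
function already does), the device raises an interrupt with msix_notify().
A rough, untested sketch; the helper name and the INTx fallback are made up
for illustration, and d->vectors is the field from your posted state struct:

    /* Hypothetical helper: fire MSI-X vector 'vector' if the guest has
     * enabled MSI-X, in the style of ivshmem and virtio-pci. */
    static void pcie_msix_raise_irq(PCIE_MSIX_DEVState *d, unsigned vector)
    {
        if (msix_enabled(&d->dev) && vector < d->vectors) {
            msix_notify(&d->dev, vector);
        } else {
            /* Fall back to the legacy INTx pin declared in the init code. */
            qemu_set_irq(d->dev.irq[0], 1);
        }
    }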
> > ________________________________
> > From: Isaku Yamahata [mailto:yamahata@valinux.co.jp]
> > To: Adnan Khaleel [mailto:adnan@khaleel.us]
> > Cc: qemu-devel@nongnu.org
> > Sent: Wed, 18 Aug 2010 22:19:04 -0500
> > Subject: Re: [Qemu-devel] Template for developing a Qemu device with PCIe and MSI-X
> >
> > On Wed, Aug 18, 2010 at 02:10:10PM -0500, Adnan Khaleel wrote:
> >> Hello Qemu developers,
> >>
> >> I'm interested in developing a device model that plugs into Qemu, is
> >> based on a PCIe interface, and uses MSI-X. My goal is to ultimately
> >> attach a GPU simulator to this PCIe interface and use the entire
> >> platform (Qemu + GPU simulator) for studying cpu/gpu interactions.
> >>
> >> I'm not terribly familiar with the Qemu device model, and I'm looking
> >> for some assistance, perhaps a starting template for pcie and msi-x that
> >> would offer the basic functionality that I could then build upon.
> >>
> >> I have looked at the various devices already modelled that are included
> >> with Qemu (v0.12.5 at least), and I've noticed several pci devices, e.g.
> >> ne2k and cirrus-pci. However, only one device truly seems to utilize
> >> both of the technologies that I'm interested in, and that is
> >> virtio-pci.c.
> >>
> >> I'm not sure what virtio-pci does, so I'm not sure if it is a suitable
> >> starting point for me.
> >>
> >> Any help, suggestions etc. would be extremely helpful and much
> >> appreciated.
> >
> > Qemu doesn't support pcie at the moment.
> > Only partial patches have been merged; more patches still have to be
> > merged for pcie to fully work. The following repos are available:
> >
> > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/qemu
> > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/seabios
> > git clone http://people.valinux.co.jp/~yamahata/qemu/q35/vgabios
> >
> > Note: the patched seabios and vgabios are needed, and you have to pass
> > the ACPI DSDT for q35. Example:
> >
> > qemu-system-x86_64 -M pc_q35 -acpitable load_header,data=roms/seabios/src/q35-acpi-dsdt.aml
> >
> > This repo is for those who want to try/develop pcie support, not for
> > upstream merge, so it includes patches unsuitable for upstream.
> > The repo includes a pcie port switch emulator which utilizes pcie and
> > MSI (not MSI-X).
> >
> > The difference between a PCI device and a PCIe device is the
> > configuration space size. By setting PCIDeviceInfo::is_express = 1,
> > you'll get a 4K configuration space. Helper functions for pcie are found
> > in qemu/hw/pcie.c. For MSI-X, see qemu/hw/msix.c.
> >
> > Thanks,
> > --
> > yamahata

-- 
yamahata
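P.S. Since the qdev model and PCIDeviceInfo::is_express came up above, the
registration boilerplate would look roughly like this against the q35 tree.
This is only a sketch: the device name string is a placeholder, the state
struct and init function are the ones from the posted code, and the exact
fields should be checked against hw/pci.h in the tree.

    /* Sketch of qdev registration, assuming the q35 tree's PCIDeviceInfo
     * with the is_express field mentioned above. */
    static PCIDeviceInfo pcie_msix_info = {
        .qdev.name  = "pcie_msix_template",   /* placeholder device name */
        .qdev.size  = sizeof(PCIE_MSIX_DEVState),
        .init       = pcie_msix_initfn,
        .is_express = 1,    /* gives the 4K PCIe config space */
    };

    static void pcie_msix_register(void)
    {
        pci_qdev_register(&pcie_msix_info);
    }
    device_init(pcie_msix_register)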