From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:51065)
 by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
 id 1ddE6T-00058L-9Z for qemu-devel@nongnu.org;
 Thu, 03 Aug 2017 07:15:46 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
 (envelope-from ) id 1ddE6R-0008KR-Uo for qemu-devel@nongnu.org;
 Thu, 03 Aug 2017 07:15:45 -0400
References: <1499633493-19865-1-git-send-email-eric.auger@redhat.com>
 <50d1685e-bfa6-02ac-2650-37637745431a@redhat.com>
 <35467208-4e8f-b944-c247-c049c2301878@caviumnetworks.com>
From: Auger Eric
Message-ID: <8f2b1045-e7f5-a2e1-8a14-54c3d1054e0b@redhat.com>
Date: Thu, 3 Aug 2017 13:15:10 +0200
MIME-Version: 1.0
In-Reply-To: <35467208-4e8f-b944-c247-c049c2301878@caviumnetworks.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [RFC v5 0/8] ARM SMMUv3 Emulation Support
To: Tomasz Nowicki , eric.auger.pro@gmail.com, peter.maydell@linaro.org,
 qemu-arm@nongnu.org, qemu-devel@nongnu.org, alex.williamson@redhat.com,
 prem.mallappa@gmail.com
Cc: mohun106@gmail.com, drjones@redhat.com, tcain@qti.qualcomm.com,
 Radha.Chintakuntla@cavium.com, Sunil.Goutham@cavium.com, mst@redhat.com,
 jean-philippe.brucker@arm.com, robin.murphy@arm.com, will.deacon@arm.com,
 "Nair, Jayachandran" , peterx@redhat.com, edgar.iglesias@gmail.com,
 bharat.bhushan@nxp.com, christoffer.dall@linaro.org

Hi Tomasz,

On 03/08/2017 12:11, Tomasz Nowicki wrote:
> Hi Eric,
>
> On 01.08.2017 15:07, Auger Eric wrote:
>> Hi Tomasz,
>> On 01/08/2017 13:01, Tomasz Nowicki wrote:
>>> Hi Eric,
>>>
>>> Just letting you know that I am facing another issue with the
>>> following setup:
>>> 1. host (4.12 kernel & 64K pages) and VM (4.12 kernel & 64K pages)
>>> 2. QEMU + -netdev type=tap,ifname=tap,id=net0 -device
>>> virtio-net-pci,netdev=net0,iommu_platform,disable-modern=off,disable-legacy=on
>>>
>>> 3. On the VM, I allocate some huge pages and run the DPDK testpmd app:
>>> # echo 4 > /sys/kernel/mm/hugepages/hugepages-524288kB/nr_hugepages
>>> # ./dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:00:02.0
>>> # ./dpdk/build/app/testpmd -l 0-13 -n 4 -w 0000:00:02.0 --
>>> --disable-hw-vlan-filter --disable-rss -i
>>> EAL: Detected 14 lcore(s)
>>> EAL: Probing VFIO support...
>>> EAL: VFIO support initialized
>>> EAL: PCI device 0000:00:02.0 on NUMA socket -1
>>> EAL: probe driver: 1af4:1041 net_virtio
>>> EAL: using IOMMU type 1 (Type 1)
>>> EAL: iommu_map_dma vaddr ffff20000000 size 80000000 iova 120000000
>>> EAL: Can't write to PCI bar (0) : offset (12)
>>> EAL: Can't read from PCI bar (0) : offset (12)
>>> EAL: Can't read from PCI bar (0) : offset (12)
>>> EAL: Can't write to PCI bar (0) : offset (12)
>>> EAL: Can't read from PCI bar (0) : offset (12)
>>> EAL: Can't write to PCI bar (0) : offset (12)
>>> EAL: Can't read from PCI bar (0) : offset (0)
>>> EAL: Can't write to PCI bar (0) : offset (4)
>>> EAL: Can't write to PCI bar (0) : offset (14)
>>> EAL: Can't write to PCI bar (0) : offset (e)
>>> EAL: Can't read from PCI bar (0) : offset (c)
>>> EAL: Requested device 0000:00:02.0 cannot be used
>>> EAL: No probed ethernet devices
>>> Interactive-mode selected
>>> USER1: create a new mbuf pool : n=251456, size=2176, socket=0
>>>
>>> When the VM uses *4K pages* the same setup works fine. I will work on
>>> this, but please let me know in case you already know what is going on.
>>
>> No, I did not face that one.
>> I was able to launch testpmd without such an early error message.
>> However, I assigned an igbvf device to the guest and then handed it
>> to DPDK; I have never tested your config.
>>
>> However, as stated in my cover letter, at the moment DPDK is not working
>> for me because of storms of tlbi-on-map operations. I intend to work on
>> this as soon as I get some bandwidth, sorry.
>
> I found the reason for the failure.
>
> QEMU creates the BARs for the virtio PCI device, and their size depends
> on what is necessary for the virtio protocol. In my case the BAR is 16K,
> which is too small to be mmapable by a kernel using 64K pages:
> vfio_pci_enable() -> vfio_pci_probe_mmaps() ->
> here the guest kernel sees that the BAR size is smaller than the current
> PAGE_SIZE and clears the VFIO_REGION_INFO_FLAG_MMAP flag, which prevents
> the BAR from being mmapped later on. I added -device
> virtio-net-pci,...,page-per-vq=on to enlarge the BAR size to 8M and now
> testpmd works fine. I wonder how the same setup works with e.g. an Intel
> or AMD IOMMU.

Hum, OK. Yet another thing to investigate! Thank you for your efforts and
for the excellent news overall. Preparing a rebase ...

Thanks

Eric

>
> Thanks,
> Tomasz
>
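
To make the size check above concrete, here is a minimal, self-contained
sketch. It is an illustration only, not the actual kernel code in
vfio_pci_probe_mmaps(); the helper name bar_mmapable() and the hard-coded
sizes are assumptions taken from the numbers quoted in this thread (a 16K
virtio-net-pci BAR by default, 8M with page-per-vq=on). The idea it models
is that a BAR only keeps VFIO_REGION_INFO_FLAG_MMAP when it spans at least
one guest page.

/*
 * Hedged illustration only: NOT the kernel's vfio_pci_probe_mmaps(),
 * just a standalone approximation of the size check discussed above.
 * Build with: cc -o bar_mmap_check bar_mmap_check.c
 */
#include <stdbool.h>
#include <stdio.h>

/* Models "BAR keeps VFIO_REGION_INFO_FLAG_MMAP only if it spans >= one
 * guest page"; bar_mmapable() is a made-up name for this sketch. */
static bool bar_mmapable(unsigned long bar_size, unsigned long page_size)
{
        return bar_size >= page_size;
}

int main(void)
{
        const unsigned long bar_default = 16UL * 1024;        /* 16K BAR from the thread */
        const unsigned long bar_ppvq    = 8UL * 1024 * 1024;  /* 8M with page-per-vq=on  */

        printf("16K BAR, 4K guest pages : %s\n",
               bar_mmapable(bar_default, 4UL * 1024) ? "mmapable" : "MMAP flag cleared");
        printf("16K BAR, 64K guest pages: %s\n",
               bar_mmapable(bar_default, 64UL * 1024) ? "mmapable" : "MMAP flag cleared");
        printf("8M BAR,  64K guest pages: %s\n",
               bar_mmapable(bar_ppvq, 64UL * 1024) ? "mmapable" : "MMAP flag cleared");
        return 0;
}

Under those assumptions, only the 16K-BAR / 64K-page combination loses the
MMAP flag, which matches Tomasz's observation that the same setup works with
4K guest pages or once page-per-vq=on enlarges the BAR.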