From: "Dor Laor"
Subject: Re: paravirtualization & cr3-cache feature
Date: Tue, 20 Feb 2007 01:48:08 -0800
List-Id: kvm.vger.kernel.org

>> >Why so many vm switches? First up, a typical I/O system maxes at
>> >about 1Gb/s, right? That would be a gigabit NIC, or striped RAID, or
>> >something like that. This suggests an average of only about 300
>> >bytes/transfer, to get >150k individual transfers per second? I
>> >thought block I/O usually dealt with 1kbyte or more. Next, we're
>> >living in a world where most CPUs supporting the required extended
>> >instruction set are multi-core, and you specifically said core duo.
>> >Shouldn't an extensive I/O workload tend toward one CPU in each VM,
>> >with contraposed producer-consumer queues, and almost zero context
>> >switches?
>>
>> First, 150k-200k vm switches is the maximum we can reach on a single
>> core. The exits are not necessarily related to I/O; for instance,
>> before the new MMU code, page-fault exits were the performance
>> bottleneck, and a major part of that cost was the vm switches.
>>
>> Second, we currently use Qemu's device emulation, where the ne2k
>> device does dozens of I/O accesses per packet! The rtl8139 is better
>> and does about 3 I/O (MMIO) accesses per packet. The current maximum
>> throughput using the rtl8139 is ~30Mbps. Very soon we'll have PV
>> drivers that will boost performance and use the P/C queues you were
>> talking about. PV drivers will push performance beyond 1Gbps.
>
>Thanks for the explanation. Two things had thrown me off: first, the
>phrase "extensive I/O" made me think of disk I/O with rather large
>blocks, not a bunch of smallish network packets. And I'd forgotten
>that we're dealing with full virt, so you can't pipeline requests,
>since the driver expects synchronous memory access. Is it possible
>for any of qemu's hardware emulation code to run without a VMEXIT,
>more or less like a software interrupt within the VM? Or is a VMEXIT
>already the equivalent of a software interrupt, or of the trap a
>ring-3 request for a privileged instruction takes, applied to a
>virtual machine instead of ring-3 user mode?

I didn't completely understand: do you mean that the guest would issue
software interrupts to get a cheaper VM exit? If that's the case, I
find it impossible for fully virtualized devices, since these devices
use I/O or MMIO for their connectivity with software.
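To make that concrete, here is a rough, hypothetical sketch of why a
fully virtualized NIC cannot avoid exits (the register offsets and the
programming sequence below are illustrative, not the real ne2k model):
every OUT instruction the guest driver issues is intercepted by the
hypervisor, so each one is a VM exit.

    /* Hypothetical sketch: a fully virtualized NIC is programmed
     * through I/O ports, and under VT-x/AMD-V every port access
     * traps to the hypervisor -- there is no way for the guest to
     * reach the emulated device without a VM exit. */
    #include <stdint.h>

    /* illustrative register offsets, loosely ne2k-style */
    #define NIC_IOBASE   0x300
    #define NIC_CMD      (NIC_IOBASE + 0x00)
    #define NIC_TX_START (NIC_IOBASE + 0x04)
    #define NIC_TX_LEN_L (NIC_IOBASE + 0x05)
    #define NIC_TX_LEN_H (NIC_IOBASE + 0x06)

    static inline void outb(uint8_t val, uint16_t port)
    {
        /* each OUT below is intercepted -> one VM exit */
        __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static void nic_send_packet(uint16_t len)
    {
        outb(0x22, NIC_CMD);            /* exit #1: start, no DMA    */
        outb(0x40, NIC_TX_START);       /* exit #2: tx page          */
        outb(len & 0xff, NIC_TX_LEN_L); /* exit #3: length, low byte */
        outb(len >> 8, NIC_TX_LEN_H);   /* exit #4: length, high     */
        outb(0x26, NIC_CMD);            /* exit #5: transmit         */
        /* the real ne2k sequence is longer still -- hence the
         * "dozens of I/O accesses per packet" mentioned above */
    }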
We are currently adding PV drivers (even to fully virtualized guests),
and they will queue/coalesce packets together.
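As a rough sketch of the producer/consumer queue idea behind those PV
drivers (the ring layout and hypercall_notify() below are hypothetical,
not our actual PV interface): guest and host share a ring in memory,
the guest batches descriptors into it, and a whole batch of packets
costs a single notification, i.e. a single VM exit.

    #include <stdint.h>

    #define RING_SIZE 256            /* must be a power of two */

    struct pkt_desc {
        uint64_t addr;               /* guest-physical buffer address */
        uint32_t len;
    };

    struct shared_ring {
        volatile uint32_t prod;      /* written by guest */
        volatile uint32_t cons;      /* written by host  */
        struct pkt_desc   desc[RING_SIZE];
    };

    /* assumed hypercall that traps to the hypervisor exactly once */
    extern void hypercall_notify(void);

    /* guest side: queue one packet; 0 on success, -1 if ring full */
    static int ring_tx(struct shared_ring *r, uint64_t addr,
                       uint32_t len)
    {
        uint32_t prod = r->prod;
        if (prod - r->cons == RING_SIZE)
            return -1;                   /* ring full */
        r->desc[prod % RING_SIZE] = (struct pkt_desc){ addr, len };
        __sync_synchronize();            /* publish desc before index */
        r->prod = prod + 1;
        return 0;
    }

    /* coalescing: queue a whole batch, then exit to the host once */
    static void ring_tx_batch(struct shared_ring *r,
                              const struct pkt_desc *pkts, int n)
    {
        for (int i = 0; i < n; i++)
            if (ring_tx(r, pkts[i].addr, pkts[i].len) < 0)
                break;                   /* ring full, stop early */
        hypercall_notify();              /* one VM exit for n packets */
    }

Compare this with the emulated-NIC sketch earlier in the mail: the
per-packet cost drops from several exits to a fraction of one.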