From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dante Cinco
Subject: Re: swiotlb=force in Konrad's xen-pcifront-0.8.2 pvops domU kernel with PCI passthrough
Date: Thu, 18 Nov 2010 11:35:42 -0800
Message-ID: 
References: <20101112165541.GA10339@dumpdata.com> <20101112223333.GD26189@dumpdata.com> <20101116185748.GA11549@dumpdata.com> <20101116201349.GA18315@dumpdata.com> <20101118171936.GA29275@dumpdata.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
In-Reply-To: 
Sender: xen-devel-bounces@lists.xensource.com
Errors-To: xen-devel-bounces@lists.xensource.com
To: Konrad Rzeszutek Wilk
Cc: Jeremy Fitzhardinge, Xen-devel, mathieu.desnoyers@polymtl.ca, andrew.thomas@oracle.com, keir.fraser@eu.citrix.com, chris.mason@oracle.com
List-Id: xen-devel@lists.xenproject.org

I mentioned in a previous post to this thread that I'm able to apply Dulloor's xenoprofile patch to the dom0 kernel but not the domU kernel, so I can't do active-domain profiling. I can do passive-domain profiling, but I don't know how reliable the results are, since it shows pvclock_clocksource_read as the top consumer of CPU cycles at 28%.
CPU: Intel Architectural Perfmon, speed 2665.98 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        image name                                                app name              symbol name
918089   27.9310  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      pvclock_clocksource_read
217811    6.6265  domain1-modules                                           domain1-modules       /domain1-modules
188327    5.7295  vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug      vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug  mutex_spin_on_owner
186684    5.6795  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      __xen_spin_lock
149514    4.5487  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      __write_lock_failed
123278    3.7505  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      __kernel_text_address
122906    3.7392  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      xen_spin_unlock
90903     2.7655  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      __spin_time_accum
85880     2.6127  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      __module_address
75223     2.2885  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      print_context_stack
66778     2.0316  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      __module_text_address
57389     1.7459  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      is_module_text_address
47282     1.4385  xen-syms-4.1-unstable                                     domain1-xen           syscall_enter
47219     1.4365  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      prio_tree_insert
46495     1.4145  vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug      vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug  pvclock_clocksource_read
44501     1.3539  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      prio_tree_left
32482     0.9882  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug  domain1-kernel      native_read_tsc

I ran oprofile (0.9.5 with the xenoprofile patch) for 20 seconds while the I/Os were running. Here's the command I used:

opcontrol --start --xen=/boot/xen-syms-4.1-unstable --vmlinux=/boot/vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug --passive-domains=1 --passive-images=/boot/vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.11.dcinco-debug

I had to remove dom0_max_vcpus=1 (but kept dom0_vcpus_pin=true) from the Xen command line; otherwise, oprofile only gives samples from CPU0.

I'm going to try perf next.

- Dante
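For comparing runs, it can help to collapse a flat profile like the one above into per-app totals (domain1-kernel vs. domain1-xen vs. dom0). Below is a rough sketch, assuming the whitespace-separated opreport layout shown above (samples, %, image name, app name, symbol name); the aggregate_by_app function name is mine, not part of oprofile:

```shell
# Sum the "%" column per app name in an opreport-style flat profile.
# Assumes whitespace-separated columns: samples, %, image, app, symbol.
aggregate_by_app() {
  awk 'NF >= 5 { pct[$4] += $2 }
       END { for (a in pct) printf "%s %.4f\n", a, pct[a] }' "$1" |
    sort -k2 -rn
}
```

Something like `opreport > profile.txt && aggregate_by_app profile.txt` would then show how the cycles split between the domU kernel, Xen, and dom0.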