From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753362Ab1HHRRt (ORCPT );
	Mon, 8 Aug 2011 13:17:49 -0400
Received: from e2.ny.us.ibm.com ([32.97.182.142]:55648 "EHLO e2.ny.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751921Ab1HHRRs (ORCPT );
	Mon, 8 Aug 2011 13:17:48 -0400
Message-ID: <4E4019E1.2090508@us.ibm.com>
Date: Mon, 08 Aug 2011 10:16:17 -0700
From: Badari Pulavarty
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.16)
	Gecko/20101125 Thunderbird/3.0.11
MIME-Version: 1.0
To: Liu Yuan
CC: Stefan Hajnoczi, "Michael S. Tsirkin", Rusty Russell, Avi Kivity,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Khoa Huynh
Subject: Re: [RFC PATCH]vhost-blk: In-kernel accelerator for virtio block device
References: <1311863346-4338-1-git-send-email-namei.unix@gmail.com>
	<4E325F98.5090308@gmail.com> <4E32F7F2.4080607@us.ibm.com>
	<4E363DB9.70801@gmail.com> <1312495132.9603.4.camel@badari-desktop>
	<4E3BCE4D.7090809@gmail.com> <4E3C302A.3040500@us.ibm.com>
	<4E3F3D4E.70104@gmail.com> <4E3F6E72.1000907@us.ibm.com>
	<4E3F90E3.9080600@gmail.com>
In-Reply-To: <4E3F90E3.9080600@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 8/8/2011 12:31 AM, Liu Yuan wrote:
> On 08/08/2011 01:04 PM, Badari Pulavarty wrote:
>> On 8/7/2011 6:35 PM, Liu Yuan wrote:
>>> On 08/06/2011 02:02 AM, Badari Pulavarty wrote:
>>>> On 8/5/2011 4:04 AM, Liu Yuan wrote:
>>>>> On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
>>>>>> Hi Liu Yuan,
>>>>>>
>>>>>> I started testing your patches. I applied your kernel patch to 3.0
>>>>>> and applied the QEMU patch to the latest git.
>>>>>>
>>>>>> I passed 6 block devices from the host to the guest (4 vcpu, 4GB RAM).
>>>>>> I ran simple "dd" read tests from the guest on all block devices
>>>>>> (with various block sizes, iflag=direct).
>>>>>>
>>>>>> Unfortunately, the system doesn't stay up. I immediately get a
>>>>>> panic on the host. I didn't get time to debug the problem. Wondering
>>>>>> if you have seen this issue before and/or have a new patchset
>>>>>> to try?
>>>>>>
>>>>>> Let me know.
>>>>>>
>>>>>> Thanks,
>>>>>> Badari
>>>>>>
>>>>>
>>>>> Okay, it is actually a bug pointed out by MST on the other thread:
>>>>> the completion thread needs a mutex.
>>>>>
>>>>> Now would you please try this attachment? This patch only applies to
>>>>> the kernel part, on top of the v1 kernel patch.
>>>>>
>>>>> This patch mainly moves the completion thread into the vhost thread
>>>>> as a function. As a result, both request submission and completion
>>>>> signalling happen in the same thread.
>>>>>
>>>>> Yuan
>>>>
>>>> Unfortunately, the "dd" tests (4 out of 6) in the guest hung. I see
>>>> the following messages:
>>>>
>>>> virtio_blk virtio2: requests: id 0 is not a head !
>>>> virtio_blk virtio3: requests: id 1 is not a head !
>>>> virtio_blk virtio5: requests: id 1 is not a head !
>>>> virtio_blk virtio1: requests: id 1 is not a head !
>>>>
>>>> I still see host panics. I will collect the host panic and see if
>>>> it's still the same or not.
>>>>
>>>> Thanks,
>>>> Badari
>>>>
>>>>
>>> Would you please show me how to reproduce it step by step? I tried
>>> dd with two block devices attached, but didn't get a hang or a panic.
>>>
>>> Yuan
>>
>> I did 6 "dd"s on 6 block devices.
>>
>> dd if=/dev/vdb of=/dev/null bs=1M iflag=direct &
>> dd if=/dev/vdc of=/dev/null bs=1M iflag=direct &
>> dd if=/dev/vdd of=/dev/null bs=1M iflag=direct &
>> dd if=/dev/vde of=/dev/null bs=1M iflag=direct &
>> dd if=/dev/vdf of=/dev/null bs=1M iflag=direct &
>> dd if=/dev/vdg of=/dev/null bs=1M iflag=direct &
>>
>> I can reproduce the problem within 3 minutes :(
>>
>> Thanks,
>> Badari
>>
>>
> Ah... I made an embarrassing mistake: I tried to 'free()' a
> kmem_cache object.
>
> Would you please revert the vblk-for-kernel-2 patch and apply the new
> one attached in this letter?
>

Hmm.. my version of the code seems to use kzalloc() for used_info. I
don't have a version that uses kmem_cache_alloc(). Would it be possible
for you to send out a complete patch (with all the fixes applied) for
me to try? That will avoid all the confusion.

Thanks,
Badari