Subject: Re: [PATCH 1/3] ptr_ring: batch ring zeroing
From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>, linux-kernel@vger.kernel.org
Cc: brouer@redhat.com
Date: Wed, 12 Apr 2017 16:03:13 +0800
Message-ID: <19f56d99-279a-5a8b-39a7-1017a3cb4bdd@redhat.com>
In-Reply-To: <1491544049-19108-1-git-send-email-mst@redhat.com>
References: <1491544049-19108-1-git-send-email-mst@redhat.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 2017-04-07 13:49, Michael S. Tsirkin wrote:
> A known weakness in the ptr_ring design is that it does not handle well the
> situation when the ring is almost full: as entries are consumed they are
> immediately reused by the producer, so consumer and producer end up
> writing to a shared cache line.
>
> To fix this, add batching to consume calls: as entries are
> consumed, do not write NULL into the ring until we get
> a multiple (in the current implementation 2x) of cache lines
> away from the producer. At that point, write them all out.
>
> We do the write-out in reverse order to keep the
> producer from sharing a cache line with the consumer for as long
> as possible.
>
> Write-out also triggers when the ring wraps around - there's
> no special reason to do this, but it helps keep the code
> a bit simpler.
>
> What should we do if getting away from the producer by 2 cache lines
> would mean we are keeping the ring more than half empty?
> Maybe we should reduce the batching in this case;
> the current patch simply reduces the batching.
>
> Notes:
> - it is no longer true that a call to consume guarantees
>   that the following call to produce will succeed.
>   No users seem to assume that.
> - batching can also in theory reduce the signalling rate:
>   users that would previously send interrupts to the producer
>   to wake it up after consuming each entry would now only
>   need to do this once per batch.
>   Doing this would be easy by returning a flag to the caller.
>   No users do signalling on consume yet, so this was not
>   implemented yet.
>
> Signed-off-by: Michael S. Tsirkin
> ---
>
> Jason, I am curious whether the following gives you some of
> the performance boost that you see with the vhost batching
> patches. Is vhost batching on top still helpful?

The patch looks good to me. I will run a test with the vhost batching patches on top.

Thanks
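
[Editor's note: for reference, below is a minimal userspace sketch of the batched
zeroing scheme the commit message describes: the consumer defers writing NULL back
until it is a full batch away from the producer or reaches the end of the ring, then
zeroes the consumed slots in reverse order. Field names (queue, size, batch,
consumer_head, consumer_tail) mirror ptr_ring, but this is an illustration under
those assumptions, not the patch itself; the producer side, locking, and SMP memory
ordering are omitted.]

    #include <stddef.h>

    struct ring {
            void **queue;
            int size;               /* number of slots in queue[] */
            int batch;              /* e.g. two cache lines worth of pointers */
            int consumer_head;      /* next slot the consumer will read */
            int consumer_tail;      /* first consumed slot not yet zeroed */
    };

    /* Consume one entry; NULLs are written back lazily, a batch at a time. */
    static void *ring_consume(struct ring *r)
    {
            int head = r->consumer_head;
            void *ptr = r->queue[head];

            if (!ptr)
                    return NULL;    /* ring is empty at the consumer position */

            head++;
            /*
             * Zero out a whole batch at once, or everything up to the wrap
             * point, so producer and consumer keep writing to different
             * cache lines most of the time.
             */
            if (head - r->consumer_tail >= r->batch || head >= r->size) {
                    int i;

                    /*
                     * Reverse order: the first slot the producer would reuse
                     * (consumer_tail) is released last, so the producer cannot
                     * start overwriting the batch until all of it is zeroed.
                     */
                    for (i = head - 1; i >= r->consumer_tail; i--)
                            r->queue[i] = NULL;
                    r->consumer_tail = head;
            }
            if (head >= r->size) {  /* wrap around */
                    head = 0;
                    r->consumer_tail = 0;
            }
            r->consumer_head = head;
            return ptr;
    }

[In the real patch the batch size would presumably be derived from the cache line
size (on the order of two cache lines of pointers) and clamped for small rings,
matching the trade-off discussed in the commit message above.]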