Message-ID: <51501321.30501@redhat.com>
Date: Mon, 25 Mar 2013 11:04:33 +0200
From: Orit Wasserman
Subject: Re: [Qemu-devel] [PATCHv4 4/9] bitops: use vector algorithm to optimize find_next_bit()
References: <1363956370-23681-1-git-send-email-pl@kamp.de> <1363956370-23681-5-git-send-email-pl@kamp.de>
In-Reply-To: <1363956370-23681-5-git-send-email-pl@kamp.de>
To: Peter Lieven
Cc: Stefan Hajnoczi, Paolo Bonzini, qemu-devel@nongnu.org, quintela@redhat.com

On 03/22/2013 02:46 PM, Peter Lieven wrote:
> this patch adds the usage of buffer_find_nonzero_offset()
> to skip large areas of zeroes.
>
> compared to loop unrolling presented in an earlier
> patch this adds another 50% performance benefit for
> skipping large areas of zeroes. loop unrolling alone
> added close to 100% speedup.
>
> Signed-off-by: Peter Lieven
> Reviewed-by: Eric Blake
> ---
>  util/bitops.c |   24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/util/bitops.c b/util/bitops.c
> index e72237a..9bb61ff 100644
> --- a/util/bitops.c
> +++ b/util/bitops.c
> @@ -42,10 +42,28 @@ unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
>          size -= BITS_PER_LONG;
>          result += BITS_PER_LONG;
>      }
> -    while (size & ~(BITS_PER_LONG-1)) {
> -        if ((tmp = *(p++))) {
> -            goto found_middle;
> +    while (size >= BITS_PER_LONG) {
> +        tmp = *p;
> +        if (tmp) {
> +            goto found_middle;
> +        }
> +        if (can_use_buffer_find_nonzero_offset(p, size / BITS_PER_BYTE)) {
> +            size_t tmp2 =
> +                buffer_find_nonzero_offset(p, size / BITS_PER_BYTE);
> +            result += tmp2 * BITS_PER_BYTE;
> +            size -= tmp2 * BITS_PER_BYTE;
> +            p += tmp2 / sizeof(unsigned long);
> +            if (!size) {
> +                return result;
> +            }
> +            if (tmp2) {
> +                tmp = *p;
> +                if (tmp) {
> +                    goto found_middle;
> +                }
> +            }
>          }
> +        p++;
>          result += BITS_PER_LONG;
>          size -= BITS_PER_LONG;
>      }

Reviewed-by: Orit Wasserman