Date: Mon, 23 Mar 2009 20:19:25 -0700
From: Ravikiran G Thirumalai
To: Eric Dumazet
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, shai@scalex86.org, Andrew Morton
Subject: Re: [rfc] [patch 1/2] Process private hash tables for private futexes
Message-ID: <20090324031925.GF7278@localdomain>
In-Reply-To: <49C805E3.4060808@cosmosbay.com>

On Mon, Mar 23, 2009 at 10:57:55PM +0100, Eric Dumazet wrote:
>Ravikiran G Thirumalai a écrit :
>> [...]
>> Hmm! How about
>> a) Reduce hash table size for private futex hash and increase hash table
>>    size for the global hash?
>>
>> OR, better,
>>
>> b) Since it is multiple spinlocks on the same cacheline which is a PITA
>>    here, how about keeping the global table, but just adding a dereference
>>    to each hash slot, and interleaving the adjacent hash buckets between
>>    nodes/cpus? That way, even without losing memory to padding, we avoid
>>    false sharing on cachelines caused by unrelated futexes hashing onto
>>    adjacent hash buckets.
>>
>
>Because of jhash(), futex slots are almost random. No need to try to interleave
>them. Since you have a "cache line" of 4096 bytes, you need almost 4 pages
>per cpu to statistically avoid false sharing.

How did you come up with that number? And are you claiming adjacent cachelines
will never, ever be shared by unrelated futexes in the global hash?