From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: __GFP_REPEAT usage in fq_alloc_node
From: Vlastimil Babka
To: Eric Dumazet
Cc: Michal Hocko, linux-mm@kvack.org, LKML
Date: Fri, 6 Jan 2017 17:55:23 +0100
Message-ID: <97be60da-72df-ad8f-db03-03f01c392823@suse.cz>
References: <20170106152052.GS5556@dhcp22.suse.cz> <20901069-5eb7-f5ff-0641-078635544531@suse.cz>

On 01/06/2017 05:48 PM, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>
>> I wonder what's the cause of the penalty (when accessing the vmapped
>> area, I suppose)? Is it a higher risk of cache collision misses within
>> the area, compared to consecutive physical addresses?
>
> I believe tests were done with 48 fq qdiscs, each having 2^16 slots.
> So I had 48 blocks of 524288 bytes.
>
> Trying a bit harder at setup time to get 128 consecutive pages got
> less TLB pressure.

Hmm, that's rather surprising to me. The TLB caches the page table
lookups, and the PFNs of the physical pages they translate to shouldn't
matter - the page tables will look the same either way. The reduced
cache collision misses could manifest with 128 physically consecutive
pages, though.
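
For reference, the allocation pattern being discussed is the one in
fq_alloc_node() in net/sched/sch_fq.c. A minimal sketch of it, from
memory of the code around this time (paraphrased, not a verbatim copy),
looks like this:

#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *fq_alloc_node(size_t sz, int node)
{
	void *ptr;

	/*
	 * Try physically contiguous memory first. __GFP_REPEAT asks the
	 * page allocator to retry harder, and __GFP_NOWARN suppresses
	 * the allocation-failure warning since there is a fallback.
	 */
	ptr = kmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN, node);
	if (!ptr)
		/* Fall back to a vmapped, physically scattered area. */
		ptr = vmalloc_node(sz, node);
	return ptr;
}

With 2^16 slots of 8-byte pointers, sz is 524288 bytes, i.e. 128
contiguous 4 KiB pages when the kmalloc_node() path succeeds - which
matches the numbers quoted above. The vmalloc_node() fallback returns
the vmapped, physically scattered area whose access penalty the thread
is trying to explain.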