From: Eric Dumazet
Date: Fri, 6 Jan 2017 09:08:43 -0800
Subject: Re: __GFP_REPEAT usage in fq_alloc_node
To: Vlastimil Babka
Cc: Michal Hocko, linux-mm@kvack.org, LKML

On Fri, Jan 6, 2017 at 8:55 AM, Vlastimil Babka wrote:
> On 01/06/2017 05:48 PM, Eric Dumazet wrote:
>> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>>
>>> I wonder what the cause of the penalty is (when accessing the vmapped
>>> area, I suppose)? Is it a higher risk of cache-collision misses within
>>> the area, compared to consecutive physical addresses?
>>
>> I believe the tests were done with 48 fq qdiscs, each having 2^16 slots,
>> so I had 48 blocks of 524288 bytes (128 pages of 4 KiB each).
>>
>> Trying a bit harder at setup time to get 128 consecutive pages resulted
>> in less TLB pressure.
>
> Hmm, that's rather surprising to me. The TLB caches the page table
> lookups, and the PFNs of the physical pages they translate to shouldn't
> matter - the page tables will look the same. The reduced cache-collision
> miss effect could manifest with 128 consecutive pages, though.

To be clear, the difference came from:

using kmalloc() to allocate 48 x 524288 bytes,

or using vmalloc().

Are you telling me HugePages are not in play there?
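The allocation pattern being discussed is the kmalloc-first, vmalloc-fallback
helper named in the subject line. The body of fq_alloc_node() is not quoted
anywhere in this thread, so the following is a minimal sketch of that shape,
assuming the usual kernel helpers of that era, not the verbatim source:

#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Try physically contiguous memory first (kmalloc_node), and fall back
 * to a virtually contiguous mapping (vmalloc_node) only if that fails.
 * __GFP_REPEAT asks the page allocator to try harder to satisfy the
 * high-order request; __GFP_NOWARN suppresses the allocation-failure
 * warning, since we have a fallback anyway.
 */
static void *fq_alloc_node(size_t sz, int node)
{
	void *ptr;

	ptr = kmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN, node);
	if (!ptr)
		ptr = vmalloc_node(sz, node);
	return ptr;
}

On the kmalloc() path each 524288-byte table is one physically contiguous
128-page block in the kernel's direct mapping, which can be covered by a
single huge-page TLB entry where the architecture maps the direct map with
huge pages; the vmalloc() path needs up to 128 separate 4 KiB translations
per table. That difference is presumably what the HugePages question above
is getting at.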