Subject: Re: __GFP_REPEAT usage in fq_alloc_node
From: Vlastimil Babka
To: Eric Dumazet
Cc: Michal Hocko, linux-mm@kvack.org, LKML
Date: Fri, 6 Jan 2017 18:18:15 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

On 01/06/2017 06:08 PM, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 8:55 AM, Vlastimil Babka wrote:
>> On 01/06/2017 05:48 PM, Eric Dumazet wrote:
>>> On Fri, Jan 6, 2017 at 8:31 AM, Vlastimil Babka wrote:
>>>>
>>>> I wonder what's the cause of the penalty (when accessing the vmapped
>>>> area, I suppose). Is it a higher risk of cache collision misses within
>>>> the area, compared to consecutive physical addresses?
>>>
>>> I believe tests were done with 48 fq qdiscs, each having 2^16 slots,
>>> so I had 48 blocks of 524288 bytes.
>>>
>>> Trying a bit harder at setup time to get 128 consecutive pages got
>>> less TLB pressure.
>>
>> Hmm, that's rather surprising to me. The TLB caches page table lookups,
>> and the PFNs of the physical pages it translates to shouldn't matter -
>> the page tables will look the same. The reduced cache collision miss
>> effect could manifest with 128 consecutive pages, though.
>
> To be clear, the difference came from:
>
> using kmalloc() to allocate 48 x 524288 bytes,
>
> or using vmalloc().
>
> Are you telling me HugePages are not in play there?

Oh, that's certainly a difference, as kmalloc() will give you the kernel
linear mapping, which can use 1GB hugepages. But if you just combine these
kmalloc'd chunks into a vmalloc mapping (IIUC that's what your RFC was
doing?), you lose that benefit AFAIK.

On the other hand, I recall reading that AMD Zen will have PTE Coalescing
[1] which, if true and I understand it correctly, would indeed result in
better TLB usage with adjacent page table entries pointing to consecutive
pages. But perhaps the starting PTE's position will also have to be
aligned to make this work, dunno.

[1] http://www.anandtech.com/show/10591/amd-zen-microarchiture-part-2-extracting-instructionlevel-parallelism/6