From: "Huang, Ying"
To: Jan Kara
Cc: "Huang, Ying", Andrew Morton, Andrea Arcangeli, "Kirill A. Shutemov",
	Nadia Yvette Chambers, Michal Hocko, Matthew Wilcox, Hugh Dickins,
	Minchan Kim, Shaohua Li
Subject: Re: [PATCH -mm] mm: Clear to access sub-page last when clearing huge page
References: <20170807072131.8343-1-ying.huang@intel.com>
	<20170807095515.GA6470@quack2.suse.cz>
Date: Mon, 07 Aug 2017 18:00:12 +0800
In-Reply-To: <20170807095515.GA6470@quack2.suse.cz> (Jan Kara's message of
	"Mon, 7 Aug 2017 11:55:15 +0200")
Message-ID: <8760dzlmtv.fsf@yhuang-mobile.sh.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Jan Kara writes:

> On Mon 07-08-17 15:21:31, Huang, Ying wrote:
>> From: Huang Ying
>>
>> Huge pages help to reduce the TLB miss rate, but they have a larger
>> cache footprint, which can sometimes cause problems.  For example,
>> when clearing a huge page on an x86_64 platform, the cache footprint
>> is 2M.  But a Xeon E5 v3 2699 CPU has 18 cores, 36 threads, and only
>> 45M of LLC (last level cache).  That is, on average there is 2.5M of
>> LLC per core and 1.25M per thread.  If the cache pressure is heavy
>> while the huge page is being cleared, and we clear it from beginning
>> to end, the beginning of the huge page may already have been evicted
>> from the cache by the time we finish clearing its end.  Yet the
>> application may well access the beginning of the huge page right
>> after it has been cleared.
>>
>> To help with this situation, this patch changes the order in which
>> the sub-pages of a huge page are cleared.  In quite a few situations
>> we know the address the application will access after the huge page
>> has been cleared, for example in a page fault handler.  Instead of
>> clearing the huge page from beginning to end, we clear the sub-pages
>> farthest from the to-be-accessed sub-page first, and clear the
>> to-be-accessed sub-page last.  This keeps the to-be-accessed sub-page
>> most cache-hot, and the sub-pages around it more cache-hot too.  If
>> we cannot know the address the application will access, the beginning
>> of the huge page is assumed to be the address the application will
>> access.
>>
>> With this patch, throughput increases ~28.3% in the vm-scalability
>> anon-w-seq test case with 72 processes on a 2-socket Xeon E5 v3 2699
>> system (36 cores, 72 threads).  The test case creates 72 processes;
>> each process mmaps a big anonymous memory area and writes to it from
>> beginning to end.  For each process, the other processes can be seen
>> as a background workload that generates heavy cache pressure.  At the
>> same time, the cache miss rate drops from ~33.4% to ~31.7%, the IPC
>> (instructions per cycle) increases from 0.56 to 0.74, and the time
>> spent in user space is reduced by ~7.9%.
>
> Hum, the improvement looks impressive enough that it is probably worth
> the bother.
> But please add at least a brief explanation of why you do things in
> this more complicated way as a comment in clear_huge_page(), so that
> people don't have to look it up in the changelog.

Good suggestion!  I will do that in the next version.

> Otherwise the patch looks good to me so feel free to add:
>
> Acked-by: Jan Kara

Thanks!

Best Regards,
Huang, Ying

>								Honza
>
>> @@ -4374,9 +4374,31 @@ void clear_huge_page(struct page *page,
>>  	}
>>
>>  	might_sleep();
>> -	for (i = 0; i < pages_per_huge_page; i++) {
>> +	VM_BUG_ON(clamp(addr_hint, addr, addr +
>> +			(pages_per_huge_page << PAGE_SHIFT)) != addr_hint);
>> +	n = (addr_hint - addr) / PAGE_SIZE;
>> +	if (2 * n <= pages_per_huge_page) {
>> +		base = 0;
>> +		l = n;
>> +		for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
>> +			cond_resched();
>> +			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> +		}
>> +	} else {
>> +		base = 2 * n - pages_per_huge_page;
>> +		l = pages_per_huge_page - n;
>> +		for (i = 0; i < base; i++) {
>> +			cond_resched();
>> +			clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> +		}
>> +	}
>> +	for (i = 0; i < l; i++) {
>> +		cond_resched();
>> +		clear_user_highpage(page + base + i,
>> +				    addr + (base + i) * PAGE_SIZE);
>>  		cond_resched();
>> -		clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> +		clear_user_highpage(page + base + 2 * l - 1 - i,
>> +				    addr + (base + 2 * l - 1 - i) * PAGE_SIZE);
>>  	}
>>  }
>>
>> --
>> 2.11.0
>>
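
As an illustration of the clearing order produced by the hunk above, here
is a minimal stand-alone userspace sketch; it is not part of the patch, and
the function name print_clear_order and the sample parameters (8 sub-pages,
target indices 2 and 6) are made up for illustration.  It prints the
sub-page indices in the order they would be cleared, with the to-be-accessed
index printed last:

#include <stdio.h>

/*
 * Print the order in which the sub-page indices of a huge page would be
 * cleared, following the same logic as the clear_huge_page() hunk above.
 * "n" is the index of the sub-page the application is expected to access
 * (derived from addr_hint in the kernel code); it is printed last.
 */
static void print_clear_order(int pages_per_huge_page, int n)
{
	int base, l, i;

	if (2 * n <= pages_per_huge_page) {
		/* Target in the first half: clear the tail of the page first. */
		base = 0;
		l = n;
		for (i = pages_per_huge_page - 1; i >= 2 * n; i--)
			printf("%d ", i);
	} else {
		/* Target in the second half: clear the head of the page first. */
		base = 2 * n - pages_per_huge_page;
		l = pages_per_huge_page - n;
		for (i = 0; i < base; i++)
			printf("%d ", i);
	}
	/*
	 * Converge on the target from both ends of the remaining window
	 * [base, base + 2*l), so that sub-page n is cleared last and its
	 * neighbours are cleared most recently, i.e. stay most cache-hot.
	 */
	for (i = 0; i < l; i++) {
		printf("%d ", base + i);
		printf("%d ", base + 2 * l - 1 - i);
	}
	printf("\n");
}

int main(void)
{
	/* 8 sub-pages keeps the output short; a 2M huge page really has 512. */
	print_clear_order(8, 2);	/* 7 6 5 4 0 3 1 2 */
	print_clear_order(8, 6);	/* 0 1 2 3 4 7 5 6 */
	return 0;
}

For 8 sub-pages and target index 2 this prints "7 6 5 4 0 3 1 2": the far
half of the page is cleared first, then the window around the target is
cleared from both ends inward, so the target sub-page is cleared last and
its neighbours most recently.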