Subject: Re: [mm PATCH v3 1/6] mm: Use mm_zero_struct_page from SPARC on all 64b architectures
From: Alexander Duyck
To: David Laight, 'Pavel Tatashin', Michal Hocko
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, pavel.tatashin@microsoft.com, dave.jiang@intel.com, linux-kernel@vger.kernel.org, willy@infradead.org, davem@davemloft.net, yi.z.zhang@linux.intel.com, khalid.aziz@oracle.com, rppt@linux.vnet.ibm.com, vbabka@suse.cz, sparclinux@vger.kernel.org, dan.j.williams@intel.com, ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net, mingo@kernel.org, kirill.shutemov@linux.intel.com
Date: Wed, 17 Oct 2018 09:31:12 -0700
In-Reply-To: <7d313318f1234a1eb45b608bd853c17c@AcuMS.aculab.com>
References: <20181015202456.2171.88406.stgit@localhost.localdomain> <20181015202656.2171.92963.stgit@localhost.localdomain> <20181017084744.GH18839@dhcp22.suse.cz> <9700b00f-a8a4-e318-f6a8-71fd1e7021b3@linux.intel.com> <8aaa0fa2-5f12-ea3c-a0ca-ded9e1a639e2@gmail.com> <7d313318f1234a1eb45b608bd853c17c@AcuMS.aculab.com>

On 10/17/2018 8:40 AM, David Laight wrote:
> From: Pavel Tatashin
>> Sent: 17 October 2018 16:12
>> On 10/17/18 11:07 AM, Alexander Duyck wrote:
>>> On 10/17/2018 1:47 AM, Michal Hocko wrote:
>>>> On Mon 15-10-18 13:26:56, Alexander Duyck wrote:
>>>>> This change makes it so that we use the same approach that was
>>>>> already in use on Sparc on all the architectures that support a
>>>>> 64b long.
>>>>>
>>>>> This is mostly motivated by the fact that 8 to 10 store/move
>>>>> instructions are likely always going to be faster than having to
>>>>> call into a function that is not specialized for handling page init.
>>>>>
>>>>> An added advantage to doing it this way is that the compiler can get
>>>>> away with combining writes in the __init_single_page call. As a
>>>>> result the memset call will be reduced to only about 4 write
>>>>> operations, or at least that is what I am seeing with GCC 6.2, as
>>>>> the flags, LRU pointers, and count/mapcount seem to be cancelling
>>>>> out at least 4 of the 8 assignments on my system.
>>>>>
>>>>> One change I had to make to the function was to reduce the minimum
>>>>> page size to 56 to support some powerpc64 configurations.
>>>>
>>>> This really begs for numbers. I do not mind the change itself with
>>>> some minor comments below.
>>>>
>>>> [...]
>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>> index bb0de406f8e7..ec6e57a0c14e 100644
>>>>> --- a/include/linux/mm.h
>>>>> +++ b/include/linux/mm.h
>>>>> @@ -102,8 +102,42 @@ static inline void set_max_mapnr(unsigned long limit) { }
>>>>>   * zeroing by defining this macro in .
>>>>>   */
>>>>>  #ifndef mm_zero_struct_page
>>>>
>>>> Do we still need this ifdef? I guess we can wait for an arch which
>>>> doesn't like this change and then add the override. I would rather go
>>>> simple if possible.
>>>
>>> We probably don't, but as soon as I remove it somebody will probably
>>> complain somewhere. I guess I could drop it for now and see if anybody
>>> screams. Adding it back should be pretty straightforward since it
>>> would only be 2 lines.
>>>
>>>>> +#if BITS_PER_LONG == 64
>>>>> +/* This function must be updated when the size of struct page grows above 80
>>>>> + * or reduces below 64. The idea that compiler optimizes out switch()
>>>>> + * statement, and only leaves move/store instructions
>>>>> + */
>>>>> +#define	mm_zero_struct_page(pp) __mm_zero_struct_page(pp)
>>>>> +static inline void __mm_zero_struct_page(struct page *page)
>>>>> +{
>>>>> +	unsigned long *_pp = (void *)page;
>>>>> +
>>>>> +	/* Check that struct page is either 56, 64, 72, or 80 bytes */
>>>>> +	BUILD_BUG_ON(sizeof(struct page) & 7);
>>>>> +	BUILD_BUG_ON(sizeof(struct page) < 56);
>>>>> +	BUILD_BUG_ON(sizeof(struct page) > 80);
>>>>> +
>>>>> +	switch (sizeof(struct page)) {
>>>>> +	case 80:
>>>>> +		_pp[9] = 0;	/* fallthrough */
>>>>> +	case 72:
>>>>> +		_pp[8] = 0;	/* fallthrough */
>>>>> +	default:
>>>>> +		_pp[7] = 0;	/* fallthrough */
>>>>> +	case 56:
>>>>> +		_pp[6] = 0;
>>>>> +		_pp[5] = 0;
>>>>> +		_pp[4] = 0;
>>>>> +		_pp[3] = 0;
>>>>> +		_pp[2] = 0;
>>>>> +		_pp[1] = 0;
>>>>> +		_pp[0] = 0;
>>>>> +	}
>>>>
>>>> This just hit my eyes. I have to confess I have never seen default: to
>>>> be not the last one in the switch. Can we have case 64 instead or does
>>>> gcc complain? I would be surprised with the set of BUILD_BUG_ONs.
>>
>> It was me, C does not really care where default is placed, I was trying
>> to keep stores sequential for better cache locality, but "case 64"
>> should be OK, and even better for this purpose.
>
> You'd need to put memory barriers between them to force sequential stores.
> I'm also surprised that gcc doesn't inline the memset().
>
> 	David

We don't need them to be sequential.
The general idea is we have to fill a given amount of space with zeros. After that we have some calls that are initializing the memory that doesn't have to be zero. Ideally the compiler is smart enough to realize that, since we don't have barriers and we are performing assignments after the assignment of zero, it can just combine the two writes into one and drop the zero assignment.

- Alex
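
[Editorial note: to make the store-combining argument above concrete, here is a minimal userspace sketch of the pattern being described. The struct layout, field names, and the init_one() helper are hypothetical stand-ins chosen only for illustration; they are not the kernel's struct page or __init_single_page().]

/*
 * Sketch of "zero the whole struct, then overwrite a few fields".
 * With optimization enabled and no barrier between the stores, the
 * compiler is allowed to drop the zero stores that are immediately
 * overwritten and emit only the final values.
 */
struct fake_page {
	unsigned long flags;
	unsigned long lru_next;
	unsigned long lru_prev;
	unsigned long mapping;
	unsigned long index;
	unsigned long private_data;
	unsigned long counters;
};				/* 7 words = 56 bytes with 64-bit longs */

static inline void zero_fake_page(struct fake_page *p)
{
	/* Plain word stores, mirroring the proposed helper; the kernel
	 * builds with -fno-strict-aliasing, so the cast is tolerated there. */
	unsigned long *_pp = (unsigned long *)p;

	_pp[6] = 0;
	_pp[5] = 0;
	_pp[4] = 0;
	_pp[3] = 0;
	_pp[2] = 0;
	_pp[1] = 0;
	_pp[0] = 0;
}

void init_one(struct fake_page *p, unsigned long flags, unsigned long index)
{
	zero_fake_page(p);
	p->flags = flags;	/* makes the zero store to flags dead */
	p->index = index;	/* likewise for index */
}

Compiled with something like gcc -O2, the dead zero stores to flags and index can be eliminated, so fewer than the full eight word writes remain, which is the effect described above for __init_single_page().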