Date: Mon, 2 Dec 2019 14:53:47 +0800
From: Wei Yang
To: Matthew Wilcox
Cc: Wei Yang, "Kirill A. Shutemov", akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm/page_vma_mapped: page table boundary is already guaranteed
Message-ID: <20191202065347.GA22786@richard>
References: <20191128010321.21730-1-richardw.yang@linux.intel.com> <20191128010321.21730-2-richardw.yang@linux.intel.com> <20191128083143.kwih655snxqa2qnm@box.shutemov.name> <20191128210945.6gtt7wlygsvxip4n@master> <20191128223904.GG20752@bombadil.infradead.org> <20191129083002.GA1669@richard> <20191129111801.GH20752@bombadil.infradead.org>
In-Reply-To: <20191129111801.GH20752@bombadil.infradead.org>

On Fri, Nov 29, 2019 at 03:18:01AM -0800, Matthew Wilcox wrote:
>On Fri, Nov 29, 2019 at 04:30:02PM +0800, Wei Yang wrote:
>> On Thu, Nov 28, 2019 at 02:39:04PM -0800, Matthew Wilcox wrote:
>> >On Thu, Nov 28, 2019 at 09:09:45PM +0000, Wei Yang wrote:
>> >> On Thu, Nov 28, 2019 at 11:31:43AM +0300, Kirill A. Shutemov wrote:
>> >> >On Thu, Nov 28, 2019 at 09:03:21AM +0800, Wei Yang wrote:
>> >> >> The check here is to guarantee that the pvmw->address iteration stays
>> >> >> within one page table boundary. To be specific, the address range
>> >> >> should be within one PMD_SIZE.
>> >> >>
>> >> >> If my understanding is correct, this is already done by the check
>> >> >> above:
>> >> >>
>> >> >>     address >= __vma_address(page, vma) + PMD_SIZE
>> >> >>
>> >> >> The boundary check here seems unnecessary.
>> >> >>
>> >> >> Signed-off-by: Wei Yang
>> >> >
>> >> >NAK.
>> >> >
>> >> >THP can be mapped with PTE not aligned to PMD_SIZE. Consider mremap().
>> >> >
>> >>
>> >> Hi, Kirill
>> >>
>> >> Thanks for your comment during Thanksgiving Day. Happy holiday :-)
>> >>
>> >> I didn't think about this case before, thanks for the reminder. Then I
>> >> tried to understand your concern.
>> >>
>> >> mremap() expands or shrinks a memory mapping. In this case, shrinking is
>> >> probably the concern. Since pvmw->page and pvmw->vma are not changed in
>> >> the loop, the case you mentioned may be that pvmw->page is the head of a
>> >> THP but part of it is unmapped.
>> >
>> >mremap() can also move a mapping, see MREMAP_FIXED.
>>
>> Hi, Matthew
>>
>> Thanks for your comment.
>>
>> I took a look into the MREMAP_FIXED case, but it is still not clear to me
>> when it falls into the situation Kirill mentioned.
>>
>> Per my understanding, moving a mapping is achieved in two steps:
>>
>>   * unmap some range in the old vma if old_len >= new_len
>>   * move the vma
>>
>> If the length doesn't change, we expect a "copy" of the old vma. This
>> doesn't change the THP PMD mapping.
>>
>> So the change still happens in the unmap step, if I am correct.
>>
>> Would you mind giving me more hints on a case where we would hit the
>> situation Kirill mentioned?
>
>Set up a THP mapping.
>Move it to an address which is no longer 2MB aligned.
>Unmap it.
Thanks, Matthew. I got the point, thanks a lot :-)

-- 
Wei Yang
Help you, Help me