From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavel Tatashin
Date: Wed, 3 Feb 2021 09:51:32 -0500
Subject: Re: [PATCH v8 02/14] mm/gup: check every subpage of a compound page during isolation
To: Joao Martins
Cc: Jason Gunthorpe, LKML, linux-mm, Andrew Morton, Vlastimil Babka,
	Michal Hocko, David Hildenbrand, Oscar Salvador, Dan Williams,
	Sasha Levin, Tyler Hicks, Joonsoo Kim, Mike Kravetz, Steven Rostedt,
	Ingo Molnar, Peter Zijlstra, Mel Gorman, Matthew Wilcox,
	David Rientjes, John Hubbard, Linux Doc Mailing List, Ira Weiny,
	linux-kselftest@vger.kernel.org, James Morris
References: <20210125194751.1275316-1-pasha.tatashin@soleen.com> <20210125194751.1275316-3-pasha.tatashin@soleen.com> <05a66361-214c-2afe-22e4-12862ea1e4e2@oracle.com>
In-Reply-To: <05a66361-214c-2afe-22e4-12862ea1e4e2@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 3, 2021 at 8:23 AM Joao Martins wrote:
>
> On 1/25/21 7:47 PM, Pavel Tatashin wrote:
> > When pages are
> > isolated in check_and_migrate_movable_pages() we skip a
> > compound number of pages at a time. However, as Jason noted, it is
> > not necessarily correct that pages[i] corresponds to the pages that
> > we skipped. This is because it is possible that the addresses in
> > this range had split_huge_pmd()/split_huge_pud() called on them, and
> > these functions do not update the compound page metadata.
> >
> > The problem can be reproduced if something like this occurs:
> >
> > 1. User faulted huge pages.
> > 2. split_huge_pmd() was called for some reason.
> > 3. User has unmapped some sub-pages in the range.
> > 4. User tries to longterm pin the addresses.
> >
> > The resulting pages[i] might end up containing pages which are not
> > aligned to the compound page size.
> >
> > Fixes: aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge page")
> > Reported-by: Jason Gunthorpe
> > Signed-off-by: Pavel Tatashin
> > Reviewed-by: Jason Gunthorpe
> > ---
>
> [...]
>
> >  	/*
> >  	 * If we get a page from the CMA zone, since we are going to
> >  	 * be pinning these entries, we might as well move them out
> > @@ -1599,8 +1596,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
> >  			}
> >  		}
> >  	}
> > -
> > -	i += step;
> >  }
>

Hi Joao,

> With this, longterm gup will 'regress' for hugetlbfs, e.g. from ~6k -> 32k usecs
> when pinning a 16G hugetlb file.

Is that an estimate, or did you actually measure it?

> Splitting can only occur on THP right?

Yes, I do not think we can split a hugetlb page, only THP.

> If so, perhaps we could retain the @step increment for compound pages but when
> !is_transparent_hugepage(head) or just PageHuge(head) like:
>
> +	if (!is_transparent_hugepage(head) && PageCompound(page))
> +		i += (compound_nr(head) - (pages[i] - head));
>
> Or making it specific to hugetlbfs:
>
> +	if (PageHuge(head))
> +		i += (compound_nr(head) - (pages[i] - head));

Yes, this is a reasonable optimization. I will submit a follow-up patch
against linux-next.
Thank you,
Pasha