From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz,
    Naoya Horiguchi, Davidlohr Bueso, Joonsoo Kim, Michal Hocko,
    "Kirill A . Shutemov", Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH 4.14 07/81] hugetlbfs: on restore reserve error path retain subpool reservation
Date: Thu, 13 Jun 2019 10:32:50 +0200
Message-Id: <20190613075649.595507050@linuxfoundation.org>
In-Reply-To: <20190613075649.074682929@linuxfoundation.org>
References: <20190613075649.074682929@linuxfoundation.org>
X-Mailer: git-send-email 2.22.0
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

[ Upstream commit 0919e1b69ab459e06df45d3ba6658d281962db80 ]

When a huge page is allocated, PagePrivate() is set if the allocation
consumed a reservation.  When freeing a huge page, PagePrivate is
checked.  If set, it indicates the reservation should be restored.
PagePrivate being set at free huge page time mostly happens on error
paths.

When huge page reservations are created, a check is made to determine if
the mapping is associated with an explicitly mounted filesystem.  If so,
pages are also reserved within the filesystem.  The default action when
freeing a huge page is to decrement the usage count in any associated
explicitly mounted filesystem.  However, if the reservation is to be
restored, the reservation/use count within the filesystem should not be
decremented.  Otherwise, a subsequent page allocation and free for the
same mapping location will cause the filesystem usage to go 'negative'.

Filesystem                         Size  Used Avail Use% Mounted on
nodev                              4.0G -4.0M  4.1G    - /opt/hugepool

To fix, when freeing a huge page do not adjust filesystem usage if
PagePrivate() is set to indicate the reservation should be restored.

I did not cc stable as the problem has been around since reserves were
added to hugetlbfs and nobody has noticed.
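For illustration only, here is a minimal user-space sketch, not kernel
code: the struct and helpers below merely model the subpool usage count,
hugepage_subpool_get_pages()/hugepage_subpool_put_pages(), and the
restore_reserve flag derived from PagePrivate().  It shows why skipping
the subpool adjustment on the restore path keeps the count from going
negative.

  #include <stdbool.h>
  #include <stdio.h>

  /* 'used' stands in for the subpool/filesystem usage count. */
  struct subpool { long used; };

  static void get_pages(struct subpool *sp) { sp->used++; } /* reservation created  */
  static void put_pages(struct subpool *sp) { sp->used--; } /* reservation released */

  /* restore_reserve mirrors PagePrivate(): the reservation outlives the page. */
  static void free_huge_page_model(struct subpool *sp, bool restore_reserve)
  {
          if (!restore_reserve)   /* the fix: keep the subpool charge when restoring */
                  put_pages(sp);
  }

  int main(void)
  {
          struct subpool sp = { 0 };

          get_pages(&sp);                   /* reservation charged once           */
          free_huge_page_model(&sp, true);  /* error path frees page, restores it */
          free_huge_page_model(&sp, false); /* later alloc+free consumes it       */

          printf("used = %ld\n", sp.used);  /* 0 with the fix, -1 without it      */
          return 0;
  }

With the restore_reserve check the count returns to zero after the later
allocate/free cycle; without it the same sequence drives it to -1, which
is the 'negative' usage shown in the df output above.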
Link: http://lkml.kernel.org/r/20190328234704.27083-2-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz
Reviewed-by: Naoya Horiguchi
Cc: Davidlohr Bueso
Cc: Joonsoo Kim
Cc: Michal Hocko
Cc: "Kirill A . Shutemov"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/hugetlb.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 741bdde54954..8ca0075a5464 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1271,12 +1271,23 @@ void free_huge_page(struct page *page)
 	ClearPagePrivate(page);
 
 	/*
-	 * A return code of zero implies that the subpool will be under its
-	 * minimum size if the reservation is not restored after page is free.
-	 * Therefore, force restore_reserve operation.
+	 * If PagePrivate() was set on page, page allocation consumed a
+	 * reservation.  If the page was associated with a subpool, there
+	 * would have been a page reserved in the subpool before allocation
+	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
+	 * reservtion, do not call hugepage_subpool_put_pages() as this will
+	 * remove the reserved page from the subpool.
 	 */
-	if (hugepage_subpool_put_pages(spool, 1) == 0)
-		restore_reserve = true;
+	if (!restore_reserve) {
+		/*
+		 * A return code of zero implies that the subpool will be
+		 * under its minimum size if the reservation is not restored
+		 * after page is free.  Therefore, force restore_reserve
+		 * operation.
+		 */
+		if (hugepage_subpool_put_pages(spool, 1) == 0)
+			restore_reserve = true;
+	}
 
 	spin_lock(&hugetlb_lock);
 	clear_page_huge_active(page);
-- 
2.20.1