From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jann Horn, "David S. Miller", Sasha Levin
Subject: [PATCH 4.9 017/118] mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs
Date: Fri, 22 Mar 2019 12:14:49 +0100
Message-Id: <20190322111217.211616862@linuxfoundation.org>
In-Reply-To: <20190322111215.873964544@linuxfoundation.org>
References: <20190322111215.873964544@linuxfoundation.org>

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

[ Upstream commit 2c2ade81741c66082f8211f0b96cf509cc4c0218 ]

The basic idea behind ->pagecnt_bias is: If we pre-allocate the maximum
number of references that we might need to create in the fastpath later,
the bump-allocation fastpath only has to modify the non-atomic bias value
that tracks the number of extra references we hold instead of the atomic
refcount. The maximum number of allocations we can serve (under the
assumption that no allocation is made with size 0) is nc->size, so that's
the bias used.
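
As an aside, here is a minimal, compilable userspace-style sketch of that
scheme. The names (frag_cache, frag_alloc) and the PRECHARGE constant are
illustrative, not the kernel's, and PRECHARGE already uses the corrected
"size + 1" accounting that this patch installs; it only shows why the
per-allocation fastpath needs no atomic refcount traffic.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ   4096u
#define PRECHARGE (PAGE_SZ + 1)	/* corrected pre-charge: size + 1 */

struct frag_cache {
	char *va;			/* backing "page" */
	unsigned int size;		/* usable bytes */
	unsigned int pagecnt_bias;	/* references pre-charged to the page */
	int offset;			/* bump pointer, counts down to 0 */
	atomic_uint refcount;		/* stands in for the struct page refcount */
};

/* One-time setup: take PRECHARGE references and start the bump pointer. */
static void frag_cache_init(struct frag_cache *nc)
{
	nc->va = malloc(PAGE_SZ);
	nc->size = PAGE_SZ;
	atomic_init(&nc->refcount, PRECHARGE);
	nc->pagecnt_bias = PRECHARGE;
	nc->offset = nc->size;
}

static void *frag_alloc(struct frag_cache *nc, unsigned int fragsz)
{
	int offset = nc->offset - (int)fragsz;

	if (offset < 0) {
		/*
		 * Slowpath: return the references we still hold.  Only if
		 * that drops the count to zero (nobody else kept a
		 * reference) may the page be recycled; the real code would
		 * otherwise refill with a fresh page, the sketch just bails.
		 */
		if (atomic_fetch_sub(&nc->refcount, nc->pagecnt_bias)
		    != nc->pagecnt_bias)
			return NULL;
		atomic_init(&nc->refcount, PRECHARGE);
		nc->pagecnt_bias = PRECHARGE;
		offset = nc->size - fragsz;
	}

	/* Fastpath: plain integer math only, no atomic refcount update. */
	nc->pagecnt_bias--;
	nc->offset = offset;
	return nc->va + offset;
}

int main(void)
{
	struct frag_cache nc;
	unsigned long n = 0;

	frag_cache_init(&nc);
	/* 1-byte allocations: the case the changelog below is about. */
	while (frag_alloc(&nc, 1))
		n++;
	printf("served %lu one-byte frags before needing a real refill\n", n);
	return 0;
}

Why PRECHARGE has to be nc->size + 1 rather than nc->size is exactly what
the rest of this changelog explains.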
However, even when all memory in the allocation has been given away, a
reference to the page is still held; and in the `offset < 0` slowpath, the
page may be reused if everyone else has dropped their references.
This means that the necessary number of references is actually
nc->size+1.

Luckily, from a quick grep, it looks like the only path that can call
page_frag_alloc(fragsz=1) is TAP with the IFF_NAPI_FRAGS flag, which
requires CAP_NET_ADMIN in the init namespace and is only intended to be
used for kernel testing and fuzzing.

To test for this issue, put a `WARN_ON(page_ref_count(page) == 0)` in the
`offset < 0` path, below the virt_to_page() call, and then repeatedly call
writev() on a TAP device with IFF_TAP|IFF_NO_PI|IFF_NAPI_FRAGS|IFF_NAPI,
with a vector consisting of 15 elements containing 1 byte each (a
userspace sketch of this reproducer is appended after the patch).

Signed-off-by: Jann Horn
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3af727d95c17..05f141e39ac1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3955,11 +3955,11 @@ refill:
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, size - 1);
+		page_ref_add(page, size);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = size;
+		nc->pagecnt_bias = size + 1;
 		nc->offset = size;
 	}
 
@@ -3975,10 +3975,10 @@ refill:
 			size = nc->size;
 #endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, size);
+		set_page_count(page, size + 1);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = size;
+		nc->pagecnt_bias = size + 1;
 		offset = size - fragsz;
 	}
 
-- 
2.19.1
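
Appendix (not part of the upstream commit): a hedged userspace sketch of
the reproducer described in the changelog. It follows the recipe above --
open a TAP device with IFF_NAPI_FRAGS and repeatedly writev() 15 one-byte
segments -- but the interface name "repro0" is arbitrary, CAP_NET_ADMIN is
required, the IFF_NAPI/IFF_NAPI_FRAGS flags need uapi headers and a kernel
that provide them, and the device may need to be brought up first (e.g.
"ip link set repro0 up"). It is only useful together with the WARN_ON()
instrumentation mentioned above on an unpatched kernel.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
	struct ifreq ifr;
	struct iovec iov[15];
	char bytes[15];
	int fd, i;

	fd = open("/dev/net/tun", O_RDWR);
	if (fd < 0)
		return 1;

	/* Create a TAP device in the fragment-based (NAPI frags) mode. */
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "repro0", IFNAMSIZ - 1);
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI | IFF_NAPI_FRAGS;
	if (ioctl(fd, TUNSETIFF, &ifr) < 0)
		return 1;

	/* One frame per writev(): 15 segments of 1 byte each. */
	memset(bytes, 0, sizeof(bytes));
	for (i = 0; i < 15; i++) {
		iov[i].iov_base = &bytes[i];
		iov[i].iov_len = 1;
	}

	/* Hammer the page_frag_alloc(fragsz=1) path. */
	for (;;)
		if (writev(fd, iov, 15) < 0)
			return 1;
}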