References: <20211026173822.502506-1-pasha.tatashin@soleen.com> <20211026173822.502506-2-pasha.tatashin@soleen.com>
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Tue, 26 Oct 2021 17:34:18 -0400
Subject: Re: [RFC 1/8] mm: add overflow and underflow checks for page->_refcount
To: Matthew Wilcox
Cc: LKML, linux-mm, linux-m68k@lists.linux-m68k.org, Anshuman Khandual, Andrew Morton, william.kucharski@oracle.com, Mike Kravetz, Vlastimil Babka, Geert Uytterhoeven, schmitzmic@gmail.com, Steven Rostedt, Ingo Molnar, Johannes Weiner, Roman Gushchin, songmuchun@bytedance.com, weixugc@google.com, Greg Thelen
On Tue, Oct 26, 2021 at 3:50 PM Matthew Wilcox wrote:
>
> On Tue, Oct 26, 2021 at 05:38:15PM +0000, Pasha Tatashin wrote:
> >  static inline void page_ref_add(struct page *page, int nr)
> >  {
> > -	atomic_add(nr, &page->_refcount);
> > +	int ret;
> > +
> > +	VM_BUG_ON(nr <= 0);
> > +	ret = atomic_add_return(nr, &page->_refcount);
> > +	VM_BUG_ON_PAGE(ret <= 0, page);
>
> This isn't right.  _refcount is allowed to overflow into the negatives.
> See page_ref_zero_or_close_to_overflow() and the conversations that led
> to it being added.

#define page_ref_zero_or_close_to_overflow(page) \
	((unsigned int) page_ref_count(page) + 127u <= 127u)

Uh, right, I saw the macro but did not realize there was an
(unsigned int) cast. OK, I think we can move this macro into
include/linux/page_ref.h and modify it to take the increment into
account, something like this:

#define page_ref_zero_or_close_to_overflow(page, v) \
	((unsigned int) page_ref_count(page) + v + 127u <= v + 127u)

The sub/dec variants can also be fixed to ensure that we do not
underflow, while still working with the fact that we use all 32 bits
of _refcount.

Pasha
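For illustration, the overflow check in the patch hunk above can be modeled in userspace with C11 atomics. Everything here is hypothetical and not the kernel's code: `struct fake_page` and `checked_ref_add()` are stand-ins, and the function returns false where the patch would fire VM_BUG_ON(), so the failure can be observed. Note that C11 defines signed atomic arithmetic to wrap silently, which is what lets the post-add value go negative on overflow, mirroring _refcount.

```c
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical userspace stand-in for struct page; only the refcount. */
struct fake_page {
	atomic_int _refcount;
};

/* Model of the patched page_ref_add(): reject non-positive increments
 * and a post-add value that is zero or negative (i.e. the counter was
 * zero, or the add wrapped past INT_MAX). */
static bool checked_ref_add(struct fake_page *page, int nr)
{
	if (nr <= 0)	/* VM_BUG_ON(nr <= 0) in the patch */
		return false;

	/* atomic_fetch_add returns the old value; C11 guarantees the
	 * atomic add itself wraps. Compute the new value in unsigned
	 * arithmetic to avoid UB, then reinterpret as signed. */
	int old = atomic_fetch_add(&page->_refcount, nr);
	int ret = (int)((unsigned int)old + (unsigned int)nr);

	return ret > 0;	/* VM_BUG_ON_PAGE(ret <= 0, page) in the patch */
}
```

With the counter at INT_MAX, adding 1 wraps to INT_MIN and the check trips, which is exactly the situation Matthew points out the real macro must instead tolerate.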
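The comparison trick in page_ref_zero_or_close_to_overflow() is worth spelling out: casting the signed count to unsigned and adding 127u makes the test true exactly for counts in [-127, 0], i.e. zero or slightly "overflowed". A sketch of both the existing check and the parameterized variant proposed above (function names are made up for this demo; the kernel versions are macros):

```c
#include <stdbool.h>

/* Standalone copy of the existing check: true iff count is in
 * [-127, 0]. For count in that range, the unsigned sum wraps back
 * into [0, 127]; for any other count it lands above 127u. */
static bool zero_or_close_to_overflow(int count)
{
	return (unsigned int)count + 127u <= 127u;
}

/* The generalization with an explicit increment v, as suggested in
 * the reply: widens the window to [-(v + 127), 0], so an upcoming
 * add of v cannot silently carry the count across zero. */
static bool zero_or_close_to_overflow_v(int count, unsigned int v)
{
	return (unsigned int)count + v + 127u <= v + 127u;
}
```

For example, -127 still trips the original check (the sum wraps to exactly 0) while -128 does not; and with v = 1000 the parameterized check flags -1000, which the 127-wide window would miss.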