From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matteo Croce
Date: Sun, 6 Jun 2021 03:50:54 +0200
Subject: Re: [PATCH net-next v7 1/5] mm: add a signature in struct page
To: Matthew Wilcox
Cc: netdev@vger.kernel.org, linux-mm@kvack.org, Ayush Sawal,
    Vinay Kumar Yadav, Rohit Maheshwari, "David S. Miller",
    Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
    Mirko Lindner, Stephen Hemminger, Tariq Toukan,
    Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
    Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
    Andrew Morton, "Peter Zijlstra (Intel)", Vlastimil Babka, Yu Zhao,
    Will Deacon, Fenghua Yu, Roman Gushchin, Hugh Dickins, Peter Xu,
    Jason Gunthorpe, Jonathan Lemon, Alexander Lobakin, Cong Wang,
    wenxu, Kevin Hao, Jakub Sitnicki, Marco Elver, Willem de Bruijn,
    Miaohe Lin, Yunsheng Lin, Guillaume Nault,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    bpf@vger.kernel.org, Eric Dumazet, David Ahern, Lorenzo Bianconi,
    Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen
References: <20210604183349.30040-1-mcroce@linux.microsoft.com>
    <20210604183349.30040-2-mcroce@linux.microsoft.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jun 5, 2021 at 4:32 PM Matthew Wilcox wrote:
>
> On Sat, Jun 05, 2021 at 12:59:50AM +0200, Matteo Croce wrote:
> > On Fri, Jun 4, 2021 at 9:08 PM Matthew Wilcox wrote:
> > >
> > > On Fri, Jun 04, 2021 at 08:33:45PM +0200, Matteo Croce wrote:
> > > > @@ -130,7 +137,10 @@ struct page {
> > > >  			};
> > > >  		};
> > > >  		struct {	/* Tail pages of compound page */
> > > > -			unsigned long compound_head;	/* Bit zero is set */
> > > > +			/* Bit zero is set
> > > > +			 * Bit one if pfmemalloc page
> > > > +			 */
> > > > +			unsigned long compound_head;
> > >
> > > I would drop this hunk.  Bit 1 is not used for this purpose in tail
> > > pages; it's used for that purpose in head and base pages.
> > >
> > > I suppose we could do something like ...
> > >
> > > static inline void set_page_pfmemalloc(struct page *page)
> > > {
> > > -	page->index = -1UL;
> > > +	page->lru.next = (void *)2;
> > > }
> > >
> > > if it's causing confusion.
> > >

And change all the *_pfmemalloc functions to use page->lru.next like this?

@@ -1668,10 +1668,12 @@ struct address_space *page_mapping(struct page *page);

 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * This is not a tail page; compound_head of a head page is unused
+	 * at return from the page allocator, and will be overwritten
+	 * by callers who do not care whether the page came from the
+	 * reserves.
 	 */
-	return page->index == -1UL;
+	return (uintptr_t)page->lru.next & BIT(1);
 }

 /*
@@ -1680,12 +1682,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->lru.next = (void *)BIT(1);
 }

 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->lru.next = NULL;
 }

-- 
per aspera ad upstream
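For readers following the thread from outside mm/, the encoding under discussion is the classic tagged-pointer trick: a word-aligned pointer always has its low bits clear, so spare bits can carry flags. A minimal userspace sketch of that trick, using a made-up `struct fake_page` (a stand-in for the kernel's `struct page`, not kernel code) and a local `BIT()` macro:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BIT(n) (1UL << (n))

/*
 * Userspace stand-in for the two struct page fields involved: lru.next
 * overlays compound_head in the real union, so one pointer-sized word
 * is shared between the "list pointer" and "tail page encoding" uses.
 */
struct fake_page {
	union {
		struct {
			struct fake_page *next;	/* models page->lru.next */
		} lru;
		unsigned long compound_head;	/* bit 0 marks a tail page */
	};
};

/*
 * Real list pointers are word-aligned, so their low bits are always
 * zero: bit 0 stays reserved for the tail-page marker, leaving bit 1
 * free to carry the pfmemalloc flag on head and base pages.
 */
static inline bool page_is_pfmemalloc(const struct fake_page *page)
{
	return (uintptr_t)page->lru.next & BIT(1);
}

static inline void set_page_pfmemalloc(struct fake_page *page)
{
	page->lru.next = (void *)BIT(1);
}

static inline void clear_page_pfmemalloc(struct fake_page *page)
{
	page->lru.next = NULL;
}
```

Note that bit 0 of the shared word stays clear in both states, so a page marked pfmemalloc this way is never mistaken for a tail page.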