From: Suren Baghdasaryan
Date: Sat, 28 Aug 2021 14:59:16 -0700
Subject: Re: [PATCH v8 1/3] mm: rearrange madvise code to allow for reuse
To: Cyrill Gorcunov
Cc: Andrew Morton, Colin Cross, Sumit Semwal, Michal Hocko, Dave Hansen, Kees Cook, Matthew Wilcox, "Kirill A . Shutemov", Vlastimil Babka, Johannes Weiner, Jonathan Corbet, Al Viro, Randy Dunlap, Kalesh Singh, Peter Xu, rppt@kernel.org, Peter Zijlstra, Catalin Marinas, vincenzo.frascino@arm.com, Chinwen Chang (張錦文), Axel Rasmussen, Andrea Arcangeli, Jann Horn, apopple@nvidia.com, John Hubbard, Yu Zhao, Will Deacon, fenghua.yu@intel.com, thunder.leizhen@huawei.com, Hugh Dickins, feng.tang@intel.com, Jason Gunthorpe, Roman Gushchin, Thomas Gleixner, krisman@collabora.com, chris.hyser@oracle.com, Peter Collingbourne, "Eric W. Biederman", Jens Axboe, legion@kernel.org, eb@emlix.com, Muchun Song, Viresh Kumar, Thomas Cedeno, sashal@kernel.org, cxfcosmos@gmail.com, Rasmus Villemoes, LKML, linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm, kernel-team, Pekka Enberg, Ingo Molnar, Oleg Nesterov, Jan Glauber, John Stultz, Rob Landley, "Serge E. Hallyn", David Rientjes, Rik van Riel, Mel Gorman, Michel Lespinasse, Tang Chen, Robin Holt, Shaohua Li, Sasha Levin, Minchan Kim
References: <20210827191858.2037087-1-surenb@google.com> <20210827191858.2037087-2-surenb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Aug 28, 2021 at 9:19 AM Cyrill Gorcunov wrote:
>
> On Fri, Aug 27, 2021 at 12:18:56PM -0700, Suren Baghdasaryan wrote:
> ...
> >
> > +/*
> > + * Apply an madvise behavior to a region of a vma. madvise_update_vma
> > + * will handle splitting a vm area into separate areas, each area with its own
> > + * behavior.
> > + */
> > +static int madvise_vma_behavior(struct vm_area_struct *vma,
> > +				struct vm_area_struct **prev,
> > +				unsigned long start, unsigned long end,
> > +				unsigned long behavior)
> > +{
> > +	int error = 0;
>
> Hi Suren! A nitpick -- this variable is never used with its default value,
> so I think we could drop the assignment here.
> ...
> > +	case MADV_DONTFORK:
> > +		new_flags |= VM_DONTCOPY;
> > +		break;
> > +	case MADV_DOFORK:
> > +		if (vma->vm_flags & VM_IO) {
> > +			error = -EINVAL;
>
> We can exit early here, without jumping to the end of the function, right?
>
> > +			goto out;
> > +		}
> > +		new_flags &= ~VM_DONTCOPY;
> > +		break;
> > +	case MADV_WIPEONFORK:
> > +		/* MADV_WIPEONFORK is only supported on anonymous memory. */
> > +		if (vma->vm_file || vma->vm_flags & VM_SHARED) {
> > +			error = -EINVAL;
>
> And here too.
>
> > +			goto out;
> > +		}
> > +		new_flags |= VM_WIPEONFORK;
> > +		break;
> > +	case MADV_KEEPONFORK:
> > +		new_flags &= ~VM_WIPEONFORK;
> > +		break;
> > +	case MADV_DONTDUMP:
> > +		new_flags |= VM_DONTDUMP;
> > +		break;
> > +	case MADV_DODUMP:
> > +		if (!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL) {
> > +			error = -EINVAL;
>
> Same.
>
> > +			goto out;
> > +		}
> > +		new_flags &= ~VM_DONTDUMP;
> > +		break;
> > +	case MADV_MERGEABLE:
> > +	case MADV_UNMERGEABLE:
> > +		error = ksm_madvise(vma, start, end, behavior, &new_flags);
> > +		if (error)
> > +			goto out;
> > +		break;
> > +	case MADV_HUGEPAGE:
> > +	case MADV_NOHUGEPAGE:
> > +		error = hugepage_madvise(vma, &new_flags, behavior);
> > +		if (error)
> > +			goto out;
> > +		break;
> > +	}
> > +
> > +	error = madvise_update_vma(vma, prev, start, end, new_flags);
> > +
> > +out:
>
> I suppose we'd better keep the former comment on why we map ENOMEM to EAGAIN?

Thanks for the review, Cyrill! The proposed changes sound good to me. Will make them in the next revision.
Suren.

>
> 	Cyrill