From: Suren Baghdasaryan
Date: Fri, 28 Feb 2020 14:20:27 -0800
Subject: Re: [PATCH v6 3/7] mm: check fatal signal pending of target process
To: Minchan Kim
Cc: Andrew Morton, LKML, linux-mm, linux-api@vger.kernel.org,
    oleksandr@redhat.com, Tim Murray, Daniel Colascione, Sandeep Patil,
    Sonny Rao, Brian Geffon, Michal Hocko, Johannes Weiner, Shakeel Butt,
    John Dias, Joel Fernandes, sj38.park@gmail.com,
    alexander.h.duyck@linux.intel.com, Jann Horn
In-Reply-To: <20200219014433.88424-4-minchan@kernel.org>
References: <20200219014433.88424-1-minchan@kernel.org>
    <20200219014433.88424-4-minchan@kernel.org>

On Tue, Feb 18, 2020 at 5:44 PM Minchan Kim wrote:
>
> Bail out to prevent unnecessary CPU overhead if target process has
> pending fatal signal during (MADV_COLD|MADV_PAGEOUT) operation.
>
> Signed-off-by: Minchan Kim
> ---
>  mm/madvise.c | 29 +++++++++++++++++++++--------
>  1 file changed, 21 insertions(+), 8 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index f29155b8185d..def1507c2030 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -36,6 +36,7 @@
>  struct madvise_walk_private {
>          struct mmu_gather *tlb;
>          bool pageout;
> +        struct task_struct *target_task;
>  };
>
>  /*
> @@ -316,6 +317,10 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>          if (fatal_signal_pending(current))
>                  return -EINTR;
>
> +        if (private->target_task &&
> +                        fatal_signal_pending(private->target_task))
> +                return -EINTR;
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>          if (pmd_trans_huge(*pmd)) {
>                  pmd_t orig_pmd;
> @@ -471,12 +476,14 @@ static const struct mm_walk_ops cold_walk_ops = {
>  };
>
>  static void madvise_cold_page_range(struct mmu_gather *tlb,
> +                        struct task_struct *task,
>                          struct vm_area_struct *vma,
>                          unsigned long addr, unsigned long end)
>  {
>          struct madvise_walk_private walk_private = {
>                  .pageout = false,
>                  .tlb = tlb,
> +                .target_task = task,
>          };
>
>          tlb_start_vma(tlb, vma);
> @@ -484,7 +491,8 @@ static void madvise_cold_page_range(struct mmu_gather *tlb,
>          tlb_end_vma(tlb, vma);
>  }
>
> -static long madvise_cold(struct vm_area_struct *vma,
> +static long madvise_cold(struct task_struct *task,
> +                        struct vm_area_struct *vma,
>                          struct vm_area_struct **prev,
>                          unsigned long start_addr, unsigned long end_addr)
>  {
> @@ -497,19 +505,21 @@ static long madvise_cold(struct vm_area_struct *vma,
>
>          lru_add_drain();
>          tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
> -        madvise_cold_page_range(&tlb, vma, start_addr, end_addr);
> +        madvise_cold_page_range(&tlb, task, vma, start_addr, end_addr);
>          tlb_finish_mmu(&tlb, start_addr, end_addr);
>
>          return 0;
>  }
>
>  static void madvise_pageout_page_range(struct mmu_gather *tlb,
> +                        struct task_struct *task,
>                          struct vm_area_struct *vma,
>                          unsigned long addr, unsigned long end)
>  {
>          struct madvise_walk_private walk_private = {
>                  .pageout = true,
>                  .tlb = tlb,
> +                .target_task = task,
>          };
>
>          tlb_start_vma(tlb, vma);
> @@ -533,7 +543,8 @@ static inline bool can_do_pageout(struct vm_area_struct *vma)
>                  inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
>  }
>
> -static long madvise_pageout(struct vm_area_struct *vma,
> +static long madvise_pageout(struct task_struct *task,
> +                        struct vm_area_struct *vma,
>                          struct vm_area_struct **prev,
>                          unsigned long start_addr, unsigned long end_addr)
>  {
> @@ -549,7 +560,7 @@ static long madvise_pageout(struct vm_area_struct *vma,
>
>          lru_add_drain();
>          tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
> -        madvise_pageout_page_range(&tlb, vma, start_addr, end_addr);
> +        madvise_pageout_page_range(&tlb, task, vma, start_addr, end_addr);
>          tlb_finish_mmu(&tlb, start_addr, end_addr);
>
>          return 0;
> @@ -929,7 +940,8 @@ static int madvise_inject_error(int behavior,
>  #endif
>
>  static long
> -madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
> +madvise_vma(struct task_struct *task, struct vm_area_struct *vma,
> +                struct vm_area_struct **prev,
>                  unsigned long start, unsigned long end, int behavior)
>  {
>          switch (behavior) {
> @@ -938,9 +950,9 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
>          case MADV_WILLNEED:
>                  return madvise_willneed(vma, prev, start, end);
>          case MADV_COLD:
> -                return madvise_cold(vma, prev, start, end);
> +                return madvise_cold(task, vma, prev, start, end);
>          case MADV_PAGEOUT:
> -                return madvise_pageout(vma, prev, start, end);
> +                return madvise_pageout(task, vma, prev, start, end);
>          case MADV_FREE:
>          case MADV_DONTNEED:
>                  return madvise_dontneed_free(vma, prev, start, end, behavior);
> @@ -1140,7 +1152,8 @@ int do_madvise(struct task_struct *target_task, struct mm_struct *mm,
>                  tmp = end;
>
>                  /* Here vma->vm_start <= start < tmp <= (end|vma->vm_end). */
> -                error = madvise_vma(vma, &prev, start, tmp, behavior);
> +                error = madvise_vma(target_task, vma, &prev,
> +                                start, tmp, behavior);
>                  if (error)
>                          goto out;
>                  start = tmp;
> --
> 2.25.0.265.gbab2e86ba0-goog
>

Reviewed-by: Suren Baghdasaryan
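
For readers following along, here is a minimal userspace sketch of the
early-bailout pattern the patch adds to the page-walk callback: check a
fatal condition on both the caller and the target at the start of each
chunk and return -EINTR as soon as either fires. All names below are
hypothetical; this is not kernel code and not part of the patch.

/*
 * Sketch only: the bool pointers stand in for fatal_signal_pending(current)
 * and fatal_signal_pending(target_task) in the real walker.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct walk_private {
	bool pageout;
	const bool *caller_fatal;	/* ~ fatal_signal_pending(current) */
	const bool *target_fatal;	/* ~ fatal_signal_pending(target_task) */
};

static int walk_one_chunk(unsigned long addr, const struct walk_private *priv)
{
	/* Bail out before doing any work on this chunk. */
	if (*priv->caller_fatal)
		return -EINTR;
	if (priv->target_fatal && *priv->target_fatal)
		return -EINTR;

	printf("reclaim hint for chunk at %#lx (pageout=%d)\n",
	       addr, priv->pageout);
	return 0;
}

int main(void)
{
	bool caller_fatal = false, target_fatal = false;
	struct walk_private priv = {
		.pageout = true,
		.caller_fatal = &caller_fatal,
		.target_fatal = &target_fatal,
	};
	unsigned long addr;
	int err = 0;

	for (addr = 0x1000; addr < 0x6000; addr += 0x1000) {
		if (addr == 0x3000)
			target_fatal = true;	/* target killed mid-walk */
		err = walk_one_chunk(addr, &priv);
		if (err)
			break;			/* remaining chunks skipped */
	}
	printf("walk finished with %d\n", err);
	return 0;
}

Since the real check sits at the top of madvise_cold_or_pageout_pte_range(),
it runs once per pmd range, so the extra cost per walk is negligible while
the wasted work after the target dies is bounded to roughly one pmd's worth
of pages.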