From: Linus Torvalds <torvalds@linux-foundation.org>
To: Peter Xu <peterx@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>, Linux-MM <linux-mm@kvack.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Michal Hocko <mhocko@suse.com>,
Kirill Shutemov <kirill@shutemov.name>,
Jann Horn <jannh@google.com>,
Kirill Tkhai <ktkhai@virtuozzo.com>,
Hugh Dickins <hughd@google.com>,
Leon Romanovsky <leonro@nvidia.com>, Jan Kara <jack@suse.cz>,
John Hubbard <jhubbard@nvidia.com>,
Christoph Hellwig <hch@lst.de>,
Andrew Morton <akpm@linux-foundation.org>,
Jason Gunthorpe <jgg@ziepe.ca>,
Andrea Arcangeli <aarcange@redhat.com>
Subject: Re: [PATCH 4/5] mm: Do early cow for pinned pages during fork() for ptes
Date: Wed, 23 Sep 2020 13:25:52 -0700
Message-ID: <CAHk-=whBth_SpXYCmYLiZTRadAvncCDAmK_Kw1QNTg-HS23aKA@mail.gmail.com>
In-Reply-To: <20200923010332.GP19098@xz-x1>
On Tue, Sep 22, 2020 at 6:03 PM Peter Xu <peterx@redhat.com> wrote:
>
> > If we rely on "copy_ret == COPY_MM_BREAK_COW" we can unify "again" and
> > "again_break_cow", we don't need to clear ->cow_new_page, this makes the
> > logic more understandable. To me at least ;)
>
> I see your point. I'll definitely try it out. I think I'll at least use what
> you preferred above since it's actually the same as before, logically. Then
> I'll consider dropping the again_break_cow, as long as I'm still confident,
> after making the change, that nothing leaks :).
So the two patches I sent out to re-organize copy_one_pte() were
literally meant to make all this mess go away.
IOW, the third patch would be something (COMPLETELY UNTESTED) like the attached.
I think the logic for the preallocation is fairly obvious, but it
might be better to allocate a batch of pages for all I know. That
said, I can't really make myself care about the performance of a
fork() after you've pinned pages in it, so..
Linus
[-- Attachment #2: patch --]
[-- Type: application/octet-stream, Size: 2943 bytes --]
mm/memory.c | 38 +++++++++++++++++++++++++++++++-------
1 file changed, 31 insertions(+), 7 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index e315b1f1ef08..524aa7183971 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -773,10 +773,14 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	return 0;
 }
 
-static inline void
+/*
+ * This returns 0 for success, >0 for "success, and I used the prealloc page",
+ * and <0 for "you need to preallocate a page and retry".
+ */
+static inline int
 copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
-		 unsigned long addr, int *rss)
+		 unsigned long addr, int *rss, struct page *prealloc)
 {
 	unsigned long vm_flags = vma->vm_flags;
 	pte_t pte = *src_pte;
@@ -815,6 +819,7 @@ copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	}
 
 	set_pte_at(dst_mm, addr, dst_pte, pte);
+	return 0;
 }
 
 static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -824,16 +829,19 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pte_t *orig_src_pte, *orig_dst_pte;
 	pte_t *src_pte, *dst_pte;
 	spinlock_t *src_ptl, *dst_ptl;
-	int progress = 0;
+	int progress, used_page;
 	int rss[NR_MM_COUNTERS];
 	swp_entry_t entry = (swp_entry_t){0};
+	struct page *prealloc = NULL;
 
 again:
+	progress = 0;
+	used_page = 0;
 	init_rss_vec(rss);
 
 	dst_pte = pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl);
 	if (!dst_pte)
-		return -ENOMEM;
+		goto out_of_memory;
 	src_pte = pte_offset_map(src_pmd, addr);
 	src_ptl = pte_lockptr(src_mm, src_pmd);
 	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
@@ -865,8 +873,12 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			progress += 8;
 			continue;
 		}
-		copy_present_pte(dst_mm, src_mm, dst_pte, src_pte,
-				 vma, addr, rss);
+		/* copy_present_page() may need to have a pre-allocated temporary page */
+		used_page = copy_present_pte(dst_mm, src_mm, dst_pte, src_pte, vma, addr, rss, prealloc);
+		if (used_page < 0)
+			break;
+		if (used_page)
+			prealloc = NULL;
 		progress += 8;
 	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
 
@@ -879,12 +891,24 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 	if (entry.val) {
 		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
+			goto out_of_memory;
+	}
+	/* Did we exit from the pte lock because we needed a new page? */
+	if (used_page < 0) {
+		prealloc = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
+		if (!prealloc)
 			return -ENOMEM;
-		progress = 0;
 	}
 	if (addr != end)
 		goto again;
+
+	if (prealloc)
+		free_unref_page(prealloc);
 	return 0;
+
+out_of_memory:
+	if (prealloc)
+		free_unref_page(prealloc);
+	return -ENOMEM;
 }
 
 static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,