From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Huang Ying <ying.huang@intel.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Minchan Kim <minchan@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Nadav Amit <nadav.amit@gmail.com>,
	Hugh Dickins <hughd@google.com>,
	Andi Kleen <andi.kleen@intel.com>,
	"Kirill A . Shutemov" <kirill@shutemov.name>
Subject: Re: [PATCH v2 2/2] mm: Remember young/dirty bit for page migrations
Date: Fri, 5 Aug 2022 12:36:50 -0400	[thread overview]
Message-ID: <Yu1HIiWt9t1IE9RE@xz-m1.local> (raw)
In-Reply-To: <62d52657-12d2-8563-4ead-027480065d9f@redhat.com>

On Fri, Aug 05, 2022 at 02:17:55PM +0200, David Hildenbrand wrote:
> On 04.08.22 22:39, Peter Xu wrote:
> > When page migration happens, we always ignore the young/dirty bit settings
> > in the old pgtable, and mark the page as old in the new page table using
> > either pte_mkold() or pmd_mkold(), keeping the pte clean.
> > 
> > That's fine functionally, but it's not friendly to page reclaim because
> > the page being moved can be actively accessed during the procedure.  Not
> > to mention that hardware setting the young bit can bring quite some
> > overhead on some systems, e.g. x86_64 needs a few hundred nanoseconds to
> > set the bit.  The same slowdown applies to the dirty bit when the memory
> > is first written after page migration.
> > 
> > Actually we can easily remember the A/D bit configuration and recover the
> > information after the page is migrated.  To achieve it, define a new set of
> > bits in the migration swap offset field to cache the A/D bits for old pte.
> > Then when removing/recovering the migration entry, we can recover the A/D
> > bits even if the page changed.
> > 
> > One thing to mention is that here we use max_swapfile_size() to detect
> > how many swp offset bits we have, and we'll only enable this feature if
> > we know the swp offset is big enough to store both the PFN value and the
> > A/D bits.  Otherwise the A/D bits are dropped like before.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  include/linux/swapops.h | 91 +++++++++++++++++++++++++++++++++++++++++
> >  mm/huge_memory.c        | 26 +++++++++++-
> >  mm/migrate.c            |  6 ++-
> >  mm/migrate_device.c     |  4 ++
> >  mm/rmap.c               |  5 ++-
> >  5 files changed, 128 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> > index 1d17e4bb3d2f..34aa448ac6ee 100644
> > --- a/include/linux/swapops.h
> > +++ b/include/linux/swapops.h
> > @@ -8,6 +8,8 @@
> >  
> >  #ifdef CONFIG_MMU
> >  
> > +#include <linux/swapfile.h>
> > +
> >  /*
> >   * swapcache pages are stored in the swapper_space radix tree.  We want to
> >   * get good packing density in that tree, so the index should be dense in
> > @@ -35,6 +37,24 @@
> >  #endif
> >  #define SWP_PFN_MASK			((1UL << SWP_PFN_BITS) - 1)
> >  
> > +/**
> > + * Migration swap entry specific bitfield definitions.
> > + *
> > + * @SWP_MIG_YOUNG_BIT: Whether the page used to have young bit set
> > + * @SWP_MIG_DIRTY_BIT: Whether the page used to have dirty bit set
> > + *
> > + * Note: these bits will be stored in migration entries iff there're enough
> > + * free bits in arch specific swp offset.  By default we'll ignore A/D bits
> > + * when migrating a page.  Please refer to migration_entry_supports_ad()
> > + * for more information.
> > + */
> > +#define SWP_MIG_YOUNG_BIT		(SWP_PFN_BITS)
> > +#define SWP_MIG_DIRTY_BIT		(SWP_PFN_BITS + 1)
> > +#define SWP_MIG_TOTAL_BITS		(SWP_PFN_BITS + 2)
> > +
> > +#define SWP_MIG_YOUNG			(1UL << SWP_MIG_YOUNG_BIT)
> > +#define SWP_MIG_DIRTY			(1UL << SWP_MIG_DIRTY_BIT)
> > +
> >  static inline bool is_pfn_swap_entry(swp_entry_t entry);
> >  
> >  /* Clear all flags but only keep swp_entry_t related information */
> > @@ -265,6 +285,57 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
> >  	return swp_entry(SWP_MIGRATION_WRITE, offset);
> >  }
> >  
> > +/*
> > + * Returns whether the host has large enough swap offset field to support
> > + * carrying over pgtable A/D bits for page migrations.  The result is
> > + * pretty much arch specific.
> > + */
> > +static inline bool migration_entry_supports_ad(void)
> > +{
> > +	/*
> > +	 * max_swapfile_size() returns the max supported swp-offset plus 1.
> > +	 * We can support the migration A/D bits iff the pfn swap entry has
> > +	 * the offset large enough to cover all of them (PFN, A & D bits).
> > +	 */
> > +#ifdef CONFIG_SWAP
> > +	return max_swapfile_size() >= (1UL << SWP_MIG_TOTAL_BITS);
> > +#else
> > +	return false;
> > +#endif
> > +}
> 
> 
> This looks much cleaner to me. It might be helpful to draw an ASCII
> picture of where exactly these bits reside inside the offset.

Yes, that'll be helpful, especially when more bits are defined.  Not sure
how much it'll help for now, but I can definitely do that.

> 
> > +
> > +static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
> > +{
> > +	if (migration_entry_supports_ad())
> 
> Do we maybe want to turn that into a static key and enable it once and
> for all? As Nadav says, the repeated max_swapfile_size() calls/checks
> might be worth optimizing out.

Since there are a few arch-related issues to answer (as replied to Nadav:
both max_swapfile_size() and SWP_MIG_TOTAL_BITS may not be constant), my
current plan is to first attach the const attribute, then leave the other
optimizations for later.

If this were a super hot path I'd probably need to do it in the reverse
order, but hopefully that's fine for this case.

Thanks,

-- 
Peter Xu


