From: Daniel Gomez <dagmcr@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Christoph Hellwig <hch@lst.de>,  Hugh Dickins <hughd@google.com>,
	Chandan Babu R <chandan.babu@oracle.com>,
	 "Darrick J . Wong" <djwong@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	 David Howells <dhowells@redhat.com>,
	Jarkko Sakkinen <jarkko@kernel.org>,
	 Dave Hansen <dave.hansen@linux.intel.com>,
	 Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	 Thomas Zimmermann <tzimmermann@suse.de>,
	David Airlie <airlied@gmail.com>, Daniel Vetter <daniel@ffwll.ch>,
	 Christian Koenig <christian.koenig@amd.com>,
	Huang Rui <ray.huang@amd.com>,
	 Jani Nikula <jani.nikula@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	 Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	intel-gfx@lists.freedesktop.org,
	 dri-devel@lists.freedesktop.org, x86@kernel.org,
	linux-sgx@vger.kernel.org,  linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, keyrings@vger.kernel.org
Subject: Re: disable large folios for shmem file used by xfs xfile
Date: Thu, 11 Jan 2024 22:30:31 +0100	[thread overview]
Message-ID: <CAPsT6hkQixVvvE94Rjop-7jOXi3FOMfv8BOFhxYLWUs906x2CQ@mail.gmail.com> (raw)
In-Reply-To: <ZZ64/F/yeSymOCcI@casper.infradead.org>

On Wed, Jan 10, 2024 at 4:35 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 10, 2024 at 05:28:22PM +0200, Joonas Lahtinen wrote:
> > Quoting Joonas Lahtinen (2024-01-10 17:20:24)
> > > However, we specifically pass "huge=within_size" to vfs_kern_mount when
> > > creating a private tmpfs mount for i915-created allocations.
> > >
> > > Older hardware also had address hashing bugs where 2M-aligned memory
> > > caused a lot of TLB collisions, so we don't enable it unconditionally.
> > >
> > > You can see drivers/gpu/drm/i915/gem/i915_gemfs.c function
> > > i915_gemfs_init for details and references.
> > >
> > > So in short, functionality-wise we should be fine whether 2M pages
> > > become the default or not. If they do, we would probably want an
> > > option that can still prevent them, for performance regression
> > > reasons on older hardware.
> >
> > To maybe write out my concern better:
> >
> > Is there a plan to enable huge pages by default in shmem?
>
> Not in the next kernel release, but eventually the plan is to allow
> arbitrary order folios to be used in shmem.  So you could ask it to create
> a 256kB folio for you, if that's the right size to manage memory in.
>
> How shmem and its various users go about choosing the right size is not
> quite clear to me yet.  Perhaps somebody else will do it before I get
> to it; I have a lot of different sub-projects to work on at the moment,
> and shmem isn't blocking any of them.  And I have a sneaking suspicion
> that more work is needed in the swap code to deal with arbitrary order
> folios, so that's another reason for me to delay looking at this ;-)
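
For reference, the i915 setup described above essentially boils down
to mounting a private tmpfs instance with that option. A rough sketch
(not the actual i915_gemfs_init() code; option handling and cleanup
are simplified, and example_gemfs_init() is only an illustrative
name):

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mount.h>

/*
 * Sketch only: a private tmpfs mount that asks shmem to use huge
 * pages while they stay within i_size, loosely modelled on
 * i915_gemfs_init().
 */
static struct vfsmount *example_gemfs_init(void)
{
	struct file_system_type *type;
	char huge_opt[] = "huge=within_size";

	type = get_fs_type("tmpfs");
	if (!type)
		return ERR_PTR(-ENODEV);

	/* Kernel-internal mount, not exposed in the userspace mount table. */
	return vfs_kern_mount(type, SB_KERNMOUNT, type->name, huge_opt);
}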

I sent large folio support for the shmem write and fallocate paths a
few releases ago. The main problem I faced was an existing upstream
issue with huge pages when seeking holes/data (fstests generic/285
and generic/436). The suggested strategy was to use large folios
opportunistically based on the file size, but that hit the same
problem we currently have with huge pages, which I considered a
regression. We have made some progress upstream on fixing seeking in
huge pages, but it is not finished yet. I can send the patches
tomorrow for further discussion.
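
To make the generic/285 / generic/436 issue more concrete: those
tests boil down to SEEK_HOLE/SEEK_DATA checks on sparse files. A
rough userspace illustration (not the fstests code itself; it assumes
a tmpfs file under /dev/shm and lseek() support for SEEK_HOLE and
SEEK_DATA):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Sparse file: a 4 MiB hole followed by a single byte of data. */
	int fd = open("/dev/shm/seek-test", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0 || pwrite(fd, "x", 1, 4 << 20) != 1)
		return 1;

	/*
	 * Even when a large folio backs the tail of the file, SEEK_HOLE
	 * should still report the hole at offset 0 and SEEK_DATA should
	 * land at (or near) 4 MiB, not somewhere inside the hole.
	 */
	printf("SEEK_DATA from 0 -> %lld\n", (long long)lseek(fd, 0, SEEK_DATA));
	printf("SEEK_HOLE from 0 -> %lld\n", (long long)lseek(fd, 0, SEEK_HOLE));

	close(fd);
	unlink("/dev/shm/seek-test");
	return 0;
}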


Thread overview: 48+ messages
2024-01-10  9:21 disable large folios for shmem file used by xfs xfile Christoph Hellwig
2024-01-10  9:21 ` [PATCH 1/2] mm: add a mapping_clear_large_folios helper Christoph Hellwig
2024-01-10  9:21 ` [PATCH 2/2] xfs: disable large folio support in xfile_create Christoph Hellwig
2024-01-10 17:55   ` Darrick J. Wong
2024-01-10 20:04     ` Darrick J. Wong
2024-01-11 22:00       ` Andrew Morton
2024-01-11 22:45         ` Matthew Wilcox
2024-01-12  2:22           ` Darrick J. Wong
2024-02-08  1:56             ` Andrew Morton
2024-02-08 16:03               ` Darrick J. Wong
2024-01-10 12:37 ` disable large folios for shmem file used by xfs xfile Matthew Wilcox
2024-01-10 15:20   ` Joonas Lahtinen
2024-01-10 15:28     ` Joonas Lahtinen
2024-01-10 15:34       ` Matthew Wilcox
2024-01-11 21:30         ` Daniel Gomez [this message]
2024-01-10 16:18   ` Christoph Hellwig
2024-01-10 14:35 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/2] mm: add a mapping_clear_large_folios helper Patchwork
2024-01-10 14:35 ` ✗ Fi.CI.SPARSE: " Patchwork
2024-01-10 14:54 ` ✗ Fi.CI.BAT: failure " Patchwork
2024-01-10 15:38 ` disable large folios for shmem file used by xfs xfile Andrew Morton
2024-01-10 16:19   ` Christoph Hellwig
