From: Jerome Glisse <jglisse@redhat.com>
To: Jason Gunthorpe <jgg@mellanox.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Jan Kara" <jack@suse.cz>,
	"Felix Kuehling" <Felix.Kuehling@amd.com>,
	"Matthew Wilcox" <mawilcox@microsoft.com>,
	"Ross Zwisler" <zwisler@kernel.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH v4 9/9] RDMA/umem_odp: optimize out the case when a range is updated to read only
Date: Wed, 23 Jan 2019 17:46:40 -0500
Message-ID: <20190123224640.GA1257@redhat.com>
In-Reply-To: <20190123223153.GP8986@mellanox.com>

On Wed, Jan 23, 2019 at 10:32:00PM +0000, Jason Gunthorpe wrote:
> On Wed, Jan 23, 2019 at 05:23:15PM -0500, jglisse@redhat.com wrote:
> > From: Jérôme Glisse <jglisse@redhat.com>
> > 
> > When a range of virtual addresses is updated to read only and the
> > corresponding user ptr object is already read only, it is pointless
> > to do anything. Optimize this case out.
> > 
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Jan Kara <jack@suse.cz>
> > Cc: Felix Kuehling <Felix.Kuehling@amd.com>
> > Cc: Jason Gunthorpe <jgg@mellanox.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Matthew Wilcox <mawilcox@microsoft.com>
> > Cc: Ross Zwisler <zwisler@kernel.org>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Radim Krčmář <rkrcmar@redhat.com>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Ralph Campbell <rcampbell@nvidia.com>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: kvm@vger.kernel.org
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: linux-rdma@vger.kernel.org
> > Cc: linux-fsdevel@vger.kernel.org
> > Cc: Arnd Bergmann <arnd@arndb.de>
> >  drivers/infiniband/core/umem_odp.c | 22 +++++++++++++++++++---
> >  include/rdma/ib_umem_odp.h         |  1 +
> >  2 files changed, 20 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
> > index a4ec43093cb3..fa4e7fdcabfc 100644
> > +++ b/drivers/infiniband/core/umem_odp.c
> > @@ -140,8 +140,15 @@ static void ib_umem_notifier_release(struct mmu_notifier *mn,
> >  static int invalidate_range_start_trampoline(struct ib_umem_odp *item,
> >  					     u64 start, u64 end, void *cookie)
> >  {
> > +	bool update_to_read_only = *((bool *)cookie);
> > +
> >  	ib_umem_notifier_start_account(item);
> > -	item->umem.context->invalidate_range(item, start, end);
> > +	/*
> > +	 * If it is already read only and we are updating to read only then we
> > +	 * do not need to change anything. So save time and skip this one.
> > +	 */
> > +	if (!update_to_read_only || !item->read_only)
> > +		item->umem.context->invalidate_range(item, start, end);
> >  	return 0;
> >  }
> >  
> > @@ -150,6 +157,7 @@ static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn,
> >  {
> >  	struct ib_ucontext_per_mm *per_mm =
> >  		container_of(mn, struct ib_ucontext_per_mm, mn);
> > +	bool update_to_read_only;
> >  
> >  	if (range->blockable)
> >  		down_read(&per_mm->umem_rwsem);
> > @@ -166,10 +174,13 @@ static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn,
> >  		return 0;
> >  	}
> >  
> > +	update_to_read_only = mmu_notifier_range_update_to_read_only(range);
> > +
> >  	return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start,
> >  					     range->end,
> >  					     invalidate_range_start_trampoline,
> > -					     range->blockable, NULL);
> > +					     range->blockable,
> > +					     &update_to_read_only);
> >  }
> >  
> >  static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start,
> > @@ -363,6 +374,9 @@ struct ib_umem_odp *ib_alloc_odp_umem(struct ib_ucontext_per_mm *per_mm,
> >  		goto out_odp_data;
> >  	}
> >  
> > +	/* Assume read only at first; this is updated each time GUP is called. */
> > +	odp_data->read_only = true;
> > +
> >  	odp_data->dma_list =
> >  		vzalloc(array_size(pages, sizeof(*odp_data->dma_list)));
> >  	if (!odp_data->dma_list) {
> > @@ -619,8 +633,10 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
> >  		goto out_put_task;
> >  	}
> >  
> > -	if (access_mask & ODP_WRITE_ALLOWED_BIT)
> > +	if (access_mask & ODP_WRITE_ALLOWED_BIT) {
> > +		umem_odp->read_only = false;
> 
> No locking?

The mmu notifier exclusion will ensure that it is not missed,
i.e. the flag will already be false before any mmu notifier can be
called on a page GUPed with the write flag, which is what matters
here. So locks are unnecessary here.
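
A rough sketch of the ordering this relies on (illustrative only,
not the exact driver code; arguments are elided and the names are
taken from the patch above):

	/* Fault path (ib_umem_odp_map_dma_pages): */
	umem_odp->read_only = false;	/* cleared before GUP */
	get_user_pages_remote(..., flags | FOLL_WRITE, ...);

	/* Invalidation path (invalidate_range_start_trampoline):
	 * a writable page can only be invalidated after the GUP above
	 * has returned it, by which point read_only is already false,
	 * so the skip below never fires for a writable mapping.
	 */
	if (!update_to_read_only || !item->read_only)
		item->umem.context->invalidate_range(item, start, end);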

> 
> >  		flags |= FOLL_WRITE;
> > +	}
> >  
> >  	start_idx = (user_virt - ib_umem_start(umem)) >> page_shift;
> >  	k = start_idx;
> > diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h
> > index 0b1446fe2fab..8256668c6170 100644
> > +++ b/include/rdma/ib_umem_odp.h
> > @@ -76,6 +76,7 @@ struct ib_umem_odp {
> >  	struct completion	notifier_completion;
> >  	int			dying;
> >  	struct work_struct	work;
> > +	bool read_only;
> >  };
> 
> The ib_umem already has a writeable flag. This reflects whether the
> user asked for write permission to be granted. The tracking here is
> whether any remote fault thus far has requested write. Is this an
> important enough difference to justify the new flag?
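
For context, the distinction being asked about, paraphrased rather
than quoted from the structures:

	umem->writeable		set once at registration time; records
				that the user asked for write permission
				to be granted.
	umem_odp->read_only	starts true and is cleared the first time
				a fault GUPs with FOLL_WRITE; tracks
				whether any remote fault so far has
				actually requested write.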

I did that patch a couple of weeks ago and now I do not remember why
I did not use that; I remember thinking about it ... damn, I need to
keep better notes. I will review the code again.

Cheers,
Jérôme

Thread overview: 46+ messages

2019-01-23 22:23 [PATCH v4 0/9] mmu notifier provide context informations jglisse
2019-01-23 22:23 ` [PATCH v4 1/9] mm/mmu_notifier: contextual information for event enums jglisse
2019-01-23 22:23 ` [PATCH v4 2/9] mm/mmu_notifier: contextual information for event triggering invalidation jglisse
2019-01-23 22:23 ` [PATCH v4 3/9] mm/mmu_notifier: use correct mmu_notifier events for each invalidation jglisse
2019-01-23 22:23 ` [PATCH v4 4/9] mm/mmu_notifier: pass down vma and reasons why mmu notifier is happening jglisse
2019-01-23 22:23 ` [PATCH v4 5/9] mm/mmu_notifier: mmu_notifier_range_update_to_read_only() helper jglisse
2019-01-23 22:23 ` [PATCH v4 6/9] gpu/drm/radeon: optimize out the case when a range is updated to read only jglisse
2019-01-23 22:23 ` [PATCH v4 7/9] gpu/drm/amdgpu: " jglisse
2019-01-23 22:23 ` [PATCH v4 8/9] gpu/drm/i915: " jglisse
2019-01-24 12:09   ` Joonas Lahtinen
2019-01-24 15:30     ` Jerome Glisse
2019-01-29 14:20       ` Joonas Lahtinen
2019-01-29 16:21         ` Jerome Glisse
2019-01-23 22:23 ` [PATCH v4 9/9] RDMA/umem_odp: " jglisse
2019-01-23 22:32   ` Jason Gunthorpe
2019-01-23 22:46     ` Jerome Glisse [this message]
2019-01-23 22:54 ` [PATCH v4 0/9] mmu notifier provide context informations Dan Williams
2019-01-23 23:04   ` Jerome Glisse
2019-01-24  0:00     ` Dan Williams
2019-01-31 16:10 ` Jerome Glisse
2019-01-31 19:55   ` Andrew Morton
2019-01-31 20:33     ` Jerome Glisse
2019-02-01 12:24   ` Christian König
2019-02-01 21:02   ` Jan Kara
2019-02-11 18:54     ` Jerome Glisse
