From: Jason Gunthorpe <jgg@nvidia.com>
To: Maor Gottlieb <maorg@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	Doug Ledford <dledford@redhat.com>,
	Christoph Hellwig <hch@lst.de>, Daniel Vetter <daniel@ffwll.ch>,
	David Airlie <airlied@linux.ie>,
	<dri-devel@lists.freedesktop.org>,
	<intel-gfx@lists.freedesktop.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	<linux-kernel@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Roland Scheidegger <sroland@vmware.com>,
	"Tvrtko Ursulin" <tvrtko.ursulin@intel.com>,
	VMware Graphics <linux-graphics-maintainer@vmware.com>
Subject: Re: [PATCH rdma-next v4 4/4] RDMA/umem: Move to allocate SG table from pages
Date: Wed, 30 Sep 2020 08:58:37 -0300	[thread overview]
Message-ID: <20200930115837.GF816047@nvidia.com> (raw)
In-Reply-To: <80c49ff1-52c7-638f-553f-9de8130b188d@nvidia.com>

On Wed, Sep 30, 2020 at 02:53:58PM +0300, Maor Gottlieb wrote:
> 
> On 9/30/2020 2:45 PM, Jason Gunthorpe wrote:
> > On Wed, Sep 30, 2020 at 12:53:21PM +0300, Leon Romanovsky wrote:
> > > On Tue, Sep 29, 2020 at 04:59:29PM -0300, Jason Gunthorpe wrote:
> > > > On Sun, Sep 27, 2020 at 09:46:47AM +0300, Leon Romanovsky wrote:
> > > > > @@ -296,11 +223,17 @@ static struct ib_umem *__ib_umem_get(struct ib_device *device,
> > > > >   			goto umem_release;
> > > > > 
> > > > >   		cur_base += ret * PAGE_SIZE;
> > > > > -		npages   -= ret;
> > > > > -
> > > > > -		sg = ib_umem_add_sg_table(sg, page_list, ret,
> > > > > -			dma_get_max_seg_size(device->dma_device),
> > > > > -			&umem->sg_nents);
> > > > > +		npages -= ret;
> > > > > +		sg = __sg_alloc_table_from_pages(
> > > > > +			&umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
> > > > > +			dma_get_max_seg_size(device->dma_device), sg, npages,
> > > > > +			GFP_KERNEL);
> > > > > +		umem->sg_nents = umem->sg_head.nents;
> > > > > +		if (IS_ERR(sg)) {
> > > > > +			unpin_user_pages_dirty_lock(page_list, ret, 0);
> > > > > +			ret = PTR_ERR(sg);
> > > > > +			goto umem_release;
> > > > > +		}
> > > > >   	}
> > > > > 
> > > > >   	sg_mark_end(sg);
> > > > Does it still need the sg_mark_end?
> > > It is preserved here for correctness, the release logic doesn't rely on
> > > this marker, but it is better to leave it.
> > I mean, my read of __sg_alloc_table_from_pages() is that it already
> > placed it, the final __alloc_table() does it?
> 
> It marks the last allocated sge, but not the last populated sge (with page).

Why are those different?

It looks like the last iteration calls __alloc_table() with an exact
number of sges:

+	if (!prv) {
+		/* Only the last allocation could be less than the maximum */
+		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
+		ret = sg_alloc_table(sgt, table_size, gfp_mask);
+		if (unlikely(ret))
+			return ERR_PTR(ret);
+	}

Jason 
