linux-kernel.vger.kernel.org archive mirror
* [PATCH net] RDMA/umem: add a schedule point in ib_umem_get()
@ 2020-07-30  1:57 Eric Dumazet
From: Eric Dumazet @ 2020-07-30  1:57 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: linux-kernel, Eric Dumazet, Eric Dumazet, linux-rdma

Mapping as little as 64GB can take more than 10 seconds,
triggering issues on kernels with CONFIG_PREEMPT_NONE=y.

ib_umem_get() already splits the work into 2MB units on x86_64,
so adding a cond_resched() in the long-running loop is enough
to solve the issue.

Note that sg_alloc_table() can still take more than 100 ms,
which is also problematic. This might be addressed later
in ib_umem_add_sg_table() by adding new blocks to the sgl
on demand.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: linux-rdma@vger.kernel.org
---
 drivers/infiniband/core/umem.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 82455a1392f1d19c96ae956f0bd4e93e3a52d29c..831bff8d52e547834e9e04064127fbb280595126 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
+		cond_resched();
 		ret = pin_user_pages_fast(cur_base,
 					  min_t(unsigned long, npages,
 						PAGE_SIZE /
-- 
2.28.0.rc0.142.g3c755180ce-goog



* Re: [PATCH net] RDMA/umem: add a schedule point in ib_umem_get()
From: Jason Gunthorpe @ 2020-07-31 17:17 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Doug Ledford, linux-kernel, Eric Dumazet, linux-rdma

On Wed, Jul 29, 2020 at 06:57:55PM -0700, Eric Dumazet wrote:
> Mapping as little as 64GB can take more than 10 seconds,
> triggering issues on kernels with CONFIG_PREEMPT_NONE=y.
> 
> ib_umem_get() already splits the work into 2MB units on x86_64,
> so adding a cond_resched() in the long-running loop is enough
> to solve the issue.
> 
> Note that sg_alloc_table() can still take more than 100 ms,
> which is also problematic. This might be addressed later
> in ib_umem_add_sg_table() by adding new blocks to the sgl
> on demand.

I have seen some patches in progress to do exactly this; the
motivation is to reduce memory consumption when a lot of pages are
combined.

> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Doug Ledford <dledford@redhat.com>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: linux-rdma@vger.kernel.org
> ---
>  drivers/infiniband/core/umem.c | 1 +
>  1 file changed, 1 insertion(+)

Why [PATCH net] ?

Anyhow, applied to rdma for-next

Thanks,
Jason


* Re: [PATCH net] RDMA/umem: add a schedule point in ib_umem_get()
From: Eric Dumazet @ 2020-07-31 17:21 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Doug Ledford, linux-kernel, Eric Dumazet, linux-rdma

On Fri, Jul 31, 2020 at 10:17 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Wed, Jul 29, 2020 at 06:57:55PM -0700, Eric Dumazet wrote:
> > Mapping as little as 64GB can take more than 10 seconds,
> > triggering issues on kernels with CONFIG_PREEMPT_NONE=y.
> >
> > ib_umem_get() already splits the work into 2MB units on x86_64,
> > so adding a cond_resched() in the long-running loop is enough
> > to solve the issue.
> >
> > Note that sg_alloc_table() can still take more than 100 ms,
> > which is also problematic. This might be addressed later
> > in ib_umem_add_sg_table() by adding new blocks to the sgl
> > on demand.
>
> I have seen some patches in progress to do exactly this; the
> motivation is to reduce memory consumption when a lot of pages are
> combined.

Nice ;)

>
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > Cc: Doug Ledford <dledford@redhat.com>
> > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> > Cc: linux-rdma@vger.kernel.org
> > ---
> >  drivers/infiniband/core/umem.c | 1 +
> >  1 file changed, 1 insertion(+)
>
> Why [PATCH net] ?

Sorry, I used a script that I normally use for net submissions and
forgot to remove this tag ;)

>
> Anyhow, applied to rdma for-next

Thanks!

>
> Thanks,
> Jason

