From: Zhu Yanjun <zyjzyj2000@gmail.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>,
Doug Ledford <dledford@redhat.com>,
RDMA mailing list <linux-rdma@vger.kernel.org>,
maorg@nvidia.com
Subject: Re: Fwd: [PATCH 1/1] RDMA/umem: add back hugepage sg list
Date: Sat, 20 Mar 2021 11:38:26 +0800 [thread overview]
Message-ID: <CAD=hENcN8dfD9ZGQ-2in2dUeJ9Wzd2+WFWFbhUgovxwCrETL1A@mail.gmail.com> (raw)
In-Reply-To: <20210319134845.GR2356281@nvidia.com>
On Fri, Mar 19, 2021 at 9:48 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Fri, Mar 19, 2021 at 09:33:13PM +0800, Zhu Yanjun wrote:
> > On Fri, Mar 19, 2021 at 9:01 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > >
> > > On Sat, Mar 13, 2021 at 11:02:41AM +0800, Zhu Yanjun wrote:
> > > > On Fri, Mar 12, 2021 at 10:01 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > > >
> > > > > On Fri, Mar 12, 2021 at 09:49:52PM +0800, Zhu Yanjun wrote:
> > > > > > In short, the sg list from __sg_alloc_table_from_pages is different
> > > > > > from the sg list from ib_umem_add_sg_table.
> > > > >
> > > > > I don't care about different. Tell me what is wrong with what we have
> > > > > today.
> > > > >
> > > > > I thought your first message said the sgl's were too small, but now
> > > > > you seem to say they are too big?
> > > >
> > > > Sure.
> > > >
> > > > In the sg list from __sg_alloc_table_from_pages, the length of each
> > > > sg is too big. And the dma addresses are like the following:
> > > >
> > > > "
> > > > sg_dma_address(sg):0x4b3c1ce000
> > > > sg_dma_address(sg):0x4c3c1cd000
> > > > sg_dma_address(sg):0x4d3c1cc000
> > > > sg_dma_address(sg):0x4e3c1cb000
> > > > "
> > >
> > > Ok, so how does too big a dma segment size cause
> > > __sg_alloc_table_from_pages() to return sg elements that are too
> > > small?
> > >
> > > I assume there is some kind of maths overflow here?
> > Please check this function __sg_alloc_table_from_pages:
> > "
> > ...
> > 457         /* Merge contiguous pages into the last SG */
> > 458         prv_len = prv->length;
> > 459         while (n_pages && page_to_pfn(pages[0]) == paddr) {
> > 460                 if (prv->length + PAGE_SIZE > max_segment)
> >             <-- max_segment is too big, so this break is never taken;
> >             the loop consumes every page, n_pages becomes 0, and the
> >             function goes to the "out" label and exits.
>
> You already said this.
>
> You are reporting 4k pages, if max_segment is larger than 4k there is
> no such thing as "too big"
>
> I assume it is "too small" because of some maths overflow.
459         while (n_pages && page_to_pfn(pages[0]) == paddr) {
460                 if (prv->length + PAGE_SIZE > max_segment)
            <-- if max_segment is big, this break is never taken, so
            n_pages drops to zero.
461                         break;
462                 prv->length += PAGE_SIZE;
463                 paddr++;
464                 pages++;
465                 n_pages--;
466         }
467         if (!n_pages)  <--- here, with n_pages == 0, the function
468                 goto out;   goes to the "out" label.
...
509         chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
510         sg_set_page(s, pages[cur_page],
511                     min_t(unsigned long, size, chunk_size), offset);
            <---- this call is rarely reached when max_segment is big,
            because most pages get merged into the previous SG above.
512         added_nents++;
513         size -= chunk_size;

If max_segment is smaller, for example SZ_2M, sg_set_page will be
called for every SZ_2M of pages instead.

So far I have not found any math overflow.
Zhu Yanjun
>
> You should add some prints and find out what is going on.
>
> Jason