* [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
@ 2023-03-24 10:32 Linus Walleij
  2023-03-24 12:27 ` kernel test robot
  2023-03-24 13:59 ` Jason Gunthorpe
  0 siblings, 2 replies; 9+ messages in thread
From: Linus Walleij @ 2023-03-24 10:32 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky, Bernard Metzler
  Cc: linux-rdma, Linus Walleij

Like the other calls in this function virt_to_page() expects
a pointer, not an integer.

However since many architectures implement virt_to_pfn() as
a macro, this function becomes polymorphic and accepts both a
(unsigned long) and a (void *).

Fix this up with an explicit cast.
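
Many architectures define virt_to_page() along these lines (a rough sketch
of the common macro pattern, not any one architecture's exact code):

	#define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
	#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))

Since __pa() begins by casting its argument to an unsigned long, the macro
chain silently accepts either an integer or a pointer, which is the
polymorphism described above.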

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index b10aa1580a64..5c90d83002f0 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -213,7 +213,7 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
 static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
-	struct page *page = virt_to_page(iova & mr->page_mask);
+	struct page *page = virt_to_page((void *)(iova & mr->page_mask));
 	bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
 	int err;
 
@@ -288,7 +288,7 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
 	u8 *va;
 
 	while (length) {
-		page = virt_to_page(iova & mr->page_mask);
+		page = virt_to_page((void *)(iova & mr->page_mask));
 		bytes = min_t(unsigned int, length,
 				PAGE_SIZE - page_offset);
 		va = kmap_local_page(page);
@@ -488,7 +488,7 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
 
 	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
 		page_offset = iova & (PAGE_SIZE - 1);
-		page = virt_to_page(iova & PAGE_MASK);
+		page = virt_to_page((void *)(iova & PAGE_MASK));
 	} else {
 		unsigned long index;
 		int err;
@@ -545,7 +545,7 @@ int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
 
 	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
 		page_offset = iova & (PAGE_SIZE - 1);
-		page = virt_to_page(iova & PAGE_MASK);
+		page = virt_to_page((void *)(iova & PAGE_MASK));
 	} else {
 		unsigned long index;
 		int err;
-- 
2.34.1


* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-24 10:32 [PATCH] RDMA/rxe: Pass a pointer to virt_to_page() Linus Walleij
@ 2023-03-24 12:27 ` kernel test robot
  2023-03-24 13:59 ` Jason Gunthorpe
  1 sibling, 0 replies; 9+ messages in thread
From: kernel test robot @ 2023-03-24 12:27 UTC (permalink / raw)
  To: Linus Walleij, Jason Gunthorpe, Leon Romanovsky, Bernard Metzler
  Cc: oe-kbuild-all, linux-rdma, Linus Walleij

Hi Linus,

I love your patch! Perhaps something to improve:

[auto build test WARNING on rdma/for-next]
[also build test WARNING on linus/master v6.3-rc3 next-20230324]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Linus-Walleij/RDMA-rxe-Pass-a-pointer-to-virt_to_page/20230324-183329
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git for-next
patch link:    https://lore.kernel.org/r/20230324103252.712107-1-linus.walleij%40linaro.org
patch subject: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
config: parisc-allyesconfig (https://download.01.org/0day-ci/archive/20230324/202303242000.HmTaa6yB-lkp@intel.com/config)
compiler: hppa-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/f162d87e28eb5241d1a0e2085b80bfc70f626536
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Linus-Walleij/RDMA-rxe-Pass-a-pointer-to-virt_to_page/20230324-183329
        git checkout f162d87e28eb5241d1a0e2085b80bfc70f626536
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=parisc olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=parisc SHELL=/bin/bash drivers/infiniband/

If you fix the issue, kindly add the following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202303242000.HmTaa6yB-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from arch/parisc/include/asm/page.h:181,
                    from include/linux/mm_types_task.h:16,
                    from include/linux/mm_types.h:5,
                    from include/linux/mmzone.h:22,
                    from include/linux/gfp.h:7,
                    from include/linux/xarray.h:15,
                    from include/linux/list_lru.h:14,
                    from include/linux/fs.h:13,
                    from include/linux/highmem.h:5,
                    from include/linux/bvec.h:10,
                    from include/linux/blk_types.h:10,
                    from include/linux/bio.h:10,
                    from include/linux/libnvdimm.h:14,
                    from drivers/infiniband/sw/rxe/rxe_mr.c:7:
   drivers/infiniband/sw/rxe/rxe_mr.c: In function 'rxe_set_page':
>> drivers/infiniband/sw/rxe/rxe_mr.c:216:42: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     216 |         struct page *page = virt_to_page((void *)(iova & mr->page_mask));
         |                                          ^
   include/asm-generic/memory_model.h:18:46: note: in definition of macro '__pfn_to_page'
      18 | #define __pfn_to_page(pfn)      (mem_map + ((pfn) - ARCH_PFN_OFFSET))
         |                                              ^~~
   arch/parisc/include/asm/page.h:179:45: note: in expansion of macro '__pa'
     179 | #define virt_to_page(kaddr)     pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
         |                                             ^~~~
   drivers/infiniband/sw/rxe/rxe_mr.c:216:29: note: in expansion of macro 'virt_to_page'
     216 |         struct page *page = virt_to_page((void *)(iova & mr->page_mask));
         |                             ^~~~~~~~~~~~
   drivers/infiniband/sw/rxe/rxe_mr.c: In function 'rxe_mr_copy_dma':
   drivers/infiniband/sw/rxe/rxe_mr.c:291:37: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     291 |                 page = virt_to_page((void *)(iova & mr->page_mask));
         |                                     ^
   include/asm-generic/memory_model.h:18:46: note: in definition of macro '__pfn_to_page'
      18 | #define __pfn_to_page(pfn)      (mem_map + ((pfn) - ARCH_PFN_OFFSET))
         |                                              ^~~
   arch/parisc/include/asm/page.h:179:45: note: in expansion of macro '__pa'
     179 | #define virt_to_page(kaddr)     pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
         |                                             ^~~~
   drivers/infiniband/sw/rxe/rxe_mr.c:291:24: note: in expansion of macro 'virt_to_page'
     291 |                 page = virt_to_page((void *)(iova & mr->page_mask));
         |                        ^~~~~~~~~~~~
   drivers/infiniband/sw/rxe/rxe_mr.c: In function 'rxe_mr_do_atomic_op':
   drivers/infiniband/sw/rxe/rxe_mr.c:491:37: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     491 |                 page = virt_to_page((void *)(iova & PAGE_MASK));
         |                                     ^
   include/asm-generic/memory_model.h:18:46: note: in definition of macro '__pfn_to_page'
      18 | #define __pfn_to_page(pfn)      (mem_map + ((pfn) - ARCH_PFN_OFFSET))
         |                                              ^~~
   arch/parisc/include/asm/page.h:179:45: note: in expansion of macro '__pa'
     179 | #define virt_to_page(kaddr)     pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
         |                                             ^~~~
   drivers/infiniband/sw/rxe/rxe_mr.c:491:24: note: in expansion of macro 'virt_to_page'
     491 |                 page = virt_to_page((void *)(iova & PAGE_MASK));
         |                        ^~~~~~~~~~~~


vim +216 drivers/infiniband/sw/rxe/rxe_mr.c

   212	
   213	static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
   214	{
   215		struct rxe_mr *mr = to_rmr(ibmr);
 > 216		struct page *page = virt_to_page((void *)(iova & mr->page_mask));
   217		bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
   218		int err;
   219	
   220		if (persistent && !is_pmem_page(page)) {
   221			rxe_dbg_mr(mr, "Page cannot be persistent\n");
   222			return -EINVAL;
   223		}
   224	
   225		if (unlikely(mr->nbuf == mr->num_buf))
   226			return -ENOMEM;
   227	
   228		err = xa_err(xa_store(&mr->page_list, mr->nbuf, page, GFP_KERNEL));
   229		if (err)
   230			return err;
   231	
   232		mr->nbuf++;
   233		return 0;
   234	}
   235	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-24 10:32 [PATCH] RDMA/rxe: Pass a pointer to virt_to_page() Linus Walleij
  2023-03-24 12:27 ` kernel test robot
@ 2023-03-24 13:59 ` Jason Gunthorpe
  2023-03-29 14:28   ` Linus Walleij
  1 sibling, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2023-03-24 13:59 UTC (permalink / raw)
  To: Linus Walleij; +Cc: Leon Romanovsky, Bernard Metzler, linux-rdma

On Fri, Mar 24, 2023 at 11:32:52AM +0100, Linus Walleij wrote:
> Like the other calls in this function virt_to_page() expects
> a pointer, not an integer.
> 
> However since many architectures implement virt_to_pfn() as
> a macro, this function becomes polymorphic and accepts both a
> (unsigned long) and a (void *).
> 
> Fix this up with an explicit cast.
> 
> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
> ---
>  drivers/infiniband/sw/rxe/rxe_mr.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
> index b10aa1580a64..5c90d83002f0 100644
> --- a/drivers/infiniband/sw/rxe/rxe_mr.c
> +++ b/drivers/infiniband/sw/rxe/rxe_mr.c
> @@ -213,7 +213,7 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
>  static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
>  {

All these functions have the wrong names, they are kva not IOVA.


> @@ -288,7 +288,7 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
>  	u8 *va;

>  	while (length) {
> -		page = virt_to_page(iova & mr->page_mask);
> +		page = virt_to_page((void *)(iova & mr->page_mask));
>  		bytes = min_t(unsigned int, length,
>  				PAGE_SIZE - page_offset);

This is actually a bug, this function is only called on IB_MR_TYPE_DMA
and in that case 'iova' is actually a phys addr

So iova should be called phys and the above should be:

		page = pfn_to_page(phys >> PAGE_SHIFT);
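
As a sketch of what that would mean for the loop quoted above (with the
parameter renamed to 'phys' purely for illustration, not taken from any
posted patch):

	while (length) {
		page = pfn_to_page(phys >> PAGE_SHIFT);
		bytes = min_t(unsigned int, length,
				PAGE_SIZE - page_offset);
		va = kmap_local_page(page);
		/* copy 'bytes' at va + page_offset as the existing loop
		 * body does, then kunmap_local(va) and advance
		 * phys/addr/length */
	}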

> @@ -488,7 +488,7 @@ int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
>  
>  	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
>  		page_offset = iova & (PAGE_SIZE - 1);
> -		page = virt_to_page(iova & PAGE_MASK);
> +		page = virt_to_page((void *)(iova & PAGE_MASK));

These masks against PAGE_MASK are not needed, virt_to_page does them
already.

>  	} else {
>  		unsigned long index;
>  		int err;
> @@ -545,7 +545,7 @@ int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
>  
>  	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
>  		page_offset = iova & (PAGE_SIZE - 1);
> -		page = virt_to_page(iova & PAGE_MASK);
> +		page = virt_to_page((void *)(iova & PAGE_MASK));

pfn_to_page here as well
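
Putting the two points together, the IB_MR_TYPE_DMA branches would then look
roughly like this (an illustrative sketch only):

	if (mr->ibmr.type == IB_MR_TYPE_DMA) {
		page_offset = iova & (PAGE_SIZE - 1);
		/* no PAGE_MASK needed: the shift by PAGE_SHIFT discards
		 * the low bits anyway */
		page = pfn_to_page(iova >> PAGE_SHIFT);
	}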

Jason

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-24 13:59 ` Jason Gunthorpe
@ 2023-03-29 14:28   ` Linus Walleij
  2023-03-29 20:17     ` Bob Pearson
  2023-03-29 23:16     ` Jason Gunthorpe
  0 siblings, 2 replies; 9+ messages in thread
From: Linus Walleij @ 2023-03-29 14:28 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Leon Romanovsky, Bernard Metzler, linux-rdma

On Fri, Mar 24, 2023 at 3:00 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Fri, Mar 24, 2023 at 11:32:52AM +0100, Linus Walleij wrote:
> > Like the other calls in this function virt_to_page() expects
> > a pointer, not an integer.
> >
> > However since many architectures implement virt_to_pfn() as
> > a macro, this function becomes polymorphic and accepts both a
> > (unsigned long) and a (void *).
> >
> > Fix this up with an explicit cast.
> >
> > Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
> > ---
> >  drivers/infiniband/sw/rxe/rxe_mr.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
> > index b10aa1580a64..5c90d83002f0 100644
> > --- a/drivers/infiniband/sw/rxe/rxe_mr.c
> > +++ b/drivers/infiniband/sw/rxe/rxe_mr.c
> > @@ -213,7 +213,7 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
> >  static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
> >  {
>
> All these functions have the wrong names, they are kva not IOVA.

This escalated quickly. :D

I'm trying to do the right thing, I try to fix the technical issues first,
and I can follow up with a rename patch if you want.

> > @@ -288,7 +288,7 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
> >       u8 *va;
>
> >       while (length) {
> > -             page = virt_to_page(iova & mr->page_mask);
> > +             page = virt_to_page((void *)(iova & mr->page_mask));
> >               bytes = min_t(unsigned int, length,
> >                               PAGE_SIZE - page_offset);
>
> This is actually a bug, this function is only called on IB_MR_TYPE_DMA
> and in that case 'iova' is actually a phys addr
>
> So iova should be called phys and the above should be:
>
>                 page = pfn_to_page(phys >> PAGE_SHIFT);

I tried to make a patch fixing all of these up and prepended it to v2 of this
patch (which also needed adjustments).

This is a bit tricky, so bear with me; also, I have no idea how to test this,
so I'm hoping for some help there.

I'm a bit puzzled: could the above code (which exists in
three instances in the driver) even work as it is? Or is it not used?
Or is there some failover from DMA to something else that is constantly
happening?

Yours,
Linus Walleij

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-29 14:28   ` Linus Walleij
@ 2023-03-29 20:17     ` Bob Pearson
  2023-03-29 23:16     ` Jason Gunthorpe
  1 sibling, 0 replies; 9+ messages in thread
From: Bob Pearson @ 2023-03-29 20:17 UTC (permalink / raw)
  To: Linus Walleij, Jason Gunthorpe
  Cc: Leon Romanovsky, Bernard Metzler, linux-rdma

On 3/29/23 09:28, Linus Walleij wrote:
> On Fri, Mar 24, 2023 at 3:00 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>> On Fri, Mar 24, 2023 at 11:32:52AM +0100, Linus Walleij wrote:
>>> Like the other calls in this function virt_to_page() expects
>>> a pointer, not an integer.
>>>
>>> However since many architectures implement virt_to_pfn() as
>>> a macro, this function becomes polymorphic and accepts both a
>>> (unsigned long) and a (void *).
>>>
>>> Fix this up with an explicit cast.
>>>
>>> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
>>> ---
>>>  drivers/infiniband/sw/rxe/rxe_mr.c | 8 ++++----
>>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
>>> index b10aa1580a64..5c90d83002f0 100644
>>> --- a/drivers/infiniband/sw/rxe/rxe_mr.c
>>> +++ b/drivers/infiniband/sw/rxe/rxe_mr.c
>>> @@ -213,7 +213,7 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
>>>  static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
>>>  {
>>
>> All these functions have the wrong names, they are kva not IOVA.
> 
> This escalated quickly. :D
> 
> I'm trying to do the right thing, I try to fix the technical issues first,
> and I can follow up with a rename patch if you want.
> 
>>> @@ -288,7 +288,7 @@ static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
>>>       u8 *va;
>>
>>>       while (length) {
>>> -             page = virt_to_page(iova & mr->page_mask);
>>> +             page = virt_to_page((void *)(iova & mr->page_mask));
>>>               bytes = min_t(unsigned int, length,
>>>                               PAGE_SIZE - page_offset);
>>
>> This is actually a bug, this function is only called on IB_MR_TYPE_DMA
>> and in that case 'iova' is actually a phys addr
>>
>> So iova should be called phys and the above should be:
>>
>>                 page = pfn_to_page(phys >> PAGE_SHIFT);
> 
> I tried to make a patch fixing all of these up and prepended it to v2 of this
> patch (which also needed adjustments).
> 
> This is a bit tricky, so bear with me; also, I have no idea how to test this,
> so I'm hoping for some help there.
> 
> I'm a bit puzzled: could the above code (which exists in
> three instances in the driver) even work as it is? Or is it not used?
> Or is there some failover from DMA to something else that is constantly
> happening?
> 
> Yours,
> Linus Walleij

The driver is a software-only emulation of an RDMA device, so there isn't any 'real' DMA going on,
just a bunch of memcpy()s.

Bob

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-29 14:28   ` Linus Walleij
  2023-03-29 20:17     ` Bob Pearson
@ 2023-03-29 23:16     ` Jason Gunthorpe
  2023-03-30  2:45       ` Bob Pearson
  1 sibling, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2023-03-29 23:16 UTC (permalink / raw)
  To: Linus Walleij; +Cc: Leon Romanovsky, Bernard Metzler, linux-rdma

On Wed, Mar 29, 2023 at 04:28:08PM +0200, Linus Walleij wrote:

> I'm a bit puzzled: could the above code (which exists in
> three instances in the driver) even work as it is? Or is it not used?
> Or is there some failover from DMA to something else that is constantly
> happening?

The physical address dma type IB_MR_TYPE_DMA is rarely used and maybe
nobody ever tested it, at least in a configuration where kva != pa

Then again, maybe I got it wrong and it is still a kva in this case
because it is still ultimately DMA mapped when using IB_MR_TYPE_DMA?

Bob?

Jason

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-29 23:16     ` Jason Gunthorpe
@ 2023-03-30  2:45       ` Bob Pearson
  2023-03-30 12:06         ` Jason Gunthorpe
  0 siblings, 1 reply; 9+ messages in thread
From: Bob Pearson @ 2023-03-30  2:45 UTC (permalink / raw)
  To: Jason Gunthorpe, Linus Walleij
  Cc: Leon Romanovsky, Bernard Metzler, linux-rdma

On 3/29/23 18:16, Jason Gunthorpe wrote:
> On Wed, Mar 29, 2023 at 04:28:08PM +0200, Linus Walleij wrote:
> 
>> I'm a bit puzzled: could the above code (which exists in
>> three instances in the driver) even work as it is? Or is it not used?
>> Or is there some failover from DMA to something else that is constantly
>> happening?
> 
> The physical address dma type IB_MR_TYPE_DMA is rarely used and maybe
> nobody ever tested it, at least in a configuration where kva != pa
> 
> Then again, maybe I got it wrong and it is still a kva in this case
> because it is still ultimately DMA mapped when using IB_MR_TYPE_DMA?
> 
> Bob?
> 
> Jason

In the amd64 environment I use to dev and test there is no difference AFAIK.
I have to go on faith that other platforms work but have very little experience.

Bob

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-30  2:45       ` Bob Pearson
@ 2023-03-30 12:06         ` Jason Gunthorpe
  2023-03-30 15:19           ` Linus Walleij
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2023-03-30 12:06 UTC (permalink / raw)
  To: Bob Pearson; +Cc: Linus Walleij, Leon Romanovsky, Bernard Metzler, linux-rdma

On Wed, Mar 29, 2023 at 09:45:45PM -0500, Bob Pearson wrote:
> On 3/29/23 18:16, Jason Gunthorpe wrote:
> > On Wed, Mar 29, 2023 at 04:28:08PM +0200, Linus Walleij wrote:
> > 
> >> I'm a bit puzzled: could the above code (which exists in
> >> three instances in the driver) even work as it is? Or is it not used?
> >> Or is there some failover from DMA to something else that is constantly
> >> happening?
> > 
> > The physical address dma type IB_MR_TYPE_DMA is rarely used and maybe
> > nobody ever tested it, at least in a configuration where kva != pa
> > 
> > Then again, maybe I got it wrong and it is still a kva in this case
> > because it is still ultimately DMA mapped when using IB_MR_TYPE_DMA?
> > 
> > Bob?
> > 
> > Jason
> 
> In the amd64 environment I use to dev and test there is no difference AFAIK.
> I have to go on faith that other platforms work but have very little experience.

Honestly, I would be much happier if the virtual wrappers used
physical not kva :\ But that is too much to ask

So I suspect I got it wrong and it is a kva still

Jason

* Re: [PATCH] RDMA/rxe: Pass a pointer to virt_to_page()
  2023-03-30 12:06         ` Jason Gunthorpe
@ 2023-03-30 15:19           ` Linus Walleij
  0 siblings, 0 replies; 9+ messages in thread
From: Linus Walleij @ 2023-03-30 15:19 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Bob Pearson, Leon Romanovsky, Bernard Metzler, linux-rdma

On Thu, Mar 30, 2023 at 2:06 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Wed, Mar 29, 2023 at 09:45:45PM -0500, Bob Pearson wrote:
> > On 3/29/23 18:16, Jason Gunthorpe wrote:
> > > On Wed, Mar 29, 2023 at 04:28:08PM +0200, Linus Walleij wrote:
> > >
> > >> I'm a bit puzzled: could the above code (which exists in
> > >> three instances in the driver) even work as it is? Or is it not used?
> > >> Or is there some failover from DMA to something else that is constantly
> > >> happening?
> > >
> > > The physical address dma type IB_MR_TYPE_DMA is rarely used and maybe
> > > nobody ever tested it, at least in a configuration where kva != pa
> > >
> > > Then again, maybe I got it wrong and it is still a kva in this case
> > > because it is still ultimately DMA mapped when using IB_MR_TYPE_DMA?
> > >
> > > Bob?
> > >
> > > Jason
> >
> > In the amd64 environment I use to dev and test there is no difference AFAIK.
> > I have to go on faith that other platforms work but have very little experience.
>
> Honestly, I would be much happier if the virtual wrappers used
> physical not kva :\ But that is too much to ask
>
> So I suspect I got it wrong and it is a kva still

OK I'll respin the first version of the patch but with the quirks making it work
on parisc.
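
(The parisc failure flagged by the build robot is a pointer-width issue: in
that 32-bit config a void * is 32 bits while iova is a u64, hence "cast to
pointer from integer of different size". One conventional way to keep the
explicit pointer cast while silencing that warning is an intermediate cast
through unsigned long, something like the sketch below, though not
necessarily what the respun patch ended up using:)

	page = virt_to_page((void *)(unsigned long)(iova & mr->page_mask));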

Yours,
Linus Walleij
