From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: pbonzini@redhat.com, thuth@redhat.com, andrew.jones@linux.dev,
	kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	Laurent Vivier <lvivier@redhat.com>,
	Janosch Frank <frankja@linux.ibm.com>
Subject: Re: [kvm-unit-tests PATCH 1/3] lib/vmalloc: Treat virt_to_pte_phys() as returning a physical address
Date: Thu, 6 Oct 2022 16:50:37 +0200	[thread overview]
Message-ID: <20221006165037.19b9488f@p-imbrenda> (raw)
In-Reply-To: <Yz7FZPWAsFV9Cwpv@monolith.localdoman>

On Thu, 6 Oct 2022 13:09:08 +0100
Alexandru Elisei <alexandru.elisei@arm.com> wrote:

> Hi,
> 
> On Thu, Oct 06, 2022 at 01:35:52PM +0200, Claudio Imbrenda wrote:
> > On Thu,  6 Oct 2022 12:12:39 +0100
> > Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> >   
> > > All architectures that implement virt_to_pte_phys() (s390x, x86,
> > > arm and arm64) return a physical address from the function. Teach
> > > vmalloc to treat it as such, instead of confusing the return
> > > value with a page table entry.  
> > 
> > I'm not sure I understand what you mean  
> 
> I thought that vmalloc uses PAGE_MASK because it expects
> virt_to_pte_phys() to return a pteval (because of the "pte" part in
> the virt_to_pte_phys()

I agree that the name of the function is confusing; there is a comment
in lib/vmalloc.h for virt_to_pte_phys, and it says:

/* Walk the page table and resolve the virtual address to a physical
address */

> function name), which might have the [PAGE_SHIFT-1:0] bits used to store
> page metadata by an architecture (like permissions), but like you've
> explained below it uses PAGE_MASK to align the page address (which is
> identically mapped) before passing it to the page allocator to be freed.
> 
> >   
> > > Changing things the other way around (having the function return a page
> > > table entry instead) is not feasible, because it is possible for an
> > > architecture to use the upper bits of the table entry to store metadata
> > > about the page.
> > > 
> > > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > Cc: Thomas Huth <thuth@redhat.com>
> > > Cc: Andrew Jones <andrew.jones@linux.dev>
> > > Cc: Laurent Vivier <lvivier@redhat.com>
> > > Cc: Janosch Frank <frankja@linux.ibm.com>
> > > Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
> > > Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > > ---
> > >  lib/vmalloc.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/lib/vmalloc.c b/lib/vmalloc.c
> > > index 572682576cc3..0696b5da8190 100644
> > > --- a/lib/vmalloc.c
> > > +++ b/lib/vmalloc.c
> > > @@ -169,7 +169,7 @@ static void vm_free(void *mem)
> > >  	/* the pointer is not page-aligned, it was a single-page allocation */
> > >  	if (!IS_ALIGNED((uintptr_t)mem, PAGE_SIZE)) {
> > >  		assert(GET_MAGIC(mem) == VM_MAGIC);
> > > -		page = virt_to_pte_phys(page_root, mem) & PAGE_MASK;
> > > +		page = virt_to_pte_phys(page_root, mem);  
> > 
> > this will break things for small allocations, though. if the pointer is
> > not aligned, then the result of virt_to_pte_phys will also not be
> > aligned....  
> 
> I agree, I missed that part. Would be nice if it were written using
> PAGE_ALIGN to avoid mistakes like mine in the future, but that's

PAGE_ALIGN rounds UP, though, and we need to round down.

I think it's easier and more readable to use & PAGE_MASK, instead of
the more cumbersome ALIGN_DOWN((thing), PAGE_SIZE).

> unimportant.
> 
> >   
> > >  		assert(page);
> > >  		free_page(phys_to_virt(page));  
> > 
> > ...and phys_to_virt will also return an unaligned address, and
> > free_page will complain about it.
> >   
> > >  		return;
> > > @@ -183,7 +183,7 @@ static void vm_free(void *mem)
> > >  	/* free all the pages including the metadata page */
> > >  	ptr = (uintptr_t)m & PAGE_MASK;  
> > 
> > ptr gets page aligned here
> >   
> > >  	for (i = 0 ; i < m->npages + 1; i++, ptr += PAGE_SIZE) {
> > > -		page = virt_to_pte_phys(page_root, (void *)ptr) & PAGE_MASK;
> > > +		page = virt_to_pte_phys(page_root, (void *)ptr);  
> > 
> > so virt_to_pte_phys will also return an aligned address;
> > I agree that & PAGE_MASK is redundant here  
> 
> You are correct: if we've ended up here, it means that the pointer is
> already page aligned, and it will be incremented by PAGE_SIZE each
> iteration, hence the result of virt_to_pte_phys() will also be page
> aligned.
> 
> I don't see much point in writing a patch just to remove the unnecessary
> alignment here, so I'll drop this patch entirely.
> 
> Thank you for the prompt explanation!

I'm glad things have been clarified :)

> 
> Alex
> 
> >   
> > >  		assert(page);
> > >  		free_page(phys_to_virt(page));
> > >  	}  
> >   



Thread overview:
2022-10-06 11:12 [kvm-unit-tests PATCH 0/3] arm/arm64: mmu cleanups and fixes Alexandru Elisei
2022-10-06 11:12 ` [kvm-unit-tests PATCH 1/3] lib/vmalloc: Treat virt_to_pte_phys() as returning a physical address Alexandru Elisei
2022-10-06 11:35   ` Claudio Imbrenda
2022-10-06 12:09     ` Alexandru Elisei
2022-10-06 14:50       ` Claudio Imbrenda [this message]
2022-10-06 11:12 ` [kvm-unit-tests PATCH 2/3] arm/arm64: mmu: Teach virt_to_pte_phys() about block descriptors Alexandru Elisei
2022-10-06 11:12 ` [kvm-unit-tests PATCH 3/3] arm/arm64: mmu: Rename mmu_get_pte() -> follow_pte() Alexandru Elisei