* [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}
@ 2015-10-22 15:43 Julien Grall
  2015-10-22 15:48 ` Jan Beulich
  0 siblings, 1 reply; 5+ messages in thread
From: Julien Grall @ 2015-10-22 15:43 UTC (permalink / raw)
  To: xen-devel
  Cc: Julien Grall, George Dunlap, Keir Fraser, Jan Beulich, Andrew Cooper

The last parameter of alloc_domheap_page{,s} contains the memory flags,
not the order of the allocation.

Use 0, as it was before commit 1069d63c5ef2510d08b83b2171af660e5bb18c63
"x86/mm/p2m: use defines for page sizes".

Note that PAGE_ORDER_4K is also equal to 0, so the behavior stays the
same.

Signed-off-by: Julien Grall <julien.grall@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/mm/p2m-pod.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 901da37..61cca6f 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -222,7 +222,7 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
         else
             order = PAGE_ORDER_4K;
     retry:
-        page = alloc_domheap_pages(d, order, PAGE_ORDER_4K);
+        page = alloc_domheap_pages(d, order, 0);
         if ( unlikely(page == NULL) )
         {
             if ( order == PAGE_ORDER_2M )
@@ -477,7 +477,7 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
 
     free_domheap_page(p);
 
-    p = alloc_domheap_page(d, PAGE_ORDER_4K);
+    p = alloc_domheap_page(d, 0);
     if ( unlikely(!p) )
         return;
 
-- 
2.1.4
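Why the mixed-up argument was harmless can be sketched outside Xen. Everything below is a mock with illustrative names, not the real allocator; only the constants and the argument order mirror the actual alloc_domheap_pages() API:

```c
#include <assert.h>

/* In Xen, alloc_domheap_pages(d, order, memflags) takes MEMF_* memory
 * flags as its last argument, not an allocation order. */
#define PAGE_ORDER_4K 0   /* 4K pages: order 0 */
#define PAGE_ORDER_2M 9   /* 2M pages: order 9 (512 * 4K) */

static unsigned int last_memflags;

/* Mock stand-in that just records the memflags it was given. */
static void *alloc_domheap_pages_mock(void *d, unsigned int order,
                                      unsigned int memflags)
{
    (void)d;
    (void)order;
    last_memflags = memflags;
    return &last_memflags; /* dummy non-NULL "page" */
}
```

Because PAGE_ORDER_4K expands to 0, passing it in the memflags slot is bit-for-bit identical to passing 0, so no flag bits were ever set by accident; had PAGE_ORDER_2M (9) landed there instead, it would have been misinterpreted as flag bits.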


* Re: [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}
  2015-10-22 15:43 [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s} Julien Grall
@ 2015-10-22 15:48 ` Jan Beulich
  2015-10-22 16:13   ` Julien Grall
  0 siblings, 1 reply; 5+ messages in thread
From: Jan Beulich @ 2015-10-22 15:48 UTC (permalink / raw)
  To: Julien Grall; +Cc: George Dunlap, Andrew Cooper, Keir Fraser, xen-devel

>>> On 22.10.15 at 17:43, <julien.grall@citrix.com> wrote:
> @@ -477,7 +477,7 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
>  
>      free_domheap_page(p);
>  
> -    p = alloc_domheap_page(d, PAGE_ORDER_4K);
> +    p = alloc_domheap_page(d, 0);

I realize that this is the easiest fix, but I think here we instead want
something like

@@ -477,13 +477,14 @@ p2m_pod_offline_or_broken_replace(struct
 {
     struct domain *d;
     struct p2m_domain *p2m;
+    nodeid_t node = phys_to_nid(page_to_maddr(p));
 
     if ( !(d = page_get_owner(p)) || !(p2m = p2m_get_hostp2m(d)) )
         return;
 
     free_domheap_page(p);
 
-    p = alloc_domheap_page(d, PAGE_ORDER_4K);
+    p = alloc_domheap_pages(d, PAGE_ORDER_4K, MEMF_node(node));
     if ( unlikely(!p) )
         return;
 

Jan
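For context, the allocator can be steered toward a NUMA node through the memflags argument. The encoding below is a simplified imitation of the MEMF_node machinery in xen/include/xen/mm.h (the field width and shift are illustrative, and phys_to_nid is mocked):

```c
#include <assert.h>

/* Simplified imitation of Xen's MEMF_node encoding: the node id is
 * biased by 1 so that memflags == 0 still means "no node preference",
 * and is stored in its own bitfield within the flags word. */
#define _MEMF_node       8
#define MEMF_node(n)     ((((n) + 1) & 0xffu) << _MEMF_node)
#define MEMF_get_node(f) ((((f) >> _MEMF_node) - 1) & 0xffu)

/* Mocked phys_to_nid: pretend odd addresses live on node 1, even on 0. */
static unsigned int phys_to_nid_mock(unsigned long maddr)
{
    return (unsigned int)(maddr & 1);
}
```

The +1 bias is what lets node 0 be requested explicitly while keeping 0 as "any node", which is why the suggested version computes the old page's node before freeing it and threads it back through MEMF_node().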


* Re: [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}
  2015-10-22 15:48 ` Jan Beulich
@ 2015-10-22 16:13   ` Julien Grall
  2015-10-22 16:54     ` Dario Faggioli
  2015-10-23  6:45     ` Jan Beulich
  0 siblings, 2 replies; 5+ messages in thread
From: Julien Grall @ 2015-10-22 16:13 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, Keir Fraser, xen-devel

On 22/10/15 16:48, Jan Beulich wrote:
>>>> On 22.10.15 at 17:43, <julien.grall@citrix.com> wrote:
>> @@ -477,7 +477,7 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
>>  
>>      free_domheap_page(p);
>>  
>> -    p = alloc_domheap_page(d, PAGE_ORDER_4K);
>> +    p = alloc_domheap_page(d, 0);
> 
> I realize that this is the easiest fix, but I think here we instead want
> something like

It sounds sensible to me to re-allocate the page on the same NUMA node.

I will send another version of this patch, although I would appreciate it
if someone could test it, because I don't have any NUMA platform.

Regards,

-- 
Julien Grall


* Re: [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}
  2015-10-22 16:13   ` Julien Grall
@ 2015-10-22 16:54     ` Dario Faggioli
  2015-10-23  6:45     ` Jan Beulich
  1 sibling, 0 replies; 5+ messages in thread
From: Dario Faggioli @ 2015-10-22 16:54 UTC (permalink / raw)
  To: Julien Grall, Jan Beulich
  Cc: George Dunlap, Andrew Cooper, Keir Fraser, xen-devel



On Thu, 2015-10-22 at 17:13 +0100, Julien Grall wrote:
> On 22/10/15 16:48, Jan Beulich wrote:
> > > > > On 22.10.15 at 17:43, <julien.grall@citrix.com> wrote:
> > > @@ -477,7 +477,7 @@ p2m_pod_offline_or_broken_replace(struct
> > > page_info *p)
> > >  
> > >      free_domheap_page(p);
> > >  
> > > -    p = alloc_domheap_page(d, PAGE_ORDER_4K);
> > > +    p = alloc_domheap_page(d, 0);
> > 
> > I realize that this is the easiest fix, but I think here we instead
> > want
> > something like
> 
> It sounds sensible to me to re-allocate the page on the same numa
> node.
> 
Indeed. It may be worth mentioning this in the changelog too, IMHO.

> I will send another version of this patch. Although, I would
> appreciate
> if someone can test it because I don't have any NUMA platform.
> 
I'm up for it... What would be a reasonable test that actually
stresses this?

I certainly can do a "regular" test cycle such as: boot --> create a
guest --> play a bit with it --> shutdown. Is that enough?

I think it should be an HVM guest, right? And perhaps I should specify
different mem= and maxmem= ?
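For what it's worth, PoD only comes into play when an HVM guest starts with less memory than its maximum, so a test guest would need something along these lines (xl config syntax from memory; the values are purely illustrative):

```
builder = "hvm"
memory = 512      # current allocation
maxmem = 1024     # guest-visible maximum; the gap is populated on demand
```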

Just let me know and, if you remember, Cc me when sending next version.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




* Re: [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}
  2015-10-22 16:13   ` Julien Grall
  2015-10-22 16:54     ` Dario Faggioli
@ 2015-10-23  6:45     ` Jan Beulich
  1 sibling, 0 replies; 5+ messages in thread
From: Jan Beulich @ 2015-10-23  6:45 UTC (permalink / raw)
  To: Julien Grall; +Cc: George Dunlap, Andrew Cooper, Keir Fraser, xen-devel

>>> On 22.10.15 at 18:13, <julien.grall@citrix.com> wrote:
> On 22/10/15 16:48, Jan Beulich wrote:
>>>>> On 22.10.15 at 17:43, <julien.grall@citrix.com> wrote:
>>> @@ -477,7 +477,7 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
>>>  
>>>      free_domheap_page(p);
>>>  
>>> -    p = alloc_domheap_page(d, PAGE_ORDER_4K);
>>> +    p = alloc_domheap_page(d, 0);
>> 
>> I realize that this is the easiest fix, but I think here we instead want
>> something like
> 
> It sounds sensible to me to re-allocate the page on the same numa node.
> 
> I will send another version of this patch. Although, I would appreciate
> if someone can test it because I don't have any NUMA platform.

To test this code path, memory hot unplug would need to be
supported by the platform too, or you'd have to be unlucky
enough for your system to have a faulty memory page in the
"right" slot.

Jan


