From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>,
	"Tamas K Lengyel" <tamas@tklengyel.com>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>,
	"Julien Grall" <julien@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Wei Liu" <wl@xen.org>
Subject: [PATCH 14/16] paged_pages field is MEM_PAGING-only
Date: Mon, 5 Jul 2021 18:14:17 +0200
Message-ID: <ab136038-0242-086c-9e67-02c47e1db3e0@suse.com>
In-Reply-To: <d1fd572d-5bfe-21d8-3b50-d9b0646ce2f0@suse.com>

Conditionalize it and its uses accordingly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was on the edge of introducing a helper for
atomic_read(&d->paged_pages), but decided against it because
dump_domains() would not be able to use it sensibly (I really want to
omit the output field there altogether when !MEM_PAGING).
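
For illustration, roughly the kind of helper I had in mind (the name is
made up here and it is not part of this patch):

/* Illustrative sketch only - not introduced by this patch. */
static inline unsigned int domain_paged_pages(const struct domain *d)
{
#ifdef CONFIG_MEM_PAGING
    return atomic_read(&d->paged_pages);
#else
    return 0;
#endif
}

getdomaininfo() could have used this as-is (a constant 0 is what it
wants in the !MEM_PAGING case anyway), but dump_domains() couldn't, as
the goal there is to drop the paged_pages= field from the output
entirely rather than print a zero.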

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1213,6 +1213,7 @@ int add_to_physmap(struct domain *sd, un
     }
     else
     {
+#ifdef CONFIG_MEM_PAGING
         /*
          * There is a chance we're plugging a hole where a paged out
          * page was.
@@ -1238,6 +1239,7 @@ int add_to_physmap(struct domain *sd, un
                 put_page(cpage);
             }
         }
+#endif
     }
 
     atomic_inc(&nr_saved_mfns);
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -666,11 +666,13 @@ p2m_add_page(struct domain *d, gfn_t gfn
             /* Count how man PoD entries we'll be replacing if successful */
             pod_count++;
         }
+#ifdef CONFIG_MEM_PAGING
         else if ( p2m_is_paging(ot) && (ot != p2m_ram_paging_out) )
         {
             /* We're plugging a hole in the physmap where a paged out page was */
             atomic_dec(&d->paged_pages);
         }
+#endif
     }
 
     /* Then, look for m->p mappings for this range and deal with them */
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -114,7 +114,11 @@ void getdomaininfo(struct domain *d, str
 #else
     info->shr_pages         = 0;
 #endif
+#ifdef CONFIG_MEM_PAGING
     info->paged_pages       = atomic_read(&d->paged_pages);
+#else
+    info->paged_pages       = 0;
+#endif
     info->shared_info_frame =
         gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -278,14 +278,18 @@ static void dump_domains(unsigned char k
 #ifdef CONFIG_MEM_SHARING
                " shared_pages=%u"
 #endif
+#ifdef CONFIG_MEM_PAGING
                " paged_pages=%u"
+#endif
                " dirty_cpus={%*pbl} max_pages=%u\n",
                domain_tot_pages(d), d->xenheap_pages,
 #ifdef CONFIG_MEM_SHARING
                atomic_read(&d->shr_pages),
 #endif
-               atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
-               d->max_pages);
+#ifdef CONFIG_MEM_PAGING
+               atomic_read(&d->paged_pages),
+#endif
+               CPUMASK_PR(d->dirty_cpumask), d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
                "%02x%02x-%02x%02x%02x%02x%02x%02x vm_assist=%08lx\n",
                d->handle[ 0], d->handle[ 1], d->handle[ 2], d->handle[ 3],
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -390,7 +390,9 @@ struct domain
     atomic_t         shr_pages;         /* shared pages */
 #endif
 
+#ifdef CONFIG_MEM_PAGING
     atomic_t         paged_pages;       /* paged-out pages */
+#endif
 
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */



Thread overview: 50+ messages
2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
2021-07-05 16:05 ` Jan Beulich
2021-07-05 16:05 ` [PATCH 01/16] x86/P2M: rename p2m_remove_page() Jan Beulich
2022-02-04 21:54   ` George Dunlap
2022-02-07  9:20     ` Jan Beulich
2021-07-05 16:06 ` [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
2021-07-05 17:47   ` Paul Durrant
2021-07-06  7:05     ` Jan Beulich
2022-02-04 22:07   ` George Dunlap
2022-02-07  9:38     ` Jan Beulich
2022-02-07 15:49       ` George Dunlap
2021-07-05 16:06 ` [PATCH 03/16] x86/P2M: drop a few CONFIG_HVM Jan Beulich
2022-02-04 22:13   ` George Dunlap
2022-02-07  9:51     ` Jan Beulich
2021-07-05 16:07 ` [PATCH 04/16] x86/P2M: move map_domain_gfn() (again) Jan Beulich
2022-02-04 22:17   ` George Dunlap
2021-07-05 16:07 ` [PATCH 05/16] x86/mm: move guest_physmap_{add,remove}_page() Jan Beulich
2022-02-05 21:06   ` George Dunlap
2021-07-05 16:07 ` [PATCH 06/16] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
2022-02-05 21:09   ` George Dunlap
2021-07-05 16:09 ` [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
2021-07-07  1:35   ` Tian, Kevin
2022-02-05 21:17   ` George Dunlap
2021-07-05 16:09 ` [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
2022-02-05 21:29   ` George Dunlap
2022-02-07 10:11     ` Jan Beulich
2022-02-07 14:45       ` George Dunlap
2022-02-07 15:23         ` Jan Beulich
2021-07-05 16:10 ` [PATCH 09/16] x86/P2M: split out init/teardown functions Jan Beulich
2022-02-05 21:31   ` George Dunlap
2021-07-05 16:10 ` [PATCH 10/16] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
2022-02-14 14:26   ` George Dunlap
2021-07-05 16:12 ` [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access() Jan Beulich
2022-02-14 15:12   ` George Dunlap
2022-02-14 15:20     ` Jan Beulich
2021-07-05 16:12 ` [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
2022-02-14 15:17   ` George Dunlap
2021-07-05 16:13 ` [PATCH 13/16] shr_pages field is MEM_SHARING-only Jan Beulich
2021-07-06 12:42   ` Tamas K Lengyel
2022-02-14 15:36   ` George Dunlap
2021-07-05 16:14 ` Jan Beulich [this message]
2021-07-06 12:44   ` [PATCH 14/16] paged_pages field is MEM_PAGING-only Tamas K Lengyel
2022-02-14 15:38   ` George Dunlap
2021-07-05 16:14 ` [PATCH 15/16] x86/P2M: p2m.c is HVM-only Jan Beulich
2022-02-14 15:39   ` George Dunlap
2021-07-05 16:15 ` [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
2021-07-05 17:49   ` Paul Durrant
2022-02-14 15:51   ` George Dunlap
2022-02-14 16:07     ` Jan Beulich
2022-02-16  7:54       ` Jan Beulich
