* [PATCH] x86-64: fix page table accounting
@ 2012-10-04 13:48 Jan Beulich
2012-10-10 22:07 ` Hugh Dickins
2012-10-24 10:10 ` [tip:x86/urgent] x86-64: Fix " tip-bot for Jan Beulich
0 siblings, 2 replies; 3+ messages in thread
From: Jan Beulich @ 2012-10-04 13:48 UTC (permalink / raw)
To: mingo, tglx, hpa; +Cc: Hugh Dickins, linux-kernel
Commit 20167d3421a089a1bf1bd680b150dc69c9506810 ("x86-64: Fix
accounting in kernel_physical_mapping_init()") went a little too far
by entirely removing the counting of pre-populated page tables: This
should be done at boot time (to cover the page tables set up in early
boot code), but shouldn't be done during memory hot add.
Hence, re-add the removed increments of "pages", but make them and the
one in phys_pte_init() conditional upon !after_bootmem.
Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Not sure if this ought to be copied to stable@.
---
arch/x86/mm/init_64.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- 3.6/arch/x86/mm/init_64.c
+++ 3.6-x86_64-page-table-count/arch/x86/mm/init_64.c
@@ -386,7 +386,8 @@ phys_pte_init(pte_t *pte_page, unsigned
 		 * these mappings are more intelligent.
 		 */
 		if (pte_val(*pte)) {
-			pages++;
+			if (!after_bootmem)
+				pages++;
 			continue;
 		}
@@ -451,6 +452,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned
 		 * attributes.
 		 */
 		if (page_size_mask & (1 << PG_LEVEL_2M)) {
+			if (!after_bootmem)
+				pages++;
 			last_map_addr = next;
 			continue;
 		}
@@ -526,6 +529,8 @@ phys_pud_init(pud_t *pud_page, unsigned
 		 * attributes.
 		 */
 		if (page_size_mask & (1 << PG_LEVEL_1G)) {
+			if (!after_bootmem)
+				pages++;
 			last_map_addr = next;
 			continue;
 		}
* Re: [PATCH] x86-64: fix page table accounting
2012-10-04 13:48 [PATCH] x86-64: fix page table accounting Jan Beulich
@ 2012-10-10 22:07 ` Hugh Dickins
2012-10-24 10:10 ` [tip:x86/urgent] x86-64: Fix " tip-bot for Jan Beulich
1 sibling, 0 replies; 3+ messages in thread
From: Hugh Dickins @ 2012-10-10 22:07 UTC (permalink / raw)
To: Jan Beulich; +Cc: mingo, tglx, hpa, linux-kernel
On Thu, 4 Oct 2012, Jan Beulich wrote:
> Commit 20167d3421a089a1bf1bd680b150dc69c9506810 ("x86-64: Fix
> accounting in kernel_physical_mapping_init()") went a little too far
> by entirely removing the counting of pre-populated page tables: This
> should be done at boot time (to cover the page tables set up in early
> boot code), but shouldn't be done during memory hot add.
>
> Hence, re-add the removed increments of "pages", but make them and the
> one in phys_pte_init() conditional upon !after_bootmem.
>
> Reported-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
I think this has not yet been picked up: perhaps it's awaiting an
Acked-by: Hugh Dickins <hughd@google.com>
or a
Tested-by: Hugh Dickins <hughd@google.com>
but I hesitated to give those because you understand what's going
on here much better than I pretend to.
>
> ---
> Not sure if this ought to be copied to stable@.
I guess not. Much as I like my kernels to show good meminfo numbers,
I was recently saying that David Rientjes's patches to get Unevictable
and Mlocked right were not important enough for stable, and I think
those numbers are more interesting to most people than the DirectMaps.
But I'd be happily overruled on all three patches.
Hugh
* [tip:x86/urgent] x86-64: Fix page table accounting
2012-10-04 13:48 [PATCH] x86-64: fix page table accounting Jan Beulich
2012-10-10 22:07 ` Hugh Dickins
@ 2012-10-24 10:10 ` tip-bot for Jan Beulich
1 sibling, 0 replies; 3+ messages in thread
From: tip-bot for Jan Beulich @ 2012-10-24 10:10 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, jbeulich, hughd, JBeulich, stable, tglx
Commit-ID: 876ee61aadf01aa0db981b5d249cbdd53dc28b5e
Gitweb: http://git.kernel.org/tip/876ee61aadf01aa0db981b5d249cbdd53dc28b5e
Author: Jan Beulich <JBeulich@suse.com>
AuthorDate: Thu, 4 Oct 2012 14:48:10 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 24 Oct 2012 10:50:25 +0200
x86-64: Fix page table accounting
Commit 20167d3421a089a1bf1bd680b150dc69c9506810 ("x86-64: Fix
accounting in kernel_physical_mapping_init()") went a little too
far by entirely removing the counting of pre-populated page
tables: this should be done at boot time (to cover the page
tables set up in early boot code), but shouldn't be done during
memory hot add.
Hence, re-add the removed increments of "pages", but make them
and the one in phys_pte_init() conditional upon !after_bootmem.
Reported-Acked-and-Tested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/506DAFBA020000780009FA8C@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/mm/init_64.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..3baff25 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -386,7 +386,8 @@ phys_pte_init(pte_t *pte_page, unsigned long addr, unsigned long end,
 		 * these mappings are more intelligent.
 		 */
 		if (pte_val(*pte)) {
-			pages++;
+			if (!after_bootmem)
+				pages++;
 			continue;
 		}
@@ -451,6 +452,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
 		 * attributes.
 		 */
 		if (page_size_mask & (1 << PG_LEVEL_2M)) {
+			if (!after_bootmem)
+				pages++;
 			last_map_addr = next;
 			continue;
 		}
@@ -526,6 +529,8 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 		 * attributes.
 		 */
 		if (page_size_mask & (1 << PG_LEVEL_1G)) {
+			if (!after_bootmem)
+				pages++;
 			last_map_addr = next;
 			continue;
 		}