From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, Peter Zijlstra <peterz@infradead.org>,
	Bin Yang <bin.yang@intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Mark Gross <mark.gross@intel.com>
Subject: [patch V2 04/10] x86/mm/cpa: Add debug mechanism
Date: Fri, 14 Sep 2018 15:09:21 +0200
Message-ID: <20180914131301.941953185@linutronix.de>
In-Reply-To: <20180914130917.155416208@linutronix.de>

[-- Attachment #1: x86-mm-cpa--Add-proper-debug-output.patch --]
[-- Type: text/plain, Size: 4135 bytes --]

The whole static protection magic is silently fixing up anything which is
handed in. That's just wrong. The offending call sites need to be fixed.

Add a debug mechanism which emits a warning if a requested mapping needs to be
fixed up. The DETECT debug level is really only meant to be enabled by
developers, so by default the output is limited to the actual protection fixups.
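
As an illustration only (not part of the patch), a minimal user-space sketch
of the conflict test which check_conflict() performs below; the protection bit
used here is a stand-in value, not the real x86 pgprot definition:

#include <stdio.h>

typedef unsigned long pgprotval_t;

#define EX_RW	(1UL << 1)	/* stand-in for a "writable" bit */

/* A request conflicts if masking out the forbidden bits changes it */
static int conflicts(pgprotval_t req, pgprotval_t forbidden)
{
	return (req & ~forbidden) != req;
}

int main(void)
{
	pgprotval_t req = EX_RW;	/* caller asks for a writable mapping */
	pgprotval_t forbidden = EX_RW;	/* area must stay read-only */

	if (conflicts(req, forbidden))
		printf("req %016lx prevent %016lx\n", req, forbidden);
	return 0;
}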

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pageattr.c |   60 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 51 insertions(+), 9 deletions(-)

--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -42,6 +42,13 @@ struct cpa_data {
 	struct page	**pages;
 };
 
+enum cpa_warn {
+	CPA_PROTECT,
+	CPA_DETECT,
+};
+
+static const int cpa_warn_level = CPA_PROTECT;
+
 /*
  * Serialize cpa() (for !DEBUG_PAGEALLOC which uses large identity mappings)
  * using cpa_lock. So that we don't allow any other cpu, with stale large tlb
@@ -392,6 +399,27 @@ static pgprotval_t protect_kernel_text_r
 }
 #endif
 
+static inline bool conflicts(pgprot_t prot, pgprotval_t val)
+{
+	return (pgprot_val(prot) & ~val) != pgprot_val(prot);
+}
+
+static inline void check_conflict(int warnlvl, pgprot_t prot, pgprotval_t val,
+				  unsigned long start, unsigned long end,
+				  unsigned long pfn, const char *txt)
+{
+	static const char *lvltxt[] = {
+		[CPA_PROTECT]	= "protect",
+		[CPA_DETECT]	= "detect",
+	};
+
+	if (warnlvl > cpa_warn_level || !conflicts(prot, val))
+		return;
+
+	pr_warn("CPA %8s %10s: 0x%016lx - 0x%016lx PFN %lx req %016lx prevent %016lx\n",
+		lvltxt[warnlvl], txt, start, end, pfn, pgprot_val(prot), val);
+}
+
 /*
  * Certain areas of memory on x86 require very specific protection flags,
  * for example the BIOS area or kernel text. Callers don't always get this
@@ -399,19 +427,31 @@ static pgprotval_t protect_kernel_text_r
  * checks and fixes these known static required protection bits.
  */
 static inline pgprot_t static_protections(pgprot_t prot, unsigned long start,
-					  unsigned long pfn, unsigned long npg)
+					  unsigned long pfn, unsigned long npg,
+					  int warnlvl)
 {
-	pgprotval_t forbidden;
+	pgprotval_t forbidden, res;
 	unsigned long end;
 
 	/* Operate on the virtual address */
 	end = start + npg * PAGE_SIZE - 1;
-	forbidden  = protect_kernel_text(start, end);
-	forbidden |= protect_kernel_text_ro(start, end);
+
+	res = protect_kernel_text(start, end);
+	check_conflict(warnlvl, prot, res, start, end, pfn, "Text NX");
+	forbidden = res;
+
+	res = protect_kernel_text_ro(start, end);
+	check_conflict(warnlvl, prot, res, start, end, pfn, "Text RO");
+	forbidden |= res;
 
 	/* Check the PFN directly */
-	forbidden |= protect_pci_bios(pfn, pfn + npg - 1);
-	forbidden |= protect_rodata(pfn, pfn + npg - 1);
+	res = protect_pci_bios(pfn, pfn + npg - 1);
+	check_conflict(warnlvl, prot, res, start, end, pfn, "PCIBIOS NX");
+	forbidden |= res;
+
+	res = protect_rodata(pfn, pfn + npg - 1);
+	check_conflict(warnlvl, prot, res, start, end, pfn, "Rodata RO");
+	forbidden |= res;
 
 	return __pgprot(pgprot_val(prot) & ~forbidden);
 }
@@ -683,10 +723,11 @@ static int __should_split_large_page(pte
 	 * in it results in a different pgprot than the first one of the
 	 * requested range. If yes, then the page needs to be split.
 	 */
-	new_prot = static_protections(req_prot, address, pfn, 1);
+	new_prot = static_protections(req_prot, address, pfn, 1, CPA_DETECT);
 	pfn = old_pfn;
 	for (i = 0, addr = lpaddr; i < numpages; i++, addr += PAGE_SIZE, pfn++) {
-		pgprot_t chk_prot = static_protections(req_prot, addr, pfn, 1);
+		pgprot_t chk_prot = static_protections(req_prot, addr, pfn, 1,
+						       CPA_DETECT);
 
 		if (pgprot_val(chk_prot) != pgprot_val(new_prot))
 			return 1;
@@ -1296,7 +1337,8 @@ static int __change_page_attr(struct cpa
 		pgprot_val(new_prot) &= ~pgprot_val(cpa->mask_clr);
 		pgprot_val(new_prot) |= pgprot_val(cpa->mask_set);
 
-		new_prot = static_protections(new_prot, address, pfn, 1);
+		new_prot = static_protections(new_prot, address, pfn, 1,
+					      CPA_PROTECT);
 
 		new_prot = pgprot_clear_protnone_bits(new_prot);
 
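
For reference, a fixup caught at the default warning level (CPA_PROTECT) would
be reported by the pr_warn() above roughly along these lines; the address
range, PFN and pgprot values are made up for illustration:

  CPA  protect    Text RO: 0xffffffff81000000 - 0xffffffff811fffff PFN 1000 req 8000000000000163 prevent 0000000000000002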




Thread overview: 16+ messages
2018-09-14 13:09 [patch V2 00/10] x86/mm/cpa: Improve large page preservation handling Thomas Gleixner
2018-09-14 13:09 ` [patch V2 01/10] x86/mm/cpa: Split, rename and clean up try_preserve_large_page() Thomas Gleixner
2018-09-14 13:09 ` [patch V2 02/10] x86/mm/cpa: Rework static_protections() Thomas Gleixner
2018-09-14 13:09 ` [patch V2 03/10] x86/mm/cpa: Allow range check for static protections Thomas Gleixner
2018-09-14 13:09 ` Thomas Gleixner [this message]
2018-09-14 23:22   ` [patch V2 04/10] x86/mm/cpa: Add debug mechanism kbuild test robot
2018-09-15  6:58   ` kbuild test robot
2018-09-15 13:06   ` [patch V3 " Thomas Gleixner
2018-09-14 13:09 ` [patch V2 05/10] x86/mm/cpa: Add large page preservation statistics Thomas Gleixner
2018-09-14 13:09 ` [patch V2 06/10] x86/mm/cpa: Avoid static protection checks on unmap Thomas Gleixner
2018-09-14 13:09 ` [patch V2 07/10] x86/mm/cpa: Add sanity check for existing mappings Thomas Gleixner
2018-09-17  0:31   ` [LKP] [x86/mm/cpa] c77d419f92: WARNING:at_arch/x86/mm/pageattr.c:#__change_page_attr_set_clr kernel test robot
2018-09-17 11:05     ` Thomas Gleixner
2018-09-14 13:09 ` [patch V2 08/10] x86/mm/cpa: Optimize same protection check Thomas Gleixner
2018-09-14 13:09 ` [patch V2 09/10] x86/mm/cpa: Do the range check early Thomas Gleixner
2018-09-14 13:09 ` [patch V2 10/10] x86/mm/cpa: Avoid the 4k pages check completely Thomas Gleixner
