linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2] x86/intel/imr: Fix IMR lock logic
@ 2016-02-23  1:29 Bryan O'Donoghue
  2016-02-23  1:29 ` [PATCH v3 1/2] x86/intel/imr: Change the kernel's IMR lock bit to false Bryan O'Donoghue
  2016-02-23  1:29 ` [PATCH v3 2/2] x86/intel/imr: Drop IMR lock bit support Bryan O'Donoghue
  0 siblings, 2 replies; 7+ messages in thread
From: Bryan O'Donoghue @ 2016-02-23  1:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, hpa, x86, andriy.shevchenko, boon.leong.ong,
	paul.gortmaker, Bryan O'Donoghue

This patchset changes the lock logic for Isolated Memory Regions (IMRs). In
a conversation with Andriy we determined that the IMR associated with the
kernel's .text section should be unlocked, so that a kernel executed via
kexec can tear down its predecessor's IMR and set up a new one. In
subsequent discussion with Ingo we agreed to remove lock bit support
entirely to simplify the IMR interface. This patchset implements both of
those changes.
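
For context, a minimal sketch of what such a replacement could look like,
using the post-series imr_add_range() signature. It is purely illustrative:
the helper name and the old_*/new_* parameters are placeholders, not code
added by this series.

#include <linux/types.h>
#include <asm/imr.h>

/*
 * Illustrative only: with the kernel IMR left unlocked, a kexec'd kernel
 * is free to tear down its predecessor's region and cover its own
 * .text/.rodata extent instead.
 */
static int example_replace_kernel_imr(phys_addr_t old_base, size_t old_size,
				      phys_addr_t new_base, size_t new_size)
{
	int ret;

	/* Drop the predecessor's (now unlocked) region... */
	ret = imr_remove_range(old_base, old_size);
	if (ret < 0)
		return ret;

	/* ...and protect the new kernel's extent, CPU access only. */
	return imr_add_range(new_base, new_size, IMR_CPU, IMR_CPU);
}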

V1:
Set the kernel IMR lock bit to false - Bryan

V2:
Make the kernel IMR lock bit a parameter - Andriy

V3:
Revert to setting the kernel IMR lock bit to false - Ingo
Drop IMR lock bit support - Ingo

Bryan O'Donoghue (2):
  x86/intel/imr: Change the kernel's IMR lock bit to false
  x86/intel/imr: Drop IMR lock bit support

 arch/x86/include/asm/imr.h                   |  2 +-
 arch/x86/platform/intel-quark/imr.c          | 26 +++++++-------------------
 arch/x86/platform/intel-quark/imr_selftest.c | 15 +++++++--------
 3 files changed, 15 insertions(+), 28 deletions(-)

-- 
2.5.0


* [PATCH v3 1/2] x86/intel/imr: Change the kernel's IMR lock bit to false
  2016-02-23  1:29 [PATCH v3 0/2] x86/intel/imr: Fix IMR lock logic Bryan O'Donoghue
@ 2016-02-23  1:29 ` Bryan O'Donoghue
  2016-02-23  8:54   ` [tip:x86/platform] x86/platform/intel/quark: " tip-bot for Bryan O'Donoghue
  2016-02-23  1:29 ` [PATCH v3 2/2] x86/intel/imr: Drop IMR lock bit support Bryan O'Donoghue
  1 sibling, 1 reply; 7+ messages in thread
From: Bryan O'Donoghue @ 2016-02-23  1:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, hpa, x86, andriy.shevchenko, boon.leong.ong,
	paul.gortmaker, Bryan O'Donoghue

Currently when setting up an IMR around the kernel's .text section we lock
that IMR, preventing further modification. While superficially this appears
to be the right thing to do, in fact it doesn't account for a legitimate
change in the memory map, such as when executing a new kernel via kexec.

In such a scenario a second kernel can have a different size and location
from its predecessor, and can view some of the memory occupied by its
predecessor as legitimately usable DMA RAM. If this RAM were then
subsequently allocated to DMA agents within the system it could conceivably
trigger an IMR violation.

This patch fixes this potential situation by keeping the kernel's .text
section IMR lock bit false by default.

Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Suggested-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/platform/intel-quark/imr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/platform/intel-quark/imr.c b/arch/x86/platform/intel-quark/imr.c
index c61b6c3..bfadcd0 100644
--- a/arch/x86/platform/intel-quark/imr.c
+++ b/arch/x86/platform/intel-quark/imr.c
@@ -592,14 +592,14 @@ static void __init imr_fixup_memmap(struct imr_device *idev)
 	end = (unsigned long)__end_rodata - 1;
 
 	/*
-	 * Setup a locked IMR around the physical extent of the kernel
+	 * Setup an unlocked IMR around the physical extent of the kernel
 	 * from the beginning of the .text secton to the end of the
 	 * .rodata section as one physically contiguous block.
 	 *
 	 * We don't round up @size since it is already PAGE_SIZE aligned.
 	 * See vmlinux.lds.S for details.
 	 */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, true);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
 	if (ret < 0) {
 		pr_err("unable to setup IMR for kernel: %zu KiB (%lx - %lx)\n",
 			size / 1024, start, end);
-- 
2.5.0


* [PATCH v3 2/2] x86/intel/imr: Drop IMR lock bit support
  2016-02-23  1:29 [PATCH v3 0/2] x86/intel/imr: Fix IMR lock logic Bryan O'Donoghue
  2016-02-23  1:29 ` [PATCH v3 1/2] x86/intel/imr: Change the kernel's IMR lock bit to false Bryan O'Donoghue
@ 2016-02-23  1:29 ` Bryan O'Donoghue
  2016-02-23  8:55   ` [tip:x86/platform] x86/platform/intel/quark: " tip-bot for Bryan O'Donoghue
  1 sibling, 1 reply; 7+ messages in thread
From: Bryan O'Donoghue @ 2016-02-23  1:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, hpa, x86, andriy.shevchenko, boon.leong.ong,
	paul.gortmaker, Bryan O'Donoghue

Isolated Memory Regions support a lock bit. The lock bit in an IMR prevents
modification of the IMR until the core goes through a warm or cold reset.
The lock bit feature is not useful in the context of the kernel API and is
not really necessary, since modification of IMRs is possible only from
ring zero anyway. This patch drops support for IMR lock bits; it
simplifies the kernel API and removes an unnecessary and needlessly complex
feature.
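
For illustration, a minimal sketch of how an in-kernel caller would use the
simplified API. The function below is hypothetical; only imr_add_range(),
imr_remove_range() and the IMR_* constants come from this series.

#include <linux/types.h>
#include <asm/imr.h>

/*
 * Hypothetical caller: allow only CPU read/write access to a 1 KiB,
 * IMR_ALIGN-aligned physical buffer. With the lock parameter gone the
 * region stays modifiable and can be removed again later.
 */
static int example_protect_buffer(phys_addr_t base)
{
	int ret;

	ret = imr_add_range(base, IMR_ALIGN, IMR_CPU, IMR_CPU);
	if (ret < 0)
		return ret;

	/* ... region is now protected from non-CPU agents ... */

	return imr_remove_range(base, IMR_ALIGN);
}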

Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Suggested-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/imr.h                   |  2 +-
 arch/x86/platform/intel-quark/imr.c          | 24 ++++++------------------
 arch/x86/platform/intel-quark/imr_selftest.c | 15 +++++++--------
 3 files changed, 14 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/imr.h b/arch/x86/include/asm/imr.h
index cd2ce40..ebea2c9 100644
--- a/arch/x86/include/asm/imr.h
+++ b/arch/x86/include/asm/imr.h
@@ -53,7 +53,7 @@
 #define IMR_MASK		(IMR_ALIGN - 1)
 
 int imr_add_range(phys_addr_t base, size_t size,
-		  unsigned int rmask, unsigned int wmask, bool lock);
+		  unsigned int rmask, unsigned int wmask);
 
 int imr_remove_range(phys_addr_t base, size_t size);
 
diff --git a/arch/x86/platform/intel-quark/imr.c b/arch/x86/platform/intel-quark/imr.c
index bfadcd0..a0db298 100644
--- a/arch/x86/platform/intel-quark/imr.c
+++ b/arch/x86/platform/intel-quark/imr.c
@@ -135,11 +135,9 @@ static int imr_read(struct imr_device *idev, u32 imr_id, struct imr_regs *imr)
  * @idev:	pointer to imr_device structure.
  * @imr_id:	IMR entry to write.
  * @imr:	IMR structure representing address and access masks.
- * @lock:	indicates if the IMR lock bit should be applied.
  * @return:	0 on success or error code passed from mbi_iosf on failure.
  */
-static int imr_write(struct imr_device *idev, u32 imr_id,
-		     struct imr_regs *imr, bool lock)
+static int imr_write(struct imr_device *idev, u32 imr_id, struct imr_regs *imr)
 {
 	unsigned long flags;
 	u32 reg = imr_id * IMR_NUM_REGS + idev->reg_base;
@@ -163,15 +161,6 @@ static int imr_write(struct imr_device *idev, u32 imr_id,
 	if (ret)
 		goto failed;
 
-	/* Lock bit must be set separately to addr_lo address bits. */
-	if (lock) {
-		imr->addr_lo |= IMR_LOCK;
-		ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE,
-				     reg - IMR_NUM_REGS, imr->addr_lo);
-		if (ret)
-			goto failed;
-	}
-
 	local_irq_restore(flags);
 	return 0;
 failed:
@@ -334,11 +323,10 @@ static inline int imr_address_overlap(phys_addr_t addr, struct imr_regs *imr)
  * @size:	physical size of region in bytes must be aligned to 1KiB.
  * @read_mask:	read access mask.
  * @write_mask:	write access mask.
- * @lock:	indicates whether or not to permanently lock this region.
  * @return:	zero on success or negative value indicating error.
  */
 int imr_add_range(phys_addr_t base, size_t size,
-		  unsigned int rmask, unsigned int wmask, bool lock)
+		  unsigned int rmask, unsigned int wmask)
 {
 	phys_addr_t end;
 	unsigned int i;
@@ -411,7 +399,7 @@ int imr_add_range(phys_addr_t base, size_t size,
 	imr.rmask = rmask;
 	imr.wmask = wmask;
 
-	ret = imr_write(idev, reg, &imr, lock);
+	ret = imr_write(idev, reg, &imr);
 	if (ret < 0) {
 		/*
 		 * In the highly unlikely event iosf_mbi_write failed
@@ -422,7 +410,7 @@ int imr_add_range(phys_addr_t base, size_t size,
 		imr.addr_hi = 0;
 		imr.rmask = IMR_READ_ACCESS_ALL;
 		imr.wmask = IMR_WRITE_ACCESS_ALL;
-		imr_write(idev, reg, &imr, false);
+		imr_write(idev, reg, &imr);
 	}
 failed:
 	mutex_unlock(&idev->lock);
@@ -518,7 +506,7 @@ static int __imr_remove_range(int reg, phys_addr_t base, size_t size)
 	imr.rmask = IMR_READ_ACCESS_ALL;
 	imr.wmask = IMR_WRITE_ACCESS_ALL;
 
-	ret = imr_write(idev, reg, &imr, false);
+	ret = imr_write(idev, reg, &imr);
 
 failed:
 	mutex_unlock(&idev->lock);
@@ -599,7 +587,7 @@ static void __init imr_fixup_memmap(struct imr_device *idev)
 	 * We don't round up @size since it is already PAGE_SIZE aligned.
 	 * See vmlinux.lds.S for details.
 	 */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	if (ret < 0) {
 		pr_err("unable to setup IMR for kernel: %zu KiB (%lx - %lx)\n",
 			size / 1024, start, end);
diff --git a/arch/x86/platform/intel-quark/imr_selftest.c b/arch/x86/platform/intel-quark/imr_selftest.c
index 278e4da..28dd9d1 100644
--- a/arch/x86/platform/intel-quark/imr_selftest.c
+++ b/arch/x86/platform/intel-quark/imr_selftest.c
@@ -61,30 +61,30 @@ static void __init imr_self_test(void)
 	int ret;
 
 	/* Test zero zero. */
-	ret = imr_add_range(0, 0, 0, 0, false);
+	ret = imr_add_range(0, 0, 0, 0);
 	imr_self_test_result(ret < 0, "zero sized IMR\n");
 
 	/* Test exact overlap. */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret < 0, fmt_over, __va(base), __va(base + size));
 
 	/* Test overlap with base inside of existing. */
 	base += size - IMR_ALIGN;
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret < 0, fmt_over, __va(base), __va(base + size));
 
 	/* Test overlap with end inside of existing. */
 	base -= size + IMR_ALIGN * 2;
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret < 0, fmt_over, __va(base), __va(base + size));
 
 	/* Test that a 1 KiB IMR @ zero with read/write all will bomb out. */
 	ret = imr_add_range(0, IMR_ALIGN, IMR_READ_ACCESS_ALL,
-			    IMR_WRITE_ACCESS_ALL, false);
+			    IMR_WRITE_ACCESS_ALL);
 	imr_self_test_result(ret < 0, "1KiB IMR @ 0x00000000 - access-all\n");
 
 	/* Test that a 1 KiB IMR @ zero with CPU only will work. */
-	ret = imr_add_range(0, IMR_ALIGN, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(0, IMR_ALIGN, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret >= 0, "1KiB IMR @ 0x00000000 - cpu-access\n");
 	if (ret >= 0) {
 		ret = imr_remove_range(0, IMR_ALIGN);
@@ -93,8 +93,7 @@ static void __init imr_self_test(void)
 
 	/* Test 2 KiB works. */
 	size = IMR_ALIGN * 2;
-	ret = imr_add_range(0, size, IMR_READ_ACCESS_ALL,
-			    IMR_WRITE_ACCESS_ALL, false);
+	ret = imr_add_range(0, size, IMR_READ_ACCESS_ALL, IMR_WRITE_ACCESS_ALL);
 	imr_self_test_result(ret >= 0, "2KiB IMR @ 0x00000000\n");
 	if (ret >= 0) {
 		ret = imr_remove_range(0, size);
-- 
2.5.0


* [tip:x86/platform] x86/platform/intel/quark: Change the kernel's IMR lock bit to false
  2016-02-23  1:29 ` [PATCH v3 1/2] x86/intel/imr: Change the kernel's IMR lock bit to false Bryan O'Donoghue
@ 2016-02-23  8:54   ` tip-bot for Bryan O'Donoghue
  2016-02-23  9:26     ` Peter Zijlstra
  0 siblings, 1 reply; 7+ messages in thread
From: tip-bot for Bryan O'Donoghue <tipbot@zytor.com> @ 2016-02-23  8:54 UTC (permalink / raw)
  To: linux-tip-commits@vger.kernel.org
  Cc: torvalds, andriy.shevchenko, mingo, linux-kernel, tglx, peterz,
	hpa, pure.logic

Commit-ID:  dd71a17b1193dd4a4c35ecd0ba227aac3d110836
Gitweb:     http://git.kernel.org/tip/dd71a17b1193dd4a4c35ecd0ba227aac3d110836
Author:     Bryan O'Donoghue <pure.logic@nexus-software.ie>
AuthorDate: Tue, 23 Feb 2016 01:29:58 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 23 Feb 2016 07:35:53 +0100

x86/platform/intel/quark: Change the kernel's IMR lock bit to false

Currently when setting up an IMR around the kernel's .text section we lock
that IMR, preventing further modification. While superficially this appears
to be the right thing to do, in fact it doesn't account for a legitimate
change in the memory map, such as when executing a new kernel via kexec.

In such a scenario a second kernel can have a different size and location
from its predecessor, and can view some of the memory occupied by its
predecessor as legitimately usable DMA RAM. If this RAM were then
subsequently allocated to DMA agents within the system it could conceivably
trigger an IMR violation.

This patch fixes this potential situation by keeping the kernel's .text
section IMR lock bit false by default.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boon.leong.ong@intel.com
Cc: paul.gortmaker@windriver.com
Link: http://lkml.kernel.org/r/1456190999-12685-2-git-send-email-pure.logic@nexus-software.ie
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/platform/intel-quark/imr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/platform/intel-quark/imr.c b/arch/x86/platform/intel-quark/imr.c
index c61b6c3..bfadcd0 100644
--- a/arch/x86/platform/intel-quark/imr.c
+++ b/arch/x86/platform/intel-quark/imr.c
@@ -592,14 +592,14 @@ static void __init imr_fixup_memmap(struct imr_device *idev)
 	end = (unsigned long)__end_rodata - 1;
 
 	/*
-	 * Setup a locked IMR around the physical extent of the kernel
+	 * Setup an unlocked IMR around the physical extent of the kernel
 	 * from the beginning of the .text secton to the end of the
 	 * .rodata section as one physically contiguous block.
 	 *
 	 * We don't round up @size since it is already PAGE_SIZE aligned.
 	 * See vmlinux.lds.S for details.
 	 */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, true);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
 	if (ret < 0) {
 		pr_err("unable to setup IMR for kernel: %zu KiB (%lx - %lx)\n",
 			size / 1024, start, end);


* [tip:x86/platform] x86/platform/intel/quark: Drop IMR lock bit support
  2016-02-23  1:29 ` [PATCH v3 2/2] x86/intel/imr: Drop IMR lock bit support Bryan O'Donoghue
@ 2016-02-23  8:55   ` tip-bot for Bryan O'Donoghue
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Bryan O'Donoghue <tipbot@zytor.com> @ 2016-02-23  8:55 UTC (permalink / raw)
  To: linux-tip-commits@vger.kernel.org
  Cc: tglx, pure.logic, linux-kernel, torvalds, mingo, peterz, hpa

Commit-ID:  c637fa5294cefeda8be73cce20ba6693d22262dc
Gitweb:     http://git.kernel.org/tip/c637fa5294cefeda8be73cce20ba6693d22262dc
Author:     Bryan O'Donoghue <pure.logic@nexus-software.ie>
AuthorDate: Tue, 23 Feb 2016 01:29:59 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 23 Feb 2016 07:37:23 +0100

x86/platform/intel/quark: Drop IMR lock bit support

Isolated Memory Regions support a lock bit. The lock bit in an IMR prevents
modification of the IMR until the core goes through a warm or cold reset.
The lock bit feature is not useful in the context of the kernel API and is
not really necessary, since modification of IMRs is possible only from
ring zero anyway. This patch drops support for IMR lock bits; it
simplifies the kernel API and removes an unnecessary and needlessly complex
feature.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andriy.shevchenko@linux.intel.com
Cc: boon.leong.ong@intel.com
Cc: paul.gortmaker@windriver.com
Link: http://lkml.kernel.org/r/1456190999-12685-3-git-send-email-pure.logic@nexus-software.ie
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/imr.h                   |  2 +-
 arch/x86/platform/intel-quark/imr.c          | 24 ++++++------------------
 arch/x86/platform/intel-quark/imr_selftest.c | 15 +++++++--------
 3 files changed, 14 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/imr.h b/arch/x86/include/asm/imr.h
index cd2ce40..ebea2c9 100644
--- a/arch/x86/include/asm/imr.h
+++ b/arch/x86/include/asm/imr.h
@@ -53,7 +53,7 @@
 #define IMR_MASK		(IMR_ALIGN - 1)
 
 int imr_add_range(phys_addr_t base, size_t size,
-		  unsigned int rmask, unsigned int wmask, bool lock);
+		  unsigned int rmask, unsigned int wmask);
 
 int imr_remove_range(phys_addr_t base, size_t size);
 
diff --git a/arch/x86/platform/intel-quark/imr.c b/arch/x86/platform/intel-quark/imr.c
index 740445a..17d6d22 100644
--- a/arch/x86/platform/intel-quark/imr.c
+++ b/arch/x86/platform/intel-quark/imr.c
@@ -134,11 +134,9 @@ static int imr_read(struct imr_device *idev, u32 imr_id, struct imr_regs *imr)
  * @idev:	pointer to imr_device structure.
  * @imr_id:	IMR entry to write.
  * @imr:	IMR structure representing address and access masks.
- * @lock:	indicates if the IMR lock bit should be applied.
  * @return:	0 on success or error code passed from mbi_iosf on failure.
  */
-static int imr_write(struct imr_device *idev, u32 imr_id,
-		     struct imr_regs *imr, bool lock)
+static int imr_write(struct imr_device *idev, u32 imr_id, struct imr_regs *imr)
 {
 	unsigned long flags;
 	u32 reg = imr_id * IMR_NUM_REGS + idev->reg_base;
@@ -162,15 +160,6 @@ static int imr_write(struct imr_device *idev, u32 imr_id,
 	if (ret)
 		goto failed;
 
-	/* Lock bit must be set separately to addr_lo address bits. */
-	if (lock) {
-		imr->addr_lo |= IMR_LOCK;
-		ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE,
-				     reg - IMR_NUM_REGS, imr->addr_lo);
-		if (ret)
-			goto failed;
-	}
-
 	local_irq_restore(flags);
 	return 0;
 failed:
@@ -322,11 +311,10 @@ static inline int imr_address_overlap(phys_addr_t addr, struct imr_regs *imr)
  * @size:	physical size of region in bytes must be aligned to 1KiB.
  * @read_mask:	read access mask.
  * @write_mask:	write access mask.
- * @lock:	indicates whether or not to permanently lock this region.
  * @return:	zero on success or negative value indicating error.
  */
 int imr_add_range(phys_addr_t base, size_t size,
-		  unsigned int rmask, unsigned int wmask, bool lock)
+		  unsigned int rmask, unsigned int wmask)
 {
 	phys_addr_t end;
 	unsigned int i;
@@ -399,7 +387,7 @@ int imr_add_range(phys_addr_t base, size_t size,
 	imr.rmask = rmask;
 	imr.wmask = wmask;
 
-	ret = imr_write(idev, reg, &imr, lock);
+	ret = imr_write(idev, reg, &imr);
 	if (ret < 0) {
 		/*
 		 * In the highly unlikely event iosf_mbi_write failed
@@ -410,7 +398,7 @@ int imr_add_range(phys_addr_t base, size_t size,
 		imr.addr_hi = 0;
 		imr.rmask = IMR_READ_ACCESS_ALL;
 		imr.wmask = IMR_WRITE_ACCESS_ALL;
-		imr_write(idev, reg, &imr, false);
+		imr_write(idev, reg, &imr);
 	}
 failed:
 	mutex_unlock(&idev->lock);
@@ -506,7 +494,7 @@ static int __imr_remove_range(int reg, phys_addr_t base, size_t size)
 	imr.rmask = IMR_READ_ACCESS_ALL;
 	imr.wmask = IMR_WRITE_ACCESS_ALL;
 
-	ret = imr_write(idev, reg, &imr, false);
+	ret = imr_write(idev, reg, &imr);
 
 failed:
 	mutex_unlock(&idev->lock);
@@ -587,7 +575,7 @@ static void __init imr_fixup_memmap(struct imr_device *idev)
 	 * We don't round up @size since it is already PAGE_SIZE aligned.
 	 * See vmlinux.lds.S for details.
 	 */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	if (ret < 0) {
 		pr_err("unable to setup IMR for kernel: %zu KiB (%lx - %lx)\n",
 			size / 1024, start, end);
diff --git a/arch/x86/platform/intel-quark/imr_selftest.c b/arch/x86/platform/intel-quark/imr_selftest.c
index 0381343..f5bad40 100644
--- a/arch/x86/platform/intel-quark/imr_selftest.c
+++ b/arch/x86/platform/intel-quark/imr_selftest.c
@@ -60,30 +60,30 @@ static void __init imr_self_test(void)
 	int ret;
 
 	/* Test zero zero. */
-	ret = imr_add_range(0, 0, 0, 0, false);
+	ret = imr_add_range(0, 0, 0, 0);
 	imr_self_test_result(ret < 0, "zero sized IMR\n");
 
 	/* Test exact overlap. */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret < 0, fmt_over, __va(base), __va(base + size));
 
 	/* Test overlap with base inside of existing. */
 	base += size - IMR_ALIGN;
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret < 0, fmt_over, __va(base), __va(base + size));
 
 	/* Test overlap with end inside of existing. */
 	base -= size + IMR_ALIGN * 2;
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret < 0, fmt_over, __va(base), __va(base + size));
 
 	/* Test that a 1 KiB IMR @ zero with read/write all will bomb out. */
 	ret = imr_add_range(0, IMR_ALIGN, IMR_READ_ACCESS_ALL,
-			    IMR_WRITE_ACCESS_ALL, false);
+			    IMR_WRITE_ACCESS_ALL);
 	imr_self_test_result(ret < 0, "1KiB IMR @ 0x00000000 - access-all\n");
 
 	/* Test that a 1 KiB IMR @ zero with CPU only will work. */
-	ret = imr_add_range(0, IMR_ALIGN, IMR_CPU, IMR_CPU, false);
+	ret = imr_add_range(0, IMR_ALIGN, IMR_CPU, IMR_CPU);
 	imr_self_test_result(ret >= 0, "1KiB IMR @ 0x00000000 - cpu-access\n");
 	if (ret >= 0) {
 		ret = imr_remove_range(0, IMR_ALIGN);
@@ -92,8 +92,7 @@ static void __init imr_self_test(void)
 
 	/* Test 2 KiB works. */
 	size = IMR_ALIGN * 2;
-	ret = imr_add_range(0, size, IMR_READ_ACCESS_ALL,
-			    IMR_WRITE_ACCESS_ALL, false);
+	ret = imr_add_range(0, size, IMR_READ_ACCESS_ALL, IMR_WRITE_ACCESS_ALL);
 	imr_self_test_result(ret >= 0, "2KiB IMR @ 0x00000000\n");
 	if (ret >= 0) {
 		ret = imr_remove_range(0, size);

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [tip:x86/platform] x86/platform/intel/quark: Change the kernel's IMR lock bit to false
  2016-02-23  8:54   ` [tip:x86/platform] x86/platform/intel/quark: " tip-bot for Bryan O'Donoghue
@ 2016-02-23  9:26     ` Peter Zijlstra
  2016-02-23 10:12       ` Ingo Molnar
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2016-02-23  9:26 UTC (permalink / raw)
  To: torvalds, andriy.shevchenko, mingo, linux-kernel, tglx, pure.logic, hpa
  Cc: linux-tip-commits@vger.kernel.org

On Tue, Feb 23, 2016 at 12:54:40AM -0800, tip-bot for Bryan O'Donoghue <tipbot@zytor.com> wrote:

I'm not sure what happened here, but mutt is completely incapable of
viewing this message.

I also tried a GUI mail client, and that too choked on it.


* Re: [tip:x86/platform] x86/platform/intel/quark: Change the kernel's IMR lock bit to false
  2016-02-23  9:26     ` Peter Zijlstra
@ 2016-02-23 10:12       ` Ingo Molnar
  0 siblings, 0 replies; 7+ messages in thread
From: Ingo Molnar @ 2016-02-23 10:12 UTC (permalink / raw)
  To: Peter Zijlstra, H. Peter Anvin
  Cc: torvalds, andriy.shevchenko, linux-kernel, tglx, pure.logic, hpa


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Tue, Feb 23, 2016 at 12:54:40AM -0800, tip-bot for Bryan O'Donoghue <tipbot@zytor.com> wrote:
> 
> I'm not sure what happened here, but mutt is completely incapable of
> viewing this message.
> 
> I also tried a GUI mail client, and that too choked on it.

Yeah, sorry about that - about a dozen mails went out in that bogus form.

Won't commit more patches until it's fixed or disabled.

Thanks,

	Ingo


end of thread, other threads:[~2016-02-23 10:12 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-23  1:29 [PATCH v3 0/2] x86/intel/imr: Fix IMR lock logic Bryan O'Donoghue
2016-02-23  1:29 ` [PATCH v3 1/2] x86/intel/imr: Change the kernel's IMR lock bit to false Bryan O'Donoghue
2016-02-23  8:54   ` [tip:x86/platform] x86/platform/intel/quark: " tip-bot for Bryan O'Donoghue
2016-02-23  9:26     ` Peter Zijlstra
2016-02-23 10:12       ` Ingo Molnar
2016-02-23  1:29 ` [PATCH v3 2/2] x86/intel/imr: Drop IMR lock bit support Bryan O'Donoghue
2016-02-23  8:55   ` [tip:x86/platform] x86/platform/intel/quark: " tip-bot for Bryan O'Donoghue
