* [PATCH 0/6] first step of standardising OPAL_BUSY handling
@ 2018-04-05  8:15 Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 1/6] powerpc/powernv: define a standard delay for OPAL_BUSY type retry loops Nicholas Piggin
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith

Patch 1 explains most of the reasoning.
Patches 1 and 2, and possibly 4 (we have seen a bug caused by the
RTC driver but not yet one caused by NVRAM), could be backported as
bugfixes. In most other cases the changes are inconsequential or
unlikely to be a problem.

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: define a standard delay for OPAL_BUSY type retry
    loops
  powerpc/powernv: OPAL RTC driver standardise OPAL_BUSY loops
  powerpc/powernv: OPAL platform standardise OPAL_BUSY loops
  powerpc/powernv: OPAL NVRAM driver standardise OPAL_BUSY delays
  powerpc/powernv: OPAL dump support standardise OPAL_BUSY delays
  powerpc/xive: standardise OPAL_BUSY delays

 arch/powerpc/include/asm/opal.h             |   3 +
 arch/powerpc/platforms/powernv/opal-dump.c  |   4 +-
 arch/powerpc/platforms/powernv/opal-nvram.c |   7 +-
 arch/powerpc/platforms/powernv/opal-rtc.c   |   6 +-
 arch/powerpc/platforms/powernv/opal.c       |   8 +-
 arch/powerpc/platforms/powernv/setup.c      |  16 ++-
 arch/powerpc/sysdev/xive/native.c           | 193 ++++++++++++++++------------
 drivers/rtc/rtc-opal.c                      |  33 +++--
 8 files changed, 163 insertions(+), 107 deletions(-)

-- 
2.16.3

* [PATCH 1/6] powerpc/powernv: define a standard delay for OPAL_BUSY type retry loops
  2018-04-05  8:15 [PATCH 0/6] first step of standardising OPAL_BUSY handling Nicholas Piggin
@ 2018-04-05  8:15 ` Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 2/6] powerpc/powernv: OPAL RTC driver standardise OPAL_BUSY loops Nicholas Piggin
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith

This is the start of an effort to tidy up and standardise all the
delays. Existing loops have a range of delay/sleep periods from 1ms
to 20ms, and some have no delay at all. They all loop forever except
the RTC driver, which times out after 10 retries using 10ms delays.
So use 10ms as our standard delay. The OPAL maintainer agrees that
10ms is a reasonable starting point.

The idea is to use the same recipe everywhere. Once this is proven
to work, it will be documented as an OPAL API standard. Then both
firmware and OS can agree, and if a particular call needs something
else, that can be documented with its reasoning.

This is not the end of this effort; it's just a relatively easy
change that fixes some existing high-latency delays. There should be
provision for standardising timeouts and/or interruptible loops where
possible, so that non-fatal firmware errors don't cause hangs.
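
Roughly, the recipe applied by the later patches in this series looks
like the following sketch. opal_call_here() is just a stand-in for
whichever OPAL call is being retried, and callers that cannot sleep
use mdelay() rather than msleep():

#include <linux/delay.h>	/* msleep() / mdelay() */
#include <asm/opal.h>		/* OPAL_BUSY_DELAY_MS, opal_poll_events() */

static int opal_busy_loop_sketch(void)
{
	s64 rc = OPAL_BUSY;

	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
		rc = opal_call_here();	/* placeholder for the OPAL call */
		if (rc == OPAL_BUSY_EVENT) {
			/* Sleep, then let OPAL process outstanding events */
			msleep(OPAL_BUSY_DELAY_MS);
			opal_poll_events(NULL);
		} else if (rc == OPAL_BUSY) {
			msleep(OPAL_BUSY_DELAY_MS);
		}
	}

	return rc == OPAL_SUCCESS ? 0 : -EIO;
}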

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/opal.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 12e70fb58700..fcf3ed5b8b18 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -21,6 +21,9 @@
 /* We calculate number of sg entries based on PAGE_SIZE */
 #define SG_ENTRIES_PER_NODE ((PAGE_SIZE - 16) / sizeof(struct opal_sg_entry))
 
+/* Default time to sleep or delay between OPAL_BUSY/OPAL_BUSY_EVENT loops */
+#define OPAL_BUSY_DELAY_MS	10
+
 /* /sys/firmware/opal */
 extern struct kobject *opal_kobj;
 
-- 
2.16.3

* [PATCH 2/6] powerpc/powernv: OPAL RTC driver standardise OPAL_BUSY loops
  2018-04-05  8:15 [PATCH 0/6] first step of standardising OPAL_BUSY handling Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 1/6] powerpc/powernv: define a standard delay for OPAL_BUSY type retry loops Nicholas Piggin
@ 2018-04-05  8:15 ` Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 3/6] powerpc/powernv: OPAL platform " Nicholas Piggin
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith, linux-rtc

Convert to using the standard delay poll/delay form.

The OPAL RTC driver:

- Did not previously delay or sleep in the OPAL_BUSY_EVENT case.
  Scheduling delays of up to 50 seconds have been observed here
  (a BMC reboot can trigger them), which this should fix.

Cc: linux-rtc@vger.kernel.org
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/powernv/opal-rtc.c |  6 ++++--
 drivers/rtc/rtc-opal.c                    | 33 ++++++++++++++++++++-----------
 2 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/opal-rtc.c b/arch/powerpc/platforms/powernv/opal-rtc.c
index f8868864f373..f530cf62594d 100644
--- a/arch/powerpc/platforms/powernv/opal-rtc.c
+++ b/arch/powerpc/platforms/powernv/opal-rtc.c
@@ -48,10 +48,12 @@ unsigned long __init opal_get_boot_time(void)
 
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);
-		if (rc == OPAL_BUSY_EVENT)
+		if (rc == OPAL_BUSY_EVENT) {
+			mdelay(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-		else if (rc == OPAL_BUSY)
+		} else if (rc == OPAL_BUSY) {
 			mdelay(10);
+		}
 	}
 	if (rc != OPAL_SUCCESS)
 		return 0;
diff --git a/drivers/rtc/rtc-opal.c b/drivers/rtc/rtc-opal.c
index 304e891e35fc..cddcc4749d39 100644
--- a/drivers/rtc/rtc-opal.c
+++ b/drivers/rtc/rtc-opal.c
@@ -57,7 +57,7 @@ static void tm_to_opal(struct rtc_time *tm, u32 *y_m_d, u64 *h_m_s_ms)
 
 static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
 {
-	long rc = OPAL_BUSY;
+	s64 rc = OPAL_BUSY;
 	int retries = 10;
 	u32 y_m_d;
 	u64 h_m_s_ms;
@@ -66,13 +66,17 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
 
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);
-		if (rc == OPAL_BUSY_EVENT)
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-		else if (retries-- && (rc == OPAL_HARDWARE
-				       || rc == OPAL_INTERNAL_ERROR))
+		} else if (rc == OPAL_BUSY) {
 			msleep(10);
-		else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)
-			break;
+		} else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {
+			if (retries--) {
+				msleep(10); /* Wait 10ms before retry */
+				rc = OPAL_BUSY; /* go around again */
+			}
+		}
 	}
 
 	if (rc != OPAL_SUCCESS)
@@ -87,21 +91,26 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
 
 static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)
 {
-	long rc = OPAL_BUSY;
+	s64 rc = OPAL_BUSY;
 	int retries = 10;
 	u32 y_m_d = 0;
 	u64 h_m_s_ms = 0;
 
 	tm_to_opal(tm, &y_m_d, &h_m_s_ms);
+
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_rtc_write(y_m_d, h_m_s_ms);
-		if (rc == OPAL_BUSY_EVENT)
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-		else if (retries-- && (rc == OPAL_HARDWARE
-				       || rc == OPAL_INTERNAL_ERROR))
+		} else if (rc == OPAL_BUSY) {
 			msleep(10);
-		else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)
-			break;
+		} else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {
+			if (retries--) {
+				msleep(10); /* Wait 10ms before retry */
+				rc = OPAL_BUSY; /* go around again */
+			}
+		}
 	}
 
 	return rc == OPAL_SUCCESS ? 0 : -EIO;
-- 
2.16.3

* [PATCH 3/6] powerpc/powernv: OPAL platform standardise OPAL_BUSY loops
  2018-04-05  8:15 [PATCH 0/6] first step of standardising OPAL_BUSY handling Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 1/6] powerpc/powernv: define a standard delay for OPAL_BUSY type retry loops Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 2/6] powerpc/powernv: OPAL RTC driver standardise OPAL_BUSY loops Nicholas Piggin
@ 2018-04-05  8:15 ` Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 4/6] powerpc/powernv: OPAL NVRAM driver standardise OPAL_BUSY delays Nicholas Piggin
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith

Convert to using the standard delay poll/delay form.

The platform code:

- Used a busy-wait mdelay() even when called from a schedule()able
  context.
- Did not previously delay or sleep in the OPAL_BUSY_EVENT case.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/powernv/opal.c  |  8 +++++---
 arch/powerpc/platforms/powernv/setup.c | 16 ++++++++++------
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index c15182765ff5..fb13bcabe609 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -896,10 +896,12 @@ void opal_shutdown(void)
 	 */
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_sync_host_reboot();
-		if (rc == OPAL_BUSY)
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-		else
-			mdelay(10);
+		} else {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 
 	/* Unregister memory dump region */
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 092715b9674b..6ea79d906784 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -187,10 +187,12 @@ static void  __noreturn pnv_restart(char *cmd)
 
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_cec_reboot();
-		if (rc == OPAL_BUSY_EVENT)
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-		else
-			mdelay(10);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 	for (;;)
 		opal_poll_events(NULL);
@@ -204,10 +206,12 @@ static void __noreturn pnv_power_off(void)
 
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_cec_power_down(0);
-		if (rc == OPAL_BUSY_EVENT)
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-		else
-			mdelay(10);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 	for (;;)
 		opal_poll_events(NULL);
-- 
2.16.3

* [PATCH 4/6] powerpc/powernv: OPAL NVRAM driver standardise OPAL_BUSY delays
  2018-04-05  8:15 [PATCH 0/6] first step of standardising OPAL_BUSY handling Nicholas Piggin
                   ` (2 preceding siblings ...)
  2018-04-05  8:15 ` [PATCH 3/6] powerpc/powernv: OPAL platform " Nicholas Piggin
@ 2018-04-05  8:15 ` Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 5/6] powerpc/powernv: OPAL dump support " Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 6/6] powerpc/xive: " Nicholas Piggin
  5 siblings, 0 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith

Convert to using the standard delay poll/delay form.

The NVRAM driver:

- Did not previously delay or sleep in its OPAL_BUSY loop.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/powernv/opal-nvram.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/powernv/opal-nvram.c b/arch/powerpc/platforms/powernv/opal-nvram.c
index 9db4398ded5d..732732bddc28 100644
--- a/arch/powerpc/platforms/powernv/opal-nvram.c
+++ b/arch/powerpc/platforms/powernv/opal-nvram.c
@@ -11,6 +11,7 @@
 
 #define DEBUG
 
+#include <linux/delay.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/of.h>
@@ -56,8 +57,12 @@ static ssize_t opal_nvram_write(char *buf, size_t count, loff_t *index)
 
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_write_nvram(__pa(buf), count, off);
-		if (rc == OPAL_BUSY_EVENT)
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 	*index += count;
 	return count;
-- 
2.16.3

* [PATCH 5/6] powerpc/powernv: OPAL dump support standardise OPAL_BUSY delays
  2018-04-05  8:15 [PATCH 0/6] first step of standardising OPAL_BUSY handling Nicholas Piggin
                   ` (3 preceding siblings ...)
  2018-04-05  8:15 ` [PATCH 4/6] powerpc/powernv: OPAL NVRAM driver standardise OPAL_BUSY delays Nicholas Piggin
@ 2018-04-05  8:15 ` Nicholas Piggin
  2018-04-05  8:15 ` [PATCH 6/6] powerpc/xive: " Nicholas Piggin
  5 siblings, 0 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith

Convert to using the standard delay poll/delay form.

The dump code:

- Did not previously delay or sleep in the OPAL_BUSY case.
- Used a 20ms sleep in the OPAL_BUSY_EVENT case.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/powernv/opal-dump.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/powernv/opal-dump.c b/arch/powerpc/platforms/powernv/opal-dump.c
index 0dc8fa4e0af2..603c4ffdb45c 100644
--- a/arch/powerpc/platforms/powernv/opal-dump.c
+++ b/arch/powerpc/platforms/powernv/opal-dump.c
@@ -264,8 +264,10 @@ static int64_t dump_read_data(struct dump_obj *dump)
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_dump_read(dump->id, addr);
 		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
-			msleep(20);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
 		}
 	}
 
-- 
2.16.3

* [PATCH 6/6] powerpc/xive: standardise OPAL_BUSY delays
  2018-04-05  8:15 [PATCH 0/6] first step of standardising OPAL_BUSY handling Nicholas Piggin
                   ` (4 preceding siblings ...)
  2018-04-05  8:15 ` [PATCH 5/6] powerpc/powernv: OPAL dump support " Nicholas Piggin
@ 2018-04-05  8:15 ` Nicholas Piggin
  5 siblings, 0 replies; 7+ messages in thread
From: Nicholas Piggin @ 2018-04-05  8:15 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Stewart Smith

Convert to using the standard delay poll/delay form.

The XIVE driver:

- Did not previously retry in the OPAL_BUSY_EVENT case.
- Used a 1ms sleep in the OPAL_BUSY case.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/sysdev/xive/native.c | 193 ++++++++++++++++++++++----------------
 1 file changed, 111 insertions(+), 82 deletions(-)

diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
index d22aeb0b69e1..682f79dabb4a 100644
--- a/arch/powerpc/sysdev/xive/native.c
+++ b/arch/powerpc/sysdev/xive/native.c
@@ -103,14 +103,18 @@ EXPORT_SYMBOL_GPL(xive_native_populate_irq_data);
 
 int xive_native_configure_irq(u32 hw_irq, u32 target, u8 prio, u32 sw_irq)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_irq_config(hw_irq, target, prio, sw_irq);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
+
 	return rc == 0 ? 0 : -ENXIO;
 }
 EXPORT_SYMBOL_GPL(xive_native_configure_irq);
@@ -159,12 +163,17 @@ int xive_native_configure_queue(u32 vp_id, struct xive_q *q, u8 prio,
 	}
 
 	/* Configure and enable the queue in HW */
-	for (;;) {
+	rc = OPAL_BUSY;
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_queue_info(vp_id, prio, qpage_phys, order, flags);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
+
 	if (rc) {
 		pr_err("Error %lld setting queue for prio %d\n", rc, prio);
 		rc = -EIO;
@@ -183,14 +192,17 @@ EXPORT_SYMBOL_GPL(xive_native_configure_queue);
 
 static void __xive_native_disable_queue(u32 vp_id, struct xive_q *q, u8 prio)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 
 	/* Disable the queue in HW */
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_queue_info(vp_id, prio, 0, 0, 0);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 	if (rc)
 		pr_err("Error %lld disabling queue for prio %d\n", rc, prio);
@@ -240,7 +252,7 @@ static int xive_native_get_ipi(unsigned int cpu, struct xive_cpu *xc)
 {
 	struct device_node *np;
 	unsigned int chip_id;
-	s64 irq;
+	s64 rc = OPAL_BUSY;
 
 	/* Find the chip ID */
 	np = of_get_cpu_node(cpu, NULL);
@@ -250,33 +262,39 @@ static int xive_native_get_ipi(unsigned int cpu, struct xive_cpu *xc)
 	}
 
 	/* Allocate an IPI and populate info about it */
-	for (;;) {
-		irq = opal_xive_allocate_irq(chip_id);
-		if (irq == OPAL_BUSY) {
-			msleep(1);
-			continue;
-		}
-		if (irq < 0) {
-			pr_err("Failed to allocate IPI on CPU %d\n", cpu);
-			return -ENXIO;
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
+		rc = opal_xive_allocate_irq(chip_id);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
 		}
-		xc->hw_ipi = irq;
-		break;
 	}
+	if (rc < 0) {
+		pr_err("Failed to allocate IPI on CPU %d\n", cpu);
+		return -ENXIO;
+	}
+	xc->hw_ipi = rc;
+
 	return 0;
 }
 #endif /* CONFIG_SMP */
 
 u32 xive_native_alloc_irq(void)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_allocate_irq(OPAL_XIVE_ANY_CHIP);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
+
 	if (rc < 0)
 		return 0;
 	return rc;
@@ -285,11 +303,16 @@ EXPORT_SYMBOL_GPL(xive_native_alloc_irq);
 
 void xive_native_free_irq(u32 irq)
 {
-	for (;;) {
-		s64 rc = opal_xive_free_irq(irq);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+	s64 rc = OPAL_BUSY;
+
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
+		rc = opal_xive_free_irq(irq);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(xive_native_free_irq);
@@ -297,20 +320,11 @@ EXPORT_SYMBOL_GPL(xive_native_free_irq);
 #ifdef CONFIG_SMP
 static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc)
 {
-	s64 rc;
-
 	/* Free the IPI */
 	if (!xc->hw_ipi)
 		return;
-	for (;;) {
-		rc = opal_xive_free_irq(xc->hw_ipi);
-		if (rc == OPAL_BUSY) {
-			msleep(1);
-			continue;
-		}
-		xc->hw_ipi = 0;
-		break;
-	}
+	xive_native_free_irq(xc->hw_ipi);
+	xc->hw_ipi = 0;
 }
 #endif /* CONFIG_SMP */
 
@@ -381,7 +395,7 @@ static void xive_native_eoi(u32 hw_irq)
 
 static void xive_native_setup_cpu(unsigned int cpu, struct xive_cpu *xc)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 	u32 vp;
 	__be64 vp_cam_be;
 	u64 vp_cam;
@@ -392,12 +406,16 @@ static void xive_native_setup_cpu(unsigned int cpu, struct xive_cpu *xc)
 	/* Enable the pool VP */
 	vp = xive_pool_vps + cpu;
 	pr_debug("CPU %d setting up pool VP 0x%x\n", cpu, vp);
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_vp_info(vp, OPAL_XIVE_VP_ENABLED, 0);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
+
 	if (rc) {
 		pr_err("Failed to enable pool VP on CPU %d\n", cpu);
 		return;
@@ -425,7 +443,7 @@ static void xive_native_setup_cpu(unsigned int cpu, struct xive_cpu *xc)
 
 static void xive_native_teardown_cpu(unsigned int cpu, struct xive_cpu *xc)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 	u32 vp;
 
 	if (xive_pool_vps == XIVE_INVALID_VP)
@@ -436,11 +454,14 @@ static void xive_native_teardown_cpu(unsigned int cpu, struct xive_cpu *xc)
 
 	/* Disable it */
 	vp = xive_pool_vps + cpu;
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_vp_info(vp, 0, 0);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 }
 
@@ -627,7 +648,7 @@ static bool xive_native_provision_pages(void)
 
 u32 xive_native_alloc_vp_block(u32 max_vcpus)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 	u32 order;
 
 	order = fls(max_vcpus) - 1;
@@ -637,25 +658,25 @@ u32 xive_native_alloc_vp_block(u32 max_vcpus)
 	pr_debug("VP block alloc, for max VCPUs %d use order %d\n",
 		 max_vcpus, order);
 
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_alloc_vp_block(order);
-		switch (rc) {
-		case OPAL_BUSY:
-			msleep(1);
-			break;
-		case OPAL_XIVE_PROVISIONING:
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		} else if (rc == OPAL_XIVE_PROVISIONING) {
 			if (!xive_native_provision_pages())
 				return XIVE_INVALID_VP;
-			break;
-		default:
-			if (rc < 0) {
-				pr_err("OPAL failed to allocate VCPUs order %d, err %lld\n",
-				       order, rc);
-				return XIVE_INVALID_VP;
-			}
-			return rc;
+			rc = OPAL_BUSY; /* go around again */
 		}
 	}
+	if (rc < 0) {
+		pr_err("OPAL failed to allocate VCPUs order %d, err %lld\n",
+		       order, rc);
+		return XIVE_INVALID_VP;
+	}
+	return rc;
 }
 EXPORT_SYMBOL_GPL(xive_native_alloc_vp_block);
 
@@ -674,30 +695,38 @@ EXPORT_SYMBOL_GPL(xive_native_free_vp_block);
 
 int xive_native_enable_vp(u32 vp_id, bool single_escalation)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 	u64 flags = OPAL_XIVE_VP_ENABLED;
 
 	if (single_escalation)
 		flags |= OPAL_XIVE_VP_SINGLE_ESCALATION;
-	for (;;) {
+
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_vp_info(vp_id, flags, 0);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
+
 	return rc ? -EIO : 0;
 }
 EXPORT_SYMBOL_GPL(xive_native_enable_vp);
 
 int xive_native_disable_vp(u32 vp_id)
 {
-	s64 rc;
+	s64 rc = OPAL_BUSY;
 
-	for (;;) {
+	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_xive_set_vp_info(vp_id, 0, 0);
-		if (rc != OPAL_BUSY)
-			break;
-		msleep(1);
+		if (rc == OPAL_BUSY_EVENT) {
+			msleep(OPAL_BUSY_DELAY_MS);
+			opal_poll_events(NULL);
+		} else if (rc == OPAL_BUSY) {
+			msleep(OPAL_BUSY_DELAY_MS);
+		}
 	}
 	return rc ? -EIO : 0;
 }
-- 
2.16.3
