All of lore.kernel.org
* Re: [PATCH 02/13] ACPI/IPMI: Fix atomic context requirement of ipmi_msg_handler()
@ 2013-09-12 14:44 Tony Camuso
  2013-09-12 15:12 ` Rafael J. Wysocki
  0 siblings, 1 reply; 6+ messages in thread
From: Tony Camuso @ 2013-09-12 14:44 UTC (permalink / raw)
  To: lenb; +Cc: linux-acpi, lv.zheng, Don Zickus

Hi, Len.

On Jul 23, Lv Zheng submitted this patch (02/13)
https://lkml.org/lkml/2013/7/23/96

... and early this month, not having seen Lv's patch, I
submitted a virtually identical one here:
https://lkml.org/lkml/2013/9/2/305

We've been seeing lockdep splats that are fixed by this
patch, so I'm wondering if Lv's patch will be included
in kernel v3.12.


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH 02/13] ACPI/IPMI: Fix atomic context requirement of ipmi_msg_handler()
  2013-09-12 14:44 [PATCH 02/13] ACPI/IPMI: Fix atomic context requirement of ipmi_msg_handler() Tony Camuso
@ 2013-09-12 15:12 ` Rafael J. Wysocki
  2013-09-12 17:09   ` Tony Camuso
  2013-09-13  1:08   ` Zheng, Lv
  0 siblings, 2 replies; 6+ messages in thread
From: Rafael J. Wysocki @ 2013-09-12 15:12 UTC (permalink / raw)
  To: Tony Camuso; +Cc: lenb, linux-acpi, lv.zheng, Don Zickus

On Thursday, September 12, 2013 10:44:02 AM Tony Camuso wrote:
> Hi, Len.
> 
> On Jul 23, Lv Zheng submitted this patch (02/13)
> https://lkml.org/lkml/2013/7/23/96
> 
> ... and early this month, not having seen Lv's patch, I
> submitted a virtually identical one here:
> https://lkml.org/lkml/2013/9/2/305
> 
> We've been seeing lockdep splats that are fixed by this
> patch, so I'm wondering if Lv's patch will be included
> in kernel v3.12.

I actually have a plan to apply your patch, because it doesn't depend on the
additional changes made by Lv that don't belong in 3.12.

Thanks,
Rafael



* Re: [PATCH 02/13] ACPI/IPMI: Fix atomic context requirement of ipmi_msg_handler()
  2013-09-12 15:12 ` Rafael J. Wysocki
@ 2013-09-12 17:09   ` Tony Camuso
  2013-09-13  1:08   ` Zheng, Lv
  1 sibling, 0 replies; 6+ messages in thread
From: Tony Camuso @ 2013-09-12 17:09 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: lenb, linux-acpi, lv.zheng, Don Zickus

On 09/12/2013 11:12 AM, Rafael J. Wysocki wrote:
> On Thursday, September 12, 2013 10:44:02 AM Tony Camuso wrote:
>> Hi, Len.
>>
>> On Jul 23, Lv Zheng submitted this patch (02/13)
>> https://lkml.org/lkml/2013/7/23/96
>>
>> ... and early this month, not having seen Lv's patch, I
>> submitted a virtually identical one here:
>> https://lkml.org/lkml/2013/9/2/305
>>
>> We've been seeing lockdep splats that are fixed by this
>> patch, so I'm wondering if Lv's patch will be included
>> in kernel v3.12.
> 
> I actually have a plan to apply your patch, because it doesn't depend on the
> additional changes made by Lv that don't belong in 3.12.
> 
> Thanks,
> Rafael
> 

Yes, that would be great.

Regards,
Tony



* RE: [PATCH 02/13] ACPI/IPMI: Fix atomic context requirement of ipmi_msg_handler()
  2013-09-12 15:12 ` Rafael J. Wysocki
  2013-09-12 17:09   ` Tony Camuso
@ 2013-09-13  1:08   ` Zheng, Lv
  1 sibling, 0 replies; 6+ messages in thread
From: Zheng, Lv @ 2013-09-13  1:08 UTC (permalink / raw)
  To: Wysocki, Rafael J, Tony Camuso
  Cc: Brown, Len, Huang, Ying, Zhang, Rui, linux-acpi

It's OK, though I think my patch doesn't depend on any other patch in the series.
I'll reorder the series, making this patch the first one, and rebase the whole series, so it's up to you which version to accept.

But if you could wait a little, I can send out an upgraded bug-fix part today; after our internal discussion, I now know exactly what you want.
I think the other bug fixes should also be included in 3.12.
I hope you can accept my series, as I have an environment in which I can run complete functional tests of the ACPI IPMI functionality.
The upgraded version was delayed only for internal reasons (process, other assignments).

Thanks and best regards
-Lv


> From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> Sent: Thursday, September 12, 2013 11:13 PM
> 
> On Thursday, September 12, 2013 10:44:02 AM Tony Camuso wrote:
> > Hi, Len.
> >
> > On Jul 23, Lv Zheng submitted this patch (02/13)
> > https://lkml.org/lkml/2013/7/23/96
> >
> > ... and early this month, not having seen Lv's patch, I
> > submitted a virtually identical one here:
> > https://lkml.org/lkml/2013/9/2/305
> >
> > We've been seeing lockdep splats that are fixed by this
> > patch, so I'm wondering if Lv's patch will be included
> > in kernel v3.12.
> 
> I actually have a plan to apply your patch, because it doesn't depend on the
> additional changes made by Lv that don't belong in 3.12.
> 
> Thanks,
> Rafael



* [PATCH 02/13] ACPI/IPMI: Fix atomic context requirement of ipmi_msg_handler()
  2013-07-23  8:08 ` [PATCH 00/13] ACPI/IPMI: Fix several issues in the current codes Lv Zheng
@ 2013-07-23  8:09     ` Lv Zheng
  0 siblings, 0 replies; 6+ messages in thread
From: Lv Zheng @ 2013-07-23  8:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, Len Brown, Corey Minyard, Zhao Yakui
  Cc: Lv Zheng, linux-kernel, stable, linux-acpi, openipmi-developer

This patch is a quick fix for the issue, indicated by the test results
below, that ipmi_msg_handler() is invoked in atomic context.

BUG: scheduling while atomic: kipmi0/18933/0x10000100
Modules linked in: ipmi_si acpi_ipmi ...
CPU: 3 PID: 18933 Comm: kipmi0 Tainted: G       AW    3.10.0-rc7+ #2
Hardware name: QCI QSSC-S4R/QSSC-S4R, BIOS QSSC-S4R.QCI.01.00.0027.070120100606 07/01/2010
 ffff8838245eea00 ffff88103fc63c98 ffffffff814c4a1e ffff88103fc63ca8
 ffffffff814bfbab ffff88103fc63d28 ffffffff814c73e0 ffff88103933cbd4
 0000000000000096 ffff88103fc63ce8 ffff88102f618000 ffff881035c01fd8
Call Trace:
 <IRQ>  [<ffffffff814c4a1e>] dump_stack+0x19/0x1b
 [<ffffffff814bfbab>] __schedule_bug+0x46/0x54
 [<ffffffff814c73e0>] __schedule+0x83/0x59c
 [<ffffffff81058853>] __cond_resched+0x22/0x2d
 [<ffffffff814c794b>] _cond_resched+0x14/0x1d
 [<ffffffff814c6d82>] mutex_lock+0x11/0x32
 [<ffffffff8101e1e9>] ? __default_send_IPI_dest_field.constprop.0+0x53/0x58
 [<ffffffffa09e3f9c>] ipmi_msg_handler+0x23/0x166 [ipmi_si]
 [<ffffffff812bf6e4>] deliver_response+0x55/0x5a
 [<ffffffff812c0fd4>] handle_new_recv_msgs+0xb67/0xc65
 [<ffffffff81007ad1>] ? read_tsc+0x9/0x19
 [<ffffffff814c8620>] ? _raw_spin_lock_irq+0xa/0xc
 [<ffffffffa09e1128>] ipmi_thread+0x5c/0x146 [ipmi_si]
 ...

Known issues:
- Replacing tx_msg_lock with a spinlock is not performance friendly.
  The current solution works, but performance could be better, since it
  is preferable to make atomic context run as fast as possible.  Given
  that ACPI creates only a few IPMI messages, the performance of the
  current solution is probably acceptable.  It could be improved by
  linking each ipmi_recv_msg into an RX message queue and processing it
  in another context.

Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Reviewed-by: Huang Ying <ying.huang@intel.com>
---
 drivers/acpi/acpi_ipmi.c |   24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/acpi/acpi_ipmi.c b/drivers/acpi/acpi_ipmi.c
index 28e2b4c..b37c189 100644
--- a/drivers/acpi/acpi_ipmi.c
+++ b/drivers/acpi/acpi_ipmi.c
@@ -39,6 +39,7 @@
 #include <linux/ipmi.h>
 #include <linux/device.h>
 #include <linux/pnp.h>
+#include <linux/spinlock.h>
 
 MODULE_AUTHOR("Zhao Yakui");
 MODULE_DESCRIPTION("ACPI IPMI Opregion driver");
@@ -58,7 +59,7 @@ struct acpi_ipmi_device {
 	struct list_head head;
 	/* the IPMI request message list */
 	struct list_head tx_msg_list;
-	struct mutex	tx_msg_lock;
+	spinlock_t	tx_msg_lock;
 	acpi_handle handle;
 	struct pnp_dev *pnp_dev;
 	ipmi_user_t	user_interface;
@@ -146,6 +147,7 @@ static int acpi_format_ipmi_request(struct acpi_ipmi_msg *tx_msg,
 	struct kernel_ipmi_msg *msg;
 	struct acpi_ipmi_buffer *buffer;
 	struct acpi_ipmi_device *device;
+	unsigned long flags;
 
 	msg = &tx_msg->tx_message;
 	/*
@@ -182,10 +184,10 @@ static int acpi_format_ipmi_request(struct acpi_ipmi_msg *tx_msg,
 
 	/* Get the msgid */
 	device = tx_msg->device;
-	mutex_lock(&device->tx_msg_lock);
+	spin_lock_irqsave(&device->tx_msg_lock, flags);
 	device->curr_msgid++;
 	tx_msg->tx_msgid = device->curr_msgid;
-	mutex_unlock(&device->tx_msg_lock);
+	spin_unlock_irqrestore(&device->tx_msg_lock, flags);
 
 	return 0;
 }
@@ -250,6 +252,7 @@ static void ipmi_msg_handler(struct ipmi_recv_msg *msg, void *user_msg_data)
 	int msg_found = 0;
 	struct acpi_ipmi_msg *tx_msg;
 	struct pnp_dev *pnp_dev = ipmi_device->pnp_dev;
+	unsigned long flags;
 
 	if (msg->user != ipmi_device->user_interface) {
 		dev_warn(&pnp_dev->dev,
@@ -258,14 +261,14 @@ static void ipmi_msg_handler(struct ipmi_recv_msg *msg, void *user_msg_data)
 		goto out_msg;
 	}
 
-	mutex_lock(&ipmi_device->tx_msg_lock);
+	spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
 	list_for_each_entry(tx_msg, &ipmi_device->tx_msg_list, head) {
 		if (msg->msgid == tx_msg->tx_msgid) {
 			msg_found = 1;
 			break;
 		}
 	}
-	mutex_unlock(&ipmi_device->tx_msg_lock);
+	spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
 
 	if (!msg_found) {
 		dev_warn(&pnp_dev->dev,
@@ -394,6 +397,7 @@ acpi_ipmi_space_handler(u32 function, acpi_physical_address address,
 	struct acpi_ipmi_device *ipmi_device = handler_context;
 	int err, rem_time;
 	acpi_status status;
+	unsigned long flags;
 
 	/*
 	 * IPMI opregion message.
@@ -416,9 +420,9 @@ acpi_ipmi_space_handler(u32 function, acpi_physical_address address,
 		goto out_msg;
 	}
 
-	mutex_lock(&ipmi_device->tx_msg_lock);
+	spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
 	list_add_tail(&tx_msg->head, &ipmi_device->tx_msg_list);
-	mutex_unlock(&ipmi_device->tx_msg_lock);
+	spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
 	err = ipmi_request_settime(ipmi_device->user_interface,
 				   &tx_msg->addr,
 				   tx_msg->tx_msgid,
@@ -434,9 +438,9 @@ acpi_ipmi_space_handler(u32 function, acpi_physical_address address,
 	status = AE_OK;
 
 out_list:
-	mutex_lock(&ipmi_device->tx_msg_lock);
+	spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
 	list_del(&tx_msg->head);
-	mutex_unlock(&ipmi_device->tx_msg_lock);
+	spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
 out_msg:
 	kfree(tx_msg);
 	return status;
@@ -479,7 +483,7 @@ static void acpi_add_ipmi_device(struct acpi_ipmi_device *ipmi_device)
 
 	INIT_LIST_HEAD(&ipmi_device->head);
 
-	mutex_init(&ipmi_device->tx_msg_lock);
+	spin_lock_init(&ipmi_device->tx_msg_lock);
 	INIT_LIST_HEAD(&ipmi_device->tx_msg_list);
 	ipmi_install_space_handler(ipmi_device);
 
-- 
1.7.10



