* [PATCH] Drivers: hv: vmbus: fix the race when querying & updating the percpu list
From: Dexuan Cui @ 2016-05-17 5:13 UTC (permalink / raw)
To: gregkh, linux-kernel, driverdev-devel, olaf, apw, jasowang, kys,
vkuznets
Cc: haiyangz
There is a rare race when we remove an entry from the global list
hv_context.percpu_list[cpu] in hv_process_channel_removal() ->
percpu_channel_deq() -> list_del(): at this time, if vmbus_on_event() ->
process_chn_event() -> pcpu_relid2channel() is trying to query the list,
we can get the general protection fault:
general protection fault: 0000 [#1] SMP
...
RIP: 0010:[<ffffffff81461b6b>] [<ffffffff81461b6b>] vmbus_on_event+0xc4/0x149
The same race exists in the code path vmbus_process_offer() ->
percpu_channel_enq().
We can resolve the issue by disabling the tasklet when updating the list.
Reported-by: Rolf Neugebauer <rolf.neugebauer@docker.com>
Signed-off-by: Dexuan Cui <decui@microsoft.com>
---
drivers/hv/channel.c | 3 +++
drivers/hv/channel_mgmt.c | 20 +++++++++-----------
include/linux/hyperv.h | 3 +++
3 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 56dd261..7811cf9 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -546,8 +546,11 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
put_cpu();
smp_call_function_single(channel->target_cpu, reset_channel_cb,
channel, true);
+ smp_call_function_single(channel->target_cpu,
+ percpu_channel_deq, channel, true);
} else {
reset_channel_cb(channel);
+ percpu_channel_deq(channel);
put_cpu();
}
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 38b682ba..c4b5c07 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -21,6 +21,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
+#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/mm.h>
@@ -277,7 +278,7 @@ static void free_channel(struct vmbus_channel *channel)
kfree(channel);
}
-static void percpu_channel_enq(void *arg)
+void percpu_channel_enq(void *arg)
{
struct vmbus_channel *channel = arg;
int cpu = smp_processor_id();
@@ -285,7 +286,7 @@ static void percpu_channel_enq(void *arg)
list_add_tail(&channel->percpu_list, &hv_context.percpu_list[cpu]);
}
-static void percpu_channel_deq(void *arg)
+void percpu_channel_deq(void *arg)
{
struct vmbus_channel *channel = arg;
@@ -313,15 +314,6 @@ void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid)
BUG_ON(!channel->rescind);
BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
- if (channel->target_cpu != get_cpu()) {
- put_cpu();
- smp_call_function_single(channel->target_cpu,
- percpu_channel_deq, channel, true);
- } else {
- percpu_channel_deq(channel);
- put_cpu();
- }
-
if (channel->primary_channel == NULL) {
list_del(&channel->listentry);
@@ -363,6 +355,7 @@ void vmbus_free_channels(void)
*/
static void vmbus_process_offer(struct vmbus_channel *newchannel)
{
+ struct tasklet_struct *tasklet;
struct vmbus_channel *channel;
bool fnew = true;
unsigned long flags;
@@ -409,6 +402,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
init_vp_index(newchannel, dev_type);
+ tasklet = hv_context.event_dpc[newchannel->target_cpu];
+ tasklet_disable(tasklet);
if (newchannel->target_cpu != get_cpu()) {
put_cpu();
smp_call_function_single(newchannel->target_cpu,
@@ -418,6 +413,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
percpu_channel_enq(newchannel);
put_cpu();
}
+ tasklet_enable(tasklet);
/*
* This state is used to indicate a successful open
@@ -469,6 +465,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
list_del(&newchannel->listentry);
mutex_unlock(&vmbus_connection.channel_mutex);
+ tasklet_disable(tasklet);
if (newchannel->target_cpu != get_cpu()) {
put_cpu();
smp_call_function_single(newchannel->target_cpu,
@@ -477,6 +474,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
percpu_channel_deq(newchannel);
put_cpu();
}
+ tasklet_enable(tasklet);
err_free_chan:
free_channel(newchannel);
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 7be7237..95aea09 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1328,6 +1328,9 @@ extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *,
struct icmsg_negotiate *, u8 *, int,
int);
+void percpu_channel_enq(void *arg);
+void percpu_channel_deq(void *arg);
+
void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid);
/*
* Re: [PATCH] Drivers: hv: vmbus: fix the race when querying & updating the percpu list
From: Vitaly Kuznetsov @ 2016-05-17 8:14 UTC (permalink / raw)
To: Dexuan Cui
Cc: gregkh, linux-kernel, driverdev-devel, olaf, apw, jasowang, kys,
haiyangz
Dexuan Cui <decui@microsoft.com> writes:
> There is a rare race when we remove an entry from the global list
> hv_context.percpu_list[cpu] in hv_process_channel_removal() ->
> percpu_channel_deq() -> list_del(): at this time, if vmbus_on_event() ->
> process_chn_event() -> pcpu_relid2channel() is trying to query the list,
> we can get the general protection fault:
>
> general protection fault: 0000 [#1] SMP
> ...
> RIP: 0010:[<ffffffff81461b6b>] [<ffffffff81461b6b>] vmbus_on_event+0xc4/0x149
>
> The same race exists in the code path vmbus_process_offer() ->
> percpu_channel_enq().
>
> We can resolve the issue by disabling the tasklet when updating the list.
>
> Reported-by: Rolf Neugebauer <rolf.neugebauer@docker.com>
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> ---
> drivers/hv/channel.c | 3 +++
> drivers/hv/channel_mgmt.c | 20 +++++++++-----------
> include/linux/hyperv.h | 3 +++
> 3 files changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
> index 56dd261..7811cf9 100644
> --- a/drivers/hv/channel.c
> +++ b/drivers/hv/channel.c
> @@ -546,8 +546,11 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
> put_cpu();
> smp_call_function_single(channel->target_cpu, reset_channel_cb,
> channel, true);
> + smp_call_function_single(channel->target_cpu,
> + percpu_channel_deq, channel, true);
> } else {
> reset_channel_cb(channel);
> + percpu_channel_deq(channel);
> put_cpu();
> }
>
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index 38b682ba..c4b5c07 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -21,6 +21,7 @@
> #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>
> #include <linux/kernel.h>
> +#include <linux/interrupt.h>
> #include <linux/sched.h>
> #include <linux/wait.h>
> #include <linux/mm.h>
> @@ -277,7 +278,7 @@ static void free_channel(struct vmbus_channel *channel)
> kfree(channel);
> }
>
> -static void percpu_channel_enq(void *arg)
> +void percpu_channel_enq(void *arg)
> {
> struct vmbus_channel *channel = arg;
> int cpu = smp_processor_id();
> @@ -285,7 +286,7 @@ static void percpu_channel_enq(void *arg)
> list_add_tail(&channel->percpu_list, &hv_context.percpu_list[cpu]);
> }
>
> -static void percpu_channel_deq(void *arg)
> +void percpu_channel_deq(void *arg)
> {
> struct vmbus_channel *channel = arg;
>
> @@ -313,15 +314,6 @@ void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid)
> BUG_ON(!channel->rescind);
> BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
>
> - if (channel->target_cpu != get_cpu()) {
> - put_cpu();
> - smp_call_function_single(channel->target_cpu,
> - percpu_channel_deq, channel, true);
> - } else {
> - percpu_channel_deq(channel);
> - put_cpu();
> - }
> -
> if (channel->primary_channel == NULL) {
> list_del(&channel->listentry);
>
> @@ -363,6 +355,7 @@ void vmbus_free_channels(void)
> */
> static void vmbus_process_offer(struct vmbus_channel *newchannel)
> {
> + struct tasklet_struct *tasklet;
> struct vmbus_channel *channel;
> bool fnew = true;
> unsigned long flags;
> @@ -409,6 +402,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
>
> init_vp_index(newchannel, dev_type);
>
> + tasklet = hv_context.event_dpc[newchannel->target_cpu];
> + tasklet_disable(tasklet);
> if (newchannel->target_cpu != get_cpu()) {
> put_cpu();
> smp_call_function_single(newchannel->target_cpu,
> @@ -418,6 +413,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
> percpu_channel_enq(newchannel);
> put_cpu();
> }
> + tasklet_enable(tasklet);
Do we need to do tasklet_schedule() to make sure there are no events
pending? This is probably not a big issue as some other event will
trigger scheduling but in some corner cases it may bite. Same question
applies to the code below and to vmbus_close_internal().
>
> /*
> * This state is used to indicate a successful open
> @@ -469,6 +465,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
> list_del(&newchannel->listentry);
> mutex_unlock(&vmbus_connection.channel_mutex);
>
> + tasklet_disable(tasklet);
> if (newchannel->target_cpu != get_cpu()) {
> put_cpu();
> smp_call_function_single(newchannel->target_cpu,
> @@ -477,6 +474,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
> percpu_channel_deq(newchannel);
> put_cpu();
> }
> + tasklet_enable(tasklet);
>
> err_free_chan:
> free_channel(newchannel);
> diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
> index 7be7237..95aea09 100644
> --- a/include/linux/hyperv.h
> +++ b/include/linux/hyperv.h
> @@ -1328,6 +1328,9 @@ extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *,
> struct icmsg_negotiate *, u8 *, int,
> int);
>
> +void percpu_channel_enq(void *arg);
> +void percpu_channel_deq(void *arg);
> +
> void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid);
>
> /*
--
Vitaly
* RE: [PATCH] Drivers: hv: vmbus: fix the race when querying & updating the percpu list
From: Dexuan Cui @ 2016-05-17 8:35 UTC (permalink / raw)
To: Vitaly Kuznetsov
Cc: gregkh, linux-kernel, driverdev-devel, olaf, apw, jasowang,
KY Srinivasan, Haiyang Zhang
> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> Sent: Tuesday, May 17, 2016 16:15
> To: Dexuan Cui <decui@microsoft.com>
> Cc: gregkh@linuxfoundation.org; linux-kernel@vger.kernel.org; driverdev-
> devel@linuxdriverproject.org; olaf@aepfle.de; apw@canonical.com;
> jasowang@redhat.com; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang
> <haiyangz@microsoft.com>
> Subject: Re: [PATCH] Drivers: hv: vmbus: fix the race when querying & updating
> the percpu list
>
> Dexuan Cui <decui@microsoft.com> writes:
>
> > There is a rare race when we remove an entry from the global list
> > hv_context.percpu_list[cpu] in hv_process_channel_removal() ->
> > percpu_channel_deq() -> list_del(): at this time, if vmbus_on_event() ->
> > process_chn_event() -> pcpu_relid2channel() is trying to query the list,
> > we can get the general protection fault:
> >...
> >
> > We can resolve the issue by disabling the tasklet when updating the list.
> >
> > @@ -418,6 +413,7 @@ static void vmbus_process_offer(struct
> vmbus_channel *newchannel)
> > percpu_channel_enq(newchannel);
> > put_cpu();
> > }
> > + tasklet_enable(tasklet);
>
> Do we need to do tasklet_schedule() to make sure there are no events
> pending? This is probably not a big issue as some other event will
> trigger scheduling but in some corner cases it may bite. Same question
> applies to the code below and to vmbus_close_internal().
Hi Vitaly,
Thanks for spotting this!
I think you're correct.
I'll add tasklet_schedule() before the tasklet_enable() in the 2 places.
Thanks,
-- Dexuan
* RE: [PATCH] Drivers: hv: vmbus: fix the race when querying & updating the percpu list
From: Dexuan Cui @ 2016-05-17 8:37 UTC (permalink / raw)
To: Dexuan Cui, Vitaly Kuznetsov
Cc: olaf, gregkh, jasowang, driverdev-devel, linux-kernel, apw,
Haiyang Zhang
> From: devel [mailto:driverdev-devel-bounces@linuxdriverproject.org] On Behalf
> Of Dexuan Cui
> >
> > Do we need to do tasklet_schedule() to make sure there are no events
> > pending? This is probably not a big issue as some other event will
> > trigger scheduling but in some corner cases it may bite. Same question
> > applies to the code below and to vmbus_close_internal().
>
> Hi Vitaly,
> Thanks for spotting this!
> I think you're correct.
> I'll add tasklet_schedule() before the tasklet_enable() in the 2 places.
Sorry for the typo -- I meant "after" rather than "before".
Thanks
-- Dexuan