* [PATCH] Drivers: hv: vmbus: Add cpu read lock
@ 2022-06-09  5:27 Saurabh Sengar
  2022-06-09 13:51 ` Michael Kelley (LINUX)
  0 siblings, 1 reply; 4+ messages in thread
From: Saurabh Sengar @ 2022-06-09  5:27 UTC (permalink / raw)
  To: kys, haiyangz, sthemmin, wei.liu, decui, linux-hyperv,
	linux-kernel, ssengar, mikelley

Add cpus_read_lock() to prevent CPUs from going offline between the
query and the actual use of the cpumask. cpumask_of_node() is queried
first and the resulting mask is used later; if any CPU goes offline
between these two points, it can cause a potentially infinite loop of
retries.

Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
---
 drivers/hv/channel_mgmt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 85a2142..6a88b7e 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -749,6 +749,9 @@ static void init_vp_index(struct vmbus_channel *channel)
 		return;
 	}
 
+	/* No CPUs should come up or down during this. */
+	cpus_read_lock();
+
 	for (i = 1; i <= ncpu + 1; i++) {
 		while (true) {
 			numa_node = next_numa_node_id++;
@@ -781,6 +784,7 @@ static void init_vp_index(struct vmbus_channel *channel)
 			break;
 	}
 
+	cpus_read_unlock();
 	channel->target_cpu = target_cpu;
 
 	free_cpumask_var(available_mask);
-- 
1.8.3.1



* RE: [PATCH] Drivers: hv: vmbus: Add cpu read lock
  2022-06-09  5:27 [PATCH] Drivers: hv: vmbus: Add cpu read lock Saurabh Sengar
@ 2022-06-09 13:51 ` Michael Kelley (LINUX)
  2022-06-09 13:59   ` Haiyang Zhang
  0 siblings, 1 reply; 4+ messages in thread
From: Michael Kelley (LINUX) @ 2022-06-09 13:51 UTC (permalink / raw)
  To: Saurabh Sengar, KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
	wei.liu, Dexuan Cui, linux-hyperv, linux-kernel,
	Saurabh Singh Sengar

From: Saurabh Sengar <ssengar@linux.microsoft.com> Sent: Wednesday, June 8, 2022 10:27 PM
> 
> Add cpus_read_lock() to prevent CPUs from going offline between the
> query and the actual use of the cpumask. cpumask_of_node() is queried
> first and the resulting mask is used later; if any CPU goes offline
> between these two points, it can cause a potentially infinite loop of
> retries.
> 
> Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
> ---
>  drivers/hv/channel_mgmt.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index 85a2142..6a88b7e 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -749,6 +749,9 @@ static void init_vp_index(struct vmbus_channel *channel)
>  		return;
>  	}
> 
> +	/* No CPUs should come up or down during this. */
> +	cpus_read_lock();
> +
>  	for (i = 1; i <= ncpu + 1; i++) {
>  		while (true) {
>  			numa_node = next_numa_node_id++;
> @@ -781,6 +784,7 @@ static void init_vp_index(struct vmbus_channel *channel)
>  			break;
>  	}
> 
> +	cpus_read_unlock();
>  	channel->target_cpu = target_cpu;
> 
>  	free_cpumask_var(available_mask);
> --
> 1.8.3.1

This patch was motivated because I suggested a potential issue here during
a separate conversation with Saurabh, but it turns out I was wrong. :-(

init_vp_index() is only called from vmbus_process_offer(), and the
cpus_read_lock() is already held when init_vp_index() is called.  So the
issue doesn't exist, and this patch isn't needed.

However, looking at vmbus_process_offer(), there appears to be a
different problem in that cpus_read_unlock() is not called when taking
the error return because the sub_channel_index is zero.

Michael




* RE: [PATCH] Drivers: hv: vmbus: Add cpu read lock
  2022-06-09 13:51 ` Michael Kelley (LINUX)
@ 2022-06-09 13:59   ` Haiyang Zhang
  2022-06-09 14:15     ` Saurabh Singh Sengar
  0 siblings, 1 reply; 4+ messages in thread
From: Haiyang Zhang @ 2022-06-09 13:59 UTC (permalink / raw)
  To: Michael Kelley (LINUX),
	Saurabh Sengar, KY Srinivasan, Stephen Hemminger, wei.liu,
	Dexuan Cui, linux-hyperv, linux-kernel, Saurabh Singh Sengar



> -----Original Message-----
> From: Michael Kelley (LINUX) <mikelley@microsoft.com>
> Sent: Thursday, June 9, 2022 9:51 AM
> To: Saurabh Sengar <ssengar@linux.microsoft.com>; KY Srinivasan
> <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen
> Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Dexuan Cui
> <decui@microsoft.com>; linux-hyperv@vger.kernel.org; linux-
> kernel@vger.kernel.org; Saurabh Singh Sengar <ssengar@microsoft.com>
> Subject: RE: [PATCH] Drivers: hv: vmbus: Add cpu read lock
> 
> From: Saurabh Sengar <ssengar@linux.microsoft.com> Sent: Wednesday, June
> 8, 2022 10:27 PM
> >
> > Add cpus_read_lock() to prevent CPUs from going offline between the
> > query and the actual use of the cpumask. cpumask_of_node() is queried
> > first and the resulting mask is used later; if any CPU goes offline
> > between these two points, it can cause a potentially infinite loop of
> > retries.
> >
> > Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
> > ---
> >  drivers/hv/channel_mgmt.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> > index 85a2142..6a88b7e 100644
> > --- a/drivers/hv/channel_mgmt.c
> > +++ b/drivers/hv/channel_mgmt.c
> > @@ -749,6 +749,9 @@ static void init_vp_index(struct vmbus_channel
> *channel)
> >  		return;
> >  	}
> >
> > +	/* No CPUs should come up or down during this. */
> > +	cpus_read_lock();
> > +
> >  	for (i = 1; i <= ncpu + 1; i++) {
> >  		while (true) {
> >  			numa_node = next_numa_node_id++;
> > @@ -781,6 +784,7 @@ static void init_vp_index(struct vmbus_channel
> *channel)
> >  			break;
> >  	}
> >
> > +	cpus_read_unlock();
> >  	channel->target_cpu = target_cpu;
> >
> >  	free_cpumask_var(available_mask);
> > --
> > 1.8.3.1
> 
> This patch was motivated because I suggested a potential issue here during
> a separate conversation with Saurabh, but it turns out I was wrong. :-(
> 
> init_vp_index() is only called from vmbus_process_offer(), and the
> cpus_read_lock() is already held when init_vp_index() is called.  So the
> issue doesn't exist, and this patch isn't needed.
> 
> However, looking at vmbus_process_offer(), there appears to be a
> different problem in that cpus_read_unlock() is not called when taking
> the error return because the sub_channel_index is zero.
> 
> Michael
> 

        } else {
                /*
                 * Check to see if this is a valid sub-channel.
                 */
                if (newchannel->offermsg.offer.sub_channel_index == 0) {
                        mutex_unlock(&vmbus_connection.channel_mutex);
                        /*
                         * Don't call free_channel(), because newchannel->kobj
                         * is not initialized yet.
                         */
                        kfree(newchannel);
                        WARN_ON_ONCE(1);
                        return;
                }

If this happens, it should be a host bug. Yes, I also think the cpus_read_unlock()
is missing in this error path.

Thanks,
- Haiyang
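
[Editor's note: based on the snippet quoted above, a minimal sketch of the
missing unlock (untested, assuming cpus_read_lock() is taken earlier in
vmbus_process_offer() as Michael notes) would release the lock before the
early return:]

```c
	} else {
		/*
		 * Check to see if this is a valid sub-channel.
		 */
		if (newchannel->offermsg.offer.sub_channel_index == 0) {
			mutex_unlock(&vmbus_connection.channel_mutex);
			/*
			 * Release the CPU hotplug read lock taken near the
			 * top of vmbus_process_offer(), which this error
			 * path previously leaked.
			 */
			cpus_read_unlock();
			/*
			 * Don't call free_channel(), because newchannel->kobj
			 * is not initialized yet.
			 */
			kfree(newchannel);
			WARN_ON_ONCE(1);
			return;
		}
```

[Unlocks are ordered inversely to the locks: the channel_mutex taken after
cpus_read_lock() is dropped first, then the hotplug lock.]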


* Re: [PATCH] Drivers: hv: vmbus: Add cpu read lock
  2022-06-09 13:59   ` Haiyang Zhang
@ 2022-06-09 14:15     ` Saurabh Singh Sengar
  0 siblings, 0 replies; 4+ messages in thread
From: Saurabh Singh Sengar @ 2022-06-09 14:15 UTC (permalink / raw)
  To: Haiyang Zhang
  Cc: Michael Kelley (LINUX),
	KY Srinivasan, Stephen Hemminger, wei.liu, Dexuan Cui,
	linux-hyperv, linux-kernel, Saurabh Singh Sengar

On Thu, Jun 09, 2022 at 01:59:02PM +0000, Haiyang Zhang wrote:
> 
> 
> > -----Original Message-----
> > From: Michael Kelley (LINUX) <mikelley@microsoft.com>
> > Sent: Thursday, June 9, 2022 9:51 AM
> > To: Saurabh Sengar <ssengar@linux.microsoft.com>; KY Srinivasan
> > <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen
> > Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Dexuan Cui
> > <decui@microsoft.com>; linux-hyperv@vger.kernel.org; linux-
> > kernel@vger.kernel.org; Saurabh Singh Sengar <ssengar@microsoft.com>
> > Subject: RE: [PATCH] Drivers: hv: vmbus: Add cpu read lock
> > 
> > From: Saurabh Sengar <ssengar@linux.microsoft.com> Sent: Wednesday, June
> > 8, 2022 10:27 PM
> > >
> > > Add cpus_read_lock() to prevent CPUs from going offline between the
> > > query and the actual use of the cpumask. cpumask_of_node() is queried
> > > first and the resulting mask is used later; if any CPU goes offline
> > > between these two points, it can cause a potentially infinite loop of
> > > retries.
> > >
> > > Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
> > > ---
> > >  drivers/hv/channel_mgmt.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> > > index 85a2142..6a88b7e 100644
> > > --- a/drivers/hv/channel_mgmt.c
> > > +++ b/drivers/hv/channel_mgmt.c
> > > @@ -749,6 +749,9 @@ static void init_vp_index(struct vmbus_channel
> > *channel)
> > >  		return;
> > >  	}
> > >
> > > +	/* No CPUs should come up or down during this. */
> > > +	cpus_read_lock();
> > > +
> > >  	for (i = 1; i <= ncpu + 1; i++) {
> > >  		while (true) {
> > >  			numa_node = next_numa_node_id++;
> > > @@ -781,6 +784,7 @@ static void init_vp_index(struct vmbus_channel
> > *channel)
> > >  			break;
> > >  	}
> > >
> > > +	cpus_read_unlock();
> > >  	channel->target_cpu = target_cpu;
> > >
> > >  	free_cpumask_var(available_mask);
> > > --
> > > 1.8.3.1
> > 
> > This patch was motivated because I suggested a potential issue here during
> > a separate conversation with Saurabh, but it turns out I was wrong. :-(
> > 
> > init_vp_index() is only called from vmbus_process_offer(), and the
> > cpus_read_lock() is already held when init_vp_index() is called.  So the
> > issue doesn't exist, and this patch isn't needed.
> > 
> > However, looking at vmbus_process_offer(), there appears to be a
> > different problem in that cpus_read_unlock() is not called when taking
> > the error return because the sub_channel_index is zero.
> > 
> > Michael
> > 
> 
>         } else {
>                 /*
>                  * Check to see if this is a valid sub-channel.
>                  */
>                 if (newchannel->offermsg.offer.sub_channel_index == 0) {
>                         mutex_unlock(&vmbus_connection.channel_mutex);
>                         /*
>                          * Don't call free_channel(), because newchannel->kobj
>                          * is not initialized yet.
>                          */
>                         kfree(newchannel);
>                         WARN_ON_ONCE(1);
>                         return;
>                 }
> 
> If this happens, it should be a host bug. Yes, I also think the cpus_read_unlock()
> is missing in this error path.
> 
> Thanks,
> - Haiyang

I see, will send another patch to fix this.

Regards,
Saurabh


Thread overview: 4 messages
2022-06-09  5:27 [PATCH] Drivers: hv: vmbus: Add cpu read lock Saurabh Sengar
2022-06-09 13:51 ` Michael Kelley (LINUX)
2022-06-09 13:59   ` Haiyang Zhang
2022-06-09 14:15     ` Saurabh Singh Sengar