From: Corey Minyard <minyard@acm.org>
To: Allen <allen.lkml@gmail.com>
Cc: Allen Pais <apais@linux.microsoft.com>,
	linux-kernel@vger.kernel.org, tj@kernel.org,
	keescook@chromium.org, vkoul@kernel.org, marcan@marcan.st,
	sven@svenpeter.dev, florian.fainelli@broadcom.com,
	rjui@broadcom.com, sbranden@broadcom.com, paul@crapouillou.net,
	Eugeniy.Paltsev@synopsys.com, manivannan.sadhasivam@linaro.org,
	vireshk@kernel.org, Frank.Li@nxp.com, leoyang.li@nxp.com,
	zw@zh-kernel.org, wangzhou1@hisilicon.com, haijie1@huawei.com,
	shawnguo@kernel.org, s.hauer@pengutronix.de,
	sean.wang@mediatek.com, matthias.bgg@gmail.com,
	angelogioacchino.delregno@collabora.com, afaerber@suse.de,
	logang@deltatee.com, daniel@zonque.org, haojian.zhuang@gmail.com,
	robert.jarzmik@free.fr, andersson@kernel.org,
	konrad.dybcio@linaro.org, orsonzhai@gmail.com,
	baolin.wang@linux.alibaba.com, zhang.lyra@gmail.com,
	patrice.chotard@foss.st.com, linus.walleij@linaro.org,
	wens@csie.org, jernej.skrabec@gmail.com,
	peter.ujfalusi@gmail.com, kys@microsoft.com,
	haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	jassisinghbrar@gmail.com, mchehab@kernel.org,
	maintainers@bluecherrydvr.com, aubin.constans@microchip.com,
	ulf.hansson@linaro.org, manuel.lauss@gmail.com,
	mirq-linux@rere.qmqm.pl, jh80.chung@samsung.com, oakad@yahoo.com,
	hayashi.kunihiko@socionext.com, mhiramat@kernel.org,
	brucechang@via.com.tw, HaraldWelte@viatech.com, pierre@ossman.eu,
	duncan.sands@free.fr, stern@rowland.harvard.edu,
	oneukum@suse.com, openipmi-developer@lists.sourceforge.net,
	dmaengine@vger.kernel.org, asahi@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org,
	linux-rpi-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	imx@lists.linux.dev, linuxppc-dev@lists.ozlabs.org,
	linux-mediatek@lists.infradead.org,
	linux-actions@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-sunxi@lists.linux.dev,
	linux-tegra@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-omap@vger.kernel.org,
	linux-renesas-soc@vger.kernel.org, linux-s390@vger.kernel.org,
	netdev@vger.kernel.org, linux-usb@vger.kernel.org
Subject: Re: [PATCH 6/9] ipmi: Convert from tasklet to BH workqueue
Date: Thu, 28 Mar 2024 14:23:07 -0500
Message-ID: <ZgXDmx1HvujsMYAR@mail.minyard.net>
In-Reply-To: <CAOMdWS+1AFxEqmACiBYzPHc+q0Ut6hp15tdV50JHvfVeUNCGQw@mail.gmail.com>

On Thu, Mar 28, 2024 at 10:52:16AM -0700, Allen wrote:
> On Wed, Mar 27, 2024 at 11:05 AM Corey Minyard <minyard@acm.org> wrote:
> >
> > I believe that work queue items are executed single-threaded for a work
> > queue, so this should be good.  I need to test this, though.  It may be
> > that an IPMI device can have its own work queue; it may not be important
> > to run it in bh context.
> 
>   Fair point. Could you please let me know once you have had a chance to test
> these changes? Meanwhile, I will work on an RFC wherein IPMI will have its own
> workqueue.
> 
>  Thanks for taking the time to review.

After looking and thinking about it a bit, a BH context is still
probably the best for this.
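
For reference, the pattern the patch applies is mechanical; roughly (a
minimal sketch with made-up names, not the actual IPMI code):

#include <linux/workqueue.h>

struct example_intf {
	struct work_struct recv_work;	/* was: struct tasklet_struct recv_tasklet */
};

/* was: smi_recv_tasklet(struct tasklet_struct *t) */
static void example_recv_work(struct work_struct *t)
{
	struct example_intf *intf = from_work(intf, t, recv_work);

	/* drain the receive queue here, same body as the old tasklet */
}

static void example_setup(struct example_intf *intf)
{
	INIT_WORK(&intf->recv_work, example_recv_work);	/* was: tasklet_setup() */
}

static void example_kick(struct example_intf *intf)
{
	/*
	 * system_bh_wq runs work items in softirq (BH) context, keeping
	 * the old tasklet semantics, and a given work item is never run
	 * concurrently with itself, so message ordering is preserved.  A
	 * driver-private queue from alloc_workqueue() would run in
	 * process context instead.
	 */
	queue_work(system_bh_wq, &intf->recv_work);	/* was: tasklet_schedule() */
}

static void example_teardown(struct example_intf *intf)
{
	cancel_work_sync(&intf->recv_work);	/* was: tasklet_kill() */
}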

I have tested this patch under load and various scenarios and it seems
to work ok.  So:

Tested-by: Corey Minyard <cminyard@mvista.com>
Acked-by: Corey Minyard <cminyard@mvista.com>

Or I can take this into my tree.

-corey

> 
> - Allen
> 
> >
> > -corey
> >
> > >
> > > Based on the work done by Tejun Heo <tj@kernel.org>
> > > Branch: https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git for-610
> > >
> > > Signed-off-by: Allen Pais <allen.lkml@gmail.com>
> > > ---
> > >  drivers/char/ipmi/ipmi_msghandler.c | 30 ++++++++++++++---------------
> > >  1 file changed, 15 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> > > index b0eedc4595b3..fce2a2dbdc82 100644
> > > --- a/drivers/char/ipmi/ipmi_msghandler.c
> > > +++ b/drivers/char/ipmi/ipmi_msghandler.c
> > > @@ -36,12 +36,13 @@
> > >  #include <linux/nospec.h>
> > >  #include <linux/vmalloc.h>
> > >  #include <linux/delay.h>
> > > +#include <linux/workqueue.h>
> > >
> > >  #define IPMI_DRIVER_VERSION "39.2"
> > >
> > >  static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void);
> > >  static int ipmi_init_msghandler(void);
> > > -static void smi_recv_tasklet(struct tasklet_struct *t);
> > > +static void smi_recv_work(struct work_struct *t);
> > >  static void handle_new_recv_msgs(struct ipmi_smi *intf);
> > >  static void need_waiter(struct ipmi_smi *intf);
> > >  static int handle_one_recv_msg(struct ipmi_smi *intf,
> > > @@ -498,13 +499,13 @@ struct ipmi_smi {
> > >       /*
> > >        * Messages queued for delivery.  If delivery fails (out of memory
> > >        * for instance), They will stay in here to be processed later in a
> > > -      * periodic timer interrupt.  The tasklet is for handling received
> > > +      * periodic timer interrupt.  The work is for handling received
> > >        * messages directly from the handler.
> > >        */
> > >       spinlock_t       waiting_rcv_msgs_lock;
> > >       struct list_head waiting_rcv_msgs;
> > >       atomic_t         watchdog_pretimeouts_to_deliver;
> > > -     struct tasklet_struct recv_tasklet;
> > > +     struct work_struct recv_work;
> > >
> > >       spinlock_t             xmit_msgs_lock;
> > >       struct list_head       xmit_msgs;
> > > @@ -704,7 +705,7 @@ static void clean_up_interface_data(struct ipmi_smi *intf)
> > >       struct cmd_rcvr  *rcvr, *rcvr2;
> > >       struct list_head list;
> > >
> > > -     tasklet_kill(&intf->recv_tasklet);
> > > +     cancel_work_sync(&intf->recv_work);
> > >
> > >       free_smi_msg_list(&intf->waiting_rcv_msgs);
> > >       free_recv_msg_list(&intf->waiting_events);
> > > @@ -1319,7 +1320,7 @@ static void free_user(struct kref *ref)
> > >  {
> > >       struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
> > >
> > > -     /* SRCU cleanup must happen in task context. */
> > > +     /* SRCU cleanup must happen in work context. */
> > >       queue_work(remove_work_wq, &user->remove_work);
> > >  }
> > >
> > > @@ -3605,8 +3606,7 @@ int ipmi_add_smi(struct module         *owner,
> > >       intf->curr_seq = 0;
> > >       spin_lock_init(&intf->waiting_rcv_msgs_lock);
> > >       INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
> > > -     tasklet_setup(&intf->recv_tasklet,
> > > -                  smi_recv_tasklet);
> > > +     INIT_WORK(&intf->recv_work, smi_recv_work);
> > >       atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
> > >       spin_lock_init(&intf->xmit_msgs_lock);
> > >       INIT_LIST_HEAD(&intf->xmit_msgs);
> > > @@ -4779,7 +4779,7 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> > >                        * To preserve message order, quit if we
> > >                        * can't handle a message.  Add the message
> > >                        * back at the head, this is safe because this
> > > -                      * tasklet is the only thing that pulls the
> > > +                      * work is the only thing that pulls the
> > >                        * messages.
> > >                        */
> > >                       list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
> > > @@ -4812,10 +4812,10 @@ static void handle_new_recv_msgs(struct ipmi_smi *intf)
> > >       }
> > >  }
> > >
> > > -static void smi_recv_tasklet(struct tasklet_struct *t)
> > > +static void smi_recv_work(struct work_struct *t)
> > >  {
> > >       unsigned long flags = 0; /* keep us warning-free. */
> > > -     struct ipmi_smi *intf = from_tasklet(intf, t, recv_tasklet);
> > > +     struct ipmi_smi *intf = from_work(intf, t, recv_work);
> > >       int run_to_completion = intf->run_to_completion;
> > >       struct ipmi_smi_msg *newmsg = NULL;
> > >
> > > @@ -4866,7 +4866,7 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> > >
> > >       /*
> > >        * To preserve message order, we keep a queue and deliver from
> > > -      * a tasklet.
> > > +      * a work.
> > >        */
> > >       if (!run_to_completion)
> > >               spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
> > > @@ -4887,9 +4887,9 @@ void ipmi_smi_msg_received(struct ipmi_smi *intf,
> > >               spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
> > >
> > >       if (run_to_completion)
> > > -             smi_recv_tasklet(&intf->recv_tasklet);
> > > +             smi_recv_work(&intf->recv_work);
> > >       else
> > > -             tasklet_schedule(&intf->recv_tasklet);
> > > +             queue_work(system_bh_wq, &intf->recv_work);
> > >  }
> > >  EXPORT_SYMBOL(ipmi_smi_msg_received);
> > >
> > > @@ -4899,7 +4899,7 @@ void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
> > >               return;
> > >
> > >       atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
> > > -     tasklet_schedule(&intf->recv_tasklet);
> > > +     queue_work(system_bh_wq, &intf->recv_work);
> > >  }
> > >  EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);
> > >
> > > @@ -5068,7 +5068,7 @@ static bool ipmi_timeout_handler(struct ipmi_smi *intf,
> > >                                      flags);
> > >       }
> > >
> > > -     tasklet_schedule(&intf->recv_tasklet);
> > > +     queue_work(system_bh_wq, &intf->recv_work);
> > >
> > >       return need_timer;
> > >  }
> > > --
> > > 2.17.1
> > >
> > >
> >
> 
> 
> -- 
>        - Allen
> 
