* [PATCH] can: ti_hecc: fix close when napi poll is active
@ 2018-04-07 20:21 Jeroen Hofstee
2019-03-01 15:04 ` Måns Rullgård
0 siblings, 1 reply; 2+ messages in thread
From: Jeroen Hofstee @ 2018-04-07 20:21 UTC (permalink / raw)
To: linux-can
Cc: jhofstee, Wolfgang Grandegger, Marc Kleine-Budde, netdev, linux-kernel
When closing this CAN interface while a napi poll is active, for example with
`ip link set can0 down`, several interfaces freeze. This is caused by
napi_disable(), called from ti_hecc_close(), waiting for the scheduled poll to
either consume its full quota or call napi_complete(). Since the poll function
has a check for netif_running() it returns 0 without calling napi_complete(),
and hence violates NAPI's expectations.
So remove this check, so that either napi_complete() is called or the quota is
returned.
Signed-off-by: Jeroen Hofstee <jhofstee@victronenergy.com>
---
drivers/net/can/ti_hecc.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/net/can/ti_hecc.c b/drivers/net/can/ti_hecc.c
index db6ea93..42813d3 100644
--- a/drivers/net/can/ti_hecc.c
+++ b/drivers/net/can/ti_hecc.c
@@ -603,9 +603,6 @@ static int ti_hecc_rx_poll(struct napi_struct *napi, int quota)
u32 mbx_mask;
unsigned long pending_pkts, flags;
- if (!netif_running(ndev))
- return 0;
-
while ((pending_pkts = hecc_read(priv, HECC_CANRMP)) &&
num_pkts < quota) {
mbx_mask = BIT(priv->rx_next); /* next rx mailbox to process */
--
2.7.4
* Re: [PATCH] can: ti_hecc: fix close when napi poll is active
2018-04-07 20:21 [PATCH] can: ti_hecc: fix close when napi poll is active Jeroen Hofstee
@ 2019-03-01 15:04 ` Måns Rullgård
0 siblings, 0 replies; 2+ messages in thread
From: Måns Rullgård @ 2019-03-01 15:04 UTC (permalink / raw)
To: Jeroen Hofstee
Cc: linux-can, Wolfgang Grandegger, Marc Kleine-Budde, netdev, linux-kernel
Jeroen Hofstee <jhofstee@victronenergy.com> writes:
> When closing this CAN interface while a napi poll is active, for example with
> `ip link set can0 down`, several interfaces freeze. This is caused by
> napi_disable(), called from ti_hecc_close(), waiting for the scheduled poll to
> either consume its full quota or call napi_complete(). Since the poll function
> has a check for netif_running() it returns 0 without calling napi_complete(),
> and hence violates NAPI's expectations.
>
> So remove this check, so that either napi_complete() is called or the quota is
> returned.
>
> Signed-off-by: Jeroen Hofstee <jhofstee@victronenergy.com>
> ---
> drivers/net/can/ti_hecc.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/drivers/net/can/ti_hecc.c b/drivers/net/can/ti_hecc.c
> index db6ea93..42813d3 100644
> --- a/drivers/net/can/ti_hecc.c
> +++ b/drivers/net/can/ti_hecc.c
> @@ -603,9 +603,6 @@ static int ti_hecc_rx_poll(struct napi_struct *napi, int quota)
> u32 mbx_mask;
> unsigned long pending_pkts, flags;
>
> - if (!netif_running(ndev))
> - return 0;
> -
> while ((pending_pkts = hecc_read(priv, HECC_CANRMP)) &&
> num_pkts < quota) {
> mbx_mask = BIT(priv->rx_next); /* next rx mailbox to process */
> --
> 2.7.4
This seems to have been lost or forgotten.
--
Måns Rullgård