* [RESEND PATCH 0/4] can: c_can: cache frames to operate as a true FIFO
@ 2021-07-25 16:11 Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 1/4] can: c_can: remove struct c_can_priv::priv field Dario Binacchi
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Dario Binacchi @ 2021-07-25 16:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Gianluca Falavigna, Dario Binacchi, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Marc Kleine-Budde, Oliver Hartkopp, Tong Zhang,
	Vincent Mailhol, Wolfgang Grandegger, linux-can, netdev


Performance tests of the c_can driver led to the patch that gives the
series its name. I also added two patches not really related to the topic
of the series.


Dario Binacchi (4):
  can: c_can: remove struct c_can_priv::priv field
  can: c_can: exit c_can_do_tx() early if no frames have been sent
  can: c_can: support tx ring algorithm
  can: c_can: cache frames to operate as a true FIFO

 drivers/net/can/c_can/c_can.h          |  26 ++++++-
 drivers/net/can/c_can/c_can_main.c     | 100 +++++++++++++++++++------
 drivers/net/can/c_can/c_can_platform.c |   1 -
 3 files changed, 101 insertions(+), 26 deletions(-)

-- 
2.17.1


* [RESEND PATCH 1/4] can: c_can: remove struct c_can_priv::priv field
  2021-07-25 16:11 [RESEND PATCH 0/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
@ 2021-07-25 16:11 ` Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 2/4] can: c_can: exit c_can_do_tx() early if no frames have been sent Dario Binacchi
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Dario Binacchi @ 2021-07-25 16:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Gianluca Falavigna, Dario Binacchi, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Marc Kleine-Budde, Tong Zhang,
	Wolfgang Grandegger, linux-can, netdev

The field stores the clock pointer, but it is never used. So let's remove it.

Signed-off-by: Dario Binacchi <dariobin@libero.it>
---

 drivers/net/can/c_can/c_can.h          | 1 -
 drivers/net/can/c_can/c_can_platform.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
index 4247ff80a29c..8f23e9c83c84 100644
--- a/drivers/net/can/c_can/c_can.h
+++ b/drivers/net/can/c_can/c_can.h
@@ -200,7 +200,6 @@ struct c_can_priv {
 	void (*write_reg32)(const struct c_can_priv *priv, enum reg index, u32 val);
 	void __iomem *base;
 	const u16 *regs;
-	void *priv;		/* for board-specific data */
 	enum c_can_dev_id type;
 	struct c_can_raminit raminit_sys;	/* RAMINIT via syscon regmap */
 	void (*raminit)(const struct c_can_priv *priv, bool enable);
diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
index 36950363682f..86e95e9d6533 100644
--- a/drivers/net/can/c_can/c_can_platform.c
+++ b/drivers/net/can/c_can/c_can_platform.c
@@ -385,7 +385,6 @@ static int c_can_plat_probe(struct platform_device *pdev)
 	priv->base = addr;
 	priv->device = &pdev->dev;
 	priv->can.clock.freq = clk_get_rate(clk);
-	priv->priv = clk;
 	priv->type = drvdata->id;
 
 	platform_set_drvdata(pdev, dev);
-- 
2.17.1


* [RESEND PATCH 2/4] can: c_can: exit c_can_do_tx() early if no frames have been sent
  2021-07-25 16:11 [RESEND PATCH 0/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 1/4] can: c_can: remove struct c_can_priv::priv field Dario Binacchi
@ 2021-07-25 16:11 ` Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 3/4] can: c_can: support tx ring algorithm Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
  3 siblings, 0 replies; 13+ messages in thread
From: Dario Binacchi @ 2021-07-25 16:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Gianluca Falavigna, Dario Binacchi, David S. Miller,
	Jakub Kicinski, Marc Kleine-Budde, Oliver Hartkopp,
	Vincent Mailhol, Wolfgang Grandegger, linux-can, netdev

c_can_poll() handles RX/TX events unconditionally. It may therefore
happen that c_can_do_tx() is called unnecessarily because the interrupt
was triggered only by the reception of a frame. In these cases, avoid
executing unnecessary statements and exit immediately.

Signed-off-by: Dario Binacchi <dariobin@libero.it>
---

 drivers/net/can/c_can/c_can_main.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
index 7588f70ca0fe..fec0e3416970 100644
--- a/drivers/net/can/c_can/c_can_main.c
+++ b/drivers/net/can/c_can/c_can_main.c
@@ -720,17 +720,18 @@ static void c_can_do_tx(struct net_device *dev)
 		pkts++;
 	}
 
+	if (!pkts)
+		return;
+
 	/* Clear the bits in the tx_active mask */
 	atomic_sub(clr, &priv->tx_active);
 
 	if (clr & BIT(priv->msg_obj_tx_num - 1))
 		netif_wake_queue(dev);
 
-	if (pkts) {
-		stats->tx_bytes += bytes;
-		stats->tx_packets += pkts;
-		can_led_event(dev, CAN_LED_EVENT_TX);
-	}
+	stats->tx_bytes += bytes;
+	stats->tx_packets += pkts;
+	can_led_event(dev, CAN_LED_EVENT_TX);
 }
 
 /* If we have a gap in the pending bits, that means we either
-- 
2.17.1


* [RESEND PATCH 3/4] can: c_can: support tx ring algorithm
  2021-07-25 16:11 [RESEND PATCH 0/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 1/4] can: c_can: remove struct c_can_priv::priv field Dario Binacchi
  2021-07-25 16:11 ` [RESEND PATCH 2/4] can: c_can: exit c_can_do_tx() early if no frames have been sent Dario Binacchi
@ 2021-07-25 16:11 ` Dario Binacchi
  2021-08-04  9:37   ` Marc Kleine-Budde
  2021-07-25 16:11 ` [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
  3 siblings, 1 reply; 13+ messages in thread
From: Dario Binacchi @ 2021-07-25 16:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Gianluca Falavigna, Dario Binacchi, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Marc Kleine-Budde, Oliver Hartkopp,
	Vincent Mailhol, Wolfgang Grandegger, linux-can, netdev

The algorithm is already used successfully by other CAN drivers
(e.g. mcp251xfd). Its implementation was kindly suggested to me by
Marc Kleine-Budde following a patch I had previously submitted. You can
find every detail at https://lore.kernel.org/patchwork/patch/1422929/.

The idea is that after this patch, it will be easier to patch the driver
to use the message object memory as a true FIFO.
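
A minimal user-space sketch of the index math behind the new helpers
(illustrative only, not driver code; struct tx_ring, ring_head() and
ring_tail() are made-up names here) looks like this, assuming obj_num
is a power of two as the "& (obj_num - 1)" masking requires:

#include <stdio.h>

/* head and tail are free-running counters; masking maps them onto the
 * available tx message objects, like c_can_get_tx_head()/_tail().
 */
struct tx_ring {
	unsigned int head;	/* advanced by start_xmit */
	unsigned int tail;	/* advanced by tx completion */
	unsigned int obj_num;	/* number of tx message objects */
};

static unsigned int ring_head(const struct tx_ring *r)
{
	return r->head & (r->obj_num - 1);
}

static unsigned int ring_tail(const struct tx_ring *r)
{
	return r->tail & (r->obj_num - 1);
}

int main(void)
{
	struct tx_ring r = { .head = 17, .tail = 15, .obj_num = 16 };

	/* 17 maps to object 1, 15 to object 15, 17 - 15 = 2 in flight */
	printf("head=%u tail=%u in flight=%u\n",
	       ring_head(&r), ring_tail(&r), r.head - r.tail);
	return 0;
}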

Suggested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Dario Binacchi <dariobin@libero.it>
---

 drivers/net/can/c_can/c_can.h      | 19 ++++++-
 drivers/net/can/c_can/c_can_main.c | 81 +++++++++++++++++++++++-------
 2 files changed, 82 insertions(+), 18 deletions(-)

diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
index 8f23e9c83c84..8fe7e2138620 100644
--- a/drivers/net/can/c_can/c_can.h
+++ b/drivers/net/can/c_can/c_can.h
@@ -176,6 +176,13 @@ struct c_can_raminit {
 	bool needs_pulse;
 };
 
+/* c_can tx ring structure */
+struct c_can_tx_ring {
+	unsigned int head;
+	unsigned int tail;
+	unsigned int obj_num;
+};
+
 /* c_can private data structure */
 struct c_can_priv {
 	struct can_priv can;	/* must be the first member */
@@ -190,10 +197,10 @@ struct c_can_priv {
 	unsigned int msg_obj_tx_first;
 	unsigned int msg_obj_tx_last;
 	u32 msg_obj_rx_mask;
-	atomic_t tx_active;
 	atomic_t sie_pending;
 	unsigned long tx_dir;
 	int last_status;
+	struct c_can_tx_ring tx;
 	u16 (*read_reg)(const struct c_can_priv *priv, enum reg index);
 	void (*write_reg)(const struct c_can_priv *priv, enum reg index, u16 val);
 	u32 (*read_reg32)(const struct c_can_priv *priv, enum reg index);
@@ -219,4 +226,14 @@ int c_can_power_down(struct net_device *dev);
 
 void c_can_set_ethtool_ops(struct net_device *dev);
 
+static inline u8 c_can_get_tx_head(const struct c_can_tx_ring *ring)
+{
+	return ring->head & (ring->obj_num - 1);
+}
+
+static inline u8 c_can_get_tx_tail(const struct c_can_tx_ring *ring)
+{
+	return ring->tail & (ring->obj_num - 1);
+}
+
 #endif /* C_CAN_H */
diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
index fec0e3416970..451ac9a9586a 100644
--- a/drivers/net/can/c_can/c_can_main.c
+++ b/drivers/net/can/c_can/c_can_main.c
@@ -427,24 +427,64 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
 	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
 }
 
+static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
+{
+	u8 head = c_can_get_tx_head(ring);
+	u8 tail = c_can_get_tx_tail(ring);
+
+	/* This is not a FIFO. C/D_CAN sends out the buffers
+	 * prioritized. The lowest buffer number wins.
+	 */
+	if (head < tail)
+		return 0;
+
+	return ring->obj_num - head;
+}
+
+static bool c_can_tx_busy(const struct c_can_priv *priv,
+			  const struct c_can_tx_ring *tx_ring)
+{
+	if (c_can_get_tx_free(tx_ring) > 0)
+		return false;
+
+	netif_stop_queue(priv->dev);
+
+	/* Memory barrier before checking tx_free (head and tail) */
+	smp_mb();
+
+	if (c_can_get_tx_free(tx_ring) == 0) {
+		netdev_dbg(priv->dev,
+			   "Stopping tx-queue (tx_head=0x%08x, tx_tail=0x%08x, len=%d).\n",
+			   tx_ring->head, tx_ring->tail,
+			   tx_ring->head - tx_ring->tail);
+		return true;
+	}
+
+	netif_start_queue(priv->dev);
+	return false;
+}
+
 static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
 				    struct net_device *dev)
 {
 	struct can_frame *frame = (struct can_frame *)skb->data;
 	struct c_can_priv *priv = netdev_priv(dev);
+	struct c_can_tx_ring *tx_ring = &priv->tx;
 	u32 idx, obj;
 
 	if (can_dropped_invalid_skb(dev, skb))
 		return NETDEV_TX_OK;
-	/* This is not a FIFO. C/D_CAN sends out the buffers
-	 * prioritized. The lowest buffer number wins.
-	 */
-	idx = fls(atomic_read(&priv->tx_active));
-	obj = idx + priv->msg_obj_tx_first;
 
-	/* If this is the last buffer, stop the xmit queue */
-	if (idx == priv->msg_obj_tx_num - 1)
+	if (c_can_tx_busy(priv, tx_ring))
+		return NETDEV_TX_BUSY;
+
+	idx = c_can_get_tx_head(tx_ring);
+	tx_ring->head++;
+	if (c_can_get_tx_free(tx_ring) == 0)
 		netif_stop_queue(dev);
+
+	obj = idx + priv->msg_obj_tx_first;
+
 	/* Store the message in the interface so we can call
 	 * can_put_echo_skb(). We must do this before we enable
 	 * transmit as we might race against do_tx().
@@ -453,8 +493,6 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
 	priv->dlc[idx] = frame->len;
 	can_put_echo_skb(skb, dev, idx, 0);
 
-	/* Update the active bits */
-	atomic_add(BIT(idx), &priv->tx_active);
 	/* Start transmission */
 	c_can_object_put(dev, IF_TX, obj, IF_COMM_TX);
 
@@ -567,6 +605,7 @@ static int c_can_software_reset(struct net_device *dev)
 static int c_can_chip_config(struct net_device *dev)
 {
 	struct c_can_priv *priv = netdev_priv(dev);
+	struct c_can_tx_ring *tx_ring = &priv->tx;
 	int err;
 
 	err = c_can_software_reset(dev);
@@ -598,7 +637,8 @@ static int c_can_chip_config(struct net_device *dev)
 	priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED);
 
 	/* Clear all internal status */
-	atomic_set(&priv->tx_active, 0);
+	tx_ring->head = 0;
+	tx_ring->tail = 0;
 	priv->tx_dir = 0;
 
 	/* set bittiming params */
@@ -696,14 +736,14 @@ static int c_can_get_berr_counter(const struct net_device *dev,
 static void c_can_do_tx(struct net_device *dev)
 {
 	struct c_can_priv *priv = netdev_priv(dev);
+	struct c_can_tx_ring *tx_ring = &priv->tx;
 	struct net_device_stats *stats = &dev->stats;
-	u32 idx, obj, pkts = 0, bytes = 0, pend, clr;
+	u32 idx, obj, pkts = 0, bytes = 0, pend;
 
 	if (priv->msg_obj_tx_last > 32)
 		pend = priv->read_reg32(priv, C_CAN_INTPND3_REG);
 	else
 		pend = priv->read_reg(priv, C_CAN_INTPND2_REG);
-	clr = pend;
 
 	while ((idx = ffs(pend))) {
 		idx--;
@@ -723,11 +763,14 @@ static void c_can_do_tx(struct net_device *dev)
 	if (!pkts)
 		return;
 
-	/* Clear the bits in the tx_active mask */
-	atomic_sub(clr, &priv->tx_active);
-
-	if (clr & BIT(priv->msg_obj_tx_num - 1))
-		netif_wake_queue(dev);
+	tx_ring->tail += pkts;
+	if (c_can_get_tx_free(tx_ring)) {
+		/* Make sure that anybody stopping the queue after
+		 * this sees the new tx_ring->tail.
+		 */
+		smp_mb();
+		netif_wake_queue(priv->dev);
+	}
 
 	stats->tx_bytes += bytes;
 	stats->tx_packets += pkts;
@@ -1206,6 +1249,10 @@ struct net_device *alloc_c_can_dev(int msg_obj_num)
 	priv->msg_obj_tx_last =
 		priv->msg_obj_tx_first + priv->msg_obj_tx_num - 1;
 
+	priv->tx.head = 0;
+	priv->tx.tail = 0;
+	priv->tx.obj_num = msg_obj_tx_num;
+
 	netif_napi_add(dev, &priv->napi, c_can_poll, priv->msg_obj_rx_num);
 
 	priv->dev = dev;
-- 
2.17.1


* [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-07-25 16:11 [RESEND PATCH 0/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
                   ` (2 preceding siblings ...)
  2021-07-25 16:11 ` [RESEND PATCH 3/4] can: c_can: support tx ring algorithm Dario Binacchi
@ 2021-07-25 16:11 ` Dario Binacchi
  2021-08-04  9:34   ` Marc Kleine-Budde
  2021-08-04  9:45   ` Marc Kleine-Budde
  3 siblings, 2 replies; 13+ messages in thread
From: Dario Binacchi @ 2021-07-25 16:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: Gianluca Falavigna, Dario Binacchi, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Marc Kleine-Budde, Oliver Hartkopp,
	Vincent Mailhol, Wolfgang Grandegger, linux-can, netdev

As reported by a comment in c_can_start_xmit(), this was not a FIFO.
The C/D_CAN controller sends out the buffers prioritized so that the
lowest buffer number wins.

What did c_can_start_xmit() do if head was less than tail in the tx
ring? It waited until all the frames queued in the FIFO were actually
transmitted by the controller before accepting a new CAN frame to
transmit, even if the FIFO was not full, to ensure that the messages
were transmitted in the order in which they were loaded.

By storing the frames in the FIFO without requesting their transmission,
we will be able to use the full size of the FIFO even in cases such as
the one described above. The transmission interrupt will then trigger
their transmission only when all the messages previously loaded into
lower-priority (i.e. higher-numbered) buffer positions have been
transmitted.
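
A small user-space model of the caching rule (illustrative only; OBJ_NUM
and must_cache() do not exist in the driver):

#include <stdbool.h>
#include <stdio.h>

#define OBJ_NUM 16U

/* A frame is cached (loaded without IF_COMM_TXRQST) whenever its object
 * index is lower than the current tail index: a lower-numbered object
 * would otherwise be sent before the frames that are still pending.
 */
static bool must_cache(unsigned int head, unsigned int tail)
{
	unsigned int idx = head & (OBJ_NUM - 1);
	unsigned int tail_idx = tail & (OBJ_NUM - 1);

	return idx < tail_idx;
}

int main(void)
{
	/* objects 14..15 pending, the new frame maps to object 0: cache */
	printf("%d\n", must_cache(16, 14));
	/* objects 2..3 pending, the new frame maps to object 4: send */
	printf("%d\n", must_cache(4, 2));
	return 0;
}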

Suggested-by: Gianluca Falavigna <gianluca.falavigna@inwind.it>
Signed-off-by: Dario Binacchi <dariobin@libero.it>

---

 drivers/net/can/c_can/c_can.h      |  6 +++++
 drivers/net/can/c_can/c_can_main.c | 42 +++++++++++++++++-------------
 2 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
index 8fe7e2138620..fc499a70b797 100644
--- a/drivers/net/can/c_can/c_can.h
+++ b/drivers/net/can/c_can/c_can.h
@@ -200,6 +200,7 @@ struct c_can_priv {
 	atomic_t sie_pending;
 	unsigned long tx_dir;
 	int last_status;
+	spinlock_t tx_lock;
 	struct c_can_tx_ring tx;
 	u16 (*read_reg)(const struct c_can_priv *priv, enum reg index);
 	void (*write_reg)(const struct c_can_priv *priv, enum reg index, u16 val);
@@ -236,4 +237,9 @@ static inline u8 c_can_get_tx_tail(const struct c_can_tx_ring *ring)
 	return ring->tail & (ring->obj_num - 1);
 }
 
+static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
+{
+	return ring->obj_num - (ring->head - ring->tail);
+}
+
 #endif /* C_CAN_H */
diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
index 451ac9a9586a..4c061fef002c 100644
--- a/drivers/net/can/c_can/c_can_main.c
+++ b/drivers/net/can/c_can/c_can_main.c
@@ -427,20 +427,6 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
 	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
 }
 
-static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
-{
-	u8 head = c_can_get_tx_head(ring);
-	u8 tail = c_can_get_tx_tail(ring);
-
-	/* This is not a FIFO. C/D_CAN sends out the buffers
-	 * prioritized. The lowest buffer number wins.
-	 */
-	if (head < tail)
-		return 0;
-
-	return ring->obj_num - head;
-}
-
 static bool c_can_tx_busy(const struct c_can_priv *priv,
 			  const struct c_can_tx_ring *tx_ring)
 {
@@ -470,7 +456,7 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
 	struct can_frame *frame = (struct can_frame *)skb->data;
 	struct c_can_priv *priv = netdev_priv(dev);
 	struct c_can_tx_ring *tx_ring = &priv->tx;
-	u32 idx, obj;
+	u32 idx, obj, cmd = IF_COMM_TX;
 
 	if (can_dropped_invalid_skb(dev, skb))
 		return NETDEV_TX_OK;
@@ -483,7 +469,11 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
 	if (c_can_get_tx_free(tx_ring) == 0)
 		netif_stop_queue(dev);
 
-	obj = idx + priv->msg_obj_tx_first;
+	spin_lock_bh(&priv->tx_lock);
+	if (idx < c_can_get_tx_tail(tx_ring))
+		cmd &= ~IF_COMM_TXRQST; /* Cache the message */
+	else
+		spin_unlock_bh(&priv->tx_lock);
 
 	/* Store the message in the interface so we can call
 	 * can_put_echo_skb(). We must do this before we enable
@@ -492,9 +482,11 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
 	c_can_setup_tx_object(dev, IF_TX, frame, idx);
 	priv->dlc[idx] = frame->len;
 	can_put_echo_skb(skb, dev, idx, 0);
+	obj = idx + priv->msg_obj_tx_first;
+	c_can_object_put(dev, IF_TX, obj, cmd);
 
-	/* Start transmission */
-	c_can_object_put(dev, IF_TX, obj, IF_COMM_TX);
+	if (spin_is_locked(&priv->tx_lock))
+		spin_unlock_bh(&priv->tx_lock);
 
 	return NETDEV_TX_OK;
 }
@@ -739,6 +731,7 @@ static void c_can_do_tx(struct net_device *dev)
 	struct c_can_tx_ring *tx_ring = &priv->tx;
 	struct net_device_stats *stats = &dev->stats;
 	u32 idx, obj, pkts = 0, bytes = 0, pend;
+	u8 tail;
 
 	if (priv->msg_obj_tx_last > 32)
 		pend = priv->read_reg32(priv, C_CAN_INTPND3_REG);
@@ -775,6 +768,18 @@ static void c_can_do_tx(struct net_device *dev)
 	stats->tx_bytes += bytes;
 	stats->tx_packets += pkts;
 	can_led_event(dev, CAN_LED_EVENT_TX);
+
+	tail = c_can_get_tx_tail(tx_ring);
+
+	if (tail == 0) {
+		u8 head = c_can_get_tx_head(tx_ring);
+
+		/* Start transmission for all cached messages */
+		for (idx = tail; idx < head; idx++) {
+			obj = idx + priv->msg_obj_tx_first;
+			c_can_object_put(dev, IF_TX, obj, IF_COMM_TXRQST);
+		}
+	}
 }
 
 /* If we have a gap in the pending bits, that means we either
@@ -1237,6 +1242,7 @@ struct net_device *alloc_c_can_dev(int msg_obj_num)
 		return NULL;
 
 	priv = netdev_priv(dev);
+	spin_lock_init(&priv->tx_lock);
 	priv->msg_obj_num = msg_obj_num;
 	priv->msg_obj_rx_num = msg_obj_num - msg_obj_tx_num;
 	priv->msg_obj_rx_first = 1;
-- 
2.17.1


* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-07-25 16:11 ` [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
@ 2021-08-04  9:34   ` Marc Kleine-Budde
  2021-08-05 20:12     ` Dario Binacchi
  2021-08-04  9:45   ` Marc Kleine-Budde
  1 sibling, 1 reply; 13+ messages in thread
From: Marc Kleine-Budde @ 2021-08-04  9:34 UTC (permalink / raw)
  To: Dario Binacchi
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

On 25.07.2021 18:11:50, Dario Binacchi wrote:
> As reported by a comment in c_can_start_xmit(), this was not a FIFO.
> The C/D_CAN controller sends out the buffers prioritized so that the
> lowest buffer number wins.
>
> What did c_can_start_xmit() do if head was less than tail in the tx
> ring? It waited until all the frames queued in the FIFO were actually
> transmitted by the controller before accepting a new CAN frame to
> transmit, even if the FIFO was not full, to ensure that the messages
> were transmitted in the order in which they were loaded.
>
> By storing the frames in the FIFO without requesting their transmission,
> we will be able to use the full size of the FIFO even in cases such as
> the one described above. The transmission interrupt will then trigger
> their transmission only when all the messages previously loaded into
> lower-priority (i.e. higher-numbered) buffer positions have been
> transmitted.
> 
> Suggested-by: Gianluca Falavigna <gianluca.falavigna@inwind.it>
> Signed-off-by: Dario Binacchi <dariobin@libero.it>
> 
> ---
> 
>  drivers/net/can/c_can/c_can.h      |  6 +++++
>  drivers/net/can/c_can/c_can_main.c | 42 +++++++++++++++++-------------
>  2 files changed, 30 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
> index 8fe7e2138620..fc499a70b797 100644
> --- a/drivers/net/can/c_can/c_can.h
> +++ b/drivers/net/can/c_can/c_can.h
> +static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> +{
> +	return ring->obj_num - (ring->head - ring->tail);
> +}
> +
>  #endif /* C_CAN_H */
> diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
> index 451ac9a9586a..4c061fef002c 100644
> --- a/drivers/net/can/c_can/c_can_main.c
> +++ b/drivers/net/can/c_can/c_can_main.c
> @@ -427,20 +427,6 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
>  	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
>  }
>  
> -static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> -{
> -	u8 head = c_can_get_tx_head(ring);
> -	u8 tail = c_can_get_tx_tail(ring);
> -
> -	/* This is not a FIFO. C/D_CAN sends out the buffers
> -	 * prioritized. The lowest buffer number wins.
> -	 */
> -	if (head < tail)
> -		return 0;
> -
> -	return ring->obj_num - head;
> -}
> -

Can you move that change into patch 3?

Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |


* Re: [RESEND PATCH 3/4] can: c_can: support tx ring algorithm
  2021-07-25 16:11 ` [RESEND PATCH 3/4] can: c_can: support tx ring algorithm Dario Binacchi
@ 2021-08-04  9:37   ` Marc Kleine-Budde
  0 siblings, 0 replies; 13+ messages in thread
From: Marc Kleine-Budde @ 2021-08-04  9:37 UTC (permalink / raw)
  To: Dario Binacchi
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

On 25.07.2021 18:11:49, Dario Binacchi wrote:
> The algorithm is already used successfully by other CAN drivers
> (e.g. mcp251xfd). Its implementation was kindly suggested to me by
> Marc Kleine-Budde following a patch I had previously submitted. You can
> find every detail at https://lore.kernel.org/patchwork/patch/1422929/.
> 
> The idea is that after this patch, it will be easier to patch the driver
> to use the message object memory as a true FIFO.
> 
> Suggested-by: Marc Kleine-Budde <mkl@pengutronix.de>
> Signed-off-by: Dario Binacchi <dariobin@libero.it>

With the proposed change (see other mail). Looks good to me!

regards,
Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |


* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-07-25 16:11 ` [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO Dario Binacchi
  2021-08-04  9:34   ` Marc Kleine-Budde
@ 2021-08-04  9:45   ` Marc Kleine-Budde
  2021-08-05 20:16     ` Dario Binacchi
  1 sibling, 1 reply; 13+ messages in thread
From: Marc Kleine-Budde @ 2021-08-04  9:45 UTC (permalink / raw)
  To: Dario Binacchi
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

On 25.07.2021 18:11:50, Dario Binacchi wrote:
> diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
> index 8fe7e2138620..fc499a70b797 100644
> --- a/drivers/net/can/c_can/c_can.h
> +++ b/drivers/net/can/c_can/c_can.h
> @@ -200,6 +200,7 @@ struct c_can_priv {
>  	atomic_t sie_pending;
>  	unsigned long tx_dir;
>  	int last_status;
> +	spinlock_t tx_lock;

What does the spin lock protect?

>  	struct c_can_tx_ring tx;
>  	u16 (*read_reg)(const struct c_can_priv *priv, enum reg index);
>  	void (*write_reg)(const struct c_can_priv *priv, enum reg index, u16 val);
> @@ -236,4 +237,9 @@ static inline u8 c_can_get_tx_tail(const struct c_can_tx_ring *ring)
>  	return ring->tail & (ring->obj_num - 1);
>  }
>  
> +static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> +{
> +	return ring->obj_num - (ring->head - ring->tail);
> +}
> +
>  #endif /* C_CAN_H */
> diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
> index 451ac9a9586a..4c061fef002c 100644
> --- a/drivers/net/can/c_can/c_can_main.c
> +++ b/drivers/net/can/c_can/c_can_main.c
> @@ -427,20 +427,6 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
>  	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
>  }
>  
> -static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> -{
> -	u8 head = c_can_get_tx_head(ring);
> -	u8 tail = c_can_get_tx_tail(ring);
> -
> -	/* This is not a FIFO. C/D_CAN sends out the buffers
> -	 * prioritized. The lowest buffer number wins.
> -	 */
> -	if (head < tail)
> -		return 0;
> -
> -	return ring->obj_num - head;
> -}
> -
>  static bool c_can_tx_busy(const struct c_can_priv *priv,
>  			  const struct c_can_tx_ring *tx_ring)
>  {
> @@ -470,7 +456,7 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
>  	struct can_frame *frame = (struct can_frame *)skb->data;
>  	struct c_can_priv *priv = netdev_priv(dev);
>  	struct c_can_tx_ring *tx_ring = &priv->tx;
> -	u32 idx, obj;
> +	u32 idx, obj, cmd = IF_COMM_TX;
>  
>  	if (can_dropped_invalid_skb(dev, skb))
>  		return NETDEV_TX_OK;
> @@ -483,7 +469,11 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
>  	if (c_can_get_tx_free(tx_ring) == 0)
>  		netif_stop_queue(dev);
>  
> -	obj = idx + priv->msg_obj_tx_first;
> +	spin_lock_bh(&priv->tx_lock);

What does the spin_lock protect? The ndo_start_xmit function is properly
serialized by the networking core.

Otherwise the patch looks good!

Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |


* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-08-04  9:34   ` Marc Kleine-Budde
@ 2021-08-05 20:12     ` Dario Binacchi
  2021-08-06  7:52       ` Marc Kleine-Budde
  0 siblings, 1 reply; 13+ messages in thread
From: Dario Binacchi @ 2021-08-05 20:12 UTC (permalink / raw)
  To: Marc Kleine-Budde
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

Hi Marc,

> Il 04/08/2021 11:34 Marc Kleine-Budde <mkl@pengutronix.de> ha scritto:
> 
>  
> On 25.07.2021 18:11:50, Dario Binacchi wrote:
> > As reported by a comment in c_can_start_xmit(), this was not a FIFO.
> > The C/D_CAN controller sends out the buffers prioritized so that the
> > lowest buffer number wins.
> >
> > What did c_can_start_xmit() do if head was less than tail in the tx
> > ring? It waited until all the frames queued in the FIFO were actually
> > transmitted by the controller before accepting a new CAN frame to
> > transmit, even if the FIFO was not full, to ensure that the messages
> > were transmitted in the order in which they were loaded.
> >
> > By storing the frames in the FIFO without requesting their transmission,
> > we will be able to use the full size of the FIFO even in cases such as
> > the one described above. The transmission interrupt will then trigger
> > their transmission only when all the messages previously loaded into
> > lower-priority (i.e. higher-numbered) buffer positions have been
> > transmitted.
> > 
> > Suggested-by: Gianluca Falavigna <gianluca.falavigna@inwind.it>
> > Signed-off-by: Dario Binacchi <dariobin@libero.it>
> > 
> > ---
> > 
> >  drivers/net/can/c_can/c_can.h      |  6 +++++
> >  drivers/net/can/c_can/c_can_main.c | 42 +++++++++++++++++-------------
> >  2 files changed, 30 insertions(+), 18 deletions(-)
> > 
> > diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
> > index 8fe7e2138620..fc499a70b797 100644
> > --- a/drivers/net/can/c_can/c_can.h
> > +++ b/drivers/net/can/c_can/c_can.h
> > +static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> > +{
> > +	return ring->obj_num - (ring->head - ring->tail);
> > +}
> > +
> >  #endif /* C_CAN_H */
> > diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
> > index 451ac9a9586a..4c061fef002c 100644
> > --- a/drivers/net/can/c_can/c_can_main.c
> > +++ b/drivers/net/can/c_can/c_can_main.c
> > @@ -427,20 +427,6 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
> >  	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
> >  }
> >  
> > -static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> > -{
> > -	u8 head = c_can_get_tx_head(ring);
> > -	u8 tail = c_can_get_tx_tail(ring);
> > -
> > -	/* This is not a FIFO. C/D_CAN sends out the buffers
> > -	 * prioritized. The lowest buffer number wins.
> > -	 */
> > -	if (head < tail)
> > -		return 0;
> > -
> > -	return ring->obj_num - head;
> > -}
> > -
> 
> Can you move that change into patch 3?

Patch 3 adds the ring transmission algorithm without compromising the
message transmission order. This is not a FIFO. C/D_CAN controller sends
out the buffers prioritized. The lowest buffer number wins, so moving the
change into patch 3 may not guarantee the transmission order.
In patch 3, however, I will move c_can_get_tx_free() from c_can_main.c to 
c_can.h, so that in patch 4 it will be clearer how the routine has changed.
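
For a concrete head/tail the two variants differ as follows (a quick
user-space sketch, not driver code; the function names are just for
illustration, assuming 16 tx objects):

#include <stdio.h>

#define OBJ_NUM 16U

/* patch 3: refuse new frames once head wraps below tail, so that a
 * lower-numbered (higher-priority) object can never overtake the
 * frames that are still pending.
 */
static unsigned int tx_free_ordered(unsigned int head, unsigned int tail)
{
	unsigned int h = head & (OBJ_NUM - 1);
	unsigned int t = tail & (OBJ_NUM - 1);

	return (h < t) ? 0 : OBJ_NUM - h;
}

/* patch 4: the whole ring may be used, because wrapped frames are only
 * cached and kicked after the pending ones have completed.
 */
static unsigned int tx_free_cached(unsigned int head, unsigned int tail)
{
	return OBJ_NUM - (head - tail);
}

int main(void)
{
	/* head wrapped past the end while two frames are still pending */
	printf("patch 3: %u free\n", tx_free_ordered(17, 15)); /* 0  */
	printf("patch 4: %u free\n", tx_free_cached(17, 15));  /* 14 */
	return 0;
}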

Thanks and regards,
Dario

> 
> Marc
> 
> -- 
> Pengutronix e.K.                 | Marc Kleine-Budde           |
> Embedded Linux                   | https://www.pengutronix.de  |
> Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
> Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |

* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-08-04  9:45   ` Marc Kleine-Budde
@ 2021-08-05 20:16     ` Dario Binacchi
  2021-08-06  9:25       ` Marc Kleine-Budde
  0 siblings, 1 reply; 13+ messages in thread
From: Dario Binacchi @ 2021-08-05 20:16 UTC (permalink / raw)
  To: Marc Kleine-Budde
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

Hi Marc,

> Il 04/08/2021 11:45 Marc Kleine-Budde <mkl@pengutronix.de> ha scritto:
> 
>  
> On 25.07.2021 18:11:50, Dario Binacchi wrote:
> > diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
> > index 8fe7e2138620..fc499a70b797 100644
> > --- a/drivers/net/can/c_can/c_can.h
> > +++ b/drivers/net/can/c_can/c_can.h
> > @@ -200,6 +200,7 @@ struct c_can_priv {
> >  	atomic_t sie_pending;
> >  	unsigned long tx_dir;
> >  	int last_status;
> > +	spinlock_t tx_lock;
> 
> What does the spin lock protect?
> 
> >  	struct c_can_tx_ring tx;
> >  	u16 (*read_reg)(const struct c_can_priv *priv, enum reg index);
> >  	void (*write_reg)(const struct c_can_priv *priv, enum reg index, u16 val);
> > @@ -236,4 +237,9 @@ static inline u8 c_can_get_tx_tail(const struct c_can_tx_ring *ring)
> >  	return ring->tail & (ring->obj_num - 1);
> >  }
> >  
> > +static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> > +{
> > +	return ring->obj_num - (ring->head - ring->tail);
> > +}
> > +
> >  #endif /* C_CAN_H */
> > diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
> > index 451ac9a9586a..4c061fef002c 100644
> > --- a/drivers/net/can/c_can/c_can_main.c
> > +++ b/drivers/net/can/c_can/c_can_main.c
> > @@ -427,20 +427,6 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
> >  	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
> >  }
> >  
> > -static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> > -{
> > -	u8 head = c_can_get_tx_head(ring);
> > -	u8 tail = c_can_get_tx_tail(ring);
> > -
> > -	/* This is not a FIFO. C/D_CAN sends out the buffers
> > -	 * prioritized. The lowest buffer number wins.
> > -	 */
> > -	if (head < tail)
> > -		return 0;
> > -
> > -	return ring->obj_num - head;
> > -}
> > -
> >  static bool c_can_tx_busy(const struct c_can_priv *priv,
> >  			  const struct c_can_tx_ring *tx_ring)
> >  {
> > @@ -470,7 +456,7 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
> >  	struct can_frame *frame = (struct can_frame *)skb->data;
> >  	struct c_can_priv *priv = netdev_priv(dev);
> >  	struct c_can_tx_ring *tx_ring = &priv->tx;
> > -	u32 idx, obj;
> > +	u32 idx, obj, cmd = IF_COMM_TX;
> >  
> >  	if (can_dropped_invalid_skb(dev, skb))
> >  		return NETDEV_TX_OK;
> > @@ -483,7 +469,11 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
> >  	if (c_can_get_tx_free(tx_ring) == 0)
> >  		netif_stop_queue(dev);
> >  
> > -	obj = idx + priv->msg_obj_tx_first;
> > +	spin_lock_bh(&priv->tx_lock);
> 
> What does the spin_lock protect? The ndo_start_xmit function is properly
> serialized by the networking core.
> 

The spin_lock protects the access to the IF_TX interface. Enabling the
transmission of cached messages occurs in interrupt context, and the
use of the IF_RX interface, which would avoid the use of the spinlock,
has not been validated by the tests.

Thanks and regards,
Dario

> Otherwise the patch looks good!
> 
> Marc
> 
> -- 
> Pengutronix e.K.                 | Marc Kleine-Budde           |
> Embedded Linux                   | https://www.pengutronix.de  |
> Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
> Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |

* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-08-05 20:12     ` Dario Binacchi
@ 2021-08-06  7:52       ` Marc Kleine-Budde
  0 siblings, 0 replies; 13+ messages in thread
From: Marc Kleine-Budde @ 2021-08-06  7:52 UTC (permalink / raw)
  To: Dario Binacchi
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

On 05.08.2021 22:12:18, Dario Binacchi wrote:
> > > diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h
> > > index 8fe7e2138620..fc499a70b797 100644
> > > --- a/drivers/net/can/c_can/c_can.h
> > > +++ b/drivers/net/can/c_can/c_can.h
> > > +static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> > > +{
> > > +	return ring->obj_num - (ring->head - ring->tail);
> > > +}
> > > +
> > >  #endif /* C_CAN_H */
> > > diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
> > > index 451ac9a9586a..4c061fef002c 100644
> > > --- a/drivers/net/can/c_can/c_can_main.c
> > > +++ b/drivers/net/can/c_can/c_can_main.c
> > > @@ -427,20 +427,6 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface,
> > >  	c_can_object_put(dev, iface, obj, IF_COMM_RCV_SETUP);
> > >  }
> > >  
> > > -static u8 c_can_get_tx_free(const struct c_can_tx_ring *ring)
> > > -{
> > > -	u8 head = c_can_get_tx_head(ring);
> > > -	u8 tail = c_can_get_tx_tail(ring);
> > > -
> > > -	/* This is not a FIFO. C/D_CAN sends out the buffers
> > > -	 * prioritized. The lowest buffer number wins.
> > > -	 */
> > > -	if (head < tail)
> > > -		return 0;
> > > -
> > > -	return ring->obj_num - head;
> > > -}
> > > -
> > 
> > Can you move that change into patch 3?
> 
> Patch 3 adds the ring transmission algorithm without compromising the
> message transmission order. This is not a FIFO.

Right, thanks!

> C/D_CAN controller sends out the buffers prioritized. The lowest
> buffer number wins, so moving the change into patch 3 may not
> guarantee the transmission order. In patch 3, however, I will move
> c_can_get_tx_free() from c_can_main.c to c_can.h, so that in patch 4
> it will be clearer how the routine has changed.

The updated patch looks much nicer now, thanks!

Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |


* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-08-05 20:16     ` Dario Binacchi
@ 2021-08-06  9:25       ` Marc Kleine-Budde
  2021-08-07 12:36         ` Dario Binacchi
  0 siblings, 1 reply; 13+ messages in thread
From: Marc Kleine-Budde @ 2021-08-06  9:25 UTC (permalink / raw)
  To: Dario Binacchi
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev

On 05.08.2021 22:16:06, Dario Binacchi wrote:
> > > --- a/drivers/net/can/c_can/c_can.h
> > > +++ b/drivers/net/can/c_can/c_can.h
> > > @@ -200,6 +200,7 @@ struct c_can_priv {
> > >  	atomic_t sie_pending;
> > >  	unsigned long tx_dir;
> > >  	int last_status;
> > > +	spinlock_t tx_lock;
> > 
> > What does the spin lock protect?
[...]
> > > @@ -483,7 +469,11 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
> > >  	if (c_can_get_tx_free(tx_ring) == 0)
> > >  		netif_stop_queue(dev);
> > >  
> > > -	obj = idx + priv->msg_obj_tx_first;
> > > +	spin_lock_bh(&priv->tx_lock);
> > 
> > What does the spin_lock protect? The ndo_start_xmit function is properly
> > serialized by the networking core.
> > 
> 
> The spin_lock protects the access to the IF_TX interface.

How? You only use the spin_lock in c_can_start_xmit(), but not anywhere
else.

> Enabling the transmission of cached messages occurs in interrupt context

The call chain is c_can_poll() -> c_can_do_tx(), and c_can_poll() is
called from NAPI, which is not the IRQ handler.

> and the use of the IF_RX interface, which would avoid the use of the
> spinlock, has not been validated by the tests.

What do you mean by 'has not been validated'?

The driver already uses IF_RX to avoid concurrent access in
c_can_do_tx() for c_can_inval_tx_object() [1], why not use IF_RX for
c_can_object_put(), too?

[1] https://lore.kernel.org/r/20210302215435.18286-4-dariobin@libero.it

Marc

-- 
Pengutronix e.K.                 | Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |


* Re: [RESEND PATCH 4/4] can: c_can: cache frames to operate as a true FIFO
  2021-08-06  9:25       ` Marc Kleine-Budde
@ 2021-08-07 12:36         ` Dario Binacchi
  0 siblings, 0 replies; 13+ messages in thread
From: Dario Binacchi @ 2021-08-07 12:36 UTC (permalink / raw)
  To: Marc Kleine-Budde
  Cc: linux-kernel, Gianluca Falavigna, Andrew Lunn, David S. Miller,
	Jakub Kicinski, Oliver Hartkopp, Vincent Mailhol,
	Wolfgang Grandegger, linux-can, netdev


> Il 06/08/2021 11:25 Marc Kleine-Budde <mkl@pengutronix.de> ha scritto:
> 
>  
> On 05.08.2021 22:16:06, Dario Binacchi wrote:
> > > > --- a/drivers/net/can/c_can/c_can.h
> > > > +++ b/drivers/net/can/c_can/c_can.h
> > > > @@ -200,6 +200,7 @@ struct c_can_priv {
> > > >  	atomic_t sie_pending;
> > > >  	unsigned long tx_dir;
> > > >  	int last_status;
> > > > +	spinlock_t tx_lock;
> > > 
> > > What does the spin lock protect?
> [...]
> > > > @@ -483,7 +469,11 @@ static netdev_tx_t c_can_start_xmit(struct sk_buff *skb,
> > > >  	if (c_can_get_tx_free(tx_ring) == 0)
> > > >  		netif_stop_queue(dev);
> > > >  
> > > > -	obj = idx + priv->msg_obj_tx_first;
> > > > +	spin_lock_bh(&priv->tx_lock);
> > > 
> > > What does the spin_lock protect? The ndo_start_xmit function is properly
> > > serialized by the networking core.
> > > 
> > 
> > The spin_lock protects the access to the IF_TX interface.
> 
> How? You only use the spin_lock in c_can_start_xmit(), but not anywhere
> else.
> 
> > Enabling the transmission of cached messages occurs in interrupt context
> 
> The call chain is c_can_poll() -> c_can_do_tx(), and c_can_poll() is
> called from NAPI, which is not the IRQ handler.
> 
> > and the use of the IF_RX interface, which would avoid the use of the
> > spinlock, has not been validated by the tests.
> 
> What do you mean by 'has not been validated'?

It's been a while since I submitted the series and I certainly got confused.

> 
> The driver already uses IF_RX to avoid concurrent access in
> c_can_do_tx() for c_can_inval_tx_object() [1], why not use IF_RX for
> c_can_object_put(), too?
> 
> [1] https://lore.kernel.org/r/20210302215435.18286-4-dariobin@libero.it

Right!

Thanks and Regards,
Dario

> 
> Marc
> 
> -- 
> Pengutronix e.K.                 | Marc Kleine-Budde           |
> Embedded Linux                   | https://www.pengutronix.de  |
> Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
> Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |
