From: Arnaud POULIQUEN <arnaud.pouliquen@foss.st.com>
To: Tim Blechmann <tim@klingt.org>,
	Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Tim Blechmann <tim.blechmann@gmail.com>,
	<linux-remoteproc@vger.kernel.org>,
	<agostino.carballeira@native-instruments.de>
Subject: Re: [PATCH 1/1] rpmsg: virtio_rpmsg_bus - prevent possible race condition
Date: Wed, 13 Sep 2023 09:44:09 +0200
Message-ID: <d37a0431-9b4d-1215-2c2c-14c8963f93a7@foss.st.com>
In-Reply-To: <d2a303ad-4dcf-de52-38db-53695169fe33@klingt.org>

Hello Tim,

On 9/13/23 03:11, Tim Blechmann wrote:
> On 9/13/23 09:07, Tim Blechmann wrote:
>>>> virtio emits a few times:
>>>> [  404.XXXXXX] No more buffers in queue
>>>> and then
>>>> [  404.588722] remoteproc remoteproc0: vq index 1 is interrupted
>>>> [  404.597249] virtqueue interrupt with no work for 6d53f13a
>>>
>>> Difficult to understand the scheduling with your trace. Could you enable
>>> ftrace to observe it (mailbox interrupts, rpmsg and virtio functions)?
> 
> full trace that my worker did (quite large, so I zipped it):
> https://fileservice.klingt.org/downloads/15526951694567393180-tracefull.zip

Please find below an extract of your trace with my analysis:


 stm32mp1_bulk_p-390     [001] .....   907.241226: rpmsg_send <-rpmsg_intercore_send_buffer.constprop.0
 stm32mp1_bulk_p-390     [001] .....   907.241228: virtio_rpmsg_send <-rpmsg_send
 stm32mp1_bulk_p-390     [001] .....   907.241237: virtqueue_enable_cb <-rpmsg_send_offchannel_raw
 stm32mp1_bulk_p-390     [001] .....   907.241239: virtqueue_enable_cb_prepare

At this point it seems that there are no more free TX buffers.
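
For reference, this is roughly the path taken in rpmsg_send_offchannel_raw()
when the TX buffer pool is exhausted (a simplified paraphrase of
drivers/rpmsg/virtio_rpmsg_bus.c, not a verbatim copy):

    /* grab a tx buffer; if none is free and the caller cannot wait, bail */
    msg = get_a_tx_buf(vrp);
    if (!msg && !wait)
            return -ENOMEM;

    /* no free buffer? wait for one, but bail after 15 seconds */
    while (!msg) {
            /* enable "tx-complete" interrupts, if not already enabled */
            rpmsg_upref_sleepers(vrp);

            /* sleep until a free buffer is available or 15 secs elapse */
            err = wait_event_interruptible_timeout(vrp->sendq,
                                    (msg = get_a_tx_buf(vrp)),
                                    msecs_to_jiffies(15000));

            /* disable "tx-complete" interrupts if we're the last sleeper */
            rpmsg_downref_sleepers(vrp);

            /* timeout? */
            if (!err) {
                    dev_err(&rpdev->dev,
                            "timeout waiting for a tx buffer\n");
                    return -ERESTARTSYS;
            }
    }

rpmsg_upref_sleepers() is what ends up calling virtqueue_enable_cb() on the
TX virtqueue, which is the "virtqueue_enable_cb <-rpmsg_send_offchannel_raw"
line in your trace, and the 15-second timeout of this wait is the one you
hit further below.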

<-rpmsg_recv_single
     kworker/0:4-67      [000] .....   907.242533: vring_interrupt <-rproc_vq_interrupt
     kworker/0:4-67      [000] .....   907.242536: rpmsg_xmit_done

Here you receive an interrupt indicating that a TX buffer has been released
by the remote processor. That's the expected behavior.
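
The rpmsg_xmit_done() callback itself does nothing more than wake up the
senders sleeping on the sendq (again simplified from virtio_rpmsg_bus.c):

    static void rpmsg_xmit_done(struct virtqueue *svq)
    {
            struct virtproc_info *vrp = svq->vdev->priv;

            /* wake up potential senders that are waiting for a tx buffer */
            wake_up_interruptible(&vrp->sendq);
    }

So a sender blocked in rpmsg_send() can only make progress if this callback
actually runs.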


<-rpmsg_send_offchannel_raw
 stm32mp1_bulk_p-390     [000] .....   984.054941: rpmsg_send <-rpmsg_intercore_send_buffer.constprop.0
 stm32mp1_bulk_p-390     [000] .....   984.054943: virtio_rpmsg_send <-rpmsg_send
 stm32mp1_bulk_p-390     [000] .....   984.054956: virtqueue_enable_cb <-rpmsg_send_offchannel_raw
 stm32mp1_bulk_p-390     [000] .....   984.054958: virtqueue_enable_cb_prepare <-virtqueue_enable_cb
 stm32mp1_bulk_p-390     [000] .....   999.398667: virtqueue_disable_cb <-rpmsg_send_offchannel_raw
 stm32mp1_bulk_p-390     [000] .....   999.414840: rpmsg_send <-rpmsg_intercore_send_buffer.constprop.0
 stm32mp1_bulk_p-390     [000] .....   999.414843: virtio_rpmsg_send <-rpmsg_send
 stm32mp1_bulk_p-390     [000] .....   999.414855: virtqueue_enable_cb <-rpmsg_send_offchannel_raw
 stm32mp1_bulk_p-390     [000] .....   999.414857: virtqueue_enable_cb_prepare <-virtqueue_enable_cb

Here you again run out of TX buffers. From this point there is no more
activity, neither in TX nor in RX, until the 15-second timeout.
If you look at rproc_vq_interrupt, the last call occurs at 907.242533.


As there are no more vring_interrupt calls in either direction, the issue is
probably either in the Cortex-M firmware, which seems to be stalled, or due
to the IRQs being disabled on the Linux side.
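
For context, interrupts only reach the virtqueues through the remoteproc
layer: the platform mailbox callback (in stm32_rproc.c, via a workqueue)
calls rproc_vq_interrupt(), which looks like this (slightly simplified from
drivers/remoteproc/remoteproc_virtio.c):

    irqreturn_t rproc_vq_interrupt(struct rproc *rproc, int notifyid)
    {
            struct rproc_vring *rvring;

            dev_dbg(&rproc->dev, "vq index %d is interrupted\n", notifyid);

            rvring = idr_find(&rproc->notifyids, notifyid);
            if (!rvring || !rvring->vq)
                    return IRQ_NONE;

            return vring_interrupt(0, rvring->vq);
    }

This is where the "vq index 1 is interrupted" message you quoted earlier
comes from. If the coprocessor stops kicking the mailbox, or the mailbox IRQ
remains masked, neither rpmsg_xmit_done() nor rpmsg_recv_done() can ever run.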

 stm32mp1_bulk_p-390     [000] .....  1014.758678: virtqueue_disable_cb <-rpmsg_send_offchannel_raw
 stm32mp1_bulk_p-390     [000] .....  1014.774802: rpmsg_send <-rpmsg_intercore_send_buffer.constprop.0
 stm32mp1_bulk_p-390     [000] .....  1014.774804: virtio_rpmsg_send <-rpmsg_send
 stm32mp1_bulk_p-390     [000] .....  1014.774815: virtqueue_enable_cb

Regards,
Arnaud

> 
> the interesting part starts at 907.2244250
> 
> many thanks,
> tim
