linux-remoteproc.vger.kernel.org archive mirror
* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
       [not found] <CAGETcx8ykYhBzkqZT+5G9oz2MOiHaSy4F3JoHudgK9WFnmRjbw@mail.gmail.com>
@ 2021-05-26  0:59 ` Bjorn Andersson
  0 siblings, 0 replies; 9+ messages in thread
From: Bjorn Andersson @ 2021-05-26  0:59 UTC (permalink / raw)
  To: Saravana Kannan
  Cc: sidgup, linux-arm-kernel, linux-arm-msm, linux-kernel,
	linux-remoteproc, ohad, psodagud, Android Kernel Team

On Tue 25 May 18:54 CDT 2021, Saravana Kannan wrote:

> On XXXXX, Siddharth Gupta wrote:
> > On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
> > > On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
> > >
> > >> Subdevices at the beginning of the subdev list should have
> > >> higher priority than those at the end of the list. Reverse
> > >> traversal of the list causes priority inversion, which can
> > >> impact the performance of the device.
> > >>
> > > The subdev list represents the layers of the communication onion: we
> > > bring them up inside out and take them down outside in.
> > >
> > > This stems from the primary idea that we want to be able to shut things
> > > down cleanly (in the case of a stop) and we pass the "crashed" flag to
> > > indicate to each recipient during "stop" that it may not rely on the
> > > response of a lower layer.
> > >
> > > As such, I don't think it's right to say that we have a priority
> > > inversion.
> > My understanding of the topic was that each subdevice should be
> > independent of the other. In our case unfortunately the sysmon
> > subdevice depends on the glink endpoint.
> 
> In that case, the glink has to be prepared/started before sysmon, right?
> 

Correct, we prepare glink, then prepare sysmon, start glink then start
sysmon - and reverse for stop and unprepare.

> >
> > However the priority inversion doesn't happen in these
> > subdevices, it happens due to the SSR notifications that we send
> > to kernel clients. In this case kernel clients also can have QMI
> > sockets that in turn depend on the glink endpoint, which means
> > when they go to release the QMI socket a broadcast will be sent
> > out to all connected clients about the closure of the connection
> > which in this case happens to be the remoteproc which died. So
> > if we peel the onion, we will unnecessarily be waiting for a
> > dead remoteproc.
> 
> So why can't the QMI layer be smart about this and check that the
> remoteproc hasn't crashed before you try to communicate with it?

I guess we could do that, if we really have to. But I find it quite
ugly and would like to avoid it.

> Or if the
> glink is torn down before QMI gets to broadcast, then it's a pretty clear
> indication of failure and just notify all the kernel side QMI clients?
> 

No, the system is designed to deal with this; as the remoteproc goes
down glink will be torn down, which will tear down the qrtr link to
whatever qrtr nodes exist on (or beyond) that remote processor.

So if it's down the qrtr will naturally fail because there's no path to
that qrtr node.

> > >
> > >> For example a device adds the glink, sysmon and ssr subdevs
> > >> to its list. During a crash the ssr notification would go
> > >> before the glink and sysmon notifications. This can cause a
> > >> degraded response when a client driver waits for a response
> > >> from the crashed rproc.
> > >>
> > > In general the design is such that components are not expected to
> > > communicate with the crashed remote when "crashed" is set, this avoids
> > > the single-remote crash.
> > Here the glink device on the rpmsg bus won't know about the
> > crashed remoteproc till we send glink notification first, right?
> 
> Why not just query the current state of the remote proc before trying to
> talk to it? It should be a quick check.
> 

We notify subdevices (and thereby indirectly other drivers) either that
the remoteproc is going down cleanly or that it's dead.

The problem seen here is that when remoteproc tells some component that
the particular remote processor is dead (crashed/not going to respond),
that component reacts by attempting to communicate with the dying remote
processor - which will naturally time out.

In the general case the solution is simply to stop communicating with
the remote when you're told it's dead. The question is what kind of
implicit operations we're seeing here.

> > Since we send out sysmon and SSR notifications first, the glink
> > device will still be "alive" on the rpmsg bus.
> > >
> > > The case where this isn't holding up is when two remote processors
> > > crash simultaneously, in which case e.g. sysmon has been seen hitting
> > > its timeout waiting for an ack from a dead remoteproc - but I was under
> > > the impression that this window shrunk dramatically as a side effect of
> > > us fixing the notification ordering.
> > You are right, the window would become smaller in the case of two
> > remoteprocs, but this issue can come up with even a single
> > remoteproc unless we prioritize certain subdevices.
> 
> I think the main problem you have here is rproc sub devices that depend on
> other rproc sub devices. But there's no dependency tracking here. Your
> change just happens to work for your specific case because the order of the
> sub devices in the list happens to work for your inter-subdevice
> dependencies. But this is definitely not going to work for all users of
> subdevices.
> 

Right, in the particular case I'm talking about here we saw two remote
processors dying concurrently and ended up in sysmon with each one
trying to notify the other about the change in status. But as I said, to
a large degree this has been avoided by making sure that sysmon checks
the status of the remoteproc before attempting to send. It is however
still possible that you get past this check before the recipient of your
notification dies, in which case you would end up having to wait out the
timeout.

It might be possible to complete the process waiting for a response in
this case, but I don't have any data indicating if it's worth it.

And more importantly, this is not the problem that Siddharth is
reporting.

> If keeping track of dependency is too much complexity (I haven't read
> enough rproc code to comment on that), at the least, it looks like you need
> another ops instead of changing the order of stop() callbacks. Or at a
> minimum pick the ordering based on the "crashed" flag. A blanket, I'll just
> switch the ordering of stop() for everyone for all cases is wrong.
> 

I unfortunately don't see which problem you're trying to solve; the
above looks to me like an extreme micro-optimization and has nothing to
do with dependencies.

> In fact, in the normal/clean shutdown case, I'd think you'll want to stop
> the subdevices in reverse initialization order so that you can cleanly stop
> QMI/sysmon first before shutting down glink.
> 

Yes.

Regards,
Bjorn

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
  2021-05-26  1:16       ` Siddharth Gupta
@ 2021-05-26  3:00         ` Bjorn Andersson
  0 siblings, 0 replies; 9+ messages in thread
From: Bjorn Andersson @ 2021-05-26  3:00 UTC (permalink / raw)
  To: Siddharth Gupta
  Cc: ohad, linux-remoteproc, linux-kernel, linux-arm-msm,
	linux-arm-kernel, psodagud

On Tue 25 May 20:16 CDT 2021, Siddharth Gupta wrote:

> 
> On 5/25/2021 5:37 PM, Bjorn Andersson wrote:
> > On Tue 25 May 14:48 CDT 2021, Siddharth Gupta wrote:
> > 
> > > On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
> > > > On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
> > > > 
> > > > > Subdevices at the beginning of the subdev list should have
> > > > > higher priority than those at the end of the list. Reverse
> > > > > traversal of the list causes priority inversion, which can
> > > > > impact the performance of the device.
> > > > > 
> > > > The subdev list represents the layers of the communication onion: we
> > > > bring them up inside out and take them down outside in.
> > > > 
> > > > This stems from the primary idea that we want to be able to shut things
> > > > down cleanly (in the case of a stop) and we pass the "crashed" flag to
> > > > indicate to each recipient during "stop" that it may not rely on the
> > > > response of a lower layer.
> > > > 
> > > > As such, I don't think it's right to say that we have a priority
> > > > inversion.
> > > My understanding of the topic was that each subdevice should be
> > > independent of the other. In our case unfortunately the sysmon
> > > subdevice depends on the glink endpoint.
> > > 
> > We need to care for the ordering if sysmon is to be able to use smd or
> > glink to send the shutdown request.
> Right, I meant the dependence of either sysmon or SSR is on QMI,
> which in turn depends on glink.

The difference between the two is that sysmon ensures that it won't send
any messages to the stopping remoteproc.

> > 
> > > However the priority inversion doesn't happen in these
> > > subdevices, it happens due to the SSR notifications that we send
> > > to kernel clients. In this case kernel clients also can have QMI
> > > sockets that in turn depend on the glink endpoint, which means
> > > when they go to release the QMI socket a broadcast will be sent
> > > out to all connected clients about the closure of the connection
> > > which in this case happens to be the remoteproc which died. So
> > > if we peel the onion, we will unnecessarily be waiting for a
> > > dead remoteproc.
> > I see, that is indeed a problem.
> > 
> > > > > For example a device adds the glink, sysmon and ssr subdevs
> > > > > to its list. During a crash the ssr notification would go
> > > > > before the glink and sysmon notifications. This can cause a
> > > > > degraded response when a client driver waits for a response
> > > > > from the crashed rproc.
> > > > > 
> > > > In general the design is such that components are not expected to
> > > > communicate with the crashed remote when "crashed" is set, this avoids
> > > > the single-remote crash.
> > > Here the glink device on the rpmsg bus won't know about the
> > > crashed remoteproc till we send glink notification first, right?
> > > Since we send out sysmon and SSR notifications first, the glink
> > > device will still be "alive" on the rpmsg bus.
> > Yes, and this all stems from the design that everything communicating
> > over glink is a child of glink, which isn't the case when you have an SSR
> > event that will end up blocking the sequence in qrtr.
> > 
> > For sysmon this is not a problem, because sysmon is implemented to not
> > attempt to communicate with the parent remoteproc upon a crash.
> Yes, exactly.
> > And all rpmsg devices will be torn down as a result of glink being torn
> > down, so glink can fail early based on this (not sure if this was
> > implemented downstream though).
> This was implemented downstream as a part of an early
> notification that was sent out to the glink device.
> > 
> > > > The case where this isn't holding up is when two remote processors
> > > > crash simultaneously, in which case e.g. sysmon has been seen hitting
> > > > its timeout waiting for an ack from a dead remoteproc - but I was under
> > > > the impression that this window shrunk dramatically as a side effect of
> > > > us fixing the notification ordering.
> > > You are right, the window would become smaller in the case of two
> > > remoteprocs, but this issue can come up with even a single
> > > remoteproc unless we prioritize certain subdevices.
> > The problem that you describe where an SSR notification will directly or
> > indirectly attempt to communicate over QRTR will certainly cause issues
> > in the single-rproc case as well.
> > 
> > 
> > But is there any reason why these listeners have to do the wrong thing at
> > stop(crashed=true)?
> I don't think the listeners are doing anything wrong by closing
> the QMI handle/QRTR socket, the issue is that the glink device
> still thinks that it can communicate.
> > 

The design is such that subdev notification handlers are not allowed to
communicate with the dying remoteproc when crashed=true.

This means that any listeners to these notifications need to ensure
they play nice with regards to the dying remoteproc, which precludes
sending messages to it or waiting for any new or pending incoming
notifications - QMI or otherwise.

As such I think it makes sense to consider qmi_handle_release() to be an
operation that will send messages and should not be performed in the
notification handler.

> > > > > Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
> > > > > ---
> > > > >    drivers/remoteproc/remoteproc_core.c | 24 ++++++++++++++----------
> > > > >    1 file changed, 14 insertions(+), 10 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> > > > > index 626a6b90f..ac8fc42 100644
> > > > > --- a/drivers/remoteproc/remoteproc_core.c
> > > > > +++ b/drivers/remoteproc/remoteproc_core.c
> > > > > @@ -1167,7 +1167,7 @@ static int rproc_handle_resources(struct rproc *rproc,
> > > > >    static int rproc_prepare_subdevices(struct rproc *rproc)
> > > > >    {
> > > > > -	struct rproc_subdev *subdev;
> > > > > +	struct rproc_subdev *subdev, *itr;
> > > > >    	int ret;
> > > > >    	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > > > > @@ -1181,9 +1181,11 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
> > > > >    	return 0;
> > > > >    unroll_preparation:
> > > > > -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
> > > > > -		if (subdev->unprepare)
> > > > > -			subdev->unprepare(subdev);
> > > > > +	list_for_each_entry(itr, &rproc->subdevs, node) {
> > > > > +		if (itr == subdev)
> > > > > +			break;
> > > > > +		if (itr->unprepare)
> > > > > +			itr->unprepare(itr);
> > > > >    	}
> > > > >    	return ret;
> > > > > @@ -1191,7 +1193,7 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
> > > > >    static int rproc_start_subdevices(struct rproc *rproc)
> > > > >    {
> > > > > -	struct rproc_subdev *subdev;
> > > > > +	struct rproc_subdev *subdev, *itr;
> > > > >    	int ret;
> > > > >    	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > > > > @@ -1205,9 +1207,11 @@ static int rproc_start_subdevices(struct rproc *rproc)
> > > > >    	return 0;
> > > > >    unroll_registration:
> > > > > -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
> > > > > -		if (subdev->stop)
> > > > > -			subdev->stop(subdev, true);
> > > > > +	list_for_each_entry(itr, &rproc->subdevs, node) {
> > > > > +		if (itr == subdev)
> > > > > +			break;
> > > > > +		if (itr->stop)
> > > > > +			itr->stop(itr, true);
> > > > >    	}
> > > > >    	return ret;
> > > > > @@ -1217,7 +1221,7 @@ static void rproc_stop_subdevices(struct rproc *rproc, bool crashed)
> > > > >    {
> > > > >    	struct rproc_subdev *subdev;
> > > > > -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
> > > > > +	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > > > I presume this is the case you actually care about, can you help me
> > > > understand if you changed the others for consistency or if there's some
> > > > flow of events where that might be necessary.
> > > Yes you are right, I only changed the others for consistency.
> > > However, I will give this more thought and see if unprepare in
> > > the reverse order can make a difference.
> > > 
> > Per above argument I don't think things depend on the unrolling on error
> > happening in reverse order. But it's idiomatic.
> I say unprepare in any order might not make a difference because
> prepare would indicate to the subdevice that it should get its
> resources initialized because the remoteproc is going to come up,
> so unprepare would only be the subdevice releasing its resources.
> However start and stop in the reverse order will make a big
> difference. Please correct me if I am wrong.
> 

I think you're right, for the operations we describe today there
shouldn't be any dependencies between the layers for prepare and
unprepare.

Regards,
Bjorn


* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
  2021-05-26  0:37     ` Bjorn Andersson
@ 2021-05-26  1:16       ` Siddharth Gupta
  2021-05-26  3:00         ` Bjorn Andersson
  0 siblings, 1 reply; 9+ messages in thread
From: Siddharth Gupta @ 2021-05-26  1:16 UTC (permalink / raw)
  To: Bjorn Andersson
  Cc: ohad, linux-remoteproc, linux-kernel, linux-arm-msm,
	linux-arm-kernel, psodagud


On 5/25/2021 5:37 PM, Bjorn Andersson wrote:
> On Tue 25 May 14:48 CDT 2021, Siddharth Gupta wrote:
>
>> On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
>>> On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
>>>
>>>> Subdevices at the beginning of the subdev list should have
>>>> higher priority than those at the end of the list. Reverse
>>>> traversal of the list causes priority inversion, which can
>>>> impact the performance of the device.
>>>>
>>> The subdev list represents the layers of the communication onion: we
>>> bring them up inside out and take them down outside in.
>>>
>>> This stems from the primary idea that we want to be able to shut things
>>> down cleanly (in the case of a stop) and we pass the "crashed" flag to
>>> indicate to each recipient during "stop" that it may not rely on the
>>> response of a lower layer.
>>>
>>> As such, I don't think it's right to say that we have a priority
>>> inversion.
>> My understanding of the topic was that each subdevice should be
>> independent of the other. In our case unfortunately the sysmon
>> subdevice depends on the glink endpoint.
>>
> We need to care for the ordering if sysmon is to be able to use smd or
> glink to send the shutdown request.
Right, I meant the dependence of either sysmon or SSR is on QMI,
which in turn depends on glink.
>
>> However the priority inversion doesn't happen in these
>> subdevices, it happens due to the SSR notifications that we send
>> to kernel clients. In this case kernel clients also can have QMI
>> sockets that in turn depend on the glink endpoint, which means
>> when they go to release the QMI socket a broadcast will be sent
>> out to all connected clients about the closure of the connection
>> which in this case happens to be the remoteproc which died. So
>> if we peel the onion, we will unnecessarily be waiting for a
>> dead remoteproc.
> I see, that is indeed a problem.
>
>>>> For example a device adds the glink, sysmon and ssr subdevs
>>>> to its list. During a crash the ssr notification would go
>>>> before the glink and sysmon notifications. This can cause a
>>>> degraded response when a client driver waits for a response
>>>> from the crashed rproc.
>>>>
>>> In general the design is such that components are not expected to
>>> communicate with the crashed remote when "crashed" is set, this avoids
>>> the single-remote crash.
>> Here the glink device on the rpmsg bus won't know about the
>> crashed remoteproc till we send glink notification first, right?
>> Since we send out sysmon and SSR notifications first, the glink
>> device will still be "alive" on the rpmsg bus.
> Yes, and this all stems from the design that everything communicating
> over glink is a child of glink, which isn't the case when you have an SSR
> event that will end up blocking the sequence in qrtr.
>
> For sysmon this is not a problem, because sysmon is implemented to not
> attempt to communicate with the parent remoteproc upon a crash.
Yes, exactly.
> And all rpmsg devices will be torn down as a result of glink being torn
> down, so glink can fail early based on this (not sure if this was
> implemented downstream though).
This was implemented downstream as a part of an early
notification that was sent out to the glink device.
>
>>> The case where this isn't holding up is when two remote processors
>>> crash simultaneously, in which case e.g. sysmon has been seen hitting
>>> its timeout waiting for an ack from a dead remoteproc - but I was under
>>> the impression that this window shrunk dramatically as a side effect of
>>> us fixing the notification ordering.
>> You are right, the window would become smaller in the case of two
>> remoteprocs, but this issue can come up with even a single
>> remoteproc unless we prioritize certain subdevices.
> The problem that you describe where an SSR notification will directly or
> indirectly attempt to communicate over QRTR will certainly cause issues
> in the single-rproc case as well.
>
>
> But is there any reason why these listeners have to do the wrong thing at
> stop(crashed=true)?
I don't think the listeners are doing anything wrong by closing
the QMI handle/QRTR socket, the issue is that the glink device
still thinks that it can communicate.
>
>>>> Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
>>>> ---
>>>>    drivers/remoteproc/remoteproc_core.c | 24 ++++++++++++++----------
>>>>    1 file changed, 14 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
>>>> index 626a6b90f..ac8fc42 100644
>>>> --- a/drivers/remoteproc/remoteproc_core.c
>>>> +++ b/drivers/remoteproc/remoteproc_core.c
>>>> @@ -1167,7 +1167,7 @@ static int rproc_handle_resources(struct rproc *rproc,
>>>>    static int rproc_prepare_subdevices(struct rproc *rproc)
>>>>    {
>>>> -	struct rproc_subdev *subdev;
>>>> +	struct rproc_subdev *subdev, *itr;
>>>>    	int ret;
>>>>    	list_for_each_entry(subdev, &rproc->subdevs, node) {
>>>> @@ -1181,9 +1181,11 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
>>>>    	return 0;
>>>>    unroll_preparation:
>>>> -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
>>>> -		if (subdev->unprepare)
>>>> -			subdev->unprepare(subdev);
>>>> +	list_for_each_entry(itr, &rproc->subdevs, node) {
>>>> +		if (itr == subdev)
>>>> +			break;
>>>> +		if (itr->unprepare)
>>>> +			itr->unprepare(itr);
>>>>    	}
>>>>    	return ret;
>>>> @@ -1191,7 +1193,7 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
>>>>    static int rproc_start_subdevices(struct rproc *rproc)
>>>>    {
>>>> -	struct rproc_subdev *subdev;
>>>> +	struct rproc_subdev *subdev, *itr;
>>>>    	int ret;
>>>>    	list_for_each_entry(subdev, &rproc->subdevs, node) {
>>>> @@ -1205,9 +1207,11 @@ static int rproc_start_subdevices(struct rproc *rproc)
>>>>    	return 0;
>>>>    unroll_registration:
>>>> -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
>>>> -		if (subdev->stop)
>>>> -			subdev->stop(subdev, true);
>>>> +	list_for_each_entry(itr, &rproc->subdevs, node) {
>>>> +		if (itr == subdev)
>>>> +			break;
>>>> +		if (itr->stop)
>>>> +			itr->stop(itr, true);
>>>>    	}
>>>>    	return ret;
>>>> @@ -1217,7 +1221,7 @@ static void rproc_stop_subdevices(struct rproc *rproc, bool crashed)
>>>>    {
>>>>    	struct rproc_subdev *subdev;
>>>> -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
>>>> +	list_for_each_entry(subdev, &rproc->subdevs, node) {
>>> I presume this is the case you actually care about, can you help me
>>> understand if you changed the others for consistency or if there's some
>>> flow of events where that might be necessary.
>> Yes you are right, I only changed the others for consistency.
>> However, I will give this more thought and see if unprepare in
>> the reverse order can make a difference.
>>
> Per above argument I don't think things depend on the unrolling on error
> happening in reverse order. But it's idiomatic.
I say unprepare in any order might not make a difference because
prepare would indicate to the subdevice that it should get its
resources initialized because the remoteproc is going to come up,
so unprepare would only be the subdevice releasing its resources.
However start and stop in the reverse order will make a big
difference. Please correct me if I am wrong.

Thanks,
Sid
>
> Regards,
> Bjorn
>
>> Thanks,
>> Sid
>>> Regards,
>>> Bjorn
>>>
>>>>    		if (subdev->stop)
>>>>    			subdev->stop(subdev, crashed);
>>>>    	}
>>>> @@ -1227,7 +1231,7 @@ static void rproc_unprepare_subdevices(struct rproc *rproc)
>>>>    {
>>>>    	struct rproc_subdev *subdev;
>>>> -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
>>>> +	list_for_each_entry(subdev, &rproc->subdevs, node) {
>>>>    		if (subdev->unprepare)
>>>>    			subdev->unprepare(subdev);
>>>>    	}
>>>> -- 
>>>> Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>>>> a Linux Foundation Collaborative Project
>>>>


* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
  2021-05-26  0:00 Saravana Kannan
@ 2021-05-26  0:41 ` Siddharth Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Siddharth Gupta @ 2021-05-26  0:41 UTC (permalink / raw)
  To: Saravana Kannan
  Cc: bjorn.andersson, linux-arm-kernel, linux-arm-msm, linux-kernel,
	linux-remoteproc, ohad, psodagud, Android Kernel Team


On 5/25/2021 5:00 PM, Saravana Kannan wrote:
> Sending again due to accidental HTML.
>
> On XXXXX, Siddharth Gupta wrote:
>> On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
>>> On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
>>>
>>>> Subdevices at the beginning of the subdev list should have
>>>> higher priority than those at the end of the list. Reverse
>>>> traversal of the list causes priority inversion, which can
>>>> impact the performance of the device.
>>>>
>>> The subdev list represents the layers of the communication onion: we
>>> bring them up inside out and take them down outside in.
>>>
>>> This stems from the primary idea that we want to be able to shut things
>>> down cleanly (in the case of a stop) and we pass the "crashed" flag to
>>> indicate to each recipient during "stop" that it may not rely on the
>>> response of a lower layer.
>>>
>>> As such, I don't think it's right to say that we have a priority
>>> inversion.
>> My understanding of the topic was that each subdevice should be
>> independent of the other. In our case unfortunately the sysmon
>> subdevice depends on the glink endpoint.
> In that case, the glink has to be prepared/started before sysmon, right?
Yes, that will not change with the introduction of this change.
>
>> However the priority inversion doesn't happen in these
>> subdevices, it happens due to the SSR notifications that we send
>> to kernel clients. In this case kernel clients also can have QMI
>> sockets that in turn depend on the glink endpoint, which means
>> when they go to release the QMI socket a broadcast will be sent
>> out to all connected clients about the closure of the connection
>> which in this case happens to be the remoteproc which died. So
>> if we peel the onion, we will unnecessarily be waiting for a
>> dead remoteproc.
> So why can't the QMI layer be smart about this and check that the
> remoteproc hasn't crashed before you try to communicate with it? Or if
> the glink is torn down before QMI gets to broadcast, then it's a
> pretty clear indication of failure and just notify all the kernel side
> QMI clients?
I made a mistake earlier, QMI is the layer that creates a QRTR
based socket over glink, and is not going to understand how the
socket works internally (think of an application creating a TCP
socket). The change makes it so that the glink layer is torn
down before.
>
>>>> For example a device adds the glink, sysmon and ssr subdevs
>>>> to its list. During a crash the ssr notification would go
>>>> before the glink and sysmon notifications. This can cause a
>>>> degraded response when a client driver waits for a response
>>>> from the crashed rproc.
>>>>
>>> In general the design is such that components are not expected to
>>> communicate with the crashed remote when "crashed" is set, this avoids
>>> the single-remote crash.
>> Here the glink device on the rpmsg bus won't know about the
>> crashed remoteproc till we send glink notification first, right?
> Why not just query the current state of the remote proc before trying
> to talk to it? It should be a quick check.
The subdevice concept serves the purpose of informing devices
like glink when the remoteproc goes down. It makes the entire
concept redundant if the subdevices need to check if the
remoteproc is up or not.
>
>> Since we send out sysmon and SSR notifications first, the glink
>> device will still be "alive" on the rpmsg bus.
>>> The case where this isn't holding up is when two remote processors
>>> crash simultaneously, in which case e.g. sysmon has been seen hitting
>>> its timeout waiting for an ack from a dead remoteproc - but I was under
>>> the impression that this window shrunk dramatically as a side effect of
>>> us fixing the notification ordering.
>> You are right, the window would become smaller in the case of two
>> remoteprocs, but this issue can come up with even a single
>> remoteproc unless we prioritize certain subdevices.
> I think the main problem you have here is rproc sub devices that
> depend on other rproc sub devices. But there's no dependency tracking
> here. Your change just happens to work for your specific case because
> the order of the sub devices in the list happens to work for your
> inter-subdevice dependencies. But this is definitely not going to work
> for all users of subdevices.
>
> If keeping track of dependency is too much complexity (I haven't read
> enough rproc code to comment on that), at the least, it looks like you
> need another ops instead of changing the order of stop() callbacks. Or
> at a minimum pick the ordering based on the "crashed" flag. A blanket,
> I'll just switch the ordering of stop() for everyone for all cases is
> wrong.
I will agree with you if you call this change ugly (because it
is), but I don't think this should break anything for anyone.
If subdevices are independent of each other the order in which
subdevice stop()/unprepare() is called becomes irrelevant.

In case they are dependent, for example - A(SSR)->B(glink), we
would call B start() before calling A start() since A cannot work
without B. During tear down unless B stop() is called A will
continue to think B exists, so B stop() needs to be called before
A stop(). Think of the TCP socket example I gave before - unless
TCP/IP knows that the NIC died it will continue to wait for the
other side to respond.
>
> In fact, in the normal/clean shutdown case, I'd think you'll want to
> stop the subdevices in reverse initialization order so that you can
> cleanly stop QMI/sysmon first before shutting down glink.
In the case of a normal/clean shutdown the users of the
remoteproc should cleanup their side of the resources before
informing the remoteproc framework to shutdown the remoteproc.
Reference counting in the framework will ensure that a remoteproc
isn't shut down randomly unless it is a crash.

Thanks,
Sid
>
> -Saravana


* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
  2021-05-25 19:48   ` Siddharth Gupta
@ 2021-05-26  0:37     ` Bjorn Andersson
  2021-05-26  1:16       ` Siddharth Gupta
  0 siblings, 1 reply; 9+ messages in thread
From: Bjorn Andersson @ 2021-05-26  0:37 UTC (permalink / raw)
  To: Siddharth Gupta
  Cc: ohad, linux-remoteproc, linux-kernel, linux-arm-msm,
	linux-arm-kernel, psodagud

On Tue 25 May 14:48 CDT 2021, Siddharth Gupta wrote:

> 
> On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
> > On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
> > 
> > > Subdevices at the beginning of the subdev list should have
> > > higher priority than those at the end of the list. Reverse
> > > traversal of the list causes priority inversion, which can
> > > impact the performance of the device.
> > > 
> > The subdev list represents the layers of the communication onion: we
> > bring them up inside out and take them down outside in.
> > 
> > This stems from the primary idea that we want to be able to shut things
> > down cleanly (in the case of a stop) and we pass the "crashed" flag to
> > indicate to each recipient during "stop" that it may not rely on the
> > response of a lower layer.
> > 
> > As such, I don't think it's right to say that we have a priority
> > inversion.
> My understanding of the topic was that each subdevice should be
> independent of the other. In our case unfortunately the sysmon
> subdevice depends on the glink endpoint.
> 

We need to care about the ordering if sysmon is to be able to use smd or
glink to send the shutdown request.

> However, the priority inversion doesn't happen in these
> subdevices; it happens due to the SSR notifications that we send
> to kernel clients. In this case kernel clients can also have QMI
> sockets that in turn depend on the glink endpoint, which means
> that when they go to release the QMI socket, a broadcast will be
> sent out to all connected clients about the closure of the
> connection - which in this case happens to be the remoteproc that
> died. So if we peel the onion, we will unnecessarily be waiting
> for a dead remoteproc.

I see, that is indeed a problem.

> > 
> > > For example a device adds the glink, sysmon and ssr subdevs
> > > to its list. During a crash the ssr notification would go
> > > before the glink and sysmon notifications. This can cause a
> > > degraded response when a client driver waits for a response
> > > from the crashed rproc.
> > > 
> > In general the design is such that components are not expected to
> > communicate with the crashed remote when "crashed" is set; this handles
> > the single-remote crash case.
> Here the glink device on the rpmsg bus won't know about the
> crashed remoteproc till we send glink notification first, right?
> Since we send out sysmon and SSR notifications first, the glink
> device will still be "alive" on the rpmsg bus.

Yes, and this all stems from the design that everything communicating
over glink is a child of glink, which isn't the case when you have an
SSR event that will end up blocking the sequence in qrtr.

For sysmon this is not a problem, because sysmon is implemented to not
attempt to communicate with the parent remoteproc upon a crash. And
all rpmsg devices will be torn down as a result of glink being torn
down, so glink can fail early based on this (not sure if this was
implemented downstream though).

> > 
> > The case where this isn't holding up is when two remote processors
> > crash simultaneously, in which case e.g. sysmon has been seen hitting
> > its timeout waiting for an ack from a dead remoteproc - but I was under
> > the impression that this window shrunk dramatically as a side effect of
> > us fixing the notification ordering.
> You are right, the window would become smaller in the case of two
> remoteprocs, but this issue can come up even with a single
> remoteproc unless we prioritize certain subdevices.

The problem that you describe where an SSR notification will directly or
indirectly attempt to communicate over QRTR will certainly cause issues
in the single-rproc case as well.


But is there any reason why these listeners have to do the wrong thing
at stop(crashed=true)?

> > 
> > > Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
> > > ---
> > >   drivers/remoteproc/remoteproc_core.c | 24 ++++++++++++++----------
> > >   1 file changed, 14 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> > > index 626a6b90f..ac8fc42 100644
> > > --- a/drivers/remoteproc/remoteproc_core.c
> > > +++ b/drivers/remoteproc/remoteproc_core.c
> > > @@ -1167,7 +1167,7 @@ static int rproc_handle_resources(struct rproc *rproc,
> > >   static int rproc_prepare_subdevices(struct rproc *rproc)
> > >   {
> > > -	struct rproc_subdev *subdev;
> > > +	struct rproc_subdev *subdev, *itr;
> > >   	int ret;
> > >   	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > > @@ -1181,9 +1181,11 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
> > >   	return 0;
> > >   unroll_preparation:
> > > -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
> > > -		if (subdev->unprepare)
> > > -			subdev->unprepare(subdev);
> > > +	list_for_each_entry(itr, &rproc->subdevs, node) {
> > > +		if (itr == subdev)
> > > +			break;
> > > +		if (itr->unprepare)
> > > +			itr->unprepare(itr);
> > >   	}
> > >   	return ret;
> > > @@ -1191,7 +1193,7 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
> > >   static int rproc_start_subdevices(struct rproc *rproc)
> > >   {
> > > -	struct rproc_subdev *subdev;
> > > +	struct rproc_subdev *subdev, *itr;
> > >   	int ret;
> > >   	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > > @@ -1205,9 +1207,11 @@ static int rproc_start_subdevices(struct rproc *rproc)
> > >   	return 0;
> > >   unroll_registration:
> > > -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
> > > -		if (subdev->stop)
> > > -			subdev->stop(subdev, true);
> > > +	list_for_each_entry(itr, &rproc->subdevs, node) {
> > > +		if (itr == subdev)
> > > +			break;
> > > +		if (itr->stop)
> > > +			itr->stop(itr, true);
> > >   	}
> > >   	return ret;
> > > @@ -1217,7 +1221,7 @@ static void rproc_stop_subdevices(struct rproc *rproc, bool crashed)
> > >   {
> > >   	struct rproc_subdev *subdev;
> > > -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
> > > +	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > I presume this is the case you actually care about, can you help me
> > understand if you changed the others for consistence or if there's some
> > flow of events where that might be necessary.
> Yes, you are right, I only changed the others for consistency.
> However, I will give this more thought and see if unprepare in
> the reverse order can make a difference.
> 

Per the above argument I don't think things depend on the error
unrolling happening in reverse order. But it is the idiomatic direction.

Regards,
Bjorn

> Thanks,
> Sid
> > 
> > Regards,
> > Bjorn
> > 
> > >   		if (subdev->stop)
> > >   			subdev->stop(subdev, crashed);
> > >   	}
> > > @@ -1227,7 +1231,7 @@ static void rproc_unprepare_subdevices(struct rproc *rproc)
> > >   {
> > >   	struct rproc_subdev *subdev;
> > > -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
> > > +	list_for_each_entry(subdev, &rproc->subdevs, node) {
> > >   		if (subdev->unprepare)
> > >   			subdev->unprepare(subdev);
> > >   	}
> > > -- 
> > > Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> > > a Linux Foundation Collaborative Project
> > > 


* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
@ 2021-05-26  0:00 Saravana Kannan
  2021-05-26  0:41 ` Siddharth Gupta
  0 siblings, 1 reply; 9+ messages in thread
From: Saravana Kannan @ 2021-05-26  0:00 UTC (permalink / raw)
  To: sidgup
  Cc: bjorn.andersson, linux-arm-kernel, linux-arm-msm, linux-kernel,
	linux-remoteproc, ohad, psodagud, Saravana Kannan,
	Android Kernel Team

Sending again due to accidental HTML.

On XXXXX, Siddharth Gupta wrote:
> On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
> > On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
> >
> >> Subdevices at the beginning of the subdev list should have
> >> higher priority than those at the end of the list. Reverse
> >> traversal of the list causes priority inversion, which can
> >> impact the performance of the device.
> >>
> > The subdev list represents the layers of the communication onion; we
> > bring them up inside out and take them down outside in.
> >
> > This stems from the primary idea that we want to be able to shut things
> > down cleanly (in the case of a stop) and we pass the "crashed" flag to
> > indicate to each recipient during "stop" that it may not rely on the
> > response of a lower layer.
> >
> > As such, I don't think it's right to say that we have a priority
> > inversion.
> My understanding of the topic was that each subdevice should be
> independent of the other. In our case unfortunately the sysmon
> subdevice depends on the glink endpoint.

In that case, the glink has to be prepared/started before sysmon, right?

>
> However, the priority inversion doesn't happen in these
> subdevices; it happens due to the SSR notifications that we send
> to kernel clients. In this case kernel clients can also have QMI
> sockets that in turn depend on the glink endpoint, which means
> that when they go to release the QMI socket, a broadcast will be
> sent out to all connected clients about the closure of the
> connection - which in this case happens to be the remoteproc that
> died. So if we peel the onion, we will unnecessarily be waiting
> for a dead remoteproc.

So why can't the QMI layer be smart about this and check that the
remoteproc hasn't crashed before you try to communicate with it? Or, if
glink is torn down before QMI gets to broadcast, isn't that a pretty
clear indication of failure, so it can just notify all the kernel-side
QMI clients?

> >
> >> For example a device adds the glink, sysmon and ssr subdevs
> >> to its list. During a crash the ssr notification would go
> >> before the glink and sysmon notifications. This can cause a
> >> degraded response when a client driver waits for a response
> >> from the crashed rproc.
> >>
> > In general the design is such that components are not expected to
> > communicate with the crashed remote when "crashed" is set; this handles
> > the single-remote crash case.
> Here the glink device on the rpmsg bus won't know about the
> crashed remoteproc till we send glink notification first, right?

Why not just query the current state of the remote proc before trying
to talk to it? It should be a quick check.

> Since we send out sysmon and SSR notifications first, the glink
> device will still be "alive" on the rpmsg bus.
> >
> > The case where this isn't holding up is when two remote processors
> > crash simultaneously, in which case e.g. sysmon has been seen hitting
> > its timeout waiting for an ack from a dead remoteproc - but I was under
> > the impression that this window shrunk dramatically as a side effect of
> > us fixing the notification ordering.
> You are right, the window would become smaller in the case of two
> remoteprocs, but this issue can come up even with a single
> remoteproc unless we prioritize certain subdevices.

I think the main problem you have here is rproc subdevices that
depend on other rproc subdevices. But there's no dependency tracking
here. Your change just happens to work for your specific case because
the order of the subdevices in the list happens to match your
inter-subdevice dependencies. But this is definitely not going to work
for all users of subdevices.

If keeping track of dependencies is too much complexity (I haven't read
enough rproc code to comment on that), at the least, it looks like you
need another ops instead of changing the order of stop() callbacks, or
at a minimum pick the ordering based on the "crashed" flag. A blanket
"I'll just switch the ordering of stop() for everyone in all cases" is
wrong.

In fact, in the normal/clean shutdown case, I'd think you'll want to
stop the subdevices in reverse initialization order so that you can
cleanly stop QMI/sysmon first before shutting down glink.

-Saravana


* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
  2021-05-25  3:03 ` Bjorn Andersson
@ 2021-05-25 19:48   ` Siddharth Gupta
  2021-05-26  0:37     ` Bjorn Andersson
  0 siblings, 1 reply; 9+ messages in thread
From: Siddharth Gupta @ 2021-05-25 19:48 UTC (permalink / raw)
  To: Bjorn Andersson
  Cc: ohad, linux-remoteproc, linux-kernel, linux-arm-msm,
	linux-arm-kernel, psodagud


On 5/24/2021 8:03 PM, Bjorn Andersson wrote:
> On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:
>
>> Subdevices at the beginning of the subdev list should have
>> higher priority than those at the end of the list. Reverse
>> traversal of the list causes priority inversion, which can
>> impact the performance of the device.
>>
> The subdev list represents the layers of the communication onion; we
> bring them up inside out and take them down outside in.
>
> This stems from the primary idea that we want to be able to shut things
> down cleanly (in the case of a stop) and we pass the "crashed" flag to
> indicate to each recipient during "stop" that it may not rely on the
> response of a lower layer.
>
> As such, I don't think it's right to say that we have a priority
> inversion.
My understanding of the topic was that each subdevice should be
independent of the other. In our case unfortunately the sysmon
subdevice depends on the glink endpoint.

However, the priority inversion doesn't happen in these
subdevices; it happens due to the SSR notifications that we send
to kernel clients. In this case kernel clients can also have QMI
sockets that in turn depend on the glink endpoint, which means
that when they go to release the QMI socket, a broadcast will be
sent out to all connected clients about the closure of the
connection - which in this case happens to be the remoteproc that
died. So if we peel the onion, we will unnecessarily be waiting
for a dead remoteproc.
>
>> For example a device adds the glink, sysmon and ssr subdevs
>> to its list. During a crash the ssr notification would go
>> before the glink and sysmon notifications. This can cause a
>> degraded response when a client driver waits for a response
>> from the crashed rproc.
>>
> In general the design is such that components are not expected to
> communicate with the crashed remote when "crashed" is set; this handles
> the single-remote crash case.
Here the glink device on the rpmsg bus won't know about the
crashed remoteproc till we send glink notification first, right?
Since we send out sysmon and SSR notifications first, the glink
device will still be "alive" on the rpmsg bus.
>
> The case where this isn't holding up is when two remote processors
> crash simultaneously, in which case e.g. sysmon has been seen hitting
> its timeout waiting for an ack from a dead remoteproc - but I was under
> the impression that this window shrunk dramatically as a side effect of
> us fixing the notification ordering.
You are right, the window would become smaller in the case of two
remoteprocs, but this issue can come up even with a single
remoteproc unless we prioritize certain subdevices.
>
>> Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
>> ---
>>   drivers/remoteproc/remoteproc_core.c | 24 ++++++++++++++----------
>>   1 file changed, 14 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
>> index 626a6b90f..ac8fc42 100644
>> --- a/drivers/remoteproc/remoteproc_core.c
>> +++ b/drivers/remoteproc/remoteproc_core.c
>> @@ -1167,7 +1167,7 @@ static int rproc_handle_resources(struct rproc *rproc,
>>   
>>   static int rproc_prepare_subdevices(struct rproc *rproc)
>>   {
>> -	struct rproc_subdev *subdev;
>> +	struct rproc_subdev *subdev, *itr;
>>   	int ret;
>>   
>>   	list_for_each_entry(subdev, &rproc->subdevs, node) {
>> @@ -1181,9 +1181,11 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
>>   	return 0;
>>   
>>   unroll_preparation:
>> -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
>> -		if (subdev->unprepare)
>> -			subdev->unprepare(subdev);
>> +	list_for_each_entry(itr, &rproc->subdevs, node) {
>> +		if (itr == subdev)
>> +			break;
>> +		if (itr->unprepare)
>> +			itr->unprepare(itr);
>>   	}
>>   
>>   	return ret;
>> @@ -1191,7 +1193,7 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
>>   
>>   static int rproc_start_subdevices(struct rproc *rproc)
>>   {
>> -	struct rproc_subdev *subdev;
>> +	struct rproc_subdev *subdev, *itr;
>>   	int ret;
>>   
>>   	list_for_each_entry(subdev, &rproc->subdevs, node) {
>> @@ -1205,9 +1207,11 @@ static int rproc_start_subdevices(struct rproc *rproc)
>>   	return 0;
>>   
>>   unroll_registration:
>> -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
>> -		if (subdev->stop)
>> -			subdev->stop(subdev, true);
>> +	list_for_each_entry(itr, &rproc->subdevs, node) {
>> +		if (itr == subdev)
>> +			break;
>> +		if (itr->stop)
>> +			itr->stop(itr, true);
>>   	}
>>   
>>   	return ret;
>> @@ -1217,7 +1221,7 @@ static void rproc_stop_subdevices(struct rproc *rproc, bool crashed)
>>   {
>>   	struct rproc_subdev *subdev;
>>   
>> -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
>> +	list_for_each_entry(subdev, &rproc->subdevs, node) {
> I presume this is the case you actually care about, can you help me
> understand if you changed the others for consistence or if there's some
> flow of events where that might be necessary.
Yes, you are right, I only changed the others for consistency.
However, I will give this more thought and see if unprepare in
the reverse order can make a difference.

Thanks,
Sid
>
> Regards,
> Bjorn
>
>>   		if (subdev->stop)
>>   			subdev->stop(subdev, crashed);
>>   	}
>> @@ -1227,7 +1231,7 @@ static void rproc_unprepare_subdevices(struct rproc *rproc)
>>   {
>>   	struct rproc_subdev *subdev;
>>   
>> -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
>> +	list_for_each_entry(subdev, &rproc->subdevs, node) {
>>   		if (subdev->unprepare)
>>   			subdev->unprepare(subdev);
>>   	}
>> -- 
>> Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>> a Linux Foundation Collaborative Project
>>


* Re: [PATCH] remoteproc: core: Invoke subdev callbacks in list order
  2021-05-17 23:08 Siddharth Gupta
@ 2021-05-25  3:03 ` Bjorn Andersson
  2021-05-25 19:48   ` Siddharth Gupta
  0 siblings, 1 reply; 9+ messages in thread
From: Bjorn Andersson @ 2021-05-25  3:03 UTC (permalink / raw)
  To: Siddharth Gupta
  Cc: ohad, linux-remoteproc, linux-kernel, linux-arm-msm,
	linux-arm-kernel, psodagud

On Mon 17 May 18:08 CDT 2021, Siddharth Gupta wrote:

> Subdevices at the beginning of the subdev list should have
> higher priority than those at the end of the list. Reverse
> traversal of the list causes priority inversion, which can
> impact the performance of the device.
> 

The subdev list represents the layers of the communication onion; we
bring them up inside out and take them down outside in.

This stems from the primary idea that we want to be able to shut things
down cleanly (in the case of a stop) and we pass the "crashed" flag to
indicate to each recipient during "stop" that it may not rely on the
response of a lower layer.

As such, I don't think it's right to say that we have a priority
inversion.

> For example a device adds the glink, sysmon and ssr subdevs
> to its list. During a crash the ssr notification would go
> before the glink and sysmon notifications. This can cause a
> degraded response when a client driver waits for a response
> from the crashed rproc.
> 

In general the design is such that components are not expected to
communicate with the crashed remote when "crashed" is set; this handles
the single-remote crash case.

The case where this isn't holding up is when two remote processors
crash simultaneously, in which case e.g. sysmon has been seen hitting
its timeout waiting for an ack from a dead remoteproc - but I was under
the impression that this window shrunk dramatically as a side effect of
us fixing the notification ordering.

> Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
> ---
>  drivers/remoteproc/remoteproc_core.c | 24 ++++++++++++++----------
>  1 file changed, 14 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> index 626a6b90f..ac8fc42 100644
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -1167,7 +1167,7 @@ static int rproc_handle_resources(struct rproc *rproc,
>  
>  static int rproc_prepare_subdevices(struct rproc *rproc)
>  {
> -	struct rproc_subdev *subdev;
> +	struct rproc_subdev *subdev, *itr;
>  	int ret;
>  
>  	list_for_each_entry(subdev, &rproc->subdevs, node) {
> @@ -1181,9 +1181,11 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
>  	return 0;
>  
>  unroll_preparation:
> -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
> -		if (subdev->unprepare)
> -			subdev->unprepare(subdev);
> +	list_for_each_entry(itr, &rproc->subdevs, node) {
> +		if (itr == subdev)
> +			break;
> +		if (itr->unprepare)
> +			itr->unprepare(itr);
>  	}
>  
>  	return ret;
> @@ -1191,7 +1193,7 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
>  
>  static int rproc_start_subdevices(struct rproc *rproc)
>  {
> -	struct rproc_subdev *subdev;
> +	struct rproc_subdev *subdev, *itr;
>  	int ret;
>  
>  	list_for_each_entry(subdev, &rproc->subdevs, node) {
> @@ -1205,9 +1207,11 @@ static int rproc_start_subdevices(struct rproc *rproc)
>  	return 0;
>  
>  unroll_registration:
> -	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
> -		if (subdev->stop)
> -			subdev->stop(subdev, true);
> +	list_for_each_entry(itr, &rproc->subdevs, node) {
> +		if (itr == subdev)
> +			break;
> +		if (itr->stop)
> +			itr->stop(itr, true);
>  	}
>  
>  	return ret;
> @@ -1217,7 +1221,7 @@ static void rproc_stop_subdevices(struct rproc *rproc, bool crashed)
>  {
>  	struct rproc_subdev *subdev;
>  
> -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
> +	list_for_each_entry(subdev, &rproc->subdevs, node) {

I presume this is the case you actually care about, can you help me
understand if you changed the others for consistence or if there's some
flow of events where that might be necessary.

Regards,
Bjorn

>  		if (subdev->stop)
>  			subdev->stop(subdev, crashed);
>  	}
> @@ -1227,7 +1231,7 @@ static void rproc_unprepare_subdevices(struct rproc *rproc)
>  {
>  	struct rproc_subdev *subdev;
>  
> -	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
> +	list_for_each_entry(subdev, &rproc->subdevs, node) {
>  		if (subdev->unprepare)
>  			subdev->unprepare(subdev);
>  	}
> -- 
> Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 


* [PATCH] remoteproc: core: Invoke subdev callbacks in list order
@ 2021-05-17 23:08 Siddharth Gupta
  2021-05-25  3:03 ` Bjorn Andersson
  0 siblings, 1 reply; 9+ messages in thread
From: Siddharth Gupta @ 2021-05-17 23:08 UTC (permalink / raw)
  To: bjorn.andersson, ohad, linux-remoteproc
  Cc: Siddharth Gupta, linux-kernel, linux-arm-msm, linux-arm-kernel, psodagud

Subdevices at the beginning of the subdev list should have
higher priority than those at the end of the list. Reverse
traversal of the list causes priority inversion, which can
impact the performance of the device.

For example a device adds the glink, sysmon and ssr subdevs
to its list. During a crash the ssr notification would go
before the glink and sysmon notifications. This can cause a
degraded response when a client driver waits for a response
from the crashed rproc.

Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
---
 drivers/remoteproc/remoteproc_core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 626a6b90f..ac8fc42 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -1167,7 +1167,7 @@ static int rproc_handle_resources(struct rproc *rproc,
 
 static int rproc_prepare_subdevices(struct rproc *rproc)
 {
-	struct rproc_subdev *subdev;
+	struct rproc_subdev *subdev, *itr;
 	int ret;
 
 	list_for_each_entry(subdev, &rproc->subdevs, node) {
@@ -1181,9 +1181,11 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
 	return 0;
 
 unroll_preparation:
-	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
-		if (subdev->unprepare)
-			subdev->unprepare(subdev);
+	list_for_each_entry(itr, &rproc->subdevs, node) {
+		if (itr == subdev)
+			break;
+		if (itr->unprepare)
+			itr->unprepare(itr);
 	}
 
 	return ret;
@@ -1191,7 +1193,7 @@ static int rproc_prepare_subdevices(struct rproc *rproc)
 
 static int rproc_start_subdevices(struct rproc *rproc)
 {
-	struct rproc_subdev *subdev;
+	struct rproc_subdev *subdev, *itr;
 	int ret;
 
 	list_for_each_entry(subdev, &rproc->subdevs, node) {
@@ -1205,9 +1207,11 @@ static int rproc_start_subdevices(struct rproc *rproc)
 	return 0;
 
 unroll_registration:
-	list_for_each_entry_continue_reverse(subdev, &rproc->subdevs, node) {
-		if (subdev->stop)
-			subdev->stop(subdev, true);
+	list_for_each_entry(itr, &rproc->subdevs, node) {
+		if (itr == subdev)
+			break;
+		if (itr->stop)
+			itr->stop(itr, true);
 	}
 
 	return ret;
@@ -1217,7 +1221,7 @@ static void rproc_stop_subdevices(struct rproc *rproc, bool crashed)
 {
 	struct rproc_subdev *subdev;
 
-	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
+	list_for_each_entry(subdev, &rproc->subdevs, node) {
 		if (subdev->stop)
 			subdev->stop(subdev, crashed);
 	}
@@ -1227,7 +1231,7 @@ static void rproc_unprepare_subdevices(struct rproc *rproc)
 {
 	struct rproc_subdev *subdev;
 
-	list_for_each_entry_reverse(subdev, &rproc->subdevs, node) {
+	list_for_each_entry(subdev, &rproc->subdevs, node) {
 		if (subdev->unprepare)
 			subdev->unprepare(subdev);
 	}
-- 
Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



end of thread, other threads:[~2021-05-26  3:00 UTC | newest]

Thread overview: 9+ messages
     [not found] <CAGETcx8ykYhBzkqZT+5G9oz2MOiHaSy4F3JoHudgK9WFnmRjbw@mail.gmail.com>
2021-05-26  0:59 ` [PATCH] remoteproc: core: Invoke subdev callbacks in list order Bjorn Andersson
2021-05-26  0:00 Saravana Kannan
2021-05-26  0:41 ` Siddharth Gupta
  -- strict thread matches above, loose matches on Subject: below --
2021-05-17 23:08 Siddharth Gupta
2021-05-25  3:03 ` Bjorn Andersson
2021-05-25 19:48   ` Siddharth Gupta
2021-05-26  0:37     ` Bjorn Andersson
2021-05-26  1:16       ` Siddharth Gupta
2021-05-26  3:00         ` Bjorn Andersson
