linux-arm-msm.vger.kernel.org archive mirror
From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
To: Hemant Kumar <hemantk@codeaurora.org>
Cc: Dan Carpenter <dan.carpenter@oracle.com>,
	linux-arm-msm@vger.kernel.org, kernel-janitors@vger.kernel.org
Subject: Re: [bug report] bus: mhi: core: Add support for data transfer
Date: Tue, 21 Apr 2020 11:22:18 +0530	[thread overview]
Message-ID: <20200421055218.GA4327@Mani-XPS-13-9360> (raw)
In-Reply-To: <7d1bda90-dec3-3cfe-9bfc-dbcf97f9a72d@codeaurora.org>

On Mon, Apr 20, 2020 at 03:57:58PM -0700, Hemant Kumar wrote:
> Hi Mani,
> 
> On 4/18/20 12:19 PM, Manivannan Sadhasivam wrote:
> > Hi Hemant,
> > 
> > Please try to use an email client supporting plain text mode like mutt. Your reply looks mangled.
> > 
> > On 18 April 2020 12:40:10 PM IST, Hemant Kumar <hemantk@codeaurora.org> wrote:
> > > Hi Mani,
> > > 
> > > On 4/17/20 3:14 AM, Manivannan Sadhasivam wrote:
> > > > Hi Hemant,
> > > > 
> > > > On Thu, Apr 16, 2020 at 08:37:16PM -0700, Hemant Kumar wrote:
> > > > > On 4/7/20 7:33 AM, Manivannan Sadhasivam wrote:
> > > > > > Hi Dan,
> > > > > > 
> > > > > > On Tue, Apr 07, 2020 at 04:55:59PM +0300, Dan Carpenter wrote:
> > > > > > > Hello Manivannan Sadhasivam,
> > > > > > > 
> > > > > > > The patch 189ff97cca53: "bus: mhi: core: Add support for data
> > > > > > > transfer" from Feb 20, 2020, leads to the following static checker
> > > > > > > warning:
> > > > > > > 
> > > > > > > 	drivers/bus/mhi/core/main.c:1153 mhi_queue_buf()
> > > > > > > 	error: double locked 'mhi_chan->lock' (orig line 1110)
> > > > > > > 
> > > > > > > drivers/bus/mhi/core/main.c
> > > > > > >      1142          }
> > > > > > >      1143
> > > > > > >      1144          /* Toggle wake to exit out of M2 */
> > > > > > >      1145          mhi_cntrl->wake_toggle(mhi_cntrl);
> > > > > > >      1146
> > > > > > >      1147          if (mhi_chan->dir == DMA_TO_DEVICE)
> > > > > > >      1148                  atomic_inc(&mhi_cntrl->pending_pkts);
> > > > > > >      1149
> > > > > > >      1150          if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
> > > > > > >      1151                  unsigned long flags;
> > > > > > >      1152
> > > > > > >      1153                  read_lock_irqsave(&mhi_chan->lock, flags);
> > > > > parse_xfer_event is taking the read lock first:
> > > > > read_lock_bh(&mhi_chan->lock);
> > > > > and later
> > > > > 
> > > > > mhi_queue_buf takes the read lock:
> > > > > read_lock_irqsave(&mhi_chan->lock, flags);
> > > > > 
> > > > > Both are read locks which are recursive, is this problematic ?
> > > > > 
> > > > read_locks are recursive but I wanted to make the static checker
> > > > happy. But looking into it further (and after having a chat with
> > > > Arnd), we might need to refactor the locking here.
> > > > 
> > > > Since 'chan->lock' only protects 'mhi_chan->ch_state', how about
> > > > doing something like below?
> > > 
> > > As the comment mentions, for OOB (to enter DB mode) the write lock is
> > > acquired with preemption disabled (irqsave version). In the case of an
> > > OOB event, control does not go down the mhi_queue_buf path.
> > 
> > Again, why do we need the irq version of the write lock? It should only be used if the data is shared with hardirq handlers, which I don't see here. Otherwise, write_lock_bh() looks sufficient to me as this is already an exclusive lock.
> The irq version disables interrupts whereas the bh version does not. In the
> case of an OOB event, the idea is not to get preempted, and this is only for
> the short duration of ringing the channel doorbell.

This is a clear abuse of the write_lock_irq() API. write_lock_irq() should
_only_ be used when the data is shared with a hardirq handler. The original
comment says,

"If it's a DB Event then we need to grab the lock with preemption disabled and
as a write because we have to update db register and there are chances that
another thread could be doing the same."

If the 'another' thread holds the lock for this piece of code then we don't
need to disable irqs, do we? Irqs need to be disabled only if the 'another'
context is a hardirq handler. I think the problem here is that you are being
cautious about getting preempted during mhi_ring_chan_db(), but I don't see
why. Is this function non-reentrant? I don't think so.

Furthermore, there are a _lot_ of places where the *_irq and *_bh versions of
the locks are mixed. One such instance is mhi_queue_buf(), where
read_lock_irqsave() is used around mhi_ring_chan_db() while mhi_queue_skb()
uses read_lock_bh().

> > 
> > > For transfer completion events, read_lock_bh is acquired and the
> > > channel state is checked.
> > > 
> > > This lock is held for the entire handling of the transfer completion,
> > > so that if __mhi_unprepare_channel() is called from the power down
> > > context, the write lock on the channel lock is acquired to change the
> > > channel state, which would wait until parse_xfer_event for the data
> > > transfer is done (the reader is in the critical section). If
> > > __mhi_unprepare_channel() wins, parse_xfer_event is skipped; otherwise
> > > parse_xfer_event runs and then the channel state is changed.
> > > 
> > 
> > So if we get unprepare_channel() after checking the channel state in parse_xfer_event(), what could go wrong?
> > Also, grabbing the lock for the entire function doesn't look good to me. The purpose of chan->lock is just to protect 'ch_state'/DB and not the whole function.
> > 
> The main problem is that unprepare_channel and parse_xfer_event have a lot
> in common, so we cannot let them run in parallel. For example:
> - parse_xfer_event is working on the transfer ring (rp and wp updates)
> - parse_xfer_event is calling dma_unmap_single on the buffer
> - __mhi_unprepare_channel() is calling mhi_reset_chan() and
>   mhi_deinit_chan_ctxt()

Hmm. So the issue will be when __mhi_unprepare_channel() gets called after
parse_xfer_event() has checked 'mhi_chan->ch_state'. If we hold the read lock
for the whole function, then parse_xfer_event() is guaranteed to finish before
__mhi_unprepare_channel() does its part.

Let's keep it as it is. But please look into the irq vs bh part above.

Thanks,
Mani

> > Thanks,
> > Mani
> > 
> > > > 
> > > > diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> > > > index 3e9aa3b2da77..904f9be7a142 100644
> > > > --- a/drivers/bus/mhi/core/main.c
> > > > +++ b/drivers/bus/mhi/core/main.c
> > > > @@ -474,19 +474,12 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
> > > >           result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
> > > >                   -EOVERFLOW : 0;
> > > > -       /*
> > > > -        * If it's a DB Event then we need to grab the lock
> > > > -        * with preemption disabled and as a write because we
> > > > -        * have to update db register and there are chances that
> > > > -        * another thread could be doing the same.
> > > > -        */
> > > > -       if (ev_code >= MHI_EV_CC_OOB)
> > > > -               write_lock_irqsave(&mhi_chan->lock, flags);
> > > > -       else
> > > > -               read_lock_bh(&mhi_chan->lock);
> > > > -
> > > > -       if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
> > > > -               goto end_process_tx_event;
> > > > +       read_lock_bh(&mhi_chan->lock);
> > > > +       if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
> > > > +               read_unlock_bh(&mhi_chan->lock);
> > > > +               return 0;
> > > > +       }
> > > > +       read_unlock_bh(&mhi_chan->lock);
> > > >           switch (ev_code) {
> > > >           case MHI_EV_CC_OVERFLOW:
> > > > @@ -559,10 +552,12 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
> > > >                   mhi_chan->db_cfg.db_mode = 1;
> > > >                   read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
> > > > +               write_lock_irqsave(&mhi_chan->lock, flags);
> > > >                   if (tre_ring->wp != tre_ring->rp &&
> > > >                       MHI_DB_ACCESS_VALID(mhi_cntrl)) {
> > > >                           mhi_ring_chan_db(mhi_cntrl, mhi_chan);
> > > >                   }
> > > > +               write_unlock_irqrestore(&mhi_chan->lock, flags);
> > > >                   read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
> > > >                   break;
> > > >           }
> > > > @@ -572,12 +567,6 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
> > > >                   break;
> > > >           } /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */
> > > > -end_process_tx_event:
> > > > -       if (ev_code >= MHI_EV_CC_OOB)
> > > > -               write_unlock_irqrestore(&mhi_chan->lock, flags);
> > > > -       else
> > > > -               read_unlock_bh(&mhi_chan->lock);
> > > > -
> > > >           return 0;
> > > >    }
> > > > 
> > > > Moreover, I do have couple of concerns:
> > > > 
> > > > 1. 'mhi_chan->db_cfg.db_mode = 1' needs to be added to the critical
> > > > section above.
> > > > 
> > > > 2. Why do we have the {write/read}_lock_irq variants for chan->lock?
> > > > I don't see where the db or ch_state is shared with a hardirq
> > > > handler. Maybe we should only have the *_bh (softirq) variants all
> > > > over the place?
> > > > 
> > > > Thanks,
> > > > Mani
> > > > 
> > > > > > >                                              ^^^^^^^^^^^^^^^
> > > > > > > The caller is already holding this lock.
> > > > > > > 
> > > > > > Hmm. We have one internal user of this function and that's where
> > > > > > the locking has gone wrong. Will fix it.
> > > > > > 
> > > > > > Thanks for reporting!
> > > > > > 
> > > > > > Regards,
> > > > > > Mani
> > > > > > 
> > > > > > >      1154                  mhi_ring_chan_db(mhi_cntrl, mhi_chan);
> > > > > > >      1155                  read_unlock_irqrestore(&mhi_chan->lock, flags);
> > > > > > >      1156          }
> > > > > > >      1157
> > > > > > >      1158          read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
> > > > > > >      1159
> > > > > > >      1160          return 0;
> > > > > > >      1161  }
> > > > > > >      1162  EXPORT_SYMBOL_GPL(mhi_queue_buf);
> > > > > > > 
> > > > > > > regards,
> > > > > > > dan carpenter
> > > > > -- 
> > > > > The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> > > > > a Linux Foundation Collaborative Project
> > 
> 

Thread overview: 12+ messages
2020-04-07 13:55 [bug report] bus: mhi: core: Add support for data transfer Dan Carpenter
2020-04-07 14:33 ` Manivannan Sadhasivam
2020-04-17  3:37   ` Hemant Kumar
2020-04-17 10:14     ` Manivannan Sadhasivam
2020-04-17 10:24       ` Dan Carpenter
2020-04-18  7:10       ` Hemant Kumar
2020-04-18 19:19         ` Manivannan Sadhasivam
2020-04-20 22:57           ` Hemant Kumar
2020-04-21  5:52             ` Manivannan Sadhasivam [this message]
2020-04-21  6:03 ` Manivannan Sadhasivam
2020-04-07 13:56 Dan Carpenter
2020-04-07 14:31 ` Manivannan Sadhasivam
