Hi,

Pawel Laszczak writes:
>>>>> +static irqreturn_t cdns3_device_irq_handler(int irq, void *data)
>>>>> +{
>>>>> +	struct cdns3_device *priv_dev;
>>>>> +	struct cdns3 *cdns = data;
>>>>> +	irqreturn_t ret = IRQ_NONE;
>>>>> +	unsigned long flags;
>>>>> +	u32 reg;
>>>>> +
>>>>> +	priv_dev = cdns->gadget_dev;
>>>>> +	spin_lock_irqsave(&priv_dev->lock, flags);
>>>>
>>>> you're already running in hardirq context. Why do you need this lock at
>>>> all? It would be better to use the hardirq handler to mask your
>>>> interrupts, so they don't fire again, then use the threaded (bottom-half)
>>>> handler to actually handle the interrupts.
>>>
>>> Yes, spin_lock_irqsave is not necessary here.
>>>
>>> Do you mean replacing devm_request_irq with request_threaded_irq?
>>> I have a single interrupt line shared between Host, Device, and DRD/OTG.
>>> I'm not sure if it will work more efficiently.
>>
>> The whole idea of running very little in hardirq context is to give the
>> scheduler a chance to decide what should run. This is important to
>> reduce latency when running with the RT patchset applied, for
>> example. However, I'll give you that it's a minor requirement. It's
>> just that, to me, it's a small detail that's easy to implement.
>
> I will do it in PATCH v2 or PATCH v3.
> I need to post the next version before the 24th of December, so if I can
> do it before this date then it will be in PATCH v2.

Take your time :-) Nothing will happen until next year. I already sent
my pull request for v4.21, and a large chunk of the community takes a
few days off during the holiday season. There's really no rush from a
community standpoint.

--
balbi
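
For reference, the hardirq-mask plus threaded-handler split being discussed
could look roughly like the sketch below. This is only an illustration of the
pattern, not the actual cdns3 code: the register offsets (USB_ISTS, USB_IEN),
the simplified cdns3_dev structure, and cdns3_handle_events() are made-up
placeholders, and the real driver's registers and data structures differ.

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/io.h>

/* Hypothetical register offsets, stand-ins for the real cdns3 layout. */
#define USB_ISTS	0x00	/* interrupt status */
#define USB_IEN		0x04	/* interrupt enable */

struct cdns3_dev {		/* simplified placeholder for the driver state */
	void __iomem *regs;
	u32 pending;
	u32 ien_mask;
};

static void cdns3_handle_events(struct cdns3_dev *priv, u32 events)
{
	/* Placeholder: the actual USB event processing would live here. */
}

/* Hardirq handler: check ownership of the shared line, mask, and defer. */
static irqreturn_t cdns3_hardirq(int irq, void *data)
{
	struct cdns3_dev *priv = data;
	u32 status = readl(priv->regs + USB_ISTS);

	if (!status)
		return IRQ_NONE;	/* shared line, not our interrupt */

	/* Mask so the line does not fire again until the thread is done. */
	writel(0, priv->regs + USB_IEN);
	priv->pending = status;

	return IRQ_WAKE_THREAD;
}

/* Threaded handler: runs in a schedulable kernel thread, so the work here
 * can be preempted and no irqsave spinlock is needed just to serialize
 * against the hardirq path for these events. */
static irqreturn_t cdns3_thread_irq(int irq, void *data)
{
	struct cdns3_dev *priv = data;

	cdns3_handle_events(priv, priv->pending);

	/* Unmask once the pending events have been serviced. */
	writel(priv->ien_mask, priv->regs + USB_IEN);

	return IRQ_HANDLED;
}

/* Registration uses the threaded variant instead of devm_request_irq(). */
static int cdns3_setup_irq(struct device *dev, struct cdns3_dev *priv, int irq)
{
	return devm_request_threaded_irq(dev, irq, cdns3_hardirq,
					 cdns3_thread_irq, IRQF_SHARED,
					 "cdns3", priv);
}

Because the hard handler masks the device's own interrupt enable register, the
shared line stays quiet while the thread runs, so the sketch does not rely on
IRQF_ONESHOT (which is awkward to combine with a shared interrupt anyway).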