* INTMS/INTMC not being used in NVME interrupt handling
@ 2018-05-16 12:35 ` Bharat Kumar Gogada
  0 siblings, 0 replies; 8+ messages in thread
From: Bharat Kumar Gogada @ 2018-05-16 12:35 UTC (permalink / raw)
  To: linux-nvme, linux-kernel; +Cc: keith.busch, axboe, hch, sagi

Hi,

As per the NVMe specification:
7.5.1.1 Host Software Interrupt Handling
It is recommended that host software utilize the Interrupt Mask Set and Interrupt Mask Clear (INTMS/INTMC) 
registers to efficiently handle interrupts when configured to use pin based or MSI messages.

In kernel 4.14, the nvme_isr function in drivers/nvme/host/pci.c
does not use these registers.

Is there a reason these registers are not used in the NVMe interrupt handler?

Why does the NVMe driver not use a bottom half, instead processing all
completion queues in the interrupt handler?

Regards,
Bharat


* Re: INTMS/INTMC not being used in NVME interrupt handling
  2018-05-16 12:35 ` Bharat Kumar Gogada
@ 2018-05-16 14:42   ` Keith Busch
  -1 siblings, 0 replies; 8+ messages in thread
From: Keith Busch @ 2018-05-16 14:42 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: linux-nvme, linux-kernel, keith.busch, axboe, hch, sagi

On Wed, May 16, 2018 at 12:35:15PM +0000, Bharat Kumar Gogada wrote:
> Hi,
> 
> As per NVME specification: 
> 7.5.1.1 Host Software Interrupt Handling
> It is recommended that host software utilize the Interrupt Mask Set and Interrupt Mask Clear (INTMS/INTMC) 
> registers to efficiently handle interrupts when configured to use pin based or MSI messages.
> 
> In kernel 4.14, drivers/nvme/host/pci.c function nvme_isr 
> doesn't  use these registers. 
> 
> Any reason why these registers are not used in nvme interrupt handler ?

I think you've answered your own question: we process completions in the
interrupt context. The interrupt is already masked at the CPU level in
this context, so there should be no reason to mask them at the device
level.
 
> Why NVMe driver is not using any bottom half and processing all completion queues
> in interrupt handler ?

Performance.


* RE: INTMS/INTMC not being used in NVME interrupt handling
  2018-05-16 14:42   ` Keith Busch
@ 2018-05-17 11:15     ` Bharat Kumar Gogada
  -1 siblings, 0 replies; 8+ messages in thread
From: Bharat Kumar Gogada @ 2018-05-17 11:15 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-nvme, linux-kernel, keith.busch, axboe, hch, sagi

> > Hi,
> >
> > As per NVME specification:
> > 7.5.1.1 Host Software Interrupt Handling It is recommended that host
> > software utilize the Interrupt Mask Set and Interrupt Mask Clear
> > (INTMS/INTMC) registers to efficiently handle interrupts when configured
> to use pin based or MSI messages.
> >
> > In kernel 4.14, drivers/nvme/host/pci.c function nvme_isr doesn't  use
> > these registers.
> >
> > Any reason why these registers are not used in nvme interrupt handler ?
> 
> I think you've answered your own question: we process completions in the
> interrupt context. The interrupt is already masked at the CPU level in this
> context, so there should be no reason to mask them at the device level.
> 
> > Why NVMe driver is not using any bottom half and processing all
> > completion queues in interrupt handler ?
> 
> Performance.
Thanks, Keith.
Currently the driver isn't setting any coalescing count.
So will the NVMe card raise an interrupt for every single completion queue?

For legacy interrupts, is this flow correct for each CQ:
CQ -> ASSERT_INTA -> doorbell -> DEASSERT_INTA?

Is the following flow valid:
CQ1 -> ASSERT_INTA -> CQ2/CQ3 -> doorbell -> DEASSERT_INTA?

When using legacy interrupts, if CQ1 is sent followed by ASSERT_INTA, can the EP send
another CQ2, CQ3, ... before the DEASSERT_INTA for CQ1 is generated?

Regards,
Bharat


* Re: INTMS/INTMC not being used in NVME interrupt handling
  2018-05-17 11:15     ` Bharat Kumar Gogada
@ 2018-05-17 18:04       ` Keith Busch
  -1 siblings, 0 replies; 8+ messages in thread
From: Keith Busch @ 2018-05-17 18:04 UTC (permalink / raw)
  To: Bharat Kumar Gogada
  Cc: axboe, sagi, linux-kernel, linux-nvme, keith.busch, hch

On Thu, May 17, 2018 at 11:15:59AM +0000, Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > As per NVME specification:
> > > 7.5.1.1 Host Software Interrupt Handling It is recommended that host
> > > software utilize the Interrupt Mask Set and Interrupt Mask Clear
> > > (INTMS/INTMC) registers to efficiently handle interrupts when configured
> > to use pin based or MSI messages.
> > >
> > > In kernel 4.14, drivers/nvme/host/pci.c function nvme_isr doesn't  use
> > > these registers.
> > >
> > > Any reason why these registers are not used in nvme interrupt handler ?
> > 
> > I think you've answered your own question: we process completions in the
> > interrupt context. The interrupt is already masked at the CPU level in this
> > context, so there should be no reason to mask them at the device level.
> > 
> > > Why NVMe driver is not using any bottom half and processing all
> > > completion queues in interrupt handler ?
> > 
> > Performance.
> Thanks keith. 
> Currently driver isn't setting any Coalesce count.  
> So the NVMe card will raise interrupt for every single completion queue ?
> 
> For legacy interrupt for each CQ 
> CQ-> ASSERT_INTA-> DOORBELL-> DEASSERT_INTA is this flow correct ?

Mostly, yes. There could be a case where the controller wouldn't deassert
INTx if more completions were posted past the CQ head doorbell write.

> Is the following flow valid
> CQ1->ASSERT_INTA->CQ2/CQ3->Doorbell->DEASSERT_INTA ?
> 
> When using legacy interrupts, if CQ1 is sent followed by ASSERT_INTA, can the EP send 
> another CQ2,CQ3.. before DEASSERT_INTA of CQ1 is generated?

I assume you are saying CQ entry 1, CQ entry 2, etc ...

The endpoint may continue posting completion queue entries while
the interrupt is asserted. It should not deassert the interrupt until
the host acknowledges all outstanding completions with a CQ doorbell
write.

