From: "Elliott, Robert (Persistent Memory)" <elliott@hpe.com>
To: Sreekanth Reddy <sreekanth.reddy@broadcom.com>
Cc: "linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"irqbalance@lists.infradead.org" <irqbalance@lists.infradead.org>,
	"Kashyap Desai" <kashyap.desai@broadcom.com>,
	Sathya Prakash Veerichetty <sathya.prakash@broadcom.com>,
	Chaitra Basappa <chaitra.basappa@broadcom.com>,
	Suganath Prabu Subramani  <suganath-prabu.subramani@broadcom.com>
Subject: RE: Observing Softlockup's while running heavy IOs
Date: Fri, 19 Aug 2016 21:27:52 +0000
Message-ID: <DF4PR84MB01695445B0BE8A046742942DAB160@DF4PR84MB0169.NAMPRD84.PROD.OUTLOOK.COM>
In-Reply-To: <CAK=zhgrOQ_LAbM3RKfq_MtveygqU3vPtChCB9Jdf6AUfFnr0HQ@mail.gmail.com>



> -----Original Message-----
> From: Sreekanth Reddy [mailto:sreekanth.reddy@broadcom.com]
> Sent: Friday, August 19, 2016 6:45 AM
> To: Elliott, Robert (Persistent Memory) <elliott@hpe.com>
> Subject: Re: Observing Softlockup's while running heavy IOs
> 
...
> Yes, I am also observing that all the interrupts are routed to one
> CPU.  But I am still observing softlockups (sometimes hardlockups)
> even when I set rq_affinity to 2.

That ensures the block layer's completion handling is done on the
submitting CPU, but not your driver's interrupt handler (which runs
before the block layer's completion handling).
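
For reference, rq_affinity lives under the queue's sysfs directory.
A minimal user-space sketch (the device name sdX is a placeholder)
that forces block-layer completions onto the submitting CPU:

#include <stdio.h>

int main(void)
{
	/* 2 = force block-layer completion on the submitting CPU */
	FILE *f = fopen("/sys/block/sdX/queue/rq_affinity", "w");

	if (!f)
		return 1;
	fputs("2\n", f);
	return fclose(f) ? 1 : 0;
}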

 
> Is there any way to route the interrupts to the same CPUs that
> submitted the corresponding IOs?
> or
> Is there any way/option in irqbalance or the kernel that can route
> interrupts to the CPUs enabled in affinity_hint in a round-robin
> manner after a specific time period?

Ensure your driver creates one MSI-X interrupt per CPU core, uses
that interrupt for all submissions from that core, and reports
that it would like that interrupt to be serviced by that core
in /proc/irq/nnn/affinity_hint.

Even with hyperthreading, this needs to be based on the logical
CPU cores, not just the physical core or the physical socket.
You can swamp a logical CPU core as easily as a physical CPU core.
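
As a rough sketch (not mpt3sas code; my_adapter, my_queue, my_isr,
and MY_MAX_MSIX are placeholders), the driver-side setup looks
something like this: one MSI-X vector per online CPU, with
irq_set_affinity_hint() exporting the preferred CPU through
/proc/irq/nnn/affinity_hint:

#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/cpumask.h>

#define MY_MAX_MSIX 128			/* placeholder cap */

struct my_queue { int index;		/* per-vector reply queue state */ };

struct my_adapter {
	struct msix_entry msix_entries[MY_MAX_MSIX];
	struct my_queue queues[MY_MAX_MSIX];
};

static irqreturn_t my_isr(int irq, void *dev_id)
{
	/* drain the reply queue for this vector ... */
	return IRQ_HANDLED;
}

static int my_setup_msix(struct pci_dev *pdev, struct my_adapter *ioc)
{
	int i, nvec = min_t(int, num_online_cpus(), MY_MAX_MSIX);

	for (i = 0; i < nvec; i++)
		ioc->msix_entries[i].entry = i;

	/* ask for one vector per online CPU; accept fewer if the
	 * device or platform cannot provide that many */
	nvec = pci_enable_msix_range(pdev, ioc->msix_entries, 1, nvec);
	if (nvec < 0)
		return nvec;

	for (i = 0; i < nvec; i++) {
		unsigned int irq = ioc->msix_entries[i].vector;
		int rc = request_irq(irq, my_isr, 0, "my_drv",
				     &ioc->queues[i]);

		if (rc)
			return rc;

		/* exported as /proc/irq/<irq>/affinity_hint */
		irq_set_affinity_hint(irq, cpumask_of(i));
	}
	return nvec;
}

The submission path then picks the reply queue/vector based on the
submitting CPU (e.g. smp_processor_id() modulo the vector count), so
once smp_affinity matches the hint, each completion interrupt lands
back on the CPU that issued the I/O.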

Then, provide an irqbalance policy script that honors the
affinity_hint for your driver, or turn off irqbalance and
manually set /proc/irq/nnn/smp_affinity to match the
affinity_hint.  

Some versions of irqbalance honor the hints; some purposely
don't and need to be overridden with a policy script.
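
If you go the manual route, here is a small user-space sketch
(hypothetical helper, not a shipping tool; run as root with
irqbalance stopped, passing the IRQ numbers on the command line)
that copies each affinity_hint into smp_affinity:

#include <stdio.h>
#include <stdlib.h>

/* copy /proc/irq/<n>/affinity_hint into /proc/irq/<n>/smp_affinity */
static int copy_hint(int irq)
{
	char path[64], mask[256];
	FILE *in, *out;

	snprintf(path, sizeof(path), "/proc/irq/%d/affinity_hint", irq);
	in = fopen(path, "r");
	if (!in || !fgets(mask, sizeof(mask), in)) {
		perror(path);
		if (in)
			fclose(in);
		return -1;
	}
	fclose(in);

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	out = fopen(path, "w");
	if (!out || fputs(mask, out) == EOF) {
		perror(path);
		if (out)
			fclose(out);
		return -1;
	}
	return fclose(out);
}

int main(int argc, char **argv)
{
	int i;

	for (i = 1; i < argc; i++)
		copy_hint(atoi(argv[i]));
	return 0;
}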


---
Robert Elliott, HPE Persistent Memory
