From: Jens Axboe <axboe@kernel.dk>
To: Kent Overstreet <kmo@daterainc.com>
Cc: Christoph Hellwig <hch@infradead.org>,
	Alexander Gordeev <agordeev@redhat.com>,
	Shaohua Li <shli@kernel.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [patch 1/2]percpu_ida: fix a live lock
Date: Mon, 10 Feb 2014 16:06:27 -0700
Message-ID: <52F95B73.7030205@kernel.dk>
In-Reply-To: <20140210224145.GB2362@kmo>

On 02/10/2014 03:41 PM, Kent Overstreet wrote:
> On Mon, Feb 10, 2014 at 09:26:15AM -0700, Jens Axboe wrote:
>>
>>
>> On 02/10/2014 03:32 AM, Christoph Hellwig wrote:
>>> On Sun, Feb 09, 2014 at 04:50:07PM +0100, Alexander Gordeev wrote:
>>>> Yeah, that was my first thought when I posted the "percpu_ida: Allow
>>>> variable maximum number of cached tags" patch a few months ago. But I am
>>>> back-pedalling, as it does not appear to solve the fundamental problem -
>>>> what is the best threshold?
>>>>
>>>> Maybe we can get away with a per-cpu timeout that flushes a batch of
>>>> tags from local caches to the pool? Each local allocation would restart
>>>> the timer, but once allocation requests stopped coming on a CPU, the
>>>> tags would not gather dust in local caches.
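
(For concreteness, something like the below is presumably what Alexander
means -- a rough sketch with invented struct and helper names, not the
actual percpu_ida internals:)

/* Invented illustration, not percpu_ida's real structures; locking
 * and irq details elided. TAG_BATCH, TAG_FLUSH_JIFFIES and
 * tag_pool_return() are made up for the example. */
struct tag_cpu_cache {
	unsigned		nr_free;
	unsigned		freelist[TAG_BATCH];
	struct timer_list	flush_timer;
	struct tag_pool		*pool;
};

static void tag_cache_flush(unsigned long data)
{
	struct tag_cpu_cache *cache = (struct tag_cpu_cache *)data;

	/* CPU went idle: return all locally cached tags to the pool */
	tag_pool_return(cache->pool, cache->freelist, cache->nr_free);
	cache->nr_free = 0;
}

static int tag_cache_alloc(struct tag_cpu_cache *cache)
{
	/* each local allocation pushes the flush out again, so only
	 * an idle CPU ever gives its cache back */
	mod_timer(&cache->flush_timer, jiffies + TAG_FLUSH_JIFFIES);

	return cache->nr_free ? cache->freelist[--cache->nr_free] : -1;
}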
>>>
>>> We'll definitely need a fix to allow use of the whole tag space.
>>
>> Certainly. The current situation of effectively only allowing half
>> the tags (if allocations are spread across CPUs) is pretty crappy
>> for (by far) most hardware.
>>
>>> For large numbers of tags per device the flush might work, but for
>>> devices with a low number of tags we need something more efficient. The
>>> case of fewer tags than CPUs isn't that unusual either, and we probably
>>> want to switch to an allocator without per-cpu allocations for them to
>>> avoid all this. E.g. for many ATA devices we just have a single tag,
>>> and many SCSI drivers also only want single-digit outstanding commands
>>> per LUN.
>>
>> Even for cases where you have as many (or more) CPUs as tags,
>> per-cpu allocation is not necessarily a bad idea. It's a rare case
>> where you have all the CPUs touching the device at the same time,
>> after all.
>
> <just back from Switzerland, probably forgetting some of where I left off>
>
> You do still need enough tags to shard across the number of cpus
> _currently_ touching the device. I think I'm with Christoph here; I'm
> not sure how percpu tag allocation would be helpful when we have single
> digits/low double digits of tags available.

For the common case, I'd assume that anywhere between 31..256 tags is 
"normal"; that's where the large majority of devices will end up. Single 
digits would be an anomaly.

And even for the case of 31 tags on, e.g., a 64-CPU system, over windows 
of access I think it's reasonable to expect that you won't have all 64 
threads banging on the same device at once.

It obviously all depends on the access pattern. X threads with X tags 
would work perfectly well with per-cpu tagging, if they are doing sync 
IO. And similarly, 8 threads each running at a low queue depth would be 
fine. However, it all falls apart pretty quickly once threads * queue 
depth exceeds the tag space.
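
To put numbers on it: 8 threads at a queue depth of 4 want 32 tags in 
flight, which already blows past a 31-tag device even before per-cpu 
caching strands anything.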

> I would expect that in that case we're better off with just a
> well-implemented atomic bit vector and waitlist. However, I don't know
> where the crossover point is, and I think Jens has done by far the most
> (and most relevant) benchmarking here.
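
(Roughly this, I'd assume -- a minimal sketch of the bit vector plus
waitlist idea, nothing tuned:)

/* Minimal tag allocator: one shared bitmap plus a waitqueue.
 * Illustration only; waitqueue init etc. not shown. */
struct simple_tags {
	unsigned long		*map;	/* bitmap of nr_tags bits */
	unsigned		nr_tags;
	wait_queue_head_t	wait;
};

static int simple_tag_alloc(struct simple_tags *st)
{
	unsigned tag;

	for (;;) {
		tag = find_first_zero_bit(st->map, st->nr_tags);
		if (tag < st->nr_tags &&
		    !test_and_set_bit_lock(tag, st->map))
			return tag;

		/* all tags in flight (or we raced): wait for a free */
		wait_event(st->wait,
			   find_first_zero_bit(st->map, st->nr_tags) <
			   st->nr_tags);
	}
}

static void simple_tag_free(struct simple_tags *st, unsigned tag)
{
	clear_bit_unlock(tag, st->map);
	wake_up(&st->wait);
}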

The problem with that is that when some of those threads are on 
different nodes, it ends up collapsing pretty quickly again. Maybe the 
solution is to have a hierarchy of caching instead - per-node, then 
per-cpu. At least that has the potential to make the common case still 
perform well.
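
Structurally something like the below, maybe (invented names again,
just to show the shape of it):

/* Two-level tag caching: global pool -> per-node cache -> per-cpu cache */
struct tag_node_cache {
	spinlock_t	lock;
	unsigned	nr_free;
	unsigned	freelist[NODE_BATCH];
};

struct tag_pool {
	spinlock_t		lock;		/* global, rarely taken */
	unsigned		nr_free;
	unsigned		*freelist;

	struct tag_node_cache	*node_caches;	/* one per NUMA node */
	struct tag_cpu_cache __percpu *cpu_caches;
};

/*
 * Alloc order: per-cpu cache, then this node's cache, then the global
 * pool. Frees fill the per-cpu cache and overflow into the local node
 * cache, so cross-node cacheline traffic only happens when an entire
 * node runs out of tags.
 */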

> How about we just make the number of tags that are allowed to be stranded an
> explicit parameter (somehow) - then it can be up to device drivers to do
> something sensible with it. Half is probably an ideal default for devices where
> that works, but this way more constrained devices will be able to futz with it
> however they want.
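
(I.e., presumably an interface along these lines -- hypothetical, the
extra argument doesn't exist today:)

/* Hypothetical: let the driver cap how many tags may sit idle in
 * per-cpu caches */
int percpu_ida_init_stranded(struct percpu_ida *pool, unsigned long nr_tags,
			     unsigned long max_stranded);

/* e.g. a driver for a 31-tag device that can't afford to lose half: */
ret = percpu_ida_init_stranded(&dev->tags, 31, 4);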

I don't think we should involve device drivers in this; that's punting a 
complicated issue to someone who likely has little idea what to do about 
it. This needs to be handled sensibly in the core, not in a device 
driver. If we can't come up with a sensible algorithm to handle this, 
how can we expect someone writing a device driver to do so?

-- 
Jens Axboe
