From: Steven Sistare <email@example.com>
To: Omar Sandoval <firstname.lastname@example.org>
Cc: email@example.com, firstname.lastname@example.org,
email@example.com, firstname.lastname@example.org, email@example.com,
Jens Axboe <firstname.lastname@example.org>
Subject: Re: [PATCH v3 01/10] sched: Provide sparsemask, a reduced contention bitmap
Date: Thu, 6 Dec 2018 11:07:46 -0500 [thread overview]
Message-ID: <email@example.com> (raw)
On 11/27/2018 8:19 PM, Omar Sandoval wrote:
> On Tue, Nov 27, 2018 at 10:16:56AM -0500, Steven Sistare wrote:
>> On 11/9/2018 7:50 AM, Steve Sistare wrote:
>>> From: Steve Sistare <firstname.lastname@example.org>
>>> Provide struct sparsemask and functions to manipulate it. A sparsemask is
>>> a sparse bitmap. It reduces cache contention vs the usual bitmap when many
>>> threads concurrently set, clear, and visit elements, by reducing the number
>>> of significant bits per cacheline. For each 64 byte chunk of the mask,
>>> only the first K bits of the first word are used, and the remaining bits
>>> are ignored, where K is a creation time parameter. Thus a sparsemask that
>>> can represent a set of N elements is approximately (N/K * 64) bytes in size.
>>> Signed-off-by: Steve Sistare <email@example.com>
>>> include/linux/sparsemask.h | 260 +++++++++++++++++++++++++++++++++++++++++++++
>>> lib/Makefile | 2 +-
>>> lib/sparsemask.c | 142 +++++++++++++++++++++++++
>>> 3 files changed, 403 insertions(+), 1 deletion(-)
>>> create mode 100644 include/linux/sparsemask.h
>>> create mode 100644 lib/sparsemask.c
>> Hi Peter and Ingo,
>> I need your opinion: would you prefer that I keep the new sparsemask type,
>> or fold it into the existing sbitmap type? There is some overlap between the
>> two, but mostly in trivial one line functions. The main differences are:
> Adding Jens and myself.
>> * sparsemask defines iterators that allow an inline loop body, like cpumask,
>> whereas the sbitmap iterator forces us to define a callback function for
>> the body, which is awkward.
>> * sparsemask is slightly more efficient. The struct and variable length
>> bitmap are allocated contiguously,
> That just means you have the pointer indirection elsewhere :) The users
> of sbitmap embed it in whatever structure they have.
Yes, the sparsemask can be embedded in one place, but in my use case I also cache
pointers to the mask from elsewhere, and those sites incur the cost of 2 indirections
to perform bitmap operations.
>> and sbitmap uses an extra field "depth"
>> per bitmap cacheline.
> The depth field is memory which would otherwise be unused, and it's only
> used for sbitmap_get(), so it doesn't have any cost if you're using it
> like a cpumask.
>> * The order of arguments is different for the sparsemask accessors and
>> sbitmap accessors. sparsemask mimics cpumask which is used extensively
>> in the sched code.
>> * Much of the sbitmap code supports queueing, sleeping, and waking on bit
>> allocation, which is N/A for scheduler load balancing. However, we
>> can call the basic functions which do not use queueing.
>> I could add the sparsemask iterators to sbitmap (90 lines), and define
>> a thin layer to change the argument order to mimic cpumask, but that
>> essentially recreates sparsemask.
> We only use sbitmap_for_each_set() in a few places. Maybe a for_each()
> style macro would be cleaner for those users, too, in which case I
> wouldn't be opposed to changing it. The cpumask argument order thing is
> annoying, though.
>> Also, pushing sparsemask into sbitmap would limit our freedom to evolve the
>> type to meet the future needs of sched, as sbitmap has its own maintainer,
>> and is used by drivers, so changes to its API and ABI will be frowned upon.
> It's a generic data structure, so of course Jens and I have no problem
> with changing it to meet more needs :) Personally, I'd prefer to only
> have one data structure for this, but I suppose it depends on whether
> Peter and Ingo think the argument order is important enough.
The argument order is a minor thing, not a blocker to adoption, but efficiency
is important in the core scheduler code. I actually did the work to add a
for_each macro with an inline body to sbitmap, and converted my patches to use sbitmap.
But then I noticed your very recent patch adding the cleared word to each cacheline,
which must be loaded and ANDed with each bitset word in the for_each traversal,
adding more overhead which we don't need for the scheduler use case, on top of the
extra indirection noted above. You might add more such things in the future (a
"deferred set" word?) to support the needs of the block drivers who are the
intended clients of sbitmap.
Your sbitmap is more than a simple bitmap abstraction, and for the scheduler we
just need simple. Therefore, I propose to trim sparsemask to the bare minimum,
and move it to kernel/sched for use by sched only. It was 400 lines, but will
be 200, and 80 of those are comments.
If anyone objects, please speak now.
>> FWIW, here is the amount of code involved:
>>
>> sbitmap.h:
>>   250 lines basic operations
>>   284 lines for queueing
>>   534 lines total
>>
>> sbitmap.c:
>>   201 lines basic operations
>>   380 lines for queueing
>>   581 lines total
>>
>> sparsemask.h:
>>   260 lines total
>>
>> sparsemask.c:
>>   142 lines total
>> - Steve