From: Jens Axboe <axboe@kernel.dk>
To: Vincent Fu <Vincent.Fu@wdc.com>,
	"vincentfu@gmail.com" <vincentfu@gmail.com>,
	"fio@vger.kernel.org" <fio@vger.kernel.org>
Subject: Re: [three fio patches 3/3] smalloc: allocate pool-> members from shared memory
Date: Wed, 28 Aug 2019 13:47:23 -0600	[thread overview]
Message-ID: <d511fd90-d267-8a77-0d48-57b912447c23@kernel.dk> (raw)
In-Reply-To: <BL0PR04MB4898E1614B5213924541271B95A30@BL0PR04MB4898.namprd04.prod.outlook.com>

On 8/28/19 1:44 PM, Vincent Fu wrote:
> On 8/28/19 3:13 PM, Jens Axboe wrote:
>> On 8/28/19 11:48 AM, vincentfu@gmail.com wrote:
>>> From: Vincent Fu <vincent.fu@wdc.com>
>>>
>>> If one process is making smalloc calls and another process is making
>>> sfree calls, pool->free_blocks and pool->next_non_full will not be
>>> synchronized because the two processes each have independent, local
>>> copies of the variables.
>>>
>>> This patch allocates space for the two variables from shared storage so
>>> that separate processes will be modifying quantities stored at the same
>>> locations.
>>>
>>> This issue was discovered on the server side running a client/server job
>>> with --status-interval=1. Such a job encountered an OOM error when only
>>> ~50 objects were allocated from the smalloc pool.
>>>
>>> Also change the calculation of free_blocks in add_pool() to use
>>> SMALLOC_BPI instead of SMALLOC_BPB. These two constants are
>>> coincidentally the same on Linux and Windows but SMALLOC_BPI is the
>>> correct one to use. free_blocks is the number of available blocks of
>>> size SMALLOC_BPB. It is the product of the number of unsigned integers
>>> in the bitmap (bitmap_blocks) and the number of bits per unsigned
>>> integer (SMALLOC_BPI).
>>
>> Would it make more sense to just have the pool[] come out of shared
>> memory?
>>
>>
> 
> Yeah, that would avoid the ugly *(pool->free_blocks) expressions and
> make things easier if anything else that needs to be shared is ever
> added to struct pool.
> 
> Would something like this in sinit() work?
> 
> if (nr_pools == 0)
>       mp = mmap(NULL, MAX_POOLS * sizeof(struct pool), ...)

It should only be called once, so you should not need that nr_pools
check. But yeah, something like that.

> I'm on vacation through Labor Day and will work on this next week.

Sounds good. I have applied 1-2 for now, thanks Vincent.

-- 
Jens Axboe



Thread overview: 7+ messages
2019-08-28 17:48 [three fio patches 0/3] vincentfu
2019-08-28 17:48 ` [three fio patches 1/3] docs: small HOWTO fixes vincentfu
2019-08-28 17:48 ` [three fio patches 2/3] options: allow offset_increment to understand percentages vincentfu
2019-08-28 17:48 ` [three fio patches 3/3] smalloc: allocate pool-> members from shared memory vincentfu
2019-08-28 19:12   ` Jens Axboe
2019-08-28 19:44     ` Vincent Fu
2019-08-28 19:47       ` Jens Axboe [this message]
