* fio memory usage
@ 2012-04-18 15:41 Vikram Seth
  2012-04-18 18:49 ` Jens Axboe
  0 siblings, 1 reply; 5+ messages in thread
From: Vikram Seth @ 2012-04-18 15:41 UTC (permalink / raw)
  To: fio

Hi Jens,

What's the maximum memory used per job (per device) by fio? Is there a
rule of thumb for the minimum memory needed to run fio?
Say I am running N threads via numjobs against M devices in the
system; I'd like to know whether I have enough memory before I start
the test, rather than waiting for it to crash days later.

Also, if fio finds that it is running out of memory while running,
does it generate an OOM-style message in the output file that can be
used to track down the reason for a crashed test?

Thanks,

Vikram.


* Re: fio memory usage
  2012-04-18 15:41 fio memory usage Vikram Seth
@ 2012-04-18 18:49 ` Jens Axboe
  2012-04-18 18:54   ` Jens Axboe
  0 siblings, 1 reply; 5+ messages in thread
From: Jens Axboe @ 2012-04-18 18:49 UTC (permalink / raw)
  To: Vikram Seth; +Cc: fio

On 2012-04-18 17:41, Vikram Seth wrote:
> Hi Jens,
> 
> What's the maximum memory used per job (per device) by fio? Is there a
> rule of thumb for the minimum memory needed to run fio?
> Say I am running N threads via numjobs against M devices in the
> system; I'd like to know whether I have enough memory before I start
> the test, rather than waiting for it to crash days later.
> 
> Also, if fio finds that it is running out of memory while running,
> does it generate an OOM-style message in the output file that can be
> used to track down the reason for a crashed test?

Generally, any memory that is used is allocated before fio starts
running anything. That's not always strictly true: for verify workloads,
fio stores metadata for written blocks, so the memory footprint can
grow there. But that's the only case that isn't bounded in that
sense. Fio will allocate small items while running, but those are in the
sub-KB category, allocations that should not fail. They are also
continually freed, so they are not persistent.

Usually IO buffers take the most memory. You can easily calculate
that: it's queue_depth * max_buffer_size * number_of_jobs.
Outside of that, fio sets up a shared memory segment; the default on
Linux is 32MB. If you use a random workload and don't set norandommap,
fio will allocate a device/file-sized bitmap to track which blocks have
been written. That consumes 1 byte per block per non-shared file.
So for a 500GB drive using 4KB blocks as the minimum IO size, that'd be
122070313 blocks, or ~116MB of memory for that bitmap. That'd be your
biggest consumer of persistent memory, but one you can usually
eliminate.
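In shell terms, a back-of-the-envelope version of that calculation
(the job parameters below are made-up examples, not fio defaults):

```shell
# Rough estimate of fio's persistent memory use, following the rules of
# thumb above. All job parameters here are illustrative examples.
iodepth=32                          # queue_depth
max_buffer_size=$((128 * 1024))     # largest block size in the job, bytes
numjobs=4

# IO buffers: queue_depth * max_buffer_size * number_of_jobs
io_buffers=$((iodepth * max_buffer_size * numjobs))   # 16MB here

# Random-map bitmap: 1 byte per minimum-sized block (unless norandommap)
device_size=500000000000            # 500GB drive
min_bs=4096                         # 4KB minimum IO size
randommap=$((device_size / min_bs)) # ~116MB

shared_seg=$((32 * 1024 * 1024))    # default shared memory segment on Linux

total=$((io_buffers + randommap + shared_seg))
echo "$((total / 1024 / 1024)) MB"  # prints: 164 MB
```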

Hope that helps...

-- 
Jens Axboe



* Re: fio memory usage
  2012-04-18 18:49 ` Jens Axboe
@ 2012-04-18 18:54   ` Jens Axboe
  2012-04-18 22:59     ` Vikram Seth
  0 siblings, 1 reply; 5+ messages in thread
From: Jens Axboe @ 2012-04-18 18:54 UTC (permalink / raw)
  To: Vikram Seth; +Cc: fio

On 2012-04-18 20:49, Jens Axboe wrote:
> [...]
> 
> Hope that helps...

Oh, and any sort of IO logging will also continually allocate memory.
That'd be options like write_*_log.

And I forgot to touch on what fio does if a memory allocation fails: it
crashes. No attempt is made to handle memory allocation failures. That
is something that should most likely be improved. In reality, most
people run on Linux, where allocations just don't fail with the default
settings, so what happens instead is that the OOM killer terminates fio.
I'm not sure what other platforms default to here, but this fact has
made handling allocation failures a lower priority. Ideally, fio would
simply exit and report status when an allocation fails. That would not
be hard to add.

-- 
Jens Axboe



* Re: fio memory usage
  2012-04-18 18:54   ` Jens Axboe
@ 2012-04-18 22:59     ` Vikram Seth
  2012-04-19  7:54       ` Jens Axboe
  0 siblings, 1 reply; 5+ messages in thread
From: Vikram Seth @ 2012-04-18 22:59 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

On Wed, Apr 18, 2012 at 11:54 AM, Jens Axboe <axboe@kernel.dk> wrote:
> [...]
>
> Oh, and any sort of IO logging will also continually allocate memory.
> That'd be options like write_*_log.
>
> And I forgot to touch on what fio does if a memory allocation fails: it
> crashes. No attempt is made to handle memory allocation failures. That
> is something that should most likely be improved. In reality, most
> people run on Linux, where allocations just don't fail with the default
> settings, so what happens instead is that the OOM killer terminates fio.
> I'm not sure what other platforms default to here, but this fact has
> made handling allocation failures a lower priority. Ideally, fio would
> simply exit and report status when an allocation fails. That would not
> be hard to add.
>
> --
> Jens Axboe
>

Thanks Jens. That was really helpful.

The OOM killer should log the details to syslog: which process it
killed, and so on.
As long as that is true, we have a way to figure out after a crash
whether fio ran out of memory, so we know where to look further.
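For example, a quick post-mortem check along these lines (log access
and exact message wording vary by distribution, so this is only a
sketch):

```shell
# Did the OOM killer strike? dmesg may require root on some systems,
# and the syslog file location varies by distribution.
if dmesg 2>/dev/null | grep -qi 'out of memory'; then
    echo "an OOM kill was logged; check dmesg for the victim process"
else
    echo "no OOM events in the kernel log"
fi
```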

However, having fio report a failure status in the output file on
memory allocation failure would remove any dependency on the OS and
its reporting of such a failure.

Vikram.



* Re: fio memory usage
  2012-04-18 22:59     ` Vikram Seth
@ 2012-04-19  7:54       ` Jens Axboe
  0 siblings, 0 replies; 5+ messages in thread
From: Jens Axboe @ 2012-04-19  7:54 UTC (permalink / raw)
  To: Vikram Seth; +Cc: fio

On 04/19/2012 12:59 AM, Vikram Seth wrote:
> [...]
> 
> Thanks Jens. That was really helpful.
> 
> The OOM killer should log the details to syslog: which process it
> killed, and so on.
> As long as that is true, we have a way to figure out after a crash
> whether fio ran out of memory, so we know where to look further.

Correct.

> However, having fio report a failure status in the output file on
> memory allocation failure would remove any dependency on the OS and
> its reporting of such a failure.

That would indeed be ideal, and not too hard to do. Mainly, fio would
need to ensure that it does not do any allocations in the exit report
path.

-- 
Jens Axboe



end of thread, other threads:[~2012-04-19  7:54 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-04-18 15:41 fio memory usage Vikram Seth
2012-04-18 18:49 ` Jens Axboe
2012-04-18 18:54   ` Jens Axboe
2012-04-18 22:59     ` Vikram Seth
2012-04-19  7:54       ` Jens Axboe
