From: Roberto Spadim <roberto@spadim.com.br>
To: Brad Campbell <lists2009@fnarfbargle.com>
Cc: Drew <drew.kay@gmail.com>, NeilBrown <neilb@suse.de>,
	Liam Kurmos <quantum.leaf@gmail.com>,
	linux-raid@vger.kernel.org
Subject: Re: mdadm raid1 read performance
Date: Wed, 4 May 2011 04:42:25 -0300	[thread overview]
Message-ID: <BANLkTinbFZf=kV=3eUe2S6XNAOst_McX4w@mail.gmail.com> (raw)
In-Reply-To: <4DC0F2B6.9050708@fnarfbargle.com>

Hmm...
In a user program we do something like:
f = fopen(path, "rb"); fread(buf, 1, buffer_size, f); fclose(f);

buffer_size is the problem: it can be very small (many reads) or very
big (uses more memory, but gives the device block layer a nice large
request to optimize).
If buffer_size is big, we can split the read across disks (good for SSDs).
If buffer_size is small, we can't split it (unless readahead is very big).
The catch: readahead needs memory (cache/buffers).

So the question is: is readahead better for SSDs, or is a bigger
buffer_size in the user program better?
Or should the filesystem use a bigger block size, so that even when the
user calls fread with a small buffer_size, the filesystem still reads a
lot of data at the device block layer?
Which is better? Other ideas?

I don't know how the Linux kernel handles a very big fread memory-wise.
For example:
fread(buf, 1, 1000000, f); /* 1 MB */
Will Linux split this 'single' fread into many reads at the block layer,
each one block (512 bytes / 4096 bytes) in size?

2011/5/4 Brad Campbell <lists2009@fnarfbargle.com>:
> On 04/05/11 13:30, Drew wrote:
>
>> It seemed logical to me that if two disks had the same data and we
>> were reading an arbitrary amount of data, why couldn't we split the
>> read across both disks? That way we get the benefits of pulling from
>> multiple disks in the read case while accepting the penalty of a write
>> being as slow as the slowest disk.
>>
>>
>
> I would have thought that since you'd be skipping alternate "stripes" on each
> disk, you minimise the benefit of a readahead buffer and get subjected to seek
> and rotational latency on both disks. Overall your benefit would be slim to
> immeasurable. Now on SSDs I could see it providing some extra oomph, as you
> suffer none of the mechanical latency penalties.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

Thread overview: 36+ messages
2011-05-04  0:07 mdadm raid1 read performance Liam Kurmos
2011-05-04  0:57 ` John Robinson
2011-05-06 20:44   ` Leslie Rhorer
2011-05-06 21:56     ` Keld Jørn Simonsen
2011-05-04  0:58 ` NeilBrown
2011-05-04  5:30   ` Drew
2011-05-04  6:31     ` Brad Campbell
2011-05-04  7:42       ` Roberto Spadim [this message]
2011-05-04 23:08         ` Liam Kurmos
2011-05-04 23:35           ` Roberto Spadim
2011-05-04 23:36           ` Brad Campbell
2011-05-04 23:45           ` NeilBrown
2011-05-04 23:57             ` Roberto Spadim
2011-05-05  0:14             ` Liam Kurmos
2011-05-05  0:20               ` Liam Kurmos
2011-05-05  0:25                 ` Roberto Spadim
2011-05-05  0:40                   ` Liam Kurmos
2011-05-05  7:26                     ` David Brown
2011-05-05 10:41                       ` Keld Jørn Simonsen
2011-05-05 11:38                         ` David Brown
2011-05-06  4:14                           ` CoolCold
2011-05-06  7:29                             ` David Brown
2011-05-06 21:05                       ` Leslie Rhorer
2011-05-07 10:37                         ` David Brown
2011-05-07 10:58                           ` Keld Jørn Simonsen
2011-05-05  0:24               ` Roberto Spadim
2011-05-05 11:10             ` Keld Jørn Simonsen
2011-05-06 21:20               ` Leslie Rhorer
2011-05-06 21:53                 ` Keld Jørn Simonsen
2011-05-07  3:17                   ` Leslie Rhorer
2011-05-05  4:06           ` Roman Mamedov
2011-05-05  8:06             ` Nikolay Kichukov
2011-05-05  8:39               ` Liam Kurmos
2011-05-05  8:49                 ` Liam Kurmos
2011-05-05  9:30               ` NeilBrown
2011-05-04  7:48       ` David Brown
