From: Roberto Spadim <roberto@spadim.com.br>
To: Wolfgang Denk <wd@denx.de>
Cc: stefan.huebner@stud.tu-ilmenau.de, linux-raid@vger.kernel.org
Subject: Re: Optimize RAID0 for max IOPS?
Date: Fri, 21 Jan 2011 18:03:52 -0200	[thread overview]
Message-ID: <AANLkTikFbeE=q9i2Bq7FL7+o4u2Qq4vVSnfYbwBorFOV@mail.gmail.com> (raw)
In-Reply-To: <20110121193457.C7719D30268@gemini.denx.de>

=) I know.
But everybody says software is slower and that the solution is to use hardware.
OK.
But there is no open-source firmware for RAID hardware.

I prefer a good software/hardware solution; Linux RAID is a good
software solution for me =)
But why not try an open-source project? hehe
What could we do... a virtual machine :P with only RAID and NFS, or
dedicate a CPU to RAID (CPU affinity) and a portion of memory
only for RAID cache (today I think the RAID software doesn't have a cache,
and it shouldn't; caching is done by Linux at the filesystem level, am I right?)


2011/1/21 Wolfgang Denk <wd@denx.de>:
> Dear Roberto,
>
> In message <AANLkTiki_FfRrLtL3dMsrDLXeT8jNO0ndnTNpXk1OXMW@mail.gmail.com> you wrote:
>> a good idea....
>> Why not start an open-source RAID controller?
>> What do we need? A CPU, memory, a power supply with a battery or capacitor,
>> SAS/SATA (disk interfaces), PCI Express or another (host
>> interface).
>> It doesn't need an operating system, since it will only run one program
>> with some threads (OK, a small operating system to implement threads
>> easily).
>>
>> We could use an ARM, an FPGA, an Intel Core 2 Duo, an Athlon, a Xeon, or another system...
>
> You could even use a processor dedicated to such a job, like a
> PPC440SPe or PPC460SX or similar, which provides hardware-offload
> capabilities for the RAID calculations.  These are even supported by
> drivers in mainline Linux.
>
> But again, these would not help to maximize IOPS - the goal for
> optimization has always been maximum sequential throughput only
> (and yes, I know exactly what I'm talking about; guess where the
> aforementioned drivers come from).
>
> Best regards,
>
> Wolfgang Denk
>
> --
> DENX Software Engineering GmbH,     MD: Wolfgang Denk & Detlev Zundel
> HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
> Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
> I don't see any direct evidence ...  but, then, my crystal ball is in
> dire need of an ectoplasmic upgrade. :-)              -- Howard Smith
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial
