From: "Aleš Bláha" <ales-76@seznam.cz>
To: Craig Dunwoody <cdunwoody@graphstream.com>
Cc: ceph-devel@lists.sourceforge.net
Subject: Re: Hardware-config suggestions for HDD-based OSD node?
Date: Mon, 29 Mar 2010 17:46:49 +0200
Message-ID: <20100329174649.f9d79f2b.ales-76@seznam.cz>
In-Reply-To: <27770.1269867625@n20.hq.graphstream.com>

On Mon, 29 Mar 2010 15:00:25 +0200 (CEST)
Craig Dunwoody <cdunwoody@graphstream.com> wrote:

Hello Craig,

Sure, raw numbers for hardware are often very impressive, but it is
up to the software to squeeze the performance out of it.
Experimenting is always the best way to find out, but you can search
the web to get the picture. I suggest taking a look at Lustre
setups - the architecture is very similar, and they also use mostly
off-the-shelf components. Apparently the common Lustre OSD is rather
fat - plenty of disks divided into several RAID groups, very much
like the X4500. Then again, HPC people are mostly focused on
streaming writes, so if your application differs, your point of
diminishing returns might be elsewhere.
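
If you want a quick first-order number before building out the whole
stack, a plain streaming-write test against one RAID group is a
reasonable start. A minimal sketch (untested here; the mount point
and sizes are placeholders for your setup):

  # raw streaming write, bypassing the page cache
  dd if=/dev/zero of=/mnt/raid0/testfile bs=1M count=16384 oflag=direct

  # roughly the same with fio, four parallel writers
  fio --name=stream --rw=write --bs=1M --size=4g --numjobs=4 \
      --directory=/mnt/raid0 --direct=1 --group_reporting

Running it once per RAID group and then on all groups at once should
show whether the controller or bus saturates before the network even
enters the picture.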

Ales Blaha

> 
> Hello Ales,
> 
> Thanks for your comments.  Because it is relatively simple to scale up
> various hardware resources in an OSD node, even up to the extreme
> levels I described, I look forward to experimenting with adding
> resources to an OSD node until reaching a point of diminishing returns.
> 
> Unfortunately, this starts to get expensive for experiments with many
> OSD nodes. Also, any such results will depend on the characteristics
> of the application workload.
> 
> Craig Dunwoody
> GraphStream Incorporated
> 
> ales writes:
> >As to the number of disks, the Sun Fire X4500 with 48 SATA disks showed
> >real-world performance around 800-1000 MB/s (benchmarks on the web -
> >iSCSI, no Ceph). Since it is amd64-based, I guess you can get similar
> >I/O rates from any Intel box on the market today, provided you have
> >enough SATA ports and/or PCI-E slots. One 10gbps NIC should be enough
> >(these are usually two-ported anyway). I would say Linux will not be
> >the bottleneck here even if you use software RAID. Usually it is the
> >network or the protocol that limits the performance.
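
P.S. A rough sanity check on those X4500 numbers, assuming (my guess)
~60 MB/s sustained per 7200 rpm SATA drive of that generation:

  48 drives x ~60 MB/s ~= 2900 MB/s raw platter bandwidth
  1 x 10gbps link      ~= 1250 MB/s wire speed, before protocol overhead

So even if the disks delivered only half their raw rate, the single
10gbps port would be the ceiling, which fits the 800-1000 MB/s
observed.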


