From: Adam Goryachev <mailinglists@websitemanagers.com.au>
To: Chris Murphy <lists@colorremedies.com>
Cc: Phil Turmel <philip@turmel.org>,
	Dave Cundiff <syshackmin@gmail.com>,
	"stan@hardwarefreak.com Hoeppner" <stan@hardwarefreak.com>,
	"linux-raid@vger.kernel.org list" <linux-raid@vger.kernel.org>
Subject: Re: RAID performance
Date: Sat, 09 Feb 2013 01:19:05 +1100	[thread overview]
Message-ID: <51150959.8070501@websitemanagers.com.au> (raw)
In-Reply-To: <DFB376BB-32B9-48DE-A7E9-A1B8F97982EF@colorremedies.com>

On 08/02/13 18:35, Chris Murphy wrote:
> 
> On Feb 7, 2013, at 11:25 PM, Adam Goryachev
> <mailinglists@websitemanagers.com.au> wrote:
>> 
>> 
>> On the remote machine: NFS mount; loop device to present the NFS file
>> as a block device; Xen passes the block device through to the domU
>> (Windows) as a disk; the partition is formatted NTFS.
> 
> Assuming the domU gets its own IP, Windows will mount NFS directly.
> You don't need to format it. On the storage server, storage is ext4
> or XFS and can be on LVM if you wish.

Are you suggesting that MS Windows 2003 Server (without any commercial
add-on software) will boot from NFS and run normally (no user-noticeable
changes) with its C: actually a bunch of files on an NFS server?

I must admit, if that is possible, I'll be... better educated. I don't
think it is, hence I've gone with iSCSI, which allows me to present a
block device to Windows. I had considered configuring Windows to
actually boot from iSCSI, which I think is mostly possible, but apart
from the added complexity, I've also heard it ends up with worse
performance, as the emulated network card is less efficient than the
emulated disk + native network card. (Also, the host gets more CPU
allocation than the Windows VM.)
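
For what it's worth, the kind of export I'm doing can be sketched with
tgtadm (the IQN, LV path, and initiator address below are placeholders,
not my real config):

```shell
# Create iSCSI target 1 (tgtd must already be running)
tgtadm --lld iscsi --op new --mode target --tid 1 \
    --targetname iqn.2013-02.au.com.example:storage.tsvm1

# Attach an LV as LUN 1 behind that target
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    --backing-store /dev/vg0/tsvm1

# Allow the Xen host's initiator to connect (ACL by IP)
tgtadm --lld iscsi --op bind --mode target --tid 1 \
    --initiator-address 192.168.1.10
```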

>> I'm not sure, but it was my understanding that using block devices
>> was the most efficient way to do this….
> 
> Depends on the usage. Files being copied and/or moved on the same
> storage array sounds like a file sharing context to me, not a block
> device requirement. And user report of write failures over iSCSI
> bothers me also. NFS is going to be much more fault tolerant, and all
> of your domUs can share one pile of storage. But as you have it
> configured, you've effectively over provisioned if each domU gets its
> own LV, all the more reason I don't think you need to do more over
> provisioning. And for now I think NFS vs iSCSI can wait another day,
> and that your problem lies elsewhere on the network.
> 
> Do you have internet network traffic going through this same switch?
> Or do you have the storage network isolated such that *only* iSCSI
> traffic is happening on a given wire?

There isn't any actual "internet traffic", as that all comes into a Linux
firewall with IP forwarding disabled (and no NAT); only the Squid proxy
and SMTP are available to forward traffic out. In any case, yes, there is
a single 1G ethernet port in each physical box which carries all the SAN
traffic as well as the user-level traffic.
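
If it helps, this is roughly how I'd check how much of the shared wire
iSCSI is actually taking (interface name is a placeholder):

```shell
# Watch iSCSI packets (default port 3260) on the shared interface
tcpdump -i eth0 -nn -c 1000 'tcp port 3260'

# Compare per-interface byte counters before and after a test run
ip -s link show eth0
```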

>> ie, if a user logs into terminal server 1, and copies a large file
>> from the desktop to another folder on the same c:, then this 
>> terminal server will get busy, possibly using a full 1Gbps through
>> the VM, physical machine, switch, to the storage server. However,
>> the storage server has another 3Gbps to serve all the other
>> systems.
> 
> I think you need to replicate the condition that causes the problem,
> on the storage server itself first, to isolate this from being a
> network problem. And I'd do rather intensive read tests first and
> then do predominately write tests to see if there's a distinct
> difference (above what's expected for the RAID 5 write hit). And then
> you can repeat these from your domUs.

OK, well, I've started running some performance tests on the storage
server. I'd like to find out whether they are "expected results", and
then I'll move on to testing over the network.
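
The read-then-write comparison you suggest can be scripted with fio
(the target device below is a placeholder; --direct=1 bypasses the page
cache so the array itself is measured, and the write test is of course
destructive, so only on a scratch LV):

```shell
# Sequential read test, 30 seconds, direct I/O
fio --name=seqread --filename=/dev/vg0/testlv --rw=read \
    --bs=1M --direct=1 --ioengine=libaio --runtime=30 --time_based

# Same workload as writes, to expose any read/write asymmetry
fio --name=seqwrite --filename=/dev/vg0/testlv --rw=write \
    --bs=1M --direct=1 --ioengine=libaio --runtime=30 --time_based
```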

> I'm flummoxed off hand if an NTFS formatted iSCSI block device
> behaves exactly as an NTFS formatted LV; ergo, is it possible (and
> OK) to unmount the volumes on the domUs, and then mount the LV as
> NTFS on the storage server so that your storage server can run local
> tests, simultaneously to those LVs. Obviously you should not mount
> the LVs on the storage server while they are mounted over iSCSI or
> you'll totally corrupt the file system (and it will let you do this,
> a hazard of iSCSI).

Yes, I've no problem mounting an LV directly on the storage server, I've
done that before for testing/migration of physical machines. Of course,
as you mentioned, not while the VM is actually running!
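
For the record, with the VM shut down this is roughly the procedure
(LV name is a placeholder; the LV holds a full disk image, so the
partitions have to be mapped first):

```shell
# Map the partitions inside the LV's disk image
kpartx -av /dev/vg0/tsvm1   # creates e.g. /dev/mapper/vg0-tsvm1p1

# Mount the NTFS partition read-only for safe local testing
mount -t ntfs-3g -o ro /dev/mapper/vg0-tsvm1p1 /mnt/tsvm1

# ... run the local tests ...

umount /mnt/tsvm1
kpartx -d /dev/vg0/tsvm1
```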

Thanks,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au
