* Re: Powerpath vs dm-multipath - two points of FUD?
@ 2014-09-14 18:44 Levy, Jerome
  2014-09-15  6:54 ` Hannes Reinecke
  0 siblings, 1 reply; 5+ messages in thread
From: Levy, Jerome @ 2014-09-14 18:44 UTC (permalink / raw)
  To: dm-devel

> Firstly, apologies if this is a common topic and my intentions are not 
> to start a flame war. I've googled extensively but haven't found
> specific information to address my queries, so I thought I would turn here.

At the risk of getting involved in a religious discussion (disclaimer: I am an EMC employee and was an advanced support engineer for both PowerPath and dm-multipath) I thought I'd pass along a few thoughts that might help:

PowerPath costs money. dm-multipath is included with the OS distro.

PowerPath operations are largely consistent across platforms. If you know how PowerPath works on Linux, you have a very short learning curve on AIX, Solaris, HP-UX, and so forth. This is not necessarily
true of native multipath solutions on any platform, and it can be a significant factor in a multi-vendor environment.

PowerPath contains proprietary load sensing and balancing algorithms which may help performance in a given situation. (It does. I've seen them.) YMMV.

Request size and other switching options can be useful in a number of specific situations -- some may call them corner cases, but streaming media, heavy database backups, large dataset transfers, and others have been shown to benefit from alternate PowerPath policies.

As Hannes points out, PowerPath is supported by EMC. If things don't work, you have someone to call. That can be comforting in the middle of the night :)

I've seen both PowerPath and native multipath solutions provide a lot of value in different ways, and the decision as to which to use is not always clear-cut. Hope this helps!

-- jml


* Re: Powerpath vs dm-multipath - two points of FUD?
  2014-09-14 18:44 Powerpath vs dm-multipath - two points of FUD? Levy, Jerome
@ 2014-09-15  6:54 ` Hannes Reinecke
  0 siblings, 0 replies; 5+ messages in thread
From: Hannes Reinecke @ 2014-09-15  6:54 UTC (permalink / raw)
  To: device-mapper development

Hi Jerome,

On 09/14/2014 08:44 PM, Levy, Jerome wrote:
>> Firstly, apologies if this is a common topic and my intentions are not 
>> to start a flame war. I've googled extensively but haven't found
>> specific information to address my queries, so I thought I would turn here.
> 
> At the risk of getting involved in a religious discussion (disclaimer:
> I am an EMC employee and was an advanced support engineer for both
> PowerPath and dm-multipath) I thought I'd pass along a few thoughts
> that might help:
> 
> PowerPath costs money. dm-multipath is included with the OS distro.
> 
[ .. ]
> 
> PowerPath contains proprietary load sensing and balancing algorithms which may help performance
> in a given situation. (It does. I've seen them.) YMMV.
> 
Unfortunately I'll have to agree with Jerome here.
If you have in-depth knowledge of the array you can cut some corners
and optimize for certain scenarios.
We haven't, and so we can't.

Neither can we estimate how _likely_ such a scenario is;
we live on the assumption that it's not, and that most customers are
happily living with the standard setups.
Surprisingly enough, most do :-)

> Request size and other switching options can be useful in a number
> of specific situations -- some may call them corner cases, but
> streaming media, heavy database backups, large dataset transfers,
> and others have been shown to benefit from alternate PowerPath
> policies.
> 
See above. Of course.

> As Hannes points out, PowerPath is supported by EMC. If things don't work,
> you have someone to call. That can be comforting in the middle of
> the night :)
> 
But so is device-mapper multipath :-)
Sorry.

> I've seen both PowerPath and native multipath solutions provide a lot
> of value in different ways, and the decision as to which to use is
> not always clear-cut. Hope this helps!
> 
Thanks for that. It surely helps.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* Re: Powerpath vs dm-multipath - two points of FUD?
  2014-09-09 16:50 Rob
  2014-09-10 10:04 ` Bryn M. Reeves
@ 2014-09-14  8:39 ` Hannes Reinecke
  1 sibling, 0 replies; 5+ messages in thread
From: Hannes Reinecke @ 2014-09-14  8:39 UTC (permalink / raw)
  To: dm-devel

On 09/09/2014 06:50 PM, Rob wrote:
> Hi List,
>
> Firstly, apologies if this is a common topic and my intentions are not
> to start a flame war. I've googled extensively but haven't found
> specific information to address my queries, so I thought I would turn here.
>
> We have a rather large multi-tenant infrastructure using PowerPath.
> Since this inherently comes with increased maintenance costs
> (recompiling the module, requiring extra steps / care when upgrading
> etc) we are looking at using dm-multipath as the de facto standard
> SAN-connection abstraction layer for installations of RHEL 7+.
>
> After discussions with our SAN Architect team, we were given the below
> points to chew over and we were met with stiff resistance to moving away
> from Powerpath. Since there was little right of reply, I'd like to run
> these points past the minds of this list to understand if these are
> strong enough to justify a business case for keeping Powerpath over
> Multipath.
>
Hehe. PowerPath again.
Mind you, device-mapper multipathing is fully supported by EMC ...

>
> Here's a couple of reasons to stick with powerpath:
>
> * Load Balancing:
>
>   Whilst dm-multipath can make use of more than one of the paths to an
> array, i.e. with round-robin, this isn't true load-balancing.  Powerpath
> is able to examine the paths down to the array and balance workload
> based on how busy the storage controller / ports are.  AFAIK RHEL6 has
> added functionality to make path choices based on queue depth and
> service time, which does add some improvement over vanilla round-robin.
>
We do this with the switch to request-based multipathing.
Using one of the other load balancers (e.g. least-pending) and setting
rr_min_io to '1' will give you exactly that behaviour.
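
For illustration, here is a minimal multipath.conf sketch along those
lines. It is a hedged example, not a recommendation: real setups
normally inherit most of this from the built-in hardware table or a
per-device section, the multibus grouping assumes an active/active
array, and 'least-pending' is not present in every kernel (upstream
ships the similar 'queue-length' and 'service-time' selectors;
rr_min_io_rq is the request-based counterpart of rr_min_io):

defaults {
        # Send each request to the path with the fewest requests
        # currently in flight, instead of plain round-robin.
        path_selector           "queue-length 0"
        # Re-evaluate the path choice on every request.
        rr_min_io_rq            1
        # One priority group containing all paths, so they share load.
        path_grouping_policy    multibus
}

'multipath -ll' shows which selector a given map actually ended up
with.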

>   For VMAX and CX/VNX, powerpath uses the following parameters to
> balance the paths out: Pending I/Os on the path, Size of I/Os, Types of
> I/Os, and Paths most recently used.
>
Pending I/O is covered with the 'least-pending' path selector; I fail to
see the value in any of the others (what would be the point in
switching I/O based on the _size_ of the I/O request?)

>   * Flakey Path Detection:
>
>   The latest versions of powerpath can proactively take paths out of
> service should it observe intermittent IO failures (remember any IO
> failure can hold a thread for 30-60 seconds whilst the SCSI command
> further up the stack times out, and a retry is sent).  dm-multipath
> doesn’t have functionality to remove a flakey path, paths can only be
> marked out of service on hard failure.
>
Wrong. I added flakey path detection a while back. I'll look at the
sources and check the current status; it might be that I've not gotten
around to sending it upstream.
So you _might_ need to switch to SLES :-)

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* Re: Powerpath vs dm-multipath - two points of FUD?
  2014-09-09 16:50 Rob
@ 2014-09-10 10:04 ` Bryn M. Reeves
  2014-09-14  8:39 ` Hannes Reinecke
  1 sibling, 0 replies; 5+ messages in thread
From: Bryn M. Reeves @ 2014-09-10 10:04 UTC (permalink / raw)
  To: device-mapper development

On Tue, Sep 09, 2014 at 05:50:49PM +0100, Rob wrote:
> Firstly, apologies if this is a common topic and my intentions are not to
> start a flame war. I've googled extensively but haven't found specific
> information to address my queries, so I thought I would turn here.

Dr. Freeman, I presume? ;)
 
> *Here’s a couple of reasons to stick with powerpath:* Load
> Balancing: Whilst dm-multipath can make use of more than one of the paths
> to an array, i.e. with round-robin, this isn't true load-balancing.

Hasn't been true since RHEL5. RHEL6's kernel has request-based
device-mapper and the SCSI device_handler infrastructure which allows
the dm-multipath target to make better decisions in response to device
utilisation and error conditions. In addition to the simple round-robin
path selector RHEL6 (and later) device-mapper-multipath supports the
queue-length and service-time path selectors. These route IO to the
device with the shortest queue or best service time (shortest queue
relative to the device's throughput). It's relatively easy to add a new
path selector - the difficult part is to define generally useful
selection algorithms.
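
To give a feel for what a selector actually decides, here is a small
illustrative C sketch of queue-length-style selection. It is
deliberately not the kernel's path-selector API (the real
implementations live in drivers/md/dm-queue-length.c and
dm-service-time.c); the struct and function names below are made up
for the example:

/* Illustrative only: pick the usable path with the fewest in-flight
 * requests -- the basic idea behind the "queue-length" selector. */
#include <stddef.h>

struct example_path {
        unsigned in_flight;     /* requests outstanding on this path */
        int      usable;        /* non-zero while the path is up */
};

static struct example_path *
select_least_queued(struct example_path *paths, size_t n)
{
        struct example_path *best = NULL;

        for (size_t i = 0; i < n; i++) {
                if (!paths[i].usable)
                        continue;
                if (!best || paths[i].in_flight < best->in_flight)
                        best = &paths[i];
        }
        return best;            /* NULL once every path has failed */
}

As noted above, the plumbing is the easy part; the hard part is
picking a metric that helps across workloads rather than in a single
benchmark.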

One thing to bear in mind is that all of the Linux multipath solutions
sit on top of the same SCSI infrastructure; I've seen situations where
hosts using multipath-tools, PowerPath and Veritas VxDMP all suffered
a similar failure due to a hardware fault condition that was not handled
well by the Linux midlayer at the time.

>  Powerpath is able to examine the paths down to the array and balance
> workload based on how busy the storage controller / ports are.  AFAIK RHEL6
> has added functionality to make path choices based on queue depth and
> service time, which does add some improvement over vanilla round-robin. For
> VMAX and CX/VNX, powerpath uses the following parameters to balance the
> paths out: Pending I/Os on the path, Size of I/Os, Types of I/Os, and Paths

PowerPath does support various modes of optimisation that are specific
to EMC arrays (iirc separate policies exist for Symmetrix and other
product lines). These are based on an understanding of the device
internals and rely on information that is not publicly available - e.g.
I've seen cases where PowerPath decides to route all IO to a single port
of a Symmetrix presumably because this gives improved cache behaviour.

> most recently used. * Flakey Path Detection: The latest versions of
> powerpath can proactively take paths out of service should it observe
> intermittent IO failures (remember any IO failure can hold a thread for
> 30-60 seconds whilst the SCSI command further up the stack times out, and a
> retry is sent).  dm-multipath doesn’t have functionality to remove a flakey
> path, paths can only be marked out of service on hard failure.*

This is something I've discussed a few times. It's a useful feature but
one that mainly comes into play when dealing with failing or marginal
hardware.

Regards,
Bryn.



* Powerpath vs dm-multipath - two points of FUD?
@ 2014-09-09 16:50 Rob
  2014-09-10 10:04 ` Bryn M. Reeves
  2014-09-14  8:39 ` Hannes Reinecke
  0 siblings, 2 replies; 5+ messages in thread
From: Rob @ 2014-09-09 16:50 UTC (permalink / raw)
  To: dm-devel



Hi List,

Firstly, apologies if this is a common topic and my intentions are not to
start a flame war. I've googled extensively but haven't found specific
information to address my queries, so I thought I would turn here.

We have a rather large multi-tenant infrastructure using PowerPath. Since
this inherently comes with increased maintenance costs (recompiling the
module, requiring extra steps / care when upgrading etc) we are looking at
using dm-multipath as the de facto standard SAN-connection abstraction layer
for installations of RHEL 7+.

After discussions with our SAN Architect team, we were given the below
points to chew over and we were met with stiff resistance to moving away
from Powerpath. Since there was little right of reply, I'd like to run
these points past the minds of this list to understand if these are
strong enough to justify a business case for keeping Powerpath over Multipath.

Here's a couple of reasons to stick with powerpath:

* Load Balancing:

  Whilst dm-multipath can make use of more than one of the paths to an
array, i.e. with round-robin, this isn't true load-balancing.  Powerpath
is able to examine the paths down to the array and balance workload
based on how busy the storage controller / ports are.  AFAIK RHEL6 has
added functionality to make path choices based on queue depth and
service time, which does add some improvement over vanilla round-robin.

  For VMAX and CX/VNX, powerpath uses the following parameters to
balance the paths out: Pending I/Os on the path, Size of I/Os, Types of
I/Os, and Paths most recently used.

* Flakey Path Detection:

  The latest versions of powerpath can proactively take paths out of
service should it observe intermittent IO failures (remember any IO
failure can hold a thread for 30-60 seconds whilst the SCSI command
further up the stack times out, and a retry is sent).  dm-multipath
doesn't have functionality to remove a flakey path, paths can only be
marked out of service on hard failure.

Many thanks
--
Rob

