* DM-MP Read Performance
From: Rodrigo Nascimento @ 2010-05-11 18:59 UTC
  To: device-mapper development

Hi All,

I have an Oracle Enterprise Linux box, kernel 2.6.18..., accessing LUs
on a NetApp box over iSCSI.
On the NetApp side I have two Gigabit Ethernet NICs; each NIC is a
member of a VLAN, and these two interfaces belong to the same
TargetPortalGroup.
On the host side I have two Gigabit Ethernet NICs, each a member of a VLAN.
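
For reference, the two iSCSI sessions (one per NIC/VLAN) can be listed
with open-iscsi; the portal addresses and the serial part of the IQN
below are only placeholders:

iscsiadm -m session
# one session per portal is expected, e.g.:
# tcp: [1] <portal-on-VLAN-A>:3260,<tpgt> iqn.1992-08.com.netapp:sn....
# tcp: [2] <portal-on-VLAN-B>:3260,<tpgt> iqn.1992-08.com.netapp:sn....
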
This is the configuration in my DM-MP configuration file:

defaults
{
    user_friendly_names no
    max_fds 4096
    rr_min_io 128
}

blacklist
{
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices
{
    device
    {
        vendor "NetApp"
        product "LUN"
        flush_on_last_del yes
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout "/sbin/mpath_prio_netapp /dev/%n"
        features "1 queue_if_no_path"
        hardware_handler "0"
        path_grouping_policy multibus
        failback immediate
        path_checker directio
    }
}
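
With path_grouping_policy multibus both paths should land in a single
round-robin path group, and rr_min_io 128 makes DM-MP switch paths
every 128 I/Os. The resulting map can be checked with multipath -ll
(the map name is whatever the system assigns):

multipath -ll
# expect one 'round-robin 0' path group containing both sdb and sdc,
# with [features=1 queue_if_no_path]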

When I simulate write operations on an LU I reach 90MB/s on each NIC.
When I simulate read operations on an LU I reach only 40MB/s on each
NIC, which is very poor.
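
For illustration, the kind of sequential read test I mean can be as
simple as a direct-I/O dd against the multipath device (the map name
and sizes below are placeholders):

dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=4096 iflag=direct
# 4GiB sequential read, bypassing the page cache via O_DIRECT
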
While the read workload is running I can see the devices /dev/sdb and
/dev/sdc at 50% busy each, and dm-1 at 100% busy.
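
The busy percentages come from the extended device statistics, e.g.
iostat from the sysstat package:

iostat -x 2
# watch the %util column: sdb and sdc sit around 50 each while dm-1
# shows 100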

Can anyone help me identify why the read throughput is so poor?

Thanks,

---
Nascimento
NetApp - Enjoy it!
