linux-kernel.vger.kernel.org archive mirror
* DMA blues...System lockup on setting DMA mode using hdparm
@ 2001-02-23 20:48 Jasmeet Sidhu
  2001-02-23 21:59 ` Joel Jaeggli
  2001-02-24  9:21 ` Vojtech Pavlik
  0 siblings, 2 replies; 3+ messages in thread
From: Jasmeet Sidhu @ 2001-02-23 20:48 UTC (permalink / raw)
  To: linux-kernel

Hey guys,

I have five Promise ATA100 controllers configured under kernel version 
2.4.2-ac1 (using the pdc202xx driver, of course) on an ASUS A7V with a 1GHz 
AMD Tbird processor.  For the most part this kernel is very stable.  I have 
premium cables connected to the hard drives, and all drives in the system 
are masters, as you can probably tell from the device letters assigned.  
The cables are 80-pin UDMA (100% Data Integrity).  I have not seen any CRC 
errors; in fact, the system has been up overnight and has transferred about 
105GB of data in various file sizes.

The problem:

When I try to set the DMA mode on a drive using "hdparm -X69 
/dev/hda", it works fine.  As a matter of fact, this command succeeds for 
the following devices:
/dev/hda, /dev/hdc, /dev/hdm, /dev/hdo, /dev/hdq, /dev/hds
However, the system locks up completely when I try the exact same command 
on *any* of the following devices: /dev/hde, /dev/hdg, /dev/hdi, /dev/hdk.

*NOTE* the Raid5 array /dev/md0 is not running while I am setting the 
DMA modes.  The array is not mounted and has been stopped with raidstop 
/dev/md0.
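
Before forcing a mode with -X on one of the lockup-prone channels again, it 
may be worth probing read-only first.  A minimal sketch, assuming hdparm is 
on the PATH; the guard (my addition) makes it degrade gracefully on a box 
where the tool or the device node is missing:

```shell
#!/bin/sh
# Read-only probe of one of the problem drives before forcing a mode.
# /dev/hde is taken from the setup above; nothing here writes to the drive.
dev=/dev/hde
if command -v hdparm >/dev/null 2>&1 && [ -b "$dev" ]; then
    hdparm -i "$dev"   # transfer modes the drive reports; active one is starred
    hdparm -d "$dev"   # whether using_dma is currently enabled
else
    echo "hdparm or $dev not available; skipping probe of $dev"
fi
```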

Also, when I try to use the -k and -K switches (keep settings after 
reset), the program says it worked.  However, after I restart the 
system, these "flags" are set to 0 again.  Is this normal?  In other words:
hdparm -k /dev/hda
  keepsettings =  0 (off)
# now let's set the -k option (keep settings after reset).
hdparm -k1 /dev/hda
  setting keep_settings to 1 (on)
  keepsettings =  1 (on)
# now let's restart the system and query again
hdparm -k /dev/hda
  keepsettings =  0 (off)

Is this normal?
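
Since the flag evidently does not survive a restart, the usual workaround is 
to reapply the tuning from a boot script (e.g. rc.local).  A hedged sketch, 
using the device list that works above; the DRY_RUN guard and the -d1 
(enable using_dma) flag are my additions, and -X69 selects UDMA mode 5 as in 
the command above:

```shell
#!/bin/sh
# Reapply transfer-mode settings at boot, since keep_settings does not
# persist across a restart.  DRY_RUN=1 (the default here) only prints
# the commands; set DRY_RUN=0 in the real boot script.
DRY_RUN=${DRY_RUN:-1}
for dev in /dev/hda /dev/hdc /dev/hdm /dev/hdo /dev/hdq /dev/hds; do
    cmd="hdparm -X69 -d1 $dev"   # UDMA mode 5 (-X69), DMA on (-d1)
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"
    else
        $cmd
    fi
done
```

Note the list deliberately excludes /dev/hde through /dev/hdk, the channels 
that lock up.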

Also, another question related to IDE:
	Is there any way to see how well or badly the system is performing 
while it is actually working?  I am not talking about a benchmarking tool 
like bonnie, which simply tries to figure out how fast a system *can* 
perform.  I would like to see something, maybe in /proc/ide/, that shows the 
current throughput of the IDE subsystem: for example, how many KB of data 
are going into, and coming out of, each device.  Any ideas on how to go 
about adding this?  Where would be the ideal place for such functionality, 
the IDE code or the RAID code?  Or maybe the two should be kept separate.  
Any thoughts, guys?
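
Whatever counters such an interface ends up exposing, turning them into a 
live throughput figure is just differencing two snapshots over an interval.  
A sketch with made-up sample numbers; the 1 KiB block size is an assumption, 
matching the 1k-blocks unit df -k reports below:

```shell
#!/bin/sh
# Turn two counter snapshots into a throughput figure.
# The counter values and interval are illustrative only.
blocks_before=108657156
blocks_after=108659204
interval=4   # seconds between samples
awk -v b0=$blocks_before -v b1=$blocks_after -v t=$interval \
    'BEGIN { printf "%.1f KiB/s\n", (b1 - b0) / t }'
```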

I can post any additional information as needed; just let me know.

Anybody else out there with a similar situation?  Your thoughts on this 
would be really appreciated.

Here's the setup:

ide0 at 0x3800-0x3807,0x3402 on irq 11	PDC20265
ide1 at 0x3000-0x3007,0x2802 on irq 11	
ide2 at 0x5400-0x5407,0x5002 on irq 15	PDC20267
ide3 at 0x4800-0x4807,0x4402 on irq 15
ide4 at 0x7000-0x7007,0x6802 on irq 11	PDC20267
ide5 at 0x6400-0x6407,0x6002 on irq 11
ide6 at 0x8800-0x8807,0x8402 on irq 14	PDC20267
ide7 at 0x8000-0x8007,0x7802 on irq 14
ide8 at 0xa400-0xa407,0xa002 on irq 10	PDC20267
ide9 at 0x9800-0x9807,0x9402 on irq 10

hda: 40188960 sectors (20577 MB) w/1916KiB Cache, CHS=39870/16/63, UDMA(100)
hdc: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hde: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdg: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdi: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdk: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdm: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdo: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdq: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hds: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)

# Raid-5 configuration
#
raiddev                 	/dev/md0
raid-level              	5
chunk-size           	4
parity-algorithm     	left-symmetric
persistent-superblock 	1
nr-raid-disks           	8
nr-spare-disks        	1
device          		/dev/hde1
raid-disk       		0
device          		/dev/hdg1
raid-disk       		1
device          		/dev/hdi1
raid-disk       		2
device          		/dev/hdk1
raid-disk       		3
device          		/dev/hdm1
raid-disk       		4
device          		/dev/hdo1
raid-disk       		5
device          		/dev/hdq1
raid-disk       		6
device          		/dev/hds1
raid-disk       		7
device          		/dev/hdc1
spare-disk      		0

[root@bertha hdparm-3.9]# df -k
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda3             19072868   3589612  14514392  20% /
/dev/hda1               198313     11667    176392   7% /boot
/dev/md0             525461076 108657156 416803920  21% /raid
sj-f760-1:/vol/vol03/data01
                      142993408 140675648   2317760  99% /mnt/netapps



* Re: DMA blues...System lockup on setting DMA mode using hdparm
  2001-02-23 20:48 DMA blues...System lockup on setting DMA mode using hdparm Jasmeet Sidhu
@ 2001-02-23 21:59 ` Joel Jaeggli
  2001-02-24  9:21 ` Vojtech Pavlik
  1 sibling, 0 replies; 3+ messages in thread
From: Joel Jaeggli @ 2001-02-23 21:59 UTC (permalink / raw)
  To: Jasmeet Sidhu; +Cc: linux-kernel

On Fri, 23 Feb 2001, Jasmeet Sidhu wrote:

> Also another question related to IDE:
> 	Is there anyway we can see how good/bad the system performance is while
> the system is working?

That information (what the disk subsystem as a whole is doing) is collected
in /proc/stat; something like xosview can help you visualize it.

>  I am not talking about a benchmarking tool like
> bonnie that simply tries to figure out how good a system can perform.  I
> would like to see something, maybe in /proc/ide/, that shows me the current
> throughput of the ide subsystem.  For example how many kb of data is going
> in, how much coming out of each device.  Any ideas on how to go about maybe
> adding this?  Where would be an ideal place to add such functionality?  In
> the ide code or maybe in the raid section?  Or maybe these two should be
> kept separate.  Any thoughts guys?
>
> Any additional required information can be posted, let me know.
>
> Anybody else out there with a similar situation?  Your thoughts on this
> would be really appreciated.

I actually have a similar situation, although without the hanging.  With
2.4.1, the kernel detects the two devices on the third Promise controller
as plain (U)DMA rather than UDMA(100).

hda: 30003120 sectors (15362 MB) w/1916KiB Cache, CHS=1867/255/63, UDMA(33)
hdc: 30003120 sectors (15362 MB) w/1916KiB Cache, CHS=29765/16/63, UDMA(33)
  The first two are on the ServerWorks chipset controller.

hde: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdg: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdi: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdk: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, UDMA(100)
hdm: 30003120 sectors (15362 MB) w/1916KiB Cache, CHS=29765/16/63, (U)DMA
hdo: 150136560 sectors (76870 MB) w/1916KiB Cache, CHS=148945/16/63, (U)DMA
  These six are on three Promise controllers, all with 18" 80-pin cables.

hdd: ATAPI 40X CD-ROM drive, 128kB Cache, UDMA(33)


> [original setup and RAID configuration snipped]

-- 
--------------------------------------------------------------------------
Joel Jaeggli				       joelja@darkwing.uoregon.edu
Academic User Services			     consult@gladstone.uoregon.edu
     PGP Key Fingerprint: 1DE9 8FCA 51FB 4195 B42A 9C32 A30D 121E
--------------------------------------------------------------------------
It is clear that the arm of criticism cannot replace the criticism of
arms.  Karl Marx -- Introduction to the critique of Hegel's Philosophy of
the right, 1843.




* Re: DMA blues...System lockup on setting DMA mode using hdparm
  2001-02-23 20:48 DMA blues...System lockup on setting DMA mode using hdparm Jasmeet Sidhu
  2001-02-23 21:59 ` Joel Jaeggli
@ 2001-02-24  9:21 ` Vojtech Pavlik
  1 sibling, 0 replies; 3+ messages in thread
From: Vojtech Pavlik @ 2001-02-24  9:21 UTC (permalink / raw)
  To: Jasmeet Sidhu; +Cc: linux-kernel

On Fri, Feb 23, 2001 at 12:48:29PM -0800, Jasmeet Sidhu wrote:


> Also, when I try and use the -k and the -K switches (keep settings after 
> reset), the program says that it worked.  However, after I restart the 
> system, these "flags" are set to 0 again.  Is this normal?  In other words:
> hdparm -k /dev/hda
>   keepsettings =  0 (off)
> # now let's set the -k option (keep settings after reset).
> hdparm -k1 /dev/hda
>   setting keep_settings to 1 (on)
>   keepsettings =  1 (on)
> # now let's restart the system and query again
> hdparm -k /dev/hda
>   keepsettings =  0 (off)
> 
> Is this normal?

This only relates to an IDE bus reset after a failure, not a system reset.

-- 
Vojtech Pavlik
SuSE Labs

