From: "Paul Rolland" <rol@as2917.net>
To: "'Bartlomiej Zolnierkiewicz'" <B.Zolnierkiewicz@elka.pw.edu.pl>,
"'Marek Habersack'" <grendel@caudium.net>
Cc: <linux-kernel@vger.kernel.org>
Subject: Re: Lost interrupts with IDE DMA on 2.5.x
Date: Sat, 26 Apr 2003 09:53:00 +0200
Message-ID: <004401c30bc8$dc4bb660$2101a8c0@witbe>
In-Reply-To: <Pine.SOL.4.30.0304252143370.602-200000@mion.elka.pw.edu.pl>

Hi Bartlomiej,

It seems the patch is missing :-(

Regards,
Paul
>
> The attached patch should help, please try it.
>
> --
> Bartlomiej
>
> On Fri, 25 Apr 2003, Marek Habersack wrote:
>
> > Hello,
> >
> > I've recently added a second drive to my workstation and since then
> > I'm getting the following error from time to time:
> >
> > Apr 25 20:42:06 beowulf kernel: hda: dma_timer_expiry: dma status == 0x64
> > Apr 25 20:42:06 beowulf kernel: hda: lost interrupt
> > Apr 25 20:42:06 beowulf kernel: hda: dma_intr: bad DMA status (dma_stat=70)
> > Apr 25 20:42:06 beowulf kernel: hda: dma_intr: status=0x50 { DriveReady SeekComplete }
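
Side note, not from Marek's report: those hex values decode against the
standard SFF-8038i bus-master DMA status register and the ATA status
register -- dma status 0x64 is "interrupt pending, both drives DMA-capable",
and status 0x50 is DriveReady|SeekComplete, matching the kernel's own decode.
A throwaway userspace sketch, just for illustration:

/*
 * Not kernel code -- decodes the hex values quoted above using the
 * SFF-8038i bus-master status bits and the ATA status register bits.
 */
#include <stdio.h>

static void decode_bmdma(unsigned char s)
{
	printf("BM-DMA status 0x%02x:%s%s%s%s%s%s\n", s,
	       (s & 0x01) ? " active"       : "",
	       (s & 0x02) ? " dma-error"    : "",
	       (s & 0x04) ? " irq-pending"  : "",
	       (s & 0x20) ? " drive0-dma"   : "",
	       (s & 0x40) ? " drive1-dma"   : "",
	       (s & 0x80) ? " simplex-only" : "");
}

static void decode_ata(unsigned char s)
{
	printf("ATA status 0x%02x:%s%s%s%s%s\n", s,
	       (s & 0x80) ? " Busy"         : "",
	       (s & 0x40) ? " DriveReady"   : "",
	       (s & 0x10) ? " SeekComplete" : "",
	       (s & 0x08) ? " DataRequest"  : "",
	       (s & 0x01) ? " Error"        : "");
}

int main(void)
{
	decode_bmdma(0x64);	/* value from dma_timer_expiry above */
	decode_bmdma(0x70);	/* dma_stat from the "bad DMA status" line */
	decode_ata(0x50);	/* status=0x50 { DriveReady SeekComplete } */
	return 0;
}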
> >
> > Both drives are new Maxtors (60 and 40GB) on the VIA KT266 chipset
> > (the mobo is an MSI K7T266 Pro2-A):
> >
> > ----------VIA BusMastering IDE Configuration----------------
> > Driver Version: 3.36
> > South Bridge: VIA vt8233a
> > Revision: ISA 0x0 IDE 0x6
> > Highest DMA rate: UDMA133
> > BM-DMA base: 0xfc00
> > PCI clock: 33.3MHz
> > Master Read Cycle IRDY: 0ws
> > Master Write Cycle IRDY: 0ws
> > BM IDE Status Register Read Retry: yes
> > Max DRDY Pulse Width: No limit
> > -----------------------Primary IDE-------Secondary IDE------
> > Read DMA FIFO flush:          yes                  yes
> > End Sector FIFO flush:         no                   no
> > Prefetch Buffer:              yes                  yes
> > Post Write Buffer:            yes                  yes
> > Enabled:                      yes                  yes
> > Simplex only:                  no                   no
> > Cable Type:                   80w                  40w
> > -------------------drive0----drive1----drive2----drive3-----
> > Transfer Mode:        UDMA      UDMA       PIO       DMA
> > Address Setup:       120ns     120ns     120ns     120ns
> > Cmd Active:           90ns      90ns      90ns      90ns
> > Cmd Recovery:         30ns      30ns      30ns      30ns
> > Data Active:          90ns      90ns     330ns      90ns
> > Data Recovery:        30ns      30ns     270ns      30ns
> > Cycle Time:           15ns      15ns     600ns     120ns
> > Transfer Rate:   133.3MB/s 133.3MB/s   3.3MB/s  16.6MB/s
> >
> > Of course, when the above happens, all disk I/O freezes. The above
> > happens only when there's simultaneous activity on both devices. It
> > doesn't happen when the devices are on different IDE interfaces. The
> > transfer is always retried and completed successfully, so it's not a
> > bad hdd, and I can only guess the problem is somewhere in the DMA/IRQ
> > handling by the IDE driver. If there's not enough information to
> > diagnose/solve the problem, I can do more tests (run with 2.4 for a
> > while, run with the generic IDE driver, etc.).
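
Not from the report either, but if it helps to reproduce this on demand: a
minimal userspace sketch (device nodes /dev/hda and /dev/hdb assumed from the
setup above) that simply reads from both drives on the same channel in
parallel, which is the pattern described as triggering the timeout:

/*
 * Illustrative only -- keep both drives' DMA engines busy at once and
 * watch dmesg for dma_timer_expiry / lost interrupt messages.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void hammer(const char *dev)
{
	char buf[64 * 1024];
	int fd = open(dev, O_RDONLY);

	if (fd < 0) {
		perror(dev);
		_exit(1);
	}
	/* read ~256MB sequentially so DMA stays active for a while */
	for (int i = 0; i < 4096; i++)
		if (read(fd, buf, sizeof(buf)) <= 0)
			break;
	close(fd);
}

int main(void)
{
	const char *devs[2] = { "/dev/hda", "/dev/hdb" };

	for (int i = 0; i < 2; i++)
		if (fork() == 0) {	/* one reader per drive, in parallel */
			hammer(devs[i]);
			_exit(0);
		}
	while (wait(NULL) > 0)
		;			/* wait for both readers to finish */
	return 0;
}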
> >
> > TIA,
> >
> > marek
>