linux-wireless.vger.kernel.org archive mirror
From: Helmut Schaa <helmut.schaa@googlemail.com>
To: Tim Blechmann <tim@klingt.org>
Cc: linux-wireless@vger.kernel.org
Subject: Re: rt61pci issue
Date: Thu, 11 Nov 2010 14:45:41 +0100
Message-ID: <AANLkTikLoz-7LuhkcuRWGr+he+S3D=K_tmVHcmj3AZ-O@mail.gmail.com>
In-Reply-To: <ibgpgg$a5m$1@dough.gmane.org>

Hi,

On Thu, Nov 11, 2010 at 2:02 PM, Tim Blechmann <tim@klingt.org> wrote:
> hi all,
>
> i am using the rt61pci driver and, besides not having the best wireless
> performance, its network transfer sometimes disturbs the audio playback on my
> machine. sometimes it works fine, but the machine can get into a state where i
> have to reboot to get back to normal. i suppose this is because of the
> rt61pci driver.
>
> in general i found that the irq thread produces a lot of interrupts and does
> quite a lot of work. it takes quite some cpu time (currently i have an uptime
> of 5 hours and the irq thread has used 4 minutes of cpu time) and about 200
> interrupts are issued per second:

You get ~200 interrupts/second from rt61pci while the device is idle? Or
while transmitting/receiving?

> cat /proc/interrupts |grep 16\: && sleep 1 && cat /proc/interrupts | grep 16\:
>  16:          0    5140126          0          0   IO-APIC-fasteoi   uhci_hcd:usb3, ahci, 0000:08:00.0
>  16:          0    5140357          0          0   IO-APIC-fasteoi   uhci_hcd:usb3, ahci, 0000:08:00.0
>
> using perf, i get the following data about where the cpu time is spent:
> # Events: 106  cycles
> #
> # Overhead          Command      Shared Object  Symbol
> # ........  ...............  .................  ......
> #
>    61.17%  irq/16-0000:08:  [rt61pci]          [k] rt61pci_interrupt_thread
>    12.77%  irq/16-0000:08:  [rt61pci]          [k] rt61pci_set_device_state
>     3.72%  irq/16-0000:08:  [kernel.kallsyms]  [k] __ticket_spin_lock
>     3.17%  irq/16-0000:08:  [kernel.kallsyms]  [k] local_bh_disable
>     2.14%  irq/16-0000:08:  [kernel.kallsyms]  [k] schedule
>     1.81%  irq/16-0000:08:  [rt61pci]          [k] rt61pci_kick_tx_queue
>     1.50%  irq/16-0000:08:  [mac80211]         [k] prepare_for_handlers
>     1.38%  irq/16-0000:08:  [rt2x00pci]        [k] rt2x00pci_rxdone
>     1.03%  irq/16-0000:08:  [kernel.kallsyms]  [k] __ticket_spin_unlock
>     0.87%  irq/16-0000:08:  [kernel.kallsyms]  [k] __inet_lookup_established
>     0.87%  irq/16-0000:08:  [kernel.kallsyms]  [k] tcp_event_data_recv
>     0.82%  irq/16-0000:08:  [kernel.kallsyms]  [k] irq_thread
>     0.66%  irq/16-0000:08:  [kernel.kallsyms]  [k] update_curr_rt
>     0.64%  irq/16-0000:08:  [rt2x00lib]        [k] rt2x00lib_rxdone
>     0.64%  irq/16-0000:08:  [kernel.kallsyms]  [k] mod_timer
>     0.51%  irq/16-0000:08:  [kernel.kallsyms]  [k] pick_next_task_rt
>     0.51%  irq/16-0000:08:  [kernel.kallsyms]  [k] cpupri_set
>     0.51%  irq/16-0000:08:  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
>     0.50%  irq/16-0000:08:  [kernel.kallsyms]  [k] hrtick_update
>     0.49%  irq/16-0000:08:  [kernel.kallsyms]  [k] __phys_addr
>     0.49%  irq/16-0000:08:  [kernel.kallsyms]  [k] update_shares
>     0.49%  irq/16-0000:08:  [kernel.kallsyms]  [k] dequeue_task
>     0.49%  irq/16-0000:08:  [kernel.kallsyms]  [k] dequeue_rt_stack
>     0.49%  irq/16-0000:08:  [kernel.kallsyms]  [k] skb_release_data
>     0.44%  irq/16-0000:08:  [kernel.kallsyms]  [k] load_balance
>     0.40%  irq/16-0000:08:  [kernel.kallsyms]  [k] memcpy
>     0.36%  irq/16-0000:08:  [kernel.kallsyms]  [k] map_single
>     0.36%  irq/16-0000:08:  [kernel.kallsyms]  [k] ip_output
>     0.36%  irq/16-0000:08:  [kernel.kallsyms]  [k] tcp_ack
>     0.22%  irq/16-0000:08:  [mac80211]         [k] minstrel_ht_tx_status
>     0.22%  irq/16-0000:08:  [kernel.kallsyms]  [k] tcp_v4_rcv
>     0.00%  irq/16-0000:08:  [kernel.kallsyms]  [k] native_write_msr_safe
>
> not sure if this is really helpful, but this number of interrupts doesn't look
> good to me. if i can be of any further help to track down this issue, i would
> be happy to assist.

Looks to me as if the interrupt thread gets scheduled but doesn't do anything
useful, as otherwise we should also see the rxdone/txdone functions etc. in
the perf output.
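
For reference, the dispatch in the interrupt thread looks roughly like the
following. This is a from-memory paraphrase of rt61pci_interrupt_thread() in
drivers/net/wireless/rt2x00/rt61pci.c, so take the exact field and helper
names with a grain of salt; the point is that if none of the source bits are
set, the thread runs without ever reaching the rxdone/txdone paths:

  /* Rough paraphrase of the interrupt thread dispatch; "reg" and
   * "reg_mcu" are the interrupt source registers latched by the
   * hard irq handler. */
  u32 reg = rt2x00dev->irqvalue[0];     /* INT_SOURCE_CSR */
  u32 reg_mcu = rt2x00dev->irqvalue[1]; /* MCU_INT_SOURCE_CSR */

  if (rt2x00_get_field32(reg, INT_SOURCE_CSR_RXDONE))
          rt2x00pci_rxdone(rt2x00dev);  /* would show up in perf */

  if (rt2x00_get_field32(reg, INT_SOURCE_CSR_TXDONE))
          rt61pci_txdone(rt2x00dev);    /* likewise */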

Would you mind putting a (maybe rate-limited) printk into the interrupt
thread that prints out "reg" and "reg_mcu", so that we can see which
interrupts get triggered? Something along the lines of the untested sketch
below should do:

Helmut

Thread overview: 10+ messages
2010-11-11 13:02 rt61pci issue Tim Blechmann
2010-11-11 13:45 ` Helmut Schaa [this message]
     [not found]   ` <201011111556.31601.tim@klingt.org>
2010-11-11 15:13     ` Helmut Schaa
2010-11-12  6:47       ` Helmut Schaa
2010-11-12  7:03         ` Helmut Schaa
2010-11-25 17:19           ` Tim Blechmann
2010-11-28 19:14             ` Helmut Schaa
2011-02-17 10:07               ` Tim Blechmann
2010-11-12 13:49 ` Helmut Schaa
2010-11-12 16:03   ` Tim Blechmann
