From: Jian-Hong Pan
Date: Mon, 19 Aug 2019 11:26:02 +0800
Subject: Re: [PATCH v2] rtw88: pci: Move a mass of jobs in hw IRQ to soft IRQ
To: Tony Chuang
Cc: Kalle Valo, "David S. Miller", linux-wireless@vger.kernel.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux@endlessm.com
References: <20190816100903.7549-1-jian-hong@endlessm.com>
X-Mailing-List: linux-wireless@vger.kernel.org

Tony Chuang wrote on Fri, Aug 16, 2019 at 6:44 PM:
>
> > From: Jian-Hong Pan
> >
> > There is a mass of jobs between the spin lock and unlock in the
> > hardware IRQ handler, which originally takes much time there. To make
> > the system work more efficiently, this patch moves those jobs to the
> > soft IRQ (bottom half) to reduce the time spent in the hardware IRQ.
> >
> > Signed-off-by: Jian-Hong Pan
> > ---
> > v2:
> > Change the spin_lock_irqsave/unlock_irqrestore to spin_lock/unlock in
> > rtw_pci_interrupt_handler, because the interrupts are already disabled
> > in the hardware interrupt handler.
> >
> >  drivers/net/wireless/realtek/rtw88/pci.c | 33 +++++++++++++++++++-----
> >  1 file changed, 27 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
> > index 00ef229552d5..0740140d7e46 100644
> > --- a/drivers/net/wireless/realtek/rtw88/pci.c
> > +++ b/drivers/net/wireless/realtek/rtw88/pci.c
> > @@ -866,12 +866,28 @@ static irqreturn_t rtw_pci_interrupt_handler(int irq, void *dev)
> >  {
> >         struct rtw_dev *rtwdev = dev;
> >         struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
> > -       u32 irq_status[4];
> >
> >         spin_lock(&rtwpci->irq_lock);
> >         if (!rtwpci->irq_enabled)
> >                 goto out;
> >
> > +       /* disable RTW PCI interrupt to avoid more interrupts before the end of
> > +        * thread function
> > +        */
> > +       rtw_pci_disable_interrupt(rtwdev, rtwpci);
>
> So basically it's to prevent back-to-back interrupts.
>
> Nothing wrong about it, I am just wondering why we don't like
> back-to-back interrupts. Does it mean that interrupts fired between the
> irq_handler and the threadfn would take much more time to consume?

1. That is one of the reasons. Besides, a hardware interrupt has higher
priority than the soft IRQ. If the next hardware interrupt keeps arriving
faster than the soft IRQ (bottom half) can run, the soft IRQ may stay
pending forever.
2. It keeps the data's state consistent until the interrupt is fully
handled.
3.
The process of request_threaded_irq is documented here:
https://www.kernel.org/doc/htmldocs/kernel-api/API-request-threaded-irq.html

> > +out:
> > +       spin_unlock(&rtwpci->irq_lock);
> > +
> > +       return IRQ_WAKE_THREAD;
> > +}
> > +
> > +static irqreturn_t rtw_pci_interrupt_threadfn(int irq, void *dev)
> > +{
> > +       struct rtw_dev *rtwdev = dev;
> > +       struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
> > +       unsigned long flags;
> > +       u32 irq_status[4];
> > +
> >         rtw_pci_irq_recognized(rtwdev, rtwpci, irq_status);
> >
> >         if (irq_status[0] & IMR_MGNTDOK)
> > @@ -891,8 +907,11 @@ static irqreturn_t rtw_pci_interrupt_handler(int irq, void *dev)
> >         if (irq_status[0] & IMR_ROK)
> >                 rtw_pci_rx_isr(rtwdev, rtwpci, RTW_RX_QUEUE_MPDU);
> >
> > -out:
> > -       spin_unlock(&rtwpci->irq_lock);
> > +       /* all of the jobs for this interrupt have been done */
> > +       spin_lock_irqsave(&rtwpci->irq_lock, flags);
>
> I suggest to protect the ISRs, because the next patches will need to
> check whether the TX DMA path is empty. That means I will also take
> rtwpci->irq_lock in the TX path, to check that the skb_queue has no
> pending SKBs which have not been DMAed successfully.

Ah ... okay for the TX path.

> > +       if (rtw_flag_check(rtwdev, RTW_FLAG_RUNNING))
>
> Why check the flag here? Is there any racing or something?
> Otherwise it looks like it breaks the symmetry.

According to the backtrace, I notice rtw_pci_stop is invoked from
rtw_power_off and rtw_core_stop, which clears the running state.
rtw_core_stop is invoked by rtw_enter_ips due to IEEE80211_CONF_IDLE.
If the stop command arrives between the hardware interrupt and the soft
IRQ (bottom half), and no start command comes again, I think the user
wants the Wi-Fi to stop instead of it starting again.

Jian-Hong Pan
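
P.S. For readers following along, here is a minimal sketch of how the two
halves discussed above would be wired up with request_threaded_irq(). It is
illustrative only, not part of the patch: devm_request_threaded_irq(), the
IRQF_SHARED flag, and the rtwdev->dev/pdev->irq plumbing are my assumptions
about the probe path, which this patch does not show.

```c
#include <linux/interrupt.h>
#include <linux/pci.h>

/* Hypothetical probe-path helper (not in the patch):
 * - rtw_pci_interrupt_handler is the hard-IRQ half: it runs with the
 *   device's interrupts masked off quickly and returns IRQ_WAKE_THREAD.
 * - rtw_pci_interrupt_threadfn is the threaded half: it runs in process
 *   context, reads irq_status[] and services the TX/RX rings, then
 *   re-enables the device interrupt before returning IRQ_HANDLED.
 */
static int rtw_pci_request_irq_sketch(struct rtw_dev *rtwdev,
				      struct pci_dev *pdev)
{
	return devm_request_threaded_irq(rtwdev->dev, pdev->irq,
					 rtw_pci_interrupt_handler,
					 rtw_pci_interrupt_threadfn,
					 IRQF_SHARED, KBUILD_MODNAME,
					 rtwdev);
}
```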