From: Geert Uytterhoeven
To: Sergei Shtylyov, "David S. Miller", Jakub Kicinski, Rob Herring
Cc: Andrew Lunn, Oleksij Rempel, Philippe Schenker, Florian Fainelli,
    Heiner Kallweit, Kazuya Mizuguchi, Wolfram Sang,
    netdev@vger.kernel.org, devicetree@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, Geert Uytterhoeven
Subject: [PATCH/RFC 2/5] ravb: Split delay handling in parsing and applying
Date: Fri, 19 Jun 2020 21:15:51 +0200
Message-Id: <20200619191554.24942-3-geert+renesas@glider.be>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200619191554.24942-1-geert+renesas@glider.be>
References: <20200619191554.24942-1-geert+renesas@glider.be>

Currently, full delay handling is done in both the probe and resume
paths. Split it into two parts, so the resume path doesn't have to
redo the parsing over and over again.
Signed-off-by: Geert Uytterhoeven
---
 drivers/net/ethernet/renesas/ravb.h      |  4 +++-
 drivers/net/ethernet/renesas/ravb_main.c | 21 ++++++++++++++++-----
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
index 9f88b5db4f89843a..e5ca12ce93c730a9 100644
--- a/drivers/net/ethernet/renesas/ravb.h
+++ b/drivers/net/ethernet/renesas/ravb.h
@@ -1036,7 +1036,9 @@ struct ravb_private {
 	unsigned no_avb_link:1;
 	unsigned avb_link_active_low:1;
 	unsigned wol_enabled:1;
-	int num_tx_desc;	/* TX descriptors per packet */
+	unsigned rxcidm:1;	/* RX Clock Internal Delay Mode */
+	unsigned txcidm:1;	/* TX Clock Internal Delay Mode */
+	int num_tx_desc;	/* TX descriptors per packet */
 };
 
 static inline u32 ravb_read(struct net_device *ndev, enum ravb_reg reg)
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index a442bcf64b9cd875..f326234d1940f43e 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -1967,23 +1967,32 @@ static const struct soc_device_attribute ravb_delay_mode_quirk_match[] = {
 };
 
 /* Set tx and rx clock internal delay modes */
-static void ravb_set_delay_mode(struct net_device *ndev)
+static void ravb_parse_delay_mode(struct net_device *ndev)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
-	int set = 0;
 
 	if (priv->phy_interface == PHY_INTERFACE_MODE_RGMII_ID ||
 	    priv->phy_interface == PHY_INTERFACE_MODE_RGMII_RXID)
-		set |= APSR_DM_RDM;
+		priv->rxcidm = true;
 
 	if (priv->phy_interface == PHY_INTERFACE_MODE_RGMII_ID ||
 	    priv->phy_interface == PHY_INTERFACE_MODE_RGMII_TXID) {
 		if (!WARN(soc_device_match(ravb_delay_mode_quirk_match),
 			  "phy-mode %s requires TX clock internal delay mode which is not supported by this hardware revision. Please update device tree",
 			  phy_modes(priv->phy_interface)))
-			set |= APSR_DM_TDM;
+			priv->txcidm = true;
 	}
+}
+
+static void ravb_set_delay_mode(struct net_device *ndev)
+{
+	struct ravb_private *priv = netdev_priv(ndev);
+	u32 set = 0;
+
+	if (priv->rxcidm)
+		set |= APSR_DM_RDM;
+	if (priv->txcidm)
+		set |= APSR_DM_TDM;
 
 	ravb_modify(ndev, APSR, APSR_DM, set);
 }
@@ -2116,8 +2125,10 @@ static int ravb_probe(struct platform_device *pdev)
 	/* Request GTI loading */
 	ravb_modify(ndev, GCCR, GCCR_LTI, GCCR_LTI);
 
-	if (priv->chip_id != RCAR_GEN2)
+	if (priv->chip_id != RCAR_GEN2) {
+		ravb_parse_delay_mode(ndev);
 		ravb_set_delay_mode(ndev);
+	}
 
 	/* Allocate descriptor base address table */
 	priv->desc_bat_size = sizeof(struct ravb_desc) * DBAT_ENTRY_NUM;
-- 
2.17.1
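
The point of the split is that the DT-derived phy-mode is parsed into
priv->rxcidm/priv->txcidm exactly once at probe time, while the APSR
register write can then be replayed cheaply on every resume. Below is a
minimal, self-contained userspace sketch of that parse-once/apply-many
pattern: the names mirror the driver, but the APSR_DM_* values, the
simulated register write, and the helper names are illustrative
assumptions, not the driver's actual code.

/*
 * Standalone model of the parse-once/apply-many split in this patch.
 * Hypothetical userspace sketch; not the kernel driver itself.
 */
#include <stdbool.h>
#include <stdio.h>

#define APSR_DM_RDM	0x00002000	/* illustrative bit values */
#define APSR_DM_TDM	0x00004000

enum phy_mode { RGMII, RGMII_ID, RGMII_RXID, RGMII_TXID };

struct priv {
	enum phy_mode phy_interface;
	bool rxcidm;	/* RX Clock Internal Delay Mode */
	bool txcidm;	/* TX Clock Internal Delay Mode */
};

/* Probe time only: derive the delay flags from the phy-mode. */
static void parse_delay_mode(struct priv *p)
{
	p->rxcidm = (p->phy_interface == RGMII_ID ||
		     p->phy_interface == RGMII_RXID);
	p->txcidm = (p->phy_interface == RGMII_ID ||
		     p->phy_interface == RGMII_TXID);
}

/* Probe and resume: turn the cached flags into APSR bits. */
static unsigned int delay_mode_bits(const struct priv *p)
{
	unsigned int set = 0;

	if (p->rxcidm)
		set |= APSR_DM_RDM;
	if (p->txcidm)
		set |= APSR_DM_TDM;
	return set;
}

int main(void)
{
	struct priv p = { .phy_interface = RGMII_TXID };

	parse_delay_mode(&p);	/* once, at "probe" */
	printf("probe:  APSR |= 0x%x\n", delay_mode_bits(&p));
	printf("resume: APSR |= 0x%x\n", delay_mode_bits(&p));
	return 0;
}

Assuming the existing resume path keeps its ravb_set_delay_mode() call,
it now only replays the cached priv->rxcidm/priv->txcidm flags and never
touches the phy-mode parsing again, which is exactly what the commit
message promises.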