From: Stephen Boyd
Subject: Re: [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare, enable}()
To: "dbasehore ." Message-ID: <155190697673.20095.8507456124067904731@swboyd.mtv.corp.google.com> User-Agent: alot/0.8 Date: Wed, 06 Mar 2019 13:16:16 -0800 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190306_131620_650655_514C8C13 X-CRM114-Status: GOOD ( 36.11 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: aisheng.dong@nxp.com, Heiko Stübner , linux-doc@vger.kernel.org, Michael Turquette , Jonathan Corbet , Stephen Boyd , linux-kernel , linux-rockchip@lists.infradead.org, mchehab+samsung@kernel.org, linux-clk@vger.kernel.org, linux-arm-kernel@lists.infradead.org, jbrunet@baylibre.com Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+infradead-linux-arm-kernel=archiver.kernel.org@lists.infradead.org Quoting dbasehore . (2019-03-05 20:11:57) > On Tue, Mar 5, 2019 at 5:35 PM dbasehore . wrote: > > > > On Tue, Mar 5, 2019 at 10:49 AM Stephen Boyd wrote: > > > > > > Quoting Derek Basehore (2019-03-04 20:49:31) > > > > From: Stephen Boyd > > > > > > > > Enabling and preparing clocks can be written quite naturally with > > > > recursion. We start at some point in the tree and recurse up the > > > > tree to find the oldest parent clk that needs to be enabled or > > > > prepared. Then we enable/prepare and return to the caller, going > > > > back to the clk we started at and enabling/preparing along the > > > > way. This also unroll the recursion in unprepare,disable which can > > > > just be done in the order of walking up the clk tree. > > > > > > > > The problem is recursion isn't great for kernel code where we > > > > have a limited stack size. Furthermore, we may be calling this > > > > code inside clk_set_rate() which also has recursion in it, so > > > > we're really not looking good if we encounter a tall clk tree. > > > > > > > > Let's create a stack instead by looping over the parent chain and > > > > collecting clks of interest. Then the enable/prepare becomes as > > > > simple as iterating over that list and calling enable. > > > > > > > > Modified verison of https://lore.kernel.org/patchwork/patch/814369/ > > > > -Fixed kernel warning > > > > -unrolled recursion in unprepare/disable too > > > > > > > > Cc: Jerome Brunet > > > > Signed-off-by: Stephen Boyd > > > > Signed-off-by: Derek Basehore > > > > --- > > > > > > From the original post: > > > > > > "I have some vague fear that this may not work if a clk op is framework > > > reentrant and attemps to call consumer clk APIs from within the clk ops. > > > If the reentrant call tries to add a clk that's already in the list then > > > we'll corrupt the list. Ugh." > > > > > > Do we have this sort of problem here? Or are you certain that we don't > > > have clks that prepare or enable something that is already in the > > > process of being prepared or enabled? > > > > I can look into whether anything's doing this and add a WARN_ON which > > returns an error if we ever hit that case. If this is happening on > > some platform, we'd want to correct that anyways. > > > > Also, if we're ever able to move to another locking scheme (hopefully > soon...), we can make the prepare/enable locks non-reentrant. Then if > anyone recursively calls back into the framework for another > prepare/enable, they will deadlock. I guess that's one way of making > sure no one does that. 
Sure, but we can't regress the system by making prepare and enable
non-reentrant. I was thinking we could write a Coccinelle script that
looks for suspects by matching against a clk_ops structure, picking out
the prepare and enable ops from it, and then looking in those functions
for calls to clk_prepare_enable(), clk_prepare(), or clk_enable(). I
don't know if or how it's possible to descend into the call graph from
the clk_ops function to check for clk API calls, so we probably need to
add the WARN_ON to help us find these issues at runtime too. You can use
this cocci script to start poking at it though:

@ prepare_enabler @
identifier func;
identifier hw;
position p;
expression E;
@@

int func@p(struct clk_hw *hw)
{
        ...
(
        clk_prepare_enable(E);
|
        clk_prepare(E);
|
        clk_enable(E);
)
        ...
}

@ has_preparer depends on prepare_enabler @
identifier ops;
identifier prepare_enabler.func;
position p;
@@

struct clk_ops ops = {
        ...,
        .prepare = func@p,
        ...
};

@ has_enabler depends on prepare_enabler @
identifier ops;
identifier prepare_enabler.func;
position p;
@@

struct clk_ops ops = {
        ...,
        .enable = func@p,
        ...
};

@script:python@
pf << prepare_enabler.p;
@@

coccilib.report.print_report(pf[0], "WARNING something bad called from clk op")

I ran it and found one hit in the davinci clk driver, where the driver
manually turns a clk on to ensure a PLL locks. Hopefully that's a
different clk tree than the current one, so that it's not an issue. We
already have quite a few grep hits for clk_prepare_enable() in
drivers/clk/ too, but I think those are mostly drivers that haven't been
converted to using critical clks, so they have setup code to do that for
them. I suppose it would also be good to dig through all those drivers
and move them to the critical clk flag. For example, here's a patch for
the highbank driver (after the scissors below), which should definitely
be moved to critical clks.
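To make the reentrancy hazard concrete first, the kind of code the
script flags is a clk op that calls back into the consumer clk API,
along these lines (an illustrative example only, not the actual davinci
code; all of the foo_* names are made up):

#include <linux/kernel.h>
#include <linux/clk.h>
#include <linux/clk-provider.h>

struct foo_pll {
        struct clk_hw hw;
        struct clk *aux_clk;    /* consumer handle to some other clk */
};

#define to_foo_pll(_hw) container_of(_hw, struct foo_pll, hw)

static int foo_pll_prepare(struct clk_hw *hw)
{
        struct foo_pll *pll = to_foo_pll(hw);

        /*
         * Re-enters the clk framework from inside a clk op, e.g. to feed
         * the PLL a reference it needs in order to lock. This is what the
         * Coccinelle script matches and what a runtime WARN_ON in the
         * prepare path would catch.
         */
        return clk_prepare_enable(pll->aux_clk);
}

static const struct clk_ops foo_pll_ops = {
        .prepare = foo_pll_prepare,
};

If that aux clk is an ancestor of the PLL (or otherwise already in the
middle of being prepared), we hit exactly the list corruption case from
the original post.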
-----8<-----
diff --git a/drivers/clk/clk-highbank.c b/drivers/clk/clk-highbank.c
index 8e4581004695..bd328b0eb243 100644
--- a/drivers/clk/clk-highbank.c
+++ b/drivers/clk/clk-highbank.c
@@ -17,7 +17,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -272,7 +271,7 @@ static const struct clk_ops periclk_ops = {
        .set_rate = clk_periclk_set_rate,
 };
 
-static __init struct clk *hb_clk_init(struct device_node *node, const struct clk_ops *ops)
+static void __init hb_clk_init(struct device_node *node, const struct clk_ops *ops, unsigned long clkflags)
 {
        u32 reg;
        struct hb_clk *hb_clk;
@@ -284,11 +283,11 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 
        rc = of_property_read_u32(node, "reg", &reg);
        if (WARN_ON(rc))
-               return NULL;
+               return;
 
        hb_clk = kzalloc(sizeof(*hb_clk), GFP_KERNEL);
        if (WARN_ON(!hb_clk))
-               return NULL;
+               return;
 
        /* Map system registers */
        srnp = of_find_compatible_node(NULL, NULL, "calxeda,hb-sregs");
@@ -301,7 +300,7 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 
        init.name = clk_name;
        init.ops = ops;
-       init.flags = 0;
+       init.flags = clkflags;
        parent_name = of_clk_get_parent_name(node, 0);
        init.parent_names = &parent_name;
        init.num_parents = 1;
@@ -311,33 +310,31 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
        rc = clk_hw_register(NULL, &hb_clk->hw);
        if (WARN_ON(rc)) {
                kfree(hb_clk);
-               return NULL;
+               return;
        }
-       rc = of_clk_add_hw_provider(node, of_clk_hw_simple_get, &hb_clk->hw);
-       return hb_clk->hw.clk;
+       of_clk_add_hw_provider(node, of_clk_hw_simple_get, &hb_clk->hw);
 }
 
 static void __init hb_pll_init(struct device_node *node)
 {
-       hb_clk_init(node, &clk_pll_ops);
+       hb_clk_init(node, &clk_pll_ops, 0);
 }
 CLK_OF_DECLARE(hb_pll, "calxeda,hb-pll-clock", hb_pll_init);
 
 static void __init hb_a9periph_init(struct device_node *node)
 {
-       hb_clk_init(node, &a9periphclk_ops);
+       hb_clk_init(node, &a9periphclk_ops, 0);
 }
 CLK_OF_DECLARE(hb_a9periph, "calxeda,hb-a9periph-clock", hb_a9periph_init);
 
 static void __init hb_a9bus_init(struct device_node *node)
 {
-       struct clk *clk = hb_clk_init(node, &a9bclk_ops);
-       clk_prepare_enable(clk);
+       hb_clk_init(node, &a9bclk_ops, CLK_IS_CRITICAL);
 }
 CLK_OF_DECLARE(hb_a9bus, "calxeda,hb-a9bus-clock", hb_a9bus_init);
 
 static void __init hb_emmc_init(struct device_node *node)
 {
-       hb_clk_init(node, &periclk_ops);
+       hb_clk_init(node, &periclk_ops, 0);
 }
 CLK_OF_DECLARE(hb_emmc, "calxeda,hb-emmc-clock", hb_emmc_init);