From: Samuel Holland <samuel@sholland.org>
To: Maxime Ripard <maxime@cerno.tech>
Cc: Chen-Yu Tsai <wens@csie.org>,
	Jernej Skrabec <jernej.skrabec@gmail.com>,
	Rob Herring <robh+dt@kernel.org>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-clk@vger.kernel.org,
	linux-sunxi@lists.linux.dev,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/7] clk: sunxi-ng: Add a RTC CCU driver
Date: Tue, 28 Sep 2021 22:54:26 -0500
Message-ID: <bc338f11-9867-2394-ceaa-99314ff67660@sholland.org>
In-Reply-To: <20210928090625.rq3atiaejaq5kcbx@gilmour>

Hi Maxime,

Thanks for your reply.

On 9/28/21 4:06 AM, Maxime Ripard wrote:
> On Tue, Sep 28, 2021 at 02:46:39AM -0500, Samuel Holland wrote:
>> On 9/9/21 3:45 AM, Maxime Ripard wrote:
>>> On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
>>>> On 9/3/21 9:50 AM, Maxime Ripard wrote:
>>>>> And since we can register all those clocks at device probe time, we
>>>>> don't really need to split the driver in two (and especially in two
>>>>> different places). The only obstacle to this after your previous
>>>>> series is that we don't have of_sunxi_ccu_probe /
>>>>> devm_sunxi_ccu_probe functions public, but that can easily be fixed
>>>>> by moving their definition to include/linux/clk/sunxi-ng.h
>>>>
>>>> Where are you thinking the clock definitions would go? We don't export
>>>> any of those structures (ccu_mux, ccu_common) or macros
>>>> (SUNXI_CCU_GATE_DATA) in a public header either.
>>>
>>> Ah, right...
>>>
>>>> Would you want to export those? That seems like a lot of churn. Or
>>>> would we put the CCU descriptions in drivers/clk/sunxi-ng and export a
>>>> function that the RTC driver can call? (Or some other idea?)
>>>
>>> I guess we could export it.
>>> There are some fairly big headers in
>>> include/linux/clk already (tegra and ti), it's not uAPI and we do have
>>> reasons to do so, so I guess it's fine.
>>>
>>> I'd like to avoid having two drivers for the same device if possible,
>>> especially in two separate places. This creates some confusion since
>>> the general expectation is that there's only one driver per device.
>>> There's also the fact that this could lead to subtle bugs since the
>>> probe order is the link order (or module loading).
>>
>> I don't think there can be two "struct device"s for a single OF node.
>
> That's not what I meant, there's indeed a single of_node for a single
> struct device. If we dig a bit into the core framework, the most likely
> scenario is that we would register both the RTC and clock driver at
> module_init, and with the device already created with its of_node set
> during the initial DT parsing.
>
> We register our platform driver using module_platform_driver, which
> expands to calling driver_register() at module_init(), setting the
> driver bus to the platform_bus in the process (in
> __platform_driver_register()).
>
> After some sanity checks, driver_register() calls bus_add_driver(),
> which will call driver_attach() if drivers_autoprobe is set (which is
> the default, set in bus_register()).
>
> driver_attach() will, for each device on the platform bus, call
> __driver_attach(). If there's a match between that device and our driver
> (which is evaluated by platform_match() in our case), we'll call our
> driver probe with that device through driver_probe_device(),
> __driver_probe_device() and finally really_probe().
>
> However, at no point in time is there any check about whether that
> device has already been bound to a driver, nor does it create a new
> device for each driver.
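[The attach flow quoted above, together with the dev->driver check that
comes up in the reply, can be condensed into a simplified userspace
model. This is a sketch, not the real driver core: none of the model_*
names exist in the kernel, and only the shape of the logic follows
driver_attach() -> __driver_attach() -> __driver_probe_device().]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

struct model_driver {
	const char *compatible;
};

struct model_device {
	const char *compatible;
	struct model_driver *driver;	/* non-NULL once a driver is bound */
};

/* Stands in for __driver_probe_device(): a device that already has a
 * bound driver is rejected with -EBUSY instead of being probed again. */
static int model_probe_device(struct model_driver *drv,
			      struct model_device *dev)
{
	if (dev->driver)
		return -EBUSY;
	dev->driver = drv;	/* really_probe() binds on success */
	return 0;
}

/* Stands in for driver_attach(): walk the bus, match by compatible
 * string (as platform_match() would), and try to probe each match. */
static int model_driver_attach(struct model_driver *drv,
			       struct model_device *devs, size_t ndevs)
{
	int ret = -ENODEV;
	size_t i;

	for (i = 0; i < ndevs; i++) {
		if (strcmp(drv->compatible, devs[i].compatible))
			continue;
		ret = model_probe_device(drv, &devs[i]);
	}
	return ret;
}
```

[With two drivers matching the same device, whichever registers first
binds it; the second attach attempt returns -EBUSY from the model's
probe path rather than running a second probe.]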
I would expect this to hit the:

	if (dev->driver)
		return -EBUSY;

in __driver_probe_device(), or fail the "if (!dev->driver)" check in
__driver_attach() for the async case, once the first driver is bound.

> So this means that, if you have two drivers that match the
> same device (like our clock and RTC drivers), you'll have both probes
> being called with the same device, and the probe order will be defined
> by the link order. Worse, they would share the same driver_data, with
> each driver not being aware of the other. This is incredibly fragile,
> and hard to notice since it goes against the usual expectations.
>
>> So if the CCU part is in drivers/clk/sunxi-ng, the CCU "probe"
>> function would have to be called from the RTC driver.
>
> No, it would be called by the core directly if there's a compatible to
> match.
>
>> Since there has to be cooperation anyway, I don't think there would be
>> any ordering problems.
>
> My initial point was that, with a direct function call, it's both
> deterministic and obvious.

I believe I did what you are suggesting for v2. From patch 7:

--- a/drivers/rtc/rtc-sun6i.c
+++ b/drivers/rtc/rtc-sun6i.c
@@ -683,6 +684,10 @@ static int sun6i_rtc_probe(struct platform_device *pdev)
 		chip->base = devm_platform_ioremap_resource(pdev, 0);
 		if (IS_ERR(chip->base))
 			return PTR_ERR(chip->base);
+
+		ret = sun6i_rtc_ccu_probe(&pdev->dev, chip->base);
+		if (ret)
+			return ret;
 	}
 
 	platform_set_drvdata(pdev, chip);

>>> And synchronizing access to registers between those two drivers will
>>> be hard, while we could just share the same spin lock between the RTC
>>> and clock drivers if they are instantiated in the same place.
>>
>> While the RTC driver currently shares a spinlock between the clock part
>> and the RTC part, there isn't actually any overlap in register usage
>> between the two. So there doesn't need to be any synchronization.
>
> I know, but this was more of a social problem than a technical one.
> Each
> contributor and reviewer in the future will have to know or remember
> that it's there, and make sure that it's still the case after any change
> they make or review.
>
> This is again a fairly fragile assumption.

Yeah, I agree that having a lock that is only sometimes safe to use with
certain registers is quite fragile.

Would splitting the spinlock in rtc-sun6i.c into "losc_lock" (for the
clock provider) and "alarm_lock" (for the RTC driver) make this
distinction clear enough?

Eventually, I want to split up the struct between the clock provider and
RTC driver so it's clear which members belong to whom, and there's no
ugly global pointer use. Maybe I should do this first?

Regards,
Samuel