From: Miquel Raynal <miquel.raynal@bootlin.com>
To: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Cc: Michael Walle <michael@walle.cc>, Jonathan Corbet <corbet@lwn.net>, Rob Herring <robh+dt@kernel.org>, Frank Rowand <frowand.list@gmail.com>, Sascha Hauer <s.hauer@pengutronix.de>, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, devicetree@vger.kernel.org, Dan Carpenter <error27@gmail.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subject: Re: [PATCH v5 00/21] nvmem: core: introduce NVMEM layouts
Date: Mon, 6 Feb 2023 23:47:13 +0100
Message-ID: <20230206234713.7cf2f722@xps-13>
In-Reply-To: <81a5c400-e671-fab3-732a-d615fa4242b3@linaro.org>

Hi Srinivas,

+ Greg

srinivas.kandagatla@linaro.org wrote on Mon, 6 Feb 2023 20:31:46 +0000:

> Hi Michael/Miquel,
>
> I had to revert the layout patches due to comments from Greg about
> making the layouts built-in rather than modules; he is not ready to
> merge them as they are.

This is the second time I have seen something similar happen:
- a maintainer or maintainers group does the review/apply job and sends
  the result to an "upper" maintainer;
- the upper maintainer refuses it at this stage for a "questionable"
  reason.

I am not saying the review is incorrect or anything. I am just wondering
whether, for the second time, I am facing a fair situation, either
myself as a contributor or the intermediate maintainer, who is being
somewhat bypassed. What I mean is: the review process has happened.
Nothing was hidden; this series has been living on the mailing lists for
more than two years. The contribution process that has been in place for
many years asks contributors to send new versions when the review leads
to comments, which we did. Once a series has been "accepted", it is
expected to be pulled during the next merge window. If there is
something else to fix, there are 6 to 8 long weeks during which
contributors' fixes are welcome.
Why not give us the opportunity to use them? Why, for the second time,
am I facing an extremely urgent situation where I have to cancel all my
commitments just because a late comment has been made on a series which
has been standing still for months?

What I would expect instead is a discussion on the cover letter of the
series, where Michael explained why he chose not to use modules in the
first place. If it appears that for some reason it is best to enable
NVMEM layouts as modules, we will send a timely series on top of the
current one to enable that particular case.

> >> NVMEM layouts as modules?
> >> While possible in principle, it doesn't make any sense because the NVMEM
> >> core can't be compiled as a module. The layouts need to be available at
> >> probe time. (That is also the reason why they get registered with
> >> subsys_initcall().) So if the NVMEM core were a module, the layouts
> >> could be modules, too.

I know Michael is busy after FOSDEM and so am I, so, Greg, would you
accept to take the PR as it is, participate in the discussion and wait
for an update?

Thanks,
Miquèl

> His original comment:
>
> "Why are we going back to "custom-built" kernel configurations? Why can
> this not be a loadable module? Distros are now forced to enable these
> layouts and all kernels will have this dead code in the tree without any
> choice in the matter?
>
> That's not ok, these need to be auto-loaded based on the hardware
> representation like any other kernel module. You can't force them to be
> always present, sorry.
> " > > I have applied most of the patches except > > nvmem: core: introduce NVMEM layouts > nvmem: core: add per-cell post processing > nvmem: core: allow to modify a cell before adding it > nvmem: imx-ocotp: replace global post processing with layouts > nvmem: cell: drop global cell_post_process > nvmem: core: provide own priv pointer in post process callback > nvmem: layouts: add sl28vpd layout > MAINTAINERS: add myself as sl28vpd nvmem layout driver > nvmem: layouts: Add ONIE tlv layout driver > MAINTAINERS: Add myself as ONIE tlv NVMEM layout maintainer > nvmem: core: return -ENOENT if nvmem cell is not found > nvmem: layouts: Fix spelling mistake "platforn" -> "platform" > dt-bindings: nvmem: Fix spelling mistake "platforn" -> "platform" > nvmem: core: fix nvmem_layout_get_match_data() > > Please rebase your patches on top of nvmem-next once layouts are converted to loadable modules. > > thanks, > srini > > > > On 03/01/2023 15:39, Miquel Raynal wrote: > > Hi Srinivas, > > > > michael@walle.cc wrote on Tue, 6 Dec 2022 21:07:19 +0100: > > > >> This is now the third attempt to fetch the MAC addresses from the VPD > >> for the Kontron sl28 boards. Previous discussions can be found here: > >> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/ > >> > >> > >> NVMEM cells are typically added by board code or by the devicetree. But > >> as the cells get more complex, there is (valid) push back from the > >> devicetree maintainers to not put that handling in the devicetree. > >> > >> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and > >> can add cells during runtime. That way it is possible to add more complex > >> cells than it is possible right now with the offset/length/bits > >> description in the device tree. For example, you can have post processing > >> for individual cells (think of endian swapping, or ethernet offset > >> handling). 
> >>
> >> The imx-ocotp driver is the only user of the global post processing
> >> hook; convert it to NVMEM layouts and drop the global post processing
> >> hook.
> >>
> >> For now, the layouts are selected by the device tree. But the idea is
> >> that board files or other drivers could also set a layout, although no
> >> code for that exists yet.
> >>
> >> Thanks to Miquel, the device tree bindings are already approved and merged.
> >>
> >> NVMEM layouts as modules?
> >> While possible in principle, it doesn't make any sense because the NVMEM
> >> core can't be compiled as a module. The layouts need to be available at
> >> probe time. (That is also the reason why they get registered with
> >> subsys_initcall().) So if the NVMEM core were a module, the layouts
> >> could be modules, too.
> >
> > I believe this series still applies even though -rc1 (and -rc2) are out
> > now. May we know if you consider merging it anytime soon, or if there
> > are still discrepancies in the implementation you would like to
> > discuss? Otherwise I would really like to see this sitting in -next a
> > few weeks before being sent out to Linus, just in case.
> >
> > Thanks,
> > Miquèl