From: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
To: Heiner Kallweit <hkallweit1@gmail.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/3] nvmem: core: remove member users from struct nvmem_device
Date: Thu, 8 Jun 2017 07:34:17 +0100 [thread overview]
Message-ID: <efaebfd5-2d9a-8613-f434-d61fa34147e4@linaro.org> (raw)
In-Reply-To: <8db3067b-66ca-e8f4-d974-256ede5eeef5@gmail.com>
On 07/06/17 22:51, Heiner Kallweit wrote:
> Am 07.06.2017 um 17:30 schrieb Srinivas Kandagatla:
>>
>>
>> On 04/06/17 12:01, Heiner Kallweit wrote:
>>> Member users is used only to check whether we're allowed to remove
>>> the module. So in case of built-in it's not used at all and in case
>>
>> nvmem providers don't have to be independent drivers; a provider can be part of another driver which dynamically registers and unregisters nvmem providers. For example the at24 and at25 drivers.
>>
>> This patch will break such cases!!
>>
> Thanks for the quick review.
> I don't think this patch breaks e.g. at24 / at25. Let me try to explain:
>
> at24 / at25 set themselves as owner in struct nvmem_device and nvmem_unregister
> is called from at24/25_remove only. These remove callbacks are called only if
> all references to the respective module have been released.
>
> In current kernel code I don't see any nvmem use broken by the proposed patch.
> However in general you're right, there may be future use cases where
> nvmem_unregister isn't called only from a remove callback.
Yes, the patch would not break the existing code, but as said it would
break a feature which was considered while writing the code.
>
> If the refcount isn't zero when calling nvmem_unregister then there's a bigger
> problem; I don't think there's any normal use case where this can happen.
Yes, I understand the chances of hitting this error path are slim, but it
would crash the system if it does, so this safety check is in place.
> Instead of just returning -EBUSY I think a WARN() would be appropriate.
> Currently no caller of nvmem_unregister checks the return code anyway.
> My opinion would be that the refcount here is more a debug feature.
>
>
> Whilst we're talking about nvmem_unregister:
> I think the device_del() at the end should be a device_unregister().
> Else we miss put_device as second part of destroying a device.
These issues have already been addressed in
https://patchwork.kernel.org/patch/9685559/
https://patchwork.kernel.org/patch/9685561/
https://patchwork.kernel.org/patch/9729235/
--srini
>
> Rgds, Heiner
>
>
>>
>>
>>> that owner is a module we have the module refcount for the same
>>> purpose already. Whenever users is incremented the owner's refcount
>>> is incremented too. Therefore users isn't needed.
>>>
>>> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
>>> ---
>>> drivers/nvmem/core.c | 16 ----------------
>>> 1 file changed, 16 deletions(-)
>>>
>>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>>> index 8c830a80..4e07f3f8 100644
>>> --- a/drivers/nvmem/core.c
>>> +++ b/drivers/nvmem/core.c
>>> @@ -33,7 +33,6 @@ struct nvmem_device {
>>> int word_size;
>>> int ncells;
>>> int id;
>>> - int users;
>>> size_t size;
>>> bool read_only;
>>> int flags;
>>> @@ -517,13 +516,6 @@ EXPORT_SYMBOL_GPL(nvmem_register);
>>> */
>>> int nvmem_unregister(struct nvmem_device *nvmem)
>>> {
>>> - mutex_lock(&nvmem_mutex);
>>> - if (nvmem->users) {
>>> - mutex_unlock(&nvmem_mutex);
>>> - return -EBUSY;
>>> - }
>>> - mutex_unlock(&nvmem_mutex);
>>> -
>>> if (nvmem->flags & FLAG_COMPAT)
>>> device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
>>>
>>> @@ -562,7 +554,6 @@ static struct nvmem_device *__nvmem_device_get(struct device_node *np,
>>> }
>>> }
>>>
>>> - nvmem->users++;
>>> mutex_unlock(&nvmem_mutex);
>>>
>>> if (!try_module_get(nvmem->owner)) {
>>> @@ -570,10 +561,6 @@ static struct nvmem_device *__nvmem_device_get(struct device_node *np,
>>> "could not increase module refcount for cell %s\n",
>>> nvmem->name);
>>>
>>> - mutex_lock(&nvmem_mutex);
>>> - nvmem->users--;
>>> - mutex_unlock(&nvmem_mutex);
>>> -
>>> return ERR_PTR(-EINVAL);
>>> }
>>>
>>> @@ -583,9 +570,6 @@ static struct nvmem_device *__nvmem_device_get(struct device_node *np,
>>> static void __nvmem_device_put(struct nvmem_device *nvmem)
>>> {
>>> module_put(nvmem->owner);
>>> - mutex_lock(&nvmem_mutex);
>>> - nvmem->users--;
>>> - mutex_unlock(&nvmem_mutex);
>>> }
>>>
>>> static int nvmem_match(struct device *dev, void *data)
>>>
>>
>
Thread overview: 8+ messages
2017-06-04 10:48 [PATCH 0/3] nvmem: core: series with smaller refactorings Heiner Kallweit
2017-06-04 11:01 ` [PATCH 1/3] nvmem: core: remove member users from struct nvmem_device Heiner Kallweit
2017-06-07 15:30 ` Srinivas Kandagatla
2017-06-07 21:51 ` Heiner Kallweit
2017-06-08 6:34 ` Srinivas Kandagatla [this message]
2017-06-04 11:01 ` [PATCH 2/3] nvmem: core: add locking to nvmem_find_cell Heiner Kallweit
2017-06-07 15:30 ` Srinivas Kandagatla
2017-06-04 11:02 ` [PATCH 3/3] nvmem: core: remove nvmem_mutex Heiner Kallweit