From: Ritesh Harjani <riteshh@codeaurora.org>
To: Chao Yu <chao@kernel.org>,
	Sahitya Tummala <stummala@codeaurora.org>,
	Chao Yu <yuchao0@huawei.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH] f2fs: do not use mutex lock in atomic context
Date: Fri, 15 Feb 2019 09:58:15 +0530
Message-ID: <77c63bf6-069c-704c-220a-b50d997d2463@codeaurora.org>
In-Reply-To: <5650af07-c55d-fcb4-ca98-eca45248392d@kernel.org>


On 2/14/2019 9:40 PM, Chao Yu wrote:
> On 2019-2-14 15:46, Sahitya Tummala wrote:
>> On Wed, Feb 13, 2019 at 11:25:31AM +0800, Chao Yu wrote:
>>> On 2019/2/4 16:06, Sahitya Tummala wrote:
>>>> Fix the warning below, which is triggered by taking a mutex lock in atomic context.
>>>>
>>>> BUG: sleeping function called from invalid context at kernel/locking/mutex.c:98
>>>> in_atomic(): 1, irqs_disabled(): 0, pid: 585, name: sh
>>>> Preemption disabled at: __radix_tree_preload+0x28/0x130
>>>> Call trace:
>>>>   dump_backtrace+0x0/0x2b4
>>>>   show_stack+0x20/0x28
>>>>   dump_stack+0xa8/0xe0
>>>>   ___might_sleep+0x144/0x194
>>>>   __might_sleep+0x58/0x8c
>>>>   mutex_lock+0x2c/0x48
>>>>   f2fs_trace_pid+0x88/0x14c
>>>>   f2fs_set_node_page_dirty+0xd0/0x184
>>>>
>>>> Do not use f2fs_radix_tree_insert() to avoid doing cond_resched() with
>>>> spin_lock() acquired.
>>>>
>>>> Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
>>>> ---
>>>>   fs/f2fs/trace.c | 20 +++++++++++++-------
>>>>   1 file changed, 13 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/fs/f2fs/trace.c b/fs/f2fs/trace.c
>>>> index ce2a5eb..d0ab533 100644
>>>> --- a/fs/f2fs/trace.c
>>>> +++ b/fs/f2fs/trace.c
>>>> @@ -14,7 +14,7 @@
>>>>   #include "trace.h"
>>>>   
>>>>   static RADIX_TREE(pids, GFP_ATOMIC);
>>>> -static struct mutex pids_lock;
>>>> +static spinlock_t pids_lock;
>>>>   static struct last_io_info last_io;
>>>>   
>>>>   static inline void __print_last_io(void)
>>>> @@ -58,23 +58,29 @@ void f2fs_trace_pid(struct page *page)
>>>>   
>>>>   	set_page_private(page, (unsigned long)pid);
>>>>   
>>>> +retry:
>>>>   	if (radix_tree_preload(GFP_NOFS))
>>>>   		return;
>>>>   
>>>> -	mutex_lock(&pids_lock);
>>>> +	spin_lock(&pids_lock);
>>>>   	p = radix_tree_lookup(&pids, pid);
>>>>   	if (p == current)
>>>>   		goto out;
>>>>   	if (p)
>>>>   		radix_tree_delete(&pids, pid);
>>>>   
>>>> -	f2fs_radix_tree_insert(&pids, pid, current);

Do you know why we need the retry logic here? We have already called
radix_tree_delete() for this pid key, which should ensure the slot is
empty, no?
Then why was the original code (f2fs_radix_tree_insert) retrying? For
what condition was a retry needed?
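
For reference, the helper that this patch replaces looks roughly like
this (paraphrased from fs/f2fs/f2fs.h, so treat it as a sketch rather
than an exact quote):

	static inline void f2fs_radix_tree_insert(struct radix_tree_root *root,
					unsigned long index, void *item)
	{
		/* keep retrying until the insert succeeds, sleeping in between */
		while (radix_tree_insert(root, index, item))
			cond_resched();
	}

That cond_resched() in the loop is what can no longer run once pids_lock
becomes a spinlock, which, if I read the patch correctly, is why the
retry is now open-coded with both the lock and the preload dropped first.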

Regards
Ritesh


>>>> +	if (radix_tree_insert(&pids, pid, current)) {
>>>> +		spin_unlock(&pids_lock);
>>>> +		radix_tree_preload_end();
>>>> +		cond_resched();
>>>> +		goto retry;
>>>> +	}
>>>>   
>>>>   	trace_printk("%3x:%3x %4x %-16s\n",
>>>>   			MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
>>>>   			pid, current->comm);
>>> Hi Sahitya,
>>>
>>> Can trace_printk() sleep? For safety, how about moving it outside the spinlock?
>>>
>> Hi Chao,
>>
>> Yes, trace_printk() is safe to use in atomic context (unlike printk).
> Hi Sahitya,
>
> Thanks for your confirmation. :)
>
> Reviewed-by: Chao Yu <yuchao0@huawei.com>
>
> Thanks,
>
>> Thanks,
>> Sahitya.
>>
>>> Thanks,
>>>
>>>>   out:
>>>> -	mutex_unlock(&pids_lock);
>>>> +	spin_unlock(&pids_lock);
>>>>   	radix_tree_preload_end();
>>>>   }
>>>>   
>>>> @@ -119,7 +125,7 @@ void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
>>>>   
>>>>   void f2fs_build_trace_ios(void)
>>>>   {
>>>> -	mutex_init(&pids_lock);
>>>> +	spin_lock_init(&pids_lock);
>>>>   }
>>>>   
>>>>   #define PIDVEC_SIZE	128
>>>> @@ -147,7 +153,7 @@ void f2fs_destroy_trace_ios(void)
>>>>   	pid_t next_pid = 0;
>>>>   	unsigned int found;
>>>>   
>>>> -	mutex_lock(&pids_lock);
>>>> +	spin_lock(&pids_lock);
>>>>   	while ((found = gang_lookup_pids(pid, next_pid, PIDVEC_SIZE))) {
>>>>   		unsigned idx;
>>>>   
>>>> @@ -155,5 +161,5 @@ void f2fs_destroy_trace_ios(void)
>>>>   		for (idx = 0; idx < found; idx++)
>>>>   			radix_tree_delete(&pids, pid[idx]);
>>>>   	}
>>>> -	mutex_unlock(&pids_lock);
>>>> +	spin_unlock(&pids_lock);
>>>>   }
>>>>
>


Thread overview: 7+ messages
2019-02-04  8:06 [PATCH] f2fs: do not use mutex lock in atomic context Sahitya Tummala
2019-02-13  3:25 ` Chao Yu
2019-02-14  7:46   ` Sahitya Tummala
2019-02-14 16:10     ` [f2fs-dev] " Chao Yu
2019-02-15  4:28       ` Ritesh Harjani [this message]
2019-02-15  9:10         ` Chao Yu
2019-02-18  2:04           ` Ritesh Harjani
