From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: [PATCH v2] binder: Don't use mmput() from shrinker function.
From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Greg Kroah-Hartman, Arve Hjonnevag, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner
Cc: Michal Hocko, syzbot, acme@kernel.org, alexander.shishkin@linux.intel.com,
    jolsa@redhat.com, linux-kernel@vger.kernel.org, mark.rutland@arm.com,
    mingo@redhat.com, namhyung@kernel.org, peterz@infradead.org,
    syzkaller-bugs@googlegroups.com, "open list:ANDROID DRIVERS", linux-mm
Date: Fri, 17 Jul 2020 00:12:15 +0900
Message-ID: <4ba9adb2-43f5-2de0-22de-f6075c1fab50@i-love.sakura.ne.jp>
In-Reply-To: <20200716135445.GN31089@dhcp22.suse.cz>
References: <0000000000001fbbb605aa805c9b@google.com>
 <5ce3ee90-333e-638d-ac8c-cd6d7ab7aa3b@I-love.SAKURA.ne.jp>
 <20200716083506.GA20915@dhcp22.suse.cz>
 <36db7016-98d6-2c6b-110b-b2481fd480ac@i-love.sakura.ne.jp>
 <20200716135445.GN31089@dhcp22.suse.cz>

syzbot is reporting that calling mmput() from a shrinker function has a
risk of deadlock [1], because delayed_uprobe_add() from update_ref_ctr()
calls kzalloc(GFP_KERNEL) with delayed_uprobe_lock held, and
uprobe_clear_state() from __mmput() also takes delayed_uprobe_lock. If
that kzalloc() enters direct reclaim and reclaim invokes this shrinker,
an mmput() that drops the last mm_users reference runs __mmput() and
tries to take delayed_uprobe_lock again, deadlocking.

Commit a1b2289cef92ef0e ("android: binder: drop lru lock in isolate
callback") replaced mmput() with mmput_async() in order to avoid
sleeping with a spinlock held.
But this patch replaces mmput() with mmput_async() in order not to start
__mmput() from shrinker context.

[1] https://syzkaller.appspot.com/bug?id=bc9e7303f537c41b2b0cc2dfcea3fc42964c2d45

Reported-by: syzbot
Reported-by: syzbot
Signed-off-by: Tetsuo Handa
---
 drivers/android/binder_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 42c672f1584e..cbe6aa77d50d 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -947,7 +947,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
                 trace_binder_unmap_user_end(alloc, index);
         }
         mmap_read_unlock(mm);
-        mmput(mm);
+        mmput_async(mm);
 
         trace_binder_unmap_kernel_start(alloc, index);
 
-- 
2.18.4
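
For readers not familiar with the two helpers: the sketch below is a
paraphrased, abbreviated version of mmput() and mmput_async() from
kernel/fork.c around v5.8 (not part of this patch, and not a verbatim
copy of the source). mmput() tears the mm down synchronously in the
caller's context when the last mm_users reference is dropped, while
mmput_async() defers that teardown to a workqueue, so
uprobe_clear_state() and its delayed_uprobe_lock acquisition never run
from the reclaim/shrinker path.

/* Paraphrased and abbreviated from kernel/fork.c (v5.8-era). */
void mmput(struct mm_struct *mm)
{
        might_sleep();

        /* Last user: run __mmput() right here, in the caller's context. */
        if (atomic_dec_and_test(&mm->mm_users))
                __mmput(mm); /* ... -> uprobe_clear_state() -> delayed_uprobe_lock */
}

static void mmput_async_fn(struct work_struct *work)
{
        struct mm_struct *mm = container_of(work, struct mm_struct,
                                            async_put_work);

        __mmput(mm);
}

void mmput_async(struct mm_struct *mm)
{
        /* Last user: defer the teardown to a workqueue instead. */
        if (atomic_dec_and_test(&mm->mm_users)) {
                INIT_WORK(&mm->async_put_work, mmput_async_fn);
                schedule_work(&mm->async_put_work);
        }
}

With mmput_async(), binder_alloc_free_page() merely drops its reference;
if that happens to be the last one, __mmput() runs later in workqueue
context, outside whatever locks the task that triggered reclaim is
holding.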