From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933136AbdC3K0U (ORCPT );
	Thu, 30 Mar 2017 06:26:20 -0400
Received: from mail-db5eur01on0094.outbound.protection.outlook.com
	([104.47.2.94]:8983 "EHLO EUR01-DB5-obe.outbound.protection.outlook.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S932296AbdC3K0Q (ORCPT );
	Thu, 30 Mar 2017 06:26:16 -0400
Authentication-Results: linux-foundation.org; dkim=none (message not signed)
	header.d=none; linux-foundation.org; dmarc=none action=none
	header.from=virtuozzo.com;
From: Andrey Ryabinin
To: akpm@linux-foundation.org
CC: penguin-kernel@I-love.SAKURA.ne.jp, linux-kernel@vger.kernel.org,
	Andrey Ryabinin, mhocko@kernel.org, linux-mm@kvack.org,
	hpa@zytor.com, chris@chris-wilson.co.uk, hch@lst.de, mingo@elte.hu,
	jszhang@marvell.com, joelaf@google.com, joaodias@google.com,
	willy@infradead.org, tglx@linutronix.de, thellstrom@vmware.com,
	stable@vger.kernel.org
Subject: [PATCH 1/4] mm/vmalloc: allow to call vfree() in atomic context
Date: Thu, 30 Mar 2017 13:27:16 +0300
Message-ID: <20170330102719.13119-1-aryabinin@virtuozzo.com>
X-Mailer: git-send-email 2.10.2
MIME-Version: 1.0
Content-Type: text/plain
X-Originating-IP: [195.214.232.6]
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit 5803ed292e63 ("mm: mark all calls into the vmalloc subsystem as
potentially sleeping") added might_sleep() to remove_vm_area() from
vfree(), and commit 763b218ddfaf ("mm: add preempt points into
__purge_vmap_area_lazy()") actually made vfree() potentially sleeping.
This broke the vmwgfx driver, which calls vfree() under spin_lock().

 BUG: sleeping function called from invalid context at mm/vmalloc.c:1480
 in_atomic(): 1, irqs_disabled(): 0, pid: 341, name: plymouthd
 2 locks held by plymouthd/341:
  #0:  (drm_global_mutex){+.+.+.}, at: [] drm_release+0x3b/0x3b0 [drm]
  #1:  (&(&tfile->lock)->rlock){+.+...}, at: [] ttm_object_file_release+0x28/0x90 [ttm]
 Call Trace:
  dump_stack+0x86/0xc3
  ___might_sleep+0x17d/0x250
  __might_sleep+0x4a/0x80
  remove_vm_area+0x22/0x90
  __vunmap+0x2e/0x110
  vfree+0x42/0x90
  kvfree+0x2c/0x40
  drm_ht_remove+0x1a/0x30 [drm]
  ttm_object_file_release+0x50/0x90 [ttm]
  vmw_postclose+0x47/0x60 [vmwgfx]
  drm_release+0x290/0x3b0 [drm]
  __fput+0xf8/0x210
  ____fput+0xe/0x10
  task_work_run+0x85/0xc0
  exit_to_usermode_loop+0xb4/0xc0
  do_syscall_64+0x185/0x1f0
  entry_SYSCALL64_slow_path+0x25/0x25

This can be fixed in vmwgfx, but it would be better to make vfree()
non-sleeping again, because we may have other bugs like this one.

__purge_vmap_area_lazy() is the only function in the vfree() path that
wants to be able to sleep. So it makes sense to schedule
__purge_vmap_area_lazy() via schedule_work() so that it runs only in a
sleepable context. This will have a minimal effect on the regular
vfree() path, since __purge_vmap_area_lazy() is rarely called.

Fixes: 5803ed292e63 ("mm: mark all calls into the vmalloc subsystem as potentially sleeping")
Reported-by: Tetsuo Handa
Signed-off-by: Andrey Ryabinin
Cc:
Signed-off-by: Andrey Ryabinin
---
 mm/vmalloc.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 68eb002..ea1b4ab 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -701,7 +701,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
  * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
  * is already purging.
  */
-static void try_purge_vmap_area_lazy(void)
+static void try_purge_vmap_area_lazy(struct work_struct *work)
 {
 	if (mutex_trylock(&vmap_purge_lock)) {
 		__purge_vmap_area_lazy(ULONG_MAX, 0);
@@ -720,6 +720,8 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static DECLARE_WORK(purge_vmap_work, try_purge_vmap_area_lazy);
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -736,7 +738,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 	llist_add(&va->purge_list, &vmap_purge_list);
 
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		schedule_work(&purge_vmap_work);
 }
 
 /*
@@ -1125,7 +1127,6 @@ void vm_unmap_ram(const void *mem, unsigned int count)
 	unsigned long addr = (unsigned long)mem;
 	struct vmap_area *va;
 
-	might_sleep();
 	BUG_ON(!addr);
 	BUG_ON(addr < VMALLOC_START);
 	BUG_ON(addr > VMALLOC_END);
@@ -1477,8 +1478,6 @@ struct vm_struct *remove_vm_area(const void *addr)
 {
 	struct vmap_area *va;
 
-	might_sleep();
-
 	va = find_vmap_area((unsigned long)addr);
 	if (va && va->flags & VM_VM_AREA) {
 		struct vm_struct *vm = va->vm;
-- 
2.10.2
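
A minimal, self-contained sketch of the deferral pattern the patch relies
on is shown below. It is illustrative only: the example_* identifiers are
invented for the sketch and are not part of mm/vmalloc.c. The sleeping
work lives in a work item, and the atomic path merely queues it with
schedule_work(), which never sleeps.

/*
 * Illustrative sketch of deferring sleeping work out of atomic context.
 * Not the mm/vmalloc.c code; all example_* names are made up.
 */
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(example_purge_lock);
static DEFINE_SPINLOCK(example_lock);

/* Workqueue callback: runs in process context, so it may sleep. */
static void example_purge_fn(struct work_struct *work)
{
	mutex_lock(&example_purge_lock);
	/* ... sleeping cleanup work would go here ... */
	mutex_unlock(&example_purge_lock);
}

static DECLARE_WORK(example_purge_work, example_purge_fn);

/* Fast path: may run in atomic context, e.g. under a spinlock. */
static void example_free_path(void)
{
	spin_lock(&example_lock);
	/*
	 * schedule_work() does not sleep, so it is safe here; the sleeping
	 * part runs later from the workqueue in a sleepable context.
	 */
	schedule_work(&example_purge_work);
	spin_unlock(&example_lock);
}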