From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chris Wilson,
	Lionel Landwerlin, Tvrtko Ursulin, Joonas Lahtinen, Rodrigo Vivi
Subject: [PATCH 4.19 020/306] drm/i915/userptr: Try to acquire the page lock around set_page_dirty()
Date: Wed, 27 Nov 2019 21:27:50 +0100
Message-Id: <20191127203116.175503395@linuxfoundation.org>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191127203114.766709977@linuxfoundation.org>
References: <20191127203114.766709977@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

From: Chris Wilson

commit 2d691aeca4aecbb8d0414a777a46981a8e142b05 upstream.

set_page_dirty says:

	For pages with a mapping this should be done under the page lock
	for the benefit of asynchronous memory errors who prefer a
	consistent dirty state. This rule can be broken in some special
	cases, but should be better not to.

Under those rules, it is only safe for us to use the plain set_page_dirty
calls for shmemfs/anonymous memory. Userptr may be used with real mappings
and so needs to use the locked version (set_page_dirty_lock).

However, following a try_to_unmap() we may want to remove the userptr and
so call put_pages().
However, try_to_unmap() acquires the page lock and so we must avoid
recursively locking the pages ourselves -- which means that we cannot
safely acquire the lock around set_page_dirty(). Since we can't be sure
of the lock, we have to either risk skipping the dirtying of the page, or
risk calling set_page_dirty() without the lock and so risk fs corruption.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203317
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112012
Fixes: 5cc9ed4b9a7a ("drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl")
Signed-off-by: Chris Wilson
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Joonas Lahtinen
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin
Link: https://patchwork.freedesktop.org/patch/msgid/20191111133205.11590-1-chris@chris-wilson.co.uk
(cherry picked from commit 0d4bbe3d407f79438dc4f87943db21f7134cfc65)
Signed-off-by: Joonas Lahtinen
(cherry picked from commit cee7fb437edcdb2f9f8affa959e274997f5dca4d)
Signed-off-by: Rodrigo Vivi
Signed-off-by: Greg Kroah-Hartman
---
 drivers/gpu/drm/i915/i915_gem_userptr.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -691,8 +691,28 @@ i915_gem_userptr_put_pages(struct drm_i9
 	i915_gem_gtt_finish_pages(obj, pages);
 
 	for_each_sgt_page(page, sgt_iter, pages) {
-		if (obj->mm.dirty)
+		if (obj->mm.dirty && trylock_page(page)) {
+			/*
+			 * As this may not be anonymous memory (e.g. shmem)
+			 * but exist on a real mapping, we have to lock
+			 * the page in order to dirty it -- holding
+			 * the page reference is not sufficient to
+			 * prevent the inode from being truncated.
+			 * Play safe and take the lock.
+			 *
+			 * However...!
+			 *
+			 * The mmu-notifier can be invalidated for a
+			 * migrate_page, that is alreadying holding the lock
+			 * on the page. Such a try_to_unmap() will result
+			 * in us calling put_pages() and so recursively try
+			 * to lock the page. We avoid that deadlock with
+			 * a trylock_page() and in exchange we risk missing
+			 * some page dirtying.
+			 */
 			set_page_dirty(page);
+			unlock_page(page);
+		}
 
 		mark_page_accessed(page);
 		put_page(page);
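
For reference, below is a condensed, self-contained sketch of the pattern
the patch applies when releasing a single userptr page. The standalone
helper and its name are illustrative only and not part of the i915 code;
trylock_page(), set_page_dirty(), unlock_page(), mark_page_accessed() and
put_page() are the mm/pagemap APIs the diff above relies on.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/swap.h>

/*
 * Illustrative sketch (not part of the patch): release one userptr page.
 * put_pages() can be reached from the mmu-notifier while try_to_unmap()
 * or page migration already holds the page lock, so an unconditional
 * lock_page() here would self-deadlock; trylock_page() backs off instead,
 * at the cost of occasionally not dirtying that page.
 */
static void release_userptr_page(struct page *page, bool dirty)
{
	if (dirty && trylock_page(page)) {
		/*
		 * We own the page lock, as set_page_dirty() expects for
		 * pages backed by a real mapping (not just shmem/anon).
		 */
		set_page_dirty(page);
		unlock_page(page);
	}
	/*
	 * If trylock_page() failed, whoever holds the lock (e.g. page
	 * migration) is still operating on the page; skip the dirty mark
	 * rather than call set_page_dirty() unlocked and risk fs corruption.
	 */
	mark_page_accessed(page);
	put_page(page);
}

The trade-off is deliberate: a failed trylock means another path already
owns the page lock, so skipping set_page_dirty() is the only option that
avoids both the recursive-lock deadlock and an unlocked dirtying.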