Date: Wed, 4 Dec 2019 14:52:19 +0100
From: Michal Hocko
To: Thomas Hellström (VMware)
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, pv-drivers@vmware.com,
	linux-graphics-maintainer@vmware.com, Thomas Hellstrom,
	Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
	Ralph Campbell, Jérôme Glisse, Christian König
Subject: Re: [PATCH v2 2/2] drm/ttm: Fix vm page protection handling
Message-ID: <20191204135219.GH25242@dhcp22.suse.cz>
References: <20191203104853.4378-1-thomas_os@shipmail.org>
	<20191203104853.4378-3-thomas_os@shipmail.org>
In-Reply-To: <20191203104853.4378-3-thomas_os@shipmail.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 03-12-19 11:48:53, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom
>
> TTM graphics buffer objects may, transparently to user-space, move
> between IO and system memory. When that happens, all PTEs pointing to
> the old location are zapped before the move and then faulted in again
> if needed. When that happens, the page protection caching-mode and
> encryption bits may change and be different from those of
> struct vm_area_struct::vm_page_prot.
>
> We were using an ugly hack to set the page protection correctly.
> Fix that and instead use vmf_insert_mixed_prot() and/or
> vmf_insert_pfn_prot().
> Also get the default page protection from
> struct vm_area_struct::vm_page_prot rather than using
> vm_get_page_prot(). This way we catch modifications done by the vm
> system for drivers that want write-notification.

So essentially this should not have any new side effect on
functionality, it is just making hacky/ugly code less so? In other
words, what are the consequences of having page protection
inconsistent with the vma's?

> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: "Matthew Wilcox (Oracle)"
> Cc: "Kirill A. Shutemov"
> Cc: Ralph Campbell
> Cc: "Jérôme Glisse"
> Cc: "Christian König"
> Signed-off-by: Thomas Hellstrom
> Reviewed-by: Christian König
> ---
>  drivers/gpu/drm/ttm/ttm_bo_vm.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index e6495ca2630b..2098f8d4dfc5 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -173,7 +173,6 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  				    pgoff_t num_prefault)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> -	struct vm_area_struct cvma = *vma;
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>  	struct ttm_bo_device *bdev = bo->bdev;
>  	unsigned long page_offset;
> @@ -244,7 +243,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  		goto out_io_unlock;
>  	}
>
> -	cvma.vm_page_prot = ttm_io_prot(bo->mem.placement, prot);
> +	prot = ttm_io_prot(bo->mem.placement, prot);
>  	if (!bo->mem.bus.is_iomem) {
>  		struct ttm_operation_ctx ctx = {
>  			.interruptible = false,
> @@ -260,7 +259,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  		}
>  	} else {
>  		/* Iomem should not be marked encrypted */
> -		cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
> +		prot = pgprot_decrypted(prot);
>  	}
>
>  	/*
> @@ -284,10 +283,11 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  	}
>
>  	if (vma->vm_flags & VM_MIXEDMAP)
> -		ret = vmf_insert_mixed(&cvma, address,
> -				__pfn_to_pfn_t(pfn, PFN_DEV));
> +		ret = vmf_insert_mixed_prot(vma, address,
> +					    __pfn_to_pfn_t(pfn, PFN_DEV),
> +					    prot);
>  	else
> -		ret = vmf_insert_pfn(&cvma, address, pfn);
> +		ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>
>  	/* Never error on prefaulted PTEs */
>  	if (unlikely((ret & VM_FAULT_ERROR))) {
> @@ -319,7 +319,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>  	if (ret)
>  		return ret;
>
> -	prot = vm_get_page_prot(vma->vm_flags);
> +	prot = vma->vm_page_prot;
>  	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
>  	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>  		return ret;
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs