Date: Fri, 6 Mar 2020 12:18:23 +0100
From: Claudio Imbrenda
To: John Hubbard
Cc: , , , , , , , , , , , , Will Deacon
Subject: Re: [PATCH v3 2/2] mm/gup/writeback: add callbacks for inaccessible pages
In-Reply-To:
References: <20200304130655.462517-1-imbrenda@linux.ibm.com>
	<20200304130655.462517-3-imbrenda@linux.ibm.com>
Message-Id: <20200306121823.50d253ac@p-imbrenda>

On Thu, 5 Mar 2020 14:30:03 -0800
John Hubbard wrote:

> On 3/4/20 5:06 AM, Claudio Imbrenda wrote:
> > With the introduction of protected KVM guests on s390 there is now a
> > concept of inaccessible pages. These pages need to be made
> > accessible before the host can access them.
> >
> > While cpu accesses will trigger a fault that can be resolved, I/O
> > accesses will just fail. We need to add a callback into
> > architecture code for places that will do I/O, namely when
> > writeback is started or when a page reference is taken.
> >
> > This is not only to enable paging, file backing etc, it is also
> > necessary to protect the host against a malicious user space. For
> > example a bad QEMU could simply start direct I/O on such protected
> > memory. We do not want userspace to be able to trigger I/O errors
> > and thus the logic is "whenever somebody accesses that page (gup)
> > or does I/O, make sure that this page can be accessed". When the
> > guest tries to access that page we will wait in the page fault
> > handler for writeback to have finished and for the page_ref to be
> > the expected value.
> >
> > On s390x the function is not supposed to fail, so it is ok to use a
> > WARN_ON on failure. If we ever need some more finegrained handling
> > we can tackle this when we know the details.
> >
> > Signed-off-by: Claudio Imbrenda
> > Acked-by: Will Deacon
> > Reviewed-by: David Hildenbrand
> > Reviewed-by: Christian Borntraeger
> > ---
> >  include/linux/gfp.h |  6 ++++++
> >  mm/gup.c            | 30 +++++++++++++++++++++++++++---
> >  mm/page-writeback.c |  5 +++++
> >  3 files changed, 38 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index e5b817cb86e7..be2754841369 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -485,6 +485,12 @@ static inline void arch_free_page(struct page *page, int order) { }
> >  #ifndef HAVE_ARCH_ALLOC_PAGE
> >  static inline void arch_alloc_page(struct page *page, int order) { }
> >  #endif
> > +#ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
> > +static inline int arch_make_page_accessible(struct page *page)
> > +{
> > +	return 0;
> > +}
> > +#endif
> >  
> >  struct page *
> >  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 81a95fbe9901..d0c4c6f336bb 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -413,6 +413,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> >  	struct page *page;
> >  	spinlock_t *ptl;
> >  	pte_t *ptep, pte;
> > +	int ret;
> >  
> >  	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> >  	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
> > @@ -471,8 +472,6 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> >  		if (is_zero_pfn(pte_pfn(pte))) {
> >  			page = pte_page(pte);
> >  		} else {
> > -			int ret;
> > -
> >  			ret = follow_pfn_pte(vma, address, ptep, flags);
> >  			page = ERR_PTR(ret);
> >  			goto out;
> > @@ -480,7 +479,6 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> >  	}
> >  
> >  	if (flags & FOLL_SPLIT && PageTransCompound(page)) {
> > -		int ret;
> >  		get_page(page);
> >  		pte_unmap_unlock(ptep, ptl);
> >  		lock_page(page);
> > @@ -497,6 +495,19 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
> >  			page = ERR_PTR(-ENOMEM);
> >  			goto out;
> >  		}
> > +	/*
> > +	 * We need to make the page accessible if and only if we are going
> > +	 * to access its content (the FOLL_PIN case). Please see
> > +	 * Documentation/core-api/pin_user_pages.rst for details.
> > +	 */
> > +	if (flags & FOLL_PIN) {
> > +		ret = arch_make_page_accessible(page);
> > +		if (ret) {
> > +			unpin_user_page(page);
> > +			page = ERR_PTR(ret);
> > +			goto out;
> > +		}
> > +	}
> >  	if (flags & FOLL_TOUCH) {
> >  		if ((flags & FOLL_WRITE) &&
> >  		    !pte_dirty(pte) && !PageDirty(page))
> > @@ -2162,6 +2173,19 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> >  
> >  		VM_BUG_ON_PAGE(compound_head(page) != head, page);
> >  
> > +		/*
> > +		 * We need to make the page accessible if and only if we are
> > +		 * going to access its content (the FOLL_PIN case). Please
> > +		 * see Documentation/core-api/pin_user_pages.rst for
> > +		 * details.
> > +		 */
> > +		if (flags & FOLL_PIN) {
> > +			ret = arch_make_page_accessible(page);
> > +			if (ret) {
> > +				unpin_user_page(page);
> > +				goto pte_unmap;
> > +			}
> > +		}
> >  		SetPageReferenced(page);
> >  		pages[*nr] = page;
> >  		(*nr)++;
> > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > index ab5a3cee8ad3..8384be5a2758 100644
> > --- a/mm/page-writeback.c
> > +++ b/mm/page-writeback.c
> > @@ -2807,6 +2807,11 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
> >  			inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
> >  	}
> >  	unlock_page_memcg(page);
> > +	/*
> > +	 * If writeback has been triggered on a page that cannot be made
> > +	 * accessible, it is too late.
> > +	 */
> > +	WARN_ON(arch_make_page_accessible(page));
> 
> Hi,
> 
> Sorry for not commenting on this earlier. After looking at this a
> bit, I think a tiny tweak would be helpful, because:
> 
> a) WARN_ON() is a big problem for per-page issues, because, like
> ants, pages are prone to show up in large groups. And a warning and
> backtrace for each such page can easily bring a system to a crawl.
> 
> b) Based on your explanation of how this works, what your situation
> really seems to call for is the standard "crash hard in DEBUG builds,
> in order to keep developers out of trouble, but continue on in
> non-DEBUG builds".
> 
> So maybe you'd be better protected with this instead:
> 
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index ab5a3cee8ad3..b7f3d0766a5f 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2764,7 +2764,7 @@ int test_clear_page_writeback(struct page *page)
>  int __test_set_page_writeback(struct page *page, bool keep_write)
>  {
>  	struct address_space *mapping = page_mapping(page);
> -	int ret;
> +	int ret, access_ret;
>  
>  	lock_page_memcg(page);
>  	if (mapping && mapping_use_writeback_tags(mapping)) {
> @@ -2807,6 +2807,13 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
>  			inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
>  	}
>  	unlock_page_memcg(page);
> +	access_ret = arch_make_page_accessible(page);
> +	/*
> +	 * If writeback has been triggered on a page that cannot be made
> +	 * accessible, it is too late to recover here.
> +	 */
> +	VM_BUG_ON_PAGE(access_ret != 0, page);
> +
>  	return ret;
>  
>  }
> 
> Assuming that's acceptable, you can add:
> 
> Reviewed-by: John Hubbard
> 
> to the updated patch.

I will send an updated patch, thanks a lot for the feedback!
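
For readers of the archive, it may help to spell out the opt-in side that the
patch leaves to the architectures: the stub added to include/linux/gfp.h is a
no-op that is only used when an architecture does not define
HAVE_ARCH_MAKE_PAGE_ACCESSIBLE, so an architecture enables the hook by
providing that define plus its own arch_make_page_accessible(). Below is a
minimal sketch of what such an override could look like;
page_is_guest_protected() and export_page_to_host() are hypothetical helper
names used only for illustration, not the actual s390 implementation.

/* Hypothetical arch/<arch>/include/asm/page.h excerpt */
struct page;
#define HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
int arch_make_page_accessible(struct page *page);

/* Hypothetical arch-side definition */
int arch_make_page_accessible(struct page *page)
{
	/* Pages the host can already touch need no conversion. */
	if (!page_is_guest_protected(page))
		return 0;

	/*
	 * Ask the firmware/hypervisor interface to export the page for
	 * host I/O. A non-zero return propagates to the callers added by
	 * the patch: gup unpins the page and fails the pin, while
	 * __test_set_page_writeback() warns (or, with the tweak suggested
	 * above, hits VM_BUG_ON_PAGE in CONFIG_DEBUG_VM builds).
	 */
	return export_page_to_host(page);
}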