From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH, MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83849C3524F for ; Tue, 7 Jan 2020 22:46:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 43424207E0 for ; Tue, 7 Jan 2020 22:46:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="N8y3M7oE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727954AbgAGWqm (ORCPT ); Tue, 7 Jan 2020 17:46:42 -0500 Received: from hqnvemgate26.nvidia.com ([216.228.121.65]:4869 "EHLO hqnvemgate26.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727739AbgAGWqR (ORCPT ); Tue, 7 Jan 2020 17:46:17 -0500 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Tue, 07 Jan 2020 14:45:44 -0800 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Tue, 07 Jan 2020 14:46:01 -0800 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Tue, 07 Jan 2020 14:46:01 -0800 Received: from HQMAIL105.nvidia.com (172.20.187.12) by HQMAIL111.nvidia.com (172.20.187.18) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Tue, 7 Jan 2020 22:46:00 +0000 Received: from hqnvemgw03.nvidia.com (10.124.88.68) by HQMAIL105.nvidia.com (172.20.187.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Tue, 7 Jan 2020 22:46:00 
+0000 Received: from blueforge.nvidia.com (Not Verified[10.110.48.28]) by hqnvemgw03.nvidia.com with Trustwave SEG (v7,5,8,10121) id ; Tue, 07 Jan 2020 14:46:00 -0800 From: John Hubbard To: Andrew Morton CC: Al Viro , Alex Williamson , Benjamin Herrenschmidt , =?UTF-8?q?Bj=C3=B6rn=20T=C3=B6pel?= , Christoph Hellwig , Dan Williams , Daniel Vetter , Dave Chinner , David Airlie , "David S . Miller" , Ira Weiny , Jan Kara , Jason Gunthorpe , Jens Axboe , Jonathan Corbet , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , "Kirill A . Shutemov" , Magnus Karlsson , Mauro Carvalho Chehab , Michael Ellerman , Michal Hocko , Mike Kravetz , Paul Mackerras , Shuah Khan , Vlastimil Babka , , , , , , , , , , , , , LKML , John Hubbard , Mike Rapoport Subject: [PATCH v12 11/22] mm/gup: introduce pin_user_pages*() and FOLL_PIN Date: Tue, 7 Jan 2020 14:45:47 -0800 Message-ID: <20200107224558.2362728-12-jhubbard@nvidia.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200107224558.2362728-1-jhubbard@nvidia.com> References: <20200107224558.2362728-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1578437144; bh=ILPjWTz1uECxM8jMFV9CrBwQY3DCe/z1cMA8J31RcEU=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Type:Content-Transfer-Encoding; b=N8y3M7oEGlF2Wl2fmm0TU+YowgLxVYo4DvTmDlxpuOsYOjpyz1F/NGihEQMkRvDMS pUJ7V2Sropf3MRr0OCEHWhcCpRpeVudhY3paboRYSFjlCWQedKWdAWNPFTXNQbf0v9 rVHCHVL9BIxj5lsLVXqMftCDh91WGIG19SOtYt0L+R7blB3regKi3MBDHVLYvHg/vO CqUwZolw+sfLd09xQIbB6qDdrq+/xbWhlkXeUvekAhXLOiGiikYJgj8+Y6q4T4Ue7B NXexgnMZIaUQsnLLxupXXAhNLXKrjguawcfLMohQ7aeBX/oUiVJrgp8vpD7nBLPWk9 sHiEfW/E7AVoA== Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Introduce pin_user_pages*() variations of 
get_user_pages*() calls. For now, these are placeholder calls, until the
various call sites are converted to use the correct get_user_pages*() or
pin_user_pages*() API. These variants will eventually all set FOLL_PIN, which
is also introduced, and thoroughly documented:

    pin_user_pages()
    pin_user_pages_remote()
    pin_user_pages_fast()

All pages pinned via the above calls must be unpinned via put_user_page().
The underlying rules are:

* FOLL_PIN is a gup-internal flag, so call sites should not set it directly.
  That behavior is enforced with assertions.

* Call sites that want to indicate that they are going to do Direct IO
  ("DIO"), or something with similar characteristics, should call a
  get_user_pages()-like wrapper that sets FOLL_PIN. These wrappers:

    * Start with "pin_user_pages" instead of "get_user_pages", which makes
      the call sites easy to find and audit.

    * Set FOLL_PIN.

* Pages received via FOLL_PIN must be released via put_user_page().

Thanks to Jan Kara and Vlastimil Babka for explaining the 4 cases in this
documentation. (I've reworded it and expanded upon it.)
Reviewed-by: Jan Kara
Reviewed-by: Mike Rapoport # Documentation
Reviewed-by: Jérôme Glisse
Cc: Jonathan Corbet
Cc: Ira Weiny
Signed-off-by: John Hubbard
---
 Documentation/core-api/index.rst          |   1 +
 Documentation/core-api/pin_user_pages.rst | 232 ++++++++++++++++++++++
 include/linux/mm.h                        |  63 ++++--
 mm/gup.c                                  | 164 +++++++++++++--
 4 files changed, 426 insertions(+), 34 deletions(-)
 create mode 100644 Documentation/core-api/pin_user_pages.rst

diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index ab0eae1c153a..413f7d7c8642 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -31,6 +31,7 @@ Core utilities
    generic-radix-tree
    memory-allocation
    mm-api
+   pin_user_pages
    gfp_mask-from-fs-io
    timekeeping
    boot-time-mm
diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
new file mode 100644
index 000000000000..71849830cd48
--- /dev/null
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -0,0 +1,232 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================================
+pin_user_pages() and related calls
+====================================
+
+.. contents:: :local:
+
+Overview
+========
+
+This document describes the following functions::
+
+ pin_user_pages()
+ pin_user_pages_fast()
+ pin_user_pages_remote()
+
+Basic description of FOLL_PIN
+=============================
+
+FOLL_PIN and FOLL_LONGTERM are flags that can be passed to the get_user_pages*()
+("gup") family of functions. FOLL_PIN has significant interactions and
+interdependencies with FOLL_LONGTERM, so both are covered here.
+
+FOLL_PIN is internal to gup, meaning that it should not appear at the gup call
+sites. This allows the associated wrapper functions (pin_user_pages*() and
+others) to set the correct combination of these flags, and to check for
+problems as well.
+
+FOLL_LONGTERM, on the other hand, *is* allowed to be set at the gup call sites.
+This is in order to avoid creating a large number of wrapper functions to cover
+all combinations of get*(), pin*(), FOLL_LONGTERM, and more. Also, the
+pin_user_pages*() APIs are clearly distinct from the get_user_pages*() APIs, so
+that's a natural dividing line, and a good point to make separate wrapper calls.
+In other words, use pin_user_pages*() for DMA-pinned pages, and
+get_user_pages*() for other cases. There are four cases described later on in
+this document, to further clarify that concept.
+
+FOLL_PIN and FOLL_GET are mutually exclusive for a given gup call. However,
+multiple threads and call sites are free to pin the same struct pages, via both
+FOLL_PIN and FOLL_GET. It's just the call site that needs to choose one or the
+other, not the struct page(s).
+
+The FOLL_PIN implementation is nearly the same as FOLL_GET, except that FOLL_PIN
+uses a different reference counting technique.
+
+FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying that is,
+FOLL_LONGTERM is a specific, more restrictive case of FOLL_PIN.
+
+Which flags are set by each wrapper
+===================================
+
+For these pin_user_pages*() functions, FOLL_PIN is OR'd in with whatever gup
+flags the caller provides. The caller is required to pass in a non-null struct
+pages* array, and the function then pins pages by incrementing the refcount of
+each page by a special value. For now, that value is +1, just like
+get_user_pages*()::
+
+    Function
+    --------
+    pin_user_pages          FOLL_PIN is always set internally by this function.
+    pin_user_pages_fast     FOLL_PIN is always set internally by this function.
+    pin_user_pages_remote   FOLL_PIN is always set internally by this function.
+
+For these get_user_pages*() functions, FOLL_GET might not even be specified.
+Behavior is a little more complex than above. If FOLL_GET was *not* specified,
+but the caller passed in a non-null struct pages* array, then the function
+sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
+of each page by +1::
+
+    Function
+    --------
+    get_user_pages          FOLL_GET is sometimes set internally by this function.
+    get_user_pages_fast     FOLL_GET is sometimes set internally by this function.
+    get_user_pages_remote   FOLL_GET is sometimes set internally by this function.
+
+Tracking dma-pinned pages
+=========================
+
+Some of the key design constraints, and solutions, for tracking dma-pinned
+pages:
+
+* An actual reference count, per struct page, is required. This is because
+  multiple processes may pin and unpin a page.
+
+* False positives (reporting that a page is dma-pinned, when in fact it is not)
+  are acceptable, but false negatives are not.
+
+* struct page may not be increased in size for this, and all fields are already
+  used.
+
+* Given the above, we can overload the page->_refcount field by using, sort of,
+  the upper bits in that field for a dma-pinned count. "Sort of" means that,
+  rather than dividing page->_refcount into bit fields, we simply add a
+  medium-large value (GUP_PIN_COUNTING_BIAS, initially chosen to be 1024:
+  10 bits) to page->_refcount. This provides fuzzy behavior: if a page has
+  get_page() called on it 1024 times, then it will appear to have a single
+  dma-pinned count. And again, that's acceptable.
+
+This also leads to limitations: there are only 31 - 10 == 21 bits available
+for a counter that increments 10 bits at a time.
+
+TODO: for 1GB and larger huge pages, this is cutting it close. That's because
+when pin_user_pages() follows such pages, it increments the head page by "1"
+(where "1" used to mean "+1" for get_user_pages(), but now means "+1024" for
+pin_user_pages()) for each tail page. So if you have a 1GB huge page:
+
+* There are 256K (18 bits) worth of 4 KB tail pages.
+* There are 21 bits available to count up via GUP_PIN_COUNTING_BIAS (that is,
+  10 bits at a time)
+* There are 21 - 18 == 3 bits available to count. Except that there aren't,
+  because you need to allow for a few normal get_page() calls on the head page,
+  as well. Fortunately, the approach of using addition, rather than "hard"
+  bitfields, within page->_refcount, allows for sharing these bits gracefully.
+  But we're still looking at about 8 references.
+
+This, however, is a missing feature more than anything else, because it's easily
+solved by addressing an obvious inefficiency in the original get_user_pages()
+approach of retrieving pages: stop treating all the pages as if they were
+PAGE_SIZE. Retrieve huge pages as huge pages. The callers need to be aware of
+this, so some work is required. Once that's in place, this limitation mostly
+disappears from view, because there will be ample refcounting range available.
+
+* Callers must specifically request "dma-pinned tracking of pages". In other
+  words, just calling get_user_pages() will not suffice; a new set of functions,
+  pin_user_pages() and related, must be used.
+
+FOLL_PIN, FOLL_GET, FOLL_LONGTERM: when to use which flags
+==========================================================
+
+Thanks to Jan Kara, Vlastimil Babka and several other -mm people, for describing
+these categories:
+
+CASE 1: Direct IO (DIO)
+-----------------------
+There are GUP references to pages that are serving as DIO buffers. These
+buffers are needed for a relatively short time (so they are not "long term").
+No special synchronization with page_mkclean() or munmap() is provided.
+Therefore, flags to set at the call site are: ::
+
+    FOLL_PIN
+
+...but rather than setting FOLL_PIN directly, call sites should use one of
+the pin_user_pages*() routines that set FOLL_PIN.
+
+CASE 2: RDMA
+------------
+There are GUP references to pages that are serving as DMA buffers. These
+buffers are needed for a long time ("long term"). No special synchronization
+with page_mkclean() or munmap() is provided. Therefore, flags to set at the
+call site are: ::
+
+    FOLL_PIN | FOLL_LONGTERM
+
+NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
+because DAX pages do not have a separate page cache, and so "pinning" implies
+locking down file system blocks, which is not (yet) supported in that way.
+
+CASE 3: Hardware with page faulting support
+-------------------------------------------
+Here, a well-written driver doesn't normally need to pin pages at all. However,
+if the driver does choose to do so, it can register MMU notifiers for the range,
+and will be called back upon invalidation. Either way (avoiding page pinning, or
+using MMU notifiers to unpin upon request), there is proper synchronization with
+both filesystem and mm (page_mkclean(), munmap(), etc.).
+
+Therefore, neither flag needs to be set.
+
+In this case, ideally, neither get_user_pages() nor pin_user_pages() should be
+called.
Instead, the software should be written so that it does not pin pages.
+This allows mm and filesystems to operate more efficiently and reliably.
+
+CASE 4: Pinning for struct page manipulation only
+-------------------------------------------------
+Here, normal GUP calls are sufficient, so neither flag needs to be set.
+
+page_dma_pinned(): the whole point of pinning
+=============================================
+
+The whole point of marking pages as "DMA-pinned" or "gup-pinned" is to be able
+to query, "is this page DMA-pinned?" That allows code such as page_mkclean()
+(and file system writeback code in general) to make informed decisions about
+what to do when a page cannot be unmapped due to such pins.
+
+What to do in those cases is the subject of a years-long series of discussions
+and debates (see the References at the end of this document). It's a TODO item
+here: fill in the details once that's worked out. Meanwhile, it's safe to say
+that having this available: ::
+
+    static inline bool page_dma_pinned(struct page *page)
+
+...is a prerequisite to solving the long-running gup+DMA problem.
+
+Another way of thinking about FOLL_GET, FOLL_PIN, and FOLL_LONGTERM
+===================================================================
+
+Another way of thinking about these flags is as a progression of restrictions:
+FOLL_GET is for struct page manipulation, without affecting the data that the
+struct page refers to. FOLL_PIN is a *replacement* for FOLL_GET, and is for
+short term pins on pages whose data *will* get accessed. As such, FOLL_PIN is
+a "more severe" form of pinning.
And finally, FOLL_LONGTERM is an even more
+restrictive case that has FOLL_PIN as a prerequisite: this is for pages that
+will be pinned longterm, and whose data will be accessed.
+
+Unit testing
+============
+This file::
+
+    tools/testing/selftests/vm/gup_benchmark.c
+
+has the following new calls to exercise the new pin*() wrapper functions:
+
+* PIN_FAST_BENCHMARK (./gup_benchmark -a)
+* PIN_BENCHMARK (./gup_benchmark -b)
+
+You can monitor how many total dma-pinned pages have been acquired and released
+since the system was booted, via two new /proc/vmstat entries: ::
+
+    /proc/vmstat/nr_foll_pin_requested
+    /proc/vmstat/nr_foll_pin_returned
+
+Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is
+because there is a noticeable performance drop in put_user_page(), when they
+are activated.
+
+References
+==========
+
+* `Some slow progress on get_user_pages() (Apr 2, 2019) `_
+* `DMA and get_user_pages() (LPC: Dec 12, 2018) `_
+* `The trouble with get_user_pages() (Apr 30, 2018) `_
+
+John Hubbard, October, 2019
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e2032ff640eb..f9653e666bf4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1047,16 +1047,14 @@ static inline void put_page(struct page *page)
  * put_user_page() - release a gup-pinned page
  * @page: pointer to page to be released
  *
- * Pages that were pinned via get_user_pages*() must be released via
- * either put_user_page(), or one of the put_user_pages*() routines
- * below. This is so that eventually, pages that are pinned via
- * get_user_pages*() can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special
- * handling.
+ * Pages that were pinned via pin_user_pages*() must be released via either
+ * put_user_page(), or one of the put_user_pages*() routines.
This is so that
+ * eventually such pages can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special handling.
  *
  * put_user_page() and put_page() are not interchangeable, despite this early
  * implementation that makes them look the same. put_user_page() calls must
- * be perfectly matched up with get_user_page() calls.
+ * be perfectly matched up with pin*() calls.
  */
 static inline void put_user_page(struct page *page)
 {
@@ -1514,9 +1512,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
			    unsigned long start, unsigned long nr_pages,
			    unsigned int gup_flags, struct page **pages,
			    struct vm_area_struct **vmas, int *locked);
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
			    unsigned int gup_flags, struct page **pages,
			    struct vm_area_struct **vmas);
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		    unsigned int gup_flags, struct page **pages,
+		    struct vm_area_struct **vmas);
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
		    unsigned int gup_flags, struct page **pages, int *locked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
@@ -1524,6 +1529,8 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 
 int get_user_pages_fast(unsigned long start, int nr_pages,
			 unsigned int gup_flags, struct page **pages);
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages);
 
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
@@ -2587,13 +2594,15 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned
long address,
 #define FOLL_ANON	0x8000	/* don't do file mappings */
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
+#define FOLL_PIN	0x40000	/* pages must be released via put_user_page() */
 
 /*
- * NOTE on FOLL_LONGTERM:
+ * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
+ * other. Here is what they mean, and how to use them:
  *
  * FOLL_LONGTERM indicates that the page will be held for an indefinite time
- * period _often_ under userspace control. This is contrasted with
- * iov_iter_get_pages() where usages which are transient.
+ * period _often_ under userspace control. This is in contrast to
+ * iov_iter_get_pages(), whose usages are transient.
  *
  * FIXME: For pages which are part of a filesystem, mappings are subject to the
  * lifetime enforced by the filesystem and we need guarantees that longterm
@@ -2608,11 +2617,39 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
  * Currently only get_user_pages() and get_user_pages_fast() support this flag
  * and calls to get_user_pages_[un]locked are specifically not allowed.  This
  * is due to an incompatibility with the FS DAX check and
- * FAULT_FLAG_ALLOW_RETRY
+ * FAULT_FLAG_ALLOW_RETRY.
  *
- * In the CMA case: longterm pins in a CMA region would unnecessarily fragment
- * that region.  And so CMA attempts to migrate the page before pinning when
+ * In the CMA case: long term pins in a CMA region would unnecessarily fragment
+ * that region.  And so, CMA attempts to migrate the page before pinning, when
  * FOLL_LONGTERM is specified.
+ *
+ * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
+ * but an additional pin counting system) will be invoked. This is intended for
+ * anything that gets a page reference and then touches page data (for example,
+ * Direct IO).
This lets the filesystem know that some non-file-system entity is
+ * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
+ * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
+ * a call to put_user_page().
+ *
+ * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
+ * and separate refcounting mechanisms, however, and that means that each has
+ * its own acquire and release mechanisms:
+ *
+ *     FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
+ *
+ *     FOLL_PIN: pin_user_pages*() to acquire, and put_user_pages() to release.
+ *
+ * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
+ * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
+ * calls applied to them, and that's perfectly OK. This is a constraint on the
+ * callers, not on the pages.)
+ *
+ * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
+ * directly by the caller. That's in order to help avoid mismatches when
+ * releasing pages: get_user_pages*() pages must be released via put_page(),
+ * while pin_user_pages*() pages must be released via put_user_page().
+ *
+ * Please see Documentation/core-api/pin_user_pages.rst for more information.
+ */
 
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
diff --git a/mm/gup.c b/mm/gup.c
index a594bc708367..1c200eeabd77 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -194,6 +194,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
	spinlock_t *ptl;
	pte_t *ptep, pte;
 
+	/* FOLL_GET and FOLL_PIN are mutually exclusive.
 */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return ERR_PTR(-EINVAL);
 retry:
	if (unlikely(pmd_bad(*pmd)))
		return no_page_table(vma, flags);
@@ -811,7 +815,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
	start = untagged_addr(start);
 
-	VM_BUG_ON(!!pages != !!(gup_flags & FOLL_GET));
+	VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
 
	/*
	 * If FOLL_FORCE is set then do not force a full fault as the hinting
@@ -1035,7 +1039,16 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
		BUG_ON(*locked != 1);
	}
 
-	if (pages)
+	/*
+	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
+	 * is to set FOLL_GET if the caller wants pages[] filled in (but has
+	 * carelessly failed to specify FOLL_GET), so keep doing that, but only
+	 * for FOLL_GET, not for the newer FOLL_PIN.
+	 *
+	 * FOLL_PIN always expects pages to be non-null, but no need to assert
+	 * that here, as any failures will be obvious enough.
+	 */
+	if (pages && !(flags & FOLL_PIN))
		flags |= FOLL_GET;
 
	pages_done = 0;
@@ -1606,11 +1619,19 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
  * should use get_user_pages because it cannot pass
  * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
 */
+#ifdef CONFIG_MMU
 long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
		unsigned long start, unsigned long nr_pages,
		unsigned int gup_flags, struct page **pages,
		struct vm_area_struct **vmas, int *locked)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
	/*
	 * Parts of FOLL_LONGTERM behavior are incompatible with
	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
@@ -1636,6 +1657,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
+#else /* CONFIG_MMU */
+long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	return 0;
+}
+#endif /* !CONFIG_MMU */
+
 /*
  * This is the same as get_user_pages_remote(), just with a
  * less-flexible calling convention where we assume that the task
@@ -1647,6 +1678,13 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
		unsigned int gup_flags, struct page **pages,
		struct vm_area_struct **vmas)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
	return __gup_longterm_locked(current, current->mm, start, nr_pages,
				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
@@ -2389,30 +2427,15 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
	return ret;
 }
 
-/**
- * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
- *
- * Attempt to pin user pages in memory without taking mm->mmap_sem.
- * If not successful, it will fall back to taking the lock and
- * calling get_user_pages().
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
+static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+					unsigned int gup_flags,
+					struct page **pages)
 {
	unsigned long addr, len, end;
	int nr = 0, ret = 0;
 
	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-				       FOLL_FORCE)))
+				       FOLL_FORCE | FOLL_PIN)))
		return -EINVAL;
 
	start = untagged_addr(start) & PAGE_MASK;
@@ -2452,4 +2475,103 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
	return ret;
 }
+
+/**
+ * get_user_pages_fast() - pin user pages in memory
+ * @start:	starting user address
+ * @nr_pages:	number of pages from start to pin
+ * @gup_flags:	flags modifying pin behaviour
+ * @pages:	array that receives pointers to the pages pinned.
+ *		Should be at least nr_pages long.
+ *
+ * Attempt to pin user pages in memory without taking mm->mmap_sem.
+ * If not successful, it will fall back to taking the lock and
+ * calling get_user_pages().
+ *
+ * Returns number of pages pinned. This may be fewer than the number requested.
+ * If nr_pages is 0 or negative, returns 0. If no pages were pinned, returns
+ * -errno.
+ */
+int get_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
+	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
 EXPORT_SYMBOL_GPL(get_user_pages_fast);
+
+/**
+ * pin_user_pages_fast() - pin user pages in memory without taking locks
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages_fast().
+ *
+ * This is intended for Case 1 (DIO) in
+ * Documentation/core-api/pin_user_pages.rst. It is NOT intended for Case 2
+ * (RDMA: long-term pins).
+ */
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
+EXPORT_SYMBOL_GPL(pin_user_pages_fast);
+
+/**
+ * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages_remote().
+ *
+ * This is intended for Case 1 (DIO) in
+ * Documentation/core-api/pin_user_pages.rst. It is NOT intended for Case 2
+ * (RDMA: long-term pins).
+ */
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+ * Until then, just behave like the corresponding get_user_pages*() + * routine. + */ + return get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags, pages, + vmas, locked); +} +EXPORT_SYMBOL(pin_user_pages_remote); + +/** + * pin_user_pages() - pin user pages in memory for use by other devices + * + * For now, this is a placeholder function, until various call sites are + * converted to use the correct get_user_pages*() or pin_user_pages*() API= . So, + * this is identical to get_user_pages(). + * + * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rs= t. It + * is NOT intended for Case 2 (RDMA: long-term pins). + */ +long pin_user_pages(unsigned long start, unsigned long nr_pages, + unsigned int gup_flags, struct page **pages, + struct vm_area_struct **vmas) +{ + /* + * This is a placeholder, until the pin functionality is activated. + * Until then, just behave like the corresponding get_user_pages*() + * routine. + */ + return get_user_pages(start, nr_pages, gup_flags, pages, vmas); +} +EXPORT_SYMBOL(pin_user_pages); --=20 2.24.1 From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.6 required=3.0 tests=DKIM_INVALID,DKIM_SIGNED, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 658B7C33C9E for ; Tue, 7 Jan 2020 23:31:05 +0000 (UTC) Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DEA12207E0 for ; Tue, 7 Jan 2020 23:31:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature 
Introduce pin_user_pages*() variations of get_user_pages*() calls, and
also pin_longterm_pages*() variations.

For now, these are placeholder calls, until the various call sites are
converted to use the correct get_user_pages*() or pin_user_pages*() API.

These variants will eventually all set FOLL_PIN, which is also
introduced, and thoroughly documented.

    pin_user_pages()
    pin_user_pages_remote()
    pin_user_pages_fast()

All pages that are pinned via the above calls must be unpinned via
put_user_page().

The underlying rules are:

* FOLL_PIN is a gup-internal flag, so the call sites should not directly
  set it. That behavior is enforced with assertions.

* Call sites that want to indicate that they are going to do DirectIO
  ("DIO") or something with similar characteristics should call a
  get_user_pages()-like wrapper call that sets FOLL_PIN. These wrappers
  will:

    * Start with "pin_user_pages" instead of "get_user_pages". That
      makes it easy to find and audit the call sites.

    * Set FOLL_PIN.

* For pages that are received via FOLL_PIN, those pages must be returned
  via put_user_page().

Thanks to Jan Kara and Vlastimil Babka for explaining the 4 cases in
this documentation. (I've reworded it and expanded upon it.)
Reviewed-by: Jan Kara
Reviewed-by: Mike Rapoport	# Documentation
Reviewed-by: Jérôme Glisse
Cc: Jonathan Corbet
Cc: Ira Weiny
Signed-off-by: John Hubbard
---
 Documentation/core-api/index.rst          |   1 +
 Documentation/core-api/pin_user_pages.rst | 232 ++++++++++++++++++++++
 include/linux/mm.h                        |  63 ++++--
 mm/gup.c                                  | 164 +++++++++++++--
 4 files changed, 426 insertions(+), 34 deletions(-)
 create mode 100644 Documentation/core-api/pin_user_pages.rst

diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index ab0eae1c153a..413f7d7c8642 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -31,6 +31,7 @@ Core utilities
    generic-radix-tree
    memory-allocation
    mm-api
+   pin_user_pages
    gfp_mask-from-fs-io
    timekeeping
    boot-time-mm
diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
new file mode 100644
index 000000000000..71849830cd48
--- /dev/null
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -0,0 +1,232 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================================
+pin_user_pages() and related calls
+====================================
+
+.. contents:: :local:
+
+Overview
+========
+
+This document describes the following functions::
+
+ pin_user_pages()
+ pin_user_pages_fast()
+ pin_user_pages_remote()
+
+Basic description of FOLL_PIN
+=============================
+
+FOLL_PIN and FOLL_LONGTERM are flags that can be passed to the get_user_pages*()
+("gup") family of functions. FOLL_PIN has significant interactions and
+interdependencies with FOLL_LONGTERM, so both are covered here.
+
+FOLL_PIN is internal to gup, meaning that it should not appear at the gup call
+sites. This allows the associated wrapper functions (pin_user_pages*() and
+others) to set the correct combination of these flags, and to check for
+problems as well.
+
+FOLL_LONGTERM, on the other hand, *is* allowed to be set at the gup call sites.
+This is in order to avoid creating a large number of wrapper functions to cover
+all combinations of get*(), pin*(), FOLL_LONGTERM, and more. Also, the
+pin_user_pages*() APIs are clearly distinct from the get_user_pages*() APIs, so
+that's a natural dividing line, and a good point to make separate wrapper calls.
+In other words, use pin_user_pages*() for DMA-pinned pages, and
+get_user_pages*() for other cases. There are four cases described later on in
+this document, to further clarify that concept.
+
+FOLL_PIN and FOLL_GET are mutually exclusive for a given gup call. However,
+multiple threads and call sites are free to pin the same struct pages, via both
+FOLL_PIN and FOLL_GET. It's just the call site that needs to choose one or the
+other, not the struct page(s).
+
+The FOLL_PIN implementation is nearly the same as FOLL_GET, except that FOLL_PIN
+uses a different reference counting technique.
+
+FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying that is,
+FOLL_LONGTERM is a specific, more restrictive case of FOLL_PIN.
+
+Which flags are set by each wrapper
+===================================
+
+For these pin_user_pages*() functions, FOLL_PIN is OR'd in with whatever gup
+flags the caller provides. The caller is required to pass in a non-null struct
+pages* array, and the function then pins pages by incrementing each by a special
+value. For now, that value is +1, just like get_user_pages*().::
+
+    Function
+    --------
+    pin_user_pages          FOLL_PIN is always set internally by this function.
+    pin_user_pages_fast     FOLL_PIN is always set internally by this function.
+    pin_user_pages_remote   FOLL_PIN is always set internally by this function.
+
+For these get_user_pages*() functions, FOLL_GET might not even be specified.
+Behavior is a little more complex than above. If FOLL_GET was *not* specified,
+but the caller passed in a non-null struct pages* array, then the function
+sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
+of each page by +1.::
+
+    Function
+    --------
+    get_user_pages          FOLL_GET is sometimes set internally by this function.
+    get_user_pages_fast     FOLL_GET is sometimes set internally by this function.
+    get_user_pages_remote   FOLL_GET is sometimes set internally by this function.
+
+Tracking dma-pinned pages
+=========================
+
+Some of the key design constraints, and solutions, for tracking dma-pinned
+pages:
+
+* An actual reference count, per struct page, is required. This is because
+  multiple processes may pin and unpin a page.
+
+* False positives (reporting that a page is dma-pinned, when in fact it is not)
+  are acceptable, but false negatives are not.
+
+* struct page may not be increased in size for this, and all fields are already
+  used.
+
+* Given the above, we can overload the page->_refcount field by using, sort of,
+  the upper bits in that field for a dma-pinned count. "Sort of", means that,
+  rather than dividing page->_refcount into bit fields, we simply add a medium-
+  large value (GUP_PIN_COUNTING_BIAS, initially chosen to be 1024: 10 bits) to
+  page->_refcount. This provides fuzzy behavior: if a page has get_page() called
+  on it 1024 times, then it will appear to have a single dma-pinned count.
+  And again, that's acceptable.
+
+This also leads to limitations: there are only 31-10==21 bits available for a
+counter that increments 10 bits at a time.
+
+TODO: for 1GB and larger huge pages, this is cutting it close. That's because
+when pin_user_pages() follows such pages, it increments the head page by "1"
+(where "1" used to mean "+1" for get_user_pages(), but now means "+1024" for
+pin_user_pages()) for each tail page. So if you have a 1GB huge page:
+
+* There are 256K (18 bits) worth of 4 KB tail pages.
+* There are 21 bits available to count up via GUP_PIN_COUNTING_BIAS (that is,
+  10 bits at a time)
+* There are 21 - 18 == 3 bits available to count. Except that there aren't,
+  because you need to allow for a few normal get_page() calls on the head page,
+  as well. Fortunately, the approach of using addition, rather than "hard"
+  bitfields, within page->_refcount, allows for sharing these bits gracefully.
+  But we're still looking at about 8 references.
+
+This, however, is a missing feature more than anything else, because it's easily
+solved by addressing an obvious inefficiency in the original get_user_pages()
+approach of retrieving pages: stop treating all the pages as if they were
+PAGE_SIZE. Retrieve huge pages as huge pages. The callers need to be aware of
+this, so some work is required. Once that's in place, this limitation mostly
+disappears from view, because there will be ample refcounting range available.
+
+* Callers must specifically request "dma-pinned tracking of pages". In other
+  words, just calling get_user_pages() will not suffice; a new set of functions,
+  pin_user_pages() and related, must be used.
+
+FOLL_PIN, FOLL_GET, FOLL_LONGTERM: when to use which flags
+==========================================================
+
+Thanks to Jan Kara, Vlastimil Babka and several other -mm people, for describing
+these categories:
+
+CASE 1: Direct IO (DIO)
+-----------------------
+There are GUP references to pages that are serving
+as DIO buffers. These buffers are needed for a relatively short time (so they
+are not "long term"). No special synchronization with page_mkclean() or
+munmap() is provided. Therefore, flags to set at the call site are: ::
+
+    FOLL_PIN
+
+...but rather than setting FOLL_PIN directly, call sites should use one of
+the pin_user_pages*() routines that set FOLL_PIN.
+
+CASE 2: RDMA
+------------
+There are GUP references to pages that are serving as DMA
+buffers. These buffers are needed for a long time ("long term"). No special
+synchronization with page_mkclean() or munmap() is provided. Therefore, flags
+to set at the call site are: ::
+
+    FOLL_PIN | FOLL_LONGTERM
+
+NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
+because DAX pages do not have a separate page cache, and so "pinning" implies
+locking down file system blocks, which is not (yet) supported in that way.
+
+CASE 3: Hardware with page faulting support
+-------------------------------------------
+Here, a well-written driver doesn't normally need to pin pages at all. However,
+if the driver does choose to do so, it can register MMU notifiers for the range,
+and will be called back upon invalidation. Either way (avoiding page pinning, or
+using MMU notifiers to unpin upon request), there is proper synchronization with
+both filesystem and mm (page_mkclean(), munmap(), etc).
+
+Therefore, neither flag needs to be set.
+
+In this case, ideally, neither get_user_pages() nor pin_user_pages() should be
+called.
+Instead, the software should be written so that it does not pin pages.
+This allows mm and filesystems to operate more efficiently and reliably.
+
+CASE 4: Pinning for struct page manipulation only
+-------------------------------------------------
+Here, normal GUP calls are sufficient, so neither flag needs to be set.
+
+page_dma_pinned(): the whole point of pinning
+=============================================
+
+The whole point of marking pages as "DMA-pinned" or "gup-pinned" is to be able
+to query, "is this page DMA-pinned?" That allows code such as page_mkclean()
+(and file system writeback code in general) to make informed decisions about
+what to do when a page cannot be unmapped due to such pins.
+
+What to do in those cases is the subject of a years-long series of discussions
+and debates (see the References at the end of this document). It's a TODO item
+here: fill in the details once that's worked out. Meanwhile, it's safe to say
+that having this available: ::
+
+    static inline bool page_dma_pinned(struct page *page)
+
+...is a prerequisite to solving the long-running gup+DMA problem.
+
+Another way of thinking about FOLL_GET, FOLL_PIN, and FOLL_LONGTERM
+===================================================================
+
+Another way of thinking about these flags is as a progression of restrictions:
+FOLL_GET is for struct page manipulation, without affecting the data that the
+struct page refers to. FOLL_PIN is a *replacement* for FOLL_GET, and is for
+short term pins on pages whose data *will* get accessed. As such, FOLL_PIN is
+a "more severe" form of pinning.
+And finally, FOLL_LONGTERM is an even more
+restrictive case that has FOLL_PIN as a prerequisite: this is for pages that
+will be pinned longterm, and whose data will be accessed.
+
+Unit testing
+============
+This file::
+
+ tools/testing/selftests/vm/gup_benchmark.c
+
+has the following new calls to exercise the new pin*() wrapper functions:
+
+* PIN_FAST_BENCHMARK (./gup_benchmark -a)
+* PIN_BENCHMARK (./gup_benchmark -b)
+
+You can monitor how many total dma-pinned pages have been acquired and released
+since the system was booted, via two new /proc/vmstat entries: ::
+
+    /proc/vmstat/nr_foll_pin_requested
+    /proc/vmstat/nr_foll_pin_requested
+
+Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is
+because there is a noticeable performance drop in put_user_page(), when they
+are activated.
+
+References
+==========
+
+* `Some slow progress on get_user_pages() (Apr 2, 2019) `_
+* `DMA and get_user_pages() (LPC: Dec 12, 2018) `_
+* `The trouble with get_user_pages() (Apr 30, 2018) `_
+
+John Hubbard, October, 2019
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e2032ff640eb..f9653e666bf4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1047,16 +1047,14 @@ static inline void put_page(struct page *page)
  * put_user_page() - release a gup-pinned page
  * @page: pointer to page to be released
  *
- * Pages that were pinned via get_user_pages*() must be released via
- * either put_user_page(), or one of the put_user_pages*() routines
- * below. This is so that eventually, pages that are pinned via
- * get_user_pages*() can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special
- * handling.
+ * Pages that were pinned via pin_user_pages*() must be released via either
+ * put_user_page(), or one of the put_user_pages*() routines.
+ * This is so that
+ * eventually such pages can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special handling.
  *
  * put_user_page() and put_page() are not interchangeable, despite this early
  * implementation that makes them look the same. put_user_page() calls must
- * be perfectly matched up with get_user_page() calls.
+ * be perfectly matched up with pin*() calls.
  */
 static inline void put_user_page(struct page *page)
 {
@@ -1514,9 +1512,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long start, unsigned long nr_pages,
 			    unsigned int gup_flags, struct page **pages,
 			    struct vm_area_struct **vmas, int *locked);
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked);
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 			    unsigned int gup_flags, struct page **pages,
 			    struct vm_area_struct **vmas);
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		    unsigned int gup_flags, struct page **pages,
+		    struct vm_area_struct **vmas);
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages, int *locked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
@@ -1524,6 +1529,8 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 
 int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages);
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages);
 
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc);
 int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
@@ -2587,13 +2594,15 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned
 long address,
 #define FOLL_ANON	0x8000	/* don't do file mappings */
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
+#define FOLL_PIN	0x40000	/* pages must be released via put_user_page() */
 
 /*
- * NOTE on FOLL_LONGTERM:
+ * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
+ * other. Here is what they mean, and how to use them:
  *
  * FOLL_LONGTERM indicates that the page will be held for an indefinite time
- * period _often_ under userspace control. This is contrasted with
- * iov_iter_get_pages() where usages which are transient.
+ * period _often_ under userspace control. This is in contrast to
+ * iov_iter_get_pages(), whose usages are transient.
  *
  * FIXME: For pages which are part of a filesystem, mappings are subject to the
  * lifetime enforced by the filesystem and we need guarantees that longterm
@@ -2608,11 +2617,39 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
  * Currently only get_user_pages() and get_user_pages_fast() support this flag
  * and calls to get_user_pages_[un]locked are specifically not allowed.  This
  * is due to an incompatibility with the FS DAX check and
- * FAULT_FLAG_ALLOW_RETRY
+ * FAULT_FLAG_ALLOW_RETRY.
  *
- * In the CMA case: longterm pins in a CMA region would unnecessarily fragment
- * that region.  And so CMA attempts to migrate the page before pinning when
+ * In the CMA case: long term pins in a CMA region would unnecessarily fragment
+ * that region.  And so, CMA attempts to migrate the page before pinning, when
  * FOLL_LONGTERM is specified.
+ *
+ * FOLL_PIN indicates that a special kind of tracking (not just page->_refcount,
+ * but an additional pin counting system) will be invoked. This is intended for
+ * anything that gets a page reference and then touches page data (for example,
+ * Direct IO).
+ * This lets the filesystem know that some non-file-system entity is
+ * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
+ * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
+ * a call to put_user_page().
+ *
+ * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
+ * and separate refcounting mechanisms, however, and that means that each has
+ * its own acquire and release mechanisms:
+ *
+ *     FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
+ *
+ *     FOLL_PIN: pin_user_pages*() to acquire, and put_user_pages() to release.
+ *
+ * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
+ * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
+ * calls applied to them, and that's perfectly OK. This is a constraint on the
+ * callers, not on the pages.)
+ *
+ * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
+ * directly by the caller. That's in order to help avoid mismatches when
+ * releasing pages: get_user_pages*() pages must be released via put_page(),
+ * while pin_user_pages*() pages must be released via put_user_page().
+ *
+ * Please see Documentation/core-api/pin_user_pages.rst for more information.
  */
 
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
diff --git a/mm/gup.c b/mm/gup.c
index a594bc708367..1c200eeabd77 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -194,6 +194,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t *ptep, pte;
 
+	/* FOLL_GET and FOLL_PIN are mutually exclusive.
+	 */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return ERR_PTR(-EINVAL);
 retry:
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);
@@ -811,7 +815,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
 	start = untagged_addr(start);
 
-	VM_BUG_ON(!!pages != !!(gup_flags & FOLL_GET));
+	VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
 
 	/*
 	 * If FOLL_FORCE is set then do not force a full fault as the hinting
@@ -1035,7 +1039,16 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		BUG_ON(*locked != 1);
 	}
 
-	if (pages)
+	/*
+	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
+	 * is to set FOLL_GET if the caller wants pages[] filled in (but has
+	 * carelessly failed to specify FOLL_GET), so keep doing that, but only
+	 * for FOLL_GET, not for the newer FOLL_PIN.
+	 *
+	 * FOLL_PIN always expects pages to be non-null, but no need to assert
+	 * that here, as any failures will be obvious enough.
+	 */
+	if (pages && !(flags & FOLL_PIN))
 		flags |= FOLL_GET;
 
 	pages_done = 0;
@@ -1606,11 +1619,19 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
  * should use get_user_pages because it cannot pass
  * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
  */
+#ifdef CONFIG_MMU
 long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
 	/*
 	 * Parts of FOLL_LONGTERM behavior are incompatible with
 	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
@@ -1636,6 +1657,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
+#else /* CONFIG_MMU */
+long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	return 0;
+}
+#endif /* !CONFIG_MMU */
+
 /*
  * This is the same as get_user_pages_remote(), just with a
  * less-flexible calling convention where we assume that the task
@@ -1647,6 +1678,13 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
 	return __gup_longterm_locked(current, current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
@@ -2389,30 +2427,15 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	return ret;
 }
 
-/**
- * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
- *
- * Attempt to pin user pages in memory without taking mm->mmap_sem.
- * If not successful, it will fall back to taking the lock and
- * calling get_user_pages().
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
+static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+					unsigned int gup_flags,
+					struct page **pages)
 {
 	unsigned long addr, len, end;
 	int nr = 0, ret = 0;
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-				       FOLL_FORCE)))
+				       FOLL_FORCE | FOLL_PIN)))
 		return -EINVAL;
 
 	start = untagged_addr(start) & PAGE_MASK;
@@ -2452,4 +2475,103 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
 	return ret;
 }
+
+/**
+ * get_user_pages_fast() - pin user pages in memory
+ * @start:	starting user address
+ * @nr_pages:	number of pages from start to pin
+ * @gup_flags:	flags modifying pin behaviour
+ * @pages:	array that receives pointers to the pages pinned.
+ *		Should be at least nr_pages long.
+ *
+ * Attempt to pin user pages in memory without taking mm->mmap_sem.
+ * If not successful, it will fall back to taking the lock and
+ * calling get_user_pages().
+ *
+ * Returns number of pages pinned. This may be fewer than the number requested.
+ * If nr_pages is 0 or negative, returns 0. If no pages were pinned, returns
+ * -errno.
+ */
+int get_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
+	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
+EXPORT_SYMBOL_GPL(get_user_pages_fast);
+
+/**
+ * pin_user_pages_fast() - pin user pages in memory without taking locks
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API.
+ * So, this is identical to get_user_pages_fast().
+ *
+ * This is intended for Case 1 (DIO) in
+ * Documentation/core-api/pin_user_pages.rst. It is NOT intended for Case 2
+ * (RDMA: long-term pins).
+ */
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
+EXPORT_SYMBOL_GPL(pin_user_pages_fast);
+
+/**
+ * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API.
+ * So, this is identical to get_user_pages_remote().
+ *
+ * This is intended for Case 1 (DIO) in
+ * Documentation/core-api/pin_user_pages.rst. It is NOT intended for Case 2
+ * (RDMA: long-term pins).
+ */
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags, pages,
+				     vmas, locked);
+}
+EXPORT_SYMBOL(pin_user_pages_remote);
+
+/**
+ * pin_user_pages() - pin user pages in memory for use by other devices
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API.
+ * So, this is identical to get_user_pages().
+ *
+ * This is intended for Case 1 (DIO) in
+ * Documentation/core-api/pin_user_pages.rst. It is NOT intended for Case 2
+ * (RDMA: long-term pins).
+ */
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		    unsigned int gup_flags, struct page **pages,
+		    struct vm_area_struct **vmas)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+}
+EXPORT_SYMBOL(pin_user_pages);
-- 
2.24.1
verification failed" (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="N8y3M7oE" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0E8C82075A Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nvidia.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2977E6E83B; Tue, 7 Jan 2020 22:46:03 +0000 (UTC) Received: from hqnvemgate26.nvidia.com (hqnvemgate26.nvidia.com [216.228.121.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0F5F06E149 for ; Tue, 7 Jan 2020 22:46:02 +0000 (UTC) Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Tue, 07 Jan 2020 14:45:44 -0800 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Tue, 07 Jan 2020 14:46:01 -0800 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Tue, 07 Jan 2020 14:46:01 -0800 Received: from HQMAIL105.nvidia.com (172.20.187.12) by HQMAIL111.nvidia.com (172.20.187.18) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Tue, 7 Jan 2020 22:46:00 +0000 Received: from hqnvemgw03.nvidia.com (10.124.88.68) by HQMAIL105.nvidia.com (172.20.187.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Tue, 7 Jan 2020 22:46:00 +0000 Received: from blueforge.nvidia.com (Not Verified[10.110.48.28]) by hqnvemgw03.nvidia.com with Trustwave SEG (v7, 5, 8, 10121) id ; Tue, 07 Jan 2020 14:46:00 -0800 From: John Hubbard To: Andrew Morton Subject: [PATCH v12 11/22] mm/gup: introduce pin_user_pages*() and FOLL_PIN Date: Tue, 7 Jan 2020 14:45:47 -0800 Message-ID: <20200107224558.2362728-12-jhubbard@nvidia.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200107224558.2362728-1-jhubbard@nvidia.com> References: <20200107224558.2362728-1-jhubbard@nvidia.com> MIME-Version: 
1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1578437144; bh=ILPjWTz1uECxM8jMFV9CrBwQY3DCe/z1cMA8J31RcEU=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Type:Content-Transfer-Encoding; b=N8y3M7oEGlF2Wl2fmm0TU+YowgLxVYo4DvTmDlxpuOsYOjpyz1F/NGihEQMkRvDMS pUJ7V2Sropf3MRr0OCEHWhcCpRpeVudhY3paboRYSFjlCWQedKWdAWNPFTXNQbf0v9 rVHCHVL9BIxj5lsLVXqMftCDh91WGIG19SOtYt0L+R7blB3regKi3MBDHVLYvHg/vO CqUwZolw+sfLd09xQIbB6qDdrq+/xbWhlkXeUvekAhXLOiGiikYJgj8+Y6q4T4Ue7B NXexgnMZIaUQsnLLxupXXAhNLXKrjguawcfLMohQ7aeBX/oUiVJrgp8vpD7nBLPWk9 sHiEfW/E7AVoA== X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Michal Hocko , Jan Kara , kvm@vger.kernel.org, linux-doc@vger.kernel.org, David Airlie , Dave Chinner , dri-devel@lists.freedesktop.org, LKML , linux-mm@kvack.org, Paul Mackerras , linux-kselftest@vger.kernel.org, Ira Weiny , Jonathan Corbet , linux-rdma@vger.kernel.org, Michael Ellerman , Mike Rapoport , Christoph Hellwig , Jason Gunthorpe , Vlastimil Babka , =?UTF-8?q?Bj=C3=B6rn=20T=C3=B6pel?= , linux-media@vger.kernel.org, Shuah Khan , John Hubbard , linux-block@vger.kernel.org, =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Al Viro , "Kirill A . Shutemov" , Dan Williams , Mauro Carvalho Chehab , Magnus Karlsson , Jens Axboe , netdev@vger.kernel.org, Alex Williamson , linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, "David S . 
Miller" , Mike Kravetz Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" SW50cm9kdWNlIHBpbl91c2VyX3BhZ2VzKigpIHZhcmlhdGlvbnMgb2YgZ2V0X3VzZXJfcGFnZXMq KCkgY2FsbHMsCmFuZCBhbHNvIHBpbl9sb25ndGVybV9wYWdlcyooKSB2YXJpYXRpb25zLgoKRm9y IG5vdywgdGhlc2UgYXJlIHBsYWNlaG9sZGVyIGNhbGxzLCB1bnRpbCB0aGUgdmFyaW91cyBjYWxs IHNpdGVzCmFyZSBjb252ZXJ0ZWQgdG8gdXNlIHRoZSBjb3JyZWN0IGdldF91c2VyX3BhZ2VzKigp IG9yCnBpbl91c2VyX3BhZ2VzKigpIEFQSS4KClRoZXNlIHZhcmlhbnRzIHdpbGwgZXZlbnR1YWxs eSBhbGwgc2V0IEZPTExfUElOLCB3aGljaCBpcyBhbHNvCmludHJvZHVjZWQsIGFuZCB0aG9yb3Vn aGx5IGRvY3VtZW50ZWQuCgogICAgcGluX3VzZXJfcGFnZXMoKQogICAgcGluX3VzZXJfcGFnZXNf cmVtb3RlKCkKICAgIHBpbl91c2VyX3BhZ2VzX2Zhc3QoKQoKQWxsIHBhZ2VzIHRoYXQgYXJlIHBp bm5lZCB2aWEgdGhlIGFib3ZlIGNhbGxzLCBtdXN0IGJlIHVucGlubmVkIHZpYQpwdXRfdXNlcl9w YWdlKCkuCgpUaGUgdW5kZXJseWluZyBydWxlcyBhcmU6CgoqIEZPTExfUElOIGlzIGEgZ3VwLWlu dGVybmFsIGZsYWcsIHNvIHRoZSBjYWxsIHNpdGVzIHNob3VsZCBub3QgZGlyZWN0bHkKc2V0IGl0 LiBUaGF0IGJlaGF2aW9yIGlzIGVuZm9yY2VkIHdpdGggYXNzZXJ0aW9ucy4KCiogQ2FsbCBzaXRl cyB0aGF0IHdhbnQgdG8gaW5kaWNhdGUgdGhhdCB0aGV5IGFyZSBnb2luZyB0byBkbyBEaXJlY3RJ TwogICgiRElPIikgb3Igc29tZXRoaW5nIHdpdGggc2ltaWxhciBjaGFyYWN0ZXJpc3RpY3MsIHNo b3VsZCBjYWxsIGEKICBnZXRfdXNlcl9wYWdlcygpLWxpa2Ugd3JhcHBlciBjYWxsIHRoYXQgc2V0 cyBGT0xMX1BJTi4gVGhlc2Ugd3JhcHBlcnMKICB3aWxsOgogICAgICAgICogU3RhcnQgd2l0aCAi cGluX3VzZXJfcGFnZXMiIGluc3RlYWQgb2YgImdldF91c2VyX3BhZ2VzIi4gVGhhdAogICAgICAg ICAgbWFrZXMgaXQgZWFzeSB0byBmaW5kIGFuZCBhdWRpdCB0aGUgY2FsbCBzaXRlcy4KICAgICAg ICAqIFNldCBGT0xMX1BJTgoKKiBGb3IgcGFnZXMgdGhhdCBhcmUgcmVjZWl2ZWQgdmlhIEZPTExf UElOLCB0aG9zZSBwYWdlcyBtdXN0IGJlIHJldHVybmVkCiAgdmlhIHB1dF91c2VyX3BhZ2UoKS4K ClRoYW5rcyB0byBKYW4gS2FyYSBhbmQgVmxhc3RpbWlsIEJhYmthIGZvciBleHBsYWluaW5nIHRo ZSA0IGNhc2VzCmluIHRoaXMgZG9jdW1lbnRhdGlvbi4gKEkndmUgcmV3b3JkZWQgaXQgYW5kIGV4 cGFuZGVkIHVwb24gaXQuKQoKUmV2aWV3ZWQtYnk6IEphbiBLYXJhIDxqYWNrQHN1c2UuY3o+ClJl 
dmlld2VkLWJ5OiBNaWtlIFJhcG9wb3J0IDxycHB0QGxpbnV4LmlibS5jb20+ICAjIERvY3VtZW50 YXRpb24KUmV2aWV3ZWQtYnk6IErDqXLDtG1lIEdsaXNzZSA8amdsaXNzZUByZWRoYXQuY29tPgpD YzogSm9uYXRoYW4gQ29yYmV0IDxjb3JiZXRAbHduLm5ldD4KQ2M6IElyYSBXZWlueSA8aXJhLndl aW55QGludGVsLmNvbT4KU2lnbmVkLW9mZi1ieTogSm9obiBIdWJiYXJkIDxqaHViYmFyZEBudmlk aWEuY29tPgotLS0KIERvY3VtZW50YXRpb24vY29yZS1hcGkvaW5kZXgucnN0ICAgICAgICAgIHwg ICAxICsKIERvY3VtZW50YXRpb24vY29yZS1hcGkvcGluX3VzZXJfcGFnZXMucnN0IHwgMjMyICsr KysrKysrKysrKysrKysrKysrKysKIGluY2x1ZGUvbGludXgvbW0uaCAgICAgICAgICAgICAgICAg ICAgICAgIHwgIDYzICsrKystLQogbW0vZ3VwLmMgICAgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAgfCAxNjQgKysrKysrKysrKysrKy0tCiA0IGZpbGVzIGNoYW5nZWQsIDQyNiBpbnNlcnRp b25zKCspLCAzNCBkZWxldGlvbnMoLSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBEb2N1bWVudGF0aW9u L2NvcmUtYXBpL3Bpbl91c2VyX3BhZ2VzLnJzdAoKZGlmZiAtLWdpdCBhL0RvY3VtZW50YXRpb24v Y29yZS1hcGkvaW5kZXgucnN0IGIvRG9jdW1lbnRhdGlvbi9jb3JlLWFwaS9pbmRleC5yc3QKaW5k ZXggYWIwZWFlMWMxNTNhLi40MTNmN2Q3Yzg2NDIgMTAwNjQ0Ci0tLSBhL0RvY3VtZW50YXRpb24v Y29yZS1hcGkvaW5kZXgucnN0CisrKyBiL0RvY3VtZW50YXRpb24vY29yZS1hcGkvaW5kZXgucnN0 CkBAIC0zMSw2ICszMSw3IEBAIENvcmUgdXRpbGl0aWVzCiAgICBnZW5lcmljLXJhZGl4LXRyZWUK ICAgIG1lbW9yeS1hbGxvY2F0aW9uCiAgICBtbS1hcGkKKyAgIHBpbl91c2VyX3BhZ2VzCiAgICBn ZnBfbWFzay1mcm9tLWZzLWlvCiAgICB0aW1la2VlcGluZwogICAgYm9vdC10aW1lLW1tCmRpZmYg LS1naXQgYS9Eb2N1bWVudGF0aW9uL2NvcmUtYXBpL3Bpbl91c2VyX3BhZ2VzLnJzdCBiL0RvY3Vt ZW50YXRpb24vY29yZS1hcGkvcGluX3VzZXJfcGFnZXMucnN0Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0 CmluZGV4IDAwMDAwMDAwMDAwMC4uNzE4NDk4MzBjZDQ4Ci0tLSAvZGV2L251bGwKKysrIGIvRG9j dW1lbnRhdGlvbi9jb3JlLWFwaS9waW5fdXNlcl9wYWdlcy5yc3QKQEAgLTAsMCArMSwyMzIgQEAK Ky4uIFNQRFgtTGljZW5zZS1JZGVudGlmaWVyOiBHUEwtMi4wCisKKz09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KK3Bpbl91c2VyX3BhZ2VzKCkgYW5k IHJlbGF0ZWQgY2FsbHMKKz09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT0KKworLi4gY29udGVudHM6OiA6bG9jYWw6CisKK092ZXJ2aWV3Cis9PT09PT09 
PQorCitUaGlzIGRvY3VtZW50IGRlc2NyaWJlcyB0aGUgZm9sbG93aW5nIGZ1bmN0aW9uczo6CisK KyBwaW5fdXNlcl9wYWdlcygpCisgcGluX3VzZXJfcGFnZXNfZmFzdCgpCisgcGluX3VzZXJfcGFn ZXNfcmVtb3RlKCkKKworQmFzaWMgZGVzY3JpcHRpb24gb2YgRk9MTF9QSU4KKz09PT09PT09PT09 PT09PT09PT09PT09PT09PT09CisKK0ZPTExfUElOIGFuZCBGT0xMX0xPTkdURVJNIGFyZSBmbGFn cyB0aGF0IGNhbiBiZSBwYXNzZWQgdG8gdGhlIGdldF91c2VyX3BhZ2VzKigpCisoImd1cCIpIGZh bWlseSBvZiBmdW5jdGlvbnMuIEZPTExfUElOIGhhcyBzaWduaWZpY2FudCBpbnRlcmFjdGlvbnMg YW5kCitpbnRlcmRlcGVuZGVuY2llcyB3aXRoIEZPTExfTE9OR1RFUk0sIHNvIGJvdGggYXJlIGNv dmVyZWQgaGVyZS4KKworRk9MTF9QSU4gaXMgaW50ZXJuYWwgdG8gZ3VwLCBtZWFuaW5nIHRoYXQg aXQgc2hvdWxkIG5vdCBhcHBlYXIgYXQgdGhlIGd1cCBjYWxsCitzaXRlcy4gVGhpcyBhbGxvd3Mg dGhlIGFzc29jaWF0ZWQgd3JhcHBlciBmdW5jdGlvbnMgIChwaW5fdXNlcl9wYWdlcyooKSBhbmQK K290aGVycykgdG8gc2V0IHRoZSBjb3JyZWN0IGNvbWJpbmF0aW9uIG9mIHRoZXNlIGZsYWdzLCBh bmQgdG8gY2hlY2sgZm9yIHByb2JsZW1zCithcyB3ZWxsLgorCitGT0xMX0xPTkdURVJNLCBvbiB0 aGUgb3RoZXIgaGFuZCwgKmlzKiBhbGxvd2VkIHRvIGJlIHNldCBhdCB0aGUgZ3VwIGNhbGwgc2l0 ZXMuCitUaGlzIGlzIGluIG9yZGVyIHRvIGF2b2lkIGNyZWF0aW5nIGEgbGFyZ2UgbnVtYmVyIG9m IHdyYXBwZXIgZnVuY3Rpb25zIHRvIGNvdmVyCithbGwgY29tYmluYXRpb25zIG9mIGdldCooKSwg cGluKigpLCBGT0xMX0xPTkdURVJNLCBhbmQgbW9yZS4gQWxzbywgdGhlCitwaW5fdXNlcl9wYWdl cyooKSBBUElzIGFyZSBjbGVhcmx5IGRpc3RpbmN0IGZyb20gdGhlIGdldF91c2VyX3BhZ2VzKigp IEFQSXMsIHNvCit0aGF0J3MgYSBuYXR1cmFsIGRpdmlkaW5nIGxpbmUsIGFuZCBhIGdvb2QgcG9p bnQgdG8gbWFrZSBzZXBhcmF0ZSB3cmFwcGVyIGNhbGxzLgorSW4gb3RoZXIgd29yZHMsIHVzZSBw aW5fdXNlcl9wYWdlcyooKSBmb3IgRE1BLXBpbm5lZCBwYWdlcywgYW5kCitnZXRfdXNlcl9wYWdl cyooKSBmb3Igb3RoZXIgY2FzZXMuIFRoZXJlIGFyZSBmb3VyIGNhc2VzIGRlc2NyaWJlZCBsYXRl ciBvbiBpbgordGhpcyBkb2N1bWVudCwgdG8gZnVydGhlciBjbGFyaWZ5IHRoYXQgY29uY2VwdC4K KworRk9MTF9QSU4gYW5kIEZPTExfR0VUIGFyZSBtdXR1YWxseSBleGNsdXNpdmUgZm9yIGEgZ2l2 ZW4gZ3VwIGNhbGwuIEhvd2V2ZXIsCittdWx0aXBsZSB0aHJlYWRzIGFuZCBjYWxsIHNpdGVzIGFy ZSBmcmVlIHRvIHBpbiB0aGUgc2FtZSBzdHJ1Y3QgcGFnZXMsIHZpYSBib3RoCitGT0xMX1BJTiBh 
bmQgRk9MTF9HRVQuIEl0J3MganVzdCB0aGUgY2FsbCBzaXRlIHRoYXQgbmVlZHMgdG8gY2hvb3Nl IG9uZSBvciB0aGUKK290aGVyLCBub3QgdGhlIHN0cnVjdCBwYWdlKHMpLgorCitUaGUgRk9MTF9Q SU4gaW1wbGVtZW50YXRpb24gaXMgbmVhcmx5IHRoZSBzYW1lIGFzIEZPTExfR0VULCBleGNlcHQg dGhhdCBGT0xMX1BJTgordXNlcyBhIGRpZmZlcmVudCByZWZlcmVuY2UgY291bnRpbmcgdGVjaG5p cXVlLgorCitGT0xMX1BJTiBpcyBhIHByZXJlcXVpc2l0ZSB0byBGT0xMX0xPTkdURVJNLiBBbm90 aGVyIHdheSBvZiBzYXlpbmcgdGhhdCBpcywKK0ZPTExfTE9OR1RFUk0gaXMgYSBzcGVjaWZpYyBj YXNlLCBtb3JlIHJlc3RyaWN0aXZlIGNhc2Ugb2YgRk9MTF9QSU4uCisKK1doaWNoIGZsYWdzIGFy ZSBzZXQgYnkgZWFjaCB3cmFwcGVyCis9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PQorCitGb3IgdGhlc2UgcGluX3VzZXJfcGFnZXMqKCkgZnVuY3Rpb25zLCBGT0xMX1BJTiBpcyBP UidkIGluIHdpdGggd2hhdGV2ZXIgZ3VwCitmbGFncyB0aGUgY2FsbGVyIHByb3ZpZGVzLiBUaGUg Y2FsbGVyIGlzIHJlcXVpcmVkIHRvIHBhc3MgaW4gYSBub24tbnVsbCBzdHJ1Y3QKK3BhZ2VzKiBh cnJheSwgYW5kIHRoZSBmdW5jdGlvbiB0aGVuIHBpbiBwYWdlcyBieSBpbmNyZW1lbnRpbmcgZWFj aCBieSBhIHNwZWNpYWwKK3ZhbHVlLiBGb3Igbm93LCB0aGF0IHZhbHVlIGlzICsxLCBqdXN0IGxp a2UgZ2V0X3VzZXJfcGFnZXMqKCkuOjoKKworIEZ1bmN0aW9uCisgLS0tLS0tLS0KKyBwaW5fdXNl cl9wYWdlcyAgICAgICAgICBGT0xMX1BJTiBpcyBhbHdheXMgc2V0IGludGVybmFsbHkgYnkgdGhp cyBmdW5jdGlvbi4KKyBwaW5fdXNlcl9wYWdlc19mYXN0ICAgICBGT0xMX1BJTiBpcyBhbHdheXMg c2V0IGludGVybmFsbHkgYnkgdGhpcyBmdW5jdGlvbi4KKyBwaW5fdXNlcl9wYWdlc19yZW1vdGUg ICBGT0xMX1BJTiBpcyBhbHdheXMgc2V0IGludGVybmFsbHkgYnkgdGhpcyBmdW5jdGlvbi4KKwor Rm9yIHRoZXNlIGdldF91c2VyX3BhZ2VzKigpIGZ1bmN0aW9ucywgRk9MTF9HRVQgbWlnaHQgbm90 IGV2ZW4gYmUgc3BlY2lmaWVkLgorQmVoYXZpb3IgaXMgYSBsaXR0bGUgbW9yZSBjb21wbGV4IHRo YW4gYWJvdmUuIElmIEZPTExfR0VUIHdhcyAqbm90KiBzcGVjaWZpZWQsCitidXQgdGhlIGNhbGxl ciBwYXNzZWQgaW4gYSBub24tbnVsbCBzdHJ1Y3QgcGFnZXMqIGFycmF5LCB0aGVuIHRoZSBmdW5j dGlvbgorc2V0cyBGT0xMX0dFVCBmb3IgeW91LCBhbmQgcHJvY2VlZHMgdG8gcGluIHBhZ2VzIGJ5 IGluY3JlbWVudGluZyB0aGUgcmVmY291bnQKK29mIGVhY2ggcGFnZSBieSArMS46OgorCisgRnVu Y3Rpb24KKyAtLS0tLS0tLQorIGdldF91c2VyX3BhZ2VzICAgICAgICAgICBGT0xMX0dFVCBpcyBz 
b21ldGltZXMgc2V0IGludGVybmFsbHkgYnkgdGhpcyBmdW5jdGlvbi4KKyBnZXRfdXNlcl9wYWdl c19mYXN0ICAgICAgRk9MTF9HRVQgaXMgc29tZXRpbWVzIHNldCBpbnRlcm5hbGx5IGJ5IHRoaXMg ZnVuY3Rpb24uCisgZ2V0X3VzZXJfcGFnZXNfcmVtb3RlICAgIEZPTExfR0VUIGlzIHNvbWV0aW1l cyBzZXQgaW50ZXJuYWxseSBieSB0aGlzIGZ1bmN0aW9uLgorCitUcmFja2luZyBkbWEtcGlubmVk IHBhZ2VzCis9PT09PT09PT09PT09PT09PT09PT09PT09CisKK1NvbWUgb2YgdGhlIGtleSBkZXNp Z24gY29uc3RyYWludHMsIGFuZCBzb2x1dGlvbnMsIGZvciB0cmFja2luZyBkbWEtcGlubmVkCitw YWdlczoKKworKiBBbiBhY3R1YWwgcmVmZXJlbmNlIGNvdW50LCBwZXIgc3RydWN0IHBhZ2UsIGlz IHJlcXVpcmVkLiBUaGlzIGlzIGJlY2F1c2UKKyAgbXVsdGlwbGUgcHJvY2Vzc2VzIG1heSBwaW4g YW5kIHVucGluIGEgcGFnZS4KKworKiBGYWxzZSBwb3NpdGl2ZXMgKHJlcG9ydGluZyB0aGF0IGEg cGFnZSBpcyBkbWEtcGlubmVkLCB3aGVuIGluIGZhY3QgaXQgaXMgbm90KQorICBhcmUgYWNjZXB0 YWJsZSwgYnV0IGZhbHNlIG5lZ2F0aXZlcyBhcmUgbm90LgorCisqIHN0cnVjdCBwYWdlIG1heSBu b3QgYmUgaW5jcmVhc2VkIGluIHNpemUgZm9yIHRoaXMsIGFuZCBhbGwgZmllbGRzIGFyZSBhbHJl YWR5CisgIHVzZWQuCisKKyogR2l2ZW4gdGhlIGFib3ZlLCB3ZSBjYW4gb3ZlcmxvYWQgdGhlIHBh Z2UtPl9yZWZjb3VudCBmaWVsZCBieSB1c2luZywgc29ydCBvZiwKKyAgdGhlIHVwcGVyIGJpdHMg aW4gdGhhdCBmaWVsZCBmb3IgYSBkbWEtcGlubmVkIGNvdW50LiAiU29ydCBvZiIsIG1lYW5zIHRo YXQsCisgIHJhdGhlciB0aGFuIGRpdmlkaW5nIHBhZ2UtPl9yZWZjb3VudCBpbnRvIGJpdCBmaWVs ZHMsIHdlIHNpbXBsZSBhZGQgYSBtZWRpdW0tCisgIGxhcmdlIHZhbHVlIChHVVBfUElOX0NPVU5U SU5HX0JJQVMsIGluaXRpYWxseSBjaG9zZW4gdG8gYmUgMTAyNDogMTAgYml0cykgdG8KKyAgcGFn ZS0+X3JlZmNvdW50LiBUaGlzIHByb3ZpZGVzIGZ1enp5IGJlaGF2aW9yOiBpZiBhIHBhZ2UgaGFz IGdldF9wYWdlKCkgY2FsbGVkCisgIG9uIGl0IDEwMjQgdGltZXMsIHRoZW4gaXQgd2lsbCBhcHBl YXIgdG8gaGF2ZSBhIHNpbmdsZSBkbWEtcGlubmVkIGNvdW50LgorICBBbmQgYWdhaW4sIHRoYXQn cyBhY2NlcHRhYmxlLgorCitUaGlzIGFsc28gbGVhZHMgdG8gbGltaXRhdGlvbnM6IHRoZXJlIGFy ZSBvbmx5IDMxLTEwPT0yMSBiaXRzIGF2YWlsYWJsZSBmb3IgYQorY291bnRlciB0aGF0IGluY3Jl bWVudHMgMTAgYml0cyBhdCBhIHRpbWUuCisKK1RPRE86IGZvciAxR0IgYW5kIGxhcmdlciBodWdl IHBhZ2VzLCB0aGlzIGlzIGN1dHRpbmcgaXQgY2xvc2UuIFRoYXQncyBiZWNhdXNlCit3aGVuIHBp 
bl91c2VyX3BhZ2VzKCkgZm9sbG93cyBzdWNoIHBhZ2VzLCBpdCBpbmNyZW1lbnRzIHRoZSBoZWFk IHBhZ2UgYnkgIjEiCisod2hlcmUgIjEiIHVzZWQgdG8gbWVhbiAiKzEiIGZvciBnZXRfdXNlcl9w YWdlcygpLCBidXQgbm93IG1lYW5zICIrMTAyNCIgZm9yCitwaW5fdXNlcl9wYWdlcygpKSBmb3Ig ZWFjaCB0YWlsIHBhZ2UuIFNvIGlmIHlvdSBoYXZlIGEgMUdCIGh1Z2UgcGFnZToKKworKiBUaGVy ZSBhcmUgMjU2SyAoMTggYml0cykgd29ydGggb2YgNCBLQiB0YWlsIHBhZ2VzLgorKiBUaGVyZSBh cmUgMjEgYml0cyBhdmFpbGFibGUgdG8gY291bnQgdXAgdmlhIEdVUF9QSU5fQ09VTlRJTkdfQklB UyAodGhhdCBpcywKKyAgMTAgYml0cyBhdCBhIHRpbWUpCisqIFRoZXJlIGFyZSAyMSAtIDE4ID09 IDMgYml0cyBhdmFpbGFibGUgdG8gY291bnQuIEV4Y2VwdCB0aGF0IHRoZXJlIGFyZW4ndCwKKyAg YmVjYXVzZSB5b3UgbmVlZCB0byBhbGxvdyBmb3IgYSBmZXcgbm9ybWFsIGdldF9wYWdlKCkgY2Fs bHMgb24gdGhlIGhlYWQgcGFnZSwKKyAgYXMgd2VsbC4gRm9ydHVuYXRlbHksIHRoZSBhcHByb2Fj aCBvZiB1c2luZyBhZGRpdGlvbiwgcmF0aGVyIHRoYW4gImhhcmQiCisgIGJpdGZpZWxkcywgd2l0 aGluIHBhZ2UtPl9yZWZjb3VudCwgYWxsb3dzIGZvciBzaGFyaW5nIHRoZXNlIGJpdHMgZ3JhY2Vm dWxseS4KKyAgQnV0IHdlJ3JlIHN0aWxsIGxvb2tpbmcgYXQgYWJvdXQgOCByZWZlcmVuY2VzLgor CitUaGlzLCBob3dldmVyLCBpcyBhIG1pc3NpbmcgZmVhdHVyZSBtb3JlIHRoYW4gYW55dGhpbmcg ZWxzZSwgYmVjYXVzZSBpdCdzIGVhc2lseQorc29sdmVkIGJ5IGFkZHJlc3NpbmcgYW4gb2J2aW91 cyBpbmVmZmljaWVuY3kgaW4gdGhlIG9yaWdpbmFsIGdldF91c2VyX3BhZ2VzKCkKK2FwcHJvYWNo IG9mIHJldHJpZXZpbmcgcGFnZXM6IHN0b3AgdHJlYXRpbmcgYWxsIHRoZSBwYWdlcyBhcyBpZiB0 aGV5IHdlcmUKK1BBR0VfU0laRS4gUmV0cmlldmUgaHVnZSBwYWdlcyBhcyBodWdlIHBhZ2VzLiBU aGUgY2FsbGVycyBuZWVkIHRvIGJlIGF3YXJlIG9mCit0aGlzLCBzbyBzb21lIHdvcmsgaXMgcmVx dWlyZWQuIE9uY2UgdGhhdCdzIGluIHBsYWNlLCB0aGlzIGxpbWl0YXRpb24gbW9zdGx5CitkaXNh cHBlYXJzIGZyb20gdmlldywgYmVjYXVzZSB0aGVyZSB3aWxsIGJlIGFtcGxlIHJlZmNvdW50aW5n IHJhbmdlIGF2YWlsYWJsZS4KKworKiBDYWxsZXJzIG11c3Qgc3BlY2lmaWNhbGx5IHJlcXVlc3Qg ImRtYS1waW5uZWQgdHJhY2tpbmcgb2YgcGFnZXMiLiBJbiBvdGhlcgorICB3b3JkcywganVzdCBj YWxsaW5nIGdldF91c2VyX3BhZ2VzKCkgd2lsbCBub3Qgc3VmZmljZTsgYSBuZXcgc2V0IG9mIGZ1 bmN0aW9ucywKKyAgcGluX3VzZXJfcGFnZSgpIGFuZCByZWxhdGVkLCBtdXN0IGJlIHVzZWQuCisK 
K0ZPTExfUElOLCBGT0xMX0dFVCwgRk9MTF9MT05HVEVSTTogd2hlbiB0byB1c2Ugd2hpY2ggZmxh Z3MKKz09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT0KKworVGhhbmtzIHRvIEphbiBLYXJhLCBWbGFzdGltaWwgQmFia2EgYW5kIHNldmVyYWwg b3RoZXIgLW1tIHBlb3BsZSwgZm9yIGRlc2NyaWJpbmcKK3RoZXNlIGNhdGVnb3JpZXM6CisKK0NB U0UgMTogRGlyZWN0IElPIChESU8pCistLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQorVGhlcmUgYXJl IEdVUCByZWZlcmVuY2VzIHRvIHBhZ2VzIHRoYXQgYXJlIHNlcnZpbmcKK2FzIERJTyBidWZmZXJz LiBUaGVzZSBidWZmZXJzIGFyZSBuZWVkZWQgZm9yIGEgcmVsYXRpdmVseSBzaG9ydCB0aW1lIChz byB0aGV5CithcmUgbm90ICJsb25nIHRlcm0iKS4gTm8gc3BlY2lhbCBzeW5jaHJvbml6YXRpb24g d2l0aCBwYWdlX21rY2xlYW4oKSBvcgorbXVubWFwKCkgaXMgcHJvdmlkZWQuIFRoZXJlZm9yZSwg ZmxhZ3MgdG8gc2V0IGF0IHRoZSBjYWxsIHNpdGUgYXJlOiA6OgorCisgICAgRk9MTF9QSU4KKwor Li4uYnV0IHJhdGhlciB0aGFuIHNldHRpbmcgRk9MTF9QSU4gZGlyZWN0bHksIGNhbGwgc2l0ZXMg c2hvdWxkIHVzZSBvbmUgb2YKK3RoZSBwaW5fdXNlcl9wYWdlcyooKSByb3V0aW5lcyB0aGF0IHNl dCBGT0xMX1BJTi4KKworQ0FTRSAyOiBSRE1BCistLS0tLS0tLS0tLS0KK1RoZXJlIGFyZSBHVVAg cmVmZXJlbmNlcyB0byBwYWdlcyB0aGF0IGFyZSBzZXJ2aW5nIGFzIERNQQorYnVmZmVycy4gVGhl c2UgYnVmZmVycyBhcmUgbmVlZGVkIGZvciBhIGxvbmcgdGltZSAoImxvbmcgdGVybSIpLiBObyBz cGVjaWFsCitzeW5jaHJvbml6YXRpb24gd2l0aCBwYWdlX21rY2xlYW4oKSBvciBtdW5tYXAoKSBp cyBwcm92aWRlZC4gVGhlcmVmb3JlLCBmbGFncwordG8gc2V0IGF0IHRoZSBjYWxsIHNpdGUgYXJl OiA6OgorCisgICAgRk9MTF9QSU4gfCBGT0xMX0xPTkdURVJNCisKK05PVEU6IFNvbWUgcGFnZXMs IHN1Y2ggYXMgREFYIHBhZ2VzLCBjYW5ub3QgYmUgcGlubmVkIHdpdGggbG9uZ3Rlcm0gcGlucy4g VGhhdCdzCitiZWNhdXNlIERBWCBwYWdlcyBkbyBub3QgaGF2ZSBhIHNlcGFyYXRlIHBhZ2UgY2Fj aGUsIGFuZCBzbyAicGlubmluZyIgaW1wbGllcworbG9ja2luZyBkb3duIGZpbGUgc3lzdGVtIGJs b2Nrcywgd2hpY2ggaXMgbm90ICh5ZXQpIHN1cHBvcnRlZCBpbiB0aGF0IHdheS4KKworQ0FTRSAz OiBIYXJkd2FyZSB3aXRoIHBhZ2UgZmF1bHRpbmcgc3VwcG9ydAorLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQorSGVyZSwgYSB3ZWxsLXdyaXR0ZW4gZHJpdmVyIGRv ZXNuJ3Qgbm9ybWFsbHkgbmVlZCB0byBwaW4gcGFnZXMgYXQgYWxsLiBIb3dldmVyLAoraWYgdGhl 
IGRyaXZlciBkb2VzIGNob29zZSB0byBkbyBzbywgaXQgY2FuIHJlZ2lzdGVyIE1NVSBub3RpZmll cnMgZm9yIHRoZSByYW5nZSwKK2FuZCB3aWxsIGJlIGNhbGxlZCBiYWNrIHVwb24gaW52YWxpZGF0 aW9uLiBFaXRoZXIgd2F5IChhdm9pZGluZyBwYWdlIHBpbm5pbmcsIG9yCit1c2luZyBNTVUgbm90 aWZpZXJzIHRvIHVucGluIHVwb24gcmVxdWVzdCksIHRoZXJlIGlzIHByb3BlciBzeW5jaHJvbml6 YXRpb24gd2l0aAorYm90aCBmaWxlc3lzdGVtIGFuZCBtbSAocGFnZV9ta2NsZWFuKCksIG11bm1h cCgpLCBldGMpLgorCitUaGVyZWZvcmUsIG5laXRoZXIgZmxhZyBuZWVkcyB0byBiZSBzZXQuCisK K0luIHRoaXMgY2FzZSwgaWRlYWxseSwgbmVpdGhlciBnZXRfdXNlcl9wYWdlcygpIG5vciBwaW5f dXNlcl9wYWdlcygpIHNob3VsZCBiZQorY2FsbGVkLiBJbnN0ZWFkLCB0aGUgc29mdHdhcmUgc2hv dWxkIGJlIHdyaXR0ZW4gc28gdGhhdCBpdCBkb2VzIG5vdCBwaW4gcGFnZXMuCitUaGlzIGFsbG93 cyBtbSBhbmQgZmlsZXN5c3RlbXMgdG8gb3BlcmF0ZSBtb3JlIGVmZmljaWVudGx5IGFuZCByZWxp YWJseS4KKworQ0FTRSA0OiBQaW5uaW5nIGZvciBzdHJ1Y3QgcGFnZSBtYW5pcHVsYXRpb24gb25s eQorLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQorSGVy ZSwgbm9ybWFsIEdVUCBjYWxscyBhcmUgc3VmZmljaWVudCwgc28gbmVpdGhlciBmbGFnIG5lZWRz IHRvIGJlIHNldC4KKworcGFnZV9kbWFfcGlubmVkKCk6IHRoZSB3aG9sZSBwb2ludCBvZiBwaW5u aW5nCis9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KKworVGhl IHdob2xlIHBvaW50IG9mIG1hcmtpbmcgcGFnZXMgYXMgIkRNQS1waW5uZWQiIG9yICJndXAtcGlu bmVkIiBpcyB0byBiZSBhYmxlCit0byBxdWVyeSwgImlzIHRoaXMgcGFnZSBETUEtcGlubmVkPyIg VGhhdCBhbGxvd3MgY29kZSBzdWNoIGFzIHBhZ2VfbWtjbGVhbigpCisoYW5kIGZpbGUgc3lzdGVt IHdyaXRlYmFjayBjb2RlIGluIGdlbmVyYWwpIHRvIG1ha2UgaW5mb3JtZWQgZGVjaXNpb25zIGFi b3V0Cit3aGF0IHRvIGRvIHdoZW4gYSBwYWdlIGNhbm5vdCBiZSB1bm1hcHBlZCBkdWUgdG8gc3Vj aCBwaW5zLgorCitXaGF0IHRvIGRvIGluIHRob3NlIGNhc2VzIGlzIHRoZSBzdWJqZWN0IG9mIGEg eWVhcnMtbG9uZyBzZXJpZXMgb2YgZGlzY3Vzc2lvbnMKK2FuZCBkZWJhdGVzIChzZWUgdGhlIFJl ZmVyZW5jZXMgYXQgdGhlIGVuZCBvZiB0aGlzIGRvY3VtZW50KS4gSXQncyBhIFRPRE8gaXRlbQor aGVyZTogZmlsbCBpbiB0aGUgZGV0YWlscyBvbmNlIHRoYXQncyB3b3JrZWQgb3V0LiBNZWFud2hp bGUsIGl0J3Mgc2FmZSB0byBzYXkKK3RoYXQgaGF2aW5nIHRoaXMgYXZhaWxhYmxlOiA6OgorCisg 
ICAgICAgIHN0YXRpYyBpbmxpbmUgYm9vbCBwYWdlX2RtYV9waW5uZWQoc3RydWN0IHBhZ2UgKnBh Z2UpCisKKy4uLmlzIGEgcHJlcmVxdWlzaXRlIHRvIHNvbHZpbmcgdGhlIGxvbmctcnVubmluZyBn dXArRE1BIHByb2JsZW0uCisKK0Fub3RoZXIgd2F5IG9mIHRoaW5raW5nIGFib3V0IEZPTExfR0VU LCBGT0xMX1BJTiwgYW5kIEZPTExfTE9OR1RFUk0KKz09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KKworQW5vdGhlciB3YXkg b2YgdGhpbmtpbmcgYWJvdXQgdGhlc2UgZmxhZ3MgaXMgYXMgYSBwcm9ncmVzc2lvbiBvZiByZXN0 cmljdGlvbnM6CitGT0xMX0dFVCBpcyBmb3Igc3RydWN0IHBhZ2UgbWFuaXB1bGF0aW9uLCB3aXRo b3V0IGFmZmVjdGluZyB0aGUgZGF0YSB0aGF0IHRoZQorc3RydWN0IHBhZ2UgcmVmZXJzIHRvLiBG T0xMX1BJTiBpcyBhICpyZXBsYWNlbWVudCogZm9yIEZPTExfR0VULCBhbmQgaXMgZm9yCitzaG9y dCB0ZXJtIHBpbnMgb24gcGFnZXMgd2hvc2UgZGF0YSAqd2lsbCogZ2V0IGFjY2Vzc2VkLiBBcyBz dWNoLCBGT0xMX1BJTiBpcworYSAibW9yZSBzZXZlcmUiIGZvcm0gb2YgcGlubmluZy4gQW5kIGZp bmFsbHksIEZPTExfTE9OR1RFUk0gaXMgYW4gZXZlbiBtb3JlCityZXN0cmljdGl2ZSBjYXNlIHRo YXQgaGFzIEZPTExfUElOIGFzIGEgcHJlcmVxdWlzaXRlOiB0aGlzIGlzIGZvciBwYWdlcyB0aGF0 Cit3aWxsIGJlIHBpbm5lZCBsb25ndGVybSwgYW5kIHdob3NlIGRhdGEgd2lsbCBiZSBhY2Nlc3Nl ZC4KKworVW5pdCB0ZXN0aW5nCis9PT09PT09PT09PT0KK1RoaXMgZmlsZTo6CisKKyB0b29scy90 ZXN0aW5nL3NlbGZ0ZXN0cy92bS9ndXBfYmVuY2htYXJrLmMKKworaGFzIHRoZSBmb2xsb3dpbmcg bmV3IGNhbGxzIHRvIGV4ZXJjaXNlIHRoZSBuZXcgcGluKigpIHdyYXBwZXIgZnVuY3Rpb25zOgor CisqIFBJTl9GQVNUX0JFTkNITUFSSyAoLi9ndXBfYmVuY2htYXJrIC1hKQorKiBQSU5fQkVOQ0hN QVJLICguL2d1cF9iZW5jaG1hcmsgLWIpCisKK1lvdSBjYW4gbW9uaXRvciBob3cgbWFueSB0b3Rh bCBkbWEtcGlubmVkIHBhZ2VzIGhhdmUgYmVlbiBhY3F1aXJlZCBhbmQgcmVsZWFzZWQKK3NpbmNl IHRoZSBzeXN0ZW0gd2FzIGJvb3RlZCwgdmlhIHR3byBuZXcgL3Byb2Mvdm1zdGF0IGVudHJpZXM6 IDo6CisKKyAgICAvcHJvYy92bXN0YXQvbnJfZm9sbF9waW5fcmVxdWVzdGVkCisgICAgL3Byb2Mv dm1zdGF0L25yX2ZvbGxfcGluX3JlcXVlc3RlZAorCitUaG9zZSBhcmUgYm90aCBnb2luZyB0byBz aG93IHplcm8sIHVubGVzcyBDT05GSUdfREVCVUdfVk0gaXMgc2V0LiBUaGlzIGlzCitiZWNhdXNl IHRoZXJlIGlzIGEgbm90aWNlYWJsZSBwZXJmb3JtYW5jZSBkcm9wIGluIHB1dF91c2VyX3BhZ2Uo 
KSwgd2hlbiB0aGV5CithcmUgYWN0aXZhdGVkLgorCitSZWZlcmVuY2VzCis9PT09PT09PT09CisK KyogYFNvbWUgc2xvdyBwcm9ncmVzcyBvbiBnZXRfdXNlcl9wYWdlcygpIChBcHIgMiwgMjAxOSkg PGh0dHBzOi8vbHduLm5ldC9BcnRpY2xlcy83ODQ1NzQvPmBfCisqIGBETUEgYW5kIGdldF91c2Vy X3BhZ2VzKCkgKExQQzogRGVjIDEyLCAyMDE4KSA8aHR0cHM6Ly9sd24ubmV0L0FydGljbGVzLzc3 NDQxMS8+YF8KKyogYFRoZSB0cm91YmxlIHdpdGggZ2V0X3VzZXJfcGFnZXMoKSAoQXByIDMwLCAy MDE4KSA8aHR0cHM6Ly9sd24ubmV0L0FydGljbGVzLzc1MzAyNy8+YF8KKworSm9obiBIdWJiYXJk LCBPY3RvYmVyLCAyMDE5CmRpZmYgLS1naXQgYS9pbmNsdWRlL2xpbnV4L21tLmggYi9pbmNsdWRl L2xpbnV4L21tLmgKaW5kZXggZTIwMzJmZjY0MGViLi5mOTY1M2U2NjZiZjQgMTAwNjQ0Ci0tLSBh L2luY2x1ZGUvbGludXgvbW0uaAorKysgYi9pbmNsdWRlL2xpbnV4L21tLmgKQEAgLTEwNDcsMTYg KzEwNDcsMTQgQEAgc3RhdGljIGlubGluZSB2b2lkIHB1dF9wYWdlKHN0cnVjdCBwYWdlICpwYWdl KQogICogcHV0X3VzZXJfcGFnZSgpIC0gcmVsZWFzZSBhIGd1cC1waW5uZWQgcGFnZQogICogQHBh Z2U6ICAgICAgICAgICAgcG9pbnRlciB0byBwYWdlIHRvIGJlIHJlbGVhc2VkCiAgKgotICogUGFn ZXMgdGhhdCB3ZXJlIHBpbm5lZCB2aWEgZ2V0X3VzZXJfcGFnZXMqKCkgbXVzdCBiZSByZWxlYXNl ZCB2aWEKLSAqIGVpdGhlciBwdXRfdXNlcl9wYWdlKCksIG9yIG9uZSBvZiB0aGUgcHV0X3VzZXJf cGFnZXMqKCkgcm91dGluZXMKLSAqIGJlbG93LiBUaGlzIGlzIHNvIHRoYXQgZXZlbnR1YWxseSwg cGFnZXMgdGhhdCBhcmUgcGlubmVkIHZpYQotICogZ2V0X3VzZXJfcGFnZXMqKCkgY2FuIGJlIHNl cGFyYXRlbHkgdHJhY2tlZCBhbmQgdW5pcXVlbHkgaGFuZGxlZC4gSW4KLSAqIHBhcnRpY3VsYXIs IGludGVyYWN0aW9ucyB3aXRoIFJETUEgYW5kIGZpbGVzeXN0ZW1zIG5lZWQgc3BlY2lhbAotICog aGFuZGxpbmcuCisgKiBQYWdlcyB0aGF0IHdlcmUgcGlubmVkIHZpYSBwaW5fdXNlcl9wYWdlcyoo KSBtdXN0IGJlIHJlbGVhc2VkIHZpYSBlaXRoZXIKKyAqIHB1dF91c2VyX3BhZ2UoKSwgb3Igb25l IG9mIHRoZSBwdXRfdXNlcl9wYWdlcyooKSByb3V0aW5lcy4gVGhpcyBpcyBzbyB0aGF0CisgKiBl dmVudHVhbGx5IHN1Y2ggcGFnZXMgY2FuIGJlIHNlcGFyYXRlbHkgdHJhY2tlZCBhbmQgdW5pcXVl bHkgaGFuZGxlZC4gSW4KKyAqIHBhcnRpY3VsYXIsIGludGVyYWN0aW9ucyB3aXRoIFJETUEgYW5k IGZpbGVzeXN0ZW1zIG5lZWQgc3BlY2lhbCBoYW5kbGluZy4KICAqCiAgKiBwdXRfdXNlcl9wYWdl KCkgYW5kIHB1dF9wYWdlKCkgYXJlIG5vdCBpbnRlcmNoYW5nZWFibGUsIGRlc3BpdGUgdGhpcyBl 
YXJseQogICogaW1wbGVtZW50YXRpb24gdGhhdCBtYWtlcyB0aGVtIGxvb2sgdGhlIHNhbWUuIHB1 dF91c2VyX3BhZ2UoKSBjYWxscyBtdXN0Ci0gKiBiZSBwZXJmZWN0bHkgbWF0Y2hlZCB1cCB3aXRo IGdldF91c2VyX3BhZ2UoKSBjYWxscy4KKyAqIGJlIHBlcmZlY3RseSBtYXRjaGVkIHVwIHdpdGgg cGluKigpIGNhbGxzLgogICovCiBzdGF0aWMgaW5saW5lIHZvaWQgcHV0X3VzZXJfcGFnZShzdHJ1 Y3QgcGFnZSAqcGFnZSkKIHsKQEAgLTE1MTQsOSArMTUxMiwxNiBAQCBsb25nIGdldF91c2VyX3Bh Z2VzX3JlbW90ZShzdHJ1Y3QgdGFza19zdHJ1Y3QgKnRzaywgc3RydWN0IG1tX3N0cnVjdCAqbW0s CiAJCQkgICAgdW5zaWduZWQgbG9uZyBzdGFydCwgdW5zaWduZWQgbG9uZyBucl9wYWdlcywKIAkJ CSAgICB1bnNpZ25lZCBpbnQgZ3VwX2ZsYWdzLCBzdHJ1Y3QgcGFnZSAqKnBhZ2VzLAogCQkJICAg IHN0cnVjdCB2bV9hcmVhX3N0cnVjdCAqKnZtYXMsIGludCAqbG9ja2VkKTsKK2xvbmcgcGluX3Vz ZXJfcGFnZXNfcmVtb3RlKHN0cnVjdCB0YXNrX3N0cnVjdCAqdHNrLCBzdHJ1Y3QgbW1fc3RydWN0 ICptbSwKKwkJCSAgIHVuc2lnbmVkIGxvbmcgc3RhcnQsIHVuc2lnbmVkIGxvbmcgbnJfcGFnZXMs CisJCQkgICB1bnNpZ25lZCBpbnQgZ3VwX2ZsYWdzLCBzdHJ1Y3QgcGFnZSAqKnBhZ2VzLAorCQkJ ICAgc3RydWN0IHZtX2FyZWFfc3RydWN0ICoqdm1hcywgaW50ICpsb2NrZWQpOwogbG9uZyBnZXRf dXNlcl9wYWdlcyh1bnNpZ25lZCBsb25nIHN0YXJ0LCB1bnNpZ25lZCBsb25nIG5yX3BhZ2VzLAog CQkJICAgIHVuc2lnbmVkIGludCBndXBfZmxhZ3MsIHN0cnVjdCBwYWdlICoqcGFnZXMsCiAJCQkg ICAgc3RydWN0IHZtX2FyZWFfc3RydWN0ICoqdm1hcyk7Citsb25nIHBpbl91c2VyX3BhZ2VzKHVu c2lnbmVkIGxvbmcgc3RhcnQsIHVuc2lnbmVkIGxvbmcgbnJfcGFnZXMsCisJCSAgICB1bnNpZ25l ZCBpbnQgZ3VwX2ZsYWdzLCBzdHJ1Y3QgcGFnZSAqKnBhZ2VzLAorCQkgICAgc3RydWN0IHZtX2Fy ZWFfc3RydWN0ICoqdm1hcyk7CiBsb25nIGdldF91c2VyX3BhZ2VzX2xvY2tlZCh1bnNpZ25lZCBs b25nIHN0YXJ0LCB1bnNpZ25lZCBsb25nIG5yX3BhZ2VzLAogCQkgICAgdW5zaWduZWQgaW50IGd1 cF9mbGFncywgc3RydWN0IHBhZ2UgKipwYWdlcywgaW50ICpsb2NrZWQpOwogbG9uZyBnZXRfdXNl cl9wYWdlc191bmxvY2tlZCh1bnNpZ25lZCBsb25nIHN0YXJ0LCB1bnNpZ25lZCBsb25nIG5yX3Bh Z2VzLApAQCAtMTUyNCw2ICsxNTI5LDggQEAgbG9uZyBnZXRfdXNlcl9wYWdlc191bmxvY2tlZCh1 bnNpZ25lZCBsb25nIHN0YXJ0LCB1bnNpZ25lZCBsb25nIG5yX3BhZ2VzLAogCiBpbnQgZ2V0X3Vz ZXJfcGFnZXNfZmFzdCh1bnNpZ25lZCBsb25nIHN0YXJ0LCBpbnQgbnJfcGFnZXMsCiAJCQl1bnNp 
Z25lZCBpbnQgZ3VwX2ZsYWdzLCBzdHJ1Y3QgcGFnZSAqKnBhZ2VzKTsKK2ludCBwaW5fdXNlcl9w YWdlc19mYXN0KHVuc2lnbmVkIGxvbmcgc3RhcnQsIGludCBucl9wYWdlcywKKwkJCXVuc2lnbmVk IGludCBndXBfZmxhZ3MsIHN0cnVjdCBwYWdlICoqcGFnZXMpOwogCiBpbnQgYWNjb3VudF9sb2Nr ZWRfdm0oc3RydWN0IG1tX3N0cnVjdCAqbW0sIHVuc2lnbmVkIGxvbmcgcGFnZXMsIGJvb2wgaW5j KTsKIGludCBfX2FjY291bnRfbG9ja2VkX3ZtKHN0cnVjdCBtbV9zdHJ1Y3QgKm1tLCB1bnNpZ25l ZCBsb25nIHBhZ2VzLCBib29sIGluYywKQEAgLTI1ODcsMTMgKzI1OTQsMTUgQEAgc3RydWN0IHBh Z2UgKmZvbGxvd19wYWdlKHN0cnVjdCB2bV9hcmVhX3N0cnVjdCAqdm1hLCB1bnNpZ25lZCBsb25n IGFkZHJlc3MsCiAjZGVmaW5lIEZPTExfQU5PTgkweDgwMDAJLyogZG9uJ3QgZG8gZmlsZSBtYXBw aW5ncyAqLwogI2RlZmluZSBGT0xMX0xPTkdURVJNCTB4MTAwMDAJLyogbWFwcGluZyBsaWZldGlt ZSBpcyBpbmRlZmluaXRlOiBzZWUgYmVsb3cgKi8KICNkZWZpbmUgRk9MTF9TUExJVF9QTUQJMHgy MDAwMAkvKiBzcGxpdCBodWdlIHBtZCBiZWZvcmUgcmV0dXJuaW5nICovCisjZGVmaW5lIEZPTExf UElOCTB4NDAwMDAJLyogcGFnZXMgbXVzdCBiZSByZWxlYXNlZCB2aWEgcHV0X3VzZXJfcGFnZSgp ICovCiAKIC8qCi0gKiBOT1RFIG9uIEZPTExfTE9OR1RFUk06CisgKiBGT0xMX1BJTiBhbmQgRk9M TF9MT05HVEVSTSBtYXkgYmUgdXNlZCBpbiB2YXJpb3VzIGNvbWJpbmF0aW9ucyB3aXRoIGVhY2gK KyAqIG90aGVyLiBIZXJlIGlzIHdoYXQgdGhleSBtZWFuLCBhbmQgaG93IHRvIHVzZSB0aGVtOgog ICoKICAqIEZPTExfTE9OR1RFUk0gaW5kaWNhdGVzIHRoYXQgdGhlIHBhZ2Ugd2lsbCBiZSBoZWxk IGZvciBhbiBpbmRlZmluaXRlIHRpbWUKLSAqIHBlcmlvZCBfb2Z0ZW5fIHVuZGVyIHVzZXJzcGFj ZSBjb250cm9sLiAgVGhpcyBpcyBjb250cmFzdGVkIHdpdGgKLSAqIGlvdl9pdGVyX2dldF9wYWdl cygpIHdoZXJlIHVzYWdlcyB3aGljaCBhcmUgdHJhbnNpZW50LgorICogcGVyaW9kIF9vZnRlbl8g dW5kZXIgdXNlcnNwYWNlIGNvbnRyb2wuICBUaGlzIGlzIGluIGNvbnRyYXN0IHRvCisgKiBpb3Zf aXRlcl9nZXRfcGFnZXMoKSwgd2hvc2UgdXNhZ2VzIGFyZSB0cmFuc2llbnQuCiAgKgogICogRklY TUU6IEZvciBwYWdlcyB3aGljaCBhcmUgcGFydCBvZiBhIGZpbGVzeXN0ZW0sIG1hcHBpbmdzIGFy ZSBzdWJqZWN0IHRvIHRoZQogICogbGlmZXRpbWUgZW5mb3JjZWQgYnkgdGhlIGZpbGVzeXN0ZW0g YW5kIHdlIG5lZWQgZ3VhcmFudGVlcyB0aGF0IGxvbmd0ZXJtCkBAIC0yNjA4LDExICsyNjE3LDM5 IEBAIHN0cnVjdCBwYWdlICpmb2xsb3dfcGFnZShzdHJ1Y3Qgdm1fYXJlYV9zdHJ1Y3QgKnZtYSwg 
dW5zaWduZWQgbG9uZyBhZGRyZXNzLAogICogQ3VycmVudGx5IG9ubHkgZ2V0X3VzZXJfcGFnZXMo KSBhbmQgZ2V0X3VzZXJfcGFnZXNfZmFzdCgpIHN1cHBvcnQgdGhpcyBmbGFnCiAgKiBhbmQgY2Fs bHMgdG8gZ2V0X3VzZXJfcGFnZXNfW3VuXWxvY2tlZCBhcmUgc3BlY2lmaWNhbGx5IG5vdCBhbGxv d2VkLiAgVGhpcwogICogaXMgZHVlIHRvIGFuIGluY29tcGF0aWJpbGl0eSB3aXRoIHRoZSBGUyBE QVggY2hlY2sgYW5kCi0gKiBGQVVMVF9GTEFHX0FMTE9XX1JFVFJZCisgKiBGQVVMVF9GTEFHX0FM TE9XX1JFVFJZLgogICoKLSAqIEluIHRoZSBDTUEgY2FzZTogbG9uZ3Rlcm0gcGlucyBpbiBhIENN QSByZWdpb24gd291bGQgdW5uZWNlc3NhcmlseSBmcmFnbWVudAotICogdGhhdCByZWdpb24uICBB bmQgc28gQ01BIGF0dGVtcHRzIHRvIG1pZ3JhdGUgdGhlIHBhZ2UgYmVmb3JlIHBpbm5pbmcgd2hl bgorICogSW4gdGhlIENNQSBjYXNlOiBsb25nIHRlcm0gcGlucyBpbiBhIENNQSByZWdpb24gd291 bGQgdW5uZWNlc3NhcmlseSBmcmFnbWVudAorICogdGhhdCByZWdpb24uICBBbmQgc28sIENNQSBh dHRlbXB0cyB0byBtaWdyYXRlIHRoZSBwYWdlIGJlZm9yZSBwaW5uaW5nLCB3aGVuCiAgKiBGT0xM X0xPTkdURVJNIGlzIHNwZWNpZmllZC4KKyAqCisgKiBGT0xMX1BJTiBpbmRpY2F0ZXMgdGhhdCBh IHNwZWNpYWwga2luZCBvZiB0cmFja2luZyAobm90IGp1c3QgcGFnZS0+X3JlZmNvdW50LAorICog YnV0IGFuIGFkZGl0aW9uYWwgcGluIGNvdW50aW5nIHN5c3RlbSkgd2lsbCBiZSBpbnZva2VkLiBU aGlzIGlzIGludGVuZGVkIGZvcgorICogYW55dGhpbmcgdGhhdCBnZXRzIGEgcGFnZSByZWZlcmVu Y2UgYW5kIHRoZW4gdG91Y2hlcyBwYWdlIGRhdGEgKGZvciBleGFtcGxlLAorICogRGlyZWN0IElP KS4gVGhpcyBsZXRzIHRoZSBmaWxlc3lzdGVtIGtub3cgdGhhdCBzb21lIG5vbi1maWxlLXN5c3Rl bSBlbnRpdHkgaXMKKyAqIHBvdGVudGlhbGx5IGNoYW5naW5nIHRoZSBwYWdlcycgZGF0YS4gSW4g Y29udHJhc3QgdG8gRk9MTF9HRVQgKHdob3NlIHBhZ2VzCisgKiBhcmUgcmVsZWFzZWQgdmlhIHB1 dF9wYWdlKCkpLCBGT0xMX1BJTiBwYWdlcyBtdXN0IGJlIHJlbGVhc2VkLCB1bHRpbWF0ZWx5LCBi eQorICogYSBjYWxsIHRvIHB1dF91c2VyX3BhZ2UoKS4KKyAqCisgKiBGT0xMX1BJTiBpcyBzaW1p bGFyIHRvIEZPTExfR0VUOiBib3RoIG9mIHRoZXNlIHBpbiBwYWdlcy4gVGhleSB1c2UgZGlmZmVy ZW50CisgKiBhbmQgc2VwYXJhdGUgcmVmY291bnRpbmcgbWVjaGFuaXNtcywgaG93ZXZlciwgYW5k IHRoYXQgbWVhbnMgdGhhdCBlYWNoIGhhcworICogaXRzIG93biBhY3F1aXJlIGFuZCByZWxlYXNl IG1lY2hhbmlzbXM6CisgKgorICogICAgIEZPTExfR0VUOiBnZXRfdXNlcl9wYWdlcyooKSB0byBh 
cquire, and put_page() to release.
+ *
+ *     FOLL_PIN: pin_user_pages*() to acquire, and put_user_pages to release.
+ *
+ * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
+ * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
+ * calls applied to them, and that's perfectly OK. This is a constraint on the
+ * callers, not on the pages.)
+ *
+ * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
+ * directly by the caller. That's in order to help avoid mismatches when
+ * releasing pages: get_user_pages*() pages must be released via put_page(),
+ * while pin_user_pages*() pages must be released via put_user_page().
+ *
+ * Please see Documentation/vm/pin_user_pages.rst for more information.
  */
 
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
diff --git a/mm/gup.c b/mm/gup.c
index a594bc708367..1c200eeabd77 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -194,6 +194,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t *ptep, pte;
 
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return ERR_PTR(-EINVAL);
 retry:
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);
@@ -811,7 +815,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 
 	start = untagged_addr(start);
 
-	VM_BUG_ON(!!pages != !!(gup_flags & FOLL_GET));
+	VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));
 
 	/*
 	 * If FOLL_FORCE is set then do not force a full fault as the hinting
@@ -1035,7 +1039,16 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		BUG_ON(*locked != 1);
 	}
 
-	if (pages)
+	/*
+	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
+	 * is to set FOLL_GET if the caller wants pages[] filled in (but has
+	 * carelessly failed to specify FOLL_GET), so keep doing that, but only
+	 * for FOLL_GET, not for the newer FOLL_PIN.
+	 *
+	 * FOLL_PIN always expects pages to be non-null, but no need to assert
+	 * that here, as any failures will be obvious enough.
+	 */
+	if (pages && !(flags & FOLL_PIN))
 		flags |= FOLL_GET;
 
 	pages_done = 0;
@@ -1606,11 +1619,19 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
  * should use get_user_pages because it cannot pass
  * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
  */
+#ifdef CONFIG_MMU
 long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas, int *locked)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
 	/*
 	 * Parts of FOLL_LONGTERM behavior are incompatible with
 	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
@@ -1636,6 +1657,16 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
+#else /* CONFIG_MMU */
+long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	return 0;
+}
+#endif /* !CONFIG_MMU */
+
 /*
  * This is the same as get_user_pages_remote(), just with a
  * less-flexible calling convention where we assume that the task
@@ -1647,6 +1678,13 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
 		struct vm_area_struct **vmas)
 {
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that with an assertion:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
 	return __gup_longterm_locked(current, current->mm, start, nr_pages,
 				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
@@ -2389,30 +2427,15 @@ static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 	return ret;
 }
 
-/**
- * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
- *
- * Attempt to pin user pages in memory without taking mm->mmap_sem.
- * If not successful, it will fall back to taking the lock and
- * calling get_user_pages().
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
+static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+					unsigned int gup_flags,
+					struct page **pages)
 {
 	unsigned long addr, len, end;
 	int nr = 0, ret = 0;
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-				       FOLL_FORCE)))
+				       FOLL_FORCE | FOLL_PIN)))
 		return -EINVAL;
 
 	start = untagged_addr(start) & PAGE_MASK;
@@ -2452,4 +2475,103 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
 	return ret;
 }
+
+/**
+ * get_user_pages_fast() - pin user pages in memory
+ * @start:	starting user address
+ * @nr_pages:	number of pages from start to pin
+ * @gup_flags:	flags modifying pin behaviour
+ * @pages:	array that receives pointers to the pages pinned.
+ *		Should be at least nr_pages long.
+ *
+ * Attempt to pin user pages in memory without taking mm->mmap_sem.
+ * If not successful, it will fall back to taking the lock and
+ * calling get_user_pages().
+ *
+ * Returns number of pages pinned. This may be fewer than the number requested.
+ * If nr_pages is 0 or negative, returns 0. If no pages were pinned, returns
+ * -errno.
+ */
+int get_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
+	 * never directly by the caller, so enforce that:
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+		return -EINVAL;
+
+	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
 EXPORT_SYMBOL_GPL(get_user_pages_fast);
+
+/**
+ * pin_user_pages_fast() - pin user pages in memory without taking locks
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages_fast().
+ *
+ * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
+ * is NOT intended for Case 2 (RDMA: long-term pins).
+ */
+int pin_user_pages_fast(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_fast(start, nr_pages, gup_flags, pages);
+}
+EXPORT_SYMBOL_GPL(pin_user_pages_fast);
+
+/**
+ * pin_user_pages_remote() - pin pages of a remote process (task != current)
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages_remote().
+ *
+ * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
+ * is NOT intended for Case 2 (RDMA: long-term pins).
+ */
+long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
+			   unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   struct vm_area_struct **vmas, int *locked)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags, pages,
+				     vmas, locked);
+}
+EXPORT_SYMBOL(pin_user_pages_remote);
+
+/**
+ * pin_user_pages() - pin user pages in memory for use by other devices
+ *
+ * For now, this is a placeholder function, until various call sites are
+ * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
+ * this is identical to get_user_pages().
+ *
+ * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
+ * is NOT intended for Case 2 (RDMA: long-term pins).
+ */
+long pin_user_pages(unsigned long start, unsigned long nr_pages,
+		    unsigned int gup_flags, struct page **pages,
+		    struct vm_area_struct **vmas)
+{
+	/*
+	 * This is a placeholder, until the pin functionality is activated.
+	 * Until then, just behave like the corresponding get_user_pages*()
+	 * routine.
+	 */
+	return get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+}
+EXPORT_SYMBOL(pin_user_pages);
-- 
2.24.1

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel