From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.2 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1 autolearn=no autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA187C5DF60 for ; Tue, 5 Nov 2019 22:22:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9710D2087E for ; Tue, 5 Nov 2019 22:22:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9710D2087E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2F2E16B0003; Tue, 5 Nov 2019 17:22:10 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 27BF86B0005; Tue, 5 Nov 2019 17:22:10 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 11DC56B0006; Tue, 5 Nov 2019 17:22:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0096.hostedemail.com [216.40.44.96]) by kanga.kvack.org (Postfix) with ESMTP id EA3E76B0003 for ; Tue, 5 Nov 2019 17:22:09 -0500 (EST) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with SMTP id A2ADF824999B for ; Tue, 5 Nov 2019 22:22:09 +0000 (UTC) X-FDA: 76123647978.28.game70_4ce77bab6101b X-HE-Tag: game70_4ce77bab6101b X-Filterd-Recvd-Size: 9396 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by imf31.hostedemail.com (Postfix) with ESMTP for ; Tue, 5 Nov 2019 22:22:08 +0000 (UTC) X-Amp-Result: UNKNOWN X-Amp-Original-Verdict: FILE 
UNKNOWN X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 05 Nov 2019 14:22:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,271,1569308400"; d="scan'208";a="376846147" Received: from sjchrist-coffee.jf.intel.com (HELO linux.intel.com) ([10.54.74.41]) by orsmga005.jf.intel.com with ESMTP; 05 Nov 2019 14:22:05 -0800 Date: Tue, 5 Nov 2019 14:22:05 -0800 From: Sean Christopherson To: David Hildenbrand Cc: Dan Williams , Linux Kernel Mailing List , Linux MM , Michal Hocko , Andrew Morton , kvm-ppc@vger.kernel.org, linuxppc-dev , KVM list , linux-hyperv@vger.kernel.org, devel@driverdev.osuosl.org, xen-devel , X86 ML , Alexander Duyck , Alexander Duyck , Alex Williamson , Allison Randal , Andy Lutomirski , "Aneesh Kumar K.V" , Anshuman Khandual , Anthony Yznaga , Benjamin Herrenschmidt , Borislav Petkov , Boris Ostrovsky , Christophe Leroy , Cornelia Huck , Dave Hansen , Haiyang Zhang , "H. Peter Anvin" , Ingo Molnar , "Isaac J. Manjarres" , Jim Mattson , Joerg Roedel , Johannes Weiner , Juergen Gross , KarimAllah Ahmed , Kees Cook , "K. Y. 
Srinivasan" , "Matthew Wilcox (Oracle)" , Matt Sickler , Mel Gorman , Michael Ellerman , Michal Hocko , Mike Rapoport , Mike Rapoport , Nicholas Piggin , Oscar Salvador , Paolo Bonzini , Paul Mackerras , Paul Mackerras , Pavel Tatashin , Pavel Tatashin , Peter Zijlstra , Qian Cai , Radim =?utf-8?B?S3LEjW3DocWZ?= , Sasha Levin , Stefano Stabellini , Stephen Hemminger , Thomas Gleixner , Vitaly Kuznetsov , Vlastimil Babka , Wanpeng Li , YueHaibing , Adam Borowski
Subject: Re: [PATCH v1 03/10] KVM: Prepare kvm_is_reserved_pfn() for PG_reserved changes
Message-ID: <20191105222205.GA23297@linux.intel.com>
References: <20191024120938.11237-1-david@redhat.com> <20191024120938.11237-4-david@redhat.com> <01adb4cb-6092-638c-0bab-e61322be7cf5@redhat.com> <613f3606-748b-0e56-a3ad-1efaffa1a67b@redhat.com> <20191105160000.GC8128@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4
Sender: owner-linux-mm@kvack.org
Precedence: bulk
X-Loop: owner-majordomo@kvack.org
List-ID: 

On Tue, Nov 05, 2019 at 09:30:53PM +0100, David Hildenbrand wrote:
> >>>I think I know what's going wrong:
> >>>
> >>>Pages that are pinned via gfn_to_pfn() and friends take a reference,
> >>>however they are often released via
> >>>kvm_release_pfn_clean()/kvm_release_pfn_dirty()/kvm_release_page_clean()...
> >>>
> >>>
> >>>E.g., in arch/x86/kvm/x86.c:reexecute_instruction()
> >>>
> >>>...
> >>>pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
> >>>...
> >>>kvm_release_pfn_clean(pfn);
> >>>
> >>>
> >>>
> >>>void kvm_release_pfn_clean(kvm_pfn_t pfn)
> >>>{
> >>>	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
> >>>		put_page(pfn_to_page(pfn));
> >>>}
> >>>
> >>>This function makes perfect sense as the counterpart for kvm_get_pfn():
> >>>
> >>>void kvm_get_pfn(kvm_pfn_t pfn)
> >>>{
> >>>	if (!kvm_is_reserved_pfn(pfn))
> >>>		get_page(pfn_to_page(pfn));
> >>>}
> >>>
> >>>
> >>>As all ZONE_DEVICE pages are currently reserved, pages pinned via
> >>>gfn_to_pfn() and friends will often not see a put_page() AFAIKS.
> >
> >Assuming gup() takes a reference for ZONE_DEVICE pages, yes, this is a
> >KVM bug.
> 
> Yes, it does take a reference AFAIKs. E.g.,
> 
> mm/gup.c:gup_pte_range():
> ...
> 		if (pte_devmap(pte)) {
> 			if (unlikely(flags & FOLL_LONGTERM))
> 				goto pte_unmap;
> 
> 			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
> 			if (unlikely(!pgmap)) {
> 				undo_dev_pagemap(nr, nr_start, pages);
> 				goto pte_unmap;
> 			}
> 		} else if (pte_special(pte))
> 			goto pte_unmap;
> 
> 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
> 		page = pte_page(pte);
> 
> 		head = try_get_compound_head(page, 1);
> 
> try_get_compound_head() will increment the reference count.

Doh, I looked right at that code and somehow didn't connect the dots.
Thanks!

> >>>Now, my patch does not change that, the result of
> >>>kvm_is_reserved_pfn(pfn) will be unchanged. A proper fix for that would
> >>>probably be
> >>>
> >>>a) To drop the reference to ZONE_DEVICE pages in gfn_to_pfn() and
> >>>friends, after you successfully pinned the pages. (not sure if that's
> >>>the right thing to do but you're the expert)
> >>>
> >>>b) To not use kvm_release_pfn_clean() and friends on pages that were
> >>>definitely pinned.
> >
> >This is already KVM's intent, i.e. the purpose of the PageReserved() check
> >is simply to avoid putting a non-existent reference.  The problem is that
> >KVM assumes pages with PG_reserved set are never pinned, which AFAICT was
> >true when the code was first added.
> >
> >>(talking to myself, sorry)
> >>
> >>Thinking again, dropping this patch from this series could effectively also
> >>fix that issue. E.g., kvm_release_pfn_clean() and friends would always do a
> >>put_page() if "pfn_valid() and !PageReserved()", so after patch 9 also on
> >>ZONE_DEVICE pages.
> >
> >Yeah, this appears to be the correct fix.
> >
> >>But it would have side effects that might not be desired. E.g.,:
> >>
> >>1. kvm_pfn_to_page() would also return ZONE_DEVICE pages (might even be the
> >>right thing to do).
> >
> >This should be ok, at least on x86.  There are only three users of
> >kvm_pfn_to_page().  Two of those are on allocations that are controlled by
> >KVM and are guaranteed to be vanilla MAP_ANONYMOUS.  The third is on guest
> >memory when running a nested guest, and in that case supporting ZONE_DEVICE
> >memory is desirable, i.e. KVM should play nice with a guest that is backed
> >by ZONE_DEVICE memory.
> >
> >>2. kvm_set_pfn_dirty() would also set ZONE_DEVICE pages dirty (might be
> >>okay)
> >
> >This is ok from a KVM perspective.
> 
> What about
> 
> void kvm_get_pfn(kvm_pfn_t pfn)
> {
> 	if (!kvm_is_reserved_pfn(pfn))
> 		get_page(pfn_to_page(pfn));
> }
> 
> Is a pure get_page() sufficient in case of ZONE_DEVICE?
> (asking because of the live references obtained via
> get_dev_pagemap(pte_pfn(pte), pgmap) in mm/gup.c:gup_pte_range() somewhat
> confuse me :) )

This ties into my concern with thp_adjust().  On x86, kvm_get_pfn() is
only used in two flows: to manually get a ref for VM_IO/VM_PFNMAP pages,
and to switch the ref when mapping a non-hugetlbfs compound page, i.e. a
THP.  I assume VM_IO and PFNMAP can't apply to ZONE_DEVICE pages.

In the thp_adjust() case, when a THP is encountered and the original PFN
is for a non-PG_head page, KVM transfers the reference to the associated
PG_head page[*] and maps the associated 2mb chunk/page.  This is where
KVM uses kvm_get_pfn() and could run afoul of the get_dev_pagemap()
refcounts.
[*] Technically I don't think it's guaranteed to be a PG_head, e.g. if the
THP is a 1gb page, as KVM currently only maps THP as 2mb pages.  But the
idea is the same: transfer the refcount to the PFN that's actually going
into KVM's page tables.

> >
> >The scarier code (for me) is transparent_hugepage_adjust() and
> >kvm_mmu_zap_collapsible_spte(), as I don't at all understand the
> >interaction between THP and _PAGE_DEVMAP.
> 
> The x86 KVM MMU code is some of the ugliest code I know (sorry, but it
> had to be said :/ ). Luckily, this should be independent of the
> PG_reserved thingy AFAIKs.
IAo+IHZvaWQga3ZtX2dldF9wZm4oa3ZtX3Bmbl90IHBmbikKPiB7Cj4gCWlmICgha3ZtX2lzX3Jl c2VydmVkX3BmbihwZm4pKQo+IAkJZ2V0X3BhZ2UocGZuX3RvX3BhZ2UocGZuKSk7Cj4gfQo+IAo+ IElzIGEgcHVyZSBnZXRfcGFnZSgpIHN1ZmZpY2llbnQgaW4gY2FzZSBvZiBaT05FX0RFVklDRT8K PiAoYXNraW5nIGJlY2F1c2Ugb2YgdGhlIGxpdmUgcmVmZXJlbmNlcyBvYnRhaW5lZCB2aWEKPiBn ZXRfZGV2X3BhZ2VtYXAocHRlX3BmbihwdGUpLCBwZ21hcCkgaW4gbW0vZ3VwLmM6Z3VwX3B0ZV9y YW5nZSgpIHNvbWV3aGF0Cj4gY29uZnVzZSBtZSA6KSApCgpUaGlzIHRpZXMgaW50byBteSBjb25j ZXJuIHdpdGggdGhwX2FkanVzdCgpLiAgT24geDg2LCBrdm1fZ2V0X3BmbigpIGlzCm9ubHkgdXNl ZCBpbiB0d28gZmxvd3MsIHRvIG1hbnVhbGx5IGdldCBhIHJlZiBmb3IgVk1fSU8vVk1fUEZOTUFQ IHBhZ2VzCmFuZCB0byBzd2l0Y2ggdGhlIHJlZiB3aGVuIG1hcHBpbmcgYSBub24taHVnZXRsYmZz IGNvbXBvdW5kIHBhZ2UsIGkuZS4gYQpUSFAuCgpJIGFzc3VtZSBWTV9JTyBhbmQgUEZOTUFQIGNh bid0IGFwcGx5IHRvIFpPTkVfREVWSUNFIHBhZ2VzLgoKSW4gdGhlIHRocF9hZGp1c3QoKSBjYXNl LCB3aGVuIGEgVEhQIGlzIGVuY291bnRlcmVkIGFuZCB0aGUgb3JpZ2luYWwgUEZOCmlzIGZvciBh IG5vbi1QR19oZWFkIHBhZ2UsIEtWTSB0cmFuc2ZlcnMgdGhlIHJlZmVyZW5jZSB0byB0aGUgYXNz b2NpYXRlZApQR19oZWFkIHBhZ2VbKl0gYW5kIG1hcHMgdGhlIGFzc29jaWF0ZWQgMm1iIGNodW5r L3BhZ2UuICBUaGlzIGlzIHdoZXJlIEtWTQp1c2VzIGt2bV9nZXRfcGZuKCkgYW5kIGNvdWxkIHJ1 biBhZm91bCBvZiB0aGUgZ2V0X2Rldl9wYWdlbWFwKCkgcmVmY291bnRzLgoKClsqXSBUZWNobmlj YWxseSBJIGRvbid0IHRoaW5rIGl0J3MgZ3VhcmFudGVlZCB0byBiZSBhIFBHX2hlYWQsIGUuZy4g aWYgdGhlCiAgICBUSFAgaXMgYSAxZ2IgcGFnZSwgYXMgS1ZNIGN1cnJlbnRseSBvbmx5IG1hcHMg VEhQIGFzIDJtYiBwYWdlcy4gIEJ1dAogICAgdGhlIGlkZWEgaXMgdGhlIHNhbWUsIHRyYW5zZmVy IHRoZSByZWZjb3VudCB0aGUgUEZOIHRoYXQncyBhY3R1YWxseQogICAgZ29pbmcgaW50byBLVk0n cyBwYWdlIHRhYmxlcy4KCj4gPgo+ID5UaGUgc2NhcmllciBjb2RlIChmb3IgbWUpIGlzIHRyYW5z cGFyZW50X2h1Z2VwYWdlX2FkanVzdCgpIGFuZAo+ID5rdm1fbW11X3phcF9jb2xsYXBzaWJsZV9z cHRlKCksIGFzIEkgZG9uJ3QgYXQgYWxsIHVuZGVyc3RhbmQgdGhlCj4gPmludGVyYWN0aW9uIGJl dHdlZW4gVEhQIGFuZCBfUEFHRV9ERVZNQVAuCj4gCj4gVGhlIHg4NiBLVk0gTU1VIGNvZGUgaXMg b25lIG9mIHRoZSB1Z2xpZXN0IGNvZGUgSSBrbm93IChzb3JyeSwgYnV0IGl0IGhhZCB0bwo+IGJl 
IHNhaWQgOi8gKS4gTHVja2lseSwgdGhpcyBzaG91bGQgYmUgaW5kZXBlbmRlbnQgb2YgdGhlIFBH X3Jlc2VydmVkIHRoaW5neQo+IEFGQUlLcy4KCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0 cy54ZW5wcm9qZWN0Lm9yZwpodHRwczovL2xpc3RzLnhlbnByb2plY3Qub3JnL21haWxtYW4vbGlz dGluZm8veGVuLWRldmVs
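
To make the leak being discussed concrete, here is a toy userspace sketch — not kernel code; the struct and all toy_* names are invented for illustration — of why skipping put_page() for PG_reserved pages leaks the reference that gup() takes on a ZONE_DEVICE page:

```c
#include <stdbool.h>

/* Toy model: "reserved" stands in for PG_reserved (which ZONE_DEVICE
 * pages currently have set); refcount stands in for struct page's
 * _refcount. */
struct toy_page { int refcount; bool reserved; };

/* gfn_to_pfn() takes a reference even on ZONE_DEVICE pages, via
 * try_get_compound_head() in gup_pte_range(). */
static void toy_gfn_to_pfn(struct toy_page *p)
{
	p->refcount++;
}

/* Current kvm_release_pfn_clean() behavior: the put is skipped for
 * reserved pages, so the gup() reference is never dropped. */
static void toy_release_current(struct toy_page *p)
{
	if (!p->reserved)
		p->refcount--;
}

/* Pin and release a ZONE_DEVICE page with today's check: the refcount
 * ends at 2 instead of returning to 1, i.e. one reference is leaked. */
int toy_leak_demo_current(void)
{
	struct toy_page dev = { .refcount = 1, .reserved = true };

	toy_gfn_to_pfn(&dev);
	toy_release_current(&dev);
	return dev.refcount;	/* 2: leaked */
}

/* Same sequence once ZONE_DEVICE pages no longer carry PG_reserved
 * (i.e. with this patch dropped and patch 9 applied): the put happens
 * and the refcount returns to 1. */
int toy_leak_demo_fixed(void)
{
	struct toy_page dev = { .refcount = 1, .reserved = false };

	toy_gfn_to_pfn(&dev);
	toy_release_current(&dev);
	return dev.refcount;	/* 1: balanced */
}
```

This only models the plain get_page()/put_page() pairing; it deliberately says nothing about the separate get_dev_pagemap() reference raised above, which is the part that still needs an answer for the thp_adjust() flow.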