From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.4 required=3.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING,SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1 autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5FB2AC433FF for ; Fri, 2 Aug 2019 19:16:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 33A65216C8 for ; Fri, 2 Aug 2019 19:16:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="dIZIhWEt" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2392063AbfHBTPu (ORCPT ); Fri, 2 Aug 2019 15:15:50 -0400 Received: from hqemgate14.nvidia.com ([216.228.121.143]:16559 "EHLO hqemgate14.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391984AbfHBTPu (ORCPT ); Fri, 2 Aug 2019 15:15:50 -0400 Received: from hqpgpgate102.nvidia.com (Not Verified[216.228.121.13]) by hqemgate14.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Fri, 02 Aug 2019 12:15:49 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate102.nvidia.com (PGP Universal service); Fri, 02 Aug 2019 12:15:47 -0700 X-PGP-Universal: processed; by hqpgpgate102.nvidia.com on Fri, 02 Aug 2019 12:15:47 -0700 Received: from [10.2.171.217] (172.20.13.39) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Fri, 2 Aug 2019 19:15:46 +0000 Subject: Re: [PATCH 00/34] put_user_pages(): miscellaneous call sites To: Jan Kara , Matthew Wilcox CC: Michal Hocko , , Andrew Morton , Christoph Hellwig , Dan Williams , Dave Chinner , Dave Hansen , Ira 
Weiny , Jason Gunthorpe , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , LKML , , , , , , , , , , , , , , , , , , , , , References: <20190802022005.5117-1-jhubbard@nvidia.com> <20190802091244.GD6461@dhcp22.suse.cz> <20190802124146.GL25064@quack2.suse.cz> <20190802142443.GB5597@bombadil.infradead.org> <20190802145227.GQ25064@quack2.suse.cz> X-Nvconfidentiality: public From: John Hubbard Message-ID: <076e7826-67a5-4829-aae2-2b90f302cebd@nvidia.com> Date: Fri, 2 Aug 2019 12:14:09 -0700 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0 MIME-Version: 1.0 In-Reply-To: <20190802145227.GQ25064@quack2.suse.cz> X-Originating-IP: [172.20.13.39] X-ClientProxiedBy: HQMAIL107.nvidia.com (172.20.187.13) To HQMAIL107.nvidia.com (172.20.187.13) Content-Type: text/plain; charset="utf-8"; format=flowed Content-Language: en-US Content-Transfer-Encoding: 7bit DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1564773349; bh=Ykr8zuBl8qD6qRk+6CuJmCvWs6y/6SnwmHdkeYBJDDI=; h=X-PGP-Universal:Subject:To:CC:References:X-Nvconfidentiality:From: Message-ID:Date:User-Agent:MIME-Version:In-Reply-To: X-Originating-IP:X-ClientProxiedBy:Content-Type:Content-Language: Content-Transfer-Encoding; b=dIZIhWEtL/6DtZGzemgJsDJsVLyADAMaN//lJ1grJLFkCmFOkzTj/rs0FvLfEl2Hi oGzaotnI6n/OU/9zhZLMfdrrUHRJGxX7AYLyG2WZvgX4Lg54c2pU4PjbkWrNUgOXfI uv8TTyv/DF3hYu24iq7PnVxsiQftZ7SHYqoH7NBkU536G72MURkyt2TZuU0HsMwqx1 3Glf+aCLRqZtZbMei0ZGioStTFz2Vyclh08xm02uGWhgBmLM1you/SeWFTun4O+4QV 968ZBWSTwuYIseJKnsPYve06ID75kz8N2rJE61L933vzLxD+Ru7sU6sLx/7LpcpGPg O3OFkZJZEBCjA== Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

On 8/2/19 7:52 AM, Jan Kara wrote:
> On Fri 02-08-19 07:24:43, Matthew Wilcox wrote:
>> On Fri, Aug 02, 2019 at 02:41:46PM +0200, Jan Kara wrote:
>>> On Fri 02-08-19 11:12:44, Michal Hocko wrote:
>>>> On Thu 01-08-19 19:19:31, john.hubbard@gmail.com wrote:
>>>> [...]
>>>>> 2) Convert all of the call sites for get_user_pages*(), to
>>>>> invoke put_user_page*(), instead of put_page(). This involves dozens of
>>>>> call sites, and will take some time.
>>>>
>>>> How do we make sure this is the case and it will remain the case in the
>>>> future? There must be some automagic to enforce/check that. It is simply
>>>> not manageable to do it every now and then because then 3) will simply
>>>> be never safe.
>>>>
>>>> Have you considered coccinele or some other scripted way to do the
>>>> transition? I have no idea how to deal with future changes that would
>>>> break the balance though.

Hi Michal,

Yes, I've thought about it, and coccinelle falls a bit short (it's not
smart enough to know which put_page()'s to convert). However, there is a
debug option planned: a yet-to-be-posted commit [1] uses struct page
extensions (obviously protected by CONFIG_DEBUG_GET_USER_PAGES_REFERENCES)
to add a redundant counter. That allows:

void __put_page(struct page *page)
{
	...
	/* Someone called put_page() instead of put_user_page() */
	WARN_ON_ONCE(atomic_read(&page_ext->pin_count) > 0);

>>>
>>> Yeah, that's why I've been suggesting at LSF/MM that we may need to create
>>> a gup wrapper - say vaddr_pin_pages() - and track which sites dropping
>>> references got converted by using this wrapper instead of gup. The
>>> counterpart would then be more logically named as unpin_page() or whatever
>>> instead of put_user_page(). Sure this is not completely foolproof (you can
>>> create new callsite using vaddr_pin_pages() and then just drop refs using
>>> put_page()) but I suppose it would be a high enough barrier for missed
>>> conversions... Thoughts?

The debug option above is still a bit simplistic in its implementation
(and maybe not taking full advantage of the data it has), but I think
it's preferable, because it monitors the "core" and WARNs.
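For anyone skimming: the counter scheme is easy to model outside the kernel. Below is a minimal userspace sketch; all names (page_sim, pin_count, the *_sim helpers) are hypothetical stand-ins, and the real patch keeps the extra counter in struct page extensions rather than in the page itself:

```c
#include <assert.h>

/*
 * Userspace sketch of the redundant-counter debug idea: gup-acquired
 * references are counted twice, so that dropping one through the wrong
 * release path can be detected.
 */
struct page_sim {
	int refcount;	/* the normal get_page()/put_page() count */
	int pin_count;	/* redundant count of gup-acquired references */
};

/* gup-style acquire: bumps both the real count and the debug count. */
static void get_user_page_sim(struct page_sim *p)
{
	p->refcount++;
	p->pin_count++;
}

/* The correct release path for gup-acquired pages. */
static void put_user_page_sim(struct page_sim *p)
{
	p->pin_count--;
	p->refcount--;
}

/*
 * Plain put_page(): returns 1 when a gup reference is still outstanding,
 * i.e. the WARN_ON_ONCE case in the snippet above would fire.
 */
static int put_page_sim(struct page_sim *p)
{
	int warned = (p->pin_count > 0);

	p->refcount--;
	return warned;
}
```

A balanced get_user_page_sim()/put_user_page_sim() pair leaves pin_count at zero, so a later put_page_sim() stays quiet; dropping a gup reference via put_page_sim() instead trips the check.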
Instead of the wrapper, I'm thinking: documentation and the passage of
time, plus the debug option (perhaps enhanced--probably once I post it
someone will notice opportunities), yes?

>>
>> I think the API we really need is get_user_bvec() / put_user_bvec(),
>> and I know Christoph has been putting some work into that. That avoids
>> doing refcount operations on hundreds of pages if the page in question is
>> a huge page. Once people are switched over to that, they won't be tempted
>> to manually call put_page() on the individual constituent pages of a bvec.
>
> Well, get_user_bvec() is certainly a good API for one class of users but
> just looking at the above series, you'll see there are *many* places that
> just don't work with bvecs at all and you need something for those.
>

Yes, there are quite a few places that don't involve _bvec, as we can see
right here. So we need something. Andrew asked for a debug option some
time ago, and several people (Dave Hansen, Dan Williams, Jerome) had the
idea of vmap-ing gup pages separately, so you can definitely tell where
each page came from. I'm hoping not to have to go to that level of
complexity though.
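The bvec point above is essentially about amortizing refcount traffic. A minimal userspace sketch (hypothetical names; not the real get_user_bvec() interface) of the per-page vs. per-segment cost on a huge page:

```c
#include <assert.h>

/*
 * Sketch of why a bvec-based API saves refcount operations: a segment
 * spanning npages subpages of a compound page can be accounted with one
 * operation on the head page, where a page-at-a-time API issues one
 * operation per 4KB subpage.
 */
struct head_page_sim {
	long refcount;
	long ref_ops;	/* number of refcount operations issued */
};

/* Page-at-a-time style: one refcount operation per constituent page. */
static void get_user_pages_sim(struct head_page_sim *head, long npages)
{
	long i;

	for (i = 0; i < npages; i++) {
		head->refcount++;
		head->ref_ops++;
	}
}

/* Bvec style: a single operation covers the whole segment. */
static void get_user_bvec_sim(struct head_page_sim *head, long npages)
{
	head->refcount += npages;
	head->ref_ops++;
}
```

For a 2MB huge page (512 x 4KB), both end up with the same reference total, but the per-page path performs 512 operations against the bvec path's one.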
[1] "mm/gup: debug tracking of get_user_pages() references" :
https://github.com/johnhubbard/linux/commit/21ff7d6161ec2a14d3f9d17c98abb00cc969d4d6

thanks,
--
John Hubbard
NVIDIA
b2huLmh1YmJhcmRAZ21haWwuY29tIHdyb3RlOgo+Pj4+IFsuLi5dCj4+Pj4+IDIpIENvbnZlcnQg YWxsIG9mIHRoZSBjYWxsIHNpdGVzIGZvciBnZXRfdXNlcl9wYWdlcyooKSwgdG8KPj4+Pj4gaW52 b2tlIHB1dF91c2VyX3BhZ2UqKCksIGluc3RlYWQgb2YgcHV0X3BhZ2UoKS4gVGhpcyBpbnZvbHZl cyBkb3plbnMgb2YKPj4+Pj4gY2FsbCBzaXRlcywgYW5kIHdpbGwgdGFrZSBzb21lIHRpbWUuCj4+ Pj4KPj4+PiBIb3cgZG8gd2UgbWFrZSBzdXJlIHRoaXMgaXMgdGhlIGNhc2UgYW5kIGl0IHdpbGwg cmVtYWluIHRoZSBjYXNlIGluIHRoZQo+Pj4+IGZ1dHVyZT8gVGhlcmUgbXVzdCBiZSBzb21lIGF1 dG9tYWdpYyB0byBlbmZvcmNlL2NoZWNrIHRoYXQuIEl0IGlzIHNpbXBseQo+Pj4+IG5vdCBtYW5h Z2VhYmxlIHRvIGRvIGl0IGV2ZXJ5IG5vdyBhbmQgdGhlbiBiZWNhdXNlIHRoZW4gMykgd2lsbCBz aW1wbHkKPj4+PiBiZSBuZXZlciBzYWZlLgo+Pj4+Cj4+Pj4gSGF2ZSB5b3UgY29uc2lkZXJlZCBj b2NjaW5lbGUgb3Igc29tZSBvdGhlciBzY3JpcHRlZCB3YXkgdG8gZG8gdGhlCj4+Pj4gdHJhbnNp dGlvbj8gSSBoYXZlIG5vIGlkZWEgaG93IHRvIGRlYWwgd2l0aCBmdXR1cmUgY2hhbmdlcyB0aGF0 IHdvdWxkCj4+Pj4gYnJlYWsgdGhlIGJhbGFuY2UgdGhvdWdoLgoKSGkgTWljaGFsLAoKWWVzLCBJ J3ZlIHRob3VnaHQgYWJvdXQgaXQsIGFuZCBjb2NjaW5lbGxlIGZhbGxzIGEgYml0IHNob3J0IChp dCdzIG5vdCBzbWFydAplbm91Z2ggdG8ga25vdyB3aGljaCBwdXRfcGFnZSgpJ3MgdG8gY29udmVy dCkuIEhvd2V2ZXIsIHRoZXJlIGlzIGEgZGVidWcKb3B0aW9uIHBsYW5uZWQ6IGEgeWV0LXRvLWJl LXBvc3RlZCBjb21taXQgWzFdIHVzZXMgc3RydWN0IHBhZ2UgZXh0ZW5zaW9ucwoob2J2aW91c2x5 IHByb3RlY3RlZCBieSBDT05GSUdfREVCVUdfR0VUX1VTRVJfUEFHRVNfUkVGRVJFTkNFUykgdG8g YWRkCmEgcmVkdW5kYW50IGNvdW50ZXIuIFRoYXQgYWxsb3dzOgoKdm9pZCBfX3B1dF9wYWdlKHN0 cnVjdCBwYWdlICpwYWdlKQp7CgkuLi4KCS8qIFNvbWVvbmUgY2FsbGVkIHB1dF9wYWdlKCkgaW5z dGVhZCBvZiBwdXRfdXNlcl9wYWdlKCkgKi8KCVdBUk5fT05fT05DRShhdG9taWNfcmVhZCgmcGFn ZV9leHQtPnBpbl9jb3VudCkgPiAwKTsKCj4+Pgo+Pj4gWWVhaCwgdGhhdCdzIHdoeSBJJ3ZlIGJl ZW4gc3VnZ2VzdGluZyBhdCBMU0YvTU0gdGhhdCB3ZSBtYXkgbmVlZCB0byBjcmVhdGUKPj4+IGEg Z3VwIHdyYXBwZXIgLSBzYXkgdmFkZHJfcGluX3BhZ2VzKCkgLSBhbmQgdHJhY2sgd2hpY2ggc2l0 ZXMgZHJvcHBpbmcKPj4+IHJlZmVyZW5jZXMgZ290IGNvbnZlcnRlZCBieSB1c2luZyB0aGlzIHdy YXBwZXIgaW5zdGVhZCBvZiBndXAuIFRoZQo+Pj4gY291bnRlcnBhcnQgd291bGQgdGhlbiBiZSBt 
b3JlIGxvZ2ljYWxseSBuYW1lZCBhcyB1bnBpbl9wYWdlKCkgb3Igd2hhdGV2ZXIKPj4+IGluc3Rl YWQgb2YgcHV0X3VzZXJfcGFnZSgpLiAgU3VyZSB0aGlzIGlzIG5vdCBjb21wbGV0ZWx5IGZvb2xw cm9vZiAoeW91IGNhbgo+Pj4gY3JlYXRlIG5ldyBjYWxsc2l0ZSB1c2luZyB2YWRkcl9waW5fcGFn ZXMoKSBhbmQgdGhlbiBqdXN0IGRyb3AgcmVmcyB1c2luZwo+Pj4gcHV0X3BhZ2UoKSkgYnV0IEkg c3VwcG9zZSBpdCB3b3VsZCBiZSBhIGhpZ2ggZW5vdWdoIGJhcnJpZXIgZm9yIG1pc3NlZAo+Pj4g Y29udmVyc2lvbnMuLi4gVGhvdWdodHM/CgpUaGUgZGVidWcgb3B0aW9uIGFib3ZlIGlzIHN0aWxs IGEgYml0IHNpbXBsaXN0aWMgaW4gaXRzIGltcGxlbWVudGF0aW9uIChhbmQgbWF5YmUKbm90IHRh a2luZyBmdWxsIGFkdmFudGFnZSBvZiB0aGUgZGF0YSBpdCBoYXMpLCBidXQgSSB0aGluayBpdCdz IHByZWZlcmFibGUsCmJlY2F1c2UgaXQgbW9uaXRvcnMgdGhlICJjb3JlIiBhbmQgV0FSTnMuCgpJ bnN0ZWFkIG9mIHRoZSB3cmFwcGVyLCBJJ20gdGhpbmtpbmc6IGRvY3VtZW50YXRpb24gYW5kIHRo ZSBwYXNzYWdlIG9mIHRpbWUsCnBsdXMgdGhlIGRlYnVnIG9wdGlvbiAocGVyaGFwcyBlbmhhbmNl ZC0tcHJvYmFibHkgb25jZSBJIHBvc3QgaXQgc29tZW9uZSB3aWxsCm5vdGljZSBvcHBvcnR1bml0 aWVzKSwgeWVzPwoKPj4KPj4gSSB0aGluayB0aGUgQVBJIHdlIHJlYWxseSBuZWVkIGlzIGdldF91 c2VyX2J2ZWMoKSAvIHB1dF91c2VyX2J2ZWMoKSwKPj4gYW5kIEkga25vdyBDaHJpc3RvcGggaGFz IGJlZW4gcHV0dGluZyBzb21lIHdvcmsgaW50byB0aGF0LiAgVGhhdCBhdm9pZHMKPj4gZG9pbmcg cmVmY291bnQgb3BlcmF0aW9ucyBvbiBodW5kcmVkcyBvZiBwYWdlcyBpZiB0aGUgcGFnZSBpbiBx dWVzdGlvbiBpcwo+PiBhIGh1Z2UgcGFnZS4gIE9uY2UgcGVvcGxlIGFyZSBzd2l0Y2hlZCBvdmVy IHRvIHRoYXQsIHRoZXkgd29uJ3QgYmUgdGVtcHRlZAo+PiB0byBtYW51YWxseSBjYWxsIHB1dF9w YWdlKCkgb24gdGhlIGluZGl2aWR1YWwgY29uc3RpdHVlbnQgcGFnZXMgb2YgYSBidmVjLgo+IAo+ IFdlbGwsIGdldF91c2VyX2J2ZWMoKSBpcyBjZXJ0YWlubHkgYSBnb29kIEFQSSBmb3Igb25lIGNs YXNzIG9mIHVzZXJzIGJ1dAo+IGp1c3QgbG9va2luZyBhdCB0aGUgYWJvdmUgc2VyaWVzLCB5b3Un bGwgc2VlIHRoZXJlIGFyZSAqbWFueSogcGxhY2VzIHRoYXQKPiBqdXN0IGRvbid0IHdvcmsgd2l0 aCBidmVjcyBhdCBhbGwgYW5kIHlvdSBuZWVkIHNvbWV0aGluZyBmb3IgdGhvc2UuCj4gCgpZZXMs IHRoZXJlIGFyZSBxdWl0ZSBhIGZldyBwbGFjZXMgdGhhdCBkb24ndCBpbnZvbHZlIF9idmVjLCBh cyB3ZSBjYW4gc2VlCnJpZ2h0IGhlcmUuIFNvIHdlIG5lZWQgc29tZXRoaW5nLiBBbmRyZXcgYXNr 
ZWQgZm9yIGEgZGVidWcgb3B0aW9uIHNvbWUgdGltZQphZ28sIGFuZCBzZXZlcmFsIHBlb3BsZSAo RGF2ZSBIYW5zZW4sIERhbiBXaWxsaWFtcywgSmVyb21lKSBoYWQgdGhlIGlkZWEKb2Ygdm1hcC1p bmcgZ3VwIHBhZ2VzIHNlcGFyYXRlbHksIHNvIHlvdSBjYW4gZGVmaW5pdGVseSB0ZWxsIHdoZXJl IGVhY2gKcGFnZSBjYW1lIGZyb20uIEknbSBob3Bpbmcgbm90IHRvIGhhdmUgdG8gZ28gdG8gdGhh dCBsZXZlbCBvZiBjb21wbGV4aXR5CnRob3VnaC4KCgpbMV0gIm1tL2d1cDogZGVidWcgdHJhY2tp bmcgb2YgZ2V0X3VzZXJfcGFnZXMoKSByZWZlcmVuY2VzIiA6Cmh0dHBzOi8vZ2l0aHViLmNvbS9q b2huaHViYmFyZC9saW51eC9jb21taXQvMjFmZjdkNjE2MWVjMmExNGQzZjlkMTdjOThhYmIwMGNj OTY5ZDRkNgoKdGhhbmtzLAotLSAKSm9obiBIdWJiYXJkCk5WSURJQQoKX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApY ZW4tZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmcKaHR0cHM6Ly9saXN0cy54ZW5wcm9qZWN0Lm9y Zy9tYWlsbWFuL2xpc3RpbmZvL3hlbi1kZXZlbA==