From mboxrd@z Thu Jan 1 00:00:00 1970
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Christoph Hellwig, Dan Williams, Dave Chinner, Dave Hansen, Ira Weiny,
    Jan Kara, Jason Gunthorpe, Jérôme Glisse, LKML,
    amd-gfx@lists.freedesktop.org, ceph-devel@vger.kernel.org,
    devel@driverdev.osuosl.org, devel@lists.orangefs.org,
    dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-block@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-fbdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-media@vger.kernel.org, linux-mm@kvack.org,
    linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-rpi-kernel@lists.infradead.org, linux-xfs@vger.kernel.org,
    netdev@vger.kernel.org, rds-devel@oss.oracle.com,
    sparclinux@vger.kernel.org, x86@kernel.org,
    xen-devel@lists.xenproject.org, John Hubbard, Andy Walls,
    Mauro Carvalho Chehab
Subject: [PATCH 08/34] media/ivtv: convert put_page() to put_user_page*()
Date: Thu, 1 Aug 2019 19:19:39 -0700
Message-Id: <20190802022005.5117-9-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190802022005.5117-1-jhubbard@nvidia.com>
References: <20190802022005.5117-1-jhubbard@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: John Hubbard

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").
Cc: Andy Walls
Cc: Mauro Carvalho Chehab
Cc: linux-media@vger.kernel.org
Signed-off-by: John Hubbard
---
 drivers/media/pci/ivtv/ivtv-udma.c | 14 ++++----------
 drivers/media/pci/ivtv/ivtv-yuv.c  | 10 +++-------
 2 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c
index 5f8883031c9c..7c7f33c2412b 100644
--- a/drivers/media/pci/ivtv/ivtv-udma.c
+++ b/drivers/media/pci/ivtv/ivtv-udma.c
@@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
 {
 	struct ivtv_dma_page_info user_dma;
 	struct ivtv_user_dma *dma = &itv->udma;
-	int i, err;
+	int err;

 	IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned int)ivtv_dest_addr);

@@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
 		IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n",
 			   err, user_dma.page_count);
 		if (err >= 0) {
-			for (i = 0; i < err; i++)
-				put_page(dma->map[i]);
+			put_user_pages(dma->map, err);
 			return -EINVAL;
 		}
 		return err;
@@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,

 	/* Fill SG List with new values */
 	if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) {
-		for (i = 0; i < dma->page_count; i++) {
-			put_page(dma->map[i]);
-		}
+		put_user_pages(dma->map, dma->page_count);
 		dma->page_count = 0;
 		return -ENOMEM;
 	}
@@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
 void ivtv_udma_unmap(struct ivtv *itv)
 {
 	struct ivtv_user_dma *dma = &itv->udma;
-	int i;

 	IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n");

@@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv)
 	ivtv_udma_sync_for_cpu(itv);

 	/* Release User Pages */
-	for (i = 0; i < dma->page_count; i++) {
-		put_page(dma->map[i]);
-	}
+	put_user_pages(dma->map, dma->page_count);
 	dma->page_count = 0;
 }

diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c
index cd2fe2d444c0..9465a7d450b6 100644
--- a/drivers/media/pci/ivtv/ivtv-yuv.c
+++ b/drivers/media/pci/ivtv/ivtv-yuv.c
@@ -81,8 +81,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma,
 				 uv_pages, uv_dma.page_count);

 			if (uv_pages >= 0) {
-				for (i = 0; i < uv_pages; i++)
-					put_page(dma->map[y_pages + i]);
+				put_user_pages(&dma->map[y_pages], uv_pages);
 				rc = -EFAULT;
 			} else {
 				rc = uv_pages;
@@ -93,8 +92,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma,
 				 y_pages, y_dma.page_count);
 		}
 		if (y_pages >= 0) {
-			for (i = 0; i < y_pages; i++)
-				put_page(dma->map[i]);
+			put_user_pages(dma->map, y_pages);
 			/*
 			 * Inherit the -EFAULT from rc's
 			 * initialization, but allow it to be
@@ -112,9 +110,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma,
 	/* Fill & map SG List */
 	if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, &y_dma, 0)) < 0) {
 		IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem userspace buffers\n");
-		for (i = 0; i < dma->page_count; i++) {
-			put_page(dma->map[i]);
-		}
+		put_user_pages(dma->map, dma->page_count);
 		dma->page_count = 0;
 		return -ENOMEM;
 	}
-- 
2.22.0
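
The pattern converted above generalizes to any driver that pins user memory:
pages obtained with get_user_pages*() are released with a single
put_user_pages() call on the page array instead of an open-coded put_page()
loop. The sketch below illustrates only that calling convention; the helper
names (demo_pin_user_buf, demo_release_user_buf) are hypothetical and not
part of this patch, and it assumes the put_user_pages(pages, npages) and
get_user_pages_fast(start, nr_pages, gup_flags, pages) signatures present
in the kernel at the time of this series.

/* A minimal sketch of the conversion pattern, not code from this patch. */
#include <linux/mm.h>

static int demo_pin_user_buf(unsigned long uaddr, int nr_pages,
			     struct page **pages)
{
	int pinned;

	/* Pin the user pages for DMA, much as ivtv_udma_setup() does. */
	pinned = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (pinned < 0)
		return pinned;		/* hard failure from gup */

	if (pinned != nr_pages) {
		/*
		 * Partial pin: release whatever was pinned. The old style
		 * looped over put_page(pages[i]); the conversion makes one
		 * call on the whole array.
		 */
		put_user_pages(pages, pinned);
		return -EINVAL;
	}
	return 0;
}

static void demo_release_user_buf(struct page **pages, unsigned long npages)
{
	/* Replaces: for (i = 0; i < npages; i++) put_page(pages[i]); */
	put_user_pages(pages, npages);
}

Besides being shorter, routing every release through put_user_page*() gives
the mm layer a single choke point for the gup-pinned-page tracking that the
broader series, per commit fc1d8e7cca2d, is building toward.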
charset="us-ascii" Content-Transfer-Encoding: 7bit To: Andrew Morton Cc: linux-fbdev@vger.kernel.org, Jan Kara , kvm@vger.kernel.org, Dave Hansen , Dave Chinner , dri-devel@lists.freedesktop.org, linux-mm@kvack.org, sparclinux@vger.kernel.org, ceph-devel@vger.kernel.org, devel@driverdev.osuosl.org, rds-devel@oss.oracle.com, linux-rdma@vger.kernel.org, x86@kernel.org, amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , xen-devel@lists.xenproject.org, devel@lists.orangefs.org, linux-media@vger.kernel.org, John Hubbard , intel-gfx@lists.freedesktop.org, linux-block@vger.kernel.org, =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , linux-rpi-kernel@lists.infradead.org, Dan Williams , Mauro Carvalho Chehab , linux-arm-kernel@lists.infradead.org, linux-nfs@vger From: John Hubbard For pages that were retained via get_user_pages*(), release those pages via the new put_user_page*() routines, instead of via put_page() or release_pages(). This is part a tree-wide conversion, as described in commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions"). Cc: Andy Walls Cc: Mauro Carvalho Chehab Cc: linux-media@vger.kernel.org Signed-off-by: John Hubbard --- drivers/media/pci/ivtv/ivtv-udma.c | 14 ++++---------- drivers/media/pci/ivtv/ivtv-yuv.c | 10 +++------- 2 files changed, 7 insertions(+), 17 deletions(-) diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c index 5f8883031c9c..7c7f33c2412b 100644 --- a/drivers/media/pci/ivtv/ivtv-udma.c +++ b/drivers/media/pci/ivtv/ivtv-udma.c @@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, { struct ivtv_dma_page_info user_dma; struct ivtv_user_dma *dma = &itv->udma; - int i, err; + int err; IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned int)ivtv_dest_addr); @@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n", err, user_dma.page_count); if (err >= 0) { - for (i = 0; i < err; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, err); return -EINVAL; } return err; @@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, /* Fill SG List with new values */ if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) { - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } @@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, void ivtv_udma_unmap(struct ivtv *itv) { struct ivtv_user_dma *dma = &itv->udma; - int i; IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n"); @@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv) ivtv_udma_sync_for_cpu(itv); /* Release User Pages */ - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; } diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c index cd2fe2d444c0..9465a7d450b6 100644 --- a/drivers/media/pci/ivtv/ivtv-yuv.c +++ b/drivers/media/pci/ivtv/ivtv-yuv.c @@ -81,8 +81,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, uv_pages, uv_dma.page_count); if (uv_pages >= 0) { - for (i = 0; i < uv_pages; i++) - put_page(dma->map[y_pages + i]); + put_user_pages(&dma->map[y_pages], uv_pages); rc = -EFAULT; } else { rc = uv_pages; @@ -93,8 +92,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, y_pages, 
y_dma.page_count); } if (y_pages >= 0) { - for (i = 0; i < y_pages; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, y_pages); /* * Inherit the -EFAULT from rc's * initialization, but allow it to be @@ -112,9 +110,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, /* Fill & map SG List */ if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, &y_dma, 0)) < 0) { IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem userspace buffers\n"); - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } -- 2.22.0 From mboxrd@z Thu Jan 1 00:00:00 1970 From: john.hubbard@gmail.com Subject: [PATCH 08/34] media/ivtv: convert put_page() to put_user_page*() Date: Thu, 1 Aug 2019 19:19:39 -0700 Message-ID: <20190802022005.5117-9-jhubbard@nvidia.com> References: <20190802022005.5117-1-jhubbard@nvidia.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: <20190802022005.5117-1-jhubbard@nvidia.com> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: driverdev-devel-bounces@linuxdriverproject.org Sender: "devel" To: Andrew Morton Cc: linux-fbdev@vger.kernel.org, Jan Kara , kvm@vger.kernel.org, Dave Hansen , Dave Chinner , dri-devel@lists.freedesktop.org, linux-mm@kvack.org, sparclinux@vger.kernel.org, ceph-devel@vger.kernel.org, devel@driverdev.osuosl.org, rds-devel@oss.oracle.com, linux-rdma@vger.kernel.org, x86@kernel.org, amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , xen-devel@lists.xenproject.org, devel@lists.orangefs.org, linux-media@vger.kernel.org, John Hubbard , intel-gfx@lists.freedesktop.org, linux-block@vger.kernel.org, =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , linux-rpi-kernel@lists.infradead.org, Dan Williams , Mauro Carvalho Chehab , linux-arm-kernel@lists.infradead.org, linux-nfs@vger List-Id: ceph-devel.vger.kernel.org From: John Hubbard For pages that were retained via get_user_pages*(), release those pages via the new put_user_page*() routines, instead of via put_page() or release_pages(). This is part a tree-wide conversion, as described in commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions"). 
Cc: Andy Walls Cc: Mauro Carvalho Chehab Cc: linux-media@vger.kernel.org Signed-off-by: John Hubbard --- drivers/media/pci/ivtv/ivtv-udma.c | 14 ++++---------- drivers/media/pci/ivtv/ivtv-yuv.c | 10 +++------- 2 files changed, 7 insertions(+), 17 deletions(-) diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c index 5f8883031c9c..7c7f33c2412b 100644 --- a/drivers/media/pci/ivtv/ivtv-udma.c +++ b/drivers/media/pci/ivtv/ivtv-udma.c @@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, { struct ivtv_dma_page_info user_dma; struct ivtv_user_dma *dma = &itv->udma; - int i, err; + int err; IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned int)ivtv_dest_addr); @@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n", err, user_dma.page_count); if (err >= 0) { - for (i = 0; i < err; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, err); return -EINVAL; } return err; @@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, /* Fill SG List with new values */ if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) { - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } @@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, void ivtv_udma_unmap(struct ivtv *itv) { struct ivtv_user_dma *dma = &itv->udma; - int i; IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n"); @@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv) ivtv_udma_sync_for_cpu(itv); /* Release User Pages */ - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; } diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c index cd2fe2d444c0..9465a7d450b6 100644 --- a/drivers/media/pci/ivtv/ivtv-yuv.c +++ b/drivers/media/pci/ivtv/ivtv-yuv.c @@ -81,8 +81,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, uv_pages, uv_dma.page_count); if (uv_pages >= 0) { - for (i = 0; i < uv_pages; i++) - put_page(dma->map[y_pages + i]); + put_user_pages(&dma->map[y_pages], uv_pages); rc = -EFAULT; } else { rc = uv_pages; @@ -93,8 +92,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, y_pages, y_dma.page_count); } if (y_pages >= 0) { - for (i = 0; i < y_pages; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, y_pages); /* * Inherit the -EFAULT from rc's * initialization, but allow it to be @@ -112,9 +110,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, /* Fill & map SG List */ if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, &y_dma, 0)) < 0) { IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem userspace buffers\n"); - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } -- 2.22.0 From mboxrd@z Thu Jan 1 00:00:00 1970 From: john.hubbard@gmail.com (john.hubbard@gmail.com) Date: Thu, 1 Aug 2019 19:19:39 -0700 Subject: [PATCH 08/34] media/ivtv: convert put_page() to put_user_page*() In-Reply-To: <20190802022005.5117-1-jhubbard@nvidia.com> References: <20190802022005.5117-1-jhubbard@nvidia.com> Message-ID: <20190802022005.5117-9-jhubbard@nvidia.com> List-Id: Linux Driver Project Developer 
List From: John Hubbard For pages that were retained via get_user_pages*(), release those pages via the new put_user_page*() routines, instead of via put_page() or release_pages(). This is part a tree-wide conversion, as described in commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions"). Cc: Andy Walls Cc: Mauro Carvalho Chehab Cc: linux-media at vger.kernel.org Signed-off-by: John Hubbard --- drivers/media/pci/ivtv/ivtv-udma.c | 14 ++++---------- drivers/media/pci/ivtv/ivtv-yuv.c | 10 +++------- 2 files changed, 7 insertions(+), 17 deletions(-) diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c index 5f8883031c9c..7c7f33c2412b 100644 --- a/drivers/media/pci/ivtv/ivtv-udma.c +++ b/drivers/media/pci/ivtv/ivtv-udma.c @@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, { struct ivtv_dma_page_info user_dma; struct ivtv_user_dma *dma = &itv->udma; - int i, err; + int err; IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned int)ivtv_dest_addr); @@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n", err, user_dma.page_count); if (err >= 0) { - for (i = 0; i < err; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, err); return -EINVAL; } return err; @@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, /* Fill SG List with new values */ if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) { - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } @@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, void ivtv_udma_unmap(struct ivtv *itv) { struct ivtv_user_dma *dma = &itv->udma; - int i; IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n"); @@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv) ivtv_udma_sync_for_cpu(itv); /* Release User Pages */ - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; } diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c index cd2fe2d444c0..9465a7d450b6 100644 --- a/drivers/media/pci/ivtv/ivtv-yuv.c +++ b/drivers/media/pci/ivtv/ivtv-yuv.c @@ -81,8 +81,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, uv_pages, uv_dma.page_count); if (uv_pages >= 0) { - for (i = 0; i < uv_pages; i++) - put_page(dma->map[y_pages + i]); + put_user_pages(&dma->map[y_pages], uv_pages); rc = -EFAULT; } else { rc = uv_pages; @@ -93,8 +92,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, y_pages, y_dma.page_count); } if (y_pages >= 0) { - for (i = 0; i < y_pages; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, y_pages); /* * Inherit the -EFAULT from rc's * initialization, but allow it to be @@ -112,9 +110,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, /* Fill & map SG List */ if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, &y_dma, 0)) < 0) { IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem userspace buffers\n"); - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } -- 2.22.0 From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_ADSP_CUSTOM_MED, DKIM_SIGNED,DKIM_VALID,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE781C433FF for ; Fri, 2 Aug 2019 02:22:57 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 834DA206A3 for ; Fri, 2 Aug 2019 02:22:57 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="ZkDsACMX"; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="mO2EsXIn" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 834DA206A3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+infradead-linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=oGfyAF/y3dsiXB4/v5Ee1uAx2VhqHSyqBMdI2NzYVNU=; b=ZkDsACMXZO6b9i uQRUKhWzaB9qWjulEyu6KRMTm9czb5inyyAFYuxeBYLr4BibDZVX92At5UC7Z6P1ebmkP18u1gnlr 6PyvdqsKw+7iQY15Jo8f1nm9zGqLoxC5LEknnk9Eycw7BVvqym+zlcm2UDJ+sqpxkeMc2m1Z8ntzP T7zpINKsud4YbOIM1zbgpMHaRXHUR59ZA+0UIz1IxKJWdguZ3kvt07VuH7p7VYKV7QuIZSSN0Rf8O AAy21Jv2RfOlC5m+PyeZudV+OJ1Ce07ZHjx6fW7nFe1qu+sowZwMwHZrsVcH4x/CedYSiwHK4mUu1 wu/CuwWsWPyBokkoePmg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1htNDc-0004TV-VM; Fri, 02 Aug 2019 02:22:56 +0000 Received: from mail-pl1-x642.google.com ([2607:f8b0:4864:20::642]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1htNBA-0001kX-Rj; Fri, 02 Aug 2019 02:20:26 +0000 Received: by mail-pl1-x642.google.com with SMTP id w24so33011789plp.2; Thu, 01 Aug 2019 19:20:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=qLhYs8BKsJD26dkkDQW7qDqDKRQ6ceEysj+E+zVe/Zg=; b=mO2EsXInu3VlKA104JEnfgx6jDjnSEwvGouzsXnnB6AUhB4nvA/xUkFDDFSnp0kAVb uzJcZIfv98AAj+k04bO7VhQK6uHpuI27M0R+JJmNHqmsuBRinpXiyXzVO3vG2KOnwG10 9wLLSCL2f4bGbhp3gn22p2RGwRVC/ywoZ78m+PYeQh5xx0q2EjBUJcL8fGu/7GpGYNTC uoBxihJiCgimiOEov856ivIQlm2ZtbCKKeyCad06O9lOqE5uy2qRmSTr2cNmPxvRt3hH pW5v3ngGfAjNbaRjAeL1vDJ+MdUfb90y74Jjb2kpsnBznTRKzfySI1iA5+bGvCblKYHE C88Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=qLhYs8BKsJD26dkkDQW7qDqDKRQ6ceEysj+E+zVe/Zg=; b=bJHmj3pEoHpMJ4JynA5rq3DR5x3gSg3G/+GEQ6fOTh3uTLntaB8cExF/as6HsnZjjU D3lQsxUw7fgszuGm13RVC+yFucWwyVe8fxX8yoHkIQXhVhgWZZyGbeukHBXFS9HIueKi WQ33tcoah57QDU8Pe8dQ6FCSGl8HBhTDhj+s3hrd4+RBkFrgBzlHLuXU4N/XdEBMDBan KJWPoi5W+4hUeL3UFcpp4zg2w/Hz2QWwMAGX3rBJrzWWVbf2nu6CSQmxfM9Wmr7+jxfs 9OqK/e81Em5E9KHoWM2Tyg+nAPEUnAqBJKe9sMcL7kPqxaLWhGEkxN4el8GTDrVvtKpV tSKA== X-Gm-Message-State: APjAAAW2wC9JoCoRUTfJe58lG/oA8mB8Jq8ilAre6pWnVi4QKORkwMPo hSqCv6DdQlsPNdgoHGx2MFs= X-Google-Smtp-Source: APXvYqzyCRty8QPAc9XoTrRagrp/y/95vNaIlWT3LNdx/ZaWWXrbiDzYki+gcyf8LhQXoed6UV6NgQ== X-Received: by 2002:a17:902:6b0c:: with SMTP id o12mr26388046plk.113.1564712424200; Thu, 01 Aug 2019 19:20:24 -0700 (PDT) Received: from blueforge.nvidia.com (searspoint.nvidia.com. [216.228.112.21]) by smtp.gmail.com with ESMTPSA id u9sm38179744pgc.5.2019.08.01.19.20.22 (version=TLS1_3 cipher=AEAD-AES256-GCM-SHA384 bits=256/256); Thu, 01 Aug 2019 19:20:23 -0700 (PDT) From: john.hubbard@gmail.com X-Google-Original-From: jhubbard@nvidia.com To: Andrew Morton Subject: [PATCH 08/34] media/ivtv: convert put_page() to put_user_page*() Date: Thu, 1 Aug 2019 19:19:39 -0700 Message-Id: <20190802022005.5117-9-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190802022005.5117-1-jhubbard@nvidia.com> References: <20190802022005.5117-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190801_192024_989355_59FB1227 X-CRM114-Status: GOOD ( 12.89 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-fbdev@vger.kernel.org, Jan Kara , kvm@vger.kernel.org, Dave Hansen , Dave Chinner , dri-devel@lists.freedesktop.org, linux-mm@kvack.org, sparclinux@vger.kernel.org, Ira Weiny , ceph-devel@vger.kernel.org, devel@driverdev.osuosl.org, rds-devel@oss.oracle.com, linux-rdma@vger.kernel.org, x86@kernel.org, amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , xen-devel@lists.xenproject.org, devel@lists.orangefs.org, linux-media@vger.kernel.org, John Hubbard , intel-gfx@lists.freedesktop.org, linux-block@vger.kernel.org, =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , linux-rpi-kernel@lists.infradead.org, Dan Williams , Mauro Carvalho Chehab , linux-arm-kernel@lists.infradead.org, linux-nfs@vger.kernel.org, Andy Walls , netdev@vger.kernel.org, LKML , linux-xfs@vger.kernel.org, linux-crypto@vger.kernel.org, linux-fsdevel@vger.kernel.org Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+infradead-linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: John Hubbard For pages that were retained via get_user_pages*(), release those pages via the new put_user_page*() routines, instead of via put_page() or release_pages(). This is part a tree-wide conversion, as described in commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions"). 
Cc: Andy Walls Cc: Mauro Carvalho Chehab Cc: linux-media@vger.kernel.org Signed-off-by: John Hubbard --- drivers/media/pci/ivtv/ivtv-udma.c | 14 ++++---------- drivers/media/pci/ivtv/ivtv-yuv.c | 10 +++------- 2 files changed, 7 insertions(+), 17 deletions(-) diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c index 5f8883031c9c..7c7f33c2412b 100644 --- a/drivers/media/pci/ivtv/ivtv-udma.c +++ b/drivers/media/pci/ivtv/ivtv-udma.c @@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, { struct ivtv_dma_page_info user_dma; struct ivtv_user_dma *dma = &itv->udma; - int i, err; + int err; IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned int)ivtv_dest_addr); @@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n", err, user_dma.page_count); if (err >= 0) { - for (i = 0; i < err; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, err); return -EINVAL; } return err; @@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, /* Fill SG List with new values */ if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) { - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } @@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, void ivtv_udma_unmap(struct ivtv *itv) { struct ivtv_user_dma *dma = &itv->udma; - int i; IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n"); @@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv) ivtv_udma_sync_for_cpu(itv); /* Release User Pages */ - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; } diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c index cd2fe2d444c0..9465a7d450b6 100644 --- a/drivers/media/pci/ivtv/ivtv-yuv.c +++ b/drivers/media/pci/ivtv/ivtv-yuv.c @@ -81,8 +81,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, uv_pages, uv_dma.page_count); if (uv_pages >= 0) { - for (i = 0; i < uv_pages; i++) - put_page(dma->map[y_pages + i]); + put_user_pages(&dma->map[y_pages], uv_pages); rc = -EFAULT; } else { rc = uv_pages; @@ -93,8 +92,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, y_pages, y_dma.page_count); } if (y_pages >= 0) { - for (i = 0; i < y_pages; i++) - put_page(dma->map[i]); + put_user_pages(dma->map, y_pages); /* * Inherit the -EFAULT from rc's * initialization, but allow it to be @@ -112,9 +110,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, /* Fill & map SG List */ if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, &y_dma, 0)) < 0) { IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem userspace buffers\n"); - for (i = 0; i < dma->page_count; i++) { - put_page(dma->map[i]); - } + put_user_pages(dma->map, dma->page_count); dma->page_count = 0; return -ENOMEM; } -- 2.22.0 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.6 required=3.0 
tests=DKIM_ADSP_CUSTOM_MED, DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E205AC433FF for ; Fri, 2 Aug 2019 04:15:08 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B66A62073D for ; Fri, 2 Aug 2019 04:15:08 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="mO2EsXIn" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B66A62073D Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1htOxX-0005qx-5z; Fri, 02 Aug 2019 04:14:27 +0000 Received: from us1-rack-dfw2.inumbo.com ([104.130.134.6]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1htNBB-00074s-KP for xen-devel@lists.xenproject.org; Fri, 02 Aug 2019 02:20:25 +0000 X-Inumbo-ID: 15d87c58-b4cc-11e9-8980-bc764e045a96 Received: from mail-pl1-x643.google.com (unknown [2607:f8b0:4864:20::643]) by us1-rack-dfw2.inumbo.com (Halon) with ESMTPS id 15d87c58-b4cc-11e9-8980-bc764e045a96; Fri, 02 Aug 2019 02:20:24 +0000 (UTC) Received: by mail-pl1-x643.google.com with SMTP id a93so32956307pla.7 for ; Thu, 01 Aug 2019 19:20:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=qLhYs8BKsJD26dkkDQW7qDqDKRQ6ceEysj+E+zVe/Zg=; b=mO2EsXInu3VlKA104JEnfgx6jDjnSEwvGouzsXnnB6AUhB4nvA/xUkFDDFSnp0kAVb uzJcZIfv98AAj+k04bO7VhQK6uHpuI27M0R+JJmNHqmsuBRinpXiyXzVO3vG2KOnwG10 9wLLSCL2f4bGbhp3gn22p2RGwRVC/ywoZ78m+PYeQh5xx0q2EjBUJcL8fGu/7GpGYNTC uoBxihJiCgimiOEov856ivIQlm2ZtbCKKeyCad06O9lOqE5uy2qRmSTr2cNmPxvRt3hH pW5v3ngGfAjNbaRjAeL1vDJ+MdUfb90y74Jjb2kpsnBznTRKzfySI1iA5+bGvCblKYHE C88Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=qLhYs8BKsJD26dkkDQW7qDqDKRQ6ceEysj+E+zVe/Zg=; b=UAAGNCRxLEVNIIiHLbWJZvfFXbeMjNZThVwTpGu+Ap5rGK4nnYl2eu+4yVkRufpJoj THRGhy3zeGwxIGu5GlyyQZEEGamBNkW4qBQ2VWy+t7fqNVd05bJGJFsc1XB4zgKit8B5 Ng4EmuKqZyBKJA3PzRp5r/YXd0rT3CmQVPgJi3w2bXnwh+DeP322hBBRuZ/bAxOwGia2 K/HHfmeaWX19nyRl+vhn7ZWUjTAONST1kqwdMoFtRvV2PVbUbf9Xrl0xeO/cfQlAB7tC FoZgzFXrREJ6uTN0jUsdbkTFT74tlfQ/bVKSBYLNGXMfVObCPikS5ceRTXY84KcHXOD+ +69g== X-Gm-Message-State: APjAAAXccWI2Y2V5eCEberiyrD0HiGy4F9MTioVaQprjM92ERgomdTPr nbIu1m2yoeAjQadpwJ2iSdc= X-Google-Smtp-Source: APXvYqzyCRty8QPAc9XoTrRagrp/y/95vNaIlWT3LNdx/ZaWWXrbiDzYki+gcyf8LhQXoed6UV6NgQ== X-Received: by 2002:a17:902:6b0c:: with SMTP id o12mr26388046plk.113.1564712424200; Thu, 01 Aug 2019 19:20:24 -0700 (PDT) Received: from blueforge.nvidia.com (searspoint.nvidia.com. 
[216.228.112.21]) by smtp.gmail.com with ESMTPSA id u9sm38179744pgc.5.2019.08.01.19.20.22 (version=TLS1_3 cipher=AEAD-AES256-GCM-SHA384 bits=256/256); Thu, 01 Aug 2019 19:20:23 -0700 (PDT) From: john.hubbard@gmail.com X-Google-Original-From: jhubbard@nvidia.com To: Andrew Morton Date: Thu, 1 Aug 2019 19:19:39 -0700 Message-Id: <20190802022005.5117-9-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190802022005.5117-1-jhubbard@nvidia.com> References: <20190802022005.5117-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public X-Mailman-Approved-At: Fri, 02 Aug 2019 04:14:22 +0000 Subject: [Xen-devel] [PATCH 08/34] media/ivtv: convert put_page() to put_user_page*() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: linux-fbdev@vger.kernel.org, Jan Kara , kvm@vger.kernel.org, Dave Hansen , Dave Chinner , dri-devel@lists.freedesktop.org, linux-mm@kvack.org, sparclinux@vger.kernel.org, Ira Weiny , ceph-devel@vger.kernel.org, devel@driverdev.osuosl.org, rds-devel@oss.oracle.com, linux-rdma@vger.kernel.org, x86@kernel.org, amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , xen-devel@lists.xenproject.org, devel@lists.orangefs.org, linux-media@vger.kernel.org, John Hubbard , intel-gfx@lists.freedesktop.org, linux-block@vger.kernel.org, =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , linux-rpi-kernel@lists.infradead.org, Dan Williams , Mauro Carvalho Chehab , linux-arm-kernel@lists.infradead.org, linux-nfs@vger.kernel.org, Andy Walls , netdev@vger.kernel.org, LKML , linux-xfs@vger.kernel.org, linux-crypto@vger.kernel.org, linux-fsdevel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" RnJvbTogSm9obiBIdWJiYXJkIDxqaHViYmFyZEBudmlkaWEuY29tPgoKRm9yIHBhZ2VzIHRoYXQg d2VyZSByZXRhaW5lZCB2aWEgZ2V0X3VzZXJfcGFnZXMqKCksIHJlbGVhc2UgdGhvc2UgcGFnZXMK dmlhIHRoZSBuZXcgcHV0X3VzZXJfcGFnZSooKSByb3V0aW5lcywgaW5zdGVhZCBvZiB2aWEgcHV0 X3BhZ2UoKSBvcgpyZWxlYXNlX3BhZ2VzKCkuCgpUaGlzIGlzIHBhcnQgYSB0cmVlLXdpZGUgY29u dmVyc2lvbiwgYXMgZGVzY3JpYmVkIGluIGNvbW1pdCBmYzFkOGU3Y2NhMmQKKCJtbTogaW50cm9k dWNlIHB1dF91c2VyX3BhZ2UqKCksIHBsYWNlaG9sZGVyIHZlcnNpb25zIikuCgpDYzogQW5keSBX YWxscyA8YXdhbGxzQG1kLm1ldHJvY2FzdC5uZXQ+CkNjOiBNYXVybyBDYXJ2YWxobyBDaGVoYWIg PG1jaGVoYWJAa2VybmVsLm9yZz4KQ2M6IGxpbnV4LW1lZGlhQHZnZXIua2VybmVsLm9yZwpTaWdu ZWQtb2ZmLWJ5OiBKb2huIEh1YmJhcmQgPGpodWJiYXJkQG52aWRpYS5jb20+Ci0tLQogZHJpdmVy cy9tZWRpYS9wY2kvaXZ0di9pdnR2LXVkbWEuYyB8IDE0ICsrKystLS0tLS0tLS0tCiBkcml2ZXJz L21lZGlhL3BjaS9pdnR2L2l2dHYteXV2LmMgIHwgMTAgKysrLS0tLS0tLQogMiBmaWxlcyBjaGFu Z2VkLCA3IGluc2VydGlvbnMoKyksIDE3IGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL2RyaXZl cnMvbWVkaWEvcGNpL2l2dHYvaXZ0di11ZG1hLmMgYi9kcml2ZXJzL21lZGlhL3BjaS9pdnR2L2l2 dHYtdWRtYS5jCmluZGV4IDVmODg4MzAzMWM5Yy4uN2M3ZjMzYzI0MTJiIDEwMDY0NAotLS0gYS9k cml2ZXJzL21lZGlhL3BjaS9pdnR2L2l2dHYtdWRtYS5jCisrKyBiL2RyaXZlcnMvbWVkaWEvcGNp L2l2dHYvaXZ0di11ZG1hLmMKQEAgLTkyLDcgKzkyLDcgQEAgaW50IGl2dHZfdWRtYV9zZXR1cChz dHJ1Y3QgaXZ0diAqaXR2LCB1bnNpZ25lZCBsb25nIGl2dHZfZGVzdF9hZGRyLAogewogCXN0cnVj dCBpdnR2X2RtYV9wYWdlX2luZm8gdXNlcl9kbWE7CiAJc3RydWN0IGl2dHZfdXNlcl9kbWEgKmRt YSA9ICZpdHYtPnVkbWE7Ci0JaW50IGksIGVycjsKKwlpbnQgZXJyOwogCiAJSVZUVl9ERUJVR19E TUEoIml2dHZfdWRtYV9zZXR1cCwgZHN0OiAweCUwOHhcbiIsICh1bnNpZ25lZCBpbnQpaXZ0dl9k ZXN0X2FkZHIpOwogCkBAIC0xMTksOCArMTE5LDcgQEAgaW50IGl2dHZfdWRtYV9zZXR1cChzdHJ1 
Y3QgaXZ0diAqaXR2LCB1bnNpZ25lZCBsb25nIGl2dHZfZGVzdF9hZGRyLAogCQlJVlRWX0RFQlVH X1dBUk4oImZhaWxlZCB0byBtYXAgdXNlciBwYWdlcywgcmV0dXJuZWQgJWQgaW5zdGVhZCBvZiAl ZFxuIiwKIAkJCSAgIGVyciwgdXNlcl9kbWEucGFnZV9jb3VudCk7CiAJCWlmIChlcnIgPj0gMCkg ewotCQkJZm9yIChpID0gMDsgaSA8IGVycjsgaSsrKQotCQkJCXB1dF9wYWdlKGRtYS0+bWFwW2ld KTsKKwkJCXB1dF91c2VyX3BhZ2VzKGRtYS0+bWFwLCBlcnIpOwogCQkJcmV0dXJuIC1FSU5WQUw7 CiAJCX0KIAkJcmV0dXJuIGVycjsKQEAgLTEzMCw5ICsxMjksNyBAQCBpbnQgaXZ0dl91ZG1hX3Nl dHVwKHN0cnVjdCBpdnR2ICppdHYsIHVuc2lnbmVkIGxvbmcgaXZ0dl9kZXN0X2FkZHIsCiAKIAkv KiBGaWxsIFNHIExpc3Qgd2l0aCBuZXcgdmFsdWVzICovCiAJaWYgKGl2dHZfdWRtYV9maWxsX3Nn X2xpc3QoZG1hLCAmdXNlcl9kbWEsIDApIDwgMCkgewotCQlmb3IgKGkgPSAwOyBpIDwgZG1hLT5w YWdlX2NvdW50OyBpKyspIHsKLQkJCXB1dF9wYWdlKGRtYS0+bWFwW2ldKTsKLQkJfQorCQlwdXRf dXNlcl9wYWdlcyhkbWEtPm1hcCwgZG1hLT5wYWdlX2NvdW50KTsKIAkJZG1hLT5wYWdlX2NvdW50 ID0gMDsKIAkJcmV0dXJuIC1FTk9NRU07CiAJfQpAQCAtMTUzLDcgKzE1MCw2IEBAIGludCBpdnR2 X3VkbWFfc2V0dXAoc3RydWN0IGl2dHYgKml0diwgdW5zaWduZWQgbG9uZyBpdnR2X2Rlc3RfYWRk ciwKIHZvaWQgaXZ0dl91ZG1hX3VubWFwKHN0cnVjdCBpdnR2ICppdHYpCiB7CiAJc3RydWN0IGl2 dHZfdXNlcl9kbWEgKmRtYSA9ICZpdHYtPnVkbWE7Ci0JaW50IGk7CiAKIAlJVlRWX0RFQlVHX0lO Rk8oIml2dHZfdW5tYXBfdXNlcl9kbWFcbiIpOwogCkBAIC0xNzAsOSArMTY2LDcgQEAgdm9pZCBp dnR2X3VkbWFfdW5tYXAoc3RydWN0IGl2dHYgKml0dikKIAlpdnR2X3VkbWFfc3luY19mb3JfY3B1 KGl0dik7CiAKIAkvKiBSZWxlYXNlIFVzZXIgUGFnZXMgKi8KLQlmb3IgKGkgPSAwOyBpIDwgZG1h LT5wYWdlX2NvdW50OyBpKyspIHsKLQkJcHV0X3BhZ2UoZG1hLT5tYXBbaV0pOwotCX0KKwlwdXRf dXNlcl9wYWdlcyhkbWEtPm1hcCwgZG1hLT5wYWdlX2NvdW50KTsKIAlkbWEtPnBhZ2VfY291bnQg PSAwOwogfQogCmRpZmYgLS1naXQgYS9kcml2ZXJzL21lZGlhL3BjaS9pdnR2L2l2dHYteXV2LmMg Yi9kcml2ZXJzL21lZGlhL3BjaS9pdnR2L2l2dHYteXV2LmMKaW5kZXggY2QyZmUyZDQ0NGMwLi45 NDY1YTdkNDUwYjYgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvbWVkaWEvcGNpL2l2dHYvaXZ0di15dXYu YworKysgYi9kcml2ZXJzL21lZGlhL3BjaS9pdnR2L2l2dHYteXV2LmMKQEAgLTgxLDggKzgxLDcg QEAgc3RhdGljIGludCBpdnR2X3l1dl9wcmVwX3VzZXJfZG1hKHN0cnVjdCBpdnR2ICppdHYsIHN0 cnVjdCBpdnR2X3VzZXJfZG1hICpkbWEsCiAJCQkJIHV2X3BhZ2VzLCB1dl9kbWEucGFnZV9jb3Vu dCk7CiAKIAkJCWlmICh1dl9wYWdlcyA+PSAwKSB7Ci0JCQkJZm9yIChpID0gMDsgaSA8IHV2X3Bh Z2VzOyBpKyspCi0JCQkJCXB1dF9wYWdlKGRtYS0+bWFwW3lfcGFnZXMgKyBpXSk7CisJCQkJcHV0 X3VzZXJfcGFnZXMoJmRtYS0+bWFwW3lfcGFnZXNdLCB1dl9wYWdlcyk7CiAJCQkJcmMgPSAtRUZB VUxUOwogCQkJfSBlbHNlIHsKIAkJCQlyYyA9IHV2X3BhZ2VzOwpAQCAtOTMsOCArOTIsNyBAQCBz dGF0aWMgaW50IGl2dHZfeXV2X3ByZXBfdXNlcl9kbWEoc3RydWN0IGl2dHYgKml0diwgc3RydWN0 IGl2dHZfdXNlcl9kbWEgKmRtYSwKIAkJCQkgeV9wYWdlcywgeV9kbWEucGFnZV9jb3VudCk7CiAJ CX0KIAkJaWYgKHlfcGFnZXMgPj0gMCkgewotCQkJZm9yIChpID0gMDsgaSA8IHlfcGFnZXM7IGkr KykKLQkJCQlwdXRfcGFnZShkbWEtPm1hcFtpXSk7CisJCQlwdXRfdXNlcl9wYWdlcyhkbWEtPm1h cCwgeV9wYWdlcyk7CiAJCQkvKgogCQkJICogSW5oZXJpdCB0aGUgLUVGQVVMVCBmcm9tIHJjJ3MK IAkJCSAqIGluaXRpYWxpemF0aW9uLCBidXQgYWxsb3cgaXQgdG8gYmUKQEAgLTExMiw5ICsxMTAs NyBAQCBzdGF0aWMgaW50IGl2dHZfeXV2X3ByZXBfdXNlcl9kbWEoc3RydWN0IGl2dHYgKml0diwg c3RydWN0IGl2dHZfdXNlcl9kbWEgKmRtYSwKIAkvKiBGaWxsICYgbWFwIFNHIExpc3QgKi8KIAlp ZiAoaXZ0dl91ZG1hX2ZpbGxfc2dfbGlzdCAoZG1hLCAmdXZfZG1hLCBpdnR2X3VkbWFfZmlsbF9z Z19saXN0IChkbWEsICZ5X2RtYSwgMCkpIDwgMCkgewogCQlJVlRWX0RFQlVHX1dBUk4oImNvdWxk IG5vdCBhbGxvY2F0ZSBib3VuY2UgYnVmZmVycyBmb3IgaGlnaG1lbSB1c2Vyc3BhY2UgYnVmZmVy c1xuIik7Ci0JCWZvciAoaSA9IDA7IGkgPCBkbWEtPnBhZ2VfY291bnQ7IGkrKykgewotCQkJcHV0 X3BhZ2UoZG1hLT5tYXBbaV0pOwotCQl9CisJCXB1dF91c2VyX3BhZ2VzKGRtYS0+bWFwLCBkbWEt PnBhZ2VfY291bnQpOwogCQlkbWEtPnBhZ2VfY291bnQgPSAwOwogCQlyZXR1cm4gLUVOT01FTTsK IAl9Ci0tIAoyLjIyLjAKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0 
Lm9yZwpodHRwczovL2xpc3RzLnhlbnByb2plY3Qub3JnL21haWxtYW4vbGlzdGluZm8veGVuLWRl dmVs