From: John Hubbard
To: Andrew Morton
Cc: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller,
    Ira Weiny, Jan Kara, Jason Gunthorpe, Jens Axboe, Jonathan Corbet,
    Jérôme Glisse, Magnus Karlsson, Mauro Carvalho Chehab,
    Michael Ellerman, Michal Hocko, Mike Kravetz, Paul Mackerras,
    Shuah Khan, Vlastimil Babka, LKML, John Hubbard,
    Christoph Hellwig, Hans Verkuil
Subject: [PATCH v2 14/19] media/v4l2-core: set pages dirty upon releasing DMA buffers
Date: Mon, 25 Nov 2019 15:10:30 -0800
Message-ID: <20191125231035.1539120-15-jhubbard@nvidia.com>
In-Reply-To: <20191125231035.1539120-1-jhubbard@nvidia.com>
References: <20191125231035.1539120-1-jhubbard@nvidia.com>

After DMA is complete, and the device and CPU caches are synchronized,
it's still required to mark the CPU pages as dirty if the data came
from the device. However, this driver was just issuing a bare
put_page() call, without any set_page_dirty*() call.

Fix the problem by calling set_page_dirty_lock() if the CPU pages
potentially received data from the device.

Reviewed-by: Christoph Hellwig
Acked-by: Hans Verkuil
Cc: Mauro Carvalho Chehab
Cc:
Signed-off-by: John Hubbard
---
 drivers/media/v4l2-core/videobuf-dma-sg.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 66a6c6c236a7..28262190c3ab 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -349,8 +349,11 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
 	BUG_ON(dma->sglen);
 
 	if (dma->pages) {
-		for (i = 0; i < dma->nr_pages; i++)
+		for (i = 0; i < dma->nr_pages; i++) {
+			if (dma->direction == DMA_FROM_DEVICE)
+				set_page_dirty_lock(dma->pages[i]);
 			put_page(dma->pages[i]);
+		}
 		kfree(dma->pages);
 		dma->pages = NULL;
 	}
-- 
2.24.0
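
[Editor's note: the pattern this patch enforces applies to any driver
that pins user pages for device-to-CPU DMA. Below is a minimal,
self-contained sketch of that release-side pattern; the helper name
release_user_pages_dirty() is hypothetical and not part of the patch,
and it assumes the pages were pinned earlier with get_user_pages().]

#include <linux/mm.h>
#include <linux/dma-direction.h>

/*
 * Hypothetical helper (illustration only, not in the patch): release
 * pages previously pinned with get_user_pages(). If the device may
 * have written into them (DMA_FROM_DEVICE), mark each page dirty
 * before dropping the reference, so the data is not silently lost
 * when the pages back a file mapping.
 */
static void release_user_pages_dirty(struct page **pages, int nr_pages,
				     enum dma_data_direction dir)
{
	int i;

	for (i = 0; i < nr_pages; i++) {
		if (dir == DMA_FROM_DEVICE)
			set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}

set_page_dirty_lock() is used rather than set_page_dirty() because a
release path like this does not already hold the page lock; the _lock
variant takes and drops it internally.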