From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH,
	DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,
	MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT
	autolearn=unavailable autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 848A5C433E1
	for ; Mon, 17 Aug 2020 17:02:55 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id 5E24820716
	for ; Mon, 17 Aug 2020 17:02:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default;
	t=1597683775; bh=CfZVTiWmvsNIp/eM2/UdKh3fA5wDsnpRGCYMfhKh0fk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From;
	b=U1pQ2kESw2E71BFUxgAiaPguRb+FXp3jmuY97USoepvwDzX8qfJxddWSlpxBuqexk
	 KSi1a8Vde+I4hGsCZ0jnin54IPIXBCIVtXSiaT7uwiRip8bHAp1qP+tehnM+ZjJXgq
	 W+vqIxOOHMY7FJ4tQOPHkrnk9Z0zOEYtIdGtzyw0=
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1731204AbgHQRCx (ORCPT );
	Mon, 17 Aug 2020 13:02:53 -0400
Received: from mail.kernel.org ([198.145.29.99]:57766 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387959AbgHQQI0
	(ORCPT ); Mon, 17 Aug 2020 12:08:26 -0400
Received: from localhost (83-86-89-107.cable.dynamic.v4.ziggo.nl [83.86.89.107])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 372FC20825;
	Mon, 17 Aug 2020 16:08:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default;
	t=1597680505; bh=CfZVTiWmvsNIp/eM2/UdKh3fA5wDsnpRGCYMfhKh0fk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=xGR1xWN47cXKWUiX9vl5xdlKiCASCGpdT3w1k3XjIkyfm4O0Ox8jRpee+d3yRzmRu
	 MLvm+uI4zxssaHXp2nc9K6P2VAZqVjGhQrbXQOkFWLGoZJPDjwD5fa/RegWokRGdAy
	 P6SFQz+2QpslDGNw7RKBR5r1qOtxqVT5vuYVruLk=
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chuck Lever, Sasha Levin
Subject: [PATCH 5.4 213/270] svcrdma: Fix page leak in svc_rdma_recv_read_chunk()
Date: Mon, 17 Aug 2020 17:16:54 +0200
Message-Id: <20200817143806.388010888@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200817143755.807583758@linuxfoundation.org>
References: <20200817143755.807583758@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

From: Chuck Lever

[ Upstream commit e814eecbe3bbeaa8b004d25a4b8974d232b765a9 ]

Commit 07d0ff3b0cd2 ("svcrdma: Clean up Read chunk path") moved the
page saver logic so that it gets executed even when an error occurs.
In that case, the I/O is never posted, and those pages are then
leaked. Errors in this path, however, are quite rare.
Fixes: 07d0ff3b0cd2 ("svcrdma: Clean up Read chunk path")
Signed-off-by: Chuck Lever
Signed-off-by: Sasha Levin
---
 net/sunrpc/xprtrdma/svc_rdma_rw.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index 066af6b2eb012..0bb3f0dca80d6 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -677,7 +677,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
 				     struct svc_rdma_read_info *info,
 				     __be32 *p)
 {
-	unsigned int i;
 	int ret;
 
 	ret = -EINVAL;
@@ -700,12 +699,6 @@ static int svc_rdma_build_read_chunk(struct svc_rqst *rqstp,
 		info->ri_chunklen += rs_length;
 	}
 
-	/* Pages under I/O have been copied to head->rc_pages.
-	 * Prevent their premature release by svc_xprt_release() .
-	 */
-	for (i = 0; i < info->ri_readctxt->rc_page_count; i++)
-		rqstp->rq_pages[i] = NULL;
-
 	return ret;
 }
 
@@ -800,6 +793,26 @@ static int svc_rdma_build_pz_read_chunk(struct svc_rqst *rqstp,
 	return ret;
 }
 
+/* Pages under I/O have been copied to head->rc_pages. Ensure they
+ * are not released by svc_xprt_release() until the I/O is complete.
+ *
+ * This has to be done after all Read WRs are constructed to properly
+ * handle a page that is part of I/O on behalf of two different RDMA
+ * segments.
+ *
+ * Do this only if I/O has been posted. Otherwise, we do indeed want
+ * svc_xprt_release() to clean things up properly.
+ */
+static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
+				   const unsigned int start,
+				   const unsigned int num_pages)
+{
+	unsigned int i;
+
+	for (i = start; i < num_pages + start; i++)
+		rqstp->rq_pages[i] = NULL;
+}
+
 /**
  * svc_rdma_recv_read_chunk - Pull a Read chunk from the client
  * @rdma: controlling RDMA transport
@@ -853,6 +866,7 @@ int svc_rdma_recv_read_chunk(struct svcxprt_rdma *rdma, struct svc_rqst *rqstp,
 	ret = svc_rdma_post_chunk_ctxt(&info->ri_cc);
 	if (ret < 0)
 		goto out_err;
+	svc_rdma_save_io_pages(rqstp, 0, head->rc_page_count);
 	return 0;
 
 out_err:
-- 
2.25.1
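
For readers without the kernel tree at hand, here is a small userspace sketch
of the ownership-handoff pattern the patch restores: the request's page
pointers are cleared only after the I/O has actually been posted, so an error
before the post leaves the pages where the generic release path can still free
them. Everything in the sketch (struct request, struct read_ctxt,
fake_post_io(), save_io_pages(), release_request(), complete_io()) is a
hypothetical stand-in, not kernel or sunrpc API.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_PAGES 4

/* Hypothetical stand-in for the rqstp->rq_pages array. */
struct request {
	void *rq_pages[NUM_PAGES];
};

/* Hypothetical stand-in for head->rc_pages: pages the posted I/O owns. */
struct read_ctxt {
	void *rc_pages[NUM_PAGES];
	unsigned int rc_page_count;
};

/* Pretend to post the Read WRs: on success the context keeps its own
 * references to the pages; on failure nothing is handed off. */
static int fake_post_io(struct read_ctxt *head, struct request *rq,
			int should_fail)
{
	unsigned int i;

	if (should_fail)
		return -EINVAL;
	for (i = 0; i < NUM_PAGES; i++)
		head->rc_pages[i] = rq->rq_pages[i];
	head->rc_page_count = NUM_PAGES;
	return 0;
}

/* Same idea as svc_rdma_save_io_pages(): detach the pages now owned by
 * the in-flight I/O so the generic release path leaves them alone. */
static void save_io_pages(struct request *rq, unsigned int start,
			  unsigned int num_pages)
{
	unsigned int i;

	for (i = start; i < start + num_pages; i++)
		rq->rq_pages[i] = NULL;
}

/* Models svc_xprt_release(): frees whatever the request still owns. */
static void release_request(struct request *rq)
{
	unsigned int i;

	for (i = 0; i < NUM_PAGES; i++)
		free(rq->rq_pages[i]);	/* free(NULL) is a no-op */
}

/* Models I/O completion: the context releases the pages it took over. */
static void complete_io(struct read_ctxt *head)
{
	unsigned int i;

	for (i = 0; i < head->rc_page_count; i++)
		free(head->rc_pages[i]);
}

int main(void)
{
	struct read_ctxt head = { .rc_page_count = 0 };
	struct request rq;
	unsigned int i;
	int ret;

	for (i = 0; i < NUM_PAGES; i++)
		rq.rq_pages[i] = malloc(64);

	/* Set should_fail to 0 to simulate a successful post. */
	ret = fake_post_io(&head, &rq, /* should_fail */ 1);

	/* The fix: hand page ownership to the I/O only after a
	 * successful post, once, after all WRs are built. */
	if (ret == 0)
		save_io_pages(&rq, 0, head.rc_page_count);

	/* Error path: the pages were never handed off, so the release
	 * below frees them and nothing leaks. Success path: they are
	 * NULL here and complete_io() frees them instead. */
	release_request(&rq);
	complete_io(&head);

	printf("post returned %d\n", ret);
	return 0;
}

Running it with should_fail set to 1 exercises the error path: release_request()
frees every page and nothing leaks, which is what calling
svc_rdma_save_io_pages() only after a successful svc_rdma_post_chunk_ctxt()
guarantees in the real code.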