From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mathias Nyman
To:
Cc: , Mathias Nyman, stable@vger.kernel.org, Andreas Hartmann
Subject: [PATCH 1/1] xhci: fix bounce buffer usage for non-sg list case
Date: Wed, 3 Feb 2021 13:37:02 +0200
Message-Id: <20210203113702.436762-2-mathias.nyman@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210203113702.436762-1-mathias.nyman@linux.intel.com>
References: <20210203113702.436762-1-mathias.nyman@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

The xhci driver may in some special cases need to copy small amounts of
payload data to a bounce buffer in order to meet the boundary and
alignment restrictions set by the xHCI specification.

In the majority of these cases the data is in a sg list, and the driver
incorrectly assumed the data is always in urb->sg when using the bounce
buffer. If the data is instead contiguous, in urb->transfer_buffer, we may
still need to bounce-buffer a small part of it if the data starts very
close (less than one packet size) to a 64k boundary.

Check whether a sg list is used before copying data to/from it.
Fixes: f9c589e142d0 ("xhci: TD-fragment, align the unsplittable case with a bounce buffer")
Cc: stable@vger.kernel.org
Reported-by: Andreas Hartmann
Tested-by: Andreas Hartmann
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index cf0c93a90200..89c3be9917f6 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -699,11 +699,16 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
 	dma_unmap_single(dev, seg->bounce_dma, ring->bounce_buf_len,
 			 DMA_FROM_DEVICE);
 	/* for in tranfers we need to copy the data from bounce to sg */
-	len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
-			     seg->bounce_len, seg->bounce_offs);
-	if (len != seg->bounce_len)
-		xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
-				len, seg->bounce_len);
+	if (urb->num_sgs) {
+		len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
+					   seg->bounce_len, seg->bounce_offs);
+		if (len != seg->bounce_len)
+			xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
+				  len, seg->bounce_len);
+	} else {
+		memcpy(urb->transfer_buffer + seg->bounce_offs, seg->bounce_buf,
+		       seg->bounce_len);
+	}
 	seg->bounce_len = 0;
 	seg->bounce_offs = 0;
 }
@@ -3277,12 +3282,16 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
 
 	/* create a max max_pkt sized bounce buffer pointed to by last trb */
 	if (usb_urb_dir_out(urb)) {
-		len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
-				   seg->bounce_buf, new_buff_len, enqd_len);
-		if (len != new_buff_len)
-			xhci_warn(xhci,
-				"WARN Wrong bounce buffer write length: %zu != %d\n",
-				len, new_buff_len);
+		if (urb->num_sgs) {
+			len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+						 seg->bounce_buf, new_buff_len, enqd_len);
+			if (len != new_buff_len)
+				xhci_warn(xhci, "WARN Wrong bounce buffer write length: %zu != %d\n",
+					  len, new_buff_len);
+		} else {
+			memcpy(seg->bounce_buf, urb->transfer_buffer + enqd_len, new_buff_len);
+		}
+
 		seg->bounce_dma = dma_map_single(dev, seg->bounce_buf,
 						 max_pkt, DMA_TO_DEVICE);
 	} else {
-- 
2.25.1
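
For context, the copy-path choice the patch introduces can be shown with a
small standalone sketch (not kernel code; the struct and function names
below are made up for illustration, only sg_pcopy_to_buffer(), memcpy() and
the urb field names come from the patch): with a scatter-gather URB the
unaligned tail of an OUT transfer has to be gathered from the sg list,
while a contiguous urb->transfer_buffer only needs a plain memcpy() from
the already-enqueued offset.

/* Standalone sketch of the OUT-direction bounce copy; "pseudo_urb" and
 * "fill_bounce_buffer" are illustrative names, not xhci driver symbols. */
#include <string.h>
#include <stddef.h>

struct pseudo_urb {
	void *transfer_buffer;	/* contiguous payload, used when num_sgs == 0 */
	int num_sgs;		/* number of scatter-gather entries, 0 if none */
	/* the real struct urb also carries urb->sg, omitted in this sketch */
};

/* Copy the unaligned tail (new_buff_len bytes starting at offset enqd_len)
 * into the bounce buffer before it is DMA-mapped. */
static size_t fill_bounce_buffer(struct pseudo_urb *urb, void *bounce_buf,
				 size_t enqd_len, size_t new_buff_len)
{
	if (urb->num_sgs) {
		/* sg case: the real driver gathers the tail with
		 * sg_pcopy_to_buffer(urb->sg, urb->num_sgs, bounce_buf,
		 *                    new_buff_len, enqd_len); */
		return 0;	/* sg walk elided in this sketch */
	}

	/* contiguous case handled by this patch: copy straight from the buffer */
	memcpy(bounce_buf, (char *)urb->transfer_buffer + enqd_len,
	       new_buff_len);
	return new_buff_len;
}

The IN direction in the patch is symmetric: sg_pcopy_from_buffer() scatters
the bounce buffer back into the sg list, while the contiguous case copies
to urb->transfer_buffer + seg->bounce_offs.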