From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 5 Apr 2021 21:02:29 +0000
In-Reply-To: <20210405210230.1707074-1-jxgao@google.com>
Message-Id: <20210405210230.1707074-8-jxgao@google.com>
Mime-Version: 1.0
References: <20210405210230.1707074-1-jxgao@google.com>
X-Mailer: git-send-email 2.31.0.208.g409f899ff0-goog
Subject: [PATCH v5.10 7/8] swiotlb: respect min_align_mask
From: Jianxiong Gao
To: stable@vger.kernel.org
Cc: Jianxiong Gao, Christoph Hellwig, Konrad Rzeszutek Wilk
Content-Type: text/plain; charset="UTF-8"
From: Christoph Hellwig

commit 1f221a0d0dbf ("swiotlb: respect min_align_mask") upstream.

Respect the min_align_mask in struct device_dma_parameters in swiotlb.

There are two parts to it:
 1) for the lower bits of the alignment inside the io tlb slot, just
    extend the size of the allocation and leave the start of the slot
    empty
 2) for the high bits ensure we find a slot that matches the high bits
    of the alignment to avoid wasting too much memory

Based on an earlier patch from Jianxiong Gao.

Signed-off-by: Christoph Hellwig
Acked-by: Jianxiong Gao
Tested-by: Jianxiong Gao
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Jianxiong Gao
---
 kernel/dma/swiotlb.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)
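For illustration only, not part of the patch: the change splits the
device's min_align_mask into a low part, kept as a byte offset inside
the first IO TLB slot, and a high part that the chosen slot address
must match. Below is a standalone user-space sketch of that arithmetic,
assuming IO_TLB_SHIFT = 11 (so IO_TLB_SIZE = 2048, as in this tree) and
a hypothetical NVMe-style mask of 4 KiB - 1; the address value is made
up for the example.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define IO_TLB_SHIFT	11			/* slot size shift in this tree */
#define IO_TLB_SIZE	(1 << IO_TLB_SHIFT)	/* 2048-byte slots */

int main(void)
{
	uint64_t min_align_mask = 0xfff;	/* device wants addr % 4096 preserved */
	uint64_t orig_addr = 0x12345674;	/* hypothetical buffer address */

	/* Low bits: offset into the first slot (swiotlb_align_offset()). */
	uint64_t offset = orig_addr & min_align_mask & (IO_TLB_SIZE - 1);

	/* High bits: mask a candidate slot address must match
	 * (iotlb_align_mask in find_slots()). */
	uint64_t slot_mask = min_align_mask & ~(uint64_t)(IO_TLB_SIZE - 1);

	printf("offset into slot: 0x%" PRIx64 "\n", offset);	/* 0x674 */
	printf("slot match mask:  0x%" PRIx64 "\n", slot_mask);	/* 0x800 */
	return 0;
}

Any slot that passes the high-bit check then yields
tlb_addr = slot_addr + offset, so tlb_addr % 4096 == orig_addr % 4096,
which is exactly what min_align_mask promises the device.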
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d1349971f099..ba6e9269a1cc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -453,6 +453,15 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 }
 
 #define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
+
+/*
+ * Return the offset into a iotlb slot required to keep the device happy.
+ */
+static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
+{
+	return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
+}
+
 /*
  * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
  */
@@ -474,23 +483,27 @@ static unsigned int wrap_index(unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, size_t alloc_size)
+static int find_slots(struct device *dev, phys_addr_t orig_addr, size_t alloc_size)
 {
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, io_tlb_start) & boundary_mask;
 	unsigned long max_slots = get_max_slots(boundary_mask);
-	unsigned int nslots = nr_slots(alloc_size), stride = 1;
+	unsigned int iotlb_align_mask =
+		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
+	unsigned int nslots = nr_slots(alloc_size), stride;
 	unsigned int index, wrap, count = 0, i;
 	unsigned long flags;
 
 	BUG_ON(!nslots);
 
 	/*
-	 * For mappings greater than or equal to a page, we limit the stride
-	 * (and hence alignment) to a page size.
+	 * For mappings with an alignment requirement don't bother looping to
+	 * unaligned slots once we found an aligned one. For allocations of
+	 * PAGE_SIZE or larger only look for page aligned allocations.
 	 */
+	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
 	if (alloc_size >= PAGE_SIZE)
-		stride <<= (PAGE_SHIFT - IO_TLB_SHIFT);
+		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
 
 	spin_lock_irqsave(&io_tlb_lock, flags);
 	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
@@ -498,6 +511,12 @@ static int find_slots(struct device *dev, size_t alloc_size)
 
 	index = wrap = wrap_index(ALIGN(io_tlb_index, stride));
 	do {
+		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+		    (orig_addr & iotlb_align_mask)) {
+			index = wrap_index(index + 1);
+			continue;
+		}
+
 		/*
 		 * If we find a slot that indicates we have 'nslots' number of
 		 * contiguous buffers, we allocate the buffers from that slot
@@ -540,6 +559,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
+	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int index, i;
 	phys_addr_t tlb_addr;
 
@@ -555,7 +575,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, alloc_size);
+	index = find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -568,10 +588,10 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory. Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size); i++)
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
 
-	tlb_addr = slot_addr(io_tlb_start, index);
+	tlb_addr = slot_addr(io_tlb_start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -586,8 +606,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	enum dma_data_direction dir, unsigned long attrs)
 {
 	unsigned long flags;
-	int i, count, nslots = nr_slots(alloc_size);
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	int i, count, nslots = nr_slots(alloc_size + offset);
+	int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
 
 	/*
-- 
2.27.0
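
P.S. A usage note, not part of the patch: swiotlb only honors the mask
once a driver sets it on its device. Upstream the nvme driver does so
with NVME_CTRL_PAGE_SIZE - 1. A minimal sketch of a driver opting in
follows; example_probe() and its device are hypothetical, while
dma_set_min_align_mask() is the real helper from
<linux/dma-mapping.h>.

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Hypothetical probe hook: ask the DMA layer (and hence swiotlb)
 * to preserve addr % 4096 when bouncing this device's buffers. */
static int example_probe(struct device *dev)
{
	/* Returns -EIO if dev->dma_parms has not been set up by the bus. */
	return dma_set_min_align_mask(dev, 4096 - 1);
}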