From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: LKML <linux-kernel@vger.kernel.org>, linux-mm <linux-mm@kvack.org>,
	hyesoo.yu@samsung.com, willy@infradead.org, david@redhat.com,
	iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com,
	pullip.cho@samsung.com, joaodias@google.com, hridya@google.com,
	sumit.semwal@linaro.org, john.stultz@linaro.org, Brian.Starkey@arm.com,
	linux-media@vger.kernel.org, devicetree@vger.kernel.org, robh@kernel.org,
	christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org,
	Minchan Kim <minchan@kernel.org>
Subject: [PATCH v2 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
Date: Tue, 1 Dec 2020 09:51:44 -0800
Message-Id: <20201201175144.3996569-5-minchan@kernel.org>
In-Reply-To: <20201201175144.3996569-1-minchan@kernel.org>
References: <20201201175144.3996569-1-minchan@kernel.org>
MIME-Version: 1.0

From: Hyesoo Yu <hyesoo.yu@samsung.com>

This patch adds a chunk heap that allocates buffers arranged as a list of
fixed-size chunks taken from CMA. The chunk heap does not use the
heap-helpers, even though that could remove some duplicated code, because
the heap-helpers are in the process of being deprecated. [1]

NOTE: This patch only adds the default CMA heap to allocate chunk pages.
We will add other CMA memory regions to the dmabuf heaps interface with a
later patch (which requires a dt binding).

[1] https://lore.kernel.org/patchwork/patch/1336002

Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/dma-buf/heaps/Kconfig      |  15 +
 drivers/dma-buf/heaps/Makefile     |   1 +
 drivers/dma-buf/heaps/chunk_heap.c | 429 +++++++++++++++++++++++++++++
 3 files changed, 445 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..9153f83afed7 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,18 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_CHUNK
+	tristate "DMA-BUF CHUNK Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable the dma-buf CHUNK heap. This heap is
+	  backed by the Contiguous Memory Allocator (CMA) and allocates
+	  buffers arranged as a list of fixed-size chunks taken from CMA.
+
+config DMABUF_HEAPS_CHUNK_ORDER
+	int "Chunk page order for dmabuf chunk heap"
+	default 4
+	depends on DMABUF_HEAPS_CHUNK
+	help
+	  Set the page order of the fixed-size chunks allocated from CMA.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..8faa6cfdc0c5 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK)	+= chunk_heap.o
diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c
new file mode 100644
index 000000000000..0277707a93a9
--- /dev/null
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@ -0,0 +1,429 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ION Memory Allocator chunk heap exporter
+ *
+ * Copyright (c) 2020 Samsung Electronics Co., Ltd.
+ * Author: <hyesoo.yu@samsung.com> for Samsung Electronics.
+ */
+
+#include <linux/cma.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct chunk_heap {
+	struct dma_heap *heap;
+	unsigned int order;
+	struct cma *cma;
+};
+
+struct chunk_heap_buffer {
+	struct chunk_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	struct sg_table sg_table;
+	unsigned long len;
+	int vmap_cnt;
+	void *vaddr;
+};
+
+struct chunk_heap_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+	bool mapped;
+};
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sgtable_sg(table, sg, i) {
+		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+	struct sg_table *table;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(&buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+	a->mapped = false;
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+	int ret;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	a->mapped = true;
+	return table;
+}
+
+static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				     struct sg_table *table,
+				     enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	a->mapped = false;
+	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_device(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table = &buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	struct sg_page_iter piter;
+	int ret;
+
+	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+		struct page *page = sg_page_iter_page(&piter);
+
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += PAGE_SIZE;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+	return 0;
+}
+
+static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
+{
+	struct sg_table *table = &buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+	struct page **pages = vmalloc(sizeof(struct page *) * npages);
+	struct page **tmp = pages;
+	struct sg_page_iter piter;
+	void *vaddr;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	for_each_sgtable_page(table, &piter, 0) {
+		WARN_ON(tmp - pages >= npages);
+		*tmp++ = sg_page_iter_page(&piter);
+	}
+
+	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	int ret = 0;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		vaddr = buffer->vaddr;
+		goto done;
+	}
+
+	vaddr = chunk_heap_do_vmap(buffer);
+	if (IS_ERR(vaddr)) {
+		ret = PTR_ERR(vaddr);
+		goto err;
+	}
+
+	buffer->vaddr = vaddr;
+done:
+	buffer->vmap_cnt++;
+	dma_buf_map_set_vaddr(map, vaddr);
+err:
+	mutex_unlock(&buffer->lock);
+
+	return ret;
+}
+
+static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+}
+
+static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap *chunk_heap = buffer->heap;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i;
+
+	table = &buffer->sg_table;
+	for_each_sgtable_sg(table, sg, i)
+		cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order);
+	sg_free_table(table);
+	kfree(buffer);
+}
+
+static const struct dma_buf_ops chunk_heap_buf_ops = {
+	.attach = chunk_heap_attach,
+	.detach = chunk_heap_detach,
+	.map_dma_buf = chunk_heap_map_dma_buf,
+	.unmap_dma_buf = chunk_heap_unmap_dma_buf,
+	.begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = chunk_heap_dma_buf_end_cpu_access,
+	.mmap = chunk_heap_mmap,
+	.vmap = chunk_heap_vmap,
+	.vunmap = chunk_heap_vunmap,
+	.release = chunk_heap_dma_buf_release,
+};
+
+static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len,
+			       unsigned long fd_flags, unsigned long heap_flags)
+{
+	struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap);
+	struct chunk_heap_buffer *buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	struct page **pages;
+	unsigned int chunk_size = PAGE_SIZE << chunk_heap->order;
+	unsigned int count, alloced = 0;
+	unsigned int num_retry = 5;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
+		return ret;
+
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->heap = chunk_heap;
+	buffer->len = ALIGN(len, chunk_size);
+	count = buffer->len / chunk_size;
+
+	pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
+		goto err_pages;
+
+	while (num_retry--) {
+		unsigned long nr_pages;
+
+		ret = cma_alloc_bulk(chunk_heap->cma, chunk_heap->order,
+				     num_retry ? true : false,
+				     chunk_heap->order, count - alloced,
+				     pages + alloced, &nr_pages);
+		alloced += nr_pages;
+		if (alloced == count)
+			break;
+		if (ret != -EBUSY)
+			break;
+	}
+	if (ret < 0)
+		goto err_alloc;
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, count, GFP_KERNEL))
+		goto err_alloc;
+
+	sg = table->sgl;
+	for (pg = 0; pg < count; pg++) {
+		sg_set_page(sg, pages[pg], chunk_size, 0);
+		sg = sg_next(sg);
+	}
+
+	exp_info.ops = &chunk_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err_export;
+	}
+	kvfree(pages);
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		return ret;
+	}
+
+	return 0;
+err_export:
+	sg_free_table(table);
+err_alloc:
+	for (pg = 0; pg < alloced; pg++)
+		cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order);
+	kvfree(pages);
+err_pages:
+	kfree(buffer);
+
+	return ret;
+}
+
+static const struct dma_heap_ops chunk_heap_ops = {
+	.allocate = chunk_heap_allocate,
+};
+
+#ifdef CONFIG_DMABUF_HEAPS_CHUNK_ORDER
+#define CHUNK_HEAP_ORDER (CONFIG_DMABUF_HEAPS_CHUNK_ORDER)
+#else
+#define CHUNK_HEAP_ORDER (0)
+#endif
+
+static int __init chunk_heap_init(void)
+{
+	struct cma *default_cma = dev_get_cma_area(NULL);
+	struct dma_heap_export_info exp_info;
+	struct chunk_heap *chunk_heap;
+
+	if (!default_cma)
+		return 0;
+
+	chunk_heap = kzalloc(sizeof(*chunk_heap), GFP_KERNEL);
+	if (!chunk_heap)
+		return -ENOMEM;
+
+	chunk_heap->order = CHUNK_HEAP_ORDER;
+	chunk_heap->cma = default_cma;
+
+	exp_info.name = cma_get_name(default_cma);
+	exp_info.ops = &chunk_heap_ops;
+	exp_info.priv = chunk_heap;
+
+	chunk_heap->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(chunk_heap->heap)) {
+		int ret = PTR_ERR(chunk_heap->heap);
+
+		kfree(chunk_heap);
+		return ret;
+	}
+
+	return 0;
+}
+module_init(chunk_heap_init);
+MODULE_DESCRIPTION("DMA-BUF Chunk Heap");
+MODULE_LICENSE("GPL v2");
-- 
2.29.2.454.gaff20da3a2-goog
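
[Editor's aside, not part of the patch] For readers unfamiliar with the dma-heap user interface this heap plugs into, a minimal userspace sketch of allocating a buffer from it follows. It uses the standard DMA_HEAP_IOCTL_ALLOC ioctl from <linux/dma-heap.h>; the device node name under /dev/dma_heap/ is an assumption here ("reserved" is a common name reported by cma_get_name() for the default CMA area, but it depends on the platform). The driver rounds the requested length up to a multiple of the configured chunk size.

/* Hypothetical usage sketch: allocate a dma-buf fd from the chunk heap. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	/* Heap name is an assumption; it comes from cma_get_name() of the default CMA area. */
	int heap_fd = open("/dev/dma_heap/reserved", O_RDONLY | O_CLOEXEC);
	struct dma_heap_allocation_data data;

	if (heap_fd < 0) {
		perror("open heap");
		return 1;
	}

	memset(&data, 0, sizeof(data));
	data.len = 4 << 20;			/* 4MB; the heap aligns this up to the chunk size */
	data.fd_flags = O_RDWR | O_CLOEXEC;	/* flags applied to the returned dma-buf fd */

	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		close(heap_fd);
		return 1;
	}

	printf("allocated dma-buf fd %d\n", data.fd);
	close(data.fd);
	close(heap_fd);
	return 0;
}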