From: Thomas Zimmermann <tzimmermann@suse.de>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: "Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	"Maxime Ripard" <mripard@kernel.org>,
	"Dave Airlie" <airlied@linux.ie>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	"Sam Ravnborg" <sam@ravnborg.org>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Lucas Stach" <l.stach@pengutronix.de>,
	linux+etnaviv@armlinux.org.uk,
	"Christian Gmeiner" <christian.gmeiner@gmail.com>,
	"Inki Dae" <inki.dae@samsung.com>,
	"Joonyoung Shim" <jy0922.shim@samsung.com>,
	"Seung-Woo Kim" <sw0312.kim@samsung.com>,
	"Kyungmin Park" <kyungmin.park@samsung.com>,
	"Kukjin Kim" <kgene@kernel.org>,
	"Krzysztof Kozlowski" <krzk@kernel.org>,
	yuq825@gmail.com, "Ben Skeggs" <bskeggs@redhat.com>,
	"Rob Herring" <robh@kernel.org>,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	steven.price@arm.com, alyssa.rosenzweig@collabora.com,
	"Sandy Huang" <hjc@rock-chips.com>,
	"Heiko Stübner" <heiko@sntech.de>,
	"Hans de Goede" <hdegoede@redhat.com>,
	"Sean Paul" <sean@poorly.run>, "Eric Anholt" <eric@anholt.net>,
	"Oleksandr Andrushchenko" <oleksandr_andrushchenko@epam.com>,
	ray.huang@amd.com, "Sumit Semwal" <sumit.semwal@linaro.org>,
	"Emil Velikov" <emil.velikov@collabora.com>,
	luben.tuikov@amd.com, apaneers@amd.com, melissa.srw@gmail.com,
	"Chris Wilson" <chris@chris-wilson.co.uk>,
	"Qinglang Miao" <miaoqinglang@huawei.com>,
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	"Linux ARM" <linux-arm-kernel@lists.infradead.org>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	xen-devel@lists.xenproject.org,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 5 Nov 2020 11:37:08 +0100	[thread overview]
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de> (raw)
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>



Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of bytes in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
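For context, the pointer-increment interface mentioned in the commit message pairs with this memcpy helper. A userspace model of it might look as follows; the struct layout mirrors the patch, but the `__iomem` stub and the standalone compilation setup are illustrative only (in the kernel, `__iomem` is a sparse annotation and the real definitions live in include/linux/dma-buf-map.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub: __iomem is a sparse annotation, empty outside kernel builds. */
#define __iomem

/* Userspace model of struct dma_buf_map, for illustration only. */
struct dma_buf_map {
	union {
		void __iomem *vaddr_iomem;  /* mapping in I/O memory */
		void *vaddr;                /* mapping in system memory */
	};
	bool is_iomem;
};

/* Pointer-increment helper: advance whichever pointer is active by
 * incr bytes. (Arithmetic on void * is a GCC extension, as used in
 * the kernel.) */
static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem += incr;
	else
		map->vaddr += incr;
}
```

Callers use this to step through a mapping scanline by scanline without caring whether it points into system or I/O memory.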
> 
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB (4 bytes *
horizontal resolution). Updating a full framebuffer can add up to
several MiB.
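In concrete numbers (1920x1080 at 4 bytes per pixel is an illustrative choice, not taken from the patch):

```c
#include <assert.h>

/* Bytes per scanline: horizontal resolution times bytes per pixel. */
static unsigned long scanline_bytes(unsigned long width, unsigned long bpp)
{
	return width * bpp;
}

/* Bytes per full framebuffer update: one scanline times the number
 * of scanlines. */
static unsigned long frame_bytes(unsigned long width, unsigned long height,
				 unsigned long bpp)
{
	return scanline_bytes(width, bpp) * height;
}
```

For 1920x1080 this gives 7680 bytes (~7.5 KiB) per scanline and 8294400 bytes (~7.9 MiB) per frame, matching the "few KiB" and "several MiB" figures above.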

> 
> Some platforms have DMA offload engines that can perform memcpy();
> they live under drivers/dma, see include/linux/dmaengine.h. Offload
> helps especially if the CPU doesn't really need to touch the contents
> and flush caches etc.
> An example exists in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf do not have much in common;
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is: if these can be really big, do we want
> (in the long run, not now) to open up to the idea of slotting in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.
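To make that concrete: a kernel-side offload along the lines Linus suggests would go through the dmaengine client API, roughly like this. This is an untested sketch, error handling and completion waiting are omitted, and it assumes the platform exposes a memcpy-capable channel and that dma_dst/dma_src have already been set up via the DMA-mapping API:

```c
/* Untested sketch: offload a copy to a DMA_MEMCPY-capable channel. */
struct dma_chan *chan;
struct dma_async_tx_descriptor *tx;
dma_cap_mask_t mask;

dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
chan = dma_request_channel(mask, NULL, NULL);
if (!chan)
	return -ENODEV;  /* no engine available; fall back to CPU memcpy */

tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len,
			       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
dmaengine_submit(tx);
dma_async_issue_pending(chan);
/* ... wait for completion, then dma_release_channel(chan) ... */
```

Hiding this behind the dma_buf_map helpers, with a CPU fallback, is what "hide it from the DRM code" would amount to.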

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


WARNING: multiple messages have this Message-ID (diff)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: "Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	"Maxime Ripard" <mripard@kernel.org>,
	"Dave Airlie" <airlied@linux.ie>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	"Sam Ravnborg" <sam@ravnborg.org>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Lucas Stach" <l.stach@pengutronix.de>,
	linux+etnaviv@armlinux.org.uk,
	"Christian Gmeiner" <christian.gmeiner@gmail.com>,
	"Inki Dae" <inki.dae@samsung.com>,
	"Joonyoung Shim" <jy0922.shim@samsung.com>,
	"Seung-Woo Kim" <sw0312.kim@samsung.com>,
	"Kyungmin Park" <kyungmin.park@samsung.com>,
	"Kukjin Kim" <kgene@kernel.org>,
	"Krzysztof Kozlowski" <krzk@kernel.org>,
	yuq825@gmail.com, "Ben Skeggs" <bskeggs@redhat.com>,
	"Rob Herring" <robh@kernel.org>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 5 Nov 2020 11:37:08 +0100	[thread overview]
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de> (raw)
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>


[-- Attachment #1.1.1: Type: text/plain, Size: 3167 bytes --]

Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of byte in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
> 
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB. (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> 
> Some platforms have DMA offload engines that can perform memcpy(),They could be
> drivers/dma, include/linux/dmaengine.h
> especially if the CPU doesn't really need to touch the contents
> and flush caches etc.
> An example exist in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf does not have much in common,
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is if these can be really big, do we want
> (in the long run, not now) open up to the idea to slot in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

[-- Attachment #1.1.2: OpenPGP_0x680DC11D530B7A23.asc --]
[-- Type: application/pgp-keys, Size: 4259 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]

WARNING: multiple messages have this Message-ID (diff)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: luben.tuikov@amd.com, "Heiko Stübner" <heiko@sntech.de>,
	"Dave Airlie" <airlied@linux.ie>,
	nouveau@lists.freedesktop.org,
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
	"Chris Wilson" <chris@chris-wilson.co.uk>,
	melissa.srw@gmail.com, "Eric Anholt" <eric@anholt.net>,
	ray.huang@amd.com, "Gerd Hoffmann" <kraxel@redhat.com>,
	"Sam Ravnborg" <sam@ravnborg.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Emil Velikov" <emil.velikov@collabora.com>,
	"Rob Herring" <robh@kernel.org>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	"Joonyoung Shim" <jy0922.shim@samsung.com>,
	lima@lists.freedesktop.org,
	"Oleksandr Andrushchenko" <oleksandr_andrushchenko@epam.com>,
	"Krzysztof Kozlowski" <krzk@kernel.org>,
	steven.price@arm.com,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	"Kukjin Kim" <kgene@kernel.org>,
	"Ben Skeggs" <bskeggs@redhat.com>,
	linux+etnaviv@armlinux.org.uk, spice-devel@lists.freedesktop.org,
	alyssa.rosenzweig@collabora.com,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	etnaviv@lists.freedesktop.org,
	"Maxime Ripard" <mripard@kernel.org>,
	"Inki Dae" <inki.dae@samsung.com>,
	"Hans de Goede" <hdegoede@redhat.com>,
	"Christian Gmeiner" <christian.gmeiner@gmail.com>,
	xen-devel@lists.xenproject.org,
	virtualization@lists.linux-foundation.org,
	"Sean Paul" <sean@poorly.run>,
	apaneers@amd.com,
	"Linux ARM" <linux-arm-kernel@lists.infradead.org>,
	linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Seung-Woo Kim" <sw0312.kim@samsung.com>,
	"Sandy Huang" <hjc@rock-chips.com>,
	"Kyungmin Park" <kyungmin.park@samsung.com>,
	"Qinglang Miao" <miaoqinglang@huawei.com>,
	yuq825@gmail.com, "Daniel Vetter" <daniel@ffwll.ch>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Lucas Stach" <l.stach@pengutronix.de>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 5 Nov 2020 11:37:08 +0100	[thread overview]
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de> (raw)
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>


[-- Attachment #1.1.1.1: Type: text/plain, Size: 3167 bytes --]

Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of byte in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
> 
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB. (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> 
> Some platforms have DMA offload engines that can perform memcpy(),They could be
> drivers/dma, include/linux/dmaengine.h
> especially if the CPU doesn't really need to touch the contents
> and flush caches etc.
> An example exist in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf does not have much in common,
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is if these can be really big, do we want
> (in the long run, not now) open up to the idea to slot in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

[-- Attachment #1.1.1.2: OpenPGP_0x680DC11D530B7A23.asc --]
[-- Type: application/pgp-keys, Size: 4259 bytes --]

[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]

[-- Attachment #2: Type: text/plain, Size: 170 bytes --]

_______________________________________________
Linux-rockchip mailing list
Linux-rockchip@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-rockchip

WARNING: multiple messages have this Message-ID (diff)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: luben.tuikov@amd.com, "Heiko Stübner" <heiko@sntech.de>,
	"Dave Airlie" <airlied@linux.ie>,
	nouveau@lists.freedesktop.org,
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
	"Chris Wilson" <chris@chris-wilson.co.uk>,
	melissa.srw@gmail.com, "Eric Anholt" <eric@anholt.net>,
	ray.huang@amd.com, "Sam Ravnborg" <sam@ravnborg.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Emil Velikov" <emil.velikov@collabora.com>,
	"Rob Herring" <robh@kernel.org>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	"Joonyoung Shim" <jy0922.shim@samsung.com>,
	lima@lists.freedesktop.org,
	"Oleksandr Andrushchenko" <oleksandr_andrushchenko@epam.com>,
	"Krzysztof Kozlowski" <krzk@kernel.org>,
	steven.price@arm.com,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	"Kukjin Kim" <kgene@kernel.org>,
	"Ben Skeggs" <bskeggs@redhat.com>,
	linux+etnaviv@armlinux.org.uk, spice-devel@lists.freedesktop.org,
	alyssa.rosenzweig@collabora.com,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	etnaviv@lists.freedesktop.org,
	"Maxime Ripard" <mripard@kernel.org>,
	"Inki Dae" <inki.dae@samsung.com>,
	"Hans de Goede" <hdegoede@redhat.com>,
	"Christian Gmeiner" <christian.gmeiner@gmail.com>,
	xen-devel@lists.xenproject.org,
	virtualization@lists.linux-foundation.org,
	"Sean Paul" <sean@poorly.run>,
	apaneers@amd.com,
	"Linux ARM" <linux-arm-kernel@lists.infradead.org>,
	linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Seung-Woo Kim" <sw0312.kim@samsung.com>,
	"Sandy Huang" <hjc@rock-chips.com>,
	"Kyungmin Park" <kyungmin.park@samsung.com>,
	"Qinglang Miao" <miaoqinglang@huawei.com>,
	yuq825@gmail.com, "Daniel Vetter" <daniel@ffwll.ch>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Lucas Stach" <l.stach@pengutronix.de>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 5 Nov 2020 11:37:08 +0100	[thread overview]
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de> (raw)
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>


[-- Attachment #1.1.1.1: Type: text/plain, Size: 3167 bytes --]

Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of byte in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
> 
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB. (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> 
> Some platforms have DMA offload engines that can perform memcpy(),They could be
> drivers/dma, include/linux/dmaengine.h
> especially if the CPU doesn't really need to touch the contents
> and flush caches etc.
> An example exist in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf does not have much in common,
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is if these can be really big, do we want
> (in the long run, not now) open up to the idea to slot in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

[-- Attachment #1.1.1.2: OpenPGP_0x680DC11D530B7A23.asc --]
[-- Type: application/pgp-keys, Size: 4259 bytes --]

[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]

[-- Attachment #2: Type: text/plain, Size: 183 bytes --]

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

WARNING: multiple messages have this Message-ID (diff)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: luben.tuikov@amd.com, "Dave Airlie" <airlied@linux.ie>,
	nouveau@lists.freedesktop.org,
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
	"Chris Wilson" <chris@chris-wilson.co.uk>,
	melissa.srw@gmail.com, ray.huang@amd.com,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Sam Ravnborg" <sam@ravnborg.org>,
	"Emil Velikov" <emil.velikov@collabora.com>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	"Joonyoung Shim" <jy0922.shim@samsung.com>,
	lima@lists.freedesktop.org,
	"Oleksandr Andrushchenko" <oleksandr_andrushchenko@epam.com>,
	"Krzysztof Kozlowski" <krzk@kernel.org>,
	steven.price@arm.com,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	"Kukjin Kim" <kgene@kernel.org>,
	"Ben Skeggs" <bskeggs@redhat.com>,
	linux+etnaviv@armlinux.org.uk, spice-devel@lists.freedesktop.org,
	alyssa.rosenzweig@collabora.com, etnaviv@lists.freedesktop.org,
	"Hans de Goede" <hdegoede@redhat.com>,
	xen-devel@lists.xenproject.org,
	virtualization@lists.linux-foundation.org,
	"Sean Paul" <sean@poorly.run>,
	apaneers@amd.com,
	"Linux ARM" <linux-arm-kernel@lists.infradead.org>,
	linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Seung-Woo Kim" <sw0312.kim@samsung.com>,
	"Sandy Huang" <hjc@rock-chips.com>,
	"Kyungmin Park" <kyungmin.park@samsung.com>,
	"Qinglang Miao" <miaoqinglang@huawei.com>,
	yuq825@gmail.com, "Alex Deucher" <alexander.deucher@amd.com>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 5 Nov 2020 11:37:08 +0100	[thread overview]
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de> (raw)
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>


[-- Attachment #1.1.1.1: Type: text/plain, Size: 3167 bytes --]

Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of byte in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
> 
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB. (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> 
> Some platforms have DMA offload engines that can perform memcpy(),They could be
> drivers/dma, include/linux/dmaengine.h
> especially if the CPU doesn't really need to touch the contents
> and flush caches etc.
> An example exist in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf does not have much in common,
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is if these can be really big, do we want
> (in the long run, not now) open up to the idea to slot in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

[-- Attachment #1.1.1.2: OpenPGP_0x680DC11D530B7A23.asc --]
[-- Type: application/pgp-keys, Size: 4259 bytes --]

[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

WARNING: multiple messages have this Message-ID (diff)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: luben.tuikov@amd.com, "Heiko Stübner" <heiko@sntech.de>,
	"Dave Airlie" <airlied@linux.ie>,
	nouveau@lists.freedesktop.org,
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
	"Chris Wilson" <chris@chris-wilson.co.uk>,
	melissa.srw@gmail.com, "Eric Anholt" <eric@anholt.net>,
	ray.huang@amd.com, "Gerd Hoffmann" <kraxel@redhat.com>,
	"Sam Ravnborg" <sam@ravnborg.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Emil Velikov" <emil.velikov@collabora.com>,
	"Rob Herring" <robh@kernel.org>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	"Joonyoung Shim" <jy0922.shim@samsung.com>,
	lima@lists.freedesktop.org,
	"Oleksandr Andrushchenko" <oleksandr_andrushchenko@epam.com>,
	"Krzysztof Kozlowski" <krzk@kernel.org>,
	steven.price@arm.com,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	"Kukjin Kim" <kgene@kernel.org>,
	"Ben Skeggs" <bskeggs@redhat.com>,
	linux+etnaviv@armlinux.org.uk, spice-devel@lists.freedesktop.org,
	alyssa.rosenzweig@collabora.com,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	etnaviv@lists.freedesktop.org,
	"Maxime Ripard" <mripard@kernel.org>,
	"Inki Dae" <inki.dae@samsung.com>,
	"Hans de Goede" <hdegoede@redhat.com>,
	"Christian Gmeiner" <christian.gmeiner@gmail.com>,
	xen-devel@lists.xenproject.org,
	virtualization@lists.linux-foundation.org,
	"Sean Paul" <sean@poorly.run>,
	apaneers@amd.com,
	"Linux ARM" <linux-arm-kernel@lists.infradead.org>,
	linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Seung-Woo Kim" <sw0312.kim@samsung.com>,
	"Sandy Huang" <hjc@rock-chips.com>,
	"Kyungmin Park" <kyungmin.park@samsung.com>,
	"Qinglang Miao" <miaoqinglang@huawei.com>,
	yuq825@gmail.com, "Daniel Vetter" <daniel@ffwll.ch>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Lucas Stach" <l.stach@pengutronix.de>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 5 Nov 2020 11:37:08 +0100	[thread overview]
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de> (raw)
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>


[-- Attachment #1.1.1.1: Type: text/plain, Size: 3167 bytes --]

Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of byte in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
> 
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB. (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> 
> Some platforms have DMA offload engines that can perform memcpy(),They could be
> drivers/dma, include/linux/dmaengine.h
> especially if the CPU doesn't really need to touch the contents
> and flush caches etc.
> An example exist in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf does not have much in common,
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is if these can be really big, do we want
> (in the long run, not now) open up to the idea to slot in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.
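One purely hypothetical way to hide such an offload would be to route the copy through a pluggable backend, so DRM-facing code keeps calling one helper regardless of whether the CPU or a DMA engine does the work. Every name below is invented for illustration; nothing like this exists in the patch.

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical copy backend: plain CPU memcpy() by default. A platform
 * with a suitable dmaengine channel could install an accelerated
 * implementation at init time instead.
 */
typedef void (*bulk_copy_fn)(void *dst, const void *src, size_t len);

static void cpu_copy(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

static bulk_copy_fn bulk_copy = cpu_copy;

/* DRM-facing callers never see which backend actually runs. */
static void fb_memcpy_to(void *dst, const void *src, size_t len)
{
	bulk_copy(dst, src, len);
}
```

The point of the indirection is exactly the "hide it from the DRM code" idea: callers are unchanged whether the copy is synchronous on the CPU or dispatched to hardware.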

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


