From: Tejun Heo <tj@kernel.org>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Intel-gfx@lists.freedesktop.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Zefan Li" <lizefan.x@bytedance.com>,
	"Dave Airlie" <airlied@redhat.com>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Rob Clark" <robdclark@chromium.org>,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"T . J . Mercier" <tjmercier@google.com>,
	Kenny.Ho@amd.com, "Christian König" <christian.koenig@amd.com>,
	"Brian Welty" <brian.welty@intel.com>,
	"Tvrtko Ursulin" <tvrtko.ursulin@intel.com>
Subject: Re: [RFC 00/17] DRM scheduling cgroup controller
Date: Mon, 31 Oct 2022 10:20:18 -1000	[thread overview]
Message-ID: <Y2AuAtSmS8X7a1qC@slm.duckdns.org> (raw)
In-Reply-To: <908129fa-3ddc-0f62-18df-e318dc760955@linux.intel.com>

Hello,

On Thu, Oct 27, 2022 at 03:32:00PM +0100, Tvrtko Ursulin wrote:
> Looking at what's available in the cgroups world now, I have spotted the
> blkio.prio.class control. For my current use case (lowering the GPU
> scheduling priority of background/unfocused windows) that would also work,
> even if starting with just two possible values - 'no-change' and 'idle'
> (to follow the IO controller naming).

I wouldn't follow that example. That's only meaningful within the context of
bfq, and it probably shouldn't have been merged in the first place.

> How would you view that as a proposal? It would perhaps be a bit tougher
> "sell" to the DRM community, given that many drivers do have scheduling
> priority, but the concept of a scheduling class is not really there.
> Nevertheless, a concept of normal-vs-background seems plausible to me.
> It could be easily implemented using the priority controls available in
> many drivers.

I don't feel great about that.

* The semantics aren't clearly defined. While not immediately obvious in the
  interface, the task nice levels have definite mappings to weight values
  and thus clear meanings (see the rough sketch after this list). I don't
  think it's a good idea to leave the interface semantics open to
  interpretation.

* Maybe GPUs are better, but my experience with optional hardware features in
  the storage world has been that vendors diverge wildly and unexpectedly, to
  the point that many features are mostly useless. There are fewer GPU vendors
  and more software effort behind each, so maybe the situation is better, but
  I think it'd be helpful to keep some skepticism.

* Even when per-vendor or per-driver features are consistent enough to be
  useful, they often aren't thought through enough to be truly useful. e.g.
  nvme has priority features, but they can't do much without congestion
  control on the issuer side; and once you have issuer-side congestion
  control, which is usually a lot more complex (e.g. dealing with
  work-conserving hierarchical weight distributions, priority inversions
  and so on), you can achieve most of what you need in terms of resource
  control from the issuer side anyway.
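
For reference, the nice-to-weight mapping is roughly "1024 at nice 0, with
each nice step scaling the weight by about 1.25x". A minimal sketch of that
shape (illustrative only, not the actual kernel table or its exact values):

static unsigned int nice_to_weight(int nice)
{
	unsigned int weight = 1024;	/* nice 0 */

	/* each positive nice step shrinks the weight by ~1.25x */
	for (; nice > 0; nice--)
		weight = weight * 4 / 5;
	/* each negative nice step grows it by ~1.25x */
	for (; nice < 0; nice++)
		weight = weight * 5 / 4;

	return weight;
}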

So, I'd much prefer to have a fuller solution on the kernel side which
integrates per-vendor/driver features where they make sense.

> > >    drm.budget_supported
> > > 	One of:
> > > 	 1) 'yes' - when all DRM clients in the group support the functionality.
> > > 	 2) 'no' - when at least one of the DRM clients does not support the
> > > 		   functionality.
> > > 	 3) 'n/a' - when there are no DRM clients in the group.
> > 
> > Yeah, I'm not sure about this. This isn't a per-cgroup property to begin
> > with, and I'm not sure 'no' meaning "at least one device doesn't support
> > it" is intuitive. The distinction between 'no' and 'n/a' is kinda weird
> > too. Please drop this.
> 
> The idea actually is for this to be per-cgroup and to potentially change
> dynamically. It implements the concept of "observability", as I have,
> perhaps clumsily, named it.
> 
> This is because we can have a mix of DRM file descriptors in a cgroup, not
> all of which support the proposed functionality. So I wanted to have
> something by which the administrator can observe the status of the group.
> 
> For instance, seeing that some clients do not support the feature could be a
> signal that things have been misconfigured, or that an appeal needs to be
> made for driver X to start supporting the feature. Seeing a "no" there, in
> other words, is a signal that budgeting may not really work as expected and
> needs to be investigated.

I still don't see how this is per-cgroup given that it's indicating whether
the driver supports some feature. Also, the eventual goal would be
supporting the same control mechanisms across most (if not all) GPUs, right?

> > Rather than doing it hierarchically on the spot, it's usually a lot cheaper
> > and easier to calculate the flattened hierarchical weight per leaf cgroup
> > and divide the bandwidth according to the eventual portions. For an example,
> > please take a look at block/blk-iocost.c.
> 
> This is exactly what I had in mind (but haven't implemented yet). So in
> this RFC I have budget splitting per group, where each tree level adds up
> to "100%"; the thing I have not implemented is "borrowing" or yielding
> (or donating, as blk-iocost.c calls it) unused budgets to siblings.
> 
> I am very happy to hear my idea is the right one and someone already
> implemented it. Thanks for this pointer!

The budget donation mechanism in iocost is necessary only because it wants to
keep the hot path local to the cgroup, since IO control has to support a very
high decision rate. For time-slicing a GPU, it's likely that following the
current hierarchical weight on the spot is enough.
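
FWIW, a rough sketch of the flattening idea (the concept only, not lifted
from blk-iocost.c; the struct and field names are made up for illustration):

struct drmcg_node {
	struct drmcg_node *parent;
	unsigned int weight;		/* this group's weight */
	unsigned int children_weight;	/* sum of active children's weights */
};

/* A leaf's share of the whole device, in parts per 10000. */
static unsigned int leaf_hweight(struct drmcg_node *leaf)
{
	unsigned long share = 10000;
	struct drmcg_node *n;

	/* walk towards the root, scaling by this level's weight fraction */
	for (n = leaf; n->parent; n = n->parent)
		share = share * n->weight / n->parent->children_weight;

	return share;
}

The per-leaf shares only need refreshing when weights or the set of active
groups change, so the scheduling hot path just compares precomputed leaf
shares.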

Thanks.

-- 
tejun

Thread overview: 86+ messages

2022-10-19 17:32 [RFC 00/17] DRM scheduling cgroup controller Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 01/17] cgroup: Add the DRM " Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 02/17] drm: Track clients per owning process Tvrtko Ursulin
2022-10-20  6:40   ` Christian König
2022-10-20  7:34     ` Tvrtko Ursulin
2022-10-20 11:33       ` Christian König
2022-10-27 14:35         ` Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 03/17] cgroup/drm: Support cgroup priority control Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 04/17] drm/cgroup: Allow safe external access to file_priv Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 05/17] drm: Connect priority updates to drm core Tvrtko Ursulin
2022-10-20  9:50   ` kernel test robot
2022-10-19 17:32 ` [RFC 06/17] drm: Only track clients which are providing drm_cgroup_ops Tvrtko Ursulin
2022-10-19 17:32 ` [Intel-gfx] [RFC 07/17] drm/i915: i915 priority Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 08/17] drm: Allow for migration of clients Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 09/17] cgroup/drm: Introduce weight based drm cgroup control Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 10/17] drm: Add ability to query drm cgroup GPU time Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 11/17] drm: Add over budget signalling callback Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 12/17] cgroup/drm: Client exit hook Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 13/17] cgroup/drm: Ability to periodically scan cgroups for over budget GPU usage Tvrtko Ursulin
2022-10-21 22:52   ` T.J. Mercier
2022-10-27 14:45     ` Tvrtko Ursulin
2022-10-19 17:32 ` [Intel-gfx] [RFC 14/17] cgroup/drm: Show group budget signaling capability in sysfs Tvrtko Ursulin
2022-10-19 17:32 ` [RFC 15/17] drm/i915: Migrate client to new owner on context create Tvrtko Ursulin
2022-10-19 17:32 ` [Intel-gfx] [RFC 16/17] drm/i915: Wire up with drm controller GPU time query Tvrtko Ursulin
2022-10-19 17:32 ` [Intel-gfx] [RFC 17/17] drm/i915: Implement cgroup controller over budget throttling Tvrtko Ursulin
2022-10-19 18:45 ` [RFC 00/17] DRM scheduling cgroup controller Tejun Heo
2022-10-27 14:32   ` Tvrtko Ursulin
2022-10-31 20:20     ` Tejun Heo [this message]
2022-11-09 16:59       ` Tvrtko Ursulin
2022-10-19 19:25 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for " Patchwork
