From: Michal Hocko <mhocko@suse.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: "Christian König" <ckoenig.leichtzumerken@gmail.com>,
	linux-media@vger.kernel.org, linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, linux-tegra@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	alexander.deucher@amd.com, daniel@ffwll.ch,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org,
	hughd@google.com, andrey.grodzovsky@amd.com
Subject: Re: [PATCH 03/13] mm: shmem: provide oom badness for shmem files
Date: Fri, 10 Jun 2022 16:16:40 +0200	[thread overview]
Message-ID: <YqNSSFQELx/LeEHR@dhcp22.suse.cz> (raw)
In-Reply-To: <2e7e050e-04eb-0c0a-0675-d7f1c3ae7aed@amd.com>

On Fri 10-06-22 14:17:27, Christian König wrote:
> Am 10.06.22 um 13:44 schrieb Michal Hocko:
> > On Fri 10-06-22 12:58:53, Christian König wrote:
> > [SNIP]
> > > > I do realize this is a long term problem and there is a demand for some
> > > > solution at least. I am not sure how to deal with shared resources
> > > > myself. The best approximation I can come up with is to limit the scope
> > > > of the damage into a memcg context. One idea I was playing with (but
> > > > never convinced myself it is really worth it) is to allow a new mode of
> > > > the oom victim selection for the global oom event.
> > And just for clarity: I have mentioned the global oom event here but the
> > concept could be extended to the per-memcg oom killer as well.
> 
> Then what exactly do you mean by "limiting the scope of the damage"? Because
> that doesn't make sense without memcg.

What I meant to say is to apply the damage control scheme not only to
the global oom situation (a global shortage of memory) but also to the
memcg oom situation (when the hard limit on a hierarchy is reached).
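
E.g. with cgroup v2 something like the following (untested sketch; the
group name is made up and it assumes the memory controller is enabled
in the parent's subtree_control) confines the oom damage to the limited
hierarchy:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a single value into a cgroupfs control file. */
static int write_file(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val));
	close(fd);
	return ret < 0 ? -1 : 0;
}

int main(void)
{
	char pid[32];

	/* Create the group and cap it at 512M. */
	mkdir("/sys/fs/cgroup/gpu-demo", 0755);
	write_file("/sys/fs/cgroup/gpu-demo/memory.max", "536870912");

	/* Move ourselves into it. Any charge beyond the limit is now
	 * handled by the memcg oom killer within this hierarchy and
	 * the rest of the system is unaffected. */
	snprintf(pid, sizeof(pid), "%d", (int)getpid());
	write_file("/sys/fs/cgroup/gpu-demo/cgroup.procs", pid);
	return 0;
}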

[...]
> > The primary question is whether it actually helps much or what kind of
> > scenarios it can help with and whether we can actually do better for
> > those.
> 
> Well, it does help massively with a standard Linux desktop and GPU workloads
> (e.g. games).
> 
> See, what currently happens is that when a game allocates, for example,
> textures, the memory for them is not accounted against that game. Instead
> it is usually the display server (X or Wayland) that most of the shared
> resources are accounted to, because it needs to compose a desktop from
> them and usually also mmaps them for fallback CPU operations.

Let me try to understand this some more. So the game (or the entity
responsible for the resource) doesn't really allocate the memory itself
but relies on somebody else (from the memcg perspective living in a
different resource domain - i.e. a different memcg) to do that on its
behalf. Correct? If that is the case then it certainly does not fit
into the memcg model.
I am not really sure there is any reasonable accounting model where you
cannot tell who is responsible for the resource.

> So what happens when a game over-allocates texture resources is that your
> whole desktop restarts because the compositor is killed. This obviously also
> kills the game, but it would be much nicer if we could be more selective
> here.
> 
> For hardware rendering, DMA-buf and GPU drivers are used, but for the
> software fallback, shmem files are what is used under the hood as far as I
> know. And the underlying problem is the same for both.

For shmem files, the end user of the buffer can preallocate it, and so
own the buffer and be accounted for it.
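
E.g. something like this (untested sketch; the name and the lack of
finer error handling are arbitrary) takes ownership at allocation time
instead of at first fault:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Preallocate a shmem buffer so its pages are charged to this
 * process's memcg right away rather than to whoever touches them
 * first. The fd can then be handed out (e.g. over a unix socket). */
int make_owned_shmem_buffer(size_t size)
{
	int fd = memfd_create("texture-buffer", MFD_CLOEXEC);

	if (fd < 0)
		return -1;
	if (fallocate(fd, 0, 0, size) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
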
> 
> > Also do not forget that shared file memory is not the only thing
> > to care about. What about the kernel memory used on behalf of processes?
> 
> Yeah, I'm aware of that as well. But at least inside the GPU drivers we try
> to keep that in a reasonable ratio.
> 
> > Just consider the above mentioned memcg driven model. It doesn't really
> > require to chase specific files and do some arbitrary math to share the
> > responsibility. It has a clear accounting and responsibility model.
> 
> Ok, how does that work then?

The memory is accounted to whoever faults it in, or to the allocating
context if it is kernel memory (in most situations).
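
To illustrate the first-touch rule (again an untested sketch): whichever
task writes a page of a shared mapping first is the one whose memcg is
charged for it, no matter who created the file:

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a shared shmem file and touch one byte per page. Each write
 * faults a page in and charges it to *this* process's memcg. */
void fault_in_shared(int shmem_fd, size_t size)
{
	long page = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, shmem_fd, 0);
	size_t off;

	if (buf == MAP_FAILED)
		return;
	for (off = 0; off < size; off += page)
		buf[off] = 0;
	munmap(buf, size);
}
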
-- 
Michal Hocko
SUSE Labs
