All of lore.kernel.org
* [LSF/MM ATTEND] Attend mm summit 2018
@ 2018-02-22  2:54 Balbir Singh
  2018-02-22 13:03 ` Michal Hocko
  0 siblings, 1 reply; 7+ messages in thread
From: Balbir Singh @ 2018-02-22  2:54 UTC (permalink / raw)
  To: lsf-pc; +Cc: linux-mm

I did not send out an official request to attend earlier, but I was
invited by Jerome to attend. I was traveling for a big part of January
and part of February 2018, hence the delay in sending this email
expressing my desire to attend.

I made a proposal last year for N_COHERENT_MEMORY, but there were
suggestions that coherent memory should use ZONE_DEVICE, and thus
HMM/CDM came into the picture. I have good knowledge of the memory
cgroups subsystem, have been looking at HMM and HMM/CDM for over a
year now, and have been playing with ZONE_DEVICE in general. I would
like to attend to learn about and discuss the following topics:

1. HMM and HMM-CDM, and arguments for whether NUMA makes sense or not;
experiences gained with both technologies.
2. Memory cgroups - I don't see a pressing need for many new features,
but I'd like to see if we can revive some old proposals around virtual
memory limits

Balbir Singh.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>


* Re: [LSF/MM ATTEND] Attend mm summit 2018
  2018-02-22  2:54 [LSF/MM ATTEND] Attend mm summit 2018 Balbir Singh
@ 2018-02-22 13:03 ` Michal Hocko
  2018-02-22 13:23   ` Balbir Singh
  0 siblings, 1 reply; 7+ messages in thread
From: Michal Hocko @ 2018-02-22 13:03 UTC (permalink / raw)
  To: Balbir Singh; +Cc: lsf-pc, linux-mm

On Thu 22-02-18 13:54:46, Balbir Singh wrote:
[...]
> 2. Memory cgroups - I don't see a pressing need for many new features,
> but I'd like to see if we can revive some old proposals around virtual
> memory limits

Could you be more specific about usecase(s)?
-- 
Michal Hocko
SUSE Labs


* Re: [LSF/MM ATTEND] Attend mm summit 2018
  2018-02-22 13:03 ` Michal Hocko
@ 2018-02-22 13:23   ` Balbir Singh
  2018-02-22 13:34     ` [Lsf-pc] " Michal Hocko
  0 siblings, 1 reply; 7+ messages in thread
From: Balbir Singh @ 2018-02-22 13:23 UTC (permalink / raw)
  To: Michal Hocko; +Cc: lsf-pc, linux-mm

On Fri, Feb 23, 2018 at 12:03 AM, Michal Hocko <mhocko@kernel.org> wrote:
> On Thu 22-02-18 13:54:46, Balbir Singh wrote:
> [...]
>> 2. Memory cgroups - I don't see a pressing need for many new features,
>> but I'd like to see if we can revive some old proposals around virtual
>> memory limits
>
> Could you be more specific about usecase(s)?

For a long time I had a virtual memory limit controller in the -mm
tree. The use case was to fail allocations rather than OOM in the
worst case, as we do with the cgroup memory limits (actual page usage
control). I did not push for it then because I got side-tracked. I'd
like to pursue being able to fail allocations, as opposed to OOM'ing,
on a per-cgroup basis, and to start the discussion again.

Balbir Singh.


* Re: [Lsf-pc] [LSF/MM ATTEND] Attend mm summit 2018
  2018-02-22 13:23   ` Balbir Singh
@ 2018-02-22 13:34     ` Michal Hocko
  2018-02-22 22:01       ` virtual memory limits control (was Re: [Lsf-pc] [LSF/MM ATTEND] Attend mm summit 2018) Balbir Singh
  0 siblings, 1 reply; 7+ messages in thread
From: Michal Hocko @ 2018-02-22 13:34 UTC (permalink / raw)
  To: Balbir Singh; +Cc: linux-mm, lsf-pc

On Fri 23-02-18 00:23:53, Balbir Singh wrote:
> On Fri, Feb 23, 2018 at 12:03 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > On Thu 22-02-18 13:54:46, Balbir Singh wrote:
> > [...]
> >> 2. Memory cgroups - I don't see a pressing need for many new features,
> >> but I'd like to see if we can revive some old proposals around virtual
> >> memory limits
> >
> > Could you be more specific about usecase(s)?
> 
> I had for a long time a virtual memory limit controller in -mm tree.
> The use case was to fail allocations as opposed to OOM'ing in the
> worst case as we do with the cgroup memory limits (actual page usage
> control). I did not push for it then since I got side-tracked. I'd
> like to pursue a use case for being able to fail allocations as
> opposed to OOM'ing on a per cgroup basis. I'd like to start the
> discussion again.

So you basically want strict no-overcommit at the per-memcg level?
I am really skeptical, to be completely honest. The global behavior is
not very usable in most cases already. Making it per-memcg will just
amplify all the issues (applications tend to overcommit their virtual
address space). Not to mention that you cannot really prevent the OOM
killer, because there are allocations outside of the address space.

So if you want to push this forward you really need a very good
existing usecase to justify the change.
-- 
Michal Hocko
SUSE Labs


* virtual memory limits control (was Re: [Lsf-pc] [LSF/MM ATTEND] Attend mm summit 2018)
  2018-02-22 13:34     ` [Lsf-pc] " Michal Hocko
@ 2018-02-22 22:01       ` Balbir Singh
  2018-02-23  7:42         ` Michal Hocko
  0 siblings, 1 reply; 7+ messages in thread
From: Balbir Singh @ 2018-02-22 22:01 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm, lsf-pc

Changed the subject to reflect the discussion.

On Thu, 22 Feb 2018 14:34:25 +0100
Michal Hocko <mhocko@kernel.org> wrote:

> On Fri 23-02-18 00:23:53, Balbir Singh wrote:
> > On Fri, Feb 23, 2018 at 12:03 AM, Michal Hocko <mhocko@kernel.org> wrote:  
> > > On Thu 22-02-18 13:54:46, Balbir Singh wrote:
> > > [...]  
> > >> 2. Memory cgroups - I don't see a pressing need for many new features,
> > >> but I'd like to see if we can revive some old proposals around virtual
> > >> memory limits  
> > >
> > > Could you be more specific about usecase(s)?  
> > 
> > I had for a long time a virtual memory limit controller in -mm tree.
> > The use case was to fail allocations as opposed to OOM'ing in the
> > worst case as we do with the cgroup memory limits (actual page usage
> > control). I did not push for it then since I got side-tracked. I'd
> > like to pursue a use case for being able to fail allocations as
> > opposed to OOM'ing on a per cgroup basis. I'd like to start the
> > discussion again.  
> 
> So you basically want the strict no overcommit on the per memcg level?

I don't think it implies strict no-overcommit; the value sets the
overcommit ratio (independent of the global vm.overcommit_ratio, which
we can discuss separately, since I don't want it to impact this use
case).

The goal of the controller was (and it's optional; it may not work
well for sparse address spaces):

1. Set the vm limit.
2. If the limit is exceeded, fail at malloc()/mmap() time as opposed
to OOM'ing at page-fault time.
3. The application handles the failure and decides not to proceed
with the new task that needed more memory.

I think this lets applications deal with failures better. OOM is a
big hammer.

> I am really skeptical, to be completely honest. The global behavior is
> not very usable in most cases already. Making it per-memcg will just
> amplify all the issues (application tend to overcommit their virtual
> address space). Not to mention that you cannot really prevent from the
> OOM killer because there are allocations outside of the address space.
> 

Could you clarify what you mean by allocations outside the address
space -- shared allocations outside the cgroup? Kernel allocations as
a side effect?

> So if you want to push this forward you really need a very good existing
> usecase to justifiy the change.

I want to start the discussion again.

Balbir Singh.


* Re: virtual memory limits control (was Re: [Lsf-pc] [LSF/MM ATTEND] Attend mm summit 2018)
  2018-02-22 22:01       ` virtual memory limits control (was Re: [Lsf-pc] [LSF/MM ATTEND] Attend mm summit 2018) Balbir Singh
@ 2018-02-23  7:42         ` Michal Hocko
  2018-02-25 23:08           ` Balbir Singh
  0 siblings, 1 reply; 7+ messages in thread
From: Michal Hocko @ 2018-02-23  7:42 UTC (permalink / raw)
  To: Balbir Singh; +Cc: linux-mm, lsf-pc

On Fri 23-02-18 09:01:23, Balbir Singh wrote:
> Changed the subject to reflect the discussion
> 
> On Thu, 22 Feb 2018 14:34:25 +0100
> Michal Hocko <mhocko@kernel.org> wrote:
> 
> > On Fri 23-02-18 00:23:53, Balbir Singh wrote:
> > > On Fri, Feb 23, 2018 at 12:03 AM, Michal Hocko <mhocko@kernel.org> wrote:  
> > > > On Thu 22-02-18 13:54:46, Balbir Singh wrote:
> > > > [...]  
> > > >> 2. Memory cgroups - I don't see a pressing need for many new features,
> > > >> but I'd like to see if we can revive some old proposals around virtual
> > > >> memory limits  
> > > >
> > > > Could you be more specific about usecase(s)?  
> > > 
> > > I had for a long time a virtual memory limit controller in -mm tree.
> > > The use case was to fail allocations as opposed to OOM'ing in the
> > > worst case as we do with the cgroup memory limits (actual page usage
> > > control). I did not push for it then since I got side-tracked. I'd
> > > like to pursue a use case for being able to fail allocations as
> > > opposed to OOM'ing on a per cgroup basis. I'd like to start the
> > > discussion again.  
> > 
> > So you basically want the strict no overcommit on the per memcg level?
> 
> I don't think it implies strict no overcommit, the value sets the
> overcommit ratio (independent of the global vm.overcommit_ratio, which
> we can discuss on the side, since I don't want it to impact the use
> case).
> 
> The goal of the controller was  (and its optional, may not work well
> for sparse address spaces)
> 
> 1. set the vm limit
> 2. If the limit is exceeded, fail at malloc()/mmap() as opposed to
> OOM'ing at page fault time

This is basically strict no-overcommit.

> 3. Application handles the fault and decide not to proceed with the
> new task that needed more memory

So you do not return ENOMEM but rather raise a signal? What would
that be?

> I think this leads to applications being able to deal with failures
> better. OOM is a big hammer

Do you have any _specific_ usecase in mind?
 
> > I am really skeptical, to be completely honest. The global behavior is
> > not very usable in most cases already. Making it per-memcg will just
> > amplify all the issues (application tend to overcommit their virtual
> > address space). Not to mention that you cannot really prevent from the
> > OOM killer because there are allocations outside of the address space.
> > 
> 
> Could you clarify on the outside address space -- as in shared
> allocations outside the cgroup?  kernel allocations as a side-effect?

basically anything that can be triggered from userspace and doesn't map
into the address space - page cache, fs metadata, drm buffers etc...
-- 
Michal Hocko
SUSE Labs


* Re: virtual memory limits control (was Re: [Lsf-pc] [LSF/MM ATTEND] Attend mm summit 2018)
  2018-02-23  7:42         ` Michal Hocko
@ 2018-02-25 23:08           ` Balbir Singh
  0 siblings, 0 replies; 7+ messages in thread
From: Balbir Singh @ 2018-02-25 23:08 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm, vdavydov

-lsf-pc (I can add them back, but I did not want to spam the group)
+Vladimir

On Fri, 23 Feb 2018 08:42:01 +0100
Michal Hocko <mhocko@kernel.org> wrote:

> On Fri 23-02-18 09:01:23, Balbir Singh wrote:
> > Changed the subject to reflect the discussion
> > 
> > On Thu, 22 Feb 2018 14:34:25 +0100
> > Michal Hocko <mhocko@kernel.org> wrote:
> >   
> > > On Fri 23-02-18 00:23:53, Balbir Singh wrote:  
> > > > On Fri, Feb 23, 2018 at 12:03 AM, Michal Hocko <mhocko@kernel.org> wrote:    
> > > > > On Thu 22-02-18 13:54:46, Balbir Singh wrote:
> > > > > [...]    
> > > > >> 2. Memory cgroups - I don't see a pressing need for many new features,
> > > > >> but I'd like to see if we can revive some old proposals around virtual
> > > > >> memory limits    
> > > > >
> > > > > Could you be more specific about usecase(s)?    
> > > > 
> > > > I had for a long time a virtual memory limit controller in -mm tree.
> > > > The use case was to fail allocations as opposed to OOM'ing in the
> > > > worst case as we do with the cgroup memory limits (actual page usage
> > > > control). I did not push for it then since I got side-tracked. I'd
> > > > like to pursue a use case for being able to fail allocations as
> > > > opposed to OOM'ing on a per cgroup basis. I'd like to start the
> > > > discussion again.    
> > > 
> > > So you basically want the strict no overcommit on the per memcg level?  
> > 
> > I don't think it implies strict no overcommit, the value sets the
> > overcommit ratio (independent of the global vm.overcommit_ratio, which
> > we can discuss on the side, since I don't want it to impact the use
> > case).
> > 
> > The goal of the controller was  (and its optional, may not work well
> > for sparse address spaces)
> > 
> > 1. set the vm limit
> > 2. If the limit is exceeded, fail at malloc()/mmap() as opposed to
> > OOM'ing at page fault time  
> 
> this is basically strict no-overcommit

I look at it more as Committed_AS accounting and control, driven not
by CommitLimit but by something the administrator can derive; but
you're right, the default would be CommitLimit.
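A toy model of what per-cgroup Committed_AS-style accounting could
look like: the default limit is derived the same way the kernel
computes the global CommitLimit (ram * overcommit_ratio / 100 + swap),
and a charge over the limit fails with -ENOMEM at mmap()/brk() time.
All names and the struct layout are illustrative, not from the
original -mm patches:

```c
#include <errno.h>

/* Toy model of per-cgroup virtual memory accounting (illustrative
 * names, not the actual -mm patches). */
struct memcg_vm {
	unsigned long committed;	/* pages committed so far */
	unsigned long limit;		/* cap, in pages */
};

/* Default limit mirroring how the global CommitLimit is derived:
 * ram * overcommit_ratio / 100 + swap. */
static unsigned long memcg_vm_default_limit(unsigned long ram_pages,
					    unsigned long swap_pages,
					    unsigned long overcommit_ratio)
{
	return ram_pages * overcommit_ratio / 100 + swap_pages;
}

/* Charge at mmap()/brk() time: fail with -ENOMEM so the caller's
 * allocation fails cleanly, instead of the task being OOM-killed at
 * page-fault time. */
static int memcg_vm_charge(struct memcg_vm *cg, unsigned long pages)
{
	if (cg->committed + pages > cg->limit)
		return -ENOMEM;
	cg->committed += pages;
	return 0;
}

/* Uncharge at munmap()/exit() time. */
static void memcg_vm_uncharge(struct memcg_vm *cg, unsigned long pages)
{
	cg->committed -= pages;
}
```

The administrator could then raise or lower `limit` per cgroup
independently of the global vm.overcommit_ratio.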

> 
> > 3. Application handles the fault and decide not to proceed with the
> > new task that needed more memory  
> 
> So you do not return ENOMEM but rather raise a signal? What that would
> be?

Nope, it will return ENOMEM.

> 
> > I think this leads to applications being able to deal with failures
> > better. OOM is a big hammer  
> 
> Do you have any _specific_ usecase in mind?

It's mostly my frustration with the OOM kills I see. Granted, a lot
of it is about sizing the memory cgroup correctly, but that is not an
easy job.

>  
> > > I am really skeptical, to be completely honest. The global behavior is
> > > not very usable in most cases already. Making it per-memcg will just
> > > amplify all the issues (application tend to overcommit their virtual
> > > address space). Not to mention that you cannot really prevent from the
> > > OOM killer because there are allocations outside of the address space.
> > >   
> > 
> > Could you clarify on the outside address space -- as in shared
> > allocations outside the cgroup?  kernel allocations as a side-effect?  
> 
> basically anything that can be triggered from userspace and doesn't map
> into the address space - page cache, fs metadata, drm buffers etc...

Yep, the virtual memory limits controller is more about Committed_AS.

I also noticed that Vladimir tried something similar at
https://lkml.org/lkml/2014/7/3/405

Balbir Singh.

