* [HELP] How to get task_struct from mm
From: Yang Shi @ 2019-05-30  6:57 UTC
  To: Mel Gorman, Michal Hocko, Andrew Morton; +Cc: linux-mm, linux-kernel

Hi folks,


As we discussed about page demotion for PMEM at LSF/MM, the demotion
should respect the mempolicy and allowed mems of the process that the
page (anonymous pages only for now) belongs to.


The vma that the page is mapped into can be retrieved easily from an
rmap walk, but we also need to know the task_struct that the vma
belongs to. There seems to be no API for that, and container_of() does
not work here because task_struct only holds a pointer to the mm_struct
rather than embedding it.
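
To illustrate the container_of() problem, here is a minimal user-space
sketch (hypothetical struct layouts, not the real kernel definitions):

    #include <stddef.h>

    /* The kernel's container_of(), simplified: pure offset arithmetic. */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct mm_struct { int dummy; };

    struct task_embedded { struct mm_struct mm;  };  /* member by value   */
    struct task_real     { struct mm_struct *mm; };  /* member by pointer */

    int main(void)
    {
            struct task_embedded t;
            struct mm_struct *p = &t.mm;

            /* Works: p points into a task_embedded, so subtracting the
             * member offset recovers the container. */
            struct task_embedded *owner =
                    container_of(p, struct task_embedded, mm);
            (void)owner;

            /* With task_real there is nothing to recover: the mm_struct
             * lives outside the task and may be shared by many tasks, so
             * no offset arithmetic leads from the mm back to an owner. */
            return 0;
    }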


Any suggestions?


Thanks,

Yang



* Re: [HELP] How to get task_struct from mm
From: Yang Shi @ 2019-05-30  7:26 UTC
  To: Mel Gorman, Michal Hocko, Andrew Morton; +Cc: linux-mm, linux-kernel



On 5/30/19 2:57 PM, Yang Shi wrote:
> Hi folks,
>
>
> As we discussed about page demotion for PMEM at LSF/MM, the demotion
> should respect the mempolicy and allowed mems of the process that the
> page (anonymous pages only for now) belongs to.
>
>
> The vma that the page is mapped into can be retrieved easily from an
> rmap walk, but we also need to know the task_struct that the vma
> belongs to. There seems to be no API for that, and container_of() does
> not work here because task_struct only holds a pointer to the
> mm_struct rather than embedding it.
>
>
> Any suggestions?

mm->owner is defined for CONFIG_MEMCG only; I'm wondering whether we
could extend it to the !CONFIG_MEMCG case as well?
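
For reference, the existing hook looks roughly like this (a simplified
excerpt, not a patch):

    /* include/linux/mm_types.h, simplified: */
    struct mm_struct {
            ...
    #ifdef CONFIG_MEMCG
            /*
             * A canonical owner task for this mm, used by memcg to map
             * an mm back to a cgroup; re-assigned when the owner exits.
             */
            struct task_struct __rcu *owner;
    #endif
            ...
    };

    /* and the typical consumer pattern, under RCU: */
    rcu_read_lock();
    task = rcu_dereference(mm->owner);
    ...
    rcu_read_unlock();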

>
>
> Thanks,
>
> Yang
>



* Re: [HELP] How to get task_struct from mm
From: Michal Hocko @ 2019-05-30 15:41 UTC
  To: Yang Shi; +Cc: Mel Gorman, Andrew Morton, linux-mm, linux-kernel

On Thu 30-05-19 14:57:46, Yang Shi wrote:
> Hi folks,
> 
> 
> As we discussed about page demotion for PMEM at LSF/MM, the demotion
> should respect the mempolicy and allowed mems of the process that the
> page (anonymous pages only for now) belongs to.

The cpuset memory mask (aka mems_allowed) is indeed tricky and somewhat
awkward. It is inherently an address space property, and I have never
understood why we have it per _thread_; that just doesn't make any
sense to me and only leads to weird corner cases. What should happen if
different threads disagree about the allocation affinity while working
on a shared address space?
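
To make the mismatch concrete (a simplified excerpt of the current
layout, nothing new):

    /* include/linux/sched.h, simplified: */
    struct task_struct {
            ...
            struct mm_struct *mm;           /* shared by all threads */
    #ifdef CONFIG_CPUSETS
            nodemask_t mems_allowed;        /* but this is per-thread */
    #endif
            ...
    };
    /*
     * Two threads can share one mm while carrying different
     * mems_allowed masks, so "the" allowed nodes of an address space
     * are simply not well defined.
     */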
 
> The vma that the page is mapped into can be retrieved easily from an
> rmap walk, but we also need to know the task_struct that the vma
> belongs to. There seems to be no API for that, and container_of() does
> not work here because task_struct only holds a pointer to the
> mm_struct rather than embedding it.

I do not think this is a good idea. As you point out in your reply, we
have that for memcgs, but we really hope to get rid of mm->owner there
as well; it is just more tricky there. Moreover, such a reverse mapping
would be incorrect. Just think of disagreeing yet overlapping cpusets
for different threads mapping the same page.

Is it such a big deal to document that node migration is not compatible
with cpusets?
-- 
Michal Hocko
SUSE Labs


* Re: [HELP] How to get task_struct from mm
From: Yang Shi @ 2019-05-31 12:51 UTC
  To: Michal Hocko; +Cc: Mel Gorman, Andrew Morton, linux-mm, linux-kernel



On 5/30/19 11:41 PM, Michal Hocko wrote:
> On Thu 30-05-19 14:57:46, Yang Shi wrote:
>> Hi folks,
>>
>>
>> As we discussed about page demotion for PMEM at LSF/MM, the demotion
>> should respect the mempolicy and allowed mems of the process that the
>> page (anonymous pages only for now) belongs to.
> The cpuset memory mask (aka mems_allowed) is indeed tricky and somewhat
> awkward. It is inherently an address space property, and I have never
> understood why we have it per _thread_; that just doesn't make any
> sense to me and only leads to weird corner cases. What should happen if
> different threads disagree about the allocation affinity while working
> on a shared address space?

I suppose (just my guess) that such a restriction should only apply to
the first allocation, much like a memcg charge: whoever touches first
gets their policy applied.
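
For comparison, the shape of the memcg charge in the anonymous fault
path (paraphrased and simplified from current kernels; error handling
elided):

    /* The task taking the first fault is the one whose memcg gets
     * charged, through the mm it faults into: */
    struct mem_cgroup *memcg;

    if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg, false))
            goto oom;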

>
>> The vma that the page is mapped into can be retrieved easily from an
>> rmap walk, but we also need to know the task_struct that the vma
>> belongs to. There seems to be no API for that, and container_of() does
>> not work here because task_struct only holds a pointer to the
>> mm_struct rather than embedding it.
> I do not think this is a good idea. As you point out in your reply, we
> have that for memcgs, but we really hope to get rid of mm->owner there
> as well; it is just more tricky there. Moreover, such a reverse mapping
> would be incorrect. Just think of disagreeing yet overlapping cpusets
> for different threads mapping the same page.
>
> Is it such a big deal to document that node migration is not compatible
> with cpusets?

It's not only cpusets: get_vma_policy() also needs to find the
task_struct from the vma. Currently get_vma_policy() just uses
"current", so it returns the current process's mempolicy when the vma
does not have one of its own. For the node migration case, "current" is
definitely not the right task.
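
For reference, the fallback in question (paraphrasing mm/mempolicy.c,
slightly simplified):

    /* get_vma_policy(), simplified: */
    struct mempolicy *pol = __get_vma_policy(vma, addr);

    if (!pol)
            pol = get_task_policy(current); /* the problematic fallback */
    return pol;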

It looks like there is no easy way to work around this unless we
declare that node migration is incompatible with both cpusets and
mempolicy.




* Re: [HELP] How to get task_struct from mm
From: Michal Hocko @ 2019-05-31 13:56 UTC
  To: Yang Shi; +Cc: Mel Gorman, Andrew Morton, linux-mm, linux-kernel

On Fri 31-05-19 20:51:05, Yang Shi wrote:
> 
> 
> On 5/30/19 11:41 PM, Michal Hocko wrote:
> > On Thu 30-05-19 14:57:46, Yang Shi wrote:
> > > Hi folks,
> > > 
> > > 
> > > As we discussed about page demotion for PMEM at LSF/MM, the demotion
> > > should respect the mempolicy and allowed mems of the process that the
> > > page (anonymous pages only for now) belongs to.
> > The cpuset memory mask (aka mems_allowed) is indeed tricky and somewhat
> > awkward. It is inherently an address space property, and I have never
> > understood why we have it per _thread_; that just doesn't make any
> > sense to me and only leads to weird corner cases. What should happen if
> > different threads disagree about the allocation affinity while working
> > on a shared address space?
> 
> I suppose (just my guess) that such a restriction should only apply to
> the first allocation, much like a memcg charge: whoever touches first
> gets their policy applied.

I am not really sure that was a deliberate design choice. Maybe
somebody has a different recollection, though.

> > > The vma that the page is mapped into can be retrieved easily from an
> > > rmap walk, but we also need to know the task_struct that the vma
> > > belongs to. There seems to be no API for that, and container_of() does
> > > not work here because task_struct only holds a pointer to the
> > > mm_struct rather than embedding it.
> > I do not think this is a good idea. As you point out in your reply, we
> > have that for memcgs, but we really hope to get rid of mm->owner there
> > as well; it is just more tricky there. Moreover, such a reverse mapping
> > would be incorrect. Just think of disagreeing yet overlapping cpusets
> > for different threads mapping the same page.
> > 
> > Is it such a big deal to document that node migration is not compatible
> > with cpusets?
> 
> It's not only cpusets: get_vma_policy() also needs to find the
> task_struct from the vma. Currently get_vma_policy() just uses
> "current", so it returns the current process's mempolicy when the vma
> does not have one of its own. For the node migration case, "current" is
> definitely not the right task.
>
> It looks like there is no easy way to work around this unless we
> declare that node migration is incompatible with both cpusets and
> mempolicy.

yep, it seems so.
-- 
Michal Hocko
SUSE Labs

