From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: ricklind@linux.vnet.ibm.com, linuxppc-dev@lists.ozlabs.org
Cc: Rick Lindsley <ricklind@linux.vnet.ibm.com>
Subject: Re: [PATCH] arch/powerpc/mm/slice: Cleanup leftover use of task_size
Date: Thu, 13 Apr 2017 16:28:46 +0530 [thread overview]
Message-ID: <87poggpo9l.fsf@skywalker.in.ibm.com> (raw)
In-Reply-To: <1492067567-24688-1-git-send-email-ricklind@linux.vnet.ibm.com>
ricklind@linux.vnet.ibm.com writes:
> From: Rick Lindsley <ricklind@linux.vnet.ibm.com>
>
> With the 512TB virtual addressing capability, a new field was added to
> the paca and mm_context (addr_limit) to track the process's desire to
> use the larger addressing space. Functions in the radix-enabled path
> (mmap.c) were modified to inspect this value when deciding whether to
> grant or deny requests in that range.
>
> However, the non-radix path falls through to the old, hashed slice code
> (slice_get_unmapped_area, etc.) and these code paths still inspect
> task_size. The same attention to addr_limit made in (for example)
> radix__arch_get_unmapped_area() should also be applied to (correspondingly)
> slice_get_unmapped_area().
I would suggest we not make this change now. Rather, we should audit
the users of TASK_SIZE() and mm->task_size and convert each of them
correctly to either mm->task_size or mm->context.addr_limit.
context.addr_limit was added as an optimization for the slice_mask
copy, and we need to audit closely to make sure we can use it as a
boundary condition for error checking in the mmap path. IMHO we should
do this while consolidating
TASK_SIZE/mm->task_size/mm->context.addr_limit.
A previous attempt can be found at
https://lkml.kernel.org/r/20161230155634.8692-1-dsafonov@virtuozzo.com
We should start working in that direction. Some architectures even
have thread_info.addr_limit.
>
> Signed-off-by: Rick Lindsley <ricklind@linux.vnet.ibm.com>
> ---
> arch/powerpc/mm/slice.c | 12 ++++++------
> 1 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 251b6ba..c023bff 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -96,7 +96,7 @@ static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
> {
> struct vm_area_struct *vma;
>
> - if ((mm->task_size - len) < addr)
> + if ((mm->addr_limit - len) < addr)
> return 0;
> vma = find_vma(mm, addr);
> return (!vma || (addr + len) <= vma->vm_start);
> @@ -133,7 +133,7 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret)
> if (!slice_low_has_vma(mm, i))
> ret->low_slices |= 1u << i;
>
> - if (mm->task_size <= SLICE_LOW_TOP)
> + if (mm->addr_limit <= SLICE_LOW_TOP)
> return;
>
> for (i = 0; i < GET_HIGH_SLICE_INDEX(mm->context.addr_limit); i++)
> @@ -444,20 +444,20 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
> bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
>
> /* Sanity checks */
> - BUG_ON(mm->task_size == 0);
> + BUG_ON(mm->addr_limit == 0);
> VM_BUG_ON(radix_enabled());
>
> slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
> slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
> addr, len, flags, topdown);
>
> - if (len > mm->task_size)
> + if (len > mm->addr_limit)
> return -ENOMEM;
> if (len & ((1ul << pshift) - 1))
> return -EINVAL;
> if (fixed && (addr & ((1ul << pshift) - 1)))
> return -EINVAL;
> - if (fixed && addr > (mm->task_size - len))
> + if (fixed && addr > (mm->addr_limit - len))
> return -ENOMEM;
>
> /* If hint, make sure it matches our alignment restrictions */
> @@ -465,7 +465,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
> addr = _ALIGN_UP(addr, 1ul << pshift);
> slice_dbg(" aligned addr=%lx\n", addr);
> /* Ignore hint if it's too large or overlaps a VMA */
> - if (addr > mm->task_size - len ||
> + if (addr > mm->addr_limit - len ||
> !slice_area_is_free(mm, addr, len))
> addr = 0;
> }
> --
> 1.7.1
-aneesh
Thread overview: 8+ messages [2017-04-13 10:59 UTC]
2017-04-13 7:12 [PATCH] arch/powerpc/mm/slice: Cleanup leftover use of task_size ricklind
2017-04-13 10:58 ` Aneesh Kumar K.V [this message]
2017-04-13 11:43 ` Rick Lindsley
2017-04-13 11:48 ` Aneesh Kumar K.V
2017-04-13 15:35 ` kbuild test robot
2017-04-14 7:16 ` Rick Lindsley
2017-04-13 19:33 ` Aneesh Kumar K.V
2017-04-14 5:31 ` Rick Lindsley