From: David Hildenbrand <david@redhat.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Vlastimil Babka <vbabka@suse.cz>, Jens Axboe <axboe@kernel.dk>,
	Andrew Dona-Couch <andrew@donacou.ch>,
	Andrew Morton <akpm@linux-foundation.org>,
	Drew DeVault <sir@cmpwn.com>,
	Ammar Faizi <ammarfaizi2@gnuweeb.org>,
	linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
	io_uring Mailing List <io-uring@vger.kernel.org>,
	Pavel Begunkov <asml.silence@gmail.com>,
	linux-mm@kvack.org
Subject: Re: [PATCH] Increase default MLOCK_LIMIT to 8 MiB
Date: Wed, 24 Nov 2021 20:09:42 +0100	[thread overview]
Message-ID: <cc9d3f3e-2fe1-0df0-06b2-c54e020161da@redhat.com> (raw)
In-Reply-To: <20211124183544.GL5112@ziepe.ca>

On 24.11.21 19:35, Jason Gunthorpe wrote:
> On Wed, Nov 24, 2021 at 05:43:58PM +0100, David Hildenbrand wrote:
>> On 24.11.21 16:34, Jason Gunthorpe wrote:
>>> On Wed, Nov 24, 2021 at 03:14:00PM +0100, David Hildenbrand wrote:
>>>
>>>> I'm not aware of any where you can fragment 50% of all pageblocks in the
>>>> system as an unprivileged user essentially consuming almost no memory
>>>> and essentially staying inside well-defined memlock limits. But sure if
>>>> there are "many" people will be able to come up with at least one
>>>> comparable thing. I'll be happy to learn.
>>>
>>> If the concern is that THP's can be DOS'd then any avenue that renders
>>> the system out of THPs is a DOS attack vector. Including all the
>>> normal workloads that people run and already complain that THPs get
>>> exhausted.
>>>
>>> A hostile userspace can only quicken this process.
>>
>> We can not only fragment THP but also easily smaller compound pages,
>> with less impact though (well, as long as people want more than 0.1% per
>> user ...).
> 
> My point is as long as userspace can drive this fragmentation, by any
> means, we can never have DOS proof higher order pages, so lets not
> worry so much about one of many ways to create fragmentation.
> 

That would be giving up on compound pages (hugetlbfs, THP, ...) on any
current Linux system that does not use ZONE_MOVABLE -- which is not
something I am willing to buy into, and neither are our customers ;)

See my other mail: the upstream version of my reproducer essentially
shows what FOLL_LONGTERM currently does wrong with pageblocks. At least
to me that's an interesting insight :)

I agree that the more extreme scenarios I can construct are a secondary
concern. But my upstream reproducer just highlights what can easily
happen in reality.
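
For the curious, the general pattern looks roughly like the sketch
below. To be clear, this is *not* the actual reproducer, just a
simplified illustration of an unprivileged FOLL_LONGTERM user: it uses
io_uring fixed-buffer registration to long-term pin a handful of
scattered pages while staying well inside RLIMIT_MEMLOCK. It assumes
x86-64 (4 KiB base pages, 2 MiB pageblocks) and liburing, and it only
shows the pinning/accounting side -- the real reproducer additionally
has to make sure the pinned pages end up in distinct (movable)
pageblocks, which takes more care than shown here:

#include <liburing.h>
#include <sys/mman.h>
#include <unistd.h>

#define STRIDE		(2UL << 20)	/* assumed pageblock size: 2 MiB */
#define BASE_PAGE	4096UL		/* assumed base page size: 4 KiB */
#define NR_PINS		512

int main(void)
{
	struct io_uring ring;
	struct iovec iov[NR_PINS];
	unsigned long i;
	char *map;

	/* 1 GiB of address space, of which only 512 * 4 KiB = 2 MiB will
	 * ever be faulted in and pinned. */
	map = mmap(NULL, NR_PINS * STRIDE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;

	/* Fault in and describe one base page every 2 MiB. */
	for (i = 0; i < NR_PINS; i++) {
		map[i * STRIDE] = 1;
		iov[i].iov_base = map + i * STRIDE;
		iov[i].iov_len = BASE_PAGE;
	}

	if (io_uring_queue_init(8, &ring, 0))
		return 1;

	/* This takes a FOLL_LONGTERM pin on every page in iov[]: only the
	 * pinned 2 MiB is charged against RLIMIT_MEMLOCK, yet none of the
	 * pinned pages can be migrated for compaction anymore. */
	if (io_uring_register_buffers(&ring, iov, NR_PINS))
		return 1;

	pause();	/* keep the pins (and the fragmentation) alive */
	return 0;
}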

>>>> My position that FOLL_LONGTERM for unprivileged users is a strong no-go
>>>> stands as it is.
>>>
>>> As this basically excludes long standing pre-existing things like
>>> RDMA, XDP, io_uring, and more I don't think this can be the general
>>> answer for mm, sorry.
>>
>> Let's think about options to restrict FOLL_LONGTERM usage:
> 
> Which gives me the view that we should be talking about how to make
> high order pages completely DOS proof, not about FOLL_LONGTERM.

Sure, one step at a time ;)

> 
> To me that is exactly what ZONE_MOVABLE strives to achieve, and I
> think anyone who cares about QOS around THP must include ZONE_MOVABLE
> in their solution.

For 100% yes.

> 
> In all of this I am thinking back to the discussion about the 1GB THP
> proposal which was resoundly shot down on the grounds that 2MB THP
> *doesn't work* today due to the existing fragmentation problems.

The claim that "2MB THP" doesn't work is just wrong. Pageblocks do
their job very well, but we can end up in corner-case situations where
more and more pageblocks get fragmented. And people keep improving
these corner cases (e.g., proactive compaction).

Usually you have to allocate *a lot* of memory and put the system under
extreme memory pressure, such that unmovable allocations spill into
movable pageblocks and the other way around.

The thing about my reproducer is that it does that without any memory
pressure, and that is the BIG difference to everything else we have in
that regard. You can have an idle 1TiB system running my reproducer and
it will fragment half of all pageblocks in the system while mlocking
~ 1GiB. And that highlights the real issue IMHO.
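
(For completeness, the arithmetic behind those numbers, assuming 2 MiB
pageblocks and 4 KiB base pages: 1 TiB is 524288 pageblocks; pinning
one base page in half of them means 262144 pages, i.e.
262144 * 4 KiB = 1 GiB of mlocked memory rendering
262144 * 2 MiB = 512 GiB worth of pageblocks unavailable for THP.)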

The 1 GB THP project is still going on BTW.

> 
>> Another option would be not accounting FOLL_LONGTERM as RLIMIT_MEMLOCK,
>> but instead as something that explicitly matches the differing
>> semantics. 
> 
> Also a good idea, someone who cares about this should really put
> pinned pages into the cgroup machinery (with correct accounting!)
> 
>> At the same time, eventually work on proper alternatives with mmu
>> notifiers (and possibly without any such limits) where possible
>> and required.
> 
> mmu_notifiers is also bad, it just offends a different group of MM
> concerns :)

Yeah, I know, locking nightmare.

> 
> Something like io_uring is registering a bulk amount of memory and then
> doing some potentially long operations against it.

The individual operations it performs are comparable to O_DIRECT, I
think -- but I'm no expert.

> 
> So to use a mmu_notifier scheme you'd have to block the mmu_notifier
> invalidate_range_start until all the operations touching the memory
> finish (and suspend new operations at the same time!).
> 
> Blocking the notifier like this locks up the migration/etc threads
> completely, and is destructive to the OOM reclaim.
> 
> At least with a pinned page those threads don't even try to touch it
> instead of getting stuck up.

Yes, if only we were pinning for a limited amount of time ...
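
For readers following along, the scheme being dismissed here boils down
to something like the kernel-side sketch below. This is not code from
any real driver -- the struct and the waiting logic are made up for
illustration, only the mmu_notifier_ops callback is the real interface
-- but it shows why blocking invalidate_range_start() until all
in-flight operations drain is so unpopular with migration, compaction
and OOM handling:

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/wait.h>
#include <linux/atomic.h>

/* Hypothetical per-registration state; the names are made up. */
struct demo_buf {
	struct mmu_notifier	mn;
	atomic_t		frozen;		/* stop accepting new operations */
	atomic_t		inflight;	/* operations touching the buffer */
	wait_queue_head_t	wq;
};

static int demo_invalidate_range_start(struct mmu_notifier *mn,
				       const struct mmu_notifier_range *range)
{
	struct demo_buf *buf = container_of(mn, struct demo_buf, mn);

	/* Non-blockable contexts (e.g. the OOM reaper) cannot wait. */
	if (!mmu_notifier_range_blockable(range))
		return -EAGAIN;

	/* Suspend new operations against the buffer ... */
	atomic_set(&buf->frozen, 1);

	/*
	 * ... and block until every in-flight operation has finished.
	 * Whoever invoked the notifier (migration, compaction, reclaim,
	 * ...) is stuck here for however long that takes.
	 */
	wait_event(buf->wq, atomic_read(&buf->inflight) == 0);
	return 0;
}

static const struct mmu_notifier_ops demo_mn_ops = {
	.invalidate_range_start	= demo_invalidate_range_start,
};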

-- 
Thanks,

David / dhildenb

