Date: Tue, 5 Dec 2017 16:19:56 +0100
From: Oleg Nesterov
To: Kirill Tkhai
Cc: Tejun Heo, axboe@kernel.dk, bcrl@kvack.org, viro@zeniv.linux.org.uk,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-aio@kvack.org
Subject: Re: [PATCH 0/5] blkcg: Limit maximum number of aio requests available for cgroup
Message-ID: <20171205151956.GA22836@redhat.com>
References: <151240305010.10164.15584502480037205018.stgit@localhost.localdomain>
 <20171204200756.GC2421075@devbig577.frc2.facebook.com>
 <17b22d53-ad3d-1ba8-854f-fc2a43d86c44@virtuozzo.com>
In-Reply-To: <17b22d53-ad3d-1ba8-854f-fc2a43d86c44@virtuozzo.com>

On 12/05, Kirill Tkhai wrote:
>
> Currently, aio_nr and aio_max_nr are global.

Yeah, I too tried to complain 2 years ago...

> In case of containers this
> means that a single container may occupy all aio requests, which are
> available in the system, and memory.

Let me quote my old emails...

This is off-topic, but the whole "vm" logic in aio_setup_ring() looks
sub-optimal. I do not mean the code; it just seems to me that it is
pointless to pollute the page cache and expose pages we can not
swap/free to the LRU. Afaics, we _only_ need this for migration.

This memory lives in the page cache / LRU, and it is visible to the
shrinker, which will unmap these pages for no reason on memory
shortage. IOW, aio fools the kernel: this memory looks reclaimable but
it is not. And we only do this for migration.

Even if this is not a problem, it does not look right. So perhaps at
least mapping_set_unevictable() makes sense. But I simply do not know
whether migration would still work with this change.

Perhaps I missed something; it doesn't matter. But this also means that
this memory is not accounted, so if I increase aio-max-nr, then this
test-case

#include <stdio.h>
#include <unistd.h>

#define __NR_io_setup	206

int main(void)
{
	int nr;

	/* create aio contexts until io_setup() starts to fail */
	for (nr = 0; ; ++nr) {
		void *ctx = NULL;
		int ret = syscall(__NR_io_setup, 1, &ctx);

		if (ret) {
			printf("failed %d %m: ", nr);
			getchar();
		}
	}

	return 0;
}

triggers the OOM-killer, which kills sshd and other daemons on my
machine. These pages were not even faulted in (or the shrinker can
unmap them), so the kernel can not know who should be blamed.

Shouldn't we account aio events/pages somehow, say per-user, or in
mm->pinned_vm?

I do not think this is unknown, and probably all of this is fine. IOW,
this is just a question, not a bug report or anything like that.

And of course, this is not exploitable, because aio-max-nr limits the
number of pages you can steal. But OTOH, aio_max_nr is system-wide, so
an unprivileged user can ddos (say) mysqld.

And this leads to the same question: shouldn't we account nr_events at
least?

Oleg.
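
P.S. To make the "system-wide" point concrete, below is an untested
sketch that extends the test-case above to watch /proc/sys/fs/aio-nr
while it creates contexts (the sysctl paths are the usual ones, the
rest is only illustration). It merely demonstrates that every
io_setup(), from any user or container, draws from the same global
counter until aio-max-nr is hit.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

/* read a single number from a sysctl file, -1 on error */
static long read_sysctl(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	int nr;

	printf("aio-max-nr = %ld\n", read_sysctl("/proc/sys/fs/aio-max-nr"));

	for (nr = 0; ; ++nr) {
		unsigned long ctx = 0;

		/* every context allocates at least one page-cache page
		 * for its ring and charges the global aio_nr counter */
		if (syscall(SYS_io_setup, 1, &ctx) != 0) {
			printf("io_setup #%d failed: %s\n",
			       nr, strerror(errno));
			break;
		}

		if (nr % 100 == 0)
			printf("contexts: %d, aio-nr = %ld\n", nr,
			       read_sysctl("/proc/sys/fs/aio-nr"));
	}

	return 0;
}

Run it as an unprivileged user and cat /proc/sys/fs/aio-nr from another
shell (or another container): it is the same counter everywhere, which
is exactly why a single container can exhaust it for everybody else.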