From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 28 May 2019 09:38:35 +0200
From: Michal Hocko
To: Konstantin Khlebnikov
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vladimir Davydov,
        Johannes Weiner, Tejun Heo, Andrew Morton, Mel Gorman,
        Roman Gushchin, linux-api@vger.kernel.org
Subject: Re: [PATCH RFC] mm/madvise: implement MADV_STOCKPILE (kswapd from user space)
Message-ID: <20190528073835.GP1658@dhcp22.suse.cz>
References: <155895155861.2824.318013775811596173.stgit@buzz>
        <20190527141223.GD1658@dhcp22.suse.cz>
        <20190527142156.GE1658@dhcp22.suse.cz>
        <20190527143926.GF1658@dhcp22.suse.cz>
        <9c55a343-2a91-46c6-166d-41b94bf5e9c8@yandex-team.ru>
        <20190528065153.GB1803@dhcp22.suse.cz>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Tue 28-05-19 10:30:12, Konstantin Khlebnikov wrote:
> On 28.05.2019 9:51, Michal Hocko wrote:
> > On Tue 28-05-19 09:25:13, Konstantin Khlebnikov wrote:
> > > On 27.05.2019 17:39, Michal Hocko wrote:
> > > > On Mon 27-05-19 16:21:56, Michal Hocko wrote:
> > > > > On Mon 27-05-19 16:12:23, Michal Hocko wrote:
> > > > > > [Cc linux-api. Please always cc this list when proposing a new
> > > > > > user-visible api. Keeping the rest of the email intact for
> > > > > > reference]
> > > > > >
> > > > > > On Mon 27-05-19 13:05:58, Konstantin Khlebnikov wrote:
> > > > > [...]
> > > > > > > This implements manual kswapd-style memory reclaim initiated by
> > > > > > > userspace. It reclaims both physical memory and cgroup pages.
> > > > > > > It works in the context of the task that calls the madvise
> > > > > > > syscall, so CPU time is accounted correctly.
> > > > >
> > > > > I do not follow. Does this mean that the madvise always reclaims
> > > > > from the memcg the process is a member of?
> > > >
> > > > OK, I've had a quick look at the implementation (the semantics should
> > > > be clear from the patch description, btw) and it goes all the way up
> > > > the hierarchy and finally tries to impose the same limit on the
> > > > global state. This doesn't really make much sense to me, for a few
> > > > reasons.
> > > >
> > > > First of all, it breaks isolation, where one subgroup can influence a
> > > > different hierarchy via parent reclaim.
> > >
> > > madvise(NULL, size, MADV_STOCKPILE) is the same as allocating memory
> > > and freeing it immediately, but without pinning the memory or
> > > provoking an oom.
> > >
> > > So there shouldn't be any isolation or security issues.
> > >
> > > At least it should probably be limited to a portion of the limit
> > > (like half) instead of the whole limit, as it is now.
> >
> > I do not think so. If a process is running inside a memcg then it is
> > subject to a limit and that implies isolation. What you are proposing
> > here is to allow escaping that restriction, unless I am missing
> > something. Just consider the following setup
> >
> >          root (total memory = 2G)
> >          /   \
> >    (1G) A     B (1G)
> >             /   \
> >     (500M) C     D (500M)
> >
> > all of them used up close to the limit, and a process inside D requests
> > shrinking to 250M. Unless I am misunderstanding, this implementation
> > will shrink D, B and root to 250M (which means reclaiming C and A as
> > well) and then globally if that was not sufficient. So you have allowed
> > D to "allocate" 1.75G of memory effectively, right?
>
> It does not shrink 'size' worth of memory - it reclaims only while
> usage + size > limit.
> So, after reclaiming 250M in D, all other levels will have 250M free.

Could you define the exact semantics? Ideally something for the manual
page, please?
-- 
Michal Hocko
SUSE Labs