Date: Tue, 23 Mar 2021 13:37:10 +0100
From: Michal Hocko
To: Christian König
Cc: Matthew Wilcox, dri-devel, Linux MM, amd-gfx list, Dave Chinner, Leo Liu
Subject: Re: [PATCH] drm/ttm: stop warning on TT shrinker failure
In-Reply-To: <75ff80c5-a054-d13d-85c1-0040addb45d2@gmail.com>

On Tue 23-03-21 13:21:32, Christian König wrote:
> On 23.03.21 at 13:04, Michal Hocko wrote:
> > On Tue 23-03-21 12:48:58, Christian König wrote:
> > > On 23.03.21 at 12:28, Daniel Vetter wrote:
> > > > On Tue, Mar 23, 2021 at 08:38:33AM +0100, Michal Hocko wrote:
> > > > > On
> > > > > Mon 22-03-21 20:34:25, Christian König wrote:
> > [...]
> > > > > > My only concern is that if I could rely on memalloc_no* being used we could
> > > > > > optimize this quite a bit further.
> > > > > Yes, you can use the scope API and you will be guaranteed that _any_
> > > > > allocation from the enclosed context will inherit the GFP_NO* semantic.
> > > The question is whether this is also guaranteed the other way around.
> > >
> > > In other words, if somebody calls get_free_page(GFP_NOFS), are the context
> > > flags set as well?
> > The gfp mask is always restricted in the page allocator. So say you have
> > a noio scope context and call get_free_page/kmalloc(GFP_NOFS); then the
> > scope would restrict the allocation flags to GFP_NOIO (i.e. drop
> > __GFP_IO). For further details, have a look at current_gfp_context
> > and its callers.
> >
> > Does this answer your question?
>
> But what happens if you don't have a noio scope and somebody calls
> get_free_page(GFP_NOFS)?

Then this will be a regular NOFS request. Let me repeat: the scope API will
further restrict any requested allocation mode.

> Is the noio scope then added automatically? And is it possible that the
> shrinker gets called without a noio scope even though we would need it?

Here you have lost me again.

> > > > > I think this is where I don't get yet what Christian tries to do: we
> > > > > really shouldn't do different tricks and calling contexts between direct
> > > > > reclaim and kswapd reclaim. Otherwise very hard to track down bugs are
> > > > > pretty much guaranteed. So whether we use explicit gfp flags or the
> > > > > context APIs, the result is exactly the same.
> > > OK, let us recap what TTM's TT shrinker does here:
> > >
> > > 1. We have memory which is not swappable because it might be accessed by the
> > > GPU at any time.
> > > 2. Make sure the memory is not accessed by the GPU; drivers need to grab a
> > > lock before they can make it accessible again.
> > > 3.
> > > Allocate a shmem file and copy over the non-swappable pages.
> > This is quite tricky because the shrinker operates in the PF_MEMALLOC
> > context, so such an allocation would be allowed to completely deplete
> > memory unless you explicitly mark that context as __GFP_NOMEMALLOC.
>
> Thanks, exactly that was one thing I was absolutely not sure about. And yes,
> I agree that this is really tricky.
>
> Ideally I would like to be able to trigger swapping out the shmem page I
> allocated immediately after doing the copy.

So let me try to rephrase to make sure I understand: you would like to
swap out the existing content from the shrinker, and you use shmem as a
way to achieve that. Should the swapout happen at the time of copying
(shrinker context) or shortly afterwards? So effectively you want to call
pageout() on the shmem page after the copy?

> This way I would only need a single page for the whole shrink operation at
> any given time.

What do you mean by that? You want to share the same shmem page for
other copy+swapout cycles?
-- 
Michal Hocko
SUSE Labs