From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH RFC v4 12/13] mm/vmscan: Export drop_slab() and drop_slab_node()
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat GmbH
To: Michal Hocko
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 virtio-dev@lists.oasis-open.org, virtualization@lists.linux-foundation.org,
 kvm@vger.kernel.org, Andrew Morton, "Michael S. Tsirkin"
References: <20191212171137.13872-1-david@redhat.com>
 <20191212171137.13872-13-david@redhat.com>
 <20200225145829.GW22443@dhcp22.suse.cz>
 <20200225170619.GC32720@dhcp22.suse.cz>
Message-ID: <9c21be0c-5eef-58c2-bfb9-ff787a5a2c08@redhat.com>
Date: Tue, 25 Feb 2020 18:23:23 +0100
In-Reply-To: <20200225170619.GC32720@dhcp22.suse.cz>

On 25.02.20 18:06, Michal Hocko wrote:
> On Tue 25-02-20 16:09:29, David Hildenbrand wrote:
>> On 25.02.20 15:58, Michal Hocko wrote:
>>> On Thu 12-12-19 18:11:36, David Hildenbrand wrote:
>>>> We already have a way to trigger reclaiming of all reclaimable slab
>>>> objects from user space (echo 2 > /proc/sys/vm/drop_caches). Let's
>>>> allow drivers to also trigger this when they really want to make
>>>> progress and know what they are doing.
>>>
>>> I cannot say I would be a fan of this. This is a global action with
>>> user-visible performance impact. I am worried that we will find out
>>> that all sorts of drivers have a very good idea that dropping slab
>>> caches is going to help their problem, whatever it is. We have seen
>>> the same pattern in userspace already, and that is the reason we are
>>> logging the usage and counting invocations in a counter.
>>
>> Yeah, I decided to hold back patches 11-13 for v1 (which I am planning
>> to post in March after more testing). What we really want is to make
>> memory offlining and alloc_contig_range() work better with reclaimable
>> objects.
>>
>>>
>>>> virtio-mem wants to use these functions when it has failed to unplug
>>>> memory for quite some time (e.g., after 30 minutes). It will then try
>>>> to free up reclaimable objects by dropping the slab caches every now
>>>> and then (e.g., every 30 minutes) as long as necessary. There will be
>>>> a way to disable this feature, and info messages will be logged.
>>>>
>>>> In the future, we want to have a drop_slab_range() functionality
>>>> instead. Memory offlining code has similar demands, and other
>>>> alloc_contig_range() users (e.g., gigantic pages) could also make
>>>> good use of this feature. Adding it, however, requires more
>>>> work/thought.
>>>
>>> We already have a memory_notify(MEM_GOING_OFFLINE) for that purpose,
>>> and the slab allocator implements a callback
>>> (slab_mem_going_offline_callback). The callback is quite dumb: it
>>> doesn't really try to free objects from the given memory range, or
>>> even try to drop active objects, which might turn out to be hard. But
>>> this sounds like a more robust way to achieve what you want.
>>
>> Two things:
>>
>> 1. memory_notify(MEM_GOING_OFFLINE) is called after trying to isolate
>> the page range and checking if we only have movable pages. Won't help
>> much, I guess.
>
> You are right, I had missed that. Can we reorder those two calls?

AFAIK no (I would have to look up the details, but there was a good
reason for the order, e.g., avoiding races with other users of page
isolation like alloc_contig_range()). Especially, "[PATCH RFC v4 06/13]
mm: Allow to offline unmovable PageOffline() pages via
MEM_GOING_OFFLINE" (which is still impatiently waiting for an ACK ;) )
also works around that ordering issue in a way we discussed back then.
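For reference, a minimal sketch of the retry scheme described above: a
driver that has failed to unplug memory for a long time periodically
drops the slab caches via the exported drop_slab() and tries again. This
is not taken from the virtio-mem patches; the work item, function, and
interval names are made up, and it assumes drop_slab() becomes visible
to drivers via <linux/mm.h>:

#include <linux/mm.h>           /* drop_slab(); assumed to be declared here */
#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* e.g., retry every 30 minutes, as in the patch description */
#define UNPLUG_RETRY_INTERVAL   (30 * 60 * HZ)

static void unplug_retry_func(struct work_struct *work);
static DECLARE_DELAYED_WORK(unplug_retry_work, unplug_retry_func);

static void unplug_retry_func(struct work_struct *work)
{
        /* Try to free up reclaimable objects before retrying the unplug. */
        drop_slab();

        /* ... retry unplugging; if it fails again, try again later ... */
        schedule_delayed_work(&unplug_retry_work, UNPLUG_RETRY_INTERVAL);
}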
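The drop_slab_range() functionality mentioned above could look something
like the following; this is purely hypothetical, no such interface
exists at this point and the details still need thought:

/*
 * Hypothetical: reclaim only slab objects backed by pages in
 * [start_pfn, start_pfn + nr_pages), instead of dropping everything.
 */
void drop_slab_range(unsigned long start_pfn, unsigned long nr_pages);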
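And for context, the MEM_GOING_OFFLINE mechanism referred to above:
subsystems register a memory hotplug notifier and are told about a
range that is about to go offline; the slab callback is wired into the
same mechanism. A minimal sketch, with the callback and notifier names
made up:

#include <linux/memory.h>
#include <linux/notifier.h>

static int my_mem_callback(struct notifier_block *nb, unsigned long action,
                           void *arg)
{
        struct memory_notify *mn = arg;

        if (action == MEM_GOING_OFFLINE) {
                /* mn->start_pfn / mn->nr_pages describe the range. */
        }
        return NOTIFY_OK;
}

static struct notifier_block my_mem_nb = {
        .notifier_call = my_mem_callback,
};

/* registered during init: register_memory_notifier(&my_mem_nb); */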
--
Thanks,

David / dhildenb