Subject: Re: [RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP spilt issue
From: David Hildenbrand
To: "Michael S. Tsirkin"
References: <20200326031817-mutt-send-email-mst@kernel.org>
 <20200326054554-mutt-send-email-mst@kernel.org>
 <20200331091718-mutt-send-email-mst@kernel.org>
 <02a393ce-c4b4-ede9-7671-76fa4c19097a@redhat.com>
 <20200331093300-mutt-send-email-mst@kernel.org>
 <20200331100359-mutt-send-email-mst@kernel.org>
 <85f699d4-459a-a319-0a8f-96c87d345c49@redhat.com>
 <20200331101117-mutt-send-email-mst@kernel.org>
 <118bc13b-76b2-f5a1-6aca-65bd10a22f6c@redhat.com>
Organization: Red Hat GmbH
Message-ID: <00dc8bad-05e5-6085-525c-ce9fded672cc@redhat.com>
Date: Tue, 31 Mar 2020 16:34:48 +0200
In-Reply-To: <118bc13b-76b2-f5a1-6aca-65bd10a22f6c@redhat.com>
Cc: pagupta@redhat.com, Alexander Duyck, qemu-devel@nongnu.org,
 mojha@codeaurora.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, namit@vmware.com, Hui Zhu,
 akpm@linux-foundation.org, jasowang@redhat.com, Hui Zhu

On 31.03.20 16:29, David Hildenbrand wrote:
> On 31.03.20 16:18, Michael S. Tsirkin wrote:
>> On Tue, Mar 31, 2020 at 04:09:59PM +0200, David Hildenbrand wrote:
>>
>> ...
>>
>>>>>>>>>>>>>> So if we want to address this, IMHO this calls for a new API.
>>>>>>>>>>>>>> Along the lines of
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> struct page *alloc_page_range(gfp_t gfp, unsigned int min_order,
>>>>>>>>>>>>>>                               unsigned int max_order, unsigned int *order)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> the idea would then be to return a number of pages in the given
>>>>>>>>>>>>>> range.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What do you think? Want to try implementing that?
>>
>> ..
>>
>>> I expect the whole "steal huge pages from your guest" to be problematic,
>>> as I already mentioned to Alex. This needs a performance evaluation.
>>>
>>> This all smells like a lot of workload-dependent fine-tuning. :)
>>
>>
>> So that's why I proposed the API above.
>>
>> The idea is that *if we are allocating a huge page anyway*,
>> rather than break it up let's send it whole to the device.
>> If we have smaller pages, return smaller pages.
>>
>
> Sorry, I still fail to see why you cannot do that with my version of
> balloon_pages_alloc(). But maybe I haven't understood the magic you
> expect to happen in alloc_page_range() :)
>
> It's just going via a different inflate queue once we have that page, as
> I stated in front of my draft patch "but with an
> optimized reporting interface".
>
>> That seems like it would always be an improvement, whatever the
>> workload.
>>
>
> Don't think so. Assume there are plenty of 4k pages lying around. It
> might actually be *bad* for guest performance if you take a huge page
> instead of all the leftover 4k pages that cannot be merged. Only at the
> point where you would want to break a bigger page up and report it in
> pieces would it definitely make no difference.

I just understood what you mean :) and now it makes sense - it avoids
exactly that.

Basically:

1. Try to allocate order-0. No split necessary? Return the page.
2. Try to allocate order-1. No split necessary? Return the page.
...
up to MAX_ORDER - 1.

Yeah, I guess this will need a new kernel API.

-- 
Thanks,

David / dhildenb
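For illustration, a rough sketch of the order walk described above - not
mainline code and not the actual patch: try each order from min_order up to
max_order and hand back the first block that is free at exactly that order,
so that no larger page has to be split. The order_has_free_block() helper is
hypothetical, standing in for an allocator-internal check of
zone->free_area[order].nr_free that a real implementation in mm/page_alloc.c
would do under the zone lock; the extra gfp flags are likewise only
illustrative.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical helper: can a request at this order be satisfied without
 * splitting a larger buddy block? A real implementation would inspect the
 * zone free lists under the zone lock.
 */
static bool order_has_free_block(unsigned int order);

struct page *alloc_page_range(gfp_t gfp, unsigned int min_order,
			      unsigned int max_order, unsigned int *order)
{
	unsigned int o;

	for (o = min_order; o <= max_order && o < MAX_ORDER; o++) {
		struct page *page;

		/* Skip orders that would force the buddy allocator to
		 * split a larger block - avoiding that split is the whole
		 * point for the balloon use case.
		 */
		if (!order_has_free_block(o))
			continue;

		page = alloc_pages(gfp | __GFP_NORETRY | __GFP_NOWARN, o);
		if (!page)
			continue;

		*order = o;
		return page;
	}

	/* Nothing in [min_order, max_order] was free without a split; the
	 * fallback policy (e.g. splitting at min_order anyway) is left
	 * open here, as it was in the discussion.
	 */
	return NULL;
}

With such an API, virtio-balloon could ask for min_order = 0 and
max_order = MAX_ORDER - 1 and report whatever order comes back via the
optimized reporting interface mentioned above, without forcing a THP split
just to inflate.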