From: Vitaly Kuznetsov
To: Michael Kelley, "m.maya.nakamura"
Cc: KY Srinivasan, Haiyang Zhang, Stephen Hemminger, sashal@kernel.org,
  x86@kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: RE: [PATCH 2/6] x86: hv: hv_init.c: Replace alloc_page() with kmem_cache_alloc()
Date: Fri, 10 May 2019 13:45:41 -0400
Message-ID: <8736lmqk3e.fsf@vitty.brq.redhat.com>
References: <87wok8it8p.fsf@vitty.brq.redhat.com>
 <20190412072401.GA69620@maya190131.isni1t2eisqetojrdim5hhf1se.xx.internal.cloudapp.net>
 <87mukvfynk.fsf@vitty.brq.redhat.com>
 <20190508064559.GA54416@maya190131.isni1t2eisqetojrdim5hhf1se.xx.internal.cloudapp.net>
 <87mujxro70.fsf@vitty.brq.redhat.com>
 <87r296qwbk.fsf@vitty.brq.redhat.com>

Michael Kelley writes:

> From: Vitaly Kuznetsov Sent: Friday, May 10, 2019 6:22 AM
>> >>
>> >> I think we can consider these allocations being DMA-like (because
>> >> Hypervisor accesses this memory too) so you can probably take a look at
>> >> dma_pool_create()/dma_pool_alloc() and friends.
>> >>
>> >
>> > I've taken a look at dma_pool_create(), and it takes a "struct device"
>> > argument with which the DMA pool will be associated. That probably
>> > makes DMA pools a bad choice for this usage. Pages need to be allocated
>> > pretty early during boot for Hyper-V communication, and even if the
>> > device subsystem is initialized early enough to create a fake device,
>> > such a dependency seems rather dubious.
>>
>> We can probably use dma_pool_create()/dma_pool_alloc() from the vmbus
>> module, but these 'early' allocations may indeed not have a device to
>> bind to.
>>
>> >
>> > kmem_cache_create/alloc() seems like the only choice to get
>> > guaranteed alignment. Do you see any actual problem with
>> > using kmem_cache_*, other than the naming? It seems like these
>> > kmem_cache_* functions really just act as a sub-allocator for
>> > known-size allocations, and "cache" is a common usage
>> > pattern, but not necessarily the only usage pattern.
>>
>> Yes, it's basically the name - it makes it harder to read the code, and
>> some future refactoring of kmem_cache_* may not take our use case into
>> account (as we're misusing the API). We can try renaming it to something
>> generic of course and see what the -mm people have to say :-)
>>
>
> This makes me think of creating Hyper-V specific alloc/free functions
> that wrap whatever the backend allocator actually is. So we have
> hv_alloc_hyperv_page() and hv_free_hyperv_page(). That makes the
> code very readable and the intent is super clear.
>
> As for the backend allocator, an alternative is to write our own simple
> allocator. It maintains a single free list. If hv_alloc_hyperv_page() is
> called and the free list is empty, do alloc_page() and break the page up
> into Hyper-V sized pages to replenish the free list. (On x86, these end
> up being 1-for-1 operations.) hv_free_hyperv_page() just puts the Hyper-V
> page back on the free list. Don't worry about trying to combine pages and
> do free_page(), since there's very little freeing done anyway. And I'm
> assuming GFP_KERNEL is all we need.
>
> If in the future Linux provides an alternate general-purpose allocator
> that guarantees alignment, we can ditch the simple allocator and use
> the new mechanism with some simple code changes in one place.
>
> Thoughts?
>

+1 for adding wrappers, and if the allocator turns out to be more-or-less
trivial I think we can live with that for the time being.

--
Vitaly
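
[Editor's note: for illustration, the free-list scheme Michael describes
might look roughly like the sketch below. This is untested and not code
from the thread: HV_HYP_PAGE_SIZE, struct hv_page_node, and the spinlock
are assumed names, and __get_free_page() is used instead of alloc_page()
so no page_address() conversion is needed. The GFP_KERNEL allocation is
done outside the lock so that it may sleep safely.]

/*
 * Illustrative sketch only -- not the implementation discussed above.
 * HV_HYP_PAGE_SIZE and struct hv_page_node are assumptions.
 */
#include <linux/gfp.h>
#include <linux/spinlock.h>

#define HV_HYP_PAGE_SIZE 4096	/* assumed Hyper-V page size */

struct hv_page_node {
	struct hv_page_node *next;
};

static struct hv_page_node *hv_free_list;
static DEFINE_SPINLOCK(hv_free_list_lock);

void *hv_alloc_hyperv_page(void)
{
	struct hv_page_node *node;
	unsigned long addr, flags;
	int i;

	/* Fast path: pop a Hyper-V page off the free list. */
	spin_lock_irqsave(&hv_free_list_lock, flags);
	node = hv_free_list;
	if (node)
		hv_free_list = node->next;
	spin_unlock_irqrestore(&hv_free_list_lock, flags);
	if (node)
		return node;

	/*
	 * Free list is empty: allocate a guest page outside the lock
	 * (GFP_KERNEL may sleep) and carve it into Hyper-V sized pages.
	 * On x86 the loop below is a no-op, so this is 1-for-1.
	 */
	addr = __get_free_page(GFP_KERNEL);
	if (!addr)
		return NULL;

	spin_lock_irqsave(&hv_free_list_lock, flags);
	for (i = 1; i < PAGE_SIZE / HV_HYP_PAGE_SIZE; i++) {
		node = (struct hv_page_node *)(addr + i * HV_HYP_PAGE_SIZE);
		node->next = hv_free_list;
		hv_free_list = node;
	}
	spin_unlock_irqrestore(&hv_free_list_lock, flags);

	/* The first Hyper-V page of the new guest page goes to the caller. */
	return (void *)addr;
}

void hv_free_hyperv_page(void *page)
{
	struct hv_page_node *node = page;
	unsigned long flags;

	/* No attempt to recombine into guest pages; just push it back. */
	spin_lock_irqsave(&hv_free_list_lock, flags);
	node->next = hv_free_list;
	hv_free_list = node;
	spin_unlock_irqrestore(&hv_free_list_lock, flags);
}

[Since PAGE_SIZE equals the assumed HV_HYP_PAGE_SIZE on x86, allocation
and freeing degenerate to the 1-for-1 behavior noted in the thread; the
free list only grows on architectures where a guest page holds several
Hyper-V pages.]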