From: Sumit Garg
Date: Thu, 6 Oct 2022 11:53:55 +0530
Subject: Re: [PATCH 2/4] tee: Remove vmalloc page support
To: Phil Chang (張世勳)
Cc: "ira.weiny@intel.com", Jens Wiklander, Andrew Morton, Al Viro,
    "Fabio M. De Francesco", Christoph Hellwig, Linus Torvalds,
    "op-tee@lists.trustedfirmware.org", "linux-kernel@vger.kernel.org",
    "linux-mm@kvack.org"

Hi Phil,

Please don't top-post on the OSS mailing lists.

On Wed, 5 Oct 2022 at 08:59, Phil Chang (張世勳) wrote:
>
> Hi Sumit
>
> Thanks for mentioning that. In fact, our product is a low-memory device,
> and contiguous pages are extremely valuable.
> Although our driver is not upstream yet, it depends heavily on the tee
> shm vmalloc support.

Sorry, but you need to get your driver into mainline in order for the
vmalloc interface to be supported. Otherwise it is a maintenance
nightmare to support interfaces in the mainline for out-of-tree drivers.

-Sumit

> In some scenarios the driver allocates high-order pages, but system
> memory is fragmented, so the allocation fails.
> In this situation, vmalloc support is important and gives the user
> flexibility.
>
>
> -----Original Message-----
> From: Sumit Garg
> Sent: Monday, October 3, 2022 2:57 PM
> To: ira.weiny@intel.com
> Cc: Jens Wiklander; Andrew Morton; Al Viro; Fabio M. De Francesco;
>     Christoph Hellwig; Linus Torvalds <torvalds@linux-foundation.org>;
>     op-tee@lists.trustedfirmware.org; linux-kernel@vger.kernel.org;
>     linux-mm@kvack.org; Phil Chang (張世勳)
> Subject: Re: [PATCH 2/4] tee: Remove vmalloc page support
>
> + Phil
>
> Hi Ira,
>
> On Sun, 2 Oct 2022 at 05:53, <ira.weiny@intel.com> wrote:
> >
> > From: Ira Weiny
> >
> > The kernel pages used by shm_get_kernel_pages() are allocated using
> > GFP_KERNEL through the following call stack:
> >
> > trusted_instantiate()
> >   trusted_payload_alloc() -> GFP_KERNEL
> >
> > <...>
> >   tee_shm_register_kernel_buf()
> >     register_shm_helper()
> >       shm_get_kernel_pages()
> >
> > Where <...> is one of:
> >
> >   trusted_key_unseal()
> >   trusted_key_get_random()
> >   trusted_key_seal()
> >
> > Remove the vmalloc page support from shm_get_kernel_pages(). Replace
> > it with a WARN_ON_ONCE().
> >
> > Cc: Jens Wiklander
> > Cc: Al Viro
> > Cc: "Fabio M. De Francesco"
> > Cc: Christoph Hellwig
> > Cc: Linus Torvalds
> > Signed-off-by: Ira Weiny
> >
> > ---
> > Jens, I went with the suggestion from Linus and Christoph and rejected
> > vmalloc addresses. I did not hear back from you regarding Linus'
> > question of whether the vmalloc page support was required by an
> > upcoming patch set or not, so I assumed it was something out of tree.
>
> It looks like I wasn't CC'd on that conversation. IIRC, support for
> vmalloc addresses was added recently by Phil here [1]. So I would like
> to give him a chance if he is planning to post a corresponding kernel
> driver upstream.
>
> [1] https://lists.trustedfirmware.org/archives/list/op-tee@lists.trustedfirmware.org/thread/M7HI3P2M66V27SK35CGQRICZ7DJZ5J2W/
>
> -Sumit
>
> > ---
> >  drivers/tee/tee_shm.c | 36 ++++++++++++------------------------
> >  1 file changed, 12 insertions(+), 24 deletions(-)
> >
> > diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
> > index 27295bda3e0b..527a6eabc03e 100644
> > --- a/drivers/tee/tee_shm.c
> > +++ b/drivers/tee/tee_shm.c
> > @@ -24,37 +24,25 @@ static void shm_put_kernel_pages(struct page **pages, size_t page_count)
> >  static int shm_get_kernel_pages(unsigned long start, size_t page_count,
> >  				struct page **pages)
> >  {
> > +	struct kvec *kiov;
> >  	size_t n;
> >  	int rc;
> >
> > -	if (is_vmalloc_addr((void *)start)) {
> > -		struct page *page;
> > -
> > -		for (n = 0; n < page_count; n++) {
> > -			page = vmalloc_to_page((void *)(start + PAGE_SIZE * n));
> > -			if (!page)
> > -				return -ENOMEM;
> > -
> > -			get_page(page);
> > -			pages[n] = page;
> > -		}
> > -		rc = page_count;
> > -	} else {
> > -		struct kvec *kiov;
> > -
> > -		kiov = kcalloc(page_count, sizeof(*kiov), GFP_KERNEL);
> > -		if (!kiov)
> > -			return -ENOMEM;
> > +	if (WARN_ON_ONCE(is_vmalloc_addr((void *)start)))
> > +		return -EINVAL;
> >
> > -		for (n = 0; n < page_count; n++) {
> > -			kiov[n].iov_base = (void *)(start + n * PAGE_SIZE);
> > -			kiov[n].iov_len = PAGE_SIZE;
> > -		}
> > +	kiov = kcalloc(page_count, sizeof(*kiov), GFP_KERNEL);
> > +	if (!kiov)
> > +		return -ENOMEM;
> >
> > -		rc = get_kernel_pages(kiov, page_count, 0, pages);
> > -		kfree(kiov);
> > +	for (n = 0; n < page_count; n++) {
> > +		kiov[n].iov_base = (void *)(start + n * PAGE_SIZE);
> > +		kiov[n].iov_len = PAGE_SIZE;
> >  	}
> >
> > +	rc = get_kernel_pages(kiov, page_count, 0, pages);
> > +	kfree(kiov);
> > +
> >  	return rc;
> >  }
> >
> > --
> > 2.37.2
> >
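
For reference, a minimal caller-side sketch of what remains supported
after this change. This is not from the patch: it assumes a struct
tee_context *ctx obtained elsewhere (e.g. via tee_client_open_context()),
and the function name example_register_buf() is made up.

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/tee_drv.h>

/* Hypothetical illustration only, not part of this patch. */
static int example_register_buf(struct tee_context *ctx)
{
	struct tee_shm *shm;
	void *buf;
	int rc = 0;

	/*
	 * Physically contiguous GFP_KERNEL allocation, as the trusted
	 * keys path does via trusted_payload_alloc().
	 */
	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/*
	 * A vmalloc()'d buffer here would now trip the WARN_ON_ONCE()
	 * in shm_get_kernel_pages() and fail with -EINVAL.
	 */
	shm = tee_shm_register_kernel_buf(ctx, buf, PAGE_SIZE);
	if (IS_ERR(shm)) {
		rc = PTR_ERR(shm);
		goto out_free;
	}

	/* ... use shm for a TEE invocation ... */

	tee_shm_free(shm);
out_free:
	kfree(buf);
	return rc;
}

This is the pattern Phil's fragmentation concern revolves around: with
this change, only linear-map (physically contiguous) buffers can be
registered as kernel shm, so a driver that cannot obtain high-order
pages has no vmalloc fallback unless that support is reintroduced along
with an in-tree user.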