Date: Fri, 5 Feb 2021 16:53:04 +0100
From: Daniel Vetter
To: Jason Gunthorpe
Cc: Daniel Vetter, John Hubbard, Alex Deucher, Leon Romanovsky,
        linux-rdma, Mailing list - DRI developers, Doug Ledford,
        Daniel Vetter, Christian Koenig, Jianxin Xiong
Subject: Re: [PATCH v16 0/4] RDMA: Add dma-buf support
References: <1608067636-98073-1-git-send-email-jianxin.xiong@intel.com>
        <5e4ac17d-1654-9abc-9a14-bda223d62866@nvidia.com>
        <20210204182923.GL4247@nvidia.com>
        <8e731fce-95c1-4ace-d8bc-dc0df7432d22@nvidia.com>
        <20210205154319.GT4247@nvidia.com>
In-Reply-To: <20210205154319.GT4247@nvidia.com>
X-Mailing-List: linux-rdma@vger.kernel.org

On Fri, Feb 05, 2021 at 11:43:19AM -0400, Jason Gunthorpe wrote:
> On Fri, Feb 05, 2021 at 04:39:47PM +0100, Daniel Vetter wrote:
> > > > And again, for slightly older hardware, without pinning to VRAM there is
> > > > no way to use this solution here for peer-to-peer. So I'm glad to see
> > > > that so far you're not ruling out the pinning option.
> >
> > Since HMM and ZONE_DEVICE came up, I'm kinda tempted to make ZONE_DEVICE
> > ZONE_MOVABLE (at least if you don't have a pinned vram contingent in your
> > cgroups) or something like that, so we could benefit from the work to make
> > sure pin_user_pages and all these never end up in there?
>
> ZONE_DEVICE should already not be returned from GUP.
>
> I've understood in the HMM case the idea was a CPU touch of some
> ZONE_DEVICE pages would trigger a migration to CPU memory. GUP would
> want to follow the same logic; presumably it comes for free with the
> fault handler somehow.

Oh, I didn't know this. I thought the proposed p2p direct I/O patches would
just use the fact that underneath ZONE_DEVICE there are "normal" struct
pages, and so I got worried that pin_user_pages could creep in as well. But
I didn't read the patches in full detail:

https://lore.kernel.org/linux-block/20201106170036.18713-12-logang@deltatee.com/

But if you're saying that this all needs specific code, and that all the
gup/pup code we have is excluded, then I think we can make sure that we
never build features that require time-unlimited pinning of ZONE_DEVICE.
Which I think we want.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch