Date: Mon, 11 Mar 2019 23:50:47 -0400
From: "Michael S. Tsirkin"
To: Jason Wang
Cc: Andrea Arcangeli, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, peterx@redhat.com, linux-mm@kvack.org,
    Jerome Glisse
Subject: Re: [RFC PATCH V2 5/5] vhost: access vq metadata through kernel
    virtual address
Message-ID: <20190311234956-mutt-send-email-mst@kernel.org>

On Tue, Mar 12, 2019 at 10:52:15AM +0800, Jason Wang wrote:
>
> On 2019/3/11 8:48 PM, Michael S. Tsirkin wrote:
> > On Mon, Mar 11, 2019 at 03:40:31PM +0800, Jason Wang wrote:
> > > On 2019/3/9 3:48 AM, Andrea Arcangeli wrote:
> > > > Hello Jason,
> > > >
> > > > On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
> > > > > Just to make sure I understand here. For boosting through huge TLB,
> > > > > do you mean we can do that in the future (e.g. by mapping more
> > > > > userspace pages to kernel) or can it be done by this series (only
> > > > > about three 4K pages were vmapped per virtqueue)?
> > > > When I answered about the advantages of mmu notifier and I mentioned
> > > > guaranteed 2m/gigapages where available, I overlooked the detail that
> > > > you were using vmap instead of kmap. So with vmap you're actually
> > > > doing the opposite: it slows down the access because it will always
> > > > use a 4k TLB even if QEMU runs on THP or gigapages hugetlbfs.
> > > >
> > > > If there's just one page (or a few pages) in each vmap there's no
> > > > need for vmap; the linearity vmap provides doesn't pay off in such
> > > > a case.
> > > >
> > > > So likely there's further room for improvement here that you can
> > > > achieve in the current series by just dropping vmap/vunmap.
> > > >
> > > > You can just use kmap (or kmap_atomic if you're in a non-preemptible
> > > > section; it should work from bh/irq).
> > > >
> > > > In short, the mmu notifier invalidate only sets a "struct page *
> > > > userringpage" pointer to NULL, without any call to vunmap.
> > > >
> > > > In all cases, immediately after gup_fast returns you can always call
> > > > put_page (which explains why I'd like an option to drop FOLL_GET
> > > > from gup_fast to speed it up).
> > > >
> > > > Then you can check the sequence counter and the counter that is
> > > > increased/decreased by _start/_end. Those will tell you whether the
> > > > page you got -- even though you called put_page to immediately unpin
> > > > it, or even to free it -- is guaranteed not to go away under you
> > > > until the invalidate is called.
> > > >
> > > > If the counters tell you that gup_fast raced with an mmu notifier
> > > > invalidate, you can just repeat gup_fast. Otherwise you're done: the
> > > > page cannot go away under you, and the host virtual to host physical
> > > > mapping cannot change either. And the page is not pinned either. So
> > > > you can just set "struct page *userringpage = page", where "page"
> > > > is the one set up by gup_fast.
> > > >
> > > > When the invalidate later runs, you can just call set_page_dirty if
> > > > gup_fast was called with "write = 1", and then clear the pointer:
> > > > "userringpage = NULL".
> > > >
> > > > When you need to read/write the memory,
> > > > kmap/kmap_atomic(userringpage) should work.
> > > Yes, I've considered kmap() from the start. The reason I don't do that
> > > is that a large virtqueue may need more than one page, so the VA might
> > > not be contiguous. But this is probably not a big issue; it just needs
> > > more tricks in the vhost memory accessors.
> > >
> > >
> > > > In short, because there's no hardware involvement here, the
> > > > established mapping is just the pointer to the page; there is no
> > > > need to set up any pagetables or to do any TLB flushes (except on
> > > > 32bit archs if the page is above the direct mapping, but that never
> > > > happens on 64bit archs).
> > > I see. I believe we don't care much about the performance of 32bit
> > > archs (or we can just fall back to the copy_to_user() friends).
> > Using copyXuser is better I guess.
>
> Ok.
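To spell out the scheme Andrea describes, a rough sketch follows. This is
not actual vhost code: the names (userringpage, map_seq, invalidate_count,
map_lock) are invented for illustration, and the mmu notifier registration
and error handling are elided.

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(map_lock);
static unsigned long map_seq;		/* bumped in invalidate _start */
static int invalidate_count;		/* inc in _start, dec in _end */
static struct page *userringpage;	/* the "mapping": just a page pointer */

/*
 * Called from the mmu notifier invalidate_range_start path;
 * invalidate_range_end just decrements invalidate_count under map_lock.
 */
static void ring_invalidate_start(void)
{
	spin_lock(&map_lock);
	map_seq++;
	invalidate_count++;
	if (userringpage) {
		set_page_dirty(userringpage);	/* gup_fast used write = 1 */
		userringpage = NULL;
	}
	spin_unlock(&map_lock);
}

/* Establish the "mapping"; repeat gup_fast if we raced with an invalidate. */
static int ring_map(unsigned long uaddr)
{
	struct page *page;
	unsigned long seq;

again:
	spin_lock(&map_lock);
	seq = map_seq;
	spin_unlock(&map_lock);

	/* write = 1; the page is unpinned again right away */
	if (get_user_pages_fast(uaddr, 1, 1, &page) != 1)
		return -EFAULT;
	put_page(page);

	spin_lock(&map_lock);
	if (invalidate_count || map_seq != seq) {
		spin_unlock(&map_lock);
		goto again;		/* raced with an invalidate */
	}
	userringpage = page;		/* valid until the next invalidate */
	spin_unlock(&map_lock);
	return 0;
}

Access is then just kmap_atomic(userringpage)/kunmap_atomic() as usual (and
on 64bit archs kmap boils down to page_address(), so there is no mapping
cost at all).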
> > > Using direct mapping (I guess the kernel will always try hugepages
> > > for that?) should be better, and we can even use it for the data
> > > transfer, not only for the metadata.
> > >
> > > Thanks
> > We can't really. The big issue is get_user_pages: doing that on the
> > data path will be slower than copyXuser.
>
> I meant finding a way to avoid doing gup in the datapath. E.g. vhost
> maintains a range tree and adds or removes ranges through the MMU
> notifier. Then in the datapath, if we find the range, we use the direct
> mapping; otherwise copy_to_user().
>
> Thanks

We can try. But I'm not sure there's any reason to think there's any
locality there.

> > Or maybe it won't with the
> > amount of mitigations spread around. Go ahead and try.
> >
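For what it's worth, the datapath half of that idea might look roughly like
the sketch below -- the names (vhost_uaddr_range, vhost_ranges,
vhost_put_user) are invented, and the notifier-side insertion/removal of
ranges plus all the locking are left out:

#include <linux/kernel.h>
#include <linux/interval_tree.h>
#include <linux/uaccess.h>
#include <linux/string.h>

struct vhost_uaddr_range {
	struct interval_tree_node node;	/* [start, last] userspace VA range */
	void *kva;			/* kernel mapping for that range */
};

static struct rb_root_cached vhost_ranges = RB_ROOT_CACHED;

/*
 * Datapath write: direct mapping when a tracked range covers the access,
 * plain uaccess otherwise.
 */
static int vhost_put_user(void __user *uaddr, const void *from, size_t len)
{
	unsigned long start = (unsigned long)uaddr;
	struct interval_tree_node *it;

	it = interval_tree_iter_first(&vhost_ranges, start, start + len - 1);
	if (it && it->start <= start && start + len - 1 <= it->last) {
		struct vhost_uaddr_range *r =
			container_of(it, struct vhost_uaddr_range, node);

		memcpy(r->kva + (start - it->start), from, len);
		return 0;
	}
	return copy_to_user(uaddr, from, len) ? -EFAULT : 0;
}

Since the MMU notifier would remove a range before the underlying pages can
go away, a hit in the tree implies the kva is still valid -- the open
question is exactly the locality one above: how often the tree actually hits.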