From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavel Tatashin
Date: Fri, 4 Dec 2020 15:16:54 -0500
Subject: Re: [PATCH 6/6] mm/gup: migrate pinned pages out of movable zone
To: Daniel Jordan
Cc: Jason Gunthorpe, Alex Williamson, LKML, linux-mm, Andrew Morton,
	Vlastimil Babka, Michal Hocko, David Hildenbrand, Oscar Salvador,
	Dan Williams, Sasha Levin, Tyler Hicks, Joonsoo Kim,
	mike.kravetz@oracle.com, Steven Rostedt, Ingo Molnar,
	Peter Zijlstra, Mel Gorman, Matthew Wilcox, David Rientjes,
	John Hubbard
In-Reply-To: <87360lnxph.fsf@oracle.com>
References: <20201202052330.474592-1-pasha.tatashin@soleen.com>
	<20201202052330.474592-7-pasha.tatashin@soleen.com>
	<20201202163507.GL5487@ziepe.ca> <20201203010809.GQ5487@ziepe.ca>
	<20201203141729.GS5487@ziepe.ca> <87360lnxph.fsf@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Dec 4, 2020 at 3:06 PM Daniel Jordan wrote:
>
> Jason Gunthorpe writes:
>
> > On Wed, Dec 02, 2020 at 08:34:32PM -0500, Pavel Tatashin wrote:
> >> What I meant is the users of the interface do it incrementally not in
> >> large chunks. For example:
> >>
> >> vfio_pin_pages_remote
> >>     vaddr_get_pfn
> >>         ret = pin_user_pages_remote(mm, vaddr, 1, flags |
> >> FOLL_LONGTERM, page, NULL, NULL);
> >> 1 -> pin only one page at a time
> >
> > I don't know why vfio does this, it is why it is so ridiculously slow
> > at least.
>
> Well Alex can correct me, but I went digging and a comment from the
> first type1 vfio commit says the iommu API didn't promise to unmap
> subpages of previous mappings, so doing a page at a time gave
> flexibility at the cost of inefficiency.
>
> Then 166fd7d94afd allowed the iommu to use larger pages in vfio, but
> vfio kept pinning a page at a time. I couldn't find an explanation for
> why that stayed the same.
>
> Yesterday I tried optimizing vfio to skip gup calls for tail pages
> after Matthew pointed out this same issue to me by coincidence last
> week. Currently debugging, but if there's a fundamental reason this
> won't work on the vfio side, it'd be nice to know.

Hi Daniel,

I do not think there are any fundamental reasons why it won't work.

I have also been thinking about increasing VFIO chunking for a
different reason: if a client touches pages before doing a VFIO DMA
map, those pages might be huge, and pinning one small page at a time
and migrating one small page at a time can break up the huge pages. So
it is not only inefficient to pin this way, it can also inadvertently
slow down the runtime.
Thank you,
Pasha