From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 6/6] mm/gup: migrate pinned pages out of movable zone
From: Pavel Tatashin
Date: Thu, 3 Dec 2020 11:40:15 -0500
To: Jason Gunthorpe
Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, Michal Hocko, David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin, Tyler Hicks, Joonsoo Kim, mike.kravetz@oracle.com, Steven Rostedt, Ingo Molnar, Peter Zijlstra, Mel Gorman, Matthew Wilcox, David Rientjes, John Hubbard
In-Reply-To: <20201203141729.GS5487@ziepe.ca>
References: <20201202052330.474592-1-pasha.tatashin@soleen.com> <20201202052330.474592-7-pasha.tatashin@soleen.com> <20201202163507.GL5487@ziepe.ca> <20201203010809.GQ5487@ziepe.ca> <20201203141729.GS5487@ziepe.ca>
List-ID: linux-kernel@vger.kernel.org

> Looking at this code some more.. How is it even correct?
>
> 1633         if (!isolate_lru_page(head)) {
> 1634                 list_add_tail(&head->lru, &cma_page_list);
>
> Here we are only running under the read side of the mmap sem so multiple
> GUPs can be calling that sequence in parallel. I don't see any
> obvious exclusion that will prevent corruption of head->lru. The first
> GUP thread to do isolate_lru_page() will ClearPageLRU() and the second
> GUP thread will be a NOP for isolate_lru_page().
>
> They will both race list_add_tail and other list ops. That is not OK.

Good question. I studied it, and I do not see how this is OK either.
Worse, this race is also reachable via a syscall, not only via a driver:
two move_pages() calls running simultaneously (and perhaps from other
places as well):

move_pages()
  kernel_move_pages()
    mmget()
    do_pages_move()
      add_page_for_migration()
        mmap_read_lock(mm);
        list_add_tail(&head->lru, pagelist); <- Not protected

> > What I meant is the users of the interface do it incrementally, not in
> > large chunks. For example:
> >
> > vfio_pin_pages_remote
> >   vaddr_get_pfn
> >     ret = pin_user_pages_remote(mm, vaddr, 1, flags |
> >                                 FOLL_LONGTERM, page, NULL, NULL);
> >     1 -> pins only one page at a time
>
> I don't know why vfio does this; it is why it is so ridiculously slow, at
> least.

Agreed.
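The head->lru corruption is easy to see with a toy model of the kernel's struct list_head. This is a sketch in Python, not kernel code: it deterministically replays what the two racing threads would do, each adding the same page's lru node to its own local list (names like cma_list_a are illustrative only).

```python
# Toy model (not kernel code) of struct list_head and list_add_tail(),
# showing why two threads must not both queue the same page->lru.

class ListHead:
    def __init__(self):
        self.prev = self  # an empty circular list points at itself
        self.next = self

def list_add_tail(node, head):
    # Insert node just before head, i.e. at the tail of the list.
    last = head.prev
    last.next = node
    node.prev = last
    node.next = head
    head.prev = node

def walk(head, bound=10):
    # Collect nodes reachable from head; bound guards against cycles.
    out, cur = [], head.next
    while cur is not head and len(out) < bound:
        out.append(cur)
        cur = cur.next
    return out

page_lru = ListHead()    # stands in for one page's head->lru
cma_list_a = ListHead()  # first thread's local page list
cma_list_b = ListHead()  # second thread's local page list

list_add_tail(page_lru, cma_list_a)  # thread A queues the page
list_add_tail(page_lru, cma_list_b)  # thread B does too: corruption

# page_lru.next now points into list B, so a walk of list A never
# gets back to its own head and hits the cycle bound.
print(len(walk(cma_list_a)))               # -> 10 (a sane one-page list gives 1)
print(cma_list_a.next.prev is cma_list_a)  # -> False: back-pointer broken
```

In the real kernel the second list_add_tail only has to happen concurrently with the first for the same kind of pointer scrambling to occur, which is exactly the exclusion that is missing here.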
> > Jason