From: Kaiyang Zhao <zhao776@purdue.edu>
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Kaiyang Zhao <zhao776@purdue.edu>
Subject: [PATCH] Shared page tables during fork
Date: Thu, 1 Jul 2021 09:46:18 -0400
Message-Id: <20210701134618.18376-1-zhao776@purdue.edu>

In our research work [https://dl.acm.org/doi/10.1145/3447786.3456258], we have
identified a method that can significantly speed up the fork system call for
large applications (i.e., a few hundred MBs of allocated memory and larger).
Currently, the time the fork system call takes to complete is proportional to
the size of the allocated memory of a process; our design speeds up fork
invocation by up to 270x at 50GB in our experiments.

The design is that, instead of copying the entire paging tree during the fork
invocation, we make the child and the parent process share the same set of
last-level page tables, which are reference counted. To preserve the
copy-on-write semantics, we disable the write permission in PMD entries in
fork, and copy PTE tables as needed in the page fault handler.

We tested a prototype with large workloads that call fork to take snapshots,
such as fuzzers (e.g., AFL), and it yielded over 2x the execution throughput
for AFL. The patch is a prototype for x86 only, does not support huge pages
or swapping, and is meant to demonstrate the potential performance gains for
fork. Applications can opt in via a use_odf switch in procfs.
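As a rough illustration of the intended usage, here is a minimal userspace
sketch (an editorial illustration, not part of the patch) that opts a process
in through the per-process use_odf procfs file added below and times a single
fork of a process with a large heap; the path and the "1"/"0" values are the
only interface assumed from this patch:

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/time.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		/* Touch a large heap so page-table copying dominates fork time. */
		size_t size = 1UL << 30;	/* 1 GB; the experiments above go up to 50 GB */
		char *buf = malloc(size);
		if (!buf)
			return 1;
		memset(buf, 1, size);

		/* Opt in to on-demand fork (ODF); requires a kernel with this patch. */
		int fd = open("/proc/self/use_odf", O_WRONLY);
		if (fd < 0 || write(fd, "1", 1) != 1)
			perror("use_odf");
		if (fd >= 0)
			close(fd);

		struct timeval t0, t1;
		gettimeofday(&t0, NULL);
		pid_t pid = fork();	/* child shares last-level page tables with the parent */
		gettimeofday(&t1, NULL);

		if (pid == 0) {
			buf[0] = 2;	/* first write triggers the on-demand PTE-table copy */
			_exit(0);
		}
		waitpid(pid, NULL, 0);
		printf("fork took %ld us\n",
		       (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
		free(buf);
		return 0;
	}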
On a side note, an approach that shares page tables was proposed by Dave
McCracken [http://lkml.iu.edu/hypermail/linux/kernel/0508.3/1623.html,
https://www.kernel.org/doc/ols/2006/ols2006v2-pages-125-130.pdf], but it never
made it into the kernel. We believe that with the increasing memory
consumption of modern applications and modern use cases of fork such as
snapshotting, the shared page table approach in the context of fork is worth
exploring.

Please let us know your level of interest in this or comments on the general
design. Thank you.

Signed-off-by: Kaiyang Zhao <zhao776@purdue.edu>
---
 arch/x86/include/asm/pgtable.h |  19 +-
 fs/proc/base.c                 |  74 ++++++
 include/linux/mm.h             |  11 +
 include/linux/mm_types.h       |   2 +
 include/linux/pgtable.h        |  11 +
 include/linux/sched/coredump.h |   5 +-
 kernel/fork.c                  |   7 +-
 mm/gup.c                       |  61 ++++-
 mm/memory.c                    | 401 +++++++++++++++++++++++++++++++--
 mm/mmap.c                      |  91 +++++++-
 mm/mprotect.c                  |   6 +
 11 files changed, 668 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b6c97b8f59ec..0fda05a5c7a1 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -410,6 +410,16 @@ static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
 	return native_make_pmd(v & ~clear);
 }
 
+static inline pmd_t pmd_mknonpresent(pmd_t pmd)
+{
+	return pmd_clear_flags(pmd, _PAGE_PRESENT);
+}
+
+static inline pmd_t pmd_mkpresent(pmd_t pmd)
+{
+	return pmd_set_flags(pmd, _PAGE_PRESENT);
+}
+
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pmd_uffd_wp(pmd_t pmd)
 {
@@ -798,6 +808,11 @@ static inline int pmd_present(pmd_t pmd)
 	return pmd_flags(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE);
 }
 
+static inline int pmd_iswrite(pmd_t pmd)
+{
+	return pmd_flags(pmd) & (_PAGE_RW);
+}
+
 #ifdef CONFIG_NUMA_BALANCING
 /*
  * These work without NUMA balancing but the kernel does not care. See the
@@ -833,7 +848,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
  * Currently stuck as a macro due to indirect forward reference to
  * linux/mmzone.h's __section_mem_map_addr() definition:
  */
-#define pmd_page(pmd)	pfn_to_page(pmd_pfn(pmd))
+#define pmd_page(pmd)	pfn_to_page(pmd_pfn(pmd_mkpresent(pmd)))
 
 /*
  * Conversion functions: convert a page and protection to a page entry,
@@ -846,7 +861,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 static inline int pmd_bad(pmd_t pmd)
 {
-	return (pmd_flags(pmd) & ~_PAGE_USER) != _KERNPG_TABLE;
+	return ((pmd_flags(pmd) & ~(_PAGE_USER)) | (_PAGE_RW | _PAGE_PRESENT)) != _KERNPG_TABLE;
 }
 
 static inline unsigned long pages_to_mb(unsigned long npg)
diff --git a/fs/proc/base.c b/fs/proc/base.c
index e5b5f7709d48..936f33594539 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -2935,6 +2935,79 @@ static const struct file_operations proc_coredump_filter_operations = {
 };
 #endif
 
+static ssize_t proc_use_odf_read(struct file *file, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	struct task_struct *task = get_proc_task(file_inode(file));
+	struct mm_struct *mm;
+	char buffer[PROC_NUMBUF];
+	size_t len;
+	int ret;
+
+	if (!task)
+		return -ESRCH;
+
+	ret = 0;
+	mm = get_task_mm(task);
+	if (mm) {
+		len = snprintf(buffer, sizeof(buffer), "%lu\n",
+			       ((mm->flags & MMF_USE_ODF_MASK) >> MMF_USE_ODF));
+		mmput(mm);
+		ret = simple_read_from_buffer(buf, count, ppos, buffer, len);
+	}
+
+	put_task_struct(task);
+
+	return ret;
+}
+
+static ssize_t proc_use_odf_write(struct file *file,
+				  const char __user *buf,
+				  size_t count,
+				  loff_t *ppos)
+{
+	struct task_struct *task;
+	struct mm_struct *mm;
+	unsigned int val;
+	int ret;
+
+	ret = kstrtouint_from_user(buf, count, 0, &val);
+	if (ret < 0)
+		return ret;
+
+	ret = -ESRCH;
+	task = get_proc_task(file_inode(file));
+	if (!task)
+		goto out_no_task;
+
+	mm = get_task_mm(task);
+	if (!mm)
+		goto out_no_mm;
+	ret = 0;
+
+	if (val == 1) {
+		set_bit(MMF_USE_ODF, &mm->flags);
+	} else if (val == 0) {
+		clear_bit(MMF_USE_ODF, &mm->flags);
+	} else {
+		//ignore
+	}
+
+	mmput(mm);
+out_no_mm:
+	put_task_struct(task);
+out_no_task:
+	if (ret < 0)
+		return ret;
+	return count;
+}
+
+static const struct file_operations proc_use_odf_operations = {
+	.read		= proc_use_odf_read,
+	.write		= proc_use_odf_write,
+	.llseek		= generic_file_llseek,
+};
+
 #ifdef CONFIG_TASK_IO_ACCOUNTING
 static int do_io_accounting(struct task_struct *task, struct seq_file *m, int whole)
 {
@@ -3253,6 +3326,7 @@ static const struct pid_entry tgid_base_stuff[] = {
 #ifdef CONFIG_ELF_CORE
 	REG("coredump_filter", S_IRUGO|S_IWUSR, proc_coredump_filter_operations),
 #endif
+	REG("use_odf", S_IRUGO|S_IWUSR, proc_use_odf_operations),
 #ifdef CONFIG_TASK_IO_ACCOUNTING
 	ONE("io", S_IRUSR, proc_tgid_io_accounting),
 #endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 57453dba41b9..a30eca9e236a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -664,6 +664,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	memset(vma, 0, sizeof(*vma));
 	vma->vm_mm = mm;
 	vma->vm_ops = &dummy_vm_ops;
+	vma->pte_table_counter_pending = true;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 }
 
@@ -2250,6 +2251,9 @@ static inline bool pgtable_pte_page_ctor(struct page *page)
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
+
+	atomic64_set(&(page->pte_table_refcount), 0);
+
 	return true;
 }
 
@@ -2276,6 +2280,8 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 
 #define pte_alloc(mm, pmd) (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd))
 
+#define tfork_pte_alloc(mm, pmd) (__tfork_pte_alloc(mm, pmd))
+
 #define pte_alloc_map(mm, pmd, address)			\
 	(pte_alloc(mm, pmd) ? NULL : pte_offset_map(pmd, address))
 
@@ -2283,6 +2289,10 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 	(pte_alloc(mm, pmd) ?			\
 		 NULL : pte_offset_map_lock(mm, pmd, address, ptlp))
 
+#define tfork_pte_alloc_map_lock(mm, pmd, address, ptlp)	\
+	(tfork_pte_alloc(mm, pmd) ?			\
+		 NULL : pte_offset_map_lock(mm, pmd, address, ptlp))
+
 #define pte_alloc_kernel(pmd, address)			\
 	((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd))? \
 		NULL: pte_offset_kernel(pmd, address))
@@ -2616,6 +2626,7 @@ extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in,
 #ifdef CONFIG_MMU
 extern int __mm_populate(unsigned long addr, unsigned long len,
			 int ignore_errors);
+extern int __mm_populate_nolock(unsigned long addr, unsigned long len, int ignore_errors);
 static inline void mm_populate(unsigned long addr, unsigned long len)
 {
	/* Ignore errors */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f37abb2d222e..e06c677ce279 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -158,6 +158,7 @@ struct page {
			union {
				struct mm_struct *pt_mm; /* x86 pgds only */
				atomic_t pt_frag_refcount; /* powerpc */
+				atomic64_t pte_table_refcount;
			};
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
@@ -379,6 +380,7 @@ struct vm_area_struct {
	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	bool pte_table_counter_pending;
 } __randomize_layout;
 
 struct core_thread {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index d147480cdefc..6afd77ff82e6 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -90,6 +90,11 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
 }
 #define pte_offset_kernel pte_offset_kernel
+static inline pte_t *tfork_pte_offset_kernel(pmd_t pmd_val, unsigned long address)
+{
+	return (pte_t *)pmd_page_vaddr(pmd_val) + pte_index(address);
+}
+#define tfork_pte_offset_kernel tfork_pte_offset_kernel
 #endif
 
 #if defined(CONFIG_HIGHPTE)
@@ -782,6 +787,12 @@ static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
 })
 #endif
 
+#define pte_table_start(addr) \
+(addr & PMD_MASK)
+
+#define pte_table_end(addr) \
+(((addr) + PMD_SIZE) & PMD_MASK)
+
 /*
  * When walking page tables, we usually want to skip any p?d_none entries;
  * and any p?d_bad entries - reporting the error before resetting to none.
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 4d9e3a656875..8f6e50bc04ab 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -83,7 +83,10 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_HAS_PINNED		28	/* FOLL_PIN has run, never cleared */
 #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)
 
+#define MMF_USE_ODF		29
+#define MMF_USE_ODF_MASK	(1 << MMF_USE_ODF)
+
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
-				 MMF_DISABLE_THP_MASK)
+				 MMF_DISABLE_THP_MASK | MMF_USE_ODF_MASK)
 
 #endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index d738aae40f9e..4f21ea4f4f38 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -594,8 +594,13 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
		rb_parent = &tmp->vm_rb;
 
		mm->map_count++;
-		if (!(tmp->vm_flags & VM_WIPEONFORK))
+		if (!(tmp->vm_flags & VM_WIPEONFORK)) {
			retval = copy_page_range(tmp, mpnt);
+			if (oldmm->flags & MMF_USE_ODF_MASK) {
+				tmp->pte_table_counter_pending = false; // reference of the shared PTE table by the new VMA is counted in copy_pmd_range_tfork
+				mpnt->pte_table_counter_pending = false; // don't double count when forking again
+			}
+		}
 
		if (tmp->vm_ops && tmp->vm_ops->open)
			tmp->vm_ops->open(tmp);
diff --git a/mm/gup.c b/mm/gup.c
index 42b8b1fa6521..5768f339b0ff 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1489,8 +1489,11 @@ long populate_vma_page_range(struct vm_area_struct *vma,
	 * to break COW, except for shared mappings because these don't COW
	 * and we would not want to dirty them for nothing.
	 */
-	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
-		gup_flags |= FOLL_WRITE;
+	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE) {
+		if (!(mm->flags & MMF_USE_ODF_MASK)) { //for ODF processes, only allocate page tables
+			gup_flags |= FOLL_WRITE;
+		}
+	}
 
	/*
	 * We want mlock to succeed for regions that have any permissions
@@ -1669,6 +1672,60 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 }
 #endif /* !CONFIG_MMU */
 
+int __mm_populate_nolock(unsigned long start, unsigned long len, int ignore_errors)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long end, nstart, nend;
+	struct vm_area_struct *vma = NULL;
+	int locked = 0;
+	long ret = 0;
+
+	end = start + len;
+
+	for (nstart = start; nstart < end; nstart = nend) {
+		/*
+		 * We want to fault in pages for [nstart; end) address range.
+		 * Find first corresponding VMA.
+		 */
+		if (!locked) {
+			locked = 1;
+			//down_read(&mm->mmap_sem);
+			vma = find_vma(mm, nstart);
+		} else if (nstart >= vma->vm_end)
+			vma = vma->vm_next;
+		if (!vma || vma->vm_start >= end)
+			break;
+		/*
+		 * Set [nstart; nend) to intersection of desired address
+		 * range with the first VMA. Also, skip undesirable VMA types.
+		 */
+		nend = min(end, vma->vm_end);
+		if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+			continue;
+		if (nstart < vma->vm_start)
+			nstart = vma->vm_start;
+		/*
+		 * Now fault in a range of pages. populate_vma_page_range()
+		 * double checks the vma flags, so that it won't mlock pages
+		 * if the vma was already munlocked.
+		 */
+		ret = populate_vma_page_range(vma, nstart, nend, &locked);
+		if (ret < 0) {
+			if (ignore_errors) {
+				ret = 0;
+				continue;	/* continue at next VMA */
+			}
+			break;
+		}
+		nend = nstart + ret * PAGE_SIZE;
+		ret = 0;
+	}
+	/*if (locked)
+		up_read(&mm->mmap_sem);
+	*/
+	return ret;	/* 0 or negative error code */
+}
+
 /**
  * get_dump_page() - pin user page in memory while writing it to core dump
  * @addr: user address
diff --git a/mm/memory.c b/mm/memory.c
index db86558791f1..2b28766e4213 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -83,6 +83,9 @@
 #include
 #include
 
+static bool tfork_one_pte_table(struct mm_struct *, pmd_t *, unsigned long, unsigned long);
+static inline void init_rss_vec(int *rss);
+static inline void add_mm_rss_vec(struct mm_struct *mm, int *rss);
 #include "pgalloc-track.h"
 #include "internal.h"
 
@@ -227,7 +230,16 @@ static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
			   unsigned long addr)
 {
	pgtable_t token = pmd_pgtable(*pmd);
+	long counter;
	pmd_clear(pmd);
+	counter = atomic64_read(&(token->pte_table_refcount));
+	if (counter > 0) {
+		//the pte table can only be shared in this case
+#ifdef CONFIG_DEBUG_VM
+		printk("free_pte_range: addr=%lx, counter=%ld, not freeing table", addr, counter);
+#endif
+		return; //pte table is still in use
+	}
	pte_free_tlb(tlb, token, addr);
	mm_dec_nr_ptes(tlb->mm);
 }
@@ -433,6 +445,118 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
		}
	}
 }
 
+// frees every page described by the pte table
+void zap_one_pte_table(pmd_t pmd_val, unsigned long addr, struct mm_struct *mm)
+{
+	int rss[NR_MM_COUNTERS];
+	pte_t *pte;
+	unsigned long end;
+
+	init_rss_vec(rss);
+	addr = pte_table_start(addr);
+	end = pte_table_end(addr);
+	pte = tfork_pte_offset_kernel(pmd_val, addr);
+	do {
+		pte_t ptent = *pte;
+
+		if (pte_none(ptent))
+			continue;
+
+		if (pte_present(ptent)) {
+			struct page *page;
+
+			if (pte_special(ptent)) { //known special pte: vvar VMA, which has just one page shared system-wide. Shouldn't matter
+				continue;
+			}
+			page = vm_normal_page(NULL, addr, ptent); //kyz : vma is not important
+			if (unlikely(!page))
+				continue;
+			rss[mm_counter(page)]--;
+#ifdef CONFIG_DEBUG_VM
+			// printk("zap_one_pte_table: addr=%lx, end=%lx, (before) mapcount=%d, refcount=%d\n", addr, end, page_mapcount(page), page_ref_count(page));
+#endif
+			page_remove_rmap(page, false);
+			put_page(page);
+		}
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+
+	add_mm_rss_vec(mm, rss);
+}
+
+/* pmd lock should be held
+ * returns 1 if the table becomes unused
+ */
+int dereference_pte_table(pmd_t pmd_val, bool free_table, struct mm_struct *mm, unsigned long addr)
+{
+	struct page *table_page;
+
+	table_page = pmd_page(pmd_val);
+
+	if (atomic64_dec_and_test(&(table_page->pte_table_refcount))) {
+#ifdef CONFIG_DEBUG_VM
+		printk("dereference_pte_table: addr=%lx, free_table=%d, pte table reached end of life\n", addr, free_table);
+#endif
+
+		zap_one_pte_table(pmd_val, addr, mm);
+		if (free_table) {
+			pgtable_pte_page_dtor(table_page);
+			__free_page(table_page);
+			mm_dec_nr_ptes(mm);
+		}
+		return 1;
+	} else {
+#ifdef CONFIG_DEBUG_VM
+		printk("dereference_pte_table: addr=%lx, (after) pte_table_count=%lld\n", addr, atomic64_read(&(table_page->pte_table_refcount)));
+#endif
+	}
+	return 0;
+}
+
+int dereference_pte_table_multiple(pmd_t pmd_val, bool free_table, struct mm_struct *mm, unsigned long addr, int num)
+{
+	struct page *table_page;
+	int count_after;
+
+	table_page = pmd_page(pmd_val);
+	count_after = atomic64_sub_return(num, &(table_page->pte_table_refcount));
+	if (count_after <= 0) {
+#ifdef CONFIG_DEBUG_VM
+		printk("dereference_pte_table_multiple: addr=%lx, free_table=%d, num=%d, after count=%d, table reached end of life\n", addr, free_table, num, count_after);
+#endif
+
+		zap_one_pte_table(pmd_val, addr, mm);
+		if (free_table) {
+			pgtable_pte_page_dtor(table_page);
+			__free_page(table_page);
+			mm_dec_nr_ptes(mm);
+		}
+		return 1;
+	} else {
+#ifdef CONFIG_DEBUG_VM
+		printk("dereference_pte_table_multiple: addr=%lx, num=%d, (after) count=%lld\n", addr, num, atomic64_read(&(table_page->pte_table_refcount)));
+#endif
+	}
+	return 0;
+}
+
+int __tfork_pte_alloc(struct mm_struct *mm, pmd_t *pmd)
+{
+	pgtable_t new = pte_alloc_one(mm);
+
+	if (!new)
+		return -ENOMEM;
+	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
+
+	mm_inc_nr_ptes(mm);
+	//kyz: won't check if the pte table already exists
+	pmd_populate(mm, pmd, new);
+	new = NULL;
+	if (new)
+		pte_free(mm, new);
+	return 0;
+}
+
 int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 {
	spinlock_t *ptl;
@@ -928,6 +1052,45 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
	return 0;
 }
 
+static inline unsigned long
+copy_one_pte_tfork(struct mm_struct *dst_mm,
+		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
+		unsigned long addr, int *rss)
+{
+	unsigned long vm_flags = vma->vm_flags;
+	pte_t pte = *src_pte;
+	struct page *page;
+
+	/*
+	 * If it's a COW mapping
+	 * only protect in the child (the faulting process)
+	 */
+	if (is_cow_mapping(vm_flags) && pte_write(pte)) {
+		pte = pte_wrprotect(pte);
+	}
+
+	/*
+	 * If it's a shared mapping, mark it clean in
+	 * the child
+	 */
+	if (vm_flags & VM_SHARED)
+		pte = pte_mkclean(pte);
+	pte = pte_mkold(pte);
+
+	page = vm_normal_page(vma, addr, pte);
+	if (page) {
+		get_page(page);
+		page_dup_rmap(page, false);
+		rss[mm_counter(page)]++;
+#ifdef CONFIG_DEBUG_VM
printk("copy_one_pte_tfork: addr=3D%lx, (after) mapcount=3D%d, refco= unt=3D%d\n", addr, page_mapcount(page), page_ref_count(page)); +#endif + } + + set_pte_at(dst_mm, addr, dst_pte, pte); + return 0; +} + /* * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated= page * is required to copy this pte. @@ -999,6 +1162,59 @@ page_copy_prealloc(struct mm_struct *src_mm, struct= vm_area_struct *vma, return new_page; } =20 +static int copy_pte_range_tfork(struct mm_struct *dst_mm, + pmd_t *dst_pmd, pmd_t src_pmd_val, struct vm_area_struct *vma, + unsigned long addr, unsigned long end) +{ + pte_t *orig_src_pte, *orig_dst_pte; + pte_t *src_pte, *dst_pte; + spinlock_t *dst_ptl; + int rss[NR_MM_COUNTERS]; + swp_entry_t entry =3D (swp_entry_t){0}; + struct page *dst_pte_page; + + init_rss_vec(rss); + + src_pte =3D tfork_pte_offset_kernel(src_pmd_val, addr); //src_pte point= s to the old table + if (!pmd_iswrite(*dst_pmd)) { + dst_pte =3D tfork_pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl);= //dst_pte points to a new table +#ifdef CONFIG_DEBUG_VM + printk("copy_pte_range_tfork: allocated new table. addr=3D%lx, prev_ta= ble_page=3D%px, table_page=3D%px\n", addr, pmd_page(src_pmd_val), pmd_pag= e(*dst_pmd)); +#endif + } else { + dst_pte =3D pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl); + } + if (!dst_pte) + return -ENOMEM; + + dst_pte_page =3D pmd_page(*dst_pmd); + atomic64_inc(&(dst_pte_page->pte_table_refcount)); //kyz: associates th= e VMA with the new table +#ifdef CONFIG_DEBUG_VM + printk("copy_pte_range_tfork: addr =3D %lx, end =3D %lx, new pte table = counter (after)=3D%lld\n", addr, end, atomic64_read(&(dst_pte_page->pte_t= able_refcount))); +#endif + + orig_src_pte =3D src_pte; + orig_dst_pte =3D dst_pte; + arch_enter_lazy_mmu_mode(); + + do { + if (pte_none(*src_pte)) { + continue; + } + entry.val =3D copy_one_pte_tfork(dst_mm, dst_pte, src_pte, + vma, addr, rss); + if (entry.val) + printk("kyz: failed copy_one_pte_tfork call\n"); + } while (dst_pte++, src_pte++, addr +=3D PAGE_SIZE, addr !=3D end); + + arch_leave_lazy_mmu_mode(); + pte_unmap(orig_src_pte); + add_mm_rss_vec(dst_mm, rss); + pte_unmap_unlock(orig_dst_pte, dst_ptl); + + return 0; +} + static int copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *sr= c_vma, pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr, @@ -1130,8 +1346,9 @@ copy_pmd_range(struct vm_area_struct *dst_vma, stru= ct vm_area_struct *src_vma, { struct mm_struct *dst_mm =3D dst_vma->vm_mm; struct mm_struct *src_mm =3D src_vma->vm_mm; - pmd_t *src_pmd, *dst_pmd; + pmd_t *src_pmd, *dst_pmd, src_pmd_value; unsigned long next; + struct page *table_page; =20 dst_pmd =3D pmd_alloc(dst_mm, dst_pud, addr); if (!dst_pmd) @@ -1153,9 +1370,43 @@ copy_pmd_range(struct vm_area_struct *dst_vma, str= uct vm_area_struct *src_vma, } if (pmd_none_or_clear_bad(src_pmd)) continue; - if (copy_pte_range(dst_vma, src_vma, dst_pmd, src_pmd, - addr, next)) - return -ENOMEM; + if (src_mm->flags & MMF_USE_ODF_MASK) { +#ifdef CONFIG_DEBUG_VM + printk("copy_pmd_range: vm_start=3D%lx, addr=3D%lx, vm_end=3D%lx, end= =3D%lx\n", src_vma->vm_start, addr, src_vma->vm_end, end); +#endif + + src_pmd_value =3D *src_pmd; + //kyz: sets write-protect to the pmd entry if the vma is writable + if (src_vma->vm_flags & VM_WRITE) { + src_pmd_value =3D pmd_wrprotect(src_pmd_value); + set_pmd_at(src_mm, addr, src_pmd, src_pmd_value); + } + table_page =3D pmd_page(*src_pmd); + if (src_vma->pte_table_counter_pending) { // kyz : the old VMA hasn't= been counted in the 
+				atomic64_add(2, &(table_page->pte_table_refcount));
+#ifdef CONFIG_DEBUG_VM
+				printk("copy_pmd_range: addr=%lx, pte table counter (after counting old&new)=%lld\n", addr, atomic64_read(&(table_page->pte_table_refcount)));
+#endif
+			} else {
+				atomic64_inc(&(table_page->pte_table_refcount)); //increments the pte table counter
+				if (atomic64_read(&(table_page->pte_table_refcount)) == 1) { //the VMA is old, but the pte table is new (created by a fault after the last odf call)
+					atomic64_set(&(table_page->pte_table_refcount), 2);
+#ifdef CONFIG_DEBUG_VM
+					printk("copy_pmd_range: addr=%lx, pte table counter (old VMA, new pte table)=%lld\n", addr, atomic64_read(&(table_page->pte_table_refcount)));
+#endif
+				}
+#ifdef CONFIG_DEBUG_VM
+				else {
+					printk("copy_pmd_range: addr=%lx, pte table counter (after counting new)=%lld\n", addr, atomic64_read(&(table_page->pte_table_refcount)));
+				}
+#endif
+			}
+			set_pmd_at(dst_mm, addr, dst_pmd, src_pmd_value); //shares the table with the child
+		} else {
+			if (copy_pte_range(dst_vma, src_vma, dst_pmd, src_pmd,
+					   addr, next))
+				return -ENOMEM;
+		}
	} while (dst_pmd++, src_pmd++, addr = next, addr != end);
	return 0;
 }
@@ -1240,9 +1491,10 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
	 * readonly mappings. The tradeoff is that copy_page_range is more
	 * efficient than faulting.
	 */
-	if (!(src_vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP)) &&
+/*	if (!(src_vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP)) &&
	    !src_vma->anon_vma)
		return 0;
+*/
 
	if (is_vm_hugetlb_page(src_vma))
		return copy_hugetlb_page_range(dst_mm, src_mm, src_vma);
@@ -1304,7 +1556,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
				struct vm_area_struct *vma, pmd_t *pmd,
				unsigned long addr, unsigned long end,
-				struct zap_details *details)
+				struct zap_details *details, bool invalidate_pmd)
 {
	struct mm_struct *mm = tlb->mm;
	int force_flush = 0;
@@ -1343,8 +1595,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
				    details->check_mapping != page_rmapping(page))
					continue;
			}
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
+			if (!invalidate_pmd) {
+				ptent = ptep_get_and_clear_full(mm, addr, pte,
+								tlb->fullmm);
+			}
			tlb_remove_tlb_entry(tlb, pte, addr);
			if (unlikely(!page))
				continue;
@@ -1358,8 +1612,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
				    likely(!(vma->vm_flags & VM_SEQ_READ)))
					mark_page_accessed(page);
			}
-			rss[mm_counter(page)]--;
-			page_remove_rmap(page, false);
+			if (!invalidate_pmd) {
+				rss[mm_counter(page)]--;
+				page_remove_rmap(page, false);
+			} else {
+				continue;
+			}
			if (unlikely(page_mapcount(page) < 0))
				print_bad_pte(vma, addr, ptent, page);
			if (unlikely(__tlb_remove_page(tlb, page))) {
@@ -1446,12 +1704,16 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
				struct zap_details *details)
 {
	pmd_t *pmd;
-	unsigned long next;
+	unsigned long next, table_start, table_end;
+	spinlock_t *ptl;
+	struct page *table_page;
+	bool got_new_table = false;
 
	pmd = pmd_offset(pud, addr);
	do {
+		ptl = pmd_lock(vma->vm_mm, pmd);
		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
+		if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
			if (next - addr != HPAGE_PMD_SIZE)
				__split_huge_pmd(vma, pmd, addr, false, NULL);
			else if (zap_huge_pmd(tlb, vma, pmd, addr))
@@ -1478,8 +1740,49 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
		 */
		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
			goto next;
-		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
+		//kyz: copy if the pte table is shared and VMA does not cover fully the 2MB region
+		table_page = pmd_page(*pmd);
+		table_start = pte_table_start(addr);
+
+		if ((!pmd_iswrite(*pmd)) && (!vma->pte_table_counter_pending)) { //shared pte table. vma has gone through odf
+			table_end = pte_table_end(addr);
+			if (table_start < vma->vm_start || table_end > vma->vm_end) {
+#ifdef CONFIG_DEBUG_VM
+				printk("%s: addr=%lx, end=%lx, table_start=%lx, table_end=%lx, copy then zap\n", __func__, addr, end, table_start, table_end);
+#endif
+				if (dereference_pte_table(*pmd, false, vma->vm_mm, addr) != 1) { //dec the counter of the shared table. tfork_one_pte_table cannot find the current VMA (which is being unmapped)
+					got_new_table = tfork_one_pte_table(vma->vm_mm, pmd, addr, vma->vm_end);
+					if (got_new_table) {
+						next = zap_pte_range(tlb, vma, pmd, addr, next, details, false);
+					} else {
+#ifdef CONFIG_DEBUG_VM
+						printk("zap_pmd_range: no more VMAs in this process are using the table, but there are other processes using it\n");
+#endif
+						pmd_clear(pmd);
+					}
+				} else {
+#ifdef CONFIG_DEBUG_VM
+					printk("zap_pmd_range: the shared table is dead. NOT copying after all.\n");
+#endif
+					// the shared table will be freed by unmap_single_vma()
+				}
+			} else {
+#ifdef CONFIG_DEBUG_VM
+				printk("%s: addr=%lx, end=%lx, table_start=%lx, table_end=%lx, zap while preserving pte entries\n", __func__, addr, end, table_start, table_end);
+#endif
+				//kyz: shared and fully covered by the VMA, preserve the pte entries
+				next = zap_pte_range(tlb, vma, pmd, addr, next, details, true);
+				dereference_pte_table(*pmd, true, vma->vm_mm, addr);
+				pmd_clear(pmd);
+			}
+		} else {
+			next = zap_pte_range(tlb, vma, pmd, addr, next, details, false);
+			if (!vma->pte_table_counter_pending) {
+				atomic64_dec(&(table_page->pte_table_refcount));
+			}
+		}
 next:
+		spin_unlock(ptl);
		cond_resched();
	} while (pmd++, addr = next, addr != end);
 
@@ -4476,6 +4779,66 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
	return VM_FAULT_FALLBACK;
 }
 
+/* kyz: Handles an entire pte-level page table, covering multiple VMAs (if they exist)
+ * Returns true if a new table is put in place, false otherwise.
+ * if exclude is not 0, the vma that covers addr to exclude will not be copied
+ */
+static bool tfork_one_pte_table(struct mm_struct *mm, pmd_t *dst_pmd, unsigned long addr, unsigned long exclude)
+{
+	unsigned long table_end, end, orig_addr;
+	struct vm_area_struct *vma;
+	pmd_t orig_pmd_val;
+	bool copied = false;
+	struct page *orig_pte_page;
+	int num_vmas = 0;
+
+	if (!pmd_none(*dst_pmd)) {
+		orig_pmd_val = *dst_pmd;
+	} else {
+		BUG();
+	}
+
+	//kyz: Starts from the beginning of the range covered by the table
+	orig_addr = addr;
+	table_end = pte_table_end(addr);
+	addr = pte_table_start(addr);
+#ifdef CONFIG_DEBUG_VM
+	orig_pte_page = pmd_page(orig_pmd_val);
+	printk("tfork_one_pte_table: shared pte table counter=%lld, Covered Range: start=%lx, end=%lx\n", atomic64_read(&(orig_pte_page->pte_table_refcount)), addr, table_end);
+#endif
+	do {
+		vma = find_vma(mm, addr);
+		if (!vma) {
+			break; //inexplicable
+		}
+		if (vma->vm_start >= table_end) {
+			break;
+		}
+		end = pmd_addr_end(addr, vma->vm_end);
+		if (vma->pte_table_counter_pending) { //this vma is newly mapped (clean) and (fully/partly) described by this pte table
+			addr = end;
+			continue;
+		}
+		if (vma->vm_start > addr) {
+			addr = vma->vm_start;
+		}
+		if (exclude > 0 && vma->vm_start <= orig_addr && vma->vm_end >= exclude) {
+			addr = end;
+			continue;
+		}
+#ifdef CONFIG_DEBUG_VM
+		printk("tfork_one_pte_table: vm_start=%lx, vm_end=%lx\n", vma->vm_start, vma->vm_end);
+#endif
+		num_vmas++;
+		copy_pte_range_tfork(mm, dst_pmd, orig_pmd_val, vma, addr, end);
+		copied = true;
+		addr = end;
+	} while (addr < table_end);
+
+	dereference_pte_table_multiple(orig_pmd_val, true, mm, orig_addr, num_vmas);
+	return copied;
+}
+
 /*
  * These routines also need to handle stuff like marking pages dirty
  * and/or accessed for architectures that don't do it in hardware (most
@@ -4610,6 +4973,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
	pgd_t *pgd;
	p4d_t *p4d;
	vm_fault_t ret;
+	spinlock_t *ptl;
 
	pgd = pgd_offset(mm, address);
	p4d = p4d_alloc(mm, pgd, address);
@@ -4659,6 +5023,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
		vmf.orig_pmd = *vmf.pmd;
 
		barrier();
+		/*
		if (unlikely(is_swap_pmd(vmf.orig_pmd))) {
			VM_BUG_ON(thp_migration_supported() &&
					  !is_pmd_migration_entry(vmf.orig_pmd));
@@ -4666,6 +5031,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
				pmd_migration_entry_wait(mm, vmf.pmd);
			return 0;
		}
+		*/
		if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) {
			if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
				return do_huge_pmd_numa_page(&vmf);
@@ -4679,6 +5045,15 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
				return 0;
			}
		}
+		//kyz: checks if the pmd entry prohibits writes
+		if ((!pmd_none(vmf.orig_pmd)) && (!pmd_iswrite(vmf.orig_pmd)) && (vma->vm_flags & VM_WRITE)) {
+#ifdef CONFIG_DEBUG_VM
+			printk("__handle_mm_fault: PID=%d, addr=%lx\n", current->pid, address);
+#endif
+			ptl = pmd_lock(mm, vmf.pmd);
+			tfork_one_pte_table(mm, vmf.pmd, vmf.address, 0u);
+			spin_unlock(ptl);
+		}
	}
 
	return handle_pte_fault(&vmf);
diff --git a/mm/mmap.c b/mm/mmap.c
index ca54d36d203a..308d86cfe544 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -47,6 +47,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -276,6 +277,9 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 
 success:
	populate = newbrk > oldbrk && (mm->def_flags & VM_LOCKED) != 0;
+	if (mm->flags & MMF_USE_ODF_MASK) { //for ODF
+		populate = true;
+	}
	if (downgraded)
		mmap_read_unlock(mm);
	else
@@ -1115,6 +1119,50 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
	return 0;
 }
 
+static int pgtable_counter_fixup_pmd_entry(pmd_t *pmd, unsigned long addr,
+				unsigned long next, struct mm_walk *walk)
+{
+	struct page *table_page;
+
+	table_page = pmd_page(*pmd);
+	atomic64_inc(&(table_page->pte_table_refcount));
+
+#ifdef CONFIG_DEBUG_VM
+	printk("fixup inc: addr=%lx\n", addr);
+#endif
+
+	walk->action = ACTION_CONTINUE; //skip pte level
+	return 0;
+}
+
+static int pgtable_counter_fixup_test(unsigned long addr, unsigned long next,
+				struct mm_walk *walk)
+{
+	return 0;
+}
+
+static const struct mm_walk_ops pgtable_counter_fixup_walk_ops = {
+.pmd_entry = pgtable_counter_fixup_pmd_entry,
+.test_walk = pgtable_counter_fixup_test
+};
+
+int merge_vma_pgtable_counter_fixup(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+{
+	if (vma->pte_table_counter_pending) {
+		return 0;
+	} else {
+#ifdef CONFIG_DEBUG_VM
+		printk("merge fixup: vm_start=%lx, vm_end=%lx, inc start=%lx, inc end=%lx\n", vma->vm_start, vma->vm_end, start, end);
+#endif
+		start = pte_table_end(start);
+		end = pte_table_start(end);
+		__mm_populate_nolock(start, end-start, 1); //popuate tables for extended address range so that we can increment counters
+		walk_page_range(vma->vm_mm, start, end, &pgtable_counter_fixup_walk_ops, NULL);
+	}
+
+	return 0;
+}
+
 /*
  * Given a mapping request (addr,end,vm_flags,file,pgoff), figure out
  * whether that can be merged with its predecessor or its successor.
@@ -1215,6 +1263,9 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
		if (err)
			return NULL;
		khugepaged_enter_vma_merge(prev, vm_flags);
+
+		merge_vma_pgtable_counter_fixup(prev, addr, end);
+
		return prev;
	}
 
@@ -1242,6 +1293,9 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
		if (err)
			return NULL;
		khugepaged_enter_vma_merge(area, vm_flags);
+
+		merge_vma_pgtable_counter_fixup(area, addr, end);
+
		return area;
	}
 
@@ -1584,8 +1638,15 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
	addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
	if (!IS_ERR_VALUE(addr) &&
	    ((vm_flags & VM_LOCKED) ||
-	     (flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE)
+	     (flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE ||
+	     (mm->flags & MMF_USE_ODF_MASK))) {
+#ifdef CONFIG_DEBUG_VM
+		if (mm->flags & MMF_USE_ODF_MASK) {
+			printk("mmap: force populate, addr=%lx, len=%lx\n", addr, len);
+		}
+#endif
		*populate = len;
+	}
	return addr;
 }
 
@@ -2799,6 +2860,31 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
	return __split_vma(mm, vma, addr, new_below);
 }
 
+/* left and right vma after the split, address of split */
+int split_vma_pgtable_counter_fixup(struct vm_area_struct *lvma, struct vm_area_struct *rvma, bool orig_pending_flag)
+{
+	if (orig_pending_flag) {
+		return 0; //the new vma will have pending flag as true by default, just as the old vma
+	} else {
+#ifdef CONFIG_DEBUG_VM
+		printk("split fixup: set vma flag to false, rvma_start=%lx\n", rvma->vm_start);
+#endif
+		lvma->pte_table_counter_pending = false;
+		rvma->pte_table_counter_pending = false;
+
+		if (pte_table_start(rvma->vm_start) == rvma->vm_start) { //the split was right at the pte table boundary
+			return 0; //the only case where we don't increment pte table counter
+		} else {
+#ifdef CONFIG_DEBUG_VM
+			printk("split fixup: rvma_start=%lx\n", rvma->vm_start);
+#endif
+			walk_page_range(rvma->vm_mm, pte_table_start(rvma->vm_start), pte_table_end(rvma->vm_start), &pgtable_counter_fixup_walk_ops, NULL);
+		}
+	}
+
+	return 0;
+}
+
 static inline void
 unlock_range(struct vm_area_struct *start, unsigned long limit)
 {
@@ -2869,6 +2955,8 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
		if (error)
			return error;
		prev = vma;
+
+		split_vma_pgtable_counter_fixup(prev, prev->vm_next, prev->pte_table_counter_pending);
	}
 
	/* Does it split the last one? */
@@ -2877,6 +2965,7 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
		int error = __split_vma(mm, last, end, 1);
		if (error)
			return error;
+		split_vma_pgtable_counter_fixup(last->vm_prev, last, last->pte_table_counter_pending);
	}
	vma = vma_next(mm, prev);
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 4cb240fd9936..d396b1d38fab 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -445,6 +445,8 @@ static const struct mm_walk_ops prot_none_walk_ops = {
	.test_walk		= prot_none_test,
 };
 
+int split_vma_pgtable_counter_fixup(struct vm_area_struct *lvma, struct vm_area_struct *rvma, bool orig_pending_flag);
+
 int
 mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
	unsigned long start, unsigned long end, unsigned long newflags)
@@ -517,12 +519,16 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
		error = split_vma(mm, vma, start, 1);
		if (error)
			goto fail;
+
+		split_vma_pgtable_counter_fixup(vma->vm_prev, vma, vma->pte_table_counter_pending);
	}
 
	if (end != vma->vm_end) {
		error = split_vma(mm, vma, end, 0);
		if (error)
			goto fail;
+
+		split_vma_pgtable_counter_fixup(vma, vma->vm_next, vma->pte_table_counter_pending);
	}
 
 success:
-- 
2.30.2