From mboxrd@z Thu Jan 1 00:00:00 1970
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, akpm@linux-foundation.org, osalvador@suse.de,
	mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com,
	chenhuang5@huawei.com, bodeddub@amazon.com, corbet@lwn.net,
	willy@infradead.org, 21cnbao@gmail.com
Cc: duanxiongchun@bytedance.com, fam.zheng@bytedance.com, smuchun@gmail.com,
	zhengqi.arch@bytedance.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 4/4] selftests: vm: add a hugetlb test case
Date: Wed, 22 Sep 2021 18:24:11 +0800
Message-Id: <20210922102411.34494-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210922102411.34494-1-songmuchun@bytedance.com>
References: <20210922102411.34494-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Since the head vmemmap page frame associated with each HugeTLB page is
reused, we should hide the PG_head flag of the tail struct pages from
the user. Add a test case to check whether it works properly. The test
steps are as follows.

  1) Allocate a 2 MB HugeTLB page.
  2) Get each page frame.
  3) Apply those APIs to each page frame.
  4) Those APIs work completely the same as before.

Reading the flags of a page via /proc/kpageflags is done in
stable_page_flags(), which invokes PageHead(), PageTail(),
PageCompound() and compound_head(). If those APIs work properly, the
head page must have bits 15 and 17 set, and the tail pages must have
bits 16 and 17 set but bit 15 unset. Those flags are checked in
check_page_flags().

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 tools/testing/selftests/vm/vmemmap_hugetlb.c | 144 ++++++++++++++++++++++++++
 1 file changed, 144 insertions(+)
 create mode 100644 tools/testing/selftests/vm/vmemmap_hugetlb.c

diff --git a/tools/testing/selftests/vm/vmemmap_hugetlb.c b/tools/testing/selftests/vm/vmemmap_hugetlb.c
new file mode 100644
index 000000000000..4cc74dd4c333
--- /dev/null
+++ b/tools/testing/selftests/vm/vmemmap_hugetlb.c
@@ -0,0 +1,144 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * A test case of using hugepage memory in a user application using the
+ * mmap system call with MAP_HUGETLB flag. Before running this program
+ * make sure the administrator has allocated enough default sized huge
+ * pages to cover the 2 MB allocation.
+ *
+ * For ia64 architecture, Linux kernel reserves Region number 4 for hugepages.
+ * That means the addresses starting with 0x800000... will need to be
+ * specified. Specifying a fixed address is not required on ppc64, i386
+ * or x86_64.
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <sys/mman.h>
+
+#define MAP_LENGTH		(2UL * 1024 * 1024)
+
+#ifndef MAP_HUGETLB
+#define MAP_HUGETLB		0x40000	/* arch specific */
+#endif
+
+#define PAGE_SIZE		4096
+
+#define PAGE_COMPOUND_HEAD	(1UL << 15)
+#define PAGE_COMPOUND_TAIL	(1UL << 16)
+#define PAGE_HUGE		(1UL << 17)
+
+#define HEAD_PAGE_FLAGS		(PAGE_COMPOUND_HEAD | PAGE_HUGE)
+#define TAIL_PAGE_FLAGS		(PAGE_COMPOUND_TAIL | PAGE_HUGE)
+
+#define PM_PFRAME_BITS		55
+#define PM_PFRAME_MASK		~((1UL << PM_PFRAME_BITS) - 1)
+
+/* Only ia64 requires this */
+#ifdef __ia64__
+#define MAP_ADDR		(void *)(0x8000000000000000UL)
+#define MAP_FLAGS		(MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_FIXED)
+#else
+#define MAP_ADDR		NULL
+#define MAP_FLAGS		(MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)
+#endif
+
+static void write_bytes(char *addr, size_t length)
+{
+	unsigned long i;
+
+	for (i = 0; i < length; i++)
+		*(addr + i) = (char)i;
+}
+
+static unsigned long virt_to_pfn(void *addr)
+{
+	int fd;
+	unsigned long pagemap;
+
+	fd = open("/proc/self/pagemap", O_RDONLY);
+	if (fd < 0)
+		return -1UL;
+
+	lseek(fd, (unsigned long)addr / PAGE_SIZE * sizeof(pagemap), SEEK_SET);
+	read(fd, &pagemap, sizeof(pagemap));
+	close(fd);
+
+	return pagemap & ~PM_PFRAME_MASK;
+}
+
+static int check_page_flags(unsigned long pfn)
+{
+	int fd, i;
+	unsigned long pageflags;
+
+	fd = open("/proc/kpageflags", O_RDONLY);
+	if (fd < 0)
+		return -1;
+
+	lseek(fd, pfn * sizeof(pageflags), SEEK_SET);
+
+	read(fd, &pageflags, sizeof(pageflags));
+	if ((pageflags & HEAD_PAGE_FLAGS) != HEAD_PAGE_FLAGS) {
+		close(fd);
+		printf("Head page flags (%lx) is invalid\n", pageflags);
+		return -1;
+	}
+
+	/*
+	 * pages other than the first page must be tail and shouldn't be head;
+	 * this also verifies kernel has correctly set the fake page_head to tail
+	 * while hugetlb_free_vmemmap is enabled.
+	 */
+	for (i = 1; i < MAP_LENGTH / PAGE_SIZE; i++) {
+		read(fd, &pageflags, sizeof(pageflags));
+		if ((pageflags & TAIL_PAGE_FLAGS) != TAIL_PAGE_FLAGS ||
+		    (pageflags & HEAD_PAGE_FLAGS) == HEAD_PAGE_FLAGS) {
+			close(fd);
+			printf("Tail page flags (%lx) is invalid\n", pageflags);
+			return -1;
+		}
+	}
+
+	close(fd);
+
+	return 0;
+}
+
+int main(int argc, char **argv)
+{
+	void *addr;
+	unsigned long pfn;
+
+	addr = mmap(MAP_ADDR, MAP_LENGTH, PROT_READ | PROT_WRITE, MAP_FLAGS, -1, 0);
+	if (addr == MAP_FAILED) {
+		perror("mmap");
+		exit(1);
+	}
+
+	/* Trigger allocation of HugeTLB page. */
+	write_bytes(addr, MAP_LENGTH);
+
+	pfn = virt_to_pfn(addr);
+	if (pfn == -1UL) {
+		munmap(addr, MAP_LENGTH);
+		perror("virt_to_pfn");
+		exit(1);
+	}
+
+	printf("Returned address is %p whose pfn is %lx\n", addr, pfn);
+
+	if (check_page_flags(pfn) < 0) {
+		munmap(addr, MAP_LENGTH);
+		perror("check_page_flags");
+		exit(1);
+	}
+
+	/* munmap() length of MAP_HUGETLB memory must be hugepage aligned */
+	if (munmap(addr, MAP_LENGTH)) {
+		perror("munmap");
+		exit(1);
+	}
+
+	return 0;
+}
-- 
2.11.0
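
As the header comment in vmemmap_hugetlb.c notes, the test expects the
administrator to have reserved enough default-sized hugepages to back the
2 MB mapping before it runs. Below is a minimal setup sketch, not part of
the patch: it assumes the default hugepage size is 2 MB, that the caller
may write /proc/sys/vm/nr_hugepages, and that a persistent pool size of 1
is acceptable on the test machine; the file name and pool size are purely
illustrative.

/*
 * reserve_hugepage.c - illustrative setup helper, not part of the patch.
 * Sets the persistent default-sized hugepage pool to one page so that
 * the selftest's 2 MB MAP_HUGETLB mapping can be backed.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *fp = fopen("/proc/sys/vm/nr_hugepages", "w");

	if (!fp) {
		perror("fopen /proc/sys/vm/nr_hugepages");
		return EXIT_FAILURE;
	}

	/* Set the persistent hugepage pool size to 1. */
	if (fprintf(fp, "1\n") < 0 || fclose(fp) != 0) {
		perror("write nr_hugepages");
		return EXIT_FAILURE;
	}

	printf("Reserved one default-sized hugepage; the selftest can now run.\n");
	return 0;
}

Writing to /proc/sys/vm/nr_hugepages adjusts the pool for the default
hugepage size, which is the pool that MAP_HUGETLB without an explicit
size flag allocates from.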