From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: rick.p.edgecombe@intel.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
	broonie@kernel.org, dave.hansen@linux.intel.com,
	debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org,
	kirill.shutemov@linux.intel.com, luto@kernel.org,
	mingo@redhat.com, peterz@infradead.org,
	sparclinux@vger.kernel.org, tglx@linutronix.de, x86@kernel.org,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Subject: [RFC v2.1 08/12] s390: Use initializer for struct vm_unmapped_area_info
Date: Fri,  1 Mar 2024 16:17:10 -0800	[thread overview]
Message-ID: <20240302001714.674091-8-rick.p.edgecombe@intel.com> (raw)
In-Reply-To: <20240302001714.674091-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each field
manually, so if new fields are added they will be uninitialized and the
core code parsing the struct will see garbage in the new fields.
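
To illustrate (a sketch of the existing pattern, not code lifted verbatim
from any one call site), the manual style at risk looks like this:

	struct vm_unmapped_area_info info;

	info.flags = 0;
	info.length = len;
	info.low_limit = mm->mmap_base;
	info.high_limit = TASK_SIZE;
	info.align_mask = 0;
	info.align_offset = 0;
	/* A member added to the struct later is never assigned above, so
	 * vm_unmapped_area() would read stack garbage from it. */
	addr = vm_unmapped_area(&info);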

It would be possible to initialize the new field manually to 0 at each
call site. This and a couple of other options were discussed, and the
consensus (see links) was that in general the best way to accomplish this
is via static initialization with designated field initializers. Having
some struct vm_unmapped_area_info instances not zero-initialized would put
those sites at risk of feeding garbage into vm_unmapped_area() if the
convention is to zero-initialize the struct and any new field addition
misses a call site that initializes each field manually.
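
As a minimal sketch of the preferred pattern (using the struct's current
members), a designated initializer names only the fields that need
non-zero values and implicitly zeroes the rest, including any member
added in the future:

	struct vm_unmapped_area_info info = {
		.length = len,
		.low_limit = mm->mmap_base,
		.high_limit = TASK_SIZE,
	};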

It would also be possible to leave the code mostly untouched and just
change the line:
struct vm_unmapped_area_info info
to:
struct vm_unmapped_area_info info = {};

However, that would leave the existing manual zero assignments in place
as future cleanup, since they would no longer be required.

So, to reduce the chance of bugs via uninitialized fields, instead simply
continue the process of converting the initialization this way tree-wide.
This will zero any unspecified members. Move the field initializers to the
struct declaration when they are known at that time. Leave out the fields
that were manually initialized to zero, as they would be redundant with
designated initializers.

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
Hi,

This patch was split and refactored out of a tree-wide change [0] that
zero-initializes each struct vm_unmapped_area_info. The overall goal of
the series is to support guard gaps for shadow stacks. Currently, only one
arch has shadow stacks, but two more are in progress. This patch is 0day
tested only.

Thanks,

Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/s390/mm/hugetlbpage.c | 27 +++++++++++++--------------
 arch/s390/mm/mmap.c        | 25 +++++++++++++------------
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index c2d2850ec8d5..dd7245b276e6 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -258,14 +258,13 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.low_limit = current->mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.align_mask = PAGE_MASK & ~huge_page_mask(h)
+	};
 
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = current->mm->mmap_base;
-	info.high_limit = TASK_SIZE;
-	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -274,15 +273,15 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.low_limit = PAGE_SIZE,
+		.high_limit = current->mm->mmap_base,
+		.align_mask = PAGE_MASK & ~huge_page_mask(h)
+	};
 	unsigned long addr;
 
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.low_limit = PAGE_SIZE;
-	info.high_limit = current->mm->mmap_base;
-	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index cd52d72b59cf..203eb653b92f 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -77,7 +77,12 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.low_limit = mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	if (len > TASK_SIZE - mmap_min_addr)
 		return -ENOMEM;
@@ -93,15 +98,10 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 			goto check_asce_limit;
 	}
 
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
 	if (filp || (flags & MAP_SHARED))
 		info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
 	else
 		info.align_mask = 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 	addr = vm_unmapped_area(&info);
 	if (offset_in_page(addr))
 		return addr;
@@ -116,7 +116,13 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.low_limit = PAGE_SIZE,
+		.high_limit = mm->mmap_base,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE - mmap_min_addr)
@@ -134,15 +140,10 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 			goto check_asce_limit;
 	}
 
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.low_limit = PAGE_SIZE;
-	info.high_limit = mm->mmap_base;
 	if (filp || (flags & MAP_SHARED))
 		info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
 	else
 		info.align_mask = 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 	addr = vm_unmapped_area(&info);
 
 	/*
-- 
2.34.1


Thread overview: 94+ messages

2024-02-26 19:09 [PATCH v2 0/9] Cover a guard gap corner case Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 1/9] mm: Switch mm->get_unmapped_area() to a flag Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 2/9] mm: Introduce arch_get_unmapped_area_vmflags() Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 3/9] mm: Use get_unmapped_area_vmflags() Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 4/9] thp: Add thp_get_unmapped_area_vmflags() Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 5/9] mm: Initialize struct vm_unmapped_area_info Rick Edgecombe
2024-02-27  7:02   ` Christophe Leroy
2024-02-27 15:00     ` Edgecombe, Rick P
2024-02-27 18:07     ` Kees Cook
2024-02-27 18:16       ` Christophe Leroy
2024-02-27 20:25         ` Edgecombe, Rick P
2024-02-28 13:22           ` Christophe Leroy
2024-02-28 17:01             ` Edgecombe, Rick P
2024-02-28 23:10               ` Christophe Leroy
2024-02-28 17:21             ` Kees Cook
2024-03-02  0:47               ` Edgecombe, Rick P
2024-03-02  1:51                 ` Kees Cook
2024-03-04 18:00                   ` Christophe Leroy
2024-03-04 18:03                     ` Edgecombe, Rick P
2024-02-28 11:51   ` Kirill A. Shutemov
2024-03-02  0:17   ` [RFC v2.1 01/12] ARC: Use initializer for " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 02/12] ARM: " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 03/12] csky: " Rick Edgecombe
2024-03-03  3:09       ` Guo Ren
2024-03-05 14:51         ` Edgecombe, Rick P
2024-03-02  0:17     ` [RFC v2.1 04/12] LoongArch: " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 05/12] MIPS: " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 06/12] parisc: " Rick Edgecombe
2024-03-02  6:35       ` Helge Deller
2024-03-05 14:51         ` Edgecombe, Rick P
2024-03-02  0:17     ` [RFC v2.1 07/12] powerpc: " Rick Edgecombe
2024-03-05  0:51       ` Michael Ellerman
2024-03-05 14:50         ` Edgecombe, Rick P
2024-03-02  0:17     ` Rick Edgecombe [this message]
2024-03-02  0:17     ` [RFC v2.1 09/12] sh: " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 10/12] sparc: " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 11/12] x86/mm: " Rick Edgecombe
2024-03-02  0:17     ` [RFC v2.1 12/12] hugetlbfs: " Rick Edgecombe
2024-03-02  4:42     ` [RFC v2.1 01/12] ARC: " Vineet Gupta
2024-02-26 19:09 ` [PATCH v2 6/9] mm: Take placement mappings gap into account Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 7/9] x86/mm: Implement HAVE_ARCH_UNMAPPED_AREA_VMFLAGS Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 8/9] x86/mm: Care about shadow stack guard gap during placement Rick Edgecombe
2024-02-26 19:09 ` [PATCH v2 9/9] selftests/x86: Add placement guard gap test for shstk Rick Edgecombe
