* [PATCH] restore protections after forced fault in get_user_pages
@ 2004-02-02  7:29 Roland McGrath
  2004-02-02 22:46 ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Roland McGrath @ 2004-02-02  7:29 UTC (permalink / raw)
  To: Andrew Morton, Linus Torvalds; +Cc: Linux Kernel Mailing List

get_user_pages can force a fault to succeed despite the vma's protections.
This is used in access_process_vm, so that ptrace can write an unwritable
page, or read an unreadable but present page.  The problem is that the page
table entry is then left with the escalated permissions, so after a benign
ptrace call a normal user access that should fault will instead succeed.  A
simple test case is to run this program under gdb:

	#include <stdio.h>

	int main(void)
	{
	  puts ("hi there");
	  *(volatile int *) &main = 0;
	  puts ("should be dead now");
	  return 22;
	}

do:

	$ gdb ./foo
	(gdb) break main
	(gdb) run
	(gdb) cont

With existing Linux kernels, the program will print:

	hi there
	should be dead now

and exit normally.  However, the second line of main obviously should
fault.  The following patch makes it do so.  

There are real-world scenarios where this has bitten people.  For example,
some garbage collectors use mprotect to ensure that they get known SIGSEGV
faults for accessing certain pages during collection.  When dealing with
such a program (e.g. a JVM), just examining the data at certain addresses
in the debugger can completely break the normal behavior of the program.
No fun for the poor bastard trying to debug such a garbage collector.

As the comment says, this is not airtight, i.e. atomic for racing accesses.
But it is perfectly adequate for the reasonable debugger scenario where
everything else that might use that user page is stopped while
access_process_vm gets called.  Previously there was a window where the
protections behaved wrong, from after the first ptrace-induced fault until
perhaps forever (page out).  Now there is a window from after the fault
until a tiny bit later.  I think that's a satisfactory improvement.


Thanks,
Roland


Index: linux-2.6/mm/memory.c
===================================================================
RCS file: /home/roland/redhat/bkcvs/linux-2.5/mm/memory.c,v
retrieving revision 1.141
diff -u -b -p -r1.141 memory.c
--- linux-2.6/mm/memory.c 20 Jan 2004 05:12:38 -0000 1.141
+++ linux-2.6/mm/memory.c 2 Feb 2004 06:03:29 -0000
@@ -685,6 +685,27 @@ static inline struct page *get_page_map(
 	return page;
 }
 
+/*
+ * Reset the page table entry for the given address after faulting in a page.
+ * We restore the protections indicated by its vma.
+ */
+static void
+restore_page_prot(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long address)
+{
+	pgd_t *pgd = pgd_offset(mm, address);
+	pmd_t *pmd = pmd_alloc(mm, pgd, address);
+	pte_t *pte;
+	if (!pmd)
+		return;
+	pte = pte_alloc_map(mm, pmd, address);
+	if (!pte)
+		return;
+	flush_cache_page(vma, address);
+	ptep_establish(vma, address, pte, pte_modify(*pte, vma->vm_page_prot));
+	update_mmu_cache(vma, address, entry);
+	pte_unmap(pte);
+}
 
 int get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, int len, int write, int force,
@@ -764,6 +785,22 @@ int get_user_pages(struct task_struct *t
 				}
 				spin_lock(&mm->page_table_lock);
 			}
+			if (force &&
+			    !(vma->vm_flags & (write ? VM_WRITE : VM_READ)))
+				/*
+				 * We forced a fault for protections the
+				 * page does not ordinarily allow.  Restore
+				 * the proper protections so that improper
+				 * user accesses are not allowed just
+				 * because we accessed it from the side.
+				 * Note that there is a window here where
+				 * user access could exploit the temporary
+				 * page protections.  Tough beans.  You just
+				 * have to prevent user code from running while
+				 * you are accessing memory via ptrace if you
+				 * want it to get the proper faults.
+				 */
+			      restore_page_prot(mm, vma, start);
 			if (pages) {
 				pages[i] = get_page_map(map);
 				if (!pages[i]) {

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-02  7:29 [PATCH] restore protections after forced fault in get_user_pages Roland McGrath
@ 2004-02-02 22:46 ` Andrew Morton
  2004-02-02 23:09   ` Linus Torvalds
  2004-02-02 23:48   ` Roland McGrath
  0 siblings, 2 replies; 13+ messages in thread
From: Andrew Morton @ 2004-02-02 22:46 UTC (permalink / raw)
  To: Roland McGrath; +Cc: torvalds, linux-kernel

Roland McGrath <roland@redhat.com> wrote:
>
> get_user_pages can force a fault to succeed despite the vma's protections.
> This is used in access_process_vm, so that ptrace can write an unwritable
> page, or read an unreadable but present page.

That's a bit ugly, isn't it?  We don't want to modify the pte permissions
in this case.  We just want the page frame.  But we do still want to call
handle_mm_fault() if the page isn't there at all, or to COW it.

One way to handle that would be to give the `write' arg to
handle_mm_fault() a third value which means "give us a writeable page, but
don't make the pte writeable".  Maybe that isn't warranted for this special
case.  But it would be better, really.


> +/*
> + * Reset the page table entry for the given address after faulting in a page.
> + * We restore the protections indicated by its vma.
> + */
> +static void
> +restore_page_prot(struct mm_struct *mm, struct vm_area_struct *vma,
> +		  unsigned long address)
> +{
> +	pgd_t *pgd = pgd_offset(mm, address);
> +	pmd_t *pmd = pmd_alloc(mm, pgd, address);
> +	pte_t *pte;
> +	if (!pmd)
> +		return;
> +	pte = pte_alloc_map(mm, pmd, address);
> +	if (!pte)
> +		return;
> +	flush_cache_page(vma, address);
> +	ptep_establish(vma, address, pte, pte_modify(*pte, vma->vm_page_prot));
> +	update_mmu_cache(vma, address, entry);
> +	pte_unmap(pte);
> +}

This guy forgot to flush the tlb after modifying the pte permissions, and
symbol `entry' is not defined.  We've hit the latter problem multiple
times.  It's a good argument for empty inlines rather than empty macros.

diff -puN mm/memory.c~get_user_pages-restore-protections-fix mm/memory.c
--- 25/mm/memory.c~get_user_pages-restore-protections-fix	Mon Feb  2 14:34:34 2004
+++ 25-akpm/mm/memory.c	Mon Feb  2 14:35:47 2004
@@ -701,6 +701,7 @@ restore_page_prot(struct mm_struct *mm, 
 	pgd_t *pgd = pgd_offset(mm, address);
 	pmd_t *pmd = pmd_alloc(mm, pgd, address);
 	pte_t *pte;
+	pte_t entry;
 
 	if (!pmd)
 		return;
@@ -708,7 +709,9 @@ restore_page_prot(struct mm_struct *mm, 
 	if (!pte)
 		return;
 	flush_cache_page(vma, address);
-	ptep_establish(vma, address, pte, pte_modify(*pte, vma->vm_page_prot));
+	entry = pte_modify(*pte, vma->vm_page_prot);
+	ptep_establish(vma, address, pte, entry);
+	flush_tlb_page(vma, address);
 	update_mmu_cache(vma, address, entry);
 	pte_unmap(pte);
 }

_



* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-02 22:46 ` Andrew Morton
@ 2004-02-02 23:09   ` Linus Torvalds
  2004-02-02 23:48   ` Roland McGrath
  1 sibling, 0 replies; 13+ messages in thread
From: Linus Torvalds @ 2004-02-02 23:09 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Roland McGrath, linux-kernel



On Mon, 2 Feb 2004, Andrew Morton wrote:
> 
> One way to handle that would be to give the `write' arg to
> handle_mm_fault() a third value which means "give us a writeable page, but
> don't make the pte writeable".  Maybe that isn't warranted for this special
> case.  But it would be better, really.

I agree - it would be a "fault for an exclusive write, but don't do the
writability part". I don't think any of the low-level filesystems would 
even know about that flag, since the actual page table accesses are done 
by the generic VM.

		Linus


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-02 22:46 ` Andrew Morton
  2004-02-02 23:09   ` Linus Torvalds
@ 2004-02-02 23:48   ` Roland McGrath
  2004-02-02 23:55     ` Linus Torvalds
  1 sibling, 1 reply; 13+ messages in thread
From: Roland McGrath @ 2004-02-02 23:48 UTC (permalink / raw)
  To: Andrew Morton; +Cc: torvalds, linux-kernel

> That's a bit ugly, isn't it?  We don't want to modify the pte permissions
> in this case.  We just want the page frame.  But we do still want to call
> handle_mm_fault() if the page isn't there at all, or to COW it.

I quite agree.  My first crack was ugliness in isolation, because I
anticipated resistance to changing the function signatures in the fault
path and it seemed like a fair bit of twiddling would be required.

> One way to handle that would be to give the `write' arg to
> handle_mm_fault() a third value which means "give us a writeable page, but
> don't make the pte writeable".  Maybe that isn't warranted for this special
> case.  But it would be better, really.

It would be ideal.  However, it would also require changing the interfaces
further.  Currently handle_mm_fault just says what happened, and doesn't
give back the page directly.  get_user_pages then retakes
mm->page_table_lock and calls follow_page to look up the page.
So either handle_mm_fault would need to be able to return the page
directly, or else follow_page would need to be changed to do "force"
lookups that don't bail out when the pte is unwritable.  i.e.:

--- memory.c	20 Jan 2004 05:12:38 -0000	1.141
+++ memory.c	2 Feb 2004 23:38:57 -0000
@@ -621,7 +621,7 @@ void zap_page_range(struct vm_area_struc
  * mm->page_table_lock must be held.
  */
 struct page *
-follow_page(struct mm_struct *mm, unsigned long address, int write) 
+follow_page(struct mm_struct *mm, unsigned long address, int write, int force)
 {
 	pgd_t *pgd;
 	pmd_t *pmd;
@@ -652,7 +652,7 @@ follow_page(struct mm_struct *mm, unsign
 	pte = *ptep;
 	pte_unmap(ptep);
 	if (pte_present(pte)) {
-		if (write && !pte_write(pte))
+		if (write && !force && !pte_write(pte))
 			goto out;
 		if (write && !pte_dirty(pte)) {
 			struct page *page = pte_page(pte);


Off hand I'm not positive that is sufficient to get all the cases right
once the fault installs the proper permissions, though perhaps it is.
Remember, there is not only the writing unreadable case, but the case of
reading and writing unreadable (PROT_NONE) as well.

Then there is the issue of not making the pte writable in the first place.
pte_mkwrite is used to construct the pte in a variety of places I can see
off hand in memory.c (break_cow, do_wp_page, do_swap_page,
do_anonymous_page, do_no_page), and I haven't traced all the hugetlbpage
code paths that are also written this way.  Perhaps it would be sufficient
to change all these pte_mkwrite(pte) into pte_modify(pte, vma->vm_page_prot).
But I am not really confident right off that I know everything that's going
on here.

I've outlined the changes that I think would be sufficient (plus figuring
out the analogous changes to hugetlb stuff).  I'd be happy to give it a
try.  But I'm not at all confident to begin with that I'm aware of all the
pitfalls.


Thanks,
Roland


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-02 23:48   ` Roland McGrath
@ 2004-02-02 23:55     ` Linus Torvalds
  2004-02-03  0:03       ` Roland McGrath
  2004-02-03  0:09       ` Roland McGrath
  0 siblings, 2 replies; 13+ messages in thread
From: Linus Torvalds @ 2004-02-02 23:55 UTC (permalink / raw)
  To: Roland McGrath; +Cc: Andrew Morton, linux-kernel



On Mon, 2 Feb 2004, Roland McGrath wrote:
> 
> It would be ideal.  However, it would also require changing the interfaces
> further.  Currently handle_mm_fault just says what happened, and doesn't
> give back the page directly.  get_user_pages then retakes
> mm->page_table_lock and calls follow_page to look up the page.

I'd suggest making this be:
 - handle_mm_fault() take a more detailed flag ("read / write / copy", 
   where the new "copy" part is a write that actually leaves the page 
   only readable, but marks it dirty)
 - we do "follow-page" with a read.

That should be sufficient, I think: since "handle_mm_fault()" marks the 
page dirty (but not writable) and will have done all the work to do a COW, 
we know that once we do the "follow_page()", we'll be getting a private 
copy. Which is what we wanted.

So only "handle_mm_fault()" would actually need changing.

I think.

Anybody see any problems with this approach?

		Linus


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-02 23:55     ` Linus Torvalds
@ 2004-02-03  0:03       ` Roland McGrath
  2004-02-03  0:09       ` Roland McGrath
  1 sibling, 0 replies; 13+ messages in thread
From: Roland McGrath @ 2004-02-03  0:03 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

> That should be sufficient, I think: since "handle_mm_fault()" marks the 
> page dirty (but not writable) and will have done all the work to do a COW, 
> we know that once we do the "follow_page()", we'll be getting a private 
> copy. Which is what we wanted.

The only potential hole I can see here is if there is an (exceedingly rare)
race where the page could be ejected after handle_mm_fault, then brought
back in but not marked dirty, before follow_page looks it up and returns a
page to be used for writing without marking it dirty.  Obviously this is a
ridiculously unlikely case.  But what I don't know is whether it is
strictly speaking impossible.  That is, that a page once dirty then later
not present, would ever again be present without also being dirty.


Thanks,
Roland


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-02 23:55     ` Linus Torvalds
  2004-02-03  0:03       ` Roland McGrath
@ 2004-02-03  0:09       ` Roland McGrath
  2004-02-03  0:30         ` Linus Torvalds
  1 sibling, 1 reply; 13+ messages in thread
From: Roland McGrath @ 2004-02-03  0:09 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

> I'd suggest making this be:
>  - handle_mm_fault() take a more detailed flag ("read / write / copy", 
>    where the new "copy" part is a write that actually leaves the page 
>    only readable, but marks it dirty)

On second thought, why is it necessary to have the caller tell
handle_mm_fault "write" vs "copy"?  The existing "write" flag says do COW
and mark it dirty.  Why not just redefine it not to also mean "make the pte
writable", but rather "make the pte as writable as the vma says"?
i.e., just replace pte_mkwrite(pte) with pte_modify(pte, vma->vm_page_prot)
throughout.  That way we don't have to change all the arch/*/fault.c callers.

I think this question is orthogonal to my concern about follow_page.


Thanks,
Roland


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-03  0:09       ` Roland McGrath
@ 2004-02-03  0:30         ` Linus Torvalds
  2004-02-03  0:42           ` Andrew Morton
                             ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Linus Torvalds @ 2004-02-03  0:30 UTC (permalink / raw)
  To: Roland McGrath; +Cc: Andrew Morton, linux-kernel



On Mon, 2 Feb 2004, Roland McGrath wrote:
> 
> On second thought, why is it necessary to have the caller tell
> handle_mm_fault "write" vs "copy"?  The existing "write" flag says do COW
> and mark it dirty.  Why not just redefine it not to also mean "make the pte
> writable", but rather "make the pte as writable as the vma says"?
> i.e., just replace pte_mkwrite(pte) with pte_modify(pte, vma->vm_page_prot)
> throughout.  That way we don't have to change all the arch/*/fault.c callers.

I think you're right, we could do it that way too. And that should 
actually fix the problem that "do_wp_page()" doesn't even get the 
"write_access" status at all (since it is _only_ called for writes).

So yes, that should work.

> I think this question is orthogonal to my concern about follow_page.

The follow_page issue should be fixable by just marking the _page_ dirty 
inside the ptrace routines. I think we do that anyway (or we'd already be 
buggy wrt perfectly normal writes).

Ie the sequence would be:

 - handle_mm_fault(write/copy)
	This makes sure that we have done a COW. There is no other real 
	issue, but we obviously _do_ need to make sure that a private copy
	gets split. Marking it dirty is necessary: otherwise the private
	page might be thrown out by the pageout, resulting in the virtual 
	page being re-loaded with a clean (shared) copy. That would be 
	bad.

 - follow_page() looks up the page
	The page may indeed have been written out here (very unlikely, 
	but for the sake of the argument, let's assume so). However, that 
	doesn't matter - what matters is that the VM must not "re-share" 
	the page, so that if the mapping was private, we'll still have a 
	private page.

	Marking it dirty will guarantee this: even if the virtual page 
	gets evicted, it then gets evicted as a _swap_ page, not as a
	demand-loadable clean page.

 - we write to the page, and mark the "struct page" dirty with SetPageDirty()
	Now the page tables might be marked clean (but only if the thing 
	was a shared mapping), but because we marked the page dirty, we
	are guaranteed to not lose the data we wrote.

Do you see any holes in this?

		Linus


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-03  0:30         ` Linus Torvalds
@ 2004-02-03  0:42           ` Andrew Morton
  2004-02-03  0:55           ` Roland McGrath
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2004-02-03  0:42 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: roland, linux-kernel

Linus Torvalds <torvalds@osdl.org> wrote:
>
> mark the "struct page" dirty with SetPageDirty()

set_page_dirty(), please.  It takes care of list motion and dirty page
accounting.




* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-03  0:30         ` Linus Torvalds
  2004-02-03  0:42           ` Andrew Morton
@ 2004-02-03  0:55           ` Roland McGrath
  2004-02-03  8:29           ` Ingo Molnar
  2004-02-03 10:25           ` Roland McGrath
  3 siblings, 0 replies; 13+ messages in thread
From: Roland McGrath @ 2004-02-03  0:55 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

> The follow_page issue should be fixable by just marking the _page_ dirty 
> inside the ptrace routines. I think we do that anyway (or we'd already be 
> buggy wrt perfectly normal writes).

access_process_vm already calls set_page_dirty_locked, in fact.
So things are relatively simple.

I suggested using pte_modify(pte, vma->vm_page_prot).  That is not the
right thing to do, in fact.  For private mappings, the vm_page_prot value
is always unwritable (PAGE_COPY).  So getting the right bits is a tiny bit
hairier.  Either it can do:

	if (vma->vm_flags & VM_WRITE)
		entry = pte_mkwrite(entry);

or it can do something branchless like:

	entry = pte_modify(entry, protection_map[vma->vm_flags & 0x7]);

Or maybe there is a more optimal formulation I haven't thought of.


Thanks,
Roland


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-03  0:30         ` Linus Torvalds
  2004-02-03  0:42           ` Andrew Morton
  2004-02-03  0:55           ` Roland McGrath
@ 2004-02-03  8:29           ` Ingo Molnar
  2004-02-03 10:25           ` Roland McGrath
  3 siblings, 0 replies; 13+ messages in thread
From: Ingo Molnar @ 2004-02-03  8:29 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Roland McGrath, Andrew Morton, linux-kernel


* Linus Torvalds <torvalds@osdl.org> wrote:

> > I think this question is orthogonal to my concern about follow_page.
> 
> The follow_page issue should be fixable by just marking the _page_
> dirty inside the ptrace routines. I think we do that anyway (or we'd
> already be buggy wrt perfectly normal writes).

the follow_page() issue could be handled like the 4/4 patch does it: a
repeat-until loop of manual-fault + lookup-pte (we break out of the loop
if the manual fault fails or when the lookup is successful). The race
window is so small that the repeating solution is, I think, far superior
to artificially dirtying the pte.

	Ingo


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-03  0:30         ` Linus Torvalds
                             ` (2 preceding siblings ...)
  2004-02-03  8:29           ` Ingo Molnar
@ 2004-02-03 10:25           ` Roland McGrath
  2004-02-03 17:04             ` Linus Torvalds
  3 siblings, 1 reply; 13+ messages in thread
From: Roland McGrath @ 2004-02-03 10:25 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

This patch is working for me.  What do you think?

I'm not sure I followed Ingo's comment.  The ptes are still marked dirty
here, just not also writable.  I can't see off hand how it matters one way
or the other, since the page will have been marked dirty by get_user_pages
already.  I don't see a problem making the pte_dirty use also conditional
if that is better somehow.


Thanks,
Roland


Index: linux-2.6/mm/memory.c
===================================================================
RCS file: /home/roland/redhat/bkcvs/linux-2.5/mm/memory.c,v
retrieving revision 1.141
diff -b -p -u -r1.141 memory.c
--- linux-2.6/mm/memory.c 20 Jan 2004 05:12:38 -0000 1.141
+++ linux-2.6/mm/memory.c 3 Feb 2004 05:18:30 -0000
@@ -746,7 +746,8 @@ int get_user_pages(struct task_struct *t
 		spin_lock(&mm->page_table_lock);
 		do {
 			struct page *map;
-			while (!(map = follow_page(mm, start, write))) {
+			int lookup_write = write;
+			while (!(map = follow_page(mm, start, lookup_write))) {
 				spin_unlock(&mm->page_table_lock);
 				switch (handle_mm_fault(mm,vma,start,write)) {
 				case VM_FAULT_MINOR:
@@ -762,6 +763,14 @@ int get_user_pages(struct task_struct *t
 				default:
 					BUG();
 				}
+				/*
+				 * Now that we have performed a write fault
+				 * and surely no longer have a shared page we
+				 * shouldn't write, we shouldn't ignore an
+				 * unwritable page in the page table if
+				 * we are forcing write access.
+				 */
+				lookup_write = write && !force;
 				spin_lock(&mm->page_table_lock);
 			}
 			if (pages) {
@@ -951,6 +960,19 @@ int remap_page_range(struct vm_area_stru
 EXPORT_SYMBOL(remap_page_range);
 
 /*
+ * Do pte_mkwrite, but only if the vma says VM_WRITE.  We do this when
+ * servicing faults for write access.  In the normal case, we always want
+ * pte_mkwrite.  But get_user_pages can cause write faults for mappings
+ * that do not have writing enabled, when used by access_process_vm.
+ */
+static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+{
+	if (likely(vma->vm_flags & VM_WRITE))
+		pte = pte_mkwrite(pte);
+	return pte;
+}
+
+/*
  * We hold the mm semaphore for reading and vma->vm_mm->page_table_lock
  */
 static inline void break_cow(struct vm_area_struct * vma, struct page * new_page, unsigned long address, 
@@ -959,7 +981,8 @@ static inline void break_cow(struct vm_a
 	pte_t entry;
 
 	flush_cache_page(vma, address);
-	entry = pte_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)));
+	entry = maybe_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot)),
+			      vma);
 	ptep_establish(vma, address, page_table, entry);
 	update_mmu_cache(vma, address, entry);
 }
@@ -1011,7 +1034,8 @@ static int do_wp_page(struct mm_struct *
 		unlock_page(old_page);
 		if (reuse) {
 			flush_cache_page(vma, address);
-			entry = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte)));
+			entry = maybe_mkwrite(pte_mkyoung(pte_mkdirty(pte)),
+					      vma);
 			ptep_establish(vma, address, page_table, entry);
 			update_mmu_cache(vma, address, entry);
 			pte_unmap(page_table);
@@ -1279,7 +1303,7 @@ static int do_swap_page(struct mm_struct
 	mm->rss++;
 	pte = mk_pte(page, vma->vm_page_prot);
 	if (write_access && can_share_swap_page(page))
-		pte = pte_mkdirty(pte_mkwrite(pte));
+		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 	unlock_page(page);
 
 	flush_icache_page(vma, page);
@@ -1346,7 +1370,9 @@ do_anonymous_page(struct mm_struct *mm, 
 			goto out;
 		}
 		mm->rss++;
-		entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
+		entry = maybe_mkwrite(pte_mkdirty(mk_pte(page,
+							 vma->vm_page_prot)),
+				      vma);
 		lru_cache_add_active(page);
 		mark_page_accessed(page);
 	}
@@ -1462,7 +1488,7 @@ retry:
 		flush_icache_page(vma, new_page);
 		entry = mk_pte(new_page, vma->vm_page_prot);
 		if (write_access)
-			entry = pte_mkwrite(pte_mkdirty(entry));
+			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 		set_pte(page_table, entry);
 		pte_chain = page_add_rmap(new_page, page_table, pte_chain);
 		pte_unmap(page_table);


* Re: [PATCH] restore protections after forced fault in get_user_pages
  2004-02-03 10:25           ` Roland McGrath
@ 2004-02-03 17:04             ` Linus Torvalds
  0 siblings, 0 replies; 13+ messages in thread
From: Linus Torvalds @ 2004-02-03 17:04 UTC (permalink / raw)
  To: Roland McGrath; +Cc: Andrew Morton, linux-kernel



On Tue, 3 Feb 2004, Roland McGrath wrote:
>
> This patch is working for me.  What do you think?

I think it looks ok, and makes sense. I don't see anything wrong with it, 
so if we put it in -mm for a while to test, and nobody else can see any 
gotcha's with it..

		Linus


Thread overview: 13+ messages
2004-02-02  7:29 [PATCH] restore protections after forced fault in get_user_pages Roland McGrath
2004-02-02 22:46 ` Andrew Morton
2004-02-02 23:09   ` Linus Torvalds
2004-02-02 23:48   ` Roland McGrath
2004-02-02 23:55     ` Linus Torvalds
2004-02-03  0:03       ` Roland McGrath
2004-02-03  0:09       ` Roland McGrath
2004-02-03  0:30         ` Linus Torvalds
2004-02-03  0:42           ` Andrew Morton
2004-02-03  0:55           ` Roland McGrath
2004-02-03  8:29           ` Ingo Molnar
2004-02-03 10:25           ` Roland McGrath
2004-02-03 17:04             ` Linus Torvalds
