From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: <akpm@linux-foundation.org>
Cc: Russell King <linux@armlinux.org.uk>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, <x86@kernel.org>,
	<linux-arm-kernel@lists.infradead.org>,
	<linuxppc-dev@lists.ozlabs.org>,
	<linux-riscv@lists.infradead.org>, <linux-s390@vger.kernel.org>,
	<surenb@google.com>, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 2/7] arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
Date: Tue, 2 Apr 2024 15:51:37 +0800	[thread overview]
Message-ID: <20240402075142.196265-3-wangkefeng.wang@huawei.com> (raw)
In-Reply-To: <20240402075142.196265-1-wangkefeng.wang@huawei.com>

The vm_flags of the vma have already been checked under the per-VMA lock; if
it is a bad access, directly set fault to VM_FAULT_BADACCESS and handle the
error. There is no need to retry with lock_mm_and_find_vma() and check
vm_flags again. The latency of lmbench 'lat_sig -P 1 prot lat_sig' is
reduced by 34%.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm64/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9bb9f395351a..405f9aa831bd 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -572,7 +572,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	if (!(vma->vm_flags & vm_flags)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		fault = VM_FAULT_BADACCESS;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		goto done;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
-- 
2.27.0




Thread overview: 80+ messages
2024-04-02  7:51 [PATCH 0/7] arch/mm/fault: accelerate pagefault when badaccess Kefeng Wang
2024-04-02  7:51 ` [PATCH 1/7] arm64: mm: cleanup __do_page_fault() Kefeng Wang
2024-04-03  5:11   ` Suren Baghdasaryan
2024-04-03 18:24   ` Catalin Marinas
2024-04-02  7:51 ` [PATCH 2/7] arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS Kefeng Wang [this message]
2024-04-03  5:19   ` Suren Baghdasaryan
2024-04-03  5:30     ` Suren Baghdasaryan
2024-04-03  6:13       ` Kefeng Wang
2024-04-03 18:32   ` Catalin Marinas
2024-04-02  7:51 ` [PATCH 3/7] arm: " Kefeng Wang
2024-04-03  5:30   ` Suren Baghdasaryan
2024-04-02  7:51 ` [PATCH 4/7] powerpc: mm: accelerate pagefault when badaccess Kefeng Wang
2024-04-03  5:34   ` Suren Baghdasaryan
2024-04-02  7:51 ` [PATCH 5/7] riscv: " Kefeng Wang
2024-04-03  5:37   ` Suren Baghdasaryan
2024-04-02  7:51 ` [PATCH 6/7] s390: " Kefeng Wang
2024-04-02  7:51 ` [PATCH 7/7] x86: " Kefeng Wang
2024-04-03  5:59   ` Suren Baghdasaryan
2024-04-03  7:58     ` Kefeng Wang
2024-04-03 14:23       ` Suren Baghdasaryan
