From: Akira Tsukamoto <akira.tsukamoto@gmail.com>
To: Paul Walmsley <paul.walmsley@sifive.com>,
    Palmer Dabbelt <palmer@dabbelt.com>,
    Albert Ou <aou@eecs.berkeley.edu>,
    linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/1] riscv: __asm_copy_to-from_user: Improve using word copy, if size is < 9*SZREG
Date: Thu, 11 Nov 2021 17:13:04 +0900
Message-ID: <747e611a-2225-0685-b1e6-8b45ef45042d@gmail.com>
In-Reply-To: <6ebbb5e0-c2bc-89ce-2cb8-4f537c5aea13@gmail.com>

Reduce how often the slow byte_copy path is taken. Currently byte_copy
handles every case where the size is smaller than 9*SZREG. When the
size is between 2*SZREG and 9*SZREG, use the faster word_copy instead.

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
---
 arch/riscv/lib/uaccess.S | 46 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index 63bc691cff91..50013479cb86 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -34,8 +34,10 @@ ENTRY(__asm_copy_from_user)
 	/*
 	 * Use byte copy only if too small.
 	 * SZREG holds 4 for RV32 and 8 for RV64
+	 * a3 - 2*SZREG is the minimum size for word_copy:
+	 *      1*SZREG for aligning dst + 1*SZREG for word_copy
 	 */
-	li	a3, 9*SZREG /* size must be larger than size in word_copy */
+	li	a3, 2*SZREG
 	bltu	a2, a3, .Lbyte_copy_tail
 
 	/*
@@ -66,9 +68,40 @@ ENTRY(__asm_copy_from_user)
 	andi	a3, a1, SZREG-1
 	bnez	a3, .Lshift_copy
 
+.Lcheck_size_bulk:
+	/*
+	 * Check whether the remaining size is large enough for the
+	 * unrolled copy: word_copy_unrolled requires more than 8*SZREG
+	 */
+	li	a3, 8*SZREG
+	add	a4, a0, a3
+	bltu	a4, t0, .Lword_copy_unrolled
+
 .Lword_copy:
-	/*
-	 * Both src and dst are aligned, unrolled word copy
+	/*
+	 * Both src and dst are aligned
+	 * Non-unrolled word copy, 1*SZREG per iteration
+	 *
+	 * a0 - start of aligned dst
+	 * a1 - start of aligned src
+	 * t0 - end of aligned dst
+	 */
+	bgeu	a0, t0, .Lbyte_copy_tail /* check if end of copy */
+	addi	t0, t0, -(SZREG) /* avoid overrunning the buffer */
+1:
+	fixup REG_L	a5, 0(a1)
+	addi	a1, a1, SZREG
+	fixup REG_S	a5, 0(a0)
+	addi	a0, a0, SZREG
+	bltu	a0, t0, 1b
+
+	addi	t0, t0, SZREG /* revert to original value */
+	j	.Lbyte_copy_tail
+
+.Lword_copy_unrolled:
+	/*
+	 * Both src and dst are aligned
+	 * Unrolled word copy, 8*SZREG per iteration
 	 *
 	 * a0 - start of aligned dst
 	 * a1 - start of aligned src
@@ -97,7 +130,12 @@ ENTRY(__asm_copy_from_user)
 	bltu	a0, t0, 2b
 
 	addi	t0, t0, 8*SZREG /* revert to original value */
-	j	.Lbyte_copy_tail
+
+	/*
+	 * The remainder might still be large enough for word_copy to
+	 * reduce the slow byte copy
+	 */
+	j	.Lcheck_size_bulk
 
 .Lshift_copy:
-- 
2.17.1
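For readers following along, the control flow this patch creates can be
summarized with a rough, self-contained C model. This is only an
illustration, not part of the patch: copy_model() and its structure are
made-up stand-ins for the assembly labels, it assumes src ends up
word-aligned along with dst (the misaligned shift_copy path is ignored),
and it omits the uaccess fixup handlers entirely.

#include <stddef.h>
#include <stdint.h>

#define SZREG sizeof(uintptr_t)	/* 4 on RV32, 8 on RV64 */

/* Rough model of the copy dispatch; names are illustrative only. */
static void copy_model(unsigned char *dst, const unsigned char *src,
		       size_t n)
{
	size_t i;

	/*
	 * Below 2*SZREG there is no room for aligning dst (up to
	 * SZREG-1 bytes) plus at least one whole word: byte copy all.
	 */
	if (n < 2 * SZREG)
		goto byte_copy_tail;

	/* Byte copy the head until dst is SZREG-aligned. */
	while (((uintptr_t)dst & (SZREG - 1)) != 0) {
		*dst++ = *src++;
		n--;
	}

	/* check_size_bulk: unrolled copy needs more than 8*SZREG left. */
	while (n > 8 * SZREG) {
		for (i = 0; i < 8; i++)
			((uintptr_t *)dst)[i] = ((const uintptr_t *)src)[i];
		dst += 8 * SZREG;
		src += 8 * SZREG;
		n -= 8 * SZREG;
	}

	/*
	 * word_copy: the new non-unrolled path, one word per iteration.
	 * Handles both the 2*SZREG..9*SZREG sizes and whatever the
	 * unrolled loop leaves behind.
	 */
	while (n >= SZREG) {
		*(uintptr_t *)dst = *(const uintptr_t *)src;
		dst += SZREG;
		src += SZREG;
		n -= SZREG;
	}

byte_copy_tail:
	while (n--)
		*dst++ = *src++;
}

The assembly differs mainly in that it bounds its loops with an end
pointer kept in t0 rather than a byte count, and wraps every user-memory
access in the fixup macro for fault handling.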