From: Michael Clark <michaeljclark@mac.com>
To: Paul Burton <paul.burton@mips.com>,
Jonas Gorski <jonas.gorski@gmail.com>
Cc: RISC-V Patches <patches@groups.riscv.org>,
Linux RISC-V <linux-riscv@lists.infradead.org>,
Andrew Waterman <andrew@sifive.com>,
Linux MIPS <linux-mips@linux-mips.org>
Subject: Re: [PATCH 3/3] MIPS: fix truncation in __cmpxchg_small for short values
Date: Sun, 24 Feb 2019 13:09:19 +1300 [thread overview]
Message-ID: <f44161a1-a8cf-0d95-2da5-af062833491d@mac.com> (raw)
In-Reply-To: <20190211202023.rch4duvcctyspvxe@pburton-laptop>
Hi Paul,
On 12/02/19 9:20 AM, Paul Burton wrote:
> Hello,
>
> On Mon, Feb 11, 2019 at 01:37:08PM +0100, Jonas Gorski wrote:
>> On Mon, 11 Feb 2019 at 05:39, Michael Clark <michaeljclark@mac.com> wrote:
>>> diff --git a/arch/mips/kernel/cmpxchg.c b/arch/mips/kernel/cmpxchg.c
>>> index 0b9535bc2c53..1169958fd748 100644
>>> --- a/arch/mips/kernel/cmpxchg.c
>>> +++ b/arch/mips/kernel/cmpxchg.c
>>> @@ -57,7 +57,7 @@ unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
>>> u32 mask, old32, new32, load32;
>>> volatile u32 *ptr32;
>>> unsigned int shift;
>>> - u8 load;
>>> + u32 load;
>>
>> There already is a u32 line above, so maybe move it there.
>>
>> Also reading the code to understand this, isn't the old code broken
>> for cmpxchg_small calls for 16-bit variables, where old is > 0xff?
>>
>> because it does later
>>
>> /*
>> * Ensure the byte we want to exchange matches the expected
>> * old value, and if not then bail.
>> */
>> load = (load32 & mask) >> shift;
>> if (load != old)
>> return load;
>>
>> and if load is a u8, it can never be old if old contains a larger
>> value than what can fit in a u8.
>>
>> After re-reading your commit log, it seems you say something similar,
>> but it wasn't quite obvious for me that this means the function is
>> basically broken for short values where the old value does not fit in
>> u8.
>>
>> So this should have an appropriate "Fixes:" tag. And Cc: stable. Seems
>> like quite a serious issue to me.
Okay. I was pretty sure it was a bug but I am not sure about the
conventions for Linux fixes.
> It could be serious if used, though in practice this support was added
> to support qrwlock which only really needed 8-bit support at the time.
> Since then commit d13316614633 ("locking/qrwlock: Prevent slowpath
> writers getting held up by fastpath") removed even that.
Yes. I suspected it was a latent bug. Truncating shorts in compare and
swap would only show up with unusual values.
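To make the failure mode concrete, here is a simplified standalone model of the compare step (not the kernel code itself; the mask/shift parameters are illustrative): with `load` declared u8, any 16-bit old value above 0xff can never compare equal, so the cmpxchg always bails.

```c
#include <stdint.h>
#include <stdbool.h>

/* broken variant: load declared u8, as before the patch */
static bool old_matches_u8(uint32_t load32, uint32_t mask,
                           unsigned int shift, unsigned long old)
{
	uint8_t load = (load32 & mask) >> shift; /* truncates 16-bit values */
	return load == old;
}

/* fixed variant: load declared u32, as after the patch */
static bool old_matches_u32(uint32_t load32, uint32_t mask,
                            unsigned int shift, unsigned long old)
{
	uint32_t load = (load32 & mask) >> shift; /* value kept intact */
	return load == old;
}
```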
> But still, yes it's clearly a nasty little bug if anyone does try to use
> a 16-bit cmpxchg() & I've marked it for stable backport whilst applying.
>
> Thanks for fixing this Michael :)
Appreciated. Yes, thanks for taking the time to verify it. Although it
is quite an obvious fix, it could be a big time sink if one encountered
it in the wild, as one wouldn't expect a broken intrinsic.
Sorry for the lag in replying. At the time it felt somewhat like
throwing text plus code over the fence, as part of discovery for a
submission regarding RISC-V atomics. I had every intention of circling
back, just that I am not good with many small emails.
This has prompted me to revise a fair spinlock implementation (one that
fits into 32 bits).
.. RISC-V tangent...
Related to the parent thread: I was looking into ticket spinlocks for
threads on bare metal, which is what prompted this patch in the first
place. While the Linux asm-generic ticket spinlock is fair, LR/SC for
small-word atomics requires significantly more instructions, and thus
carries a cycle penalty for fairness versus amoadd. The problem on
RISC-V, however, is that a fair spinlock using amoadd for both head and
tail would need to be 64 bits in size, due to the 32-bit minimum width
for atomic operations. For a per-CPU structure on a large system, this
is significant memory.
A compromise between code size and compactness of the structure would be
amoadd.w of 0x0001_0000 for head acquisition and LR/SC for tail release.
I chose 2^16 because 255 cores seems a bit small present day, given folk
are fitting more than a thousand RISC-V cores on an FPGA, and one
assumes 4096 is quite plausible. Anyhow, here is a 32-bit ticket
spinlock with support for 65,535 cores (needs verification):
spin_lock:
    lui      a2,0x10        # a2 = 1 << 16, the ticket increment
    amoadd.w a5,a2,(a0)     # atomically take a ticket; a5 = old lock value
    srliw    a4,a5,0x10     # a4 = our ticket (high 16 bits)
2:  slliw    a5,a5,0x10
    srliw    a5,a5,0x10     # a5 = now-serving count (low 16 bits)
    bne      a5,a4,3f
    ret
3:  lw       a5,0(a0)       # reload the lock word and keep spinning
    fence    r,rw
    j        2b
spin_trylock:
    lui      a5,0x10        # a5 = 1 << 16, the ticket increment
    lr.w.aq  a4,(a0)        # a4 = current lock value
    slliw    a3,a4,0x10
    srliw    a3,a3,0x10     # a3 = now-serving count (low 16 bits)
    srliw    a2,a4,0x10     # a2 = next ticket (high 16 bits)
    addw     a5,a5,a4       # a5 = lock value with the ticket half bumped
    beq      a3,a2,2f       # free iff serving count == next ticket
1:  li       a0,0           # failure
    ret
2:  sc.w.rl  a4,a5,(a0)
    bnez     a4,1b          # reservation lost: report failure
    fence    r,rw
    li       a0,1           # success
    ret
spin_unlock:
    fence    rw,w
1:  lr.w.aq  a4,(a0)
    srliw    a5,a4,0x10     # a5 = ticket half, preserved across unlock
    addiw    a4,a4,1
    slliw    a4,a4,0x10
    srliw    a4,a4,0x10     # a4 = serving count + 1, wrapped to 16 bits
    slliw    a5,a5,0x10
    or       a5,a5,a4       # a5 = new lock value
    sc.w.rl  a4,a5,(a0)
    bnez     a4,1b          # retry if the store-conditional failed
    ret
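For readers more comfortable in C, the same 32-bit layout (serving count in the low half, next ticket in the high half) can be sketched with GCC `__atomic` builtins. This models the semantics, not the exact instruction sequences above; the names are illustrative.

```c
#include <stdint.h>

#define TICKET_INC 0x00010000u  /* bumps the next-ticket (high) half */

static void ticket_lock(uint32_t *lock)
{
	/* grab a ticket by bumping the high half (maps to amoadd.w) */
	uint32_t old = __atomic_fetch_add(lock, TICKET_INC, __ATOMIC_ACQUIRE);
	uint16_t ticket = (uint16_t)(old >> 16);
	/* spin until the serving count reaches our ticket */
	while ((uint16_t)__atomic_load_n(lock, __ATOMIC_ACQUIRE) != ticket)
		;
}

static void ticket_unlock(uint32_t *lock)
{
	uint32_t old = __atomic_load_n(lock, __ATOMIC_RELAXED);
	uint32_t new_val;
	do {
		/* keep the ticket half, advance the serving count with
		 * 16-bit wraparound; this per-half update is why the asm
		 * unlock needs LR/SC rather than a plain amoadd */
		new_val = (old & 0xffff0000u) | (uint16_t)(old + 1);
	} while (!__atomic_compare_exchange_n(lock, &old, new_val, 0,
					      __ATOMIC_RELEASE, __ATOMIC_RELAXED));
}
```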
We could keep the current simple (but unfair) spinlock. I do suspect
unfairness will not scale, so whatever is done, it may end up needing to
be a config option. I am actually fond of the idea of a RISC-V Atomic
Extension V2 in RISC-V V3.0 or V4.0 with 48-bit instructions. A 6-byte
instruction would be quite a nice compromise.
It seems that the hybrid approach (above) using amoadd.w for the head,
i.e. fast ticket number acquisition followed by a spin, is logical. This
balances the code size penalty for lock/unlock when trying to fit a
scalable ticket spinlock into 32 bits. If one swaps head and tail, then
lock acquisition has a high cost and lock release becomes trivial, which
seems backwards. spin_trylock necessarily must use LR/SC, as it needs to
conditionally acquire a ticket.
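That last point can be sketched in C as well: the free-check and the ticket grab must happen in one atomic step, which a single compare-exchange expresses (and which compiles down to an LR/SC sequence on RISC-V). Again an illustrative model, not the asm itself.

```c
#include <stdint.h>
#include <stdbool.h>

/* 32-bit ticket lock word: low 16 bits = now-serving count,
 * high 16 bits = next ticket to hand out */
static bool ticket_trylock(uint32_t *lock)
{
	uint32_t old = __atomic_load_n(lock, __ATOMIC_RELAXED);
	/* the lock is free only when serving count == next ticket */
	if ((uint16_t)old != (uint16_t)(old >> 16))
		return false;
	uint32_t new_val = old + 0x00010000u;  /* take a ticket */
	/* one atomic step: check-and-grab, i.e. the LR/SC window */
	return __atomic_compare_exchange_n(lock, &old, new_val, 0,
					   __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
```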
This is actually RISC-V generic, so I probably should post it somewhere
where folk are interested in RISC-V software. I think if we come up with
a simple lock, it should be usable in BSD or Linux GPL, so consider any
of these fragments public domain. Verification still needs to be
applied. The previous patches were tested in QEMU, but this asm is part
compiler-emitted and part hand-coded (the compiler is not yet smart
enough to avoid illegal branches inside LR/SC sequences, as it doesn't
parse LR/SC; that's possibly an Arm issue too, i.e. other RISCs; so just
sharing thoughts...).
Michael.
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv