From: David Howells <dhowells@redhat.com>
To: Will Deacon <will@kernel.org>
Cc: dhowells@redhat.com, paulmck@linux.ibm.com, mark.rutland@arm.com,
torvalds@linuxfoundation.org, linux-kernel@vger.kernel.org,
linux-fsdevel@vger.kernel.org, peterz@infradead.org
Subject: Do we need to correct barriering in circular-buffers.rst?
Date: Wed, 18 Sep 2019 16:43:00 +0100 [thread overview]
Message-ID: <15228.1568821380@warthog.procyon.org.uk> (raw)
In-Reply-To: <20190917170716.ud457wladfhhjd6h@willie-the-truck>
Will Deacon <will@kernel.org> wrote:
> If I'm understanding your code correctly (big 'if'), then you have things
> like this in pipe_read():
>
>         unsigned int head = READ_ONCE(pipe->head);
>         unsigned int tail = pipe->tail;
>         unsigned int mask = pipe->buffers - 1;
>
>         if (tail != head) {
>                 struct pipe_buffer *buf = &pipe->bufs[tail & mask];
>
>                 [...]
>
>                 written = copy_page_to_iter(buf->page, buf->offset, chars, to);
>
> where you want to make sure you don't read from 'buf->page' until after
> you've read the updated head index. Is that right? If so, then READ_ONCE()
> will not give you that guarantee on architectures such as Power and Arm,
> because the 'if (tail != head)' branch can be speculated and the buffer
> can be read before we've got around to looking at the head index.
>
> So I reckon you need smp_load_acquire() in this case. pipe_write() might be
> ok with the control dependency because CPUs don't tend to make speculative
> writes visible, but I didn't check it carefully and the compiler can do
> crazy stuff in this area, so I'd be inclined to use smp_load_acquire() here
> too unless you really need the last ounce of performance.
Yeah, I probably do.
Documentation/core-api/circular-buffers.rst might be wrong, then, I think.
It mandates using smp_store_release() to update buffer->head in the producer
and buffer->tail in the consumer - but these need pairing with memory barriers
used when reading buffer->head and buffer->tail on the other side. Currently,
for the producer we have:
        spin_lock(&producer_lock);

        unsigned long head = buffer->head;
        /* The spin_unlock() and next spin_lock() provide needed ordering. */
        unsigned long tail = READ_ONCE(buffer->tail);

        if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
                /* insert one item into the buffer */
                struct item *item = buffer[head];

                produce_item(item);

                smp_store_release(buffer->head,
                                  (head + 1) & (buffer->size - 1));

                /* wake_up() will make sure that the head is committed before
                 * waking anyone up */
                wake_up(consumer);
        }

        spin_unlock(&producer_lock);
I think the ordering comment about spin_unlock and spin_lock is wrong. There's
no requirement to have a spinlock on either side - and in any case, both sides
could be inside their respective locked sections when accessing the buffer.
The READ_ONCE() would theoretically provide the smp_read_barrier_depends() to
pair with the smp_store_release() in the consumer. Maybe I should change this
to:
        spin_lock(&producer_lock);

        /* Barrier paired with consumer-side store-release on tail */
        unsigned long tail = smp_load_acquire(&buffer->tail);
        unsigned long head = buffer->head;

        if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
                /* insert one item into the buffer */
                struct item *item = buffer[head];

                produce_item(item);

                smp_store_release(&buffer->head,
                                  (head + 1) & (buffer->size - 1));

                /* wake_up() will make sure that the head is committed before
                 * waking anyone up */
                wake_up(consumer);
        }

        spin_unlock(&producer_lock);
The consumer is currently:
        spin_lock(&consumer_lock);

        /* Read index before reading contents at that index. */
        unsigned long head = smp_load_acquire(buffer->head);
        unsigned long tail = buffer->tail;

        if (CIRC_CNT(head, tail, buffer->size) >= 1) {
                /* extract one item from the buffer */
                struct item *item = buffer[tail];

                consume_item(item);

                /* Finish reading descriptor before incrementing tail. */
                smp_store_release(buffer->tail,
                                  (tail + 1) & (buffer->size - 1));
        }

        spin_unlock(&consumer_lock);
which I think is okay.
And yes, I note that this does use smp_load_acquire(buffer->head) in the
consumer - which I should also be doing.
David
Thread overview: 18+ messages
2019-09-13 13:00 [RFC][PATCH] pipe: Convert ring to head/tail David Howells
2019-09-13 13:06 ` My just-shovel-data-through-for-X-amount-of-time test David Howells
2019-09-15 14:59 ` [RFC][PATCH] pipe: Convert ring to head/tail Will Deacon
2019-09-17 13:51 ` David Howells
2019-09-17 17:07 ` Will Deacon
2019-09-18 15:43 ` David Howells [this message]
2019-09-18 16:48 ` Do we need to correct barriering in circular-buffers.rst? Linus Torvalds
2019-09-19 13:59 ` David Howells
2019-09-19 15:59 ` Linus Torvalds
2019-09-23 14:49 ` Peter Zijlstra
2019-09-27 9:51 ` Andrea Parri
2019-09-27 12:49 ` Peter Zijlstra
2019-09-27 15:57 ` Peter Zijlstra
2019-09-27 20:43 ` Nick Desaulniers
2019-09-27 21:58 ` Nick Desaulniers
2019-09-30 9:33 ` Peter Zijlstra
2019-09-30 11:54 ` Peter Zijlstra
2019-09-30 12:02 ` Peter Zijlstra