* [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive
@ 2022-02-16 10:53 Hanna Reitz
  2022-02-16 10:53 ` [PATCH v2 1/3] " Hanna Reitz
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Hanna Reitz @ 2022-02-16 10:53 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Eric Blake, qemu-devel, Stefan Hajnoczi

Hi,

v1 with detailed reasoning:
https://lists.nongnu.org/archive/html/qemu-block/2022-02/msg00508.html

This series makes bdrv_refresh_limits() non-recursive so that it is
sufficient for callers to ensure that the node on which they call it
will not receive concurrent I/O requests (instead of ensuring the same
for the whole subtree).

We need to ensure such I/O does not happen because bdrv_refresh_limits()
is not atomic and will produce intermediate invalid values, which will
break concurrent I/O requests that read these values.


v2:
- Use separate `try` block to clean up in patch 2 instead of putting the
  `os.remove()` in the existing one (which would cause the second
  `os.remove()` to be skipped if the first one failed)
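
  For illustration, a minimal standalone sketch of the difference
  (placeholder paths, purely illustrative, not the actual test code):

    import os

    sock_path = '/tmp/qsd.sock'  # placeholder paths
    pid_path = '/tmp/qsd.pid'

    # One shared try block: if removing the first file raises OSError,
    # the second removal is never reached.
    try:
        os.remove(sock_path)
        os.remove(pid_path)
    except OSError:
        pass

    # v2 instead gives each removal its own try block, so both removals
    # are always attempted:
    try:
        os.remove(sock_path)
    except OSError:
        pass
    try:
        os.remove(pid_path)
    except OSError:
        pass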


git-backport-diff against v1:

Key:
[----] : patches are identical
[####] : number of functional differences between upstream/downstream patch
[down] : patch is downstream-only
The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively

001/3:[----] [--] 'block: Make bdrv_refresh_limits() non-recursive'
002/3:[0005] [FC] 'iotests: Allow using QMP with the QSD'
003/3:[----] [--] 'iotests/graph-changes-while-io: New test'


Hanna Reitz (3):
  block: Make bdrv_refresh_limits() non-recursive
  iotests: Allow using QMP with the QSD
  iotests/graph-changes-while-io: New test

 block/io.c                                    |  4 -
 tests/qemu-iotests/iotests.py                 | 32 ++++++-
 .../qemu-iotests/tests/graph-changes-while-io | 91 +++++++++++++++++++
 .../tests/graph-changes-while-io.out          |  5 +
 4 files changed, 127 insertions(+), 5 deletions(-)
 create mode 100755 tests/qemu-iotests/tests/graph-changes-while-io
 create mode 100644 tests/qemu-iotests/tests/graph-changes-while-io.out

-- 
2.34.1




* [PATCH v2 1/3] block: Make bdrv_refresh_limits() non-recursive
  2022-02-16 10:53 [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Hanna Reitz
@ 2022-02-16 10:53 ` Hanna Reitz
  2022-03-03 16:56   ` Kevin Wolf
  2022-02-16 10:53 ` [PATCH v2 2/3] iotests: Allow using QMP with the QSD Hanna Reitz
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Hanna Reitz @ 2022-02-16 10:53 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Eric Blake, qemu-devel, Stefan Hajnoczi

bdrv_refresh_limits() recurses down to the node's children.  That does
not seem necessary: When we refresh limits on some node, and then
recurse down and were to change one of its children's BlockLimits, then
that would mean we noticed the changed limits by pure chance.  The fact
that we refresh the parent's limits has nothing to do with it, so the
reason for the change probably happened before this point in time, and
we should have refreshed the limits then.

On the other hand, we do not have infrastructure for noticing that block
limits change after they have been initialized for the first time (this
would require propagating the change upwards to the respective node's
parents), and so evidently we consider this case impossible.

If this case is impossible, then we will not need to recurse down in
bdrv_refresh_limits().  Every node's limits are initialized in
bdrv_open_driver(), and are refreshed whenever its children change.
We want to use the children's limits to get some initial default, but we
can just take them; we do not need to refresh them.

The problem with recursing is that bdrv_refresh_limits() is not atomic.
It begins with zeroing BDS.bl, and only then sets proper, valid limits.
If we do not drain all nodes whose limits are refreshed, then concurrent
I/O requests can encounter invalid request_alignment values and crash
qemu.  Therefore, a recursing bdrv_refresh_limits() requires the whole
subtree to be drained, which is currently not ensured by most callers.
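
To illustrate the window, here is a self-contained toy sketch of the
pattern (plain Python, not the actual QEMU code; whether the reader
actually hits the window depends on scheduling, but in qemu such a read
leads to a crash):

    import threading

    class Limits:
        request_alignment = 512

    bl = Limits()
    bad_reads = 0

    def refresh():
        # Mimics bdrv_refresh_limits(): zero the limits first, then set
        # valid values again.
        for _ in range(100_000):
            bl.request_alignment = 0
            bl.request_alignment = 512

    def do_io():
        # Mimics a concurrent request that reads the limits in between.
        global bad_reads
        for _ in range(100_000):
            if bl.request_alignment == 0:
                bad_reads += 1

    t = threading.Thread(target=refresh)
    t.start()
    do_io()
    t.join()
    print(f'{bad_reads} reads observed request_alignment == 0')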

A non-recursive bdrv_refresh_limits() only requires the node in question
to not receive I/O requests, and this is done by most callers in some
way or another:
- bdrv_open_driver() deals with a new node with no parents yet
- bdrv_set_file_or_backing_noperm() acts on a drained node
- bdrv_reopen_commit() acts only on drained nodes
- bdrv_append() should in theory require the node to be drained; in
  practice most callers just lock the AioContext, which should at least
  be enough to prevent concurrent I/O requests from accessing invalid
  limits

So we can resolve the bug by making bdrv_refresh_limits() non-recursive.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1879437
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 block/io.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/block/io.c b/block/io.c
index 4e4cb556c5..c3e7301613 100644
--- a/block/io.c
+++ b/block/io.c
@@ -189,10 +189,6 @@ void bdrv_refresh_limits(BlockDriverState *bs, Transaction *tran, Error **errp)
     QLIST_FOREACH(c, &bs->children, next) {
         if (c->role & (BDRV_CHILD_DATA | BDRV_CHILD_FILTERED | BDRV_CHILD_COW))
         {
-            bdrv_refresh_limits(c->bs, tran, errp);
-            if (*errp) {
-                return;
-            }
             bdrv_merge_limits(&bs->bl, &c->bs->bl);
             have_limits = true;
         }
-- 
2.34.1




* [PATCH v2 2/3] iotests: Allow using QMP with the QSD
  2022-02-16 10:53 [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Hanna Reitz
  2022-02-16 10:53 ` [PATCH v2 1/3] " Hanna Reitz
@ 2022-02-16 10:53 ` Hanna Reitz
  2022-02-25 18:54   ` Eric Blake
  2022-02-16 10:53 ` [PATCH v2 3/3] iotests/graph-changes-while-io: New test Hanna Reitz
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Hanna Reitz @ 2022-02-16 10:53 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Eric Blake, qemu-devel, Stefan Hajnoczi

Add a parameter to optionally open a QMP connection when creating a
QemuStorageDaemon instance.
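
For example, usage then looks roughly like this (hypothetical snippet,
not part of the diff below):

    from iotests import QemuStorageDaemon

    qsd = QemuStorageDaemon(
        '--blockdev', 'null-co,node-name=node0,read-zeroes=true',
        qmp=True)                      # also opens a QMP monitor socket

    result = qsd.qmp('query-version')  # returns the full QMP response dict
    qsd.stop()                         # closes the QMP connection as well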

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 tests/qemu-iotests/iotests.py | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 6ba65eb1ff..6027780180 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -39,6 +39,7 @@
 
 from qemu.machine import qtest
 from qemu.qmp import QMPMessage
+from qemu.aqmp.legacy import QEMUMonitorProtocol
 
 # Use this logger for logging messages directly from the iotests module
 logger = logging.getLogger('qemu.iotests')
@@ -348,14 +349,30 @@ def cmd(self, cmd):
 
 
 class QemuStorageDaemon:
-    def __init__(self, *args: str, instance_id: str = 'a'):
+    _qmp: Optional[QEMUMonitorProtocol] = None
+    _qmpsock: Optional[str] = None
+    # Python < 3.8 would complain if this type were not a string literal
+    # (importing `annotations` from `__future__` would work; but not on <= 3.6)
+    _p: 'Optional[subprocess.Popen[bytes]]' = None
+
+    def __init__(self, *args: str, instance_id: str = 'a', qmp: bool = False):
         assert '--pidfile' not in args
         self.pidfile = os.path.join(test_dir, f'qsd-{instance_id}-pid')
         all_args = [qsd_prog] + list(args) + ['--pidfile', self.pidfile]
 
+        if qmp:
+            self._qmpsock = os.path.join(sock_dir, f'qsd-{instance_id}.sock')
+            all_args += ['--chardev',
+                         f'socket,id=qmp-sock,path={self._qmpsock}',
+                         '--monitor', 'qmp-sock']
+
+            self._qmp = QEMUMonitorProtocol(self._qmpsock, server=True)
+
         # Cannot use with here, we want the subprocess to stay around
         # pylint: disable=consider-using-with
         self._p = subprocess.Popen(all_args)
+        if self._qmp is not None:
+            self._qmp.accept()
         while not os.path.exists(self.pidfile):
             if self._p.poll() is not None:
                 cmd = ' '.join(all_args)
@@ -370,11 +387,24 @@ def __init__(self, *args: str, instance_id: str = 'a'):
 
         assert self._pid == self._p.pid
 
+    def qmp(self, cmd: str, args: Optional[Dict[str, object]] = None) \
+            -> QMPMessage:
+        assert self._qmp is not None
+        return self._qmp.cmd(cmd, args)
+
     def stop(self, kill_signal=15):
         self._p.send_signal(kill_signal)
         self._p.wait()
         self._p = None
 
+        if self._qmp:
+            self._qmp.close()
+
+        if self._qmpsock is not None:
+            try:
+                os.remove(self._qmpsock)
+            except OSError:
+                pass
         try:
             os.remove(self.pidfile)
         except OSError:
-- 
2.34.1




* [PATCH v2 3/3] iotests/graph-changes-while-io: New test
  2022-02-16 10:53 [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Hanna Reitz
  2022-02-16 10:53 ` [PATCH v2 1/3] " Hanna Reitz
  2022-02-16 10:53 ` [PATCH v2 2/3] iotests: Allow using QMP with the QSD Hanna Reitz
@ 2022-02-16 10:53 ` Hanna Reitz
  2022-02-28 14:42 ` [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Stefan Hajnoczi
  2022-03-04 13:36 ` Kevin Wolf
  4 siblings, 0 replies; 11+ messages in thread
From: Hanna Reitz @ 2022-02-16 10:53 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Eric Blake, qemu-devel, Stefan Hajnoczi

Test the following scenario:
1. Some block node (null-co) attached to a user (here: NBD server) that
   performs I/O and keeps the node in an I/O thread
2. Repeatedly run blockdev-add/blockdev-del to add/remove an overlay
   to/from that node

Each blockdev-add triggers bdrv_refresh_limits(), and because
blockdev-add runs in the main thread, it does not stop the I/O requests.
I/O can thus happen while the limits are refreshed, and when such a
request sees a temporarily invalid block limit (e.g. alignment is 0),
this may easily crash qemu (or the storage daemon in this case).

The block layer needs to ensure that I/O requests to a node are paused
while that node's BlockLimits are refreshed.

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 .../qemu-iotests/tests/graph-changes-while-io | 91 +++++++++++++++++++
 .../tests/graph-changes-while-io.out          |  5 +
 2 files changed, 96 insertions(+)
 create mode 100755 tests/qemu-iotests/tests/graph-changes-while-io
 create mode 100644 tests/qemu-iotests/tests/graph-changes-while-io.out

diff --git a/tests/qemu-iotests/tests/graph-changes-while-io b/tests/qemu-iotests/tests/graph-changes-while-io
new file mode 100755
index 0000000000..567e8cf21e
--- /dev/null
+++ b/tests/qemu-iotests/tests/graph-changes-while-io
@@ -0,0 +1,91 @@
+#!/usr/bin/env python3
+# group: rw
+#
+# Test graph changes while I/O is happening
+#
+# Copyright (C) 2022 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import os
+from threading import Thread
+import iotests
+from iotests import imgfmt, qemu_img, qemu_img_create, QMPTestCase, \
+        QemuStorageDaemon
+
+
+top = os.path.join(iotests.test_dir, 'top.img')
+nbd_sock = os.path.join(iotests.sock_dir, 'nbd.sock')
+
+
+def do_qemu_img_bench() -> None:
+    """
+    Do some I/O requests on `nbd_sock`.
+    """
+    assert qemu_img('bench', '-f', 'raw', '-c', '2000000',
+                    f'nbd+unix:///node0?socket={nbd_sock}') == 0
+
+
+class TestGraphChangesWhileIO(QMPTestCase):
+    def setUp(self) -> None:
+        # Create an overlay that can be added at runtime on top of the
+        # null-co block node that will receive I/O
+        assert qemu_img_create('-f', imgfmt, '-F', 'raw', '-b', 'null-co://',
+                               top) == 0
+
+        # QSD instance with a null-co block node in an I/O thread,
+        # exported over NBD (on `nbd_sock`, export name "node0")
+        self.qsd = QemuStorageDaemon(
+            '--object', 'iothread,id=iothread0',
+            '--blockdev', 'null-co,node-name=node0,read-zeroes=true',
+            '--nbd-server', f'addr.type=unix,addr.path={nbd_sock}',
+            '--export', 'nbd,id=exp0,node-name=node0,iothread=iothread0,' +
+                        'fixed-iothread=true,writable=true',
+            qmp=True
+        )
+
+    def tearDown(self) -> None:
+        self.qsd.stop()
+
+    def test_blockdev_add_while_io(self) -> None:
+        # Run qemu-img bench in the background
+        bench_thr = Thread(target=do_qemu_img_bench)
+        bench_thr.start()
+
+        # While qemu-img bench is running, repeatedly add and remove an
+        # overlay to/from node0
+        while bench_thr.is_alive():
+            result = self.qsd.qmp('blockdev-add', {
+                'driver': imgfmt,
+                'node-name': 'overlay',
+                'backing': 'node0',
+                'file': {
+                    'driver': 'file',
+                    'filename': top
+                }
+            })
+            self.assert_qmp(result, 'return', {})
+
+            result = self.qsd.qmp('blockdev-del', {
+                'node-name': 'overlay'
+            })
+            self.assert_qmp(result, 'return', {})
+
+        bench_thr.join()
+
+if __name__ == '__main__':
+    # Format must support raw backing files
+    iotests.main(supported_fmts=['qcow', 'qcow2', 'qed'],
+                 supported_protocols=['file'])
diff --git a/tests/qemu-iotests/tests/graph-changes-while-io.out b/tests/qemu-iotests/tests/graph-changes-while-io.out
new file mode 100644
index 0000000000..ae1213e6f8
--- /dev/null
+++ b/tests/qemu-iotests/tests/graph-changes-while-io.out
@@ -0,0 +1,5 @@
+.
+----------------------------------------------------------------------
+Ran 1 tests
+
+OK
-- 
2.34.1




* Re: [PATCH v2 2/3] iotests: Allow using QMP with the QSD
  2022-02-16 10:53 ` [PATCH v2 2/3] iotests: Allow using QMP with the QSD Hanna Reitz
@ 2022-02-25 18:54   ` Eric Blake
  0 siblings, 0 replies; 11+ messages in thread
From: Eric Blake @ 2022-02-25 18:54 UTC (permalink / raw)
  To: Hanna Reitz; +Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel, qemu-block

On Wed, Feb 16, 2022 at 11:53:54AM +0100, Hanna Reitz wrote:
> Add a parameter to optionally open a QMP connection when creating a
> QemuStorageDaemon instance.
> 
> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
> ---
>  tests/qemu-iotests/iotests.py | 32 +++++++++++++++++++++++++++++++-
>  1 file changed, 31 insertions(+), 1 deletion(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org




* Re: [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive
  2022-02-16 10:53 [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Hanna Reitz
                   ` (2 preceding siblings ...)
  2022-02-16 10:53 ` [PATCH v2 3/3] iotests/graph-changes-while-io: New test Hanna Reitz
@ 2022-02-28 14:42 ` Stefan Hajnoczi
  2022-03-04 13:36 ` Kevin Wolf
  4 siblings, 0 replies; 11+ messages in thread
From: Stefan Hajnoczi @ 2022-02-28 14:42 UTC (permalink / raw)
  To: Hanna Reitz; +Cc: Kevin Wolf, Eric Blake, qemu-devel, qemu-block

[-- Attachment #1: Type: text/plain, Size: 2047 bytes --]

On Wed, Feb 16, 2022 at 11:53:52AM +0100, Hanna Reitz wrote:
> Hi,
> 
> v1 with detailed reasoning:
> https://lists.nongnu.org/archive/html/qemu-block/2022-02/msg00508.html
> 
> This series makes bdrv_refresh_limits() non-recursive so that it is
> sufficient for callers to ensure that the node on which they call it
> will not receive concurrent I/O requests (instead of ensuring the same
> for the whole subtree).
> 
> We need to ensure such I/O does not happen because bdrv_refresh_limits()
> is not atomic and will produce intermediate invalid values, which will
> break concurrent I/O requests that read these values.
> 
> 
> v2:
> - Use separate `try` block to clean up in patch 2 instead of putting the
>   `os.remove()` in the existing one (which would cause the second
>   `os.remove()` to be skipped if the first one failed)
> 
> 
> git-backport-diff against v1:
> 
> Key:
> [----] : patches are identical
> [####] : number of functional differences between upstream/downstream patch
> [down] : patch is downstream-only
> The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively
> 
> 001/3:[----] [--] 'block: Make bdrv_refresh_limits() non-recursive'
> 002/3:[0005] [FC] 'iotests: Allow using QMP with the QSD'
> 003/3:[----] [--] 'iotests/graph-changes-while-io: New test'
> 
> 
> Hanna Reitz (3):
>   block: Make bdrv_refresh_limits() non-recursive
>   iotests: Allow using QMP with the QSD
>   iotests/graph-changes-while-io: New test
> 
>  block/io.c                                    |  4 -
>  tests/qemu-iotests/iotests.py                 | 32 ++++++-
>  .../qemu-iotests/tests/graph-changes-while-io | 91 +++++++++++++++++++
>  .../tests/graph-changes-while-io.out          |  5 +
>  4 files changed, 127 insertions(+), 5 deletions(-)
>  create mode 100755 tests/qemu-iotests/tests/graph-changes-while-io
>  create mode 100644 tests/qemu-iotests/tests/graph-changes-while-io.out
> 
> -- 
> 2.34.1
> 

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: [PATCH v2 1/3] block: Make bdrv_refresh_limits() non-recursive
  2022-02-16 10:53 ` [PATCH v2 1/3] " Hanna Reitz
@ 2022-03-03 16:56   ` Kevin Wolf
  2022-03-04 12:44     ` Hanna Reitz
  0 siblings, 1 reply; 11+ messages in thread
From: Kevin Wolf @ 2022-03-03 16:56 UTC (permalink / raw)
  To: Hanna Reitz; +Cc: Stefan Hajnoczi, Eric Blake, qemu-devel, qemu-block

Am 16.02.2022 um 11:53 hat Hanna Reitz geschrieben:
> bdrv_refresh_limits() recurses down to the node's children.  That does
> not seem necessary: When we refresh limits on some node, and then
> recurse down and were to change one of its children's BlockLimits, then
> that would mean we noticed the changed limits by pure chance.  The fact
> that we refresh the parent's limits has nothing to do with it, so the
> reason for the change probably happened before this point in time, and
> we should have refreshed the limits then.
> 
> On the other hand, we do not have infrastructure for noticing that block
> limits change after they have been initialized for the first time (this
> would require propagating the change upwards to the respective node's
> parents), and so evidently we consider this case impossible.

I like your optimistic approach, but my interpretation would have been
that this is simply a bug. ;-)

blockdev-reopen allows changing options that affect the block limits
(most importantly probably request_alignment), so this should be
propagated to the parents. I think we'll actually not see failures if we
forget to do this, but parents can either advertise excessive alignment
requirements or they may run into RMW when accessing the child, so this
would only affect performance. This is probably why nobody reported it
yet.
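
Rough, self-contained toy sketch of what propagating to the parents
would mean (made-up names, nothing from the QEMU tree):

    class Node:
        def __init__(self):
            self.alignment = 1          # stands in for bl.request_alignment
            self.parents = []

        def merge_child(self, child):
            # a parent has to honour the strictest alignment among its children
            self.alignment = max(self.alignment, child.alignment)

    def propagate_up(node):
        for parent in node.parents:
            parent.merge_child(node)
            propagate_up(parent)

    root, mid, leaf = Node(), Node(), Node()
    mid.parents = [root]
    leaf.parents = [mid]

    leaf.alignment = 4096               # request_alignment changed on reopen
    propagate_up(leaf)
    assert root.alignment == mid.alignment == 4096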

> If this case is impossible, then we will not need to recurse down in
> bdrv_refresh_limits().  Every node's limits are initialized in
> bdrv_open_driver(), and are refreshed whenever its children change.
> We want to use the children's limits to get some initial default, but
> we can just take them; we do not need to refresh them.

I think even if we need to propagate to the parents, we still don't need
to propagate to the children because the children have already been
refreshed by whatever changed their options (like bdrv_reopen_commit()).
And parent limits don't influence the child limits at all.

So this patch looks good to me, just not the reasoning.

Kevin

> The problem with recursing is that bdrv_refresh_limits() is not atomic.
> It begins with zeroing BDS.bl, and only then sets proper, valid limits.
> If we do not drain all nodes whose limits are refreshed, then concurrent
> I/O requests can encounter invalid request_alignment values and crash
> qemu.  Therefore, a recursing bdrv_refresh_limits() requires the whole
> subtree to be drained, which is currently not ensured by most callers.
> 
> A non-recursive bdrv_refresh_limits() only requires the node in question
> to not receive I/O requests, and this is done by most callers in some
> way or another:
> - bdrv_open_driver() deals with a new node with no parents yet
> - bdrv_set_file_or_backing_noperm() acts on a drained node
> - bdrv_reopen_commit() acts only on drained nodes
> - bdrv_append() should in theory require the node to be drained; in
>   practice most callers just lock the AioContext, which should at least
>   be enough to prevent concurrent I/O requests from accessing invalid
>   limits
> 
> So we can resolve the bug by making bdrv_refresh_limits() non-recursive.
> 
> Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1879437
> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> ---
>  block/io.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/block/io.c b/block/io.c
> index 4e4cb556c5..c3e7301613 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -189,10 +189,6 @@ void bdrv_refresh_limits(BlockDriverState *bs, Transaction *tran, Error **errp)
>      QLIST_FOREACH(c, &bs->children, next) {
>          if (c->role & (BDRV_CHILD_DATA | BDRV_CHILD_FILTERED | BDRV_CHILD_COW))
>          {
> -            bdrv_refresh_limits(c->bs, tran, errp);
> -            if (*errp) {
> -                return;
> -            }
>              bdrv_merge_limits(&bs->bl, &c->bs->bl);
>              have_limits = true;
>          }
> -- 
> 2.34.1
> 




* Re: [PATCH v2 1/3] block: Make bdrv_refresh_limits() non-recursive
  2022-03-03 16:56   ` Kevin Wolf
@ 2022-03-04 12:44     ` Hanna Reitz
  2022-03-04 14:14       ` Kevin Wolf
  0 siblings, 1 reply; 11+ messages in thread
From: Hanna Reitz @ 2022-03-04 12:44 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: Stefan Hajnoczi, Eric Blake, qemu-devel, qemu-block

On 03.03.22 17:56, Kevin Wolf wrote:
> Am 16.02.2022 um 11:53 hat Hanna Reitz geschrieben:
>> bdrv_refresh_limits() recurses down to the node's children.  That does
>> not seem necessary: When we refresh limits on some node, and then
>> recurse down and were to change one of its children's BlockLimits, then
>> that would mean we noticed the changed limits by pure chance.  The fact
>> that we refresh the parent's limits has nothing to do with it, so the
>> reason for the change probably happened before this point in time, and
>> we should have refreshed the limits then.
>>
>> On the other hand, we do not have infrastructure for noticing that block
>> limits change after they have been initialized for the first time (this
>> would require propagating the change upwards to the respective node's
>> parents), and so evidently we consider this case impossible.
> I like your optimistic approach, but my interpretation would have been
> that this is simply a bug. ;-)
>
> blockdev-reopen allows changing options that affect the block limits
> (most importantly probably request_alignment), so this should be
> propagated to the parents. I think we'll actually not see failures if we
> forget to do this, but parents can either advertise excessive alignment
> requirements or they may run into RMW when accessing the child, so this
> would only affect performance. This is probably why nobody reported it
> yet.

Ah, right, I forgot this for parents of parents...  I thought the block 
limits of a node might change if its children list changes, and so we 
should bdrv_refresh_limits() when that children list changes, but forgot 
that we really do need to propagate this up, right.

>> If this case is impossible, then we will not need to recurse down in
>> bdrv_refresh_limits().  Every node's limits are initialized in
>> bdrv_open_driver(), and are refreshed whenever its children change.
>> We want to use the children's limits to get some initial default, but
>> we can just take them; we do not need to refresh them.
> I think even if we need to propagate to the parents, we still don't need
> to propagate to the children because the children have already been
> refreshed by whatever changed their options (like bdrv_reopen_commit()).
> And parent limits don't influence the child limits at all.
>
> So this patch looks good to me, just not the reasoning.

OK, so, uh, can we just drop these two paragraphs?  (“On the other 
hand...” and “If this case is impossible…”)

Or we could replace them with a note hinting at the potential bug that 
would need to be fixed, e.g.

“
Consequently, we should actually propagate block limits changes upwards,
not downwards.  That is a separate and pre-existing issue, though, and
so will not be addressed in this patch.
”

Question is, if we at some point do propagate this upwards, won’t this 
cause exactly the same problem that this patch is trying to get around, 
i.e. that we might call bdrv_refresh_limits() on non-drained parent nodes?

Hanna

> Kevin
>
>> The problem with recursing is that bdrv_refresh_limits() is not atomic.
>> It begins with zeroing BDS.bl, and only then sets proper, valid limits.
>> If we do not drain all nodes whose limits are refreshed, then concurrent
>> I/O requests can encounter invalid request_alignment values and crash
>> qemu.  Therefore, a recursing bdrv_refresh_limits() requires the whole
>> subtree to be drained, which is currently not ensured by most callers.
>>
>> A non-recursive bdrv_refresh_limits() only requires the node in question
>> to not receive I/O requests, and this is done by most callers in some
>> way or another:
>> - bdrv_open_driver() deals with a new node with no parents yet
>> - bdrv_set_file_or_backing_noperm() acts on a drained node
>> - bdrv_reopen_commit() acts only on drained nodes
>> - bdrv_append() should in theory require the node to be drained; in
>>    practice most callers just lock the AioContext, which should at least
>>    be enough to prevent concurrent I/O requests from accessing invalid
>>    limits
>>
>> So we can resolve the bug by making bdrv_refresh_limits() non-recursive.
>>
>> Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1879437
>> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
>> Reviewed-by: Eric Blake <eblake@redhat.com>
>> ---
>>   block/io.c | 4 ----
>>   1 file changed, 4 deletions(-)
>>
>> diff --git a/block/io.c b/block/io.c
>> index 4e4cb556c5..c3e7301613 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -189,10 +189,6 @@ void bdrv_refresh_limits(BlockDriverState *bs, Transaction *tran, Error **errp)
>>       QLIST_FOREACH(c, &bs->children, next) {
>>           if (c->role & (BDRV_CHILD_DATA | BDRV_CHILD_FILTERED | BDRV_CHILD_COW))
>>           {
>> -            bdrv_refresh_limits(c->bs, tran, errp);
>> -            if (*errp) {
>> -                return;
>> -            }
>>               bdrv_merge_limits(&bs->bl, &c->bs->bl);
>>               have_limits = true;
>>           }
>> -- 
>> 2.34.1
>>




* Re: [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive
  2022-02-16 10:53 [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Hanna Reitz
                   ` (3 preceding siblings ...)
  2022-02-28 14:42 ` [PATCH v2 0/3] block: Make bdrv_refresh_limits() non-recursive Stefan Hajnoczi
@ 2022-03-04 13:36 ` Kevin Wolf
  4 siblings, 0 replies; 11+ messages in thread
From: Kevin Wolf @ 2022-03-04 13:36 UTC (permalink / raw)
  To: Hanna Reitz; +Cc: Stefan Hajnoczi, Eric Blake, qemu-devel, qemu-block

Am 16.02.2022 um 11:53 hat Hanna Reitz geschrieben:
> Hi,
> 
> v1 with detailed reasoning:
> https://lists.nongnu.org/archive/html/qemu-block/2022-02/msg00508.html
> 
> This series makes bdrv_refresh_limits() non-recursive so that it is
> sufficient for callers to ensure that the node on which they call it
> will not receive concurrent I/O requests (instead of ensuring the same
> for the whole subtree).
> 
> We need to ensure such I/O does not happen because bdrv_refresh_limits()
> is not atomic and will produce intermediate invalid values, which will
> break concurrent I/O requests that read these values.

Thanks, applied to the block branch.

Kevin




* Re: [PATCH v2 1/3] block: Make bdrv_refresh_limits() non-recursive
  2022-03-04 12:44     ` Hanna Reitz
@ 2022-03-04 14:14       ` Kevin Wolf
  2022-03-04 14:59         ` Hanna Reitz
  0 siblings, 1 reply; 11+ messages in thread
From: Kevin Wolf @ 2022-03-04 14:14 UTC (permalink / raw)
  To: Hanna Reitz; +Cc: Stefan Hajnoczi, Eric Blake, qemu-devel, qemu-block

Am 04.03.2022 um 13:44 hat Hanna Reitz geschrieben:
> On 03.03.22 17:56, Kevin Wolf wrote:
> > Am 16.02.2022 um 11:53 hat Hanna Reitz geschrieben:
> > > bdrv_refresh_limits() recurses down to the node's children.  That does
> > > not seem necessary: When we refresh limits on some node, and then
> > > recurse down and were to change one of its children's BlockLimits, then
> > > that would mean we noticed the changed limits by pure chance.  The fact
> > > that we refresh the parent's limits has nothing to do with it, so the
> > > reason for the change probably happened before this point in time, and
> > > we should have refreshed the limits then.
> > > 
> > > On the other hand, we do not have infrastructure for noticing that block
> > > limits change after they have been initialized for the first time (this
> > > would require propagating the change upwards to the respective node's
> > > parents), and so evidently we consider this case impossible.
> > I like your optimistic approach, but my interpretation would have been
> > that this is simply a bug. ;-)
> > 
> > blockdev-reopen allows changing options that affect the block limits
> > (most importantly probably request_alignment), so this should be
> > propagated to the parents. I think we'll actually not see failures if we
> > forget to do this, but parents can either advertise excessive alignment
> > requirements or they may run into RMW when accessing the child, so this
> > would only affect performance. This is probably why nobody reported it
> > yet.
> 
> Ah, right, I forgot this for parents of parents...  I thought the
> block limits of a node might change if its children list changes, and
> so we should bdrv_refresh_limits() when that children list changes,
> but forgot that we really do need to propagate this up, right.

I mean the case that you mention is true as well. A few places do call
bdrv_refresh_limits() after changing the graph, but I don't know if it
covers all cases.

> > > If this case is impossible, then we will not need to recurse down in
> > > bdrv_refresh_limits().  Every node's limits are initialized in
> > > bdrv_open_driver(), and are refreshed whenever its children change.
> > > We want to use the children's limits to get some initial default, but
> > > we can just take them; we do not need to refresh them.
> > I think even if we need to propagate to the parents, we still don't need
> > to propagate to the children because the children have already been
> > refreshed by whatever changed their options (like bdrv_reopen_commit()).
> > And parent limits don't influence the child limits at all.
> > 
> > So this patch looks good to me, just not the reasoning.
> 
> OK, so, uh, can we just drop these two paragraphs?  (“On the other hand...”
> and “If this case is impossible…”)
> 
> Or we could replace them with a note hinting at the potential bug that would
> need to be fixed, e.g.
> 
> “
> Consequently, we should actually propagate block limits changes upwards,
> not downwards.  That is a separate and pre-existing issue, though, and
> so will not be addressed in this patch.
> ”

Ok, I'm replacing this in my tree.

> Question is, if we at some point do propagate this upwards, won’t this cause
> exactly the same problem that this patch is trying to get around, i.e. that
> we might call bdrv_refresh_limits() on non-drained parent nodes?

Drain also propagates upwards, so at least those callers that drain the
node itself won't have the problem. And the other cases from the commit
messages look like they shouldn't have any parents.

Kevin




* Re: [PATCH v2 1/3] block: Make bdrv_refresh_limits() non-recursive
  2022-03-04 14:14       ` Kevin Wolf
@ 2022-03-04 14:59         ` Hanna Reitz
  0 siblings, 0 replies; 11+ messages in thread
From: Hanna Reitz @ 2022-03-04 14:59 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: Stefan Hajnoczi, Eric Blake, qemu-devel, qemu-block

On 04.03.22 15:14, Kevin Wolf wrote:
> Am 04.03.2022 um 13:44 hat Hanna Reitz geschrieben:
>> On 03.03.22 17:56, Kevin Wolf wrote:
>>> Am 16.02.2022 um 11:53 hat Hanna Reitz geschrieben:
>>>> bdrv_refresh_limits() recurses down to the node's children.  That does
>>>> not seem necessary: When we refresh limits on some node, and then
>>>> recurse down and were to change one of its children's BlockLimits, then
>>>> that would mean we noticed the changed limits by pure chance.  The fact
>>>> that we refresh the parent's limits has nothing to do with it, so the
>>>> reason for the change probably happened before this point in time, and
>>>> we should have refreshed the limits then.
>>>>
>>>> On the other hand, we do not have infrastructure for noticing that block
>>>> limits change after they have been initialized for the first time (this
>>>> would require propagating the change upwards to the respective node's
>>>> parents), and so evidently we consider this case impossible.
>>> I like your optimistic approach, but my interpretation would have been
>>> that this is simply a bug. ;-)
>>>
>>> blockdev-reopen allows changing options that affect the block limits
>>> (most importantly probably request_alignment), so this should be
>>> propagated to the parents. I think we'll actually not see failures if we
>>> forget to do this, but parents can either advertise excessive alignment
>>> requirements or they may run into RMW when accessing the child, so this
>>> would only affect performance. This is probably why nobody reported it
>>> yet.
>> Ah, right, I forgot this for parents of parents...  I thought the
>> block limits of a node might change if its children list changes, and
>> so we should bdrv_refresh_limits() when that children list changes,
>> but forgot that we really do need to propagate this up, right.
> I mean the case that you mention is true as well. A few places do call
> bdrv_refresh_limits() after changing the graph, but I don't know if it
> covers all cases.
>
>>>> If this case is impossible, then we will not need to recurse down in
>>>> bdrv_refresh_limits().  Every node's limits are initialized in
>>>> bdrv_open_driver(), and are refreshed whenever its children change.
>>>> We want to use the children's limits to get some initial default, but
>>>> we can just take them; we do not need to refresh them.
>>> I think even if we need to propagate to the parents, we still don't need
>>> to propagate to the children because the children have already been
>>> refreshed by whatever changed their options (like bdrv_reopen_commit()).
>>> And parent limits don't influence the child limits at all.
>>>
>>> So this patch looks good to me, just not the reasoning.
>> OK, so, uh, can we just drop these two paragraphs?  (“On the other hand...”
>> and “If this case is impossible…”)
>>
>> Or we could replace them with a note hinting at the potential bug that would
>> need to be fixed, e.g.
>>
>> “
>> Consequently, we should actually propagate block limits changes upwards,
>> not downwards.  That is a separate and pre-existing issue, though, and
>> so will not be addressed in this patch.
>> ”
> Ok, I'm replacing this in my tree.
>
>> Question is, if we at some point do propagate this upwards, won’t this cause
>> exactly the same problem that this patch is trying to get around, i.e. that
>> we might call bdrv_refresh_limits() on non-drained parent nodes?
> Drain also propagates upwards, so at least those callers that drain the
> node itself won't have the problem. And the other cases from the commit
> messages look like they shouldn't have any parents.

Finally some good news today :)



