* Best practices to handle shared objects through qemu upgrades?
@ 2019-11-01  7:14 Christian Ehrhardt
  2019-11-01  9:34 ` Daniel P. Berrangé
  0 siblings, 1 reply; 8+ messages in thread
From: Christian Ehrhardt @ 2019-11-01  7:14 UTC (permalink / raw)
  To: qemu-devel, Paolo Bonzini

Hi everyone,
we recently got a bug report - on handling qemu .so's through
upgrades - that got me wondering how best to handle it.
After checking with Paolo yesterday that there is no obvious solution
that I missed, we agreed this should be brought up on the list for
wider discussion.
Maybe there already is a good best practice out there; if there isn't,
we might want to agree upon one going forward.
Let me outline the case and the ideas brought up so far.

Case
- You have a qemu process representing a guest
- Due to other constraints, e.g. device passthrough (PT), you can't live
migrate (which would be preferred)
- You haven't used a specific shared object yet - let's say the RBD storage
driver as an example
- Qemu gets an update, packaging replaces the .so files on disk
- The Qemu process and the .so files on disk now have a mismatch in $buildid
- If you hotplug an RBD device it will fail to load the (now new) .so

For almost any service other than "qemu representing a VM" the answer
is "restart it"; some services even re-exec in place to keep things up and
running.

Ideas so far:
a) Modules are checked by build-id, so keep them in a per build-id dir on disk
  - qemu could be made to prefer looking in a -$buildid dir first
  - do not remove the packages with .so's on upgrades
  - needs a not-too-complex way to detect which buildids running qemu processes
    have, so packaging is able to "autoclean later" (see the sketch after this list)
  - needs some dependency juggling for distro packaging, but IMHO it can be made
    to work if the above simple "probe the buildid of a running qemu" existed

b) Preload the modules before upgrade
  - One could load the .so files before the upgrade
  - The open file reference will keep the content around even with the
on-disk file gone
  - lacking a 'load-module' command, that would require fake hotplugs,
which seems wrong
  - Requires additional upgrade pre-planning
  - kills most benefits of modular code by loading modules without an actual
need for them

c) go back to non modular build
  - No :-)

d) anything else out there?
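
For (a), a rough sketch of what "probing the buildid of running qemu
processes" could look like from the packaging side - just an illustration,
the process matching and tool choice are assumptions, not an existing
helper:

#!/bin/sh
# list the build-ids of all running qemu binaries so packaging can decide
# which per-buildid module directories are still needed
for pid in $(pgrep -f qemu-system); do
    # /proc/$pid/exe still resolves to the image the process was started
    # from, even after the file on disk has been replaced
    readelf -n "/proc/$pid/exe" 2>/dev/null | awk '/Build ID/ {print $NF}'
done | sort -u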

-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd



* Re: Best practices to handle shared objects through qemu upgrades?
  2019-11-01  7:14 Best practices to handle shared objects through qemu upgrades? Christian Ehrhardt
@ 2019-11-01  9:34 ` Daniel P. Berrangé
  2019-11-01  9:55   ` Christian Ehrhardt
  2020-03-04  9:37   ` Christian Ehrhardt
  0 siblings, 2 replies; 8+ messages in thread
From: Daniel P. Berrangé @ 2019-11-01  9:34 UTC (permalink / raw)
  To: Christian Ehrhardt; +Cc: Paolo Bonzini, qemu-devel

On Fri, Nov 01, 2019 at 08:14:08AM +0100, Christian Ehrhardt wrote:
> Hi everyone,
> we've got a bug report recently - on handling qemu .so's through
> upgrades - that got me wondering how to best handle it.
> After checking with Paolo yesterday that there is no obvious solution
> that I missed we agreed this should be brought up on the list for
> wider discussion.
> Maybe there already is a good best practise out there, or if it
> doesn't exist we might want to agree upon one going forward.
> Let me outline the case and the ideas brought up so far.
> 
> Case
> - You have qemu representing a Guest
> - Due to other constraints e.g. PT you can't live migrate (which would
> be preferred)
> - You haven't used a specific shared object yet - lets say RBD storage
> driver as example
> - Qemu gets an update, packaging replaces the .so files on disk
> - The Qemu process and the .so files on disk now have a mismatch in $buildid
> - If you hotplug an RBD device it will fail to load the (now new) .so

What happens when it fails to load ?  Does the user get a graceful
error message or does QEMU abort ? I'd hope the former.

> 
> On almost any other service than "qemu representing a VM" the answer
> is "restart it", some even re-exec in place to keep things up and
> running.
> 
> Ideas so far:
> a) Modules are checked by build-id, so keep them in a per build-id dir on disk
>   - qemu could be made looking preferred in -$buildid dir first
>   - do not remove the packages with .so's on upgrades
>   - needs a not-too-complex way to detect which buildids running qemu processes
>     have for packaging to be able to "autoclean later"
>   - Needs some dependency juggling for Distro packaging but IMHO can be made
>     to work if above simple "probing buildid of running qemu" would exist

So this needs a bunch of special QEMU hacks in package mgmt tools
to prevent the package upgrade & cleanup later. This does not look
like a viable strategy to me.

> 
> b) Preload the modules before upgrade
>   - One could load the .so files before upgrade
>   - The open file reference will keep the content around even with the
> on disk file gone
>   - lacking a 'load-module' command that would require fake hotplugs
> which seems wrong
>   - Required additional upgrade pre-planning
>   - kills most benefits of modular code without an actual need for it
> being loaded

Well, there are two benefits to the modular approach:

 - Allow a single build to be selectively installed on a host or container
   image, such that the install disk footprint is reduced
 - Allow a faster startup such that huge RBD libraries don't slow down
   startup of VMs not using RBD disks.

Preloading the modules before the upgrade doesn't have to negate the second
benefit. We just have to make sure the preloading doesn't impact the VM
startup performance.

IOW, register a SIGUSR2 handler which preloads all modules it finds on
disk. Have a pre-uninstall option on the .so package that sends SIGUSR2
to all QEMU processes. The challenge of course is that signals are
async. You might suggest a QMP command, but only 1 process can have the
QMP monitor open at any time and that's libvirt. Adding a second QMP
monitor instance is possible but kind of gross for this purpose.
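
To make the packaging side of that concrete, the pre-uninstall hook would
essentially just be the following sketch (qemu has no such SIGUSR2 preload
handler today, and the sleep merely papers over the asynchronicity):

#!/bin/sh
# hypothetical pre-uninstall hook of the modules package: ask every running
# qemu to load whatever is still on disk before the files get replaced
pkill -USR2 -x qemu-system-x86_64 || true
# signals are asynchronous; give the processes a moment to finish loading
sleep 2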

Another option would be to pre-load the modules during startup, but
do it asynchronously, so that it's not blocking overall VM startup.
E.g. just before starting the mainloop, spawn a background thread to
load all remaining modules.

This will potentially degrade performance of the guest CPUs a bit,
but avoids the latency spike from being synchronous in the startup
path.


> c) go back to non modular build
>   - No :-)
> 
> d) anything else out there?

e) Don't do upgrades on a host with running VMs :-)

   Upgrades can break the running VM even ignoring this particular
   QEMU module scenario. 

f) Simply document that if you upgrade with running VMs, some
   features like hotplug of RBD will become unavailable. Users can
   then avoid upgrades if that matters to them.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: Best practices to handle shared objects through qemu upgrades?
  2019-11-01  9:34 ` Daniel P. Berrangé
@ 2019-11-01  9:55   ` Christian Ehrhardt
  2019-11-01 17:09     ` Daniel P. Berrangé
  2020-03-04  9:37   ` Christian Ehrhardt
  1 sibling, 1 reply; 8+ messages in thread
From: Christian Ehrhardt @ 2019-11-01  9:55 UTC (permalink / raw)
  To: Daniel P. Berrangé; +Cc: Paolo Bonzini, qemu-devel

On Fri, Nov 1, 2019 at 10:34 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Fri, Nov 01, 2019 at 08:14:08AM +0100, Christian Ehrhardt wrote:
> > Hi everyone,
> > we've got a bug report recently - on handling qemu .so's through
> > upgrades - that got me wondering how to best handle it.
> > After checking with Paolo yesterday that there is no obvious solution
> > that I missed we agreed this should be brought up on the list for
> > wider discussion.
> > Maybe there already is a good best practise out there, or if it
> > doesn't exist we might want to agree upon one going forward.
> > Let me outline the case and the ideas brought up so far.
> >
> > Case
> > - You have qemu representing a Guest
> > - Due to other constraints e.g. PT you can't live migrate (which would
> > be preferred)
> > - You haven't used a specific shared object yet - lets say RBD storage
> > driver as example
> > - Qemu gets an update, packaging replaces the .so files on disk
> > - The Qemu process and the .so files on disk now have a mismatch in $buildid
> > - If you hotplug an RBD device it will fail to load the (now new) .so
>
> What happens when it fails to load ?  Does the user get a graceful
> error message or does QEMU abort ? I'd hope the former.
>

It is fortunately a graceful error message; here is an example.
The reported issue happens on attach:

root@b:~# virsh attach-device lateload cdrom-curl.xml
error: Failed to attach device from cdrom-curl.xml
error: internal error: unable to execute QEMU command 'device_add':
Property 'virtio-blk-device.drive' can't find value
'drive-virtio-disk2'

In the qemu output log we can see:
Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so
Note: only modules from the same build can be loaded.



> >
> > On almost any other service than "qemu representing a VM" the answer
> > is "restart it", some even re-exec in place to keep things up and
> > running.
> >
> > Ideas so far:
> > a) Modules are checked by build-id, so keep them in a per build-id dir on disk
> >   - qemu could be made looking preferred in -$buildid dir first
> >   - do not remove the packages with .so's on upgrades
> >   - needs a not-too-complex way to detect which buildids running qemu processes
> >     have for packaging to be able to "autoclean later"
> >   - Needs some dependency juggling for Distro packaging but IMHO can be made
> >     to work if above simple "probing buildid of running qemu" would exist
>
> So this needs a bunch of special QEMU hacks in package mgmt tools
> to prevent the package upgrade & cleanup later. This does not look
> like a viable strategy to me.
>
> >
> > b) Preload the modules before upgrade
> >   - One could load the .so files before upgrade
> >   - The open file reference will keep the content around even with the
> > on disk file gone
> >   - lacking a 'load-module' command that would require fake hotplugs
> > which seems wrong
> >   - Required additional upgrade pre-planning
> >   - kills most benefits of modular code without an actual need for it
> > being loaded
>
> Well there's two benefits to modular approach
>
>  - Allow a single build to be selectively installed on a host or container
>    image, such that the install disk footprint is reduced
>  - Allow a faster startup such that huge RBD libraries dont slow down
>    startup of VMs not using RBD disks.
>
> Preloading the modules before upgrade doesn't have to the second benefit.
> We just have to make sure the pre loading doesn't impact the VM startup
> performance.

I hadn't looked at it that way yet and somewhat neglected former
suggestions of such a command.
I thought there might be concerns about the "amount of loaded code", but
it shouldn't be "active" unless we really have a device of that kind,
right?
You are right, it seems we won't "lose much" by loading all of them late.

> IOW, register a SIGUSR2 handler which preloads all modules it finds on
> disk. Have a pre-uninstall option on the .so package that sends SIGUSR2
> to all QEMU processes. The challenge of course is that signals are
> async.

If there were something as simple as a log line, people could check
that to ensure the async loading is done.
Not perfectly synchronous, but maybe useful if a new QMP command is
considered too heavy.

> You might suggest a QMP command, but only 1 process can have the
> QMP monitor open at any time and that's libvirt. Adding a second QMP
> monitor instance is possible but kind of gross for this purpose.

This (hopefully) already is a corner case.
I think admins would be ok with `virsh qemu-monitor-command` or such.
No need for a second QMP monitor IMHO.

> Another option would be to pre-load the modules during startup, but
> do it asynchronously, so that its not blocking overall VM startup.
> eg just before starting the mainloop, spawn a background thread to
> load all remaining modules.
>
> This will potentially degrade performance of the guest CPUs a bit,
> but avoids the latency spike from being synchronous in the startup
> path.

As above, I think it could be ok to load them even later than the main
loop start, as long as there is e.g. a reliable log entry people can check.
But this comes close to a QMP "are things loaded" command, and in that
case a synchronous "load-all-you-find" command seems to make more
sense.
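
Just to illustrate - if such a synchronous command existed (the name below
is made up), an admin could trigger it via libvirt without a second QMP
monitor, e.g.:

  virsh qemu-monitor-command lateload '{"execute": "x-load-all-modules"}'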

>
> > c) go back to non modular build
> >   - No :-)
> >
> > d) anything else out there?
>
> e) Don't do upgrades on a host with running VMs :-)
>
>    Upgrades can break the running VM even ignoring this particular
>    QEMU module scenario.

Which is true and I think clear in general - I'd even assume it is
general guidance for almost all admins.
But we all know that things never end up so perfect, which is why I
started directly with the example case of a guest that can neither
migrate away nor be restarted.

> f) Simply document that if you upgrade with running VMs that some
>    features like hotplug of RBD will become unavailable. Users can
>    then avoid upgrades if that matters to them.

That is similar to your approach above.
It is absolutely valid and a good best-practice policy, but the
question is how we could help out people that are locked in and still
want to avoid that.

Your suggestion of a sync "load-all-modules" command could be a way
out for people where such policies are not an option; let's see what
other people think of it.

> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>


-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd



* Re: Best practices to handle shared objects through qemu upgrades?
  2019-11-01  9:55   ` Christian Ehrhardt
@ 2019-11-01 17:09     ` Daniel P. Berrangé
  0 siblings, 0 replies; 8+ messages in thread
From: Daniel P. Berrangé @ 2019-11-01 17:09 UTC (permalink / raw)
  To: Christian Ehrhardt; +Cc: Paolo Bonzini, qemu-devel

On Fri, Nov 01, 2019 at 10:55:29AM +0100, Christian Ehrhardt wrote:
> On Fri, Nov 1, 2019 at 10:34 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> >
> > On Fri, Nov 01, 2019 at 08:14:08AM +0100, Christian Ehrhardt wrote:
> > > Hi everyone,
> > > we've got a bug report recently - on handling qemu .so's through
> > > upgrades - that got me wondering how to best handle it.
> > > After checking with Paolo yesterday that there is no obvious solution
> > > that I missed we agreed this should be brought up on the list for
> > > wider discussion.
> > > Maybe there already is a good best practise out there, or if it
> > > doesn't exist we might want to agree upon one going forward.
> > > Let me outline the case and the ideas brought up so far.
> > >
> > > Case
> > > - You have qemu representing a Guest
> > > - Due to other constraints e.g. PT you can't live migrate (which would
> > > be preferred)
> > > - You haven't used a specific shared object yet - lets say RBD storage
> > > driver as example
> > > - Qemu gets an update, packaging replaces the .so files on disk
> > > - The Qemu process and the .so files on disk now have a mismatch in $buildid
> > > - If you hotplug an RBD device it will fail to load the (now new) .so
> >
> > What happens when it fails to load ?  Does the user get a graceful
> > error message or does QEMU abort ? I'd hope the former.
> >
> 
> It is fortunately a graceful error message, here an example:
> 
> $ virsh attach-device lateload curldisk.xml
> Reported issue happens on attach:
> root@b:~# virsh attach-device lateload cdrom-curl.xml
> error: Failed to attach device from cdrom-curl.xml
> error: internal error: unable to execute QEMU command 'device_add':
> Property 'virtio-blk-device.drive' can't find value
> 'drive-virtio-disk2'

Ok, that's graceful, but horrifically useless as an error message :-)

I'd like to think there would be a way to do better.

It looks like the 'drive-add' (or whatever we run to add the
backend) is failing, and then we blindly run device_add anyway.

This means either there's some error message printed that we
are missing, or QEMU is not reporting it back to the monitor.
Either way, I think this can be improved so that libvirt can
directly report the message you found hidden in the log:

> 
> In the qemu output log we can see:
> Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so
> Note: only modules from the same build can be loaded.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: Best practices to handle shared objects through qemu upgrades?
  2019-11-01  9:34 ` Daniel P. Berrangé
  2019-11-01  9:55   ` Christian Ehrhardt
@ 2020-03-04  9:37   ` Christian Ehrhardt
  2020-03-04  9:39     ` [PATCH] modules: load modules from versioned /var/run dir Christian Ehrhardt
  1 sibling, 1 reply; 8+ messages in thread
From: Christian Ehrhardt @ 2020-03-04  9:37 UTC (permalink / raw)
  To: Daniel P. Berrangé; +Cc: Paolo Bonzini, qemu-devel


On Fri, Nov 1, 2019 at 10:34 AM Daniel P. Berrangé <berrange@redhat.com>
wrote:

> On Fri, Nov 01, 2019 at 08:14:08AM +0100, Christian Ehrhardt wrote:
> > Hi everyone,
> > we've got a bug report recently - on handling qemu .so's through
> > upgrades - that got me wondering how to best handle it.
> > After checking with Paolo yesterday that there is no obvious solution
> > that I missed we agreed this should be brought up on the list for
> > wider discussion.
> > Maybe there already is a good best practise out there, or if it
> > doesn't exist we might want to agree upon one going forward.
> > Let me outline the case and the ideas brought up so far.
> >
> > Case
> > - You have qemu representing a Guest
> > - Due to other constraints e.g. PT you can't live migrate (which would
> > be preferred)
> > - You haven't used a specific shared object yet - lets say RBD storage
> > driver as example
> > - Qemu gets an update, packaging replaces the .so files on disk
> > - The Qemu process and the .so files on disk now have a mismatch in
> $buildid
> > - If you hotplug an RBD device it will fail to load the (now new) .so
>
> What happens when it fails to load ?  Does the user get a graceful
> error message or does QEMU abort ? I'd hope the former.
>
> >
> > On almost any other service than "qemu representing a VM" the answer
> > is "restart it", some even re-exec in place to keep things up and
> > running.
> >
> > Ideas so far:
> > a) Modules are checked by build-id, so keep them in a per build-id dir
> on disk
> >   - qemu could be made looking preferred in -$buildid dir first
> >   - do not remove the packages with .so's on upgrades
> >   - needs a not-too-complex way to detect which buildids running qemu
> processes
> >     have for packaging to be able to "autoclean later"
> >   - Needs some dependency juggling for Distro packaging but IMHO can be
> made
> >     to work if above simple "probing buildid of running qemu" would exist
>
> So this needs a bunch of special QEMU hacks in package mgmt tools
> to prevent the package upgrade & cleanup later. This does not look
> like a viable strategy to me.
>
> >
> > b) Preload the modules before upgrade
> >   - One could load the .so files before upgrade
> >   - The open file reference will keep the content around even with the
> > on disk file gone
> >   - lacking a 'load-module' command that would require fake hotplugs
> > which seems wrong
> >   - Required additional upgrade pre-planning
> >   - kills most benefits of modular code without an actual need for it
> > being loaded
>
> Well there's two benefits to modular approach
>
>  - Allow a single build to be selectively installed on a host or container
>    image, such that the install disk footprint is reduced
>  - Allow a faster startup such that huge RBD libraries dont slow down
>    startup of VMs not using RBD disks.
>
> Preloading the modules before upgrade doesn't have to the second benefit.
> We just have to make sure the pre loading doesn't impact the VM startup
> performance.
>
> IOW, register a SIGUSR2 handler which preloads all modules it finds on
> disk. Have a pre-uninstall option on the .so package that sends SIGUSR2
> to all QEMU processes. The challenge of course is that signals are
> async. You might suggest a QMP command, but only 1 process can have the
> QMP monitor open at any time and that's libvirt. Adding a second QMP
> monitor instance is possible but kind of gross for this purpose.
>
> Another option would be to pre-load the modules during startup, but
> do it asynchronously, so that its not blocking overall VM startup.
> eg just before starting the mainloop, spawn a background thread to
> load all remaining modules.
>
> This will potentially degrade performance of the guest CPUs a bit,
> but avoids the latency spike from being synchronous in the startup
> path.
>
>
> > c) go back to non modular build
> >   - No :-)
> >
> > d) anything else out there?
>
> e) Don't do upgrades on a host with running VMs :-)
>
>    Upgrades can break the running VM even ignoring this particular
>    QEMU module scenario.
>
> f) Simply document that if you upgrade with running VMs that some
>    features like hotplug of RBD will become unavialable. Users can
>    then avoid upgrades if that matters to them.
>

Hi,
I've come back to this after a while and now think all the pre-load or
load-command ideas we had were in vain.
They would be overly complex and need a lot of integration into different
places to trigger them.
None of that would be well integrated with the actual trigger of the issue,
which usually is a package upgrade.

But qemu already tries to load modules from different places, and with a
slight extension there I think we could provide something that packaging
(the actual place that knows about upgrades) can use to avoid this issue.

I'll reply to this thread with a patch for your consideration in a few
minutes.

There is already an Ubuntu 20.04 test build with the qemu and packaging
changes in [1].
The related Debian/Ubuntu packaging changes themselves can be seen in [2].
I hope that helps to illustrate how it would work overall.

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/3961
[2]:
https://git.launchpad.net/~paelzer/ubuntu/+source/qemu/log/?h=bug-1847361-miss-old-so-on-upgrade-UBUNTU
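
To sketch how the pieces fit together (illustrative only - the real
maintainer-script changes are what [2] shows): before the new package
replaces the files, the modules of the old build get copied into the
versioned /var/run path that the patched qemu will search as a fallback.

#!/bin/sh
# illustrative pre-upgrade hook: preserve the modules of the build that is
# about to be replaced until the next reboot
set -e
oldver="$1"    # version of the build whose modules we preserve
               # (how it is obtained is packaging-specific)
# mirror qemu's canonicalization of its version string: every character
# outside A-Za-z0-9 + - . ~ becomes '_'
dir="/var/run/qemu/$(printf '%s' "$oldver" | tr -c 'A-Za-z0-9.+~-' '_')"
mkdir -p "$dir"
cp -a /usr/lib/x86_64-linux-gnu/qemu/*.so "$dir/"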



> Regards,
> Daniel
> --
> |: https://berrange.com      -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-
> https://www.instagram.com/dberrange :|
>
>

-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd



* [PATCH] modules: load modules from versioned /var/run dir
  2020-03-04  9:37   ` Christian Ehrhardt
@ 2020-03-04  9:39     ` Christian Ehrhardt
  2020-03-06 10:54       ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Christian Ehrhardt @ 2020-03-04  9:39 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Daniel P . Berrangé, Christian Ehrhardt

On upgrades the old .so files usually are replaced. But on the other
hand, since a qemu process represents a guest instance, it is usually kept
around.

That makes late addition of dynamic features, e.g. 'hot-attach of a ceph
disk', fail by trying to load a new version of e.g. block-rbd.so into an
old, still running qemu binary.

This adds a fallback to also load modules from a versioned directory in the
temporary /var/run path. That way qemu provides a way for packaging
to store the modules of an upgraded qemu package as needed until the next
reboot.

An example of how that can then be used in packaging can be seen in:
https://git.launchpad.net/~paelzer/ubuntu/+source/qemu/log/?h=bug-1847361-miss-old-so-on-upgrade-UBUNTU

Fixes: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
---
 util/module.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/util/module.c b/util/module.c
index 236a7bb52a..d2446104be 100644
--- a/util/module.c
+++ b/util/module.c
@@ -19,6 +19,7 @@
 #endif
 #include "qemu/queue.h"
 #include "qemu/module.h"
+#include "qemu-version.h"
 
 typedef struct ModuleEntry
 {
@@ -170,6 +171,7 @@ bool module_load_one(const char *prefix, const char *lib_name)
 #ifdef CONFIG_MODULES
     char *fname = NULL;
     char *exec_dir;
+    char *version_dir;
     const char *search_dir;
     char *dirs[4];
     char *module_name;
@@ -201,6 +203,11 @@ bool module_load_one(const char *prefix, const char *lib_name)
     dirs[n_dirs++] = g_strdup_printf("%s", CONFIG_QEMU_MODDIR);
     dirs[n_dirs++] = g_strdup_printf("%s/..", exec_dir ? : "");
     dirs[n_dirs++] = g_strdup_printf("%s", exec_dir ? : "");
+    version_dir = g_strcanon(g_strdup(QEMU_PKGVERSION),
+                             G_CSET_A_2_Z G_CSET_a_2_z G_CSET_DIGITS "+-.~",
+                             '_');
+    dirs[n_dirs++] = g_strdup_printf("/var/run/qemu/%s", version_dir);
+
     assert(n_dirs <= ARRAY_SIZE(dirs));
 
     g_free(exec_dir);
-- 
2.25.1




* Re: [PATCH] modules: load modules from versioned /var/run dir
  2020-03-04  9:39     ` [PATCH] modules: load modules from versioned /var/run dir Christian Ehrhardt
@ 2020-03-06 10:54       ` Stefan Hajnoczi
  2020-03-06 13:27         ` Christian Ehrhardt
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2020-03-06 10:54 UTC (permalink / raw)
  To: Christian Ehrhardt; +Cc: Paolo Bonzini, Daniel P . Berrangé, qemu-devel


On Wed, Mar 04, 2020 at 10:39:46AM +0100, Christian Ehrhardt wrote:

Please start a new email thread.  Patches sent as replies to existing
email threads are easily missed by humans, and tooling doesn't
recognize them either.



* Re: [PATCH] modules: load modules from versioned /var/run dir
  2020-03-06 10:54       ` Stefan Hajnoczi
@ 2020-03-06 13:27         ` Christian Ehrhardt
  0 siblings, 0 replies; 8+ messages in thread
From: Christian Ehrhardt @ 2020-03-06 13:27 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Paolo Bonzini, Daniel P . Berrangé, qemu-devel


On Fri, Mar 6, 2020 at 11:54 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Wed, Mar 04, 2020 at 10:39:46AM +0100, Christian Ehrhardt wrote:
>
> Please start a new email thread.  Patches sent as replies to existing
> email threads are easily missed by humans and tooling also doesn't
> recognize them.
>

Sure - thanks, Stefan, for the hint about how that will be processed and
looked at by maintainers and reviewers.

-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd


