From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: vit9696 <vit9696@protonmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Reiter <s.reiter@proxmox.com>,
	qemu devel list <qemu-devel@nongnu.org>,
	Igor Mammedov <imammedo@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Laszlo Ersek <lersek@redhat.com>,
	Richard Henderson <rth@twiddle.net>
Subject: Re: [PATCH 1/2] i386/acpi: fix inconsistent QEMU/OVMF device paths
Date: Sun, 28 Feb 2021 11:43:55 +0100
Message-ID: <e36b3025-d709-1a6c-6b5a-a08bc6de819f@proxmox.com>
In-Reply-To: <1D984D1D-0DFB-49FC-BD7F-1AAF9B862A5D@protonmail.com>

Hi Vitaly,

On 28.02.21 10:11, vit9696 wrote:
> For us this breaks the ability to control the boot options consistently between the operating system and OVMF. It happens because the operating system builds the device paths based on ACPI (in fact the only source available to it), while OVMF uses another source. The previous behaviour also violates the specification, so I do not believe there is room for reverting it. I believe it is also not possible to update QEMU to internally keep the _UID of 1, since that may conflict with the case where there are multiple PCI buses.

I think you may have misunderstood me a little bit: I did not ask for this to
be reverted in upstream QEMU. It's quite clear to me that this should be the
new default behaviour, and should have been from the start.
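
(For onlookers, the mismatch in a nutshell as a toy Python sketch - not
QEMU code, and the concrete paths are illustrative only:)

  # The OS derives the UEFI device path of the PCI root bridge from the
  # ACPI-reported _UID, while OVMF records PciRoot(0x0) in its boot
  # options. With the old _UID of 1 the two sides disagreed.

  def os_path_from_acpi(uid):
      """Device path an OS constructs from the ACPI-reported _UID."""
      return "PciRoot(0x%x)/Pci(0x1,0x0)" % uid

  FIRMWARE_PATH = "PciRoot(0x0)/Pci(0x1,0x0)"  # as stored by OVMF

  print(os_path_from_acpi(1) == FIRMWARE_PATH)  # False: pre-patch mismatch
  print(os_path_from_acpi(0) == FIRMWARE_PATH)  # True: consistent after fix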

Still, I must ask: what makes macOS so special that it cannot cope with
things that Windows and Linux guests handle just fine?

I mainly asked about other drawbacks of such a revert, as it is currently the
straightforward stop-gap solution for us as a downstream. What we will
probably do is keep the new standard behavior as the default and add a switch
to revert to the old one - our QEMU integration library in Proxmox VE can
then set that switch for old VMs and use the new standard for new ones at VM
start, so we stay backward compatible. As only Windows VMs seem to be
affected, we could even do this only for those (we have an OS type config
property from which we can derive this); roughly as in the sketch below.
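
What I have in mind for the config side, as a rough Python sketch; note that
the "x-pci-root-uid=1" machine option used here is hypothetical - it stands
in for exactly the switch that does not exist yet:

  # Per-VM selection of the ACPI _UID behavior in a management layer.
  # The "-machine x-pci-root-uid=1" option is made up for illustration;
  # QEMU currently offers no such knob.

  def extra_qemu_args(vm_conf):
      """Extra QEMU arguments derived from a VM's stored configuration."""
      created_pre_5_2 = vm_conf.get("machine_version", "5.2") < "5.2"
      is_windows = vm_conf.get("ostype", "").startswith("win")
      # Pin the old, spec-violating _UID of 1 only for pre-existing
      # Windows VMs, so their guest-visible device paths stay stable.
      if created_pre_5_2 and is_windows:
          return ["-machine", "x-pci-root-uid=1"]  # hypothetical option
      return []

  print(extra_qemu_args({"machine_version": "5.1", "ostype": "win10"}))
  print(extra_qemu_args({"ostype": "l26"}))  # Linux VM -> no extra args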

>
> In my opinion, the most logical workaround is to provide in-guest steps to update VM configuration to account for this.

Often the hypervisor admin and the guest admin are not the same person, so
this is only a small band-aid, and for most it helps only after the fact.

We also make clustering quite easy to set up, so for lots of users such
affected VMs will seemingly break on migration to an updated node - an
unacceptable situation to expose our users to. And honestly, I have a hard
time seeing my colleagues and me wanting to spend our nerves directing
hundreds of reports to the documented solution (some will certainly find it
on their own, but whatever one does, many won't), dealing with the relatable
fit they'll throw, and holding back from telling them to just use Linux
instead ;-)

I expect other integrators will get some reports too, and FWICT there is no
way for a user to revert to the old behavior from the outside.
Note that QEMU 5.2 is not yet shipped by some major distributions: Debian
will ship it with Bullseye, whose release is still months away; the latest
Fedora (33) ships QEMU 5.1, so RHEL/CentOS are probably on something even
older; and Ubuntu will only add it in 21.04, also two months away.

Currently, QEMU 5.2, which introduces this change, has only been released in
faster-moving distributions, where Windows VMs are more often used for
non-server workloads (educated guess), which in turn correlates with a higher
probability of DHCP rather than static address assignment (again, an educated
guess) - and a lost static network configuration is the most obvious and
noticeable breakage we and our users saw.

Which brings me again to my other point: there may be lots of other things
breaking in more subtle ways. We do not know, but we can tell there is a lot
of device reshuffling going on when checking the Windows Device Manager, so I
cannot imagine that the loss of network configuration is the only thing that
breaks.

So why all this fuss and wall of text? Because I think this will affect lots
of users, most of them on distros which will only ship the problematic QEMU
version later this year. How many will be affected: no idea, but we already
got quite a few reports (compared to the usual volume for small breakage)
from rolling this QEMU version out only *partially*, to just some parts of
our user base.

That's why I personally think it may be worth considering a switch in QEMU
itself to keep the backwards-compatible, albeit standard-incompatible,
behavior available - either as opt-in, or as opt-out from the new
standard-conforming behavior. And while I thought opt-out was the way to go
when starting this message, I now rather think opt-in to the new behavior is,
at least if upsetting users with Windows + static IPs is thought to be worth
avoiding. As said, if there's a quorum against this, we can live fine with
keeping such a switch as a downstream patch, but I'd like to avoid that, and
I certainly won't rush forward shipping it; I'll wait until next week, maybe
there are some other opinions or better ideas.

cheers,
Thomas



