On Wed, Mar 31, 2021 at 04:47:14PM -0300, Daniel Henrique Barboza wrote:
> 
> 
> On 3/30/21 8:46 PM, David Gibson wrote:
> > On Tue, Mar 30, 2021 at 01:28:31AM +0200, Igor Mammedov wrote:
> > > On Wed, 24 Mar 2021 16:09:59 -0300
> > > Daniel Henrique Barboza wrote:
> > > 
> > > > On 3/23/21 10:40 PM, David Gibson wrote:
> > > > > On Tue, Mar 23, 2021 at 02:10:22PM -0300, Daniel Henrique Barboza wrote:
> > > > > > 
> > > > > > 
> > > > > > On 3/22/21 10:12 PM, David Gibson wrote:
> > > > > > > On Fri, Mar 12, 2021 at 05:07:36PM -0300, Daniel Henrique Barboza wrote:
> > > > > > > > Hi,
> > > > > > > > 
> > > > > > > > This series adds 2 new QAPI events, DEVICE_NOT_DELETED and
> > > > > > > > DEVICE_UNPLUG_ERROR. They were (and are still being) discussed in [1].
> > > > > > > > 
> > > > > > > > Patches 1 and 3 are independent of the ppc patches and can be applied
> > > > > > > > separately. Patches 2 and 4 are based on David's ppc-for-6.0 branch and
> > > > > > > > are dependent on the QAPI patches.
> > > > > > > 
> > > > > > > Implementation looks fine, but I think there's a bit more to discuss
> > > > > > > before we can apply.
> > > > > > > 
> > > > > > > I think it would make sense to re-order this and put UNPLUG_ERROR
> > > > > > > first. Its semantics are clearer, and I think there's a stronger case
> > > > > > > for it.
> > > > > > 
> > > > > > Alright
> > > > > > 
> > > > > > > I'm a bit less sold on DEVICE_NOT_DELETED, after consideration. Does
> > > > > > > it really tell the user/management anything useful beyond what
> > > > > > > receiving neither a DEVICE_DELETED nor a DEVICE_UNPLUG_ERROR does?
> > > > > > 
> > > > > > It informs that the hot unplug operation exceeded the timeout that QEMU
> > > > > > internals consider adequate, but QEMU can't tell whether that was caused
> > > > > > by an error or just an unexpected delay. The end result is that the
> > > > > > device is not going to be deleted from QMP, hence DEVICE_NOT_DELETED.
> > > > > 
> > > > > Is it, though? I mean, it is with this implementation for papr:
> > > > > because we clear the unplug_requested flag, even if the guest later
> > > > > tries to complete the unplug, it will fail.
> > > > > 
> > > > > But if I understand what Markus was saying correctly, that might not
> > > > > be possible for all hotplug systems. I believe Markus was suggesting
> > > > > that DEVICE_NOT_DELETED could just mean that we haven't deleted the
> > > > > device yet, but it could still happen later.
> > > > > 
> > > > > And in that case, I'm not yet sold on the value of a message that
> > > > > essentially just means "Ayup, still dunno what's happening, sorry".
> > > > > 
> > > > > > Perhaps we should just be straightforward and create a
> > > > > > DEVICE_UNPLUG_TIMEOUT event.
> > > > > 
> > > > > Hm... what if we added a "reason" field to UNPLUG_ERROR. That could
> > > > > be "guest rejected hotplug", or something more specific, in the rare
> > > > > case that the guest has a way of signalling something more specific,
> > > > > or "timeout" - but the latter *only* to be sent in cases where, on
> > > > > the timeout, we're able to block any later completion of the unplug
> > > > > (as we can on papr).
> > > 
> > > Is canceling unplug on timeout documented somewhere (like some spec)?
> > 
> > Uh.. not as such. In the PAPR model, hotplugs and unplugs are mostly
> > guest directed, so the question doesn't really arise.
> > 
> > > If not, it might (theoretically) confuse the guest when it tries to
> > > unplug after the timeout, and leave the guest in some unexpected state.
> > 
> > Possible, but probably not that likely. The mechanism we use to
> > "cancel" the hotplugs is that we just fail the hypercalls that the
> > guest will need to call to actually complete the unplug. We also
> > fail those in some other situations, and that seems to work.
> > 
> > That said, I no longer think this cancelling on timeout is a good
> > idea, since it diverges from what happens on other platforms more
> > than it needs to.
> > 
> > My now preferred approach is to revert the timeout changes, but
> > instead allow retries of unplugs to be issued. I think that's just a
> > matter of resending the unplug message to the guest, while making it
> > otherwise a no-op on the qemu side.
> 
> I used this approach in a patch I sent back in January:

Yes, you did. I rejected it at the time, but the discussion since has
convinced me that I made a mistake. In particular, the point that a
really loaded host could pretty much arbitrarily extend the time the
guest needs to process the request is convincing to me.

> "[PATCH v2 1/1] spapr.c: always pulse guest IRQ in spapr_core_unplug_request()"

Although.. I think we should actually do a full resend of the unplug
request message to the queue, not just pulse the irq. AFAICT an
unplug request on a DRC that is already unplugged should be safe
(remove-LMB-by-count-only requests might have to be an exception).
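Roughly what I have in mind for spapr_core_unplug_request() - a sketch
only, written against the helper names in hw/ppc/spapr.c as I remember
them (spapr_drc_detach(), spapr_hotplug_req_remove_by_index(); adjust
for any recent DRC API renames), not the actual patch:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/hotplug.h"
#include "hw/cpu/core.h"
#include "hw/ppc/spapr.h"
#include "hw/ppc/spapr_drc.h"

static void spapr_core_unplug_request(HotplugHandler *hotplug_dev,
                                      DeviceState *dev, Error **errp)
{
    SpaprMachineState *spapr = SPAPR_MACHINE(OBJECT(hotplug_dev));
    CPUCore *cc = CPU_CORE(dev);
    SpaprDrc *drc;

    /* Sketch: the real function's input validation (unknown core-id,
     * machine without hotpluggable CPUs, boot core) is elided here. */

    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_CPU,
                          spapr_vcpu_id(spapr, cc->core_id));
    g_assert(drc);

    /* Start the detach state machine only on the first request... */
    if (!spapr_drc_unplug_requested(drc)) {
        spapr_drc_detach(drc);
    }

    /*
     * ...but queue the removal event and pulse the hotplug interrupt
     * unconditionally, so a repeated device_del renotifies a guest
     * that missed or dropped the first request. The DRC state is left
     * alone, so on the qemu side a retry is otherwise a no-op.
     */
    spapr_hotplug_req_remove_by_index(drc);
}

With something along those lines, a management app that never saw a
DEVICE_DELETED can simply issue device_del again to retry, instead of
us guessing at timeouts.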
> https://lists.gnu.org/archive/html/qemu-devel/2021-01/msg04399.html
> 
> Let me know and I'll revert the timeout mechanism and re-post this one.
> I guess there's still time to make this change in the 6.0.0 window,
> avoiding releasing a mechanism we're not happy with.

Yes, please, I think this is the way to go.

1) Revert timeouts (6.0)
2) Allow retries (6.0)
3) Add notifications when the guest positively signals failure (6.1)

I think this gives us the best mix of a) user experience, b) not
allowing nasty edge cases on loaded systems, and c) matching x86
behaviour where possible.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you. NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson