From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>,
netdev@vger.kernel.org, xen-devel@lists.xenproject.org,
linux-kernel@vger.kernel.org, jgross@suse.com,
sstabellini@kernel.org, davem@davemloft.net
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [Xen-devel][PATCH] xen/netfront: Remove unneeded .resume callback
Date: Thu, 14 Mar 2019 14:16:04 -0400 [thread overview]
Message-ID: <be1f029c-326a-7e8c-f1f8-216b581468e3@oracle.com> (raw)
In-Reply-To: <46fe25f2-2db7-496a-cd2c-071cd211ea50@gmail.com>
On 3/14/19 12:33 PM, Oleksandr Andrushchenko wrote:
> On 3/14/19 17:40, Boris Ostrovsky wrote:
>> On 3/14/19 11:10 AM, Oleksandr Andrushchenko wrote:
>>> On 3/14/19 5:02 PM, Boris Ostrovsky wrote:
>>>> On 3/14/19 10:52 AM, Oleksandr Andrushchenko wrote:
>>>>> On 3/14/19 4:47 PM, Boris Ostrovsky wrote:
>>>>>> On 3/14/19 9:17 AM, Oleksandr Andrushchenko wrote:
>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>
>>>>>>> Currently on driver resume we remove all the network queues and
>>>>>>> destroy the shared Tx/Rx rings, leaving the driver in its current
>>>>>>> state and never signaling the backend of this frontend's state
>>>>>>> change. This leads to a number of consequences:
>>>>>>> - when the frontend withdraws the granted references to the rings
>>>>>>>   etc., this cannot be done cleanly, as the backend still holds
>>>>>>>   them (it was never told to free the resources)
>>>>>>> - it is not possible to resume driver operation, as all means of
>>>>>>>   communication with the backend were destroyed by the frontend,
>>>>>>>   making the frontend appear functional to the guest OS when it
>>>>>>>   really is not
>>>>>> What do you mean? Are you saying that after resume you lose
>>>>>> connectivity?
>>>>> Exactly. If you look at the .resume callback as it is now, it
>>>>> destroys the rings etc. and never notifies the backend of that,
>>>>> i.e. the connection stays in, say, the connected state with the
>>>>> communication channels destroyed. The frontend never enters any
>>>>> other XenBus state, so there is no way its state machine can help
>>>>> with recovery.
>>>> My tree is about a month old, so perhaps there is some sort of
>>>> regression, but this certainly works for me. After resume, netfront
>>>> gets XenbusStateInitWait from the backend, which causes
>>>> xennet_connect().
>>> Ah, the difference may be in how the guest enters the suspend
>>> state. I make my guest suspend with:
>>> echo mem > /sys/power/state
>>> Then I use an interrupt to the guest (this is test code) to wake
>>> it up.
>>> Could you please share your exact use-case: how does the guest
>>> enter suspend, and what do you do to resume it?
>>
>> xl save / xl restore
>>
>>> I can see no way the backend would enter XenbusStateInitWait in my
>>> use-case, as it simply doesn't know we want it to.
>>
>> Yours looks like the ACPI path; I don't know how well it was tested,
>> TBH.
>
> Hm, so it does work for your use-case, but doesn't for mine.
>
> What would be the best way forward?
>
> 1. Implement .resume properly as, for example, block front does [1]
>
> 2. Remove .resume completely: this does work as long as the backend
> doesn't change anything
For save/restore (migration) there is no guarantee that the new backend
has the same set of features.
>
> I am still a bit unsure whether we really need to re-initialize the
> rings, re-read the frontend's config from Xenstore, etc. What
> changes on the backend side are expected when we resume the
> frontend driver?
Number of queues, for example. Or things in xennet_fix_features().
-boris
>
>>
>>
>> -boris
>
> Thank you,
>
> Oleksandr
>
>
> [1]
> https://elixir.bootlin.com/linux/v5.0.2/source/drivers/block/xen-blkfront.c#L2072
>