From: Jason Wang <jasowang@redhat.com>
To: "Zhu, Lingshan" <lingshan.zhu@intel.com>,
Zhu Lingshan <lingshan.zhu@linux.intel.com>,
mst@redhat.com, alex.williamson@redhat.com
Cc: linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
netdev@vger.kernel.org, dan.daly@intel.com,
cunming.liang@intel.com, tiwei.bie@intel.com,
jason.zeng@intel.com, zhiyuan.lv@intel.com
Subject: Re: [RFC 2/2] vhost: IFC VF vdpa layer
Date: Wed, 23 Oct 2019 14:39:15 +0800 [thread overview]
Message-ID: <02d44f0a-687f-ed87-518b-7a4d3e83c5d3@redhat.com> (raw)
In-Reply-To: <6588d9f4-f357-ec78-16a4-ccaf0e3768e7@intel.com>
On 2019/10/23 2:19 PM, Zhu, Lingshan wrote:
>
> On 10/22/2019 9:05 PM, Jason Wang wrote:
>>
>>> On 2019/10/22 2:53 PM, Zhu Lingshan wrote:
>>>
>>> On 10/21/2019 6:19 PM, Jason Wang wrote:
>>>>
>>>>> On 2019/10/21 5:53 PM, Zhu, Lingshan wrote:
>>>>>
>>>>> On 10/16/2019 6:19 PM, Jason Wang wrote:
>>>>>>
>>>>>> On 2019/10/16 9:30 AM, Zhu Lingshan wrote:
>>>>>>> This commit introduces IFC VF operations for vdpa, which comply
>>>>>>> with the vhost_mdev interfaces, and handles IFC VF initialization,
>>>>>>> configuration and removal.
>>>>>>>
>>>>>>> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
>>>>>>> ---
>>
>>
>> [...]
>>
>>
>>>>
>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>> +}
>>>>>>> +
>>>>>>> +static int ifcvf_mdev_set_features(struct mdev_device *mdev,
>>>>>>> u64 features)
>>>>>>> +{
>>>>>>> + struct ifcvf_adapter *adapter = mdev_get_drvdata(mdev);
>>>>>>> + struct ifcvf_hw *vf = IFC_PRIVATE_TO_VF(adapter);
>>>>>>> +
>>>>>>> + vf->req_features = features;
>>>>>>> +
>>>>>>> + return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static u64 ifcvf_mdev_get_vq_state(struct mdev_device *mdev,
>>>>>>> u16 qid)
>>>>>>> +{
>>>>>>> + struct ifcvf_adapter *adapter = mdev_get_drvdata(mdev);
>>>>>>> + struct ifcvf_hw *vf = IFC_PRIVATE_TO_VF(adapter);
>>>>>>> +
>>>>>>> + return vf->vring[qid].last_avail_idx;
>>>>>>
>>>>>>
>>>>>> Does this really work? I'd expect it to be fetched from the
>>>>>> hardware, since it's internal state.
>>>>> For now it's working; we intend to support LM in the next version
>>>>> of the drivers.
>>>>
>>>>
>>>> I'm not sure I understand: I don't see any synchronization
>>>> between the hardware and last_avail_idx, so last_avail_idx should
>>>> not change.
>>>>
>>>> Btw, what did "LM" mean :) ?
>>>
>>> I can add BAR IO operations here. LM = live migration, sorry for the
>>> abbreviation.
>>
>>
>> Just to make sure I understand: you mean reading last_avail_idx
>> through the IO BAR here?
>>
>> Thanks
>
> Hi Jason,
>
> Yes, I mean last_avail_idx. Is that correct?
>
> Thanks
Yes.
Thanks
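For reference, here is a minimal userspace sketch of the pattern being discussed: syncing the driver's cached last_avail_idx from a device register before reporting it, rather than returning a possibly stale driver-side copy. The register layout, struct fields, and helper names below are invented stand-ins for illustration, not the real IFC VF BAR layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the device's per-queue state; in the real
 * driver this would be a register read through the VF's IO/MMIO BAR. */
struct fake_vq_regs {
	uint16_t hw_avail_idx;	/* index the device has consumed up to */
};

struct vring_info {
	uint16_t last_avail_idx;	/* driver-side cached copy */
	struct fake_vq_regs *regs;	/* "BAR" mapping (simulated here) */
};

struct ifcvf_hw {
	struct vring_info vring[2];
};

/* Refresh the cached index from the device before reporting it, so the
 * returned state reflects what the hardware has actually processed. */
static uint16_t ifcvf_get_vq_state(struct ifcvf_hw *vf, uint16_t qid)
{
	vf->vring[qid].last_avail_idx = vf->vring[qid].regs->hw_avail_idx;
	return vf->vring[qid].last_avail_idx;
}
```

Without the read-back, a caller (e.g. during live migration) would see whatever value the driver last cached, even if the device has since advanced past it.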
>
>>
>>