[dpdk-dev] [RFC] net: new vdpa PMD for Mellanox devices
From: Matan Azrad @ 2019-08-15 10:47 UTC
Cc: Shahaf Shuler, Asaf Penso, Slava Ovsiienko, Thomas Monjalon,
    Olga Shern, Yigit, Ferruh
The latest Mellanox adapters/SoCs support virtio operations to accelerate the vhost data path.
A new net PMD will be added to implement the vdpa driver requirements on top of the Mellanox
devices that support it, such as ConnectX-6 and BlueField.
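
For context on what "vdpa driver requirements" means in DPDK terms: a vdpa driver registers a
set of device ops with the vhost library and lets the HW handle the datapath. The sketch below
only illustrates that registration pattern, assuming the rte_vdpa.h API as it exists in current
DPDK (the same pattern used by the existing ifcvf vdpa driver); all mlx5_vdpa_* names and values
are hypothetical placeholders, not part of this RFC, and only a subset of the ops is shown.

/*
 * Illustrative sketch only (not from the RFC): how a vdpa PMD typically
 * registers with the DPDK vhost/vdpa framework. All mlx5_vdpa_* names
 * below are hypothetical placeholders.
 */
#include <stdint.h>
#include <rte_bus_pci.h>
#include <rte_vdpa.h>

/* Report how many virtqueues the device supports. */
static int
mlx5_vdpa_get_queue_num(int did, uint32_t *queue_num)
{
	(void)did;
	*queue_num = 16; /* placeholder value */
	return 0;
}

/* Report the virtio features the device can offload. */
static int
mlx5_vdpa_get_features(int did, uint64_t *features)
{
	(void)did;
	*features = 0; /* placeholder value */
	return 0;
}

/* Set up / tear down the datapath for a vhost device (stubs here). */
static int mlx5_vdpa_dev_conf(int vid) { (void)vid; return 0; }
static int mlx5_vdpa_dev_close(int vid) { (void)vid; return 0; }

/* Only a subset of the ops is shown. */
static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
	.get_queue_num = mlx5_vdpa_get_queue_num,
	.get_features = mlx5_vdpa_get_features,
	.dev_conf = mlx5_vdpa_dev_conf,
	.dev_close = mlx5_vdpa_dev_close,
};

/* Called from the PCI probe path once the device is set up for vdpa. */
static int
mlx5_vdpa_register(const struct rte_pci_device *pci_dev)
{
	struct rte_vdpa_dev_addr addr = {
		.type = PCI_ADDR,
		.pci_addr = pci_dev->addr,
	};

	/* Returns a vdpa device id (>= 0) on success, negative on failure. */
	return rte_vdpa_register_device(&addr, &mlx5_vdpa_ops);
}

The real callbacks would of course program the device-specific virtq and memory setup; those
details are out of scope for this illustration.
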
* The mlx5_vdpa PMD runs on top of PCI devices, both VFs and PFs.
* A Mellanox PCI device can be configured either as an ethdev device or as a vdpa device.
* A single physical device can expose, in parallel, ethdev VFs/PF driven by the mlx5 PMD and vdpa VFs/PF driven by the new mlx5_vdpa PMD.
* The user selects which driver probes a Mellanox PCI device via the PCI device devargs (an illustrative example follows this list).
* Both mlx5 and mlx5_vdpa depend on the rdma-core library, so some code may be shared between them; for that reason a new mlx5 common directory will be added under drivers/common for code reuse.
* All guest physical memory used by the virtqs will be translated to host physical memory by the HW.
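
To make the driver-selection point concrete, the EAL whitelist/devargs usage could look like the
lines below. This is only an illustration: the "class" key and its values are placeholders, since
the actual devargs syntax is not defined by this RFC.

    # Illustration only -- the "class" key is a placeholder, not a defined devarg.
    -w 0000:82:00.0,class=net    # PF probed as an ethdev by the mlx5 PMD
    -w 0000:82:00.2,class=vdpa   # VF of the same device probed by the new mlx5_vdpa PMD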