Subject: [dpdk-dev] [RFC] net: new vdpa PMD for Mellanox devices
From: Matan Azrad
Date: 2019-08-15 10:47 UTC
To: dev
Cc: Shahaf Shuler, Asaf Penso, Slava Ovsiienko, Thomas Monjalon,
	Olga Shern, Yigit, Ferruh

The latest Mellanox adapters/SoCs support virtio operations to accelerate the vhost data path.

A new net PMD will be added to implement the vdpa driver requirements on top of the
Mellanox devices that support it, such as ConnectX-6 and BlueField.
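
For reference, a vdpa driver plugs into the vhost library by registering a set of
device ops (rte_vdpa.h). Below is a minimal, hypothetical sketch of how mlx5_vdpa
could register a PCI function with that framework; the callback names and the queue
count are placeholder assumptions, not the actual implementation:

    #include <stdint.h>
    #include <rte_pci.h>
    #include <rte_vdpa.h>

    /* Hypothetical callback -- the real mlx5_vdpa ops are not
     * defined by this RFC.
     */
    static int
    mlx5_vdpa_get_queue_num(int did, uint32_t *queue_num)
    {
            (void)did;
            *queue_num = 16; /* placeholder: query the device instead */
            return 0;
    }

    static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
            .get_queue_num = mlx5_vdpa_get_queue_num,
            /* .get_features, .dev_conf, .dev_close, ... */
    };

    /* Register one PCI function with the vhost vdpa framework. */
    static int
    mlx5_vdpa_register(const struct rte_pci_addr *pci_addr)
    {
            struct rte_vdpa_dev_addr addr = { .type = PCI_ADDR };

            addr.pci_addr = *pci_addr;
            return rte_vdpa_register_device(&addr, &mlx5_vdpa_ops);
    }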

Points:

  *   The mlx5_vdpa PMD runs on top of PCI devices, both VFs and PFs.
  *   A Mellanox PCI device can be configured to support either an ethdev device or a vdpa device.
  *   A single physical device can expose ethdev VFs/PF driven by the mlx5 PMD and vdpa VFs/PF driven by the new mlx5_vdpa PMD in parallel.
  *   The user selects which driver probes a given Mellanox PCI device through the PCI device devargs (see the sketch after this list).
  *   mlx5 and mlx5_vdpa both depend on the rdma-core library, so some code may be shared between them; for that reason, a new mlx5 common directory will be added under drivers/common for code reuse.
  *   All the guest physical memory of the virtqueues will be translated to host physical memory by the HW (see the memory-table sketch below).
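
As an illustration of the devargs-based driver selection, an application could
hot-plug a function and direct it to the vdpa driver as below. This is a sketch
only: the "class=vdpa" key and the PCI address are hypothetical placeholders,
since the exact devarg is not defined by this RFC.

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_dev.h>
    #include <rte_debug.h>

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    rte_exit(EXIT_FAILURE, "EAL init failed\n");
            /* "class=vdpa" is a hypothetical devarg key; the real
             * one is not defined by this RFC.
             */
            if (rte_dev_probe("0000:03:00.2,class=vdpa") < 0)
                    rte_exit(EXIT_FAILURE, "cannot probe the device\n");
            return 0;
    }

The same devargs string could equally be passed on the EAL command line when the
device is whitelisted at startup.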
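
On the memory point above, here is a sketch of how the driver might hand the guest
memory layout to the HW, assuming the vhost memory table API
(rte_vhost_get_mem_table) and a hypothetical driver-internal mlx5_vdpa_dma_map()
helper:

    #include <stdlib.h>
    #include <rte_vhost.h>

    /* Hypothetical driver-internal helper, not a DPDK API. */
    int mlx5_vdpa_dma_map(uint64_t gpa, uint64_t hva, uint64_t len);

    /* Walk the guest memory table and program one HW translation
     * entry per region, so the device can use guest physical
     * addresses directly.
     */
    static int
    mlx5_vdpa_map_guest_mem(int vid)
    {
            struct rte_vhost_memory *mem = NULL;
            uint32_t i;

            if (rte_vhost_get_mem_table(vid, &mem) < 0)
                    return -1;
            for (i = 0; i < mem->nregions; i++) {
                    struct rte_vhost_mem_region *reg = &mem->regions[i];

                    if (mlx5_vdpa_dma_map(reg->guest_phys_addr,
                                          reg->host_user_addr,
                                          reg->size) < 0) {
                            free(mem);
                            return -1;
                    }
            }
            free(mem); /* table is allocated by the vhost library */
            return 0;
    }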
