From: Ferruh Yigit
Subject: Re: [RFC v4] /net: memory interface (memif)
Date: Wed, 27 Feb 2019 17:04:44 +0000
Message-ID: <8e474eec-0ac0-3e30-9419-ab5c1d4789ab@intel.com>
References: <20181213133051.18779-1-jgrajcia@cisco.com>
 <20190220115254.18724-1-jgrajcia@cisco.com>
To: Jakub Grajciar, dev@dpdk.org
In-Reply-To: <20190220115254.18724-1-jgrajcia@cisco.com>

On 2/20/2019 11:52 AM, Jakub Grajciar wrote:
> Memory interface (memif), provides high performance
> packet transfer over shared memory.
>
> Signed-off-by: Jakub Grajciar
> ---
>  MAINTAINERS                                 |    6 +
>  config/common_base                          |    5 +
>  config/common_linuxapp                      |    1 +
>  doc/guides/nics/features/memif.ini          |   14 +
>  doc/guides/nics/index.rst                   |    1 +
>  doc/guides/nics/memif.rst                   |  194 ++++
>  drivers/net/Makefile                        |    1 +
>  drivers/net/memif/Makefile                  |   28 +
>  drivers/net/memif/memif.h                   |  178 +++
>  drivers/net/memif/memif_socket.c            | 1092 ++++++++++++++++++
>  drivers/net/memif/memif_socket.h            |  104 ++
>  drivers/net/memif/meson.build               |   13 +
>  drivers/net/memif/rte_eth_memif.c           | 1124 +++++++++++++++++++
>  drivers/net/memif/rte_eth_memif.h           |  203 ++++
>  drivers/net/memif/rte_pmd_memif_version.map |    4 +
>  drivers/net/meson.build                     |    1 +
>  mk/rte.app.mk                               |    1 +

Can you please update the release notes to document the new PMD?

<...>

>
> requires patch: http://patchwork.dpdk.org/patch/49009/

Thanks for highlighting this dependency, can you please elaborate on the relation between the interrupt patch and the memif PMD?

<...>

> +Example: testpmd and testpmd
> +----------------------------
> +In this example we run two instances of testpmd application and transmit packets over memif.
> +
> +First create master interface::
> +
> +    #./testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=master -- -i
> +
> +Now create slave interace (master must be already running so the slave will connect)::

s/interace/interface/

> +
> +    #./testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif -- -i
> +
> +Set forwarding mode in one of the instances to 'rx only' and the other to 'tx only'::
> +
> +    testpmd> set fwd rxonly
> +    testpmd> start
> +
> +    testpmd> set fwd txonly
> +    testpmd> start

Is the shared memory used as a ring, with a single producer and a single consumer?
Is there a way to use the memif PMD for both sending and receiving packets?

It is possible to create two ring PMDs in a single testpmd application and loop packets between them continuously via the 'tx_first' parameter [1]; can the same be done via memif?

[1]
 ./build/app/testpmd -w 0:0.0 --vdev net_ring0 --vdev net_ring1 -- -i
 testpmd> start tx_first
 testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 1365649088 RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 1365649120 TX-errors: 0          TX-bytes:  0
  .....

<...>

>
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring

Is the rte_ring library used?
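If the ring library is in fact not used by the driver, that dependency could simply be dropped, e.g. (just a sketch of the suggestion, keeping the rest of the Makefile as is):

    LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool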
<...>

> @@ -0,0 +1,4 @@
> +EXPERIMENTAL {
> +
> +	local: *;
> +};

Please use the version name instead of "EXPERIMENTAL", which is for experimental APIs.
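For example, assuming the driver targets the 19.05 release (the release name here is my assumption) and exports no public symbols, the map could simply be:

    DPDK_19.05 {
            local: *;
    };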