From: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
To: "Legacy, Allain" <Allain.Legacy@windriver.com>
Cc: "Adrien Mazarguil (adrien.mazarguil@6wind.com)"
	<adrien.mazarguil@6wind.com>, "dev@dpdk.org" <dev@dpdk.org>,
	"Peters, Matt" <Matt.Peters@windriver.com>
Subject: Re: mlx5 flow create/destroy behaviour
Date: Tue, 28 Mar 2017 17:36:02 +0200	[thread overview]
Message-ID: <20170328153602.GC16796@autoinstall.dev.6wind.com> (raw)
In-Reply-To: <70A7408C6E1BFB41B192A929744D8523968F8E2F@ALA-MBC.corp.ad.wrs.com>

Hi Allain,

My attempt to reproduce it failed; maybe I missed something, please see
below,

On Tue, Mar 28, 2017 at 12:42:05PM +0000, Legacy, Allain wrote:
> Hi,
> I am setting up an experiment to gauge the usability of the flow API
> and the flow marking behavior of the CX4.   I am working from v17.02.
> I am seeing some unpredictable behavior and I am unsure of the cause.
> 
> This is the layout of the test:
>    
>    2 x CX4 (15b3:1015) 
>       + 1 port used on each
>    A test application with 1 core, and 1 queue/port
>    Traffic generator attached to each port
>       + 500 unique src+dst MAC address combinations sent from each port
>       + All traffic is VLAN tagged (1 VLAN per port)
> 
> The test application examines packets as they are received on each
> port.  It sets up flow rules and calls rte_flow_create() for each new
> layer 2 flow that it observes.  The flow patterns are of the form
> ETH+VLAN+END where ETH matches src+dst+type=vlan and VLAN matches the
> port's VLAN ID.  The flow actions are of the form MARK+QUEUE+END where
> MARK assigns a unique integer to each flow, and QUEUE assigns the
> flow to queue_id=0 (since the test app only has 1 queue per port).

If I understand correctly, your application is adding 500 rules like:

 flow create 0 ingress pattern eth src is <smac> dst is <dmac> / vlan vid is <vid> / end actions mark id <id> / queue index 0 / end
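
In terms of the rte_flow API, I would expect something roughly like the
sketch below (illustrative only; port_id, smac, dmac, vid and flow_id are
placeholders, and the exact VLAN item fields depend on the DPDK version):

 #include <stdio.h>
 #include <rte_byteorder.h>
 #include <rte_ethdev.h>
 #include <rte_flow.h>

 struct rte_flow_attr attr = { .ingress = 1 };

 /* smac/dmac are struct ether_addr values observed on the port. */
 struct rte_flow_item_eth eth_spec = { .src = smac, .dst = dmac };
 struct rte_flow_item_eth eth_mask = {
 	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 };

 /* Match the port's VLAN ID (lower 12 bits of the TCI). */
 struct rte_flow_item_vlan vlan_spec = { .tci = rte_cpu_to_be_16(vid) };
 struct rte_flow_item_vlan vlan_mask = { .tci = rte_cpu_to_be_16(0x0fff) };

 struct rte_flow_item pattern[] = {
 	{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &vlan_spec, .mask = &vlan_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_END },
 };

 struct rte_flow_action_mark mark = { .id = flow_id };	/* unique per flow */
 struct rte_flow_action_queue queue = { .index = 0 };	/* single RX queue */

 struct rte_flow_action actions[] = {
 	{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
 	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
 	{ .type = RTE_FLOW_ACTION_TYPE_END },
 };

 struct rte_flow_error error;
 struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
 if (flow == NULL)
 	printf("flow creation failed: %s\n",
 	       error.message ? error.message : "(no error message)");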

> Once the flows are set up, the application then checks that ingress
> packets are properly marked with the intended unique integer specified
> in the MARK action.

Is it sending packets to verify this?
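
On the RX side, I would expect the mark to be checked more or less like
the sketch below (port_id is a placeholder; in v17.02 the mark value is
reported through the mbuf FDIR field when PKT_RX_FDIR_ID is set):

 #include <rte_ethdev.h>
 #include <rte_mbuf.h>

 struct rte_mbuf *pkts[32];
 uint16_t i, nb = rte_eth_rx_burst(port_id, 0, pkts, 32);

 for (i = 0; i < nb; ++i) {
 	struct rte_mbuf *m = pkts[i];

 	if (m->ol_flags & PKT_RX_FDIR_ID) {
 		uint32_t mark = m->hash.fdir.hi; /* value set by the MARK action */
 		/* compare mark against the integer assigned to this src+dst+vlan */
 	} else {
 		/* packet was not marked by the hardware */
 	}
 	rte_pktmbuf_free(m);
 }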

> The traffic is run for a short period of time and then stopped.  Once
> the traffic is stopped the application removes the flow rules by
> calling rte_flow_destroy().    There is no guarantee that the order of
> the destroys resembles in any way the order of the creates.   (I
> mention this because of this warning in rte_flow.h:  "This function is
> only guaranteed to succeed if handles are destroyed in reverse order
> of their creation.").   All of the calls to rte_flow_destroy()
> succeed. 
> 
> When I run this test after the NIC has been reset there are no issues.

What do you mean by "reset"?

> All calls to rte_flow_create()/rte_flow_destroy() succeed and all
> packets have a valid mark ID that corresponds to the unique integer
> assigned to that src+dst+vlan grouping.

In the mlx5 PMD, rte_flow_destroy() always returns success as the
destruction should never fail.
Can you compile in debug mode (by setting CONFIG_RTE_LIBRTE_MLX5_DEBUG
to "y")?  You should then see as many prints for rule creation as for
rule destruction.
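
For what it is worth, the teardown I have in mind looks like the sketch
below (flows[] and nb_flows stand for wherever the application keeps its
rte_flow handles); with the debug build, each iteration should produce a
matching destruction print:

 struct rte_flow_error error;
 unsigned int i;

 for (i = 0; i < nb_flows; ++i) {
 	if (rte_flow_destroy(port_id, flows[i], &error) != 0)
 		printf("destroy of flow %u failed: %s\n", i,
 		       error.message ? error.message : "(no error message)");
 	flows[i] = NULL;
 }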

> The problem happens when I run this test for a second or third time
> without first resetting the NIC.  On subsequent test runs I still see
> no errors in create/destroy API calls but packets are no longer marked
> by the hardware.  In some test runs none of the flows have valid mark
> id values, and other test runs have some percentage of flows with
> valid mark id values while others do not.  The behavior seems
> inconsistent, but if I reset the NIC it goes back to working for one
> test run and then starts behaving incorrectly again on subsequent runs.
> 
> I should note that in subsequent test runs the MAC addresses are the
> same as in previous runs, but the mapping from unique integer to
> src+dst+vlan is different each time.
> 
> Is this behavior consistent with your experience using the device
> and/or API?

No, I did not face such an issue; the behavior was consistent, but I have
never tried to generate so many rules in the past.


Thanks,

-- 
Nélio Laranjeiro
6WIND
