* [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device
@ 2015-01-06  2:24 sfeldma
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

v2:

 - Address reg_guide.txt review comments from Eric Blake
 - Address QMP review comments from Eric Blake

v1:

[This is a collaboration between myself and Jiri Pirko].

This patch set adds a new ethernet switch device, called rocker.  Rocker is
intended to emulate HW features of switch ASICs found in today's
data-center-class switch/routers.  The original motivation in creating a new
device is to accelerate device driver development for ethernet switches in the
Linux kernel.  A device driver for rocker already exists in the Linux 3.18
kernel and loads against this device.  Basic L2 switching (bridging)
functionality is offloaded to the device.  Work continues to enable offloading
of L3 routing functions and ACLs, as well as support for flow-based modes,
such as Open vSwitch with OpenFlow.  Future support for terminating L2-over-L3
tunnels is also planned.

The core network processing functions are based on the spec of a real device:
Broadcom's OF-DPA.  Specifically, rocker borrows OF-DPA's network processing
pipeline comprised of flow match and action tables.  Only the OF-DPA spec was
used in constructing rocker.  The rocker developers do not have access to the
real OF-DPA's software source code, so this is a clean-room, ground-up
development.

Each rocker device is a PCI device with a memory-mapped register space and
MSI-X interrupts for command and event processing, as well as CPU-bound I/O.
Each device can support up to 62 "front-panel" ports, which present themselves
as -netdev attachment points.  The device is programmed using OF-DPA flow and
group tables to set up the flow pipeline.  The programming defines the
forwarding path for packets ingressing on 'front-panel' ports.  The forwarding
path can look at L2/L3/L4 packet headers to forward the packet to its
destination.  For the performance path, packets would ingress and egress only
on the device, and not be passed up to the device driver (or host OS).  The
slow path for control packets will forward packets to the CPU via the device
driver for host OS processing.

A QMP/HMP interface is added to give insight into the device's internal port
configuration and flow/group tables.

A test directory is included with some basic sanity tests to verify the device
and driver.

Scott Feldman (10):
  pci: move REDHAT_SDHCI device ID to make room for Rocker
  net: add MAC address string printer
  virtio-net: use qemu_mac_strdup_printf
  rocker: add register programming guide
  pci: add rocker device ID
  pci: add network device class 'other' for network switches
  rocker: add new rocker switch device
  qmp: add rocker device support
  rocker: add tests
  MAINTAINERS: add rocker

 MAINTAINERS                        |    6 +
 default-configs/pci.mak            |    1 +
 docs/specs/pci-ids.txt             |    3 +-
 hmp-commands.hx                    |   56 +
 hmp.c                              |  303 +++++
 hmp.h                              |    4 +
 hw/net/Makefile.objs               |    3 +
 hw/net/rocker/reg_guide.txt        |  961 +++++++++++++
 hw/net/rocker/rocker.c             | 1440 ++++++++++++++++++++
 hw/net/rocker/rocker.h             |   76 ++
 hw/net/rocker/rocker_desc.c        |  379 ++++++
 hw/net/rocker/rocker_desc.h        |   57 +
 hw/net/rocker/rocker_fp.c          |  242 ++++
 hw/net/rocker/rocker_fp.h          |   54 +
 hw/net/rocker/rocker_hw.h          |  475 +++++++
 hw/net/rocker/rocker_of_dpa.c      | 2644 ++++++++++++++++++++++++++++++++++++
 hw/net/rocker/rocker_of_dpa.h      |   25 +
 hw/net/rocker/rocker_tlv.h         |  247 ++++
 hw/net/rocker/rocker_world.c       |  108 ++
 hw/net/rocker/rocker_world.h       |   63 +
 hw/net/rocker/test/README          |    5 +
 hw/net/rocker/test/all             |   19 +
 hw/net/rocker/test/bridge          |   43 +
 hw/net/rocker/test/bridge-stp      |   52 +
 hw/net/rocker/test/bridge-vlan     |   52 +
 hw/net/rocker/test/bridge-vlan-stp |   64 +
 hw/net/rocker/test/port            |   22 +
 hw/net/rocker/test/tut.dot         |    8 +
 hw/net/virtio-net.c                |   12 +-
 include/hw/pci/pci.h               |    3 +-
 include/hw/pci/pci_ids.h           |    1 +
 include/net/net.h                  |    1 +
 net/net.c                          |    7 +
 qapi-schema.json                   |    3 +
 qapi/rocker.json                   |  249 ++++
 qmp-commands.hx                    |   97 ++
 36 files changed, 7774 insertions(+), 11 deletions(-)
 create mode 100644 hw/net/rocker/reg_guide.txt
 create mode 100644 hw/net/rocker/rocker.c
 create mode 100644 hw/net/rocker/rocker.h
 create mode 100644 hw/net/rocker/rocker_desc.c
 create mode 100644 hw/net/rocker/rocker_desc.h
 create mode 100644 hw/net/rocker/rocker_fp.c
 create mode 100644 hw/net/rocker/rocker_fp.h
 create mode 100644 hw/net/rocker/rocker_hw.h
 create mode 100644 hw/net/rocker/rocker_of_dpa.c
 create mode 100644 hw/net/rocker/rocker_of_dpa.h
 create mode 100644 hw/net/rocker/rocker_tlv.h
 create mode 100644 hw/net/rocker/rocker_world.c
 create mode 100644 hw/net/rocker/rocker_world.h
 create mode 100644 hw/net/rocker/test/README
 create mode 100755 hw/net/rocker/test/all
 create mode 100755 hw/net/rocker/test/bridge
 create mode 100755 hw/net/rocker/test/bridge-stp
 create mode 100755 hw/net/rocker/test/bridge-vlan
 create mode 100755 hw/net/rocker/test/bridge-vlan-stp
 create mode 100755 hw/net/rocker/test/port
 create mode 100644 hw/net/rocker/test/tut.dot
 create mode 100644 qapi/rocker.json

-- 
1.7.10.4

* [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

The rocker device uses the same PCI device ID as sdhci.  Since the rocker
device driver has already been accepted into Linux 3.18, and the REDHAT_SDHCI
device ID isn't used by any drivers, it's safe to move the REDHAT_SDHCI device
ID, avoiding a conflict with rocker.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 docs/specs/pci-ids.txt |    2 +-
 include/hw/pci/pci.h   |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt
index 9b57d5e..c6732fe 100644
--- a/docs/specs/pci-ids.txt
+++ b/docs/specs/pci-ids.txt
@@ -45,7 +45,7 @@ PCI devices (other than virtio):
 1b36:0003  PCI Dual-port 16550A adapter (docs/specs/pci-serial.txt)
 1b36:0004  PCI Quad-port 16550A adapter (docs/specs/pci-serial.txt)
 1b36:0005  PCI test device (docs/specs/pci-testdev.txt)
-1b36:0006  PCI SD Card Host Controller Interface (SDHCI)
+1b36:0007  PCI SD Card Host Controller Interface (SDHCI)
 
 All these devices are documented in docs/specs.
 
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 97e4257..97a83d3 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -88,7 +88,7 @@
 #define PCI_DEVICE_ID_REDHAT_SERIAL2     0x0003
 #define PCI_DEVICE_ID_REDHAT_SERIAL4     0x0004
 #define PCI_DEVICE_ID_REDHAT_TEST        0x0005
-#define PCI_DEVICE_ID_REDHAT_SDHCI       0x0006
+#define PCI_DEVICE_ID_REDHAT_SDHCI       0x0007
 #define PCI_DEVICE_ID_REDHAT_QXL         0x0100
 
 #define FMT_PCIBUS                      PRIx64
-- 
1.7.10.4


* [Qemu-devel] [PATCH v2 02/10] net: add MAC address string printer
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

We can use this in virtio-net code as well as in the new rocker device code,
so up-level it.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
---
 include/net/net.h |    1 +
 net/net.c         |    7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 008d610..90742f0 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -97,6 +97,7 @@ typedef struct NICState {
     bool peer_deleted;
 } NICState;
 
+char *qemu_mac_strdup_printf(const uint8_t *macaddr);
 NetClientState *qemu_find_netdev(const char *id);
 int qemu_find_net_clients_except(const char *id, NetClientState **ncs,
                                  NetClientOptionsKind type, int max);
diff --git a/net/net.c b/net/net.c
index 7acc162..2387616 100644
--- a/net/net.c
+++ b/net/net.c
@@ -151,6 +151,13 @@ int parse_host_port(struct sockaddr_in *saddr, const char *str)
     return 0;
 }
 
+char *qemu_mac_strdup_printf(const uint8_t *macaddr)
+{
+    return g_strdup_printf("%.2x:%.2x:%.2x:%.2x:%.2x:%.2x",
+                           macaddr[0], macaddr[1], macaddr[2],
+                           macaddr[3], macaddr[4], macaddr[5]);
+}
+
 void qemu_format_nic_info_str(NetClientState *nc, uint8_t macaddr[6])
 {
     snprintf(nc->info_str, sizeof(nc->info_str),
-- 
1.7.10.4
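[Editorial note: as a reference point, not part of the series, the helper's
formatting can be reproduced in plain C without glib.  mac_snprintf below is a
hypothetical stand-in that writes into a caller-supplied buffer instead of
returning a g_strdup_printf()-allocated string:]

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical plain-C equivalent of qemu_mac_strdup_printf(): format a
 * 6-byte MAC address as "xx:xx:xx:xx:xx:xx" into a caller buffer. */
static void mac_snprintf(char *buf, size_t len, const uint8_t *mac)
{
    snprintf(buf, len, "%.2x:%.2x:%.2x:%.2x:%.2x:%.2x",
             mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
}
```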


* [Qemu-devel] [PATCH v2 03/10] virtio-net: use qemu_mac_strdup_printf
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
---
 hw/net/virtio-net.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index e574bd4..9afe669 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -226,12 +226,6 @@ static void rxfilter_notify(NetClientState *nc)
     }
 }
 
-static char *mac_strdup_printf(const uint8_t *mac)
-{
-    return g_strdup_printf("%.2x:%.2x:%.2x:%.2x:%.2x:%.2x", mac[0],
-                            mac[1], mac[2], mac[3], mac[4], mac[5]);
-}
-
 static intList *get_vlan_table(VirtIONet *n)
 {
     intList *list, *entry;
@@ -284,12 +278,12 @@ static RxFilterInfo *virtio_net_query_rxfilter(NetClientState *nc)
     info->multicast_overflow = n->mac_table.multi_overflow;
     info->unicast_overflow = n->mac_table.uni_overflow;
 
-    info->main_mac = mac_strdup_printf(n->mac);
+    info->main_mac = qemu_mac_strdup_printf(n->mac);
 
     str_list = NULL;
     for (i = 0; i < n->mac_table.first_multi; i++) {
         entry = g_malloc0(sizeof(*entry));
-        entry->value = mac_strdup_printf(n->mac_table.macs + i * ETH_ALEN);
+        entry->value = qemu_mac_strdup_printf(n->mac_table.macs + i * ETH_ALEN);
         entry->next = str_list;
         str_list = entry;
     }
@@ -298,7 +292,7 @@ static RxFilterInfo *virtio_net_query_rxfilter(NetClientState *nc)
     str_list = NULL;
     for (i = n->mac_table.first_multi; i < n->mac_table.in_use; i++) {
         entry = g_malloc0(sizeof(*entry));
-        entry->value = mac_strdup_printf(n->mac_table.macs + i * ETH_ALEN);
+        entry->value = qemu_mac_strdup_printf(n->mac_table.macs + i * ETH_ALEN);
         entry->next = str_list;
         str_list = entry;
     }
-- 
1.7.10.4


* [Qemu-devel] [PATCH v2 04/10] rocker: add register programming guide
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

This is the register programming guide for the Rocker device.  It's intended
for driver writers and device emulation writers.  It covers the device's PCI
space, the register set, the DMA interface, and interrupts.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 hw/net/rocker/reg_guide.txt |  961 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 961 insertions(+)
 create mode 100644 hw/net/rocker/reg_guide.txt

diff --git a/hw/net/rocker/reg_guide.txt b/hw/net/rocker/reg_guide.txt
new file mode 100644
index 0000000..3146708
--- /dev/null
+++ b/hw/net/rocker/reg_guide.txt
@@ -0,0 +1,961 @@
+Rocker Network Switch Register Programming Guide
+Copyright (c) Scott Feldman <sfeldma@gmail.com>
+Copyright (c) Neil Horman <nhorman@tuxdriver.com>
+Version 0.11, 12/29/2014
+
+LICENSE
+=======
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+SECTION 1: Introduction
+=======================
+
+Overview
+--------
+
+This document describes the hardware/software interface for the Rocker switch
+device.  The intended audience is authors of OS drivers and device emulation
+software.
+
+Notations and Conventions
+-------------------------
+
+o In register descriptions, [n:m] indicates a range from bit n to bit m,
+inclusive.
+o Use of leading 0x indicates a hexadecimal number.
+o Use of leading 0b indicates a binary number.
+o The use of RSVD or Reserved indicates that a bit or field is reserved for
+future use.
+o Field width is in bytes, unless otherwise noted.
+o Registers are (R) read-only, (R/W) read/write, (W) write-only, or (COR)
+clear-on-read.
+o TLV values in network-byte-order are designated with (N).
+
+
+SECTION 2: PCI Configuration Registers
+======================================
+
+PCI Configuration Space
+-----------------------
+
+Each switch instance registers as a PCI device with PCI configuration space:
+
+	offset	width	description		value
+	---------------------------------------------
+	0x0	2	Vendor ID		0x1b36
+	0x2	2	Device ID		0x0006
+	0x4	4	Command/Status
+	0x8	1	Revision ID		0x01
+	0x9	3	Class code		0x2800
+	0xC	1	Cache line size
+	0xD	1	Latency timer
+	0xE	1	Header type
+	0xF	1	Built-in self test
+	0x10	4	Base address low
+	0x14	4	Base address high
+	0x18-28		Reserved
+	0x2C	2	Subsystem vendor ID	0x0000
+	0x2E	2	Subsystem ID		0x0000
+	0x30-38		Reserved
+	0x3C	1	Interrupt line
+	0x3D	1	Interrupt pin		0x00
+	0x3E	1	Min grant		0x00
+	0x3F	1	Max latency		0x00
+	0x40	1	TRDY timeout
+	0x41	1	Retry count
+	0x42	2	Reserved
+
+
+SECTION 3: Memory-Mapped Register Space
+=======================================
+
+There are two memory-mapped BARs.  BAR0 maps device register space and is
+0x2000 in size.  BAR1 maps MSI-X vector and PBA tables and is also 0x2000 in
+size, allowing for 256 MSI-X vectors.  The host BIOS will assign the base
+address location.  The host driver/OS will map the base address to host memory,
+giving the driver mmio access to the device register space.
+
+All registers are 4 or 8 bytes long.  It is assumed host software will access 4
+byte registers with one 4-byte access, and 8 byte registers with either two
+4-byte accesses or a single 8-byte access.  In the case of two 4-byte accesses,
+access must be lower and then upper 4-bytes, in that order.
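[Editorial sketch, not part of the guide: the split-access rule can be
illustrated in C.  The byte array below stands in for BAR0, and mmio_read32
decodes little-endian explicitly, matching the device's LE registers
(Section 4); names are invented.]

```c
#include <assert.h>
#include <stdint.h>

/* Decode a 4-byte little-endian register from a simulated BAR. */
static uint32_t mmio_read32(const uint8_t *bar, uint32_t off)
{
    const uint8_t *p = bar + off;
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Read an 8-byte register as two 4-byte accesses: lower 4 bytes first,
 * then upper 4 bytes, in that order, as the guide requires. */
static uint64_t reg_read64_split(const uint8_t *bar, uint32_t off)
{
    uint64_t lo = mmio_read32(bar, off);
    uint64_t hi = mmio_read32(bar, off + 4);
    return (hi << 32) | lo;
}
```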
+
+BAR0 device register space is organized as follows:
+
+	offset		description
+	------------------------------------------------------
+	0x0000-0x000f	Bogus registers to catch misbehaving
+			drivers.  Writes do nothing.  Reads
+			back as 0xDEADBABE.
+	0x0010-0x00ff	Test registers
+	0x0300-0x03ff	General purpose registers
+	0x1000-0x1fff	Descriptor control
+
+Holes in register space are reserved.  Writes to reserved registers do nothing.
+Reads to reserved registers read back as 0.
+
+No fancy stuff like write-combining is enabled on any of the registers.
+
+BAR1 MSI-X register space is organized as follows:
+
+	offset		description
+	------------------------------------------------------
+	0x0000-0x0fff	MSI-X vector table (256 vectors total)
+	0x1000-0x1fff	MSI-X PBA table
+
+
+SECTION 4: Interrupts, DMA, and Endianness
+==========================================
+
+PCI Interrupts
+--------------
+
+The device supports only MSI-X interrupts.  The BAR1 memory-mapped region
+contains the MSI-X vector and PBA tables, with support for up to 256 MSI-X
+vectors.
+
+The vector assignment is:
+
+	vector		description
+	-----------------------------------------------------
+	0		Command descriptor ring completion
+	1		Event descriptor ring completion
+	2		Test operation completion
+	3		RSVD
+	4-255		Tx and Rx descriptor ring completion
+			  Tx vector is even
+			  Rx vector is odd
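[Editorial sketch: the even/odd numbering above implies a simple mapping from
port number (0-based) to vector; the helper names here are invented.]

```c
#include <assert.h>
#include <stdint.h>

/* Tx completions use even vectors starting at 4; Rx uses the following
 * odd vector, per the assignment table above. */
static uint32_t msix_vec_tx(uint32_t port) { return 4 + port * 2; }
static uint32_t msix_vec_rx(uint32_t port) { return msix_vec_tx(port) + 1; }
```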
+
+An MSI-X vector table entry is 16 bytes:
+
+	field		offset	width	description
+	-------------------------------------------------------------
+	lower_addr	0x0	4	[31:2] message address[31:2]
+					[1:0] Rsvd (4 byte alignment
+						    required)
+	upper_addr	0x4	4	[31:19] Rsvd
+					[14:0] message address[46:32]
+	data		0x8	4	message data[31:0]
+	control		0xc	4	[31:1] Rsvd
+					[0] mask (0 = enable,
+						  1 = masked)
+
+Software should install the Interrupt Service Routine (ISR) before any ports
+are enabled or any commands are issued on the command ring.
+
+DMA Operations
+--------------
+
+DMA operations are used for packet DMA to/from the CPU, and for command and
+event processing.  Command processing includes statistical counters and table
+dumps, table insertion/deletion, and more.  Event processing provides an async
+notification method for device-originating events.  Each DMA operation has a
+set of control registers to manage a descriptor ring.  The descriptor rings are
+allocated from contiguous host DMA-able memory; registers specify each ring's
+base address, size, and current head and tail indices.  Software always writes
+the head, and hardware always writes the tail.
+
+The higher-order bit of DMA_DESC_COMP_ERR is used to mark hardware completion
+of a descriptor.  Software will clear this bit when posting a descriptor to the
+ring, and hardware will set this bit when the descriptor is complete.
+
+Descriptor ring sizes must be a power of 2 and range from 2 to 64K entries.
+A descriptor ring's base address must be 8-byte aligned.  Descriptors must be
+packed within the ring.  Each descriptor in each ring must also be aligned on
+an 8-byte boundary.  Each descriptor ring will have these registers:
+
+	DMA_DESC_xxx_BASE_ADDR, offset 0x1000 + (x * 32), 64-bit, (R/W)
+	DMA_DESC_xxx_SIZE, offset 0x1008 + (x * 32), 32-bit, (R/W)
+	DMA_DESC_xxx_HEAD, offset 0x100c + (x * 32), 32-bit, (R/W)
+	DMA_DESC_xxx_TAIL, offset 0x1010 + (x * 32), 32-bit, (R)
+	DMA_DESC_xxx_CTRL, offset 0x1014 + (x * 32), 32-bit, (W)
+	DMA_DESC_xxx_CREDITS, offset 0x1018 + (x * 32), 32-bit, (R/W)
+	DMA_DESC_xxx_RSVD1, offset 0x101c + (x * 32), 32-bit, (R/W)
+
+Where x is descriptor ring index:
+
+	index		ring
+	--------------------
+	0		CMD
+	1		EVENT
+	2		TX (port 0)
+	3		RX (port 0)
+	4		TX (port 1)
+	5		RX (port 1)
+	.
+	.
+	.
+	124		TX (port 61)
+	125		RX (port 61)
+	126		Resv
+	127		Resv
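[Editorial sketch: the ring index table and the register layout above combine
into simple offset arithmetic; the helper names below are illustrative only.]

```c
#include <assert.h>
#include <stdint.h>

/* CMD=0, EVENT=1, then a Tx/Rx ring pair per port (port is 0-based). */
static uint32_t ring_idx_tx(uint32_t port) { return 2 + port * 2; }
static uint32_t ring_idx_rx(uint32_t port) { return 3 + port * 2; }

/* Each ring's register block is 32 bytes starting at 0x1000; HEAD sits
 * at +0xc within the block. */
static uint32_t ring_head_offset(uint32_t x) { return 0x1000 + x * 32 + 0xc; }
```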
+
+Writing BASE_ADDR or SIZE will reset HEAD and TAIL to zero.  HEAD cannot be
+written past TAIL; to do so would wrap the ring.  An empty ring is when HEAD
+== TAIL.  A full ring is when HEAD is one position behind TAIL.  Both HEAD and
+TAIL increment and modulo-wrap at the ring size.
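[Editorial sketch of these occupancy rules; names invented, not driver code.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Software advances HEAD (producer), hardware advances TAIL (consumer);
 * both wrap modulo the ring size. */
static bool ring_empty(uint32_t head, uint32_t tail)
{
    return head == tail;
}

/* Full when HEAD is one position behind TAIL: advancing HEAD once more
 * would make HEAD == TAIL, which is indistinguishable from empty. */
static bool ring_full(uint32_t head, uint32_t tail, uint32_t size)
{
    return ((head + 1) % size) == tail;
}

/* Number of descriptors software may still post. */
static uint32_t ring_space(uint32_t head, uint32_t tail, uint32_t size)
{
    return (tail - head - 1 + size) % size;
}
```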
+
+CTRL register bits:
+
+	bit	name		description
+	------------------------------------------------------------------------
+	[0]	CTRL_RESET	Reset the descriptor ring
+	[31:1]	Reserved
+
+All descriptor types share some common fields:
+
+	field			width	description
+	-------------------------------------------------------------------
+	DMA_DESC_BUF_ADDR	8	Phys addr of desc payload, 8-byte
+					aligned
+	DMA_DESC_COOKIE		8	Desc cookie for completion matching,
+					upper-most bit is reserved
+	DMA_DESC_BUF_SIZE	2	Desc payload size in bytes
+	DMA_DESC_TLV_SIZE	2	Desc payload total size in bytes
+					used for TLVs.  Must be <=
+					DMA_DESC_BUF_SIZE.
+	DMA_DESC_COMP_ERR	2	Completion status of associated
+					desc payload.  High order bit is
+					clear on new descs, toggled by
+					hw for completed items.
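[Editorial sketch: one possible C rendering of the common fields.  The guide
specifies names and widths only, so the field order and the reserved padding
that brings the struct to 32 bytes are assumptions.]

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout of the common descriptor fields listed above. */
struct rocker_desc {
    uint64_t buf_addr;   /* DMA_DESC_BUF_ADDR: phys addr, 8-byte aligned */
    uint64_t cookie;     /* DMA_DESC_COOKIE: completion matching */
    uint16_t buf_size;   /* DMA_DESC_BUF_SIZE: payload size in bytes */
    uint16_t tlv_size;   /* DMA_DESC_TLV_SIZE: bytes used for TLVs */
    uint16_t resv[5];    /* reserved (assumed padding to 32 bytes) */
    uint16_t comp_err;   /* DMA_DESC_COMP_ERR: completion status */
};
```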
+
+To support forward- and backward-compatibility, descriptor and completion
+payloads are specified in TLV format.  Fields are packed with Type=field name,
+Length=field length, and Value=field value.  Software will ignore unknown fields
+filled in by the switch.  Likewise, the switch will ignore unknown fields
+filled in by software.
+
+Descriptor payload buffer is 8-byte aligned and TLVs are 8-byte aligned.  The
+value within a TLV is also 8-byte aligned.  The (packed, 8 byte) TLV header is:
+
+	field	width	description
+	-----------------------------
+	type	4	TLV type
+	len	2	TLV value length
+	pad	2	Reserved
+
+The alignment requirements for descriptors and TLVs are to avoid unaligned
+access exceptions in software.  Note that the payload for each TLV is also
+8 byte aligned.
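[Editorial sketch of the 8-byte TLV header and the alignment arithmetic;
struct and helper names are illustrative.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The packed 8-byte TLV header described above. */
struct tlv_hdr {
    uint32_t type;   /* TLV type */
    uint16_t len;    /* TLV value length, excluding this header */
    uint16_t pad;    /* reserved */
};

/* Round past the header and value to the next 8-byte boundary, giving
 * the buffer offset of the next TLV header. */
static size_t tlv_next_offset(size_t cur, uint16_t value_len)
{
    size_t end = cur + sizeof(struct tlv_hdr) + value_len;
    return (end + 7) & ~(size_t)7;
}
```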
+
+Figure 1 shows an example descriptor buffer with two TLVs.
+
+                  <------- 8 bytes ------->
+
+  8-byte  +––––+  +–––––––––––+–––––+–––––+                     +–+
+  align           |   type    | len | pad |    TLV#1 hdr          |
+                  +–––––––––––+–––––+–––––+    (len=22)           |
+                  |                       |                       |
+                  |  value                |    TLV#1 value        |
+                  |                       |    (padded to 8-byte  |
+                  |                 +–––––+     alignment)        |
+                  |                 |/////|                       |
+   8-byte +––––+  +–––––––––––+–––––––––––+                       |
+   align          |   type    | len | pad |    TLV#2 hdr    DESC_BUF_SIZE
+                  +–––––+–––––+–––––+–––––+    (len=2)            |
+                  |value|/////////////////|    TLV#2 value        |
+                  +–––––+/////////////////|                       |
+                  |///////////////////////|                       |
+                  |///////////////////////|                       |
+                  |///////////////////////|                       |
+                  |////////unused/////////|                       |
+                  |////////space//////////|                       |
+                  |///////////////////////|                       |
+                  |///////////////////////|                       |
+                  |///////////////////////|                       |
+                  +–––––––––––––––––––––––+                     +–+
+
+				fig. 1
+
+TLVs can be nested within the NEST TLV type.
+
+Interrupt credits
+^^^^^^^^^^^^^^^^^
+
+MSI-X vectors used for descriptor ring completions use a credit mechanism for
+efficient device, PCIe bus, OS and driver operations.  Each descriptor ring has
+a credit count which represents the number of outstanding descriptors to be
+processed by the driver.  As the device marks descriptors complete, the credit
+count is incremented.  As the driver processes those outstanding descriptors,
+it returns credits back to the device.  This way, the device knows the driver's
+progress and can make decisions about when to fire the next interrupt or not.
+When the credit count is zero, and the first descriptors are posted for the
+driver, a single interrupt is fired.  Once the interrupt is fired, the
+interrupt is disabled (auto-masked).  In response to the interrupt, the driver
+will process descriptors and PIO write a returned credit value for that
+descriptor ring.  If the driver returns all credits (the driver caught up with
+the device and there is no outstanding work), then the interrupt is unmasked,
+but not fired.  If only partial credits are returned, the interrupt remains
+masked but the device generates an interrupt, signaling the driver that more
+outstanding work is available.
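[Editorial sketch: one possible reading of this credit flow as a toy state
model.  Field and function names are invented, and the masking behavior
follows the paragraph above literally.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct ring_irq {
    uint32_t credits;   /* outstanding completed descriptors */
    bool irq_masked;    /* auto-masked after firing */
    bool irq_fired;     /* set when an interrupt is generated */
};

/* Device side: a descriptor completes.  First work after idle fires a
 * single interrupt, which is then auto-masked. */
static void device_complete_desc(struct ring_irq *r)
{
    if (r->credits++ == 0 && !r->irq_masked) {
        r->irq_fired = true;
        r->irq_masked = true;
    }
}

/* Driver side: PIO-write n credits back after processing descriptors. */
static void driver_return_credits(struct ring_irq *r, uint32_t n)
{
    r->credits -= n;
    if (r->credits == 0) {
        r->irq_masked = false;   /* caught up: unmask, don't fire */
    } else {
        r->irq_fired = true;     /* partial: signal remaining work */
    }
}
```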
+
+Endianness
+----------
+
+Device registers are hard-coded little-endian (LE).  The driver should
+convert between host endianness and LE for device register accesses.
+
+Descriptors are LE.  Descriptor buffer TLVs will have LE type and length
+fields, but the value field can either be LE or network-byte-order, depending
+on context.  TLV values containing network packet data will be in network-byte
+order.  A TLV value containing a field or mask used to compare against network
+packet data is network-byte order.  For example, flow match fields (and masks)
+are network-byte-order since they're matched directly, byte-by-byte, against
+network packet data.  All non-network-packet TLV multi-byte values will be LE.
+
+TLV values in network-byte-order are designated with (N).
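A driver on a big-endian host must therefore byte-swap register values.  A portable sketch that serializes to/from the LE byte layout the device expects (the helper names are illustrative, not from this spec):

```c
#include <stdint.h>

/* Store a 32-bit value in LE byte order, regardless of host endianness. */
static void put_le32(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff;
    p[1] = (v >> 8) & 0xff;
    p[2] = (v >> 16) & 0xff;
    p[3] = (v >> 24) & 0xff;
}

/* Load a 32-bit LE value into host byte order. */
static uint32_t get_le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```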
+
+
+SECTION 5: Test Registers
+=========================
+
+Rocker switch has several test registers to support troubleshooting register
+access, interrupt generation, and DMA operations:
+
+	TEST_REG, offset 0x0010, 32-bit (R/W)
+	TEST_REG64, offset 0x0018, 64-bit (R/W)
+	TEST_IRQ, offset 0x0020, 32-bit (R/W)
+	TEST_DMA_ADDR, offset 0x0028, 64-bit (R/W)
+	TEST_DMA_SIZE, offset 0x0030, 32-bit (R/W)
+	TEST_DMA_CTRL, offset 0x0034, 32-bit (R/W)
+
+Reads of TEST_REG and TEST_REG64 will return a value 2x the last value
+written to the register.  The 32-bit and 64-bit versions are for testing
+32-bit and 64-bit host accesses.
+
+Bits written to TEST_IRQ will cause the same (unmasked) bits to be written to
+IRQ_STAT and an interrupt to be generated.  Use IRQ_MASK to mask and unmask
+particular bits.
+
+To test basic DMA operations, allocate a DMA-able host buffer and put the
+buffer address into TEST_DMA_ADDR and size into TEST_DMA_SIZE.  Then, write to
+TEST_DMA_CTRL to manipulate the buffer contents.  TEST_DMA_CTRL operations are:
+
+	operation		value	description
+	-----------------------------------------------------------
+	TEST_DMA_CTRL_CLEAR	1	clear buffer
+	TEST_DMA_CTRL_FILL	2	fill buffer bytes with 0x96
+	TEST_DMA_CTRL_INVERT	4	invert bytes in buffer
+
+Various buffer addresses and sizes should be tested to verify no address
+boundary issues exist.  In particular, buffers that start on an odd-8-byte
+boundary and/or span multiple host pages should be tested.
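A minimal model of what the three TEST_DMA_CTRL operations do to the host buffer, useful as the driver-side expectation when writing a self-test (the helper is a sketch of the expected effect, not device code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* TEST_DMA_CTRL operation values from the table above. */
enum {
    TEST_DMA_CTRL_CLEAR  = 1,
    TEST_DMA_CTRL_FILL   = 2,
    TEST_DMA_CTRL_INVERT = 4,
};

/* Expected effect of a TEST_DMA_CTRL write on the test buffer. */
static void test_dma_ctrl(uint8_t *buf, size_t len, uint32_t op)
{
    size_t i;

    switch (op) {
    case TEST_DMA_CTRL_CLEAR:
        memset(buf, 0, len);
        break;
    case TEST_DMA_CTRL_FILL:
        memset(buf, 0x96, len);
        break;
    case TEST_DMA_CTRL_INVERT:
        for (i = 0; i < len; i++) {
            buf[i] = ~buf[i];
        }
        break;
    }
}
```

A driver self-test would fill, invert, and clear, reading the buffer back after each step to verify the DMA path.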
+
+
+SECTION 6: Ports
+================
+
+Physical and Logical Ports
+--------------------------
+
+The switch supports up to 62 physical (front-panel) ports.  Register
+PORT_PHYS_COUNT returns the actual number of physical ports available:
+
+	PORT_PHYS_COUNT, offset 0x0304, 32-bit, (R)
+
+In addition to front-panel ports, the switch supports logical ports for
+tunnels.
+
+Front-panel ports and logical tunnel ports are mapped into a single 32-bit port
+space.  A special CPU port is assigned port 0.  The front-panel ports are
+mapped to ports 1-62.  A special loopback port is assigned port 63.  Logical
+tunnel ports are assigned ports 0x00010000-0x0001ffff.
+To summarize the port assignments:
+
+	port			mapping
+	-------------------------------------------------------
+	0			CPU port (for packets to/from host CPU)
+	1-62			front-panel physical ports
+	63			loopback port
+	64-0x0000ffff		RSVD
+	0x00010000-0x0001ffff	logical tunnel ports
+	0x00020000-0xffffffff	RSVD
+
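The port-space layout above can be captured in a small classification helper (a sketch; the enum and function names are illustrative, not part of the register interface):

```c
#include <stdint.h>

enum port_kind {
    PORT_KIND_CPU,       /* port 0: packets to/from host CPU */
    PORT_KIND_PHYS,      /* 1-62: front-panel physical ports */
    PORT_KIND_LOOPBACK,  /* port 63 */
    PORT_KIND_TUNNEL,    /* 0x00010000-0x0001ffff: logical tunnel ports */
    PORT_KIND_RSVD,      /* everything else */
};

static enum port_kind port_kind(uint32_t port)
{
    if (port == 0) {
        return PORT_KIND_CPU;
    }
    if (port <= 62) {
        return PORT_KIND_PHYS;
    }
    if (port == 63) {
        return PORT_KIND_LOOPBACK;
    }
    if (port >= 0x00010000 && port <= 0x0001ffff) {
        return PORT_KIND_TUNNEL;
    }
    return PORT_KIND_RSVD;
}
```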
+Physical Port Mode
+------------------
+
+Switch front-panel ports operate in a mode.  Currently, the only mode is
+OF-DPA.  OF-DPA[1] mode is based on the OpenFlow Data Plane Abstraction
+(OF-DPA) Abstract Switch Specification, Version 1.0, from Broadcom Corporation.
+To set/get the mode for front-panel ports, see port settings, below.
+
+Port Settings
+-------------
+
+Link status for all front-panel ports is available via PORT_PHYS_LINK_STATUS:
+
+	PORT_PHYS_LINK_STATUS, offset 0x0310, 64-bit, (R)
+
+	Value is a port bitmap.  Bits 0 and 63 always read 0.  Bits 1-62
+	read 1 for link UP and 0 for link DOWN for respective front-panel ports.
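Decoding the bitmap, e.g. in a driver polling link state, is a single shift and mask (function name illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if the front-panel port's bit is set in PORT_PHYS_LINK_STATUS.
 * Bits 0 and 63 always read 0, so only ports 1-62 can report link UP. */
static bool port_link_up(uint64_t link_status, uint32_t pport)
{
    if (pport < 1 || pport > 62) {
        return false;
    }
    return (link_status >> pport) & 1;
}
```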
+
+Other properties for front-panel ports are available via DMA CMD descriptors:
+
+	Get PORT_SETTINGS descriptor:
+
+		field		width	description
+		----------------------------------------------
+		PORT_SETTINGS	2	CMD_GET
+		PPORT		4	Physical port #
+
+	Get PORT_SETTINGS completion:
+
+		field		width	description
+		----------------------------------------------
+		PPORT		4	Physical port #
+		SPEED		4	Current port interface speed, in Mbps
+		DUPLEX		1	1 = Full, 0 = Half
+		AUTONEG		1	1 = enabled, 0 = disabled
+		MACADDR		6	Port MAC address
+		MODE		1	0 = OF-DPA
+
+	Set PORT_SETTINGS descriptor:
+
+		field		width	description
+		----------------------------------------------
+		PORT_SETTINGS	2	CMD_SET
+		PPORT		4	Physical port #
+		SPEED		4	Port interface speed, in Mbps
+		DUPLEX		1	1 = Full, 0 = Half
+		AUTONEG		1	1 = enabled, 0 = disabled
+		MACADDR		6	Port MAC address
+		MODE		1	0 = OF-DPA
+
+Port Enable
+-----------
+
+Front-panel ports are initially disabled, which means port ingress and egress
+packets will be dropped.  To enable or disable a port, use PORT_PHYS_ENABLE:
+
+	PORT_PHYS_ENABLE: offset 0x0318, 64-bit, (R/W)
+
+	Value is a bitmap of the first 64 ports.  Bits 0 and 63 are ignored
+	and always read as 0.  Write 1 to enable a port; write 0 to disable it.
+	Default is 0.
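Enabling or disabling one port is then a read-modify-write of this bitmap; a sketch of computing the new register value (helper name illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Compute a new PORT_PHYS_ENABLE value with one front-panel port toggled.
 * Bits 0 and 63 are ignored by the device, so reject those here. */
static uint64_t port_enable_update(uint64_t reg, uint32_t pport, bool enable)
{
    if (pport < 1 || pport > 62) {
        return reg;
    }
    if (enable) {
        return reg | (1ULL << pport);
    }
    return reg & ~(1ULL << pport);
}
```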
+
+SECTION 7: Switch Control
+=========================
+
+This section covers switch-wide register settings.
+
+Control
+-------
+
+This register is used for low-level control of the switch.
+
+	CONTROL: offset 0x0300, 32-bit, (W)
+
+	bit	name		description
+	------------------------------------------------------------------------
+	[0]	CONTROL_RESET	If set, device will perform a reset (same
+				as a PCI reset)
+	[1:31]	Reserved
+
+Switch ID
+---------
+
+The switch has a SWITCH_ID to be used by software to uniquely identify the
+switch:
+
+	SWITCH_ID: offset 0x0320, 64-bit, (R)
+
+	Value is opaque to switch software and no special encoding is implied.
+
+
+SECTION 8: CPU Packet Processing
+================================
+
+Packets ingressing on switch ports that are not forwarded by the switch, but
+rather directed to the host CPU for further processing, are delivered on the
+DMA RX ring.  Likewise, host-CPU-originated packets destined to egress on
+switch ports onto the network are scheduled by software using the DMA TX ring.
+
+Tx Packet Processing
+--------------------
+
+Software schedules packets for egress on switch ports using the DMA TX ring.  A
+TX descriptor buffer describes the packet location and size in host DMA-able
+memory, the destination port, and any hardware-offload functions (such as L3
+payload checksum offload).  Software then bumps the descriptor head to signal
+hardware of new Tx work.  In response, hardware will DMA read Tx descriptors up
+to head, DMA read descriptor buffers and packet data, perform offloading
+functions, and finally frame the packet on the wire (network).  Once packet
+processing is complete, hardware will write back status to the descriptor(s) to
+signal to software that Tx is complete and the software resources (e.g. skb)
+backing the packet can be released.
+
+Figure 2 shows an example 3-fragment packet queued with one Tx descriptor.  A
+TLV is used for each packet fragment.
+
+	                                           pkt frag 1
+	                                           +-------+  +-+
+	                                       +---+       |    |
+	                         desc buf      |   |       |    |
+	                        +--------+     |   |       |    |
+	        Tx ring     +---+        +-----+   |       |    |
+	      +---------+   |   |  TLVs  |         +-------+    |
+	      |         +---+   +--------+         pkt frag 2   |
+	      | desc 0  |       |        +-----+   +-------+    |
+	      +---------+       |  TLVs  |     +---+       |    |
+	head+-+         |       +--------+         |       |    |
+	      | desc 1  |       |        +-----+   +-------+    |pkt
+	      +---------+       |  TLVs  |     |                |
+	      |         |       +--------+     |   pkt frag 3   |
+	      |         |                      |   +-------+    |
+	      +---------+                      +---+       |    |
+	      |         |                          |       |    |
+	      |         |                          |       |    |
+	      +---------+                          |       |    |
+	      |         |                          |       |    |
+	      |         |                          |       |    |
+	      +---------+                          |       |    |
+	      |         |                          +-------+  +-+
+	      |         |
+	      +---------+
+
+				fig 2.
+
+The TLVs for Tx descriptor buffer are:
+
+	field			width	description
+	---------------------------------------------------------------------
+	PPORT			4	Destination physical port #
+	TX_OFFLOAD		1	Hardware offload modes:
+					  0: no offload
+					  1: insert IP csum (ipv4 only)
+					  2: insert TCP/UDP csum
+					  3: L3 csum calc and insert
+					     into csum offset
+					     (TX_L3_CSUM_OFF), a 16-bit
+					     1's complement csum value.
+					     IPv4 pseudo-header and IP csum
+					     already calculated by OS
+					     and inserted.
+					  4: TSO (TCP Segmentation Offload)
+	TX_L3_CSUM_OFF		2	For L3 csum offload mode, the offset,
+					from the beginning of the packet,
+					of the csum field in the L3 header
+	TX_TSO_MSS		2	For TSO offload mode, the
+					Maximum Segment Size in bytes
+	TX_TSO_HDR_LEN		2	For TSO offload mode, the
+					length of ethernet, IP, and
+					TCP/UDP headers, including IP
+					and TCP options.
+	TX_FRAGS		<array>	Packet fragments
+	  TX_FRAG		<nest>	Packet fragment
+	    TX_FRAG_ADDR	8	DMA address of packet fragment
+	    TX_FRAG_LEN		2	Packet fragment length
+
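A Tx buffer is built by appending these TLVs in sequence.  A sketch of a TLV-append helper follows; NOTE the header layout here (2-byte LE type, 2-byte LE value length, no padding) is assumed purely for illustration -- the real header format is defined in the TLV section of this guide:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Append one TLV to a descriptor buffer, returning the new write offset.
 * Hypothetical 4-byte header: 2-byte LE type, 2-byte LE value length. */
static size_t tlv_put(uint8_t *buf, size_t off, uint16_t type,
                      const void *val, uint16_t len)
{
    buf[off + 0] = type & 0xff;
    buf[off + 1] = type >> 8;
    buf[off + 2] = len & 0xff;
    buf[off + 3] = len >> 8;
    memcpy(buf + off + 4, val, len);
    return off + 4 + len;
}
```

With such a helper, a minimal Tx descriptor buffer would append PPORT, TX_OFFLOAD, and one TX_FRAG nest (TX_FRAG_ADDR, TX_FRAG_LEN) per packet fragment.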
+Possible status return codes in descriptor on completion are:
+
+	DESC_COMP_ERR	reason
+	--------------------------------------------------------------------
+	0		OK
+	ENXIO		address or data read err on desc buf or packet
+			fragment
+	EINVAL		bad pport or TSO or csum offloading error
+	ENOMEM		no memory for internal staging tx fragment
+
+Rx Packet Processing
+--------------------
+
+Packets ingressing on switch ports that are not forwarded by the switch, but
+rather directed to the host CPU for further processing, are delivered on the
+DMA RX ring.  Rx descriptor buffers are allocated by software and placed on the
+ring.  Hardware will fill Rx descriptor buffers with packet data, write the
+completion, and signal to software that a new packet is ready.  Since Rx packet
+size is not known a priori, the Rx descriptor buffer must be allocated for
+worst-case packet size.  A single Rx descriptor will contain the entire Rx
+packet data in one RX_PACKET TLV.  Other Rx TLVs describe any hardware offloads
+performed on the packet, such as checksum validation.
+
+The TLVs for Rx descriptor buffer are:
+
+	field		width	description
+	---------------------------------------------------
+	PPORT		4	Source physical port #
+	RX_FLAGS	2	Packet parsing flags:
+				  (1 << 0): IPv4 packet
+				  (1 << 1): IPv6 packet
+				  (1 << 2): csum calculated
+				  (1 << 3): IPv4 csum good
+				  (1 << 4): IP fragment
+				  (1 << 5): TCP packet
+				  (1 << 6): UDP packet
+				  (1 << 7): TCP/UDP csum good
+	RX_CSUM		2	IP calculated checksum:
+				  IPv4: IP payload csum
+				  IPv6: header and payload csum
+				(Only valid if RX_FLAGS: csum calculated is set)
+	RX_PACKET (N)	<var>	Packet data
+
+Possible status return codes in descriptor on completion are:
+
+	DESC_COMP_ERR	reason
+	--------------------------------------------------------------------
+	0		OK
+	ENXIO		address or data read err on desc buf
+	ENOMEM		no memory for internal staging desc buf
+	EMSGSIZE	Rx descriptor buffer wasn't big enough to contain
+			packet data TLV and other TLVs.
+
+
+SECTION 9: OF-DPA Mode
+======================
+
+OF-DPA mode allows the switch to offload flow packet processing functions to
+hardware.  An OpenFlow controller would communicate with an OpenFlow agent
+installed on the switch.  The OpenFlow agent would (directly or indirectly)
+communicate with the Rocker switch driver, which in turn would program switch
+hardware with flow functionality, as defined in OF-DPA.  The block diagram is:
+
+		+----------------------+
+		|        OF            |
+		|  Remote Controller   |
+		+--------+-------------+
+		         |
+		         |
+		+--------+---------+
+		|       OF         |
+		|   Local Agent    |
+		+------------------+
+		|                  |
+		|   Rocker Driver  |
+		+------------------+
+		    <this spec>
+		+------------------+
+		|                  |
+		|   Rocker Switch  |
+		+------------------+
+
+To participate in flow functions, ports must be configured for OF-DPA mode
+during switch initialization.
+
+OF-DPA Flow Table Interface
+---------------------------
+
+There are commands to add, modify, delete, and get stats of flow table entries.
+The commands are issued using the DMA CMD descriptor ring.  The following
+commands are defined:
+
+	CMD_ADD:		add an entry to flow table
+	CMD_MOD:		modify an entry in flow table
+	CMD_DEL:		delete an entry from flow table
+	CMD_GET_STATS:		get stats for flow entry
+
+TLVs for add and modify commands are:
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_CMD		2	CMD_[ADD|MOD]
+	OF_DPA_TBL		2	Flow table ID
+					  0: ingress port
+					  10: vlan
+					  20: termination mac
+					  30: unicast routing
+					  40: multicast routing
+					  50: bridging
+					  60: ACL policy
+	OF_DPA_PRIORITY		4	Flow priority
+	OF_DPA_HARDTIME		4	Hard timeout for flow
+	OF_DPA_IDLETIME		4	Idle timeout for flow
+	OF_DPA_COOKIE		8	Cookie
+
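The fixed table IDs above lend themselves to a simple validity check when a driver builds a flow add/mod command (a sketch; the enum and function names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* OF_DPA_TBL values from the table above. */
enum {
    OF_DPA_TBL_INGRESS_PORT = 0,
    OF_DPA_TBL_VLAN         = 10,
    OF_DPA_TBL_TERM_MAC     = 20,
    OF_DPA_TBL_UCAST_ROUTE  = 30,
    OF_DPA_TBL_MCAST_ROUTE  = 40,
    OF_DPA_TBL_BRIDGING     = 50,
    OF_DPA_TBL_ACL          = 60,
};

static bool of_dpa_tbl_valid(uint16_t tbl)
{
    switch (tbl) {
    case OF_DPA_TBL_INGRESS_PORT:
    case OF_DPA_TBL_VLAN:
    case OF_DPA_TBL_TERM_MAC:
    case OF_DPA_TBL_UCAST_ROUTE:
    case OF_DPA_TBL_MCAST_ROUTE:
    case OF_DPA_TBL_BRIDGING:
    case OF_DPA_TBL_ACL:
        return true;
    default:
        return false;
    }
}
```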
+Additional TLVs based on flow table ID:
+
+Table ID 0: ingress port
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_IN_PPORT		4	ingress physical port number
+	OF_DPA_GOTO_TBL		2	goto table ID; zero to drop
+
+Table ID 10: vlan
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_IN_PPORT		4	ingress physical port number
+	OF_DPA_VLAN_ID		2 (N)	vlan ID
+	OF_DPA_VLAN_ID_MASK	2 (N)	vlan ID mask
+	OF_DPA_GOTO_TBL		2	goto table ID; zero to drop
+	OF_DPA_NEW_VLAN_ID	2 (N)	new vlan ID
+
+Table ID 20: termination mac
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_IN_PPORT		4	ingress physical port number
+	OF_DPA_IN_PPORT_MASK	4	ingress physical port number mask
+	OF_DPA_ETHERTYPE	2 (N)	must be either 0x0800 or 0x86dd
+	OF_DPA_DST_MAC		6 (N)	destination MAC
+	OF_DPA_DST_MAC_MASK	6 (N)	destination MAC mask
+	OF_DPA_VLAN_ID		2 (N)	vlan ID
+	OF_DPA_VLAN_ID_MASK	2 (N)	vlan ID mask
+	OF_DPA_GOTO_TBL		2	only acceptable values are
+					unicast or multicast routing
+					table IDs
+	OF_DPA_OUT_PPORT	2	if specified, must be
+					controller, set zero otherwise
+
+Table ID 30: unicast routing
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_ETHERTYPE	2 (N)	must be either 0x0800 or 0x86dd
+	OF_DPA_DST_IP		4 (N)	destination IPv4 address.
+					Must be unicast address
+	OF_DPA_DST_IP_MASK	4 (N)	IP mask.  Must be prefix mask
+	OF_DPA_DST_IPV6		16 (N)	destination IPv6 address.
+					Must be unicast address
+	OF_DPA_DST_IPV6_MASK	16 (N)	IPv6 mask. Must be prefix mask
+	OF_DPA_GOTO_TBL		2	goto table ID; zero to drop
+	OF_DPA_GROUP_ID		4	data for GROUP action must
+					be an L3 Unicast group entry
+
+Table ID 40: multicast routing
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_ETHERTYPE	2 (N)	must be either 0x0800 or 0x86dd
+	OF_DPA_VLAN_ID		2 (N)	vlan ID
+	OF_DPA_SRC_IP		4 (N)	source IPv4. Optional,
+					can contain IPv4 address,
+					must be completely masked
+					if not used
+	OF_DPA_SRC_IP_MASK	4 (N)	IP Mask
+	OF_DPA_DST_IP		4 (N)	destination IPv4 address.
+					Must be multicast address
+	OF_DPA_SRC_IPV6		16 (N)	source IPv6 Address. Optional.
+					Can contain IPv6 address,
+					must be completely masked
+					if not used
+	OF_DPA_SRC_IPV6_MASK	16 (N)	IPv6 mask.
+	OF_DPA_DST_IPV6		16 (N)	destination IPv6 address.
+					Must be multicast address
+	OF_DPA_GOTO_TBL		2	goto table ID; zero to drop
+	OF_DPA_GROUP_ID		4	data for GROUP action must
+					be an L3 multicast group entry
+
+Table ID 50: bridging
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_VLAN_ID		2 (N)	vlan ID
+	OF_DPA_TUNNEL_ID	4	tunnel ID
+	OF_DPA_DST_MAC		6 (N)	destination MAC
+	OF_DPA_DST_MAC_MASK	6 (N)	destination MAC mask
+	OF_DPA_GOTO_TBL		2	goto table ID; zero to drop
+	OF_DPA_GROUP_ID		4	data for GROUP action must
+					be an L2 Interface, L2
+					Multicast, L2 Flood,
+					or L2 Overlay group entry
+					as appropriate
+	OF_DPA_TUNNEL_LPORT	4	unicast Tenant Bridging
+					flows specify a tunnel
+					logical port ID
+	OF_DPA_OUT_PPORT	2	data for OUTPUT action,
+					restricted to CONTROLLER,
+					set to 0 otherwise
+
+Table ID 60: acl policy
+
+	field			width	description
+	----------------------------------------------------
+	OF_DPA_IN_PPORT		4	ingress physical port number
+	OF_DPA_IN_PPORT_MASK	4	ingress physical port number mask
+	OF_DPA_ETHERTYPE	2 (N)	ethertype
+	OF_DPA_VLAN_ID		2 (N)	vlan ID
+	OF_DPA_VLAN_ID_MASK	2 (N)	vlan ID mask
+	OF_DPA_VLAN_PCP		2 (N)	vlan Priority Code Point
+	OF_DPA_VLAN_PCP_MASK	2 (N)	vlan Priority Code Point mask
+	OF_DPA_SRC_MAC		6 (N)	source MAC
+	OF_DPA_SRC_MAC_MASK	6 (N)	source MAC mask
+	OF_DPA_DST_MAC		6 (N)	destination MAC
+	OF_DPA_DST_MAC_MASK	6 (N)	destination MAC mask
+	OF_DPA_TUNNEL_ID	4	tunnel ID
+	OF_DPA_SRC_IP		4 (N)	source IPv4. Optional,
+					can contain IPv4 address,
+					must be completely masked
+					if not used
+	OF_DPA_SRC_IP_MASK	4 (N)	IP Mask
+	OF_DPA_DST_IP		4 (N)	destination IPv4 address.
+					Must be multicast address
+	OF_DPA_DST_IP_MASK	4 (N)	IP Mask
+	OF_DPA_SRC_IPV6		16 (N)	source IPv6 Address. Optional.
+					Can contain IPv6 address,
+					must be completely masked
+					if not used
+	OF_DPA_SRC_IPV6_MASK	16 (N)	IPv6 mask
+	OF_DPA_DST_IPV6		16 (N)	destination IPv6 Address. Must
+					be multicast address.
+	OF_DPA_DST_IPV6_MASK	16 (N)	IPv6 mask
+	OF_DPA_SRC_ARP_IP	4 (N)	source IPv4 address in the ARP
+					payload.  Only used if ethertype
+					== 0x0806.
+	OF_DPA_SRC_ARP_IP_MASK	4 (N)	IP Mask
+	OF_DPA_IP_PROTO		1	IP protocol
+	OF_DPA_IP_PROTO_MASK	1	IP protocol mask
+	OF_DPA_IP_DSCP		1	DSCP
+	OF_DPA_IP_DSCP_MASK	1	DSCP mask
+	OF_DPA_IP_ECN		1	ECN
+	OF_DPA_IP_ECN_MASK	1	ECN mask
+	OF_DPA_L4_SRC_PORT	2 (N)	L4 source port, only for
+					TCP, UDP, or SCTP
+	OF_DPA_L4_SRC_PORT_MASK	2 (N)	L4 source port mask
+	OF_DPA_L4_DST_PORT	2 (N)	L4 destination port, only for
+					TCP, UDP, or SCTP
+	OF_DPA_L4_DST_PORT_MASK	2 (N)	L4 destination port mask
+	OF_DPA_ICMP_TYPE	1	ICMP type, only if IP
+					protocol is 1
+	OF_DPA_ICMP_TYPE_MASK	1	ICMP type mask
+	OF_DPA_ICMP_CODE	1	ICMP code
+	OF_DPA_ICMP_CODE_MASK	1	ICMP code mask
+	OF_DPA_IPV6_LABEL	4 (N)	IPv6 flow label
+	OF_DPA_IPV6_LABEL_MASK	4 (N)	IPv6 flow label mask
+	OF_DPA_GROUP_ID		4	data for GROUP action
+	OF_DPA_QUEUE_ID_ACTION	1	write the queue ID
+	OF_DPA_NEW_QUEUE_ID	1	queue ID
+	OF_DPA_VLAN_PCP_ACTION	1	write the VLAN priority
+	OF_DPA_NEW_VLAN_PCP	1	VLAN priority
+	OF_DPA_IP_DSCP_ACTION	1	write the DSCP
+	OF_DPA_NEW_IP_DSCP	1	new DSCP
+	OF_DPA_TUNNEL_LPORT	4	restricted to valid tunnel
+					logical port, set to 0
+					otherwise.
+	OF_DPA_OUT_PPORT	2	data for OUTPUT action,
+					restricted to CONTROLLER,
+					set to 0 otherwise
+	OF_DPA_CLEAR_ACTIONS	4	if 1, packets matching flow are
+					dropped (all other instructions
+					ignored)
+
+TLVs for flow delete and get stats command are:
+
+	field			width	description
+	---------------------------------------------------
+	OF_DPA_CMD		2	CMD_[DEL|GET_STATS]
+	OF_DPA_COOKIE		8	Cookie
+
+On completion of get stats command, the descriptor buffer is written back with
+the following TLVs:
+
+	field			width	description
+	---------------------------------------------------
+	OF_DPA_STAT_DURATION	4	Flow duration
+	OF_DPA_STAT_RX_PKTS	8	Received packets
+	OF_DPA_STAT_TX_PKTS	8	Transmit packets
+
+Possible status return codes in descriptor on completion are:
+
+	DESC_COMP_ERR	command			reason
+	--------------------------------------------------------------------
+	0		all			OK
+	EFAULT		all			head or tail index outside
+						of ring
+	ENXIO		all			address or data read err on
+						desc buf
+	EMSGSIZE	GET_STATS		cmd descriptor buffer wasn't
+						big enough to contain write-back
+						TLVs
+	EINVAL		all			invalid parameters passed in
+	EEXIST		ADD			entry already exists
+	ENOSPC		ADD			no space left in flow table
+	ENOENT		MOD|DEL|GET_STATS	cookie invalid
+
+Group Table Interface
+---------------------
+
+There are commands to add, modify, delete, and get stats of group table
+entries.  The commands are issued using the DMA CMD descriptor ring.  The
+following commands are defined:
+
+	CMD_ADD:		add an entry to group table
+	CMD_MOD:		modify an entry in group table
+	CMD_DEL:		delete an entry from group table
+	CMD_GET_STATS:		get stats for group entry
+
+TLVs for add and modify commands are:
+
+	field			width	description
+	-----------------------------------------------------------
+	FLOW_GROUP_CMD		2	CMD_[ADD|MOD]
+	FLOW_GROUP_ID		2	Flow group ID
+	FLOW_GROUP_TYPE		1	Group type:
+					  0: L2 interface
+					  1: L2 rewrite
+					  2: L3 unicast
+					  3: L2 multicast
+					  4: L2 flood
+					  5: L3 interface
+					  6: L3 multicast
+					  7: L3 ECMP
+					  8: L2 overlay
+	FLOW_VLAN_ID		2	Vlan ID (types 0, 3, 4, 6)
+	FLOW_L2_PORT		2	Port (type 0)
+	FLOW_INDEX		4	Index (all types but 0)
+	FLOW_OVERLAY_TYPE	1	Overlay sub-type (type 8):
+					  0: Flood unicast tunnel
+					  1: Flood multicast tunnel
+					  2: Multicast unicast tunnel
+					  3: Multicast multicast tunnel
+	FLOW_GROUP_ACTION	<nest>	Group actions
+	  FLOW_GROUP_ID		2	next group ID in chain (all
+					types except 0)
+	  FLOW_OUT_PORT		4	egress port (types 0, 8)
+	  FLOW_POP_VLAN_TAG	1	strip outer VLAN tag (type 1
+					only)
+	  FLOW_VLAN_ID		2	(types 1, 5)
+	  FLOW_SRC_MAC		6	(types 1, 2, 5)
+	  FLOW_DST_MAC		6	(types 1, 2)
+
+TLVs for flow delete and get stats command are:
+
+	field			width	description
+	-----------------------------------------------------------
+	FLOW_GROUP_CMD		2	CMD_[DEL|GET_STATS]
+	FLOW_GROUP_ID		2	Flow group ID
+
+On completion of get stats command, the descriptor buffer is written back with
+the following TLVs:
+
+	field			width	description
+	---------------------------------------------------
+	FLOW_GROUP_ID		2	Flow group ID
+	FLOW_STAT_DURATION	4	Flow duration
+	FLOW_STAT_REF_COUNT	4	Flow reference count
+	FLOW_STAT_BUCKET_COUNT	4	Flow bucket count
+
+Possible status return codes in descriptor on completion are:
+
+	DESC_COMP_ERR	command			reason
+	--------------------------------------------------------------------
+	0		all			OK
+	EFAULT		all			head or tail index outside
+						of ring
+	ENXIO		all			address or data read err on
+						desc buf
+	ENOSPC		GET_STATS		cmd descriptor buffer wasn't
+						big enough to contain write-back
+						TLVs
+	EINVAL		ADD|MOD			invalid parameters passed in
+	EEXIST		ADD			entry already exists
+	ENOSPC		ADD			no space left in flow table
+	ENOENT		MOD|DEL|GET_STATS	group ID invalid
+	EBUSY		DEL			group reference count non-zero
+	ENODEV		ADD			next group ID doesn't exist
+
+
+
+References
+==========
+
+[1] OpenFlow Data Plane Abstraction (OF-DPA) Abstract Switch Specification,
+Version 1.0, from Broadcom Corporation, February 21, 2014.
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [Qemu-devel] [PATCH v2 05/10] pci: add rocker device ID
  2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
                   ` (4 preceding siblings ...)
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 04/10] rocker: add register programming guide sfeldma
@ 2015-01-06  2:24 ` sfeldma
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 06/10] pci: add network device class 'other' for network switches sfeldma
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 docs/specs/pci-ids.txt |    1 +
 include/hw/pci/pci.h   |    1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt
index c6732fe..e4a4490 100644
--- a/docs/specs/pci-ids.txt
+++ b/docs/specs/pci-ids.txt
@@ -45,6 +45,7 @@ PCI devices (other than virtio):
 1b36:0003  PCI Dual-port 16550A adapter (docs/specs/pci-serial.txt)
 1b36:0004  PCI Quad-port 16550A adapter (docs/specs/pci-serial.txt)
 1b36:0005  PCI test device (docs/specs/pci-testdev.txt)
+1b36:0006  PCI Rocker Ethernet switch device
 1b36:0007  PCI SD Card Host Controller Interface (SDHCI)
 
 All these devices are documented in docs/specs.
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 97a83d3..2c58bf2 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -88,6 +88,7 @@
 #define PCI_DEVICE_ID_REDHAT_SERIAL2     0x0003
 #define PCI_DEVICE_ID_REDHAT_SERIAL4     0x0004
 #define PCI_DEVICE_ID_REDHAT_TEST        0x0005
+#define PCI_DEVICE_ID_REDHAT_ROCKER      0x0006
 #define PCI_DEVICE_ID_REDHAT_SDHCI       0x0007
 #define PCI_DEVICE_ID_REDHAT_QXL         0x0100
 
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [Qemu-devel] [PATCH v2 06/10] pci: add network device class 'other' for network switches
  2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
                   ` (5 preceding siblings ...)
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 05/10] pci: add rocker device ID sfeldma
@ 2015-01-06  2:24 ` sfeldma
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device sfeldma
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Rocker is an ethernet switch device, so add 'other' network device class as
defined by PCI to cover these types of devices.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 include/hw/pci/pci_ids.h |    1 +
 1 file changed, 1 insertion(+)

diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h
index d7be386..c6de710 100644
--- a/include/hw/pci/pci_ids.h
+++ b/include/hw/pci/pci_ids.h
@@ -23,6 +23,7 @@
 #define PCI_CLASS_STORAGE_OTHER          0x0180
 
 #define PCI_CLASS_NETWORK_ETHERNET       0x0200
+#define PCI_CLASS_NETWORK_OTHER          0x0280
 
 #define PCI_CLASS_DISPLAY_VGA            0x0300
 #define PCI_CLASS_DISPLAY_OTHER          0x0380
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device
  2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
                   ` (6 preceding siblings ...)
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 06/10] pci: add network device class 'other' for network switches sfeldma
@ 2015-01-06  2:24 ` sfeldma
  2015-01-06 15:12   ` Stefan Hajnoczi
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 08/10] qmp: add rocker device support sfeldma
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Rocker is a simulated ethernet switch device.  The device supports up to 62
front-panel ports and supports L2 switching and L3 routing functions, as well
as L2/L3/L4 ACLs.  The device presents a single PCI device for each switch,
with a memory-mapped register space for device driver access.

The rocker device is created with -device; for example, a 4-port switch:

  -device rocker,name=sw1,len-ports=4,ports[0]=dev0,ports[1]=dev1, \
         ports[2]=dev2,ports[3]=dev3

Each port is a netdev and can be paired with a backend using -netdev id=<port name>.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 default-configs/pci.mak       |    1 +
 hw/net/Makefile.objs          |    3 +
 hw/net/rocker/rocker.c        | 1394 ++++++++++++++++++++++++
 hw/net/rocker/rocker.h        |   76 ++
 hw/net/rocker/rocker_desc.c   |  379 +++++++
 hw/net/rocker/rocker_desc.h   |   57 +
 hw/net/rocker/rocker_fp.c     |  242 +++++
 hw/net/rocker/rocker_fp.h     |   54 +
 hw/net/rocker/rocker_hw.h     |  475 +++++++++
 hw/net/rocker/rocker_of_dpa.c | 2335 +++++++++++++++++++++++++++++++++++++++++
 hw/net/rocker/rocker_of_dpa.h |   25 +
 hw/net/rocker/rocker_tlv.h    |  247 +++++
 hw/net/rocker/rocker_world.c  |  108 ++
 hw/net/rocker/rocker_world.h  |   63 ++
 14 files changed, 5459 insertions(+)
 create mode 100644 hw/net/rocker/rocker.c
 create mode 100644 hw/net/rocker/rocker.h
 create mode 100644 hw/net/rocker/rocker_desc.c
 create mode 100644 hw/net/rocker/rocker_desc.h
 create mode 100644 hw/net/rocker/rocker_fp.c
 create mode 100644 hw/net/rocker/rocker_fp.h
 create mode 100644 hw/net/rocker/rocker_hw.h
 create mode 100644 hw/net/rocker/rocker_of_dpa.c
 create mode 100644 hw/net/rocker/rocker_of_dpa.h
 create mode 100644 hw/net/rocker/rocker_tlv.h
 create mode 100644 hw/net/rocker/rocker_world.c
 create mode 100644 hw/net/rocker/rocker_world.h

diff --git a/default-configs/pci.mak b/default-configs/pci.mak
index a186c39..a7f3278 100644
--- a/default-configs/pci.mak
+++ b/default-configs/pci.mak
@@ -32,3 +32,4 @@ CONFIG_PCI_TESTDEV=y
 CONFIG_NVME_PCI=y
 CONFIG_SD=y
 CONFIG_SDHCI=y
+CONFIG_ROCKER=y
diff --git a/hw/net/Makefile.objs b/hw/net/Makefile.objs
index ea93293..4f8e826 100644
--- a/hw/net/Makefile.objs
+++ b/hw/net/Makefile.objs
@@ -35,3 +35,6 @@ obj-y += vhost_net.o
 
 obj-$(CONFIG_ETSEC) += fsl_etsec/etsec.o fsl_etsec/registers.o \
 			fsl_etsec/rings.o fsl_etsec/miim.o
+
+common-obj-y += rocker/rocker.o rocker/rocker_fp.o rocker/rocker_desc.o \
+                        rocker/rocker_world.o rocker/rocker_of_dpa.o
diff --git a/hw/net/rocker/rocker.c b/hw/net/rocker/rocker.c
new file mode 100644
index 0000000..b410552
--- /dev/null
+++ b/hw/net/rocker/rocker.c
@@ -0,0 +1,1394 @@
+/*
+ * QEMU rocker switch emulation - PCI device
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ * Copyright (c) 2014 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/msix.h"
+#include "net/net.h"
+#include "net/eth.h"
+#include "qemu/iov.h"
+#include "qemu/bitops.h"
+
+#include "rocker.h"
+#include "rocker_hw.h"
+#include "rocker_fp.h"
+#include "rocker_desc.h"
+#include "rocker_tlv.h"
+#include "rocker_world.h"
+#include "rocker_of_dpa.h"
+
+struct rocker {
+    /* private */
+    PCIDevice parent_obj;
+    /* public */
+
+    MemoryRegion mmio;
+    MemoryRegion msix_bar;
+
+    /* switch configuration */
+    char *name;                  /* switch name */
+    uint32_t fp_ports;           /* front-panel port count */
+    NICPeers *fp_ports_peers;
+    MACAddr fp_start_macaddr;    /* front-panel port 0 mac addr */
+    uint64_t switch_id;          /* switch id */
+
+    /* front-panel ports */
+    struct fp_port *fp_port[ROCKER_FP_PORTS_MAX];
+
+    /* register backings */
+    uint32_t test_reg;
+    uint64_t test_reg64;
+    dma_addr_t test_dma_addr;
+    uint32_t test_dma_size;
+
+    /* desc rings */
+    struct desc_ring **rings;
+
+    /* switch worlds */
+    struct world *worlds[ROCKER_WORLD_TYPE_MAX];
+    struct world *world_dflt;
+
+    QLIST_ENTRY(rocker) next;
+};
+
+#define ROCKER "rocker"
+
+#define to_rocker(obj) \
+    OBJECT_CHECK(struct rocker, (obj), ROCKER)
+
+static QLIST_HEAD(, rocker) rockers;
+
+struct rocker *rocker_find(const char *name)
+{
+    struct rocker *r;
+
+    QLIST_FOREACH(r, &rockers, next) {
+        if (strcmp(r->name, name) == 0) {
+            return r;
+        }
+    }
+
+    return NULL;
+}
+
+struct world *rocker_get_world(struct rocker *r, enum rocker_world_type type)
+{
+    if (type < ROCKER_WORLD_TYPE_MAX) {
+        return r->worlds[type];
+    }
+    return NULL;
+}
+
+uint32_t rocker_fp_ports(struct rocker *r)
+{
+    return r->fp_ports;
+}
+
+static uint32_t rocker_get_pport_by_tx_ring(struct rocker *r,
+                                            struct desc_ring *ring)
+{
+    return (desc_ring_index(ring) - 2) / 2 + 1;
+}
+
+static int tx_consume(struct rocker *r, struct desc_info *info)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+    char *buf = desc_get_buf(info, true);
+    struct rocker_tlv *tlv_frag;
+    struct rocker_tlv *tlvs[ROCKER_TLV_TX_MAX + 1];
+    struct iovec iov[ROCKER_TX_FRAGS_MAX] = { { 0, }, };
+    uint32_t pport;
+    uint32_t port;
+    uint16_t tx_offload = ROCKER_TX_OFFLOAD_NONE;
+    uint16_t tx_l3_csum_off = 0;
+    uint16_t tx_tso_mss = 0;
+    uint16_t tx_tso_hdr_len = 0;
+    int iovcnt = 0;
+    int err = 0;
+    int rem;
+    int i;
+
+    if (!buf) {
+        return -ENXIO;
+    }
+
+    rocker_tlv_parse(tlvs, ROCKER_TLV_TX_MAX, buf, desc_tlv_size(info));
+
+    if (!tlvs[ROCKER_TLV_TX_FRAGS]) {
+        return -EINVAL;
+    }
+
+    pport = rocker_get_pport_by_tx_ring(r, desc_get_ring(info));
+    if (!fp_port_from_pport(pport, &port)) {
+        return -EINVAL;
+    }
+
+    if (tlvs[ROCKER_TLV_TX_OFFLOAD]) {
+        tx_offload = rocker_tlv_get_u8(tlvs[ROCKER_TLV_TX_OFFLOAD]);
+    }
+
+    switch (tx_offload) {
+    case ROCKER_TX_OFFLOAD_L3_CSUM:
+        if (!tlvs[ROCKER_TLV_TX_L3_CSUM_OFF]) {
+            return -EINVAL;
+        }
+        break;
+    case ROCKER_TX_OFFLOAD_TSO:
+        if (!tlvs[ROCKER_TLV_TX_TSO_MSS] ||
+            !tlvs[ROCKER_TLV_TX_TSO_HDR_LEN]) {
+            return -EINVAL;
+        }
+        break;
+    }
+
+    if (tlvs[ROCKER_TLV_TX_L3_CSUM_OFF]) {
+        tx_l3_csum_off = rocker_tlv_get_le16(tlvs[ROCKER_TLV_TX_L3_CSUM_OFF]);
+    }
+
+    if (tlvs[ROCKER_TLV_TX_TSO_MSS]) {
+        tx_tso_mss = rocker_tlv_get_le16(tlvs[ROCKER_TLV_TX_TSO_MSS]);
+    }
+
+    if (tlvs[ROCKER_TLV_TX_TSO_HDR_LEN]) {
+        tx_tso_hdr_len = rocker_tlv_get_le16(tlvs[ROCKER_TLV_TX_TSO_HDR_LEN]);
+    }
+
+    rocker_tlv_for_each_nested(tlv_frag, tlvs[ROCKER_TLV_TX_FRAGS], rem) {
+        hwaddr frag_addr;
+        uint16_t frag_len;
+
+        if (rocker_tlv_type(tlv_frag) != ROCKER_TLV_TX_FRAG) {
+            return -EINVAL;
+        }
+
+        rocker_tlv_parse_nested(tlvs, ROCKER_TLV_TX_FRAG_ATTR_MAX, tlv_frag);
+
+        if (!tlvs[ROCKER_TLV_TX_FRAG_ATTR_ADDR] ||
+            !tlvs[ROCKER_TLV_TX_FRAG_ATTR_LEN]) {
+            return -EINVAL;
+        }
+
+        frag_addr = rocker_tlv_get_le64(tlvs[ROCKER_TLV_TX_FRAG_ATTR_ADDR]);
+        frag_len = rocker_tlv_get_le16(tlvs[ROCKER_TLV_TX_FRAG_ATTR_LEN]);
+
+        if (iovcnt >= ROCKER_TX_FRAGS_MAX) {
+            err = -EINVAL;
+            goto err_too_many_frags;
+        }
+
+        iov[iovcnt].iov_len = frag_len;
+        iov[iovcnt].iov_base = g_malloc(frag_len);
+        if (!iov[iovcnt].iov_base) {
+            err = -ENOMEM;
+            goto err_no_mem;
+        }
+
+        if (pci_dma_read(dev, frag_addr, iov[iovcnt].iov_base,
+                         iov[iovcnt].iov_len)) {
+            err = -ENXIO;
+            iovcnt++;    /* count the frag so the cleanup loop frees it */
+            goto err_bad_io;
+        }
+
+        iovcnt++;
+    }
+
+    if (iovcnt) {
+        /* XXX perform Tx offloads */
+        /* XXX   silence compiler for now */
+        tx_l3_csum_off += tx_tso_mss = tx_tso_hdr_len = 0;
+    }
+
+    err = fp_port_eg(r->fp_port[port], iov, iovcnt);
+
+err_no_mem:
+err_bad_io:
+err_too_many_frags:
+    for (i = 0; i < iovcnt; i++) {
+        if (iov[i].iov_base) {
+            g_free(iov[i].iov_base);
+        }
+    }
+
+    return err;
+}
+
+static int cmd_get_port_settings(struct rocker *r,
+                                 struct desc_info *info, char *buf,
+                                 struct rocker_tlv *cmd_info_tlv)
+{
+    struct rocker_tlv *tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MAX + 1];
+    struct rocker_tlv *nest;
+    struct fp_port *fp_port;
+    uint32_t pport;
+    uint32_t port;
+    uint32_t speed;
+    uint8_t duplex;
+    uint8_t autoneg;
+    uint8_t learning;
+    MACAddr macaddr;
+    enum rocker_world_type mode;
+    size_t tlv_size;
+    int pos;
+    int err;
+
+    rocker_tlv_parse_nested(tlvs, ROCKER_TLV_CMD_PORT_SETTINGS_MAX,
+                            cmd_info_tlv);
+
+    if (!tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_PPORT]) {
+        return -EINVAL;
+    }
+
+    pport = rocker_tlv_get_le32(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_PPORT]);
+    if (!fp_port_from_pport(pport, &port)) {
+        return -EINVAL;
+    }
+    fp_port = r->fp_port[port];
+
+    err = fp_port_get_settings(fp_port, &speed, &duplex, &autoneg);
+    if (err) {
+        return err;
+    }
+
+    fp_port_get_macaddr(fp_port, &macaddr);
+    mode = world_type(fp_port_get_world(fp_port));
+    learning = fp_port_get_learning(fp_port);
+
+    tlv_size = rocker_tlv_total_size(0) +                 /* nest */
+               rocker_tlv_total_size(sizeof(uint32_t)) +  /*   pport */
+               rocker_tlv_total_size(sizeof(uint32_t)) +  /*   speed */
+               rocker_tlv_total_size(sizeof(uint8_t)) +   /*   duplex */
+               rocker_tlv_total_size(sizeof(uint8_t)) +   /*   autoneg */
+               rocker_tlv_total_size(sizeof(macaddr.a)) + /*   macaddr */
+               rocker_tlv_total_size(sizeof(uint8_t)) +   /*   mode */
+               rocker_tlv_total_size(sizeof(uint8_t));    /*   learning */
+
+    if (tlv_size > desc_buf_size(info)) {
+        return -EMSGSIZE;
+    }
+
+    pos = 0;
+    nest = rocker_tlv_nest_start(buf, &pos, ROCKER_TLV_CMD_INFO);
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_PPORT, pport);
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_SPEED, speed);
+    rocker_tlv_put_u8(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_DUPLEX, duplex);
+    rocker_tlv_put_u8(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_AUTONEG, autoneg);
+    rocker_tlv_put(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_MACADDR,
+                   sizeof(macaddr.a), macaddr.a);
+    rocker_tlv_put_u8(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_MODE, mode);
+    rocker_tlv_put_u8(buf, &pos, ROCKER_TLV_CMD_PORT_SETTINGS_LEARNING,
+                      learning);
+    rocker_tlv_nest_end(buf, &pos, nest);
+
+    return desc_set_buf(info, tlv_size);
+}
+
+static int cmd_set_port_settings(struct rocker *r,
+                                 struct rocker_tlv *cmd_info_tlv)
+{
+    struct rocker_tlv *tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MAX + 1];
+    struct fp_port *fp_port;
+    uint32_t pport;
+    uint32_t port;
+    uint32_t speed;
+    uint8_t duplex;
+    uint8_t autoneg;
+    uint8_t learning;
+    MACAddr macaddr;
+    enum rocker_world_type mode;
+    int err;
+
+    rocker_tlv_parse_nested(tlvs, ROCKER_TLV_CMD_PORT_SETTINGS_MAX,
+                            cmd_info_tlv);
+
+    if (!tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_PPORT]) {
+        return -EINVAL;
+    }
+
+    pport = rocker_tlv_get_le32(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_PPORT]);
+    if (!fp_port_from_pport(pport, &port)) {
+        return -EINVAL;
+    }
+    fp_port = r->fp_port[port];
+
+    if (tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_SPEED] &&
+        tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_DUPLEX] &&
+        tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_AUTONEG]) {
+
+        speed = rocker_tlv_get_le32(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_SPEED]);
+        duplex = rocker_tlv_get_u8(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_DUPLEX]);
+        autoneg = rocker_tlv_get_u8(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_AUTONEG]);
+
+        err = fp_port_set_settings(fp_port, speed, duplex, autoneg);
+        if (err) {
+            return err;
+        }
+    }
+
+    if (tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MACADDR]) {
+        if (rocker_tlv_len(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MACADDR]) !=
+            sizeof(macaddr.a)) {
+            return -EINVAL;
+        }
+        memcpy(macaddr.a,
+               rocker_tlv_data(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MACADDR]),
+               sizeof(macaddr.a));
+        fp_port_set_macaddr(fp_port, &macaddr);
+    }
+
+    if (tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MODE]) {
+        mode = rocker_tlv_get_u8(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_MODE]);
+        fp_port_set_world(fp_port, r->worlds[mode]);
+    }
+
+    if (tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_LEARNING]) {
+        learning =
+            rocker_tlv_get_u8(tlvs[ROCKER_TLV_CMD_PORT_SETTINGS_LEARNING]);
+        fp_port_set_learning(fp_port, learning);
+    }
+
+    return 0;
+}
+
+static int cmd_consume(struct rocker *r, struct desc_info *info)
+{
+    char *buf = desc_get_buf(info, false);
+    struct rocker_tlv *tlvs[ROCKER_TLV_CMD_MAX + 1];
+    struct rocker_tlv *info_tlv;
+    struct world *world;
+    uint16_t cmd;
+    int err;
+
+    if (!buf) {
+        return -ENXIO;
+    }
+
+    rocker_tlv_parse(tlvs, ROCKER_TLV_CMD_MAX, buf, desc_tlv_size(info));
+
+    if (!tlvs[ROCKER_TLV_CMD_TYPE] || !tlvs[ROCKER_TLV_CMD_INFO]) {
+        return -EINVAL;
+    }
+
+    cmd = rocker_tlv_get_le16(tlvs[ROCKER_TLV_CMD_TYPE]);
+    info_tlv = tlvs[ROCKER_TLV_CMD_INFO];
+
+    /* This might be reworked to something like this:
+     * every world will have an array of command handlers from
+     * ROCKER_TLV_CMD_TYPE_UNSPEC to ROCKER_TLV_CMD_TYPE_MAX. It is
+     * up to each world to implement whatever commands it wants.
+     * A world can reference "generic" commands such as
+     * cmd_set_port_settings or cmd_get_port_settings.
+     */
+
+    switch (cmd) {
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_ADD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_MOD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_DEL:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_GET_STATS:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_ADD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_MOD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_DEL:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_GET_STATS:
+        world = r->worlds[ROCKER_WORLD_TYPE_OF_DPA];
+        err = world_do_cmd(world, info, buf, cmd, info_tlv);
+        break;
+    case ROCKER_TLV_CMD_TYPE_GET_PORT_SETTINGS:
+        err = cmd_get_port_settings(r, info, buf, info_tlv);
+        break;
+    case ROCKER_TLV_CMD_TYPE_SET_PORT_SETTINGS:
+        err = cmd_set_port_settings(r, info_tlv);
+        break;
+    default:
+        err = -EINVAL;
+        break;
+    }
+
+    return err;
+}
+
+static void rocker_msix_irq(struct rocker *r, unsigned vector)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+
+    DPRINTF("MSI-X notify request for vector %d\n", vector);
+    if (vector >= ROCKER_MSIX_VEC_COUNT(r->fp_ports)) {
+        DPRINTF("incorrect vector %d\n", vector);
+        return;
+    }
+    msix_notify(dev, vector);
+}
+
+int rocker_event_link_changed(struct rocker *r, uint32_t pport, bool link_up)
+{
+    struct desc_ring *ring = r->rings[ROCKER_RING_EVENT];
+    struct desc_info *info = desc_ring_fetch_desc(ring);
+    struct rocker_tlv *nest;
+    char *buf;
+    size_t tlv_size;
+    int pos;
+    int err;
+
+    if (!info) {
+        return -ENOBUFS;
+    }
+
+    tlv_size = rocker_tlv_total_size(sizeof(uint16_t)) +  /* event type */
+               rocker_tlv_total_size(0) +                 /* nest */
+               rocker_tlv_total_size(sizeof(uint32_t)) +  /*   pport */
+               rocker_tlv_total_size(sizeof(uint8_t));    /*   link up */
+
+    if (tlv_size > desc_buf_size(info)) {
+        err = -EMSGSIZE;
+        goto err_too_big;
+    }
+
+    buf = desc_get_buf(info, false);
+    if (!buf) {
+        err = -ENOMEM;
+        goto err_no_mem;
+    }
+
+    pos = 0;
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_EVENT_TYPE,
+                        ROCKER_TLV_EVENT_TYPE_LINK_CHANGED);
+    nest = rocker_tlv_nest_start(buf, &pos, ROCKER_TLV_EVENT_INFO);
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_EVENT_LINK_CHANGED_PPORT, pport);
+    rocker_tlv_put_u8(buf, &pos, ROCKER_TLV_EVENT_LINK_CHANGED_LINKUP,
+                      link_up ? 1 : 0);
+    rocker_tlv_nest_end(buf, &pos, nest);
+
+    err = desc_set_buf(info, tlv_size);
+
+err_too_big:
+err_no_mem:
+    if (desc_ring_post_desc(ring, err)) {
+        rocker_msix_irq(r, ROCKER_MSIX_VEC_EVENT);
+    }
+
+    return err;
+}
+
+int rocker_event_mac_vlan_seen(struct rocker *r, uint32_t pport, uint8_t *addr,
+                               uint16_t vlan_id)
+{
+    struct desc_ring *ring = r->rings[ROCKER_RING_EVENT];
+    struct desc_info *info;
+    struct fp_port *fp_port;
+    uint32_t port;
+    struct rocker_tlv *nest;
+    char *buf;
+    size_t tlv_size;
+    int pos;
+    int err;
+
+    if (!fp_port_from_pport(pport, &port)) {
+        return -EINVAL;
+    }
+    fp_port = r->fp_port[port];
+    if (!fp_port_get_learning(fp_port)) {
+        return 0;
+    }
+
+    info = desc_ring_fetch_desc(ring);
+    if (!info) {
+        return -ENOBUFS;
+    }
+
+    tlv_size = rocker_tlv_total_size(sizeof(uint16_t)) +  /* event type */
+               rocker_tlv_total_size(0) +                 /* nest */
+               rocker_tlv_total_size(sizeof(uint32_t)) +  /*   pport */
+               rocker_tlv_total_size(ETH_ALEN) +          /*   mac addr */
+               rocker_tlv_total_size(sizeof(uint16_t));   /*   vlan_id */
+
+    if (tlv_size > desc_buf_size(info)) {
+        err = -EMSGSIZE;
+        goto err_too_big;
+    }
+
+    buf = desc_get_buf(info, false);
+    if (!buf) {
+        err = -ENOMEM;
+        goto err_no_mem;
+    }
+
+    pos = 0;
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_EVENT_TYPE,
+                        ROCKER_TLV_EVENT_TYPE_MAC_VLAN_SEEN);
+    nest = rocker_tlv_nest_start(buf, &pos, ROCKER_TLV_EVENT_INFO);
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_EVENT_MAC_VLAN_PPORT, pport);
+    rocker_tlv_put(buf, &pos, ROCKER_TLV_EVENT_MAC_VLAN_MAC, ETH_ALEN, addr);
+    rocker_tlv_put_u16(buf, &pos, ROCKER_TLV_EVENT_MAC_VLAN_VLAN_ID, vlan_id);
+    rocker_tlv_nest_end(buf, &pos, nest);
+
+    err = desc_set_buf(info, tlv_size);
+
+err_too_big:
+err_no_mem:
+    if (desc_ring_post_desc(ring, err)) {
+        rocker_msix_irq(r, ROCKER_MSIX_VEC_EVENT);
+    }
+
+    return err;
+}
+
+static struct desc_ring *rocker_get_rx_ring_by_pport(struct rocker *r,
+                                                     uint32_t pport)
+{
+    return r->rings[(pport - 1) * 2 + 3];
+}
+
+int rx_produce(struct world *world, uint32_t pport,
+               const struct iovec *iov, int iovcnt)
+{
+    struct rocker *r = world_rocker(world);
+    PCIDevice *dev = PCI_DEVICE(r);
+    struct desc_ring *ring = rocker_get_rx_ring_by_pport(r, pport);
+    struct desc_info *info = desc_ring_fetch_desc(ring);
+    char *data;
+    size_t data_size = iov_size(iov, iovcnt);
+    char *buf;
+    uint16_t rx_flags = 0;
+    uint16_t rx_csum = 0;
+    size_t tlv_size;
+    struct rocker_tlv *tlvs[ROCKER_TLV_RX_MAX + 1];
+    hwaddr frag_addr;
+    uint16_t frag_max_len;
+    int pos;
+    int err;
+
+    if (!info) {
+        return -ENOBUFS;
+    }
+
+    buf = desc_get_buf(info, false);
+    if (!buf) {
+        err = -ENXIO;
+        goto out;
+    }
+    rocker_tlv_parse(tlvs, ROCKER_TLV_RX_MAX, buf, desc_tlv_size(info));
+
+    if (!tlvs[ROCKER_TLV_RX_FRAG_ADDR] ||
+        !tlvs[ROCKER_TLV_RX_FRAG_MAX_LEN]) {
+        err = -EINVAL;
+        goto out;
+    }
+
+    frag_addr = rocker_tlv_get_le64(tlvs[ROCKER_TLV_RX_FRAG_ADDR]);
+    frag_max_len = rocker_tlv_get_le16(tlvs[ROCKER_TLV_RX_FRAG_MAX_LEN]);
+
+    if (data_size > frag_max_len) {
+        err = -EMSGSIZE;
+        goto out;
+    }
+
+    /* XXX calc rx flags/csum */
+
+    tlv_size = rocker_tlv_total_size(sizeof(uint16_t)) + /* flags */
+               rocker_tlv_total_size(sizeof(uint16_t)) + /* csum */
+               rocker_tlv_total_size(sizeof(uint64_t)) + /* frag addr */
+               rocker_tlv_total_size(sizeof(uint16_t)) + /* frag max len */
+               rocker_tlv_total_size(sizeof(uint16_t));  /* frag len */
+
+    if (tlv_size > desc_buf_size(info)) {
+        err = -EMSGSIZE;
+        goto out;
+    }
+
+    /* TODO:
+     * The iov dma write can be optimized in a similar way to how
+     * e1000 does it in e1000_receive_iov. But maybe it would make
+     * sense to introduce a generic helper iov_dma_write.
+     */
+
+    data = g_malloc(data_size);
+    if (!data) {
+        err = -ENOMEM;
+        goto out;
+    }
+    iov_to_buf(iov, iovcnt, 0, data, data_size);
+    pci_dma_write(dev, frag_addr, data, data_size);
+    g_free(data);
+
+    pos = 0;
+    rocker_tlv_put_le16(buf, &pos, ROCKER_TLV_RX_FLAGS, rx_flags);
+    rocker_tlv_put_le16(buf, &pos, ROCKER_TLV_RX_CSUM, rx_csum);
+    rocker_tlv_put_le64(buf, &pos, ROCKER_TLV_RX_FRAG_ADDR, frag_addr);
+    rocker_tlv_put_le16(buf, &pos, ROCKER_TLV_RX_FRAG_MAX_LEN, frag_max_len);
+    rocker_tlv_put_le16(buf, &pos, ROCKER_TLV_RX_FRAG_LEN, data_size);
+
+    err = desc_set_buf(info, tlv_size);
+
+out:
+    if (desc_ring_post_desc(ring, err)) {
+        rocker_msix_irq(r, ROCKER_MSIX_VEC_RX(pport - 1));
+    }
+
+    return err;
+}
+
+int rocker_port_eg(struct rocker *r, uint32_t pport,
+                   const struct iovec *iov, int iovcnt)
+{
+    struct fp_port *fp_port;
+    uint32_t port;
+
+    if (!fp_port_from_pport(pport, &port)) {
+        return -EINVAL;
+    }
+
+    fp_port = r->fp_port[port];
+
+    return fp_port_eg(fp_port, iov, iovcnt);
+}
+
+static void rocker_test_dma_ctrl(struct rocker *r, uint32_t val)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+    char *buf;
+    int i;
+
+    if (!r->test_dma_size) {
+        return;
+    }
+
+    buf = g_malloc(r->test_dma_size);
+
+    switch (val) {
+    case ROCKER_TEST_DMA_CTRL_CLEAR:
+        memset(buf, 0, r->test_dma_size);
+        break;
+    case ROCKER_TEST_DMA_CTRL_FILL:
+        memset(buf, 0x96, r->test_dma_size);
+        break;
+    case ROCKER_TEST_DMA_CTRL_INVERT:
+        pci_dma_read(dev, r->test_dma_addr, buf, r->test_dma_size);
+        for (i = 0; i < r->test_dma_size; i++) {
+            buf[i] = ~buf[i];
+        }
+        break;
+    default:
+        DPRINTF("unknown test dma control val=0x%08x\n", val);
+        goto err_out;
+    }
+
+    pci_dma_write(dev, r->test_dma_addr, buf, r->test_dma_size);
+
+    rocker_msix_irq(r, ROCKER_MSIX_VEC_TEST);
+
+err_out:
+    g_free(buf);
+}
+
+static void rocker_reset(DeviceState *dev);
+
+static void rocker_control(struct rocker *r, uint32_t val)
+{
+    if (val & ROCKER_CONTROL_RESET) {
+        rocker_reset(DEVICE(r));
+    }
+}
+
+static int rocker_pci_ring_count(struct rocker *r)
+{
+    /* There are:
+     * - command ring
+     * - event ring
+     * - tx and rx ring per each port
+     */
+    return 2 + (2 * r->fp_ports);
+}
+
+static bool rocker_addr_is_desc_reg(struct rocker *r, hwaddr addr)
+{
+    hwaddr start = ROCKER_DMA_DESC_BASE;
+    hwaddr end = start + (ROCKER_DMA_DESC_SIZE * rocker_pci_ring_count(r));
+
+    return addr >= start && addr < end;
+}
+
+static void rocker_io_writel(void *opaque, hwaddr addr, uint32_t val)
+{
+    struct rocker *r = opaque;
+
+    if (rocker_addr_is_desc_reg(r, addr)) {
+        unsigned index = ROCKER_RING_INDEX(addr);
+        unsigned offset = addr & ROCKER_DMA_DESC_MASK;
+
+        switch (offset) {
+        case ROCKER_DMA_DESC_SIZE_OFFSET:
+            desc_ring_set_size(r->rings[index], val);
+            break;
+        case ROCKER_DMA_DESC_HEAD_OFFSET:
+            if (desc_ring_set_head(r->rings[index], val)) {
+                rocker_msix_irq(r, desc_ring_get_msix_vector(r->rings[index]));
+            }
+            break;
+        case ROCKER_DMA_DESC_CTRL_OFFSET:
+            desc_ring_set_ctrl(r->rings[index], val);
+            break;
+        case ROCKER_DMA_DESC_CREDITS_OFFSET:
+            if (desc_ring_ret_credits(r->rings[index], val)) {
+                rocker_msix_irq(r, desc_ring_get_msix_vector(r->rings[index]));
+            }
+            break;
+        default:
+            DPRINTF("not implemented dma reg write(l) addr=0x%lx "
+                    "val=0x%08x (ring %d, addr=0x%02x)\n",
+                    addr, val, index, offset);
+            break;
+        }
+        return;
+    }
+
+    switch (addr) {
+    case ROCKER_TEST_REG:
+        r->test_reg = val;
+        break;
+    case ROCKER_TEST_IRQ:
+        rocker_msix_irq(r, val);
+        break;
+    case ROCKER_TEST_DMA_SIZE:
+        r->test_dma_size = val;
+        break;
+    case ROCKER_TEST_DMA_CTRL:
+        rocker_test_dma_ctrl(r, val);
+        break;
+    case ROCKER_CONTROL:
+        rocker_control(r, val);
+        break;
+    default:
+        DPRINTF("not implemented write(l) addr=0x%lx val=0x%08x\n", addr, val);
+        break;
+    }
+}
+
+static void rocker_port_phys_enable_write(struct rocker *r, uint64_t new)
+{
+    int i;
+    bool old_enabled;
+    bool new_enabled;
+    struct fp_port *fp_port;
+
+    for (i = 0; i < r->fp_ports; i++) {
+        fp_port = r->fp_port[i];
+        old_enabled = fp_port_enabled(fp_port);
+        new_enabled = (new >> (i + 1)) & 0x1;
+        if (new_enabled == old_enabled) {
+            continue;
+        }
+        if (new_enabled) {
+            fp_port_enable(r->fp_port[i]);
+        } else {
+            fp_port_disable(r->fp_port[i]);
+        }
+    }
+}
+
+static void rocker_io_writeq(void *opaque, hwaddr addr, uint64_t val)
+{
+    struct rocker *r = opaque;
+
+    if (rocker_addr_is_desc_reg(r, addr)) {
+        unsigned index = ROCKER_RING_INDEX(addr);
+        unsigned offset = addr & ROCKER_DMA_DESC_MASK;
+
+        switch (offset) {
+        case ROCKER_DMA_DESC_ADDR_OFFSET:
+            desc_ring_set_base_addr(r->rings[index], val);
+            break;
+        default:
+            DPRINTF("not implemented dma reg write(q) addr=0x%lx "
+                    "val=0x%016lx (ring %d, addr=0x%02x)\n",
+                    addr, val, index, offset);
+            break;
+        }
+        return;
+    }
+
+    switch (addr) {
+    case ROCKER_TEST_REG64:
+        r->test_reg64 = val;
+        break;
+    case ROCKER_TEST_DMA_ADDR:
+        r->test_dma_addr = val;
+        break;
+    case ROCKER_PORT_PHYS_ENABLE:
+        rocker_port_phys_enable_write(r, val);
+        break;
+    default:
+        DPRINTF("not implemented write(q) addr=0x%lx val=0x%016lx\n",
+                addr, val);
+        break;
+    }
+}
+
+#ifdef DEBUG_ROCKER
+#define regname(reg) case (reg): return #reg
+static const char *rocker_reg_name(void *opaque, hwaddr addr)
+{
+    struct rocker *r = opaque;
+
+    if (rocker_addr_is_desc_reg(r, addr)) {
+        unsigned index = ROCKER_RING_INDEX(addr);
+        unsigned offset = addr & ROCKER_DMA_DESC_MASK;
+        static char buf[100];
+        char ring_name[10];
+
+        switch (index) {
+        case 0:
+            sprintf(ring_name, "cmd");
+            break;
+        case 1:
+            sprintf(ring_name, "event");
+            break;
+        default:
+            sprintf(ring_name, "%s-%d", index % 2 ? "rx" : "tx",
+                    (index - 2) / 2);
+        }
+
+        switch (offset) {
+        case ROCKER_DMA_DESC_ADDR_OFFSET:
+            sprintf(buf, "Ring[%s] ADDR", ring_name);
+            return buf;
+        case ROCKER_DMA_DESC_SIZE_OFFSET:
+            sprintf(buf, "Ring[%s] SIZE", ring_name);
+            return buf;
+        case ROCKER_DMA_DESC_HEAD_OFFSET:
+            sprintf(buf, "Ring[%s] HEAD", ring_name);
+            return buf;
+        case ROCKER_DMA_DESC_TAIL_OFFSET:
+            sprintf(buf, "Ring[%s] TAIL", ring_name);
+            return buf;
+        case ROCKER_DMA_DESC_CTRL_OFFSET:
+            sprintf(buf, "Ring[%s] CTRL", ring_name);
+            return buf;
+        case ROCKER_DMA_DESC_CREDITS_OFFSET:
+            sprintf(buf, "Ring[%s] CREDITS", ring_name);
+            return buf;
+        default:
+            sprintf(buf, "Ring[%s] ???", ring_name);
+            return buf;
+        }
+    } else {
+        switch (addr) {
+            regname(ROCKER_BOGUS_REG0);
+            regname(ROCKER_BOGUS_REG1);
+            regname(ROCKER_BOGUS_REG2);
+            regname(ROCKER_BOGUS_REG3);
+            regname(ROCKER_TEST_REG);
+            regname(ROCKER_TEST_REG64);
+            regname(ROCKER_TEST_IRQ);
+            regname(ROCKER_TEST_DMA_ADDR);
+            regname(ROCKER_TEST_DMA_SIZE);
+            regname(ROCKER_TEST_DMA_CTRL);
+            regname(ROCKER_CONTROL);
+            regname(ROCKER_PORT_PHYS_COUNT);
+            regname(ROCKER_PORT_PHYS_LINK_STATUS);
+            regname(ROCKER_PORT_PHYS_ENABLE);
+            regname(ROCKER_SWITCH_ID);
+        }
+    }
+    return "???";
+}
+#else
+static const char *rocker_reg_name(void *opaque, hwaddr addr)
+{
+    return NULL;
+}
+#endif
+
+static void rocker_mmio_write(void *opaque, hwaddr addr, uint64_t val,
+                              unsigned size)
+{
+    DPRINTF("Write %s addr %lx, size %u, val %lx\n",
+            rocker_reg_name(opaque, addr), addr, size, val);
+
+    switch (size) {
+    case 4:
+        rocker_io_writel(opaque, addr, val);
+        break;
+    case 8:
+        rocker_io_writeq(opaque, addr, val);
+        break;
+    }
+}
+
+static uint32_t rocker_io_readl(void *opaque, hwaddr addr)
+{
+    struct rocker *r = opaque;
+    uint32_t ret;
+
+    if (rocker_addr_is_desc_reg(r, addr)) {
+        unsigned index = ROCKER_RING_INDEX(addr);
+        unsigned offset = addr & ROCKER_DMA_DESC_MASK;
+
+        switch (offset) {
+        case ROCKER_DMA_DESC_SIZE_OFFSET:
+            ret = desc_ring_get_size(r->rings[index]);
+            break;
+        case ROCKER_DMA_DESC_HEAD_OFFSET:
+            ret = desc_ring_get_head(r->rings[index]);
+            break;
+        case ROCKER_DMA_DESC_TAIL_OFFSET:
+            ret = desc_ring_get_tail(r->rings[index]);
+            break;
+        case ROCKER_DMA_DESC_CREDITS_OFFSET:
+            ret = desc_ring_get_credits(r->rings[index]);
+            break;
+        default:
+            DPRINTF("not implemented dma reg read(l) addr=0x%lx "
+                    "(ring %d, addr=0x%02x)\n", addr, index, offset);
+            ret = 0;
+            break;
+        }
+        return ret;
+    }
+
+    switch (addr) {
+    case ROCKER_BOGUS_REG0:
+    case ROCKER_BOGUS_REG1:
+    case ROCKER_BOGUS_REG2:
+    case ROCKER_BOGUS_REG3:
+        ret = 0xDEADBABE;
+        break;
+    case ROCKER_TEST_REG:
+        ret = r->test_reg * 2;
+        break;
+    case ROCKER_TEST_DMA_SIZE:
+        ret = r->test_dma_size;
+        break;
+    case ROCKER_PORT_PHYS_COUNT:
+        ret = r->fp_ports;
+        break;
+    default:
+        DPRINTF("not implemented read(l) addr=0x%lx\n", addr);
+        ret = 0;
+        break;
+    }
+    return ret;
+}
+
+static uint64_t rocker_port_phys_link_status(struct rocker *r)
+{
+    int i;
+    uint64_t status = 0;
+
+    for (i = 0; i < r->fp_ports; i++) {
+        struct fp_port *port = r->fp_port[i];
+
+        if (fp_port_get_link_up(port)) {
+            status |= 1ULL << (i + 1);
+        }
+    }
+    return status;
+}
+
+static uint64_t rocker_port_phys_enable_read(struct rocker *r)
+{
+    int i;
+    uint64_t ret = 0;
+
+    for (i = 0; i < r->fp_ports; i++) {
+        struct fp_port *port = r->fp_port[i];
+
+        if (fp_port_enabled(port)) {
+            ret |= 1ULL << (i + 1);
+        }
+    }
+    return ret;
+}
+
+static uint64_t rocker_io_readq(void *opaque, hwaddr addr)
+{
+    struct rocker *r = opaque;
+    uint64_t ret;
+
+    if (rocker_addr_is_desc_reg(r, addr)) {
+        unsigned index = ROCKER_RING_INDEX(addr);
+        unsigned offset = addr & ROCKER_DMA_DESC_MASK;
+
+        switch (offset) {
+        case ROCKER_DMA_DESC_ADDR_OFFSET:
+            ret = desc_ring_get_base_addr(r->rings[index]);
+            break;
+        default:
+            DPRINTF("not implemented dma reg read(q) addr=0x%lx "
+                    "(ring %d, addr=0x%02x)\n", addr, index, offset);
+            ret = 0;
+            break;
+        }
+        return ret;
+    }
+
+    switch (addr) {
+    case ROCKER_BOGUS_REG0:
+    case ROCKER_BOGUS_REG2:
+        ret = 0xDEADBABEDEADBABE;
+        break;
+    case ROCKER_TEST_REG64:
+        ret = r->test_reg64 * 2;
+        break;
+    case ROCKER_TEST_DMA_ADDR:
+        ret = r->test_dma_addr;
+        break;
+    case ROCKER_PORT_PHYS_LINK_STATUS:
+        ret = rocker_port_phys_link_status(r);
+        break;
+    case ROCKER_PORT_PHYS_ENABLE:
+        ret = rocker_port_phys_enable_read(r);
+        break;
+    case ROCKER_SWITCH_ID:
+        ret = r->switch_id;
+        break;
+    default:
+        DPRINTF("not implemented read(q) addr=0x%lx\n", addr);
+        ret = 0;
+        break;
+    }
+    return ret;
+}
+
+static uint64_t rocker_mmio_read(void *opaque, hwaddr addr, unsigned size)
+{
+    DPRINTF("Read %s addr %lx, size %u\n",
+            rocker_reg_name(opaque, addr), addr, size);
+
+    switch (size) {
+    case 4:
+        return rocker_io_readl(opaque, addr);
+    case 8:
+        return rocker_io_readq(opaque, addr);
+    }
+
+    return -1;
+}
+
+static const MemoryRegionOps rocker_mmio_ops = {
+    .read = rocker_mmio_read,
+    .write = rocker_mmio_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+    },
+    .impl = {
+        .min_access_size = 4,
+        .max_access_size = 8,
+    },
+};
+
+static void rocker_msix_vectors_unuse(struct rocker *r,
+                                      unsigned int num_vectors)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+    int i;
+
+    for (i = 0; i < num_vectors; i++) {
+        msix_vector_unuse(dev, i);
+    }
+}
+
+static int rocker_msix_vectors_use(struct rocker *r,
+                                   unsigned int num_vectors)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+    int err;
+    int i;
+
+    for (i = 0; i < num_vectors; i++) {
+        err = msix_vector_use(dev, i);
+        if (err) {
+            goto rollback;
+        }
+    }
+    return 0;
+
+rollback:
+    rocker_msix_vectors_unuse(r, i);
+    return err;
+}
+
+static int rocker_msix_init(struct rocker *r)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+    int err;
+
+    err = msix_init(dev, ROCKER_MSIX_VEC_COUNT(r->fp_ports),
+                    &r->msix_bar,
+                    ROCKER_PCI_MSIX_BAR_IDX, ROCKER_PCI_MSIX_TABLE_OFFSET,
+                    &r->msix_bar,
+                    ROCKER_PCI_MSIX_BAR_IDX, ROCKER_PCI_MSIX_PBA_OFFSET,
+                    0);
+    if (err) {
+        return err;
+    }
+
+    err = rocker_msix_vectors_use(r, ROCKER_MSIX_VEC_COUNT(r->fp_ports));
+    if (err) {
+        goto err_msix_vectors_use;
+    }
+
+    return 0;
+
+err_msix_vectors_use:
+    msix_uninit(dev, &r->msix_bar, &r->msix_bar);
+    return err;
+}
+
+static void rocker_msix_uninit(struct rocker *r)
+{
+    PCIDevice *dev = PCI_DEVICE(r);
+
+    msix_uninit(dev, &r->msix_bar, &r->msix_bar);
+    rocker_msix_vectors_unuse(r, ROCKER_MSIX_VEC_COUNT(r->fp_ports));
+}
+
+static int pci_rocker_init(PCIDevice *dev)
+{
+    struct rocker *r = to_rocker(dev);
+    const MACAddr zero = { .a = { 0, 0, 0, 0, 0, 0 } };
+    const MACAddr dflt = { .a = { 0x52, 0x54, 0x00, 0x12, 0x35, 0x01 } };
+    static int sw_index;
+    int i, err = 0;
+
+    /* allocate worlds */
+
+    r->worlds[ROCKER_WORLD_TYPE_OF_DPA] = of_dpa_world_alloc(r);
+    r->world_dflt = r->worlds[ROCKER_WORLD_TYPE_OF_DPA];
+
+    for (i = 0; i < ROCKER_WORLD_TYPE_MAX; i++) {
+        if (!r->worlds[i]) {
+            err = -ENOMEM;
+            goto err_world_alloc;
+        }
+    }
+
+    /* set up memory-mapped region at BAR0 */
+
+    memory_region_init_io(&r->mmio, OBJECT(r), &rocker_mmio_ops, r,
+                          "rocker-mmio", ROCKER_PCI_BAR0_SIZE);
+    pci_register_bar(dev, ROCKER_PCI_BAR0_IDX,
+                     PCI_BASE_ADDRESS_SPACE_MEMORY, &r->mmio);
+
+    /* set up memory-mapped region for MSI-X */
+
+    memory_region_init(&r->msix_bar, OBJECT(r), "rocker-msix-bar",
+                       ROCKER_PCI_MSIX_BAR_SIZE);
+    pci_register_bar(dev, ROCKER_PCI_MSIX_BAR_IDX,
+                     PCI_BASE_ADDRESS_SPACE_MEMORY, &r->msix_bar);
+
+    /* MSI-X init */
+
+    err = rocker_msix_init(r);
+    if (err) {
+        goto err_msix_init;
+    }
+
+    /* validate switch properties */
+
+    if (!r->name) {
+        r->name = g_strdup(ROCKER);
+    }
+
+    if (rocker_find(r->name)) {
+        err = -EEXIST;
+        goto err_duplicate;
+    }
+
+    if (memcmp(&r->fp_start_macaddr, &zero, sizeof(zero)) == 0) {
+        memcpy(&r->fp_start_macaddr, &dflt, sizeof(dflt));
+        r->fp_start_macaddr.a[4] += (sw_index++);
+    }
+
+    if (!r->switch_id) {
+        memcpy(&r->switch_id, &r->fp_start_macaddr,
+               sizeof(r->fp_start_macaddr));
+    }
+
+    if (r->fp_ports > ROCKER_FP_PORTS_MAX) {
+        r->fp_ports = ROCKER_FP_PORTS_MAX;
+    }
+
+    r->rings = g_malloc(sizeof(struct desc_ring *) * rocker_pci_ring_count(r));
+    if (!r->rings) {
+        err = -ENOMEM;
+        goto err_rings_alloc;
+    }
+
+    /* Rings are ordered like this:
+     * - command ring
+     * - event ring
+     * - port0 tx ring
+     * - port0 rx ring
+     * - port1 tx ring
+     * - port1 rx ring
+     * .....
+     */
+
+    for (i = 0; i < rocker_pci_ring_count(r); i++) {
+        struct desc_ring *ring = desc_ring_alloc(r, i);
+
+        if (!ring) {
+            goto err_ring_alloc;
+        }
+
+        if (i == ROCKER_RING_CMD) {
+            desc_ring_set_consume(ring, cmd_consume, ROCKER_MSIX_VEC_CMD);
+        } else if (i == ROCKER_RING_EVENT) {
+            desc_ring_set_consume(ring, NULL, ROCKER_MSIX_VEC_EVENT);
+        } else if (i % 2 == 0) {
+            desc_ring_set_consume(ring, tx_consume,
+                                  ROCKER_MSIX_VEC_TX((i - 2) / 2));
+        } else if (i % 2 == 1) {
+            desc_ring_set_consume(ring, NULL, ROCKER_MSIX_VEC_RX((i - 3) / 2));
+        }
+
+        r->rings[i] = ring;
+    }
+
+    for (i = 0; i < r->fp_ports; i++) {
+        struct fp_port *port =
+            fp_port_alloc(r, r->name, &r->fp_start_macaddr,
+                          i, &r->fp_ports_peers[i]);
+
+        if (!port) {
+            goto err_port_alloc;
+        }
+
+        r->fp_port[i] = port;
+        fp_port_set_world(port, r->world_dflt);
+    }
+
+    QLIST_INSERT_HEAD(&rockers, r, next);
+
+    return 0;
+
+err_port_alloc:
+    for (--i; i >= 0; i--) {
+        struct fp_port *port = r->fp_port[i];
+        fp_port_free(port);
+    }
+    i = rocker_pci_ring_count(r);
+err_ring_alloc:
+    for (--i; i >= 0; i--) {
+        desc_ring_free(r->rings[i]);
+    }
+    g_free(r->rings);
+err_rings_alloc:
+err_duplicate:
+    rocker_msix_uninit(r);
+err_msix_init:
+    object_unparent(OBJECT(&r->msix_bar));
+    object_unparent(OBJECT(&r->mmio));
+err_world_alloc:
+    for (i = 0; i < ROCKER_WORLD_TYPE_MAX; i++) {
+        if (r->worlds[i]) {
+            world_free(r->worlds[i]);
+        }
+    }
+    return err;
+}
+
+static void pci_rocker_uninit(PCIDevice *dev)
+{
+    struct rocker *r = to_rocker(dev);
+    int i;
+
+    QLIST_REMOVE(r, next);
+
+    for (i = 0; i < r->fp_ports; i++) {
+        struct fp_port *port = r->fp_port[i];
+
+        fp_port_free(port);
+        r->fp_port[i] = NULL;
+    }
+
+    for (i = 0; i < rocker_pci_ring_count(r); i++) {
+        if (r->rings[i]) {
+            desc_ring_free(r->rings[i]);
+        }
+    }
+    g_free(r->rings);
+
+    rocker_msix_uninit(r);
+    object_unparent(OBJECT(&r->msix_bar));
+    object_unparent(OBJECT(&r->mmio));
+
+    for (i = 0; i < ROCKER_WORLD_TYPE_MAX; i++) {
+        if (r->worlds[i]) {
+            world_free(r->worlds[i]);
+        }
+    }
+    g_free(r->fp_ports_peers);
+}
+
+static void rocker_reset(DeviceState *dev)
+{
+    struct rocker *r = to_rocker(dev);
+    int i;
+
+    for (i = 0; i < ROCKER_WORLD_TYPE_MAX; i++) {
+        if (r->worlds[i]) {
+            world_reset(r->worlds[i]);
+        }
+    }
+    for (i = 0; i < r->fp_ports; i++) {
+        fp_port_reset(r->fp_port[i]);
+        fp_port_set_world(r->fp_port[i], r->world_dflt);
+    }
+
+    r->test_reg = 0;
+    r->test_reg64 = 0;
+    r->test_dma_addr = 0;
+    r->test_dma_size = 0;
+
+    for (i = 0; i < rocker_pci_ring_count(r); i++) {
+        desc_ring_reset(r->rings[i]);
+    }
+
+    DPRINTF("Reset done\n");
+}
+
+static Property rocker_properties[] = {
+    DEFINE_PROP_STRING("name", struct rocker, name),
+    DEFINE_PROP_MACADDR("fp_start_macaddr", struct rocker,
+                        fp_start_macaddr),
+    DEFINE_PROP_UINT64("switch_id", struct rocker,
+                       switch_id, 0),
+    DEFINE_PROP_ARRAY("ports", struct rocker, fp_ports,
+                      fp_ports_peers, qdev_prop_netdev, NICPeers),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void rocker_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+
+    k->init = pci_rocker_init;
+    k->exit = pci_rocker_uninit;
+    k->vendor_id = PCI_VENDOR_ID_REDHAT;
+    k->device_id = PCI_DEVICE_ID_REDHAT_ROCKER;
+    k->revision = ROCKER_PCI_REVISION;
+    k->class_id = PCI_CLASS_NETWORK_OTHER;
+    set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
+    dc->desc = "Rocker Switch";
+    dc->reset = rocker_reset;
+    dc->props = rocker_properties;
+}
+
+static const TypeInfo rocker_info = {
+    .name          = ROCKER,
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(struct rocker),
+    .class_init    = rocker_class_init,
+};
+
+static void rocker_register_types(void)
+{
+    type_register_static(&rocker_info);
+}
+
+type_init(rocker_register_types)
diff --git a/hw/net/rocker/rocker.h b/hw/net/rocker/rocker.h
new file mode 100644
index 0000000..dcdb3e4
--- /dev/null
+++ b/hw/net/rocker/rocker.h
@@ -0,0 +1,76 @@
+/*
+ * QEMU rocker switch emulation
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ * Copyright (c) 2014 Jiri Pirko <jiri@resnulli.us>
+ * Copyright (c) 2014 Neil Horman <nhorman@tuxdriver.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ROCKER_H_
+#define _ROCKER_H_
+
+#include <arpa/inet.h>
+
+#include "rocker_world.h"
+
+#if defined(DEBUG_ROCKER)
+#  define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "ROCKER: " fmt, ## __VA_ARGS__); } while (0)
+#else
+static inline GCC_FMT_ATTR(1, 2) int DPRINTF(const char *fmt, ...)
+{
+    return 0;
+}
+#endif
+
+#define __le16 uint16_t
+#define __le32 uint32_t
+#define __le64 uint64_t
+
+#define __be16 uint16_t
+#define __be32 uint32_t
+#define __be64 uint64_t
+
+static inline bool ipv4_addr_is_multicast(__be32 addr)
+{
+    return (addr & htonl(0xf0000000)) == htonl(0xe0000000);
+}
+
+typedef struct _ipv6_addr {
+    union {
+        uint8_t addr8[16];
+        __be16 addr16[8];
+        __be32 addr32[4];
+    };
+} ipv6_addr;
+
+static inline bool ipv6_addr_is_multicast(const ipv6_addr *addr)
+{
+    return (addr->addr32[0] & htonl(0xFF000000)) == htonl(0xFF000000);
+}
+
+struct world;
+struct rocker;
+
+struct rocker *rocker_find(const char *name);
+struct world *rocker_get_world(struct rocker *r, enum rocker_world_type type);
+uint32_t rocker_fp_ports(struct rocker *r);
+int rocker_event_link_changed(struct rocker *r, uint32_t pport, bool link_up);
+int rocker_event_mac_vlan_seen(struct rocker *r, uint32_t pport, uint8_t *addr,
+                               uint16_t vlan_id);
+int rx_produce(struct world *world, uint32_t pport,
+               const struct iovec *iov, int iovcnt);
+int rocker_port_eg(struct rocker *r, uint32_t pport,
+                   const struct iovec *iov, int iovcnt);
+
+#endif /* _ROCKER_H_ */
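
As a reviewer aside: the multicast helpers in rocker.h are easy to check standalone.
A minimal sketch (the predicate mirrors the one above; the test addresses are mine,
chosen only for illustration):

```c
#include <arpa/inet.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors rocker.h: IPv4 multicast is 224.0.0.0/4, i.e. the top
 * nibble of the first octet is 0xe.  'addr' is in network byte order.
 */
bool ipv4_addr_is_mcast(uint32_t addr)
{
    return (addr & htonl(0xf0000000)) == htonl(0xe0000000);
}
```

`inet_addr("224.0.0.1")` (the all-hosts group) satisfies the predicate while any
unicast address such as 192.168.1.1 does not, since only the 224/4 block has 0xe
in the top nibble.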
diff --git a/hw/net/rocker/rocker_desc.c b/hw/net/rocker/rocker_desc.c
new file mode 100644
index 0000000..2ad53de
--- /dev/null
+++ b/hw/net/rocker/rocker_desc.c
@@ -0,0 +1,379 @@
+/*
+ * QEMU rocker switch emulation - Descriptor ring support
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "net/net.h"
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+
+#include "rocker.h"
+#include "rocker_hw.h"
+#include "rocker_desc.h"
+
+struct desc_info;
+
+struct desc_ring {
+    hwaddr base_addr;
+    uint32_t size;
+    uint32_t head;
+    uint32_t tail;
+    uint32_t ctrl;
+    uint32_t credits;
+    struct rocker *r;
+    struct desc_info *info;
+    int index;
+    desc_ring_consume *consume;
+    unsigned msix_vector;
+};
+
+struct desc_info {
+    struct desc_ring *ring;
+    struct rocker_desc desc;
+    char *buf;
+    size_t buf_size;
+};
+
+uint16_t desc_buf_size(struct desc_info *info)
+{
+    return le16_to_cpu(info->desc.buf_size);
+}
+
+uint16_t desc_tlv_size(struct desc_info *info)
+{
+    return le16_to_cpu(info->desc.tlv_size);
+}
+
+char *desc_get_buf(struct desc_info *info, bool read_only)
+{
+    PCIDevice *dev = PCI_DEVICE(info->ring->r);
+    size_t size = read_only ? le16_to_cpu(info->desc.tlv_size) :
+                              le16_to_cpu(info->desc.buf_size);
+
+    if (size > info->buf_size) {
+        info->buf = g_realloc(info->buf, size);
+        info->buf_size = size;
+    }
+
+    if (!info->buf) {
+        return NULL;
+    }
+
+    if (pci_dma_read(dev, le64_to_cpu(info->desc.buf_addr), info->buf, size)) {
+        return NULL;
+    }
+
+    return info->buf;
+}
+
+int desc_set_buf(struct desc_info *info, size_t tlv_size)
+{
+    PCIDevice *dev = PCI_DEVICE(info->ring->r);
+
+    if (tlv_size > info->buf_size) {
+        DPRINTF("ERROR: trying to write more to desc buf than it "
+                "can hold buf_size %zu tlv_size %zu\n",
+                info->buf_size, tlv_size);
+        return -EMSGSIZE;
+    }
+
+    info->desc.tlv_size = cpu_to_le16(tlv_size);
+    pci_dma_write(dev, le64_to_cpu(info->desc.buf_addr), info->buf, tlv_size);
+
+    return 0;
+}
+
+struct desc_ring *desc_get_ring(struct desc_info *info)
+{
+    return info->ring;
+}
+
+int desc_ring_index(struct desc_ring *ring)
+{
+    return ring->index;
+}
+
+static bool desc_ring_empty(struct desc_ring *ring)
+{
+    return ring->head == ring->tail;
+}
+
+bool desc_ring_set_base_addr(struct desc_ring *ring, uint64_t base_addr)
+{
+    if (base_addr & 0x7) {
+        DPRINTF("ERROR: ring[%d] desc base addr (0x%" PRIx64
+                ") not 8-byte aligned\n", ring->index, base_addr);
+        return false;
+    }
+
+    ring->base_addr = base_addr;
+
+    return true;
+}
+
+uint64_t desc_ring_get_base_addr(struct desc_ring *ring)
+{
+    return ring->base_addr;
+}
+
+bool desc_ring_set_size(struct desc_ring *ring, uint32_t size)
+{
+    int i;
+
+    if (size < 2 || size > 0x10000 || (size & (size - 1))) {
+        DPRINTF("ERROR: ring[%d] size (%d) not a power of 2 "
+                "or in range [2, 64K]\n", ring->index, size);
+        return false;
+    }
+
+    for (i = 0; i < ring->size; i++) {
+        if (ring->info[i].buf) {
+            g_free(ring->info[i].buf);
+        }
+    }
+
+    ring->size = size;
+    ring->head = ring->tail = 0;
+
+    ring->info = g_realloc(ring->info, size * sizeof(struct desc_info));
+    if (!ring->info) {
+        return false;
+    }
+
+    memset(ring->info, 0, size * sizeof(struct desc_info));
+
+    for (i = 0; i < size; i++) {
+        ring->info[i].ring = ring;
+    }
+
+    return true;
+}
+
+uint32_t desc_ring_get_size(struct desc_ring *ring)
+{
+    return ring->size;
+}
+
+static struct desc_info *desc_read(struct desc_ring *ring, uint32_t index)
+{
+    PCIDevice *dev = PCI_DEVICE(ring->r);
+    struct desc_info *info = &ring->info[index];
+    hwaddr addr = ring->base_addr + (sizeof(struct rocker_desc) * index);
+
+    pci_dma_read(dev, addr, &info->desc, sizeof(info->desc));
+
+    return info;
+}
+
+static void desc_write(struct desc_ring *ring, uint32_t index)
+{
+    PCIDevice *dev = PCI_DEVICE(ring->r);
+    struct desc_info *info = &ring->info[index];
+    hwaddr addr = ring->base_addr + (sizeof(struct rocker_desc) * index);
+
+    pci_dma_write(dev, addr, &info->desc, sizeof(info->desc));
+}
+
+static bool desc_ring_base_addr_check(struct desc_ring *ring)
+{
+    if (!ring->base_addr) {
+        DPRINTF("ERROR: ring[%d] not-initialized desc base address!\n",
+                ring->index);
+        return false;
+    }
+    return true;
+}
+
+static struct desc_info *__desc_ring_fetch_desc(struct desc_ring *ring)
+{
+    return desc_read(ring, ring->tail);
+}
+
+struct desc_info *desc_ring_fetch_desc(struct desc_ring *ring)
+{
+    if (desc_ring_empty(ring) || !desc_ring_base_addr_check(ring)) {
+        return NULL;
+    }
+
+    return desc_read(ring, ring->tail);
+}
+
+static bool __desc_ring_post_desc(struct desc_ring *ring, int err)
+{
+    uint16_t comp_err = 0x8000 | (uint16_t)-err;
+    struct desc_info *info = &ring->info[ring->tail];
+
+    info->desc.comp_err = cpu_to_le16(comp_err);
+    desc_write(ring, ring->tail);
+    ring->tail = (ring->tail + 1) % ring->size;
+
+    /* return true if starting credit count */
+
+    return ring->credits++ == 0;
+}
+
+bool desc_ring_post_desc(struct desc_ring *ring, int err)
+{
+    if (desc_ring_empty(ring)) {
+        DPRINTF("ERROR: ring[%d] trying to post desc to empty ring\n",
+                ring->index);
+        return false;
+    }
+
+    if (!desc_ring_base_addr_check(ring)) {
+        return false;
+    }
+
+    return __desc_ring_post_desc(ring, err);
+}
+
+static bool ring_pump(struct desc_ring *ring)
+{
+    struct desc_info *info;
+    bool primed = false;
+    int err;
+
+    /* If the ring has a consumer, call consumer for each
+     * desc starting at tail and stopping when tail reaches
+     * head (the empty ring condition).
+     */
+
+    if (ring->consume) {
+        while (ring->head != ring->tail) {
+            info = __desc_ring_fetch_desc(ring);
+            err = ring->consume(ring->r, info);
+            if (__desc_ring_post_desc(ring, err)) {
+                primed = true;
+            }
+        }
+    }
+
+    return primed;
+}
+
+bool desc_ring_set_head(struct desc_ring *ring, uint32_t new)
+{
+    uint32_t tail = ring->tail;
+    uint32_t head = ring->head;
+
+    if (!desc_ring_base_addr_check(ring)) {
+        return false;
+    }
+
+    if (new >= ring->size) {
+        DPRINTF("ERROR: trying to set head (%d) past ring[%d] size (%d)\n",
+                new, ring->index, ring->size);
+        return false;
+    }
+
+    if (((head < tail) && ((new >= tail) || (new < head))) ||
+        ((head > tail) && ((new >= tail) && (new < head)))) {
+        DPRINTF("ERROR: trying to wrap ring[%d] "
+                "(head %d, tail %d, new head %d)\n",
+                ring->index, head, tail, new);
+        return false;
+    }
+
+    if (new == ring->head) {
+        DPRINTF("WARNING: setting head (%d) to current head position\n", new);
+    }
+
+    ring->head = new;
+
+    return ring_pump(ring);
+}
+
+uint32_t desc_ring_get_head(struct desc_ring *ring)
+{
+    return ring->head;
+}
+
+uint32_t desc_ring_get_tail(struct desc_ring *ring)
+{
+    return ring->tail;
+}
+
+void desc_ring_set_ctrl(struct desc_ring *ring, uint32_t val)
+{
+    if (val & ROCKER_DMA_DESC_CTRL_RESET) {
+        DPRINTF("ring[%d] resetting\n", ring->index);
+        desc_ring_reset(ring);
+    }
+}
+
+bool desc_ring_ret_credits(struct desc_ring *ring, uint32_t credits)
+{
+    if (credits > ring->credits) {
+        DPRINTF("ERROR: trying to return more credits (%d) "
+                "than are outstanding (%d)\n", credits, ring->credits);
+        ring->credits = 0;
+        return false;
+    }
+
+    ring->credits -= credits;
+
+    /* return true if credits are still outstanding */
+
+    return ring->credits > 0;
+}
+
+uint32_t desc_ring_get_credits(struct desc_ring *ring)
+{
+    return ring->credits;
+}
+
+void desc_ring_set_consume(struct desc_ring *ring,
+                           desc_ring_consume *consume, unsigned vector)
+{
+    ring->consume = consume;
+    ring->msix_vector = vector;
+}
+
+unsigned desc_ring_get_msix_vector(struct desc_ring *ring)
+{
+    return ring->msix_vector;
+}
+
+struct desc_ring *desc_ring_alloc(struct rocker *r, int index)
+{
+    struct desc_ring *ring;
+
+    ring = g_malloc0(sizeof(struct desc_ring));
+    if (!ring) {
+        return NULL;
+    }
+
+    ring->r = r;
+    ring->index = index;
+
+    return ring;
+}
+
+void desc_ring_free(struct desc_ring *ring)
+{
+    if (ring->info) {
+        g_free(ring->info);
+    }
+    g_free(ring);
+}
+
+void desc_ring_reset(struct desc_ring *ring)
+{
+    ring->base_addr = 0;
+    ring->size = 0;
+    ring->head = 0;
+    ring->tail = 0;
+    ring->ctrl = 0;
+    ring->credits = 0;
+}
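
The subtle piece of this file is the head-pointer sanity check in
desc_ring_set_head(): the driver may only advance head through the region it
owns, never wrapping past the device's tail. A standalone sketch of that
predicate, extracted for illustration (the helper name is mine, not the
patch's):

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true if moving the ring head from 'head' to 'new_head' is
 * legal: new_head must lie inside the ring and must not pass 'tail'
 * (the device's consume pointer).  Same conditions as the checks in
 * desc_ring_set_head() above.
 */
bool head_move_valid(uint32_t head, uint32_t tail,
                     uint32_t new_head, uint32_t size)
{
    if (new_head >= size) {
        return false;
    }
    /* unwrapped case: valid new heads are [head, tail) going forward */
    if (head < tail && (new_head >= tail || new_head < head)) {
        return false;
    }
    /* wrapped case: valid new heads are [head, size) plus [0, tail) */
    if (head > tail && (new_head >= tail && new_head < head)) {
        return false;
    }
    return true;
}
```

With head == tail (empty ring) any in-range head is accepted, which is why
desc_ring_set_head() only warns, rather than fails, when new == head.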
diff --git a/hw/net/rocker/rocker_desc.h b/hw/net/rocker/rocker_desc.h
new file mode 100644
index 0000000..5a7bc2b
--- /dev/null
+++ b/hw/net/rocker/rocker_desc.h
@@ -0,0 +1,57 @@
+/*
+ * QEMU rocker switch emulation - Descriptor ring support
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+
+#ifndef _ROCKER_DESC_H_
+#define _ROCKER_DESC_H_
+
+#include "rocker_hw.h"
+
+struct rocker;
+struct desc_ring;
+struct desc_info;
+
+typedef int (desc_ring_consume)(struct rocker *r, struct desc_info *info);
+
+uint16_t desc_buf_size(struct desc_info *info);
+uint16_t desc_tlv_size(struct desc_info *info);
+char *desc_get_buf(struct desc_info *info, bool read_only);
+int desc_set_buf(struct desc_info *info, size_t tlv_size);
+struct desc_ring *desc_get_ring(struct desc_info *info);
+
+int desc_ring_index(struct desc_ring *ring);
+bool desc_ring_set_base_addr(struct desc_ring *ring, uint64_t base_addr);
+uint64_t desc_ring_get_base_addr(struct desc_ring *ring);
+bool desc_ring_set_size(struct desc_ring *ring, uint32_t size);
+uint32_t desc_ring_get_size(struct desc_ring *ring);
+bool desc_ring_set_head(struct desc_ring *ring, uint32_t new);
+uint32_t desc_ring_get_head(struct desc_ring *ring);
+uint32_t desc_ring_get_tail(struct desc_ring *ring);
+void desc_ring_set_ctrl(struct desc_ring *ring, uint32_t val);
+bool desc_ring_ret_credits(struct desc_ring *ring, uint32_t credits);
+uint32_t desc_ring_get_credits(struct desc_ring *ring);
+
+struct desc_info *desc_ring_fetch_desc(struct desc_ring *ring);
+bool desc_ring_post_desc(struct desc_ring *ring, int status);
+
+void desc_ring_set_consume(struct desc_ring *ring,
+                           desc_ring_consume *consume, unsigned vector);
+unsigned desc_ring_get_msix_vector(struct desc_ring *ring);
+struct desc_ring *desc_ring_alloc(struct rocker *r, int index);
+void desc_ring_free(struct desc_ring *ring);
+void desc_ring_reset(struct desc_ring *ring);
+
+#endif
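
For reviewers tracing ring indices through this API: pci_rocker_init() lays
rings out as command, event, then a tx/rx pair per port, so index-to-role
mapping is mechanical. A sketch of that arithmetic (helper names are mine;
only the layout itself comes from the patch):

```c
#include <stdbool.h>

#define RING_CMD   0
#define RING_EVENT 1

/* tx rings occupy even indices >= 2, rx rings odd indices >= 3 */
bool ring_is_tx(int index) { return index >= 2 && (index % 2) == 0; }
bool ring_is_rx(int index) { return index >= 3 && (index % 2) == 1; }

/* 0-based front-panel port owning a tx/rx ring */
int ring_port(int index)
{
    return ring_is_tx(index) ? (index - 2) / 2 : (index - 3) / 2;
}
```

This matches the `(i - 2) / 2` and `(i - 3) / 2` expressions used when wiring
consume handlers to MSI-X vectors in pci_rocker_init().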
diff --git a/hw/net/rocker/rocker_fp.c b/hw/net/rocker/rocker_fp.c
new file mode 100644
index 0000000..01734e5
--- /dev/null
+++ b/hw/net/rocker/rocker_fp.c
@@ -0,0 +1,242 @@
+/*
+ * QEMU rocker switch emulation - front-panel ports
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "net/clients.h"
+
+#include "rocker.h"
+#include "rocker_hw.h"
+#include "rocker_fp.h"
+#include "rocker_world.h"
+
+enum duplex {
+    DUPLEX_HALF = 0,
+    DUPLEX_FULL
+};
+
+struct fp_port {
+    struct rocker *r;
+    struct world *world;
+    uint index;
+    char *name;
+    uint32_t pport;
+    bool enabled;
+    uint32_t speed;
+    uint8_t duplex;
+    uint8_t autoneg;
+    uint8_t learning;
+    NICState *nic;
+    NICConf conf;
+};
+
+bool fp_port_get_link_up(struct fp_port *port)
+{
+    return !qemu_get_queue(port->nic)->link_down;
+}
+
+void fp_port_get_info(struct fp_port *port, RockerPortList *info)
+{
+    info->value->name = g_strdup(port->name);
+    info->value->enabled = port->enabled;
+    info->value->link_up = fp_port_get_link_up(port);
+    info->value->speed = port->speed;
+    info->value->duplex = port->duplex;
+    info->value->autoneg = port->autoneg;
+}
+
+void fp_port_get_macaddr(struct fp_port *port, MACAddr *macaddr)
+{
+    memcpy(macaddr->a, port->conf.macaddr.a, sizeof(macaddr->a));
+}
+
+void fp_port_set_macaddr(struct fp_port *port, MACAddr *macaddr)
+{
+/*XXX memcpy(port->conf.macaddr.a, macaddr.a, sizeof(port->conf.macaddr.a)); */
+}
+
+uint8_t fp_port_get_learning(struct fp_port *port)
+{
+    return port->learning;
+}
+
+void fp_port_set_learning(struct fp_port *port, uint8_t learning)
+{
+    port->learning = learning;
+}
+
+int fp_port_get_settings(struct fp_port *port, uint32_t *speed,
+                         uint8_t *duplex, uint8_t *autoneg)
+{
+    *speed = port->speed;
+    *duplex = port->duplex;
+    *autoneg = port->autoneg;
+
+    return 0;
+}
+
+int fp_port_set_settings(struct fp_port *port, uint32_t speed,
+                         uint8_t duplex, uint8_t autoneg)
+{
+    /* XXX validate inputs */
+
+    port->speed = speed;
+    port->duplex = duplex;
+    port->autoneg = autoneg;
+
+    return 0;
+}
+
+bool fp_port_from_pport(uint32_t pport, uint32_t *port)
+{
+    if (pport < 1 || pport > ROCKER_FP_PORTS_MAX) {
+        return false;
+    }
+    *port = pport - 1;
+    return true;
+}
+
+int fp_port_eg(struct fp_port *port, const struct iovec *iov, int iovcnt)
+{
+    NetClientState *nc = qemu_get_queue(port->nic);
+
+    if (port->enabled) {
+        qemu_sendv_packet(nc, iov, iovcnt);
+    }
+
+    return 0;
+}
+
+static int fp_port_can_receive(NetClientState *nc)
+{
+    struct fp_port *port = qemu_get_nic_opaque(nc);
+
+    return port->enabled;
+}
+
+static ssize_t fp_port_receive_iov(NetClientState *nc, const struct iovec *iov,
+                                   int iovcnt)
+{
+    struct fp_port *port = qemu_get_nic_opaque(nc);
+
+    return world_ingress(port->world, port->pport, iov, iovcnt);
+}
+
+static ssize_t fp_port_receive(NetClientState *nc, const uint8_t *buf,
+                               size_t size)
+{
+    const struct iovec iov = {
+        .iov_base = (uint8_t *)buf,
+        .iov_len = size
+    };
+
+    return fp_port_receive_iov(nc, &iov, 1);
+}
+
+static void fp_port_cleanup(NetClientState *nc)
+{
+}
+
+static void fp_port_set_link_status(NetClientState *nc)
+{
+    struct fp_port *port = qemu_get_nic_opaque(nc);
+
+    rocker_event_link_changed(port->r, port->pport, !nc->link_down);
+}
+
+static NetClientInfo fp_port_info = {
+    .type = NET_CLIENT_OPTIONS_KIND_NIC,
+    .size = sizeof(NICState),
+    .can_receive = fp_port_can_receive,
+    .receive = fp_port_receive,
+    .receive_iov = fp_port_receive_iov,
+    .cleanup = fp_port_cleanup,
+    .link_status_changed = fp_port_set_link_status,
+};
+
+struct world *fp_port_get_world(struct fp_port *port)
+{
+    return port->world;
+}
+
+void fp_port_set_world(struct fp_port *port, struct world *world)
+{
+    DPRINTF("port %d setting world \"%s\"\n", port->index, world_name(world));
+    port->world = world;
+}
+
+bool fp_port_enabled(struct fp_port *port)
+{
+    return port->enabled;
+}
+
+void fp_port_enable(struct fp_port *port)
+{
+    port->enabled = true;
+    DPRINTF("port %d enabled\n", port->index);
+}
+
+void fp_port_disable(struct fp_port *port)
+{
+    port->enabled = false;
+    DPRINTF("port %d disabled\n", port->index);
+}
+
+struct fp_port *fp_port_alloc(struct rocker *r, char *sw_name,
+                              MACAddr *start_mac, uint index,
+                              NICPeers *peers)
+{
+    struct fp_port *port = g_malloc0(sizeof(struct fp_port));
+
+    if (!port) {
+        return NULL;
+    }
+
+    port->r = r;
+    port->index = index;
+    port->pport = index + 1;
+
+    /* front-panel switch port names are 1-based */
+
+    port->name = g_strdup_printf("%s.%d", sw_name, port->pport);
+
+    memcpy(port->conf.macaddr.a, start_mac, sizeof(port->conf.macaddr.a));
+    port->conf.macaddr.a[5] += index;
+    port->conf.bootindex = -1;
+    port->conf.peers = *peers;
+
+    port->nic = qemu_new_nic(&fp_port_info, &port->conf,
+                             sw_name, NULL, port);
+    qemu_format_nic_info_str(qemu_get_queue(port->nic),
+                             port->conf.macaddr.a);
+
+    fp_port_reset(port);
+
+    return port;
+}
+
+void fp_port_free(struct fp_port *port)
+{
+    qemu_del_nic(port->nic);
+    g_free(port->name);
+    g_free(port);
+}
+
+void fp_port_reset(struct fp_port *port)
+{
+    fp_port_disable(port);
+    port->speed = 10000;   /* 10Gbps */
+    port->duplex = DUPLEX_FULL;
+    port->autoneg = 0;
+}
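
A note on the addressing scheme in fp_port_alloc(): each port's MAC is derived
by adding the 0-based port index to the last octet of fp_start_macaddr, and
port names are 1-based, "<switch>.<pport>". A minimal sketch of both rules
(function names are mine; the default start MAC shown in the test is the one
pci_rocker_init() uses):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* per-port MAC: copy the switch's start MAC, bump the last octet */
void port_macaddr(const uint8_t start[6], unsigned index, uint8_t out[6])
{
    memcpy(out, start, 6);
    out[5] += index;    /* same increment as fp_port_alloc() */
}

/* front-panel switch port names are 1-based, e.g. "rocker.1" */
void port_name(char *buf, size_t len, const char *sw_name, unsigned index)
{
    snprintf(buf, len, "%s.%u", sw_name, index + 1);
}
```

Note the octet addition simply wraps at 0xff; with at most 62 ports and the
default start MAC ending in 0x01 that wrap is never reached.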
diff --git a/hw/net/rocker/rocker_fp.h b/hw/net/rocker/rocker_fp.h
new file mode 100644
index 0000000..bda78da
--- /dev/null
+++ b/hw/net/rocker/rocker_fp.h
@@ -0,0 +1,54 @@
+/*
+ * QEMU rocker switch emulation - front-panel ports
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ROCKER_FP_H_
+#define _ROCKER_FP_H_
+
+#include "net/net.h"
+#include "qemu/iov.h"
+
+#define ROCKER_FP_PORTS_MAX 62
+
+struct rocker;
+struct fp_port;
+struct world;
+
+int fp_port_eg(struct fp_port *port, const struct iovec *iov, int iovcnt);
+
+bool fp_port_get_link_up(struct fp_port *port);
+void fp_port_get_info(struct fp_port *port, RockerPortList *info);
+void fp_port_get_macaddr(struct fp_port *port, MACAddr *macaddr);
+void fp_port_set_macaddr(struct fp_port *port, MACAddr *macaddr);
+uint8_t fp_port_get_learning(struct fp_port *port);
+void fp_port_set_learning(struct fp_port *port, uint8_t learning);
+int fp_port_get_settings(struct fp_port *port, uint32_t *speed,
+                         uint8_t *duplex, uint8_t *autoneg);
+int fp_port_set_settings(struct fp_port *port, uint32_t speed,
+                         uint8_t duplex, uint8_t autoneg);
+bool fp_port_from_pport(uint32_t pport, uint32_t *port);
+struct world *fp_port_get_world(struct fp_port *port);
+void fp_port_set_world(struct fp_port *port, struct world *world);
+bool fp_port_enabled(struct fp_port *port);
+void fp_port_enable(struct fp_port *port);
+void fp_port_disable(struct fp_port *port);
+
+struct fp_port *fp_port_alloc(struct rocker *r, char *sw_name,
+                              MACAddr *start_mac, uint index,
+                              NICPeers *peers);
+void fp_port_free(struct fp_port *port);
+void fp_port_reset(struct fp_port *port);
+
+#endif /* _ROCKER_FP_H_ */
diff --git a/hw/net/rocker/rocker_hw.h b/hw/net/rocker/rocker_hw.h
new file mode 100644
index 0000000..44521ed
--- /dev/null
+++ b/hw/net/rocker/rocker_hw.h
@@ -0,0 +1,475 @@
+/*
+ * Rocker switch hardware register and descriptor definitions.
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ * Copyright (c) 2014 Jiri Pirko <jiri@resnulli.us>
+ *
+ */
+
+#ifndef _ROCKER_HW_
+#define _ROCKER_HW_
+
+#define __le16 uint16_t
+#define __le32 uint32_t
+#define __le64 uint64_t
+
+/*
+ * PCI configuration space
+ */
+
+#define ROCKER_PCI_REVISION             0x1
+#define ROCKER_PCI_BAR0_IDX             0
+#define ROCKER_PCI_BAR0_SIZE            0x2000
+#define ROCKER_PCI_MSIX_BAR_IDX         1
+#define ROCKER_PCI_MSIX_BAR_SIZE        0x2000
+#define ROCKER_PCI_MSIX_TABLE_OFFSET    0x0000
+#define ROCKER_PCI_MSIX_PBA_OFFSET      0x1000
+
+/*
+ * MSI-X vectors
+ */
+
+enum {
+    ROCKER_MSIX_VEC_CMD,
+    ROCKER_MSIX_VEC_EVENT,
+    ROCKER_MSIX_VEC_TEST,
+    ROCKER_MSIX_VEC_RESERVED0,
+    __ROCKER_MSIX_VEC_TX,
+    __ROCKER_MSIX_VEC_RX,
+#define ROCKER_MSIX_VEC_TX(port) \
+                (__ROCKER_MSIX_VEC_TX + ((port) * 2))
+#define ROCKER_MSIX_VEC_RX(port) \
+                (__ROCKER_MSIX_VEC_RX + ((port) * 2))
+#define ROCKER_MSIX_VEC_COUNT(portcnt) \
+                (ROCKER_MSIX_VEC_RX((portcnt) - 1) + 1)
+};
+
+/*
+ * Rocker bogus registers
+ */
+#define ROCKER_BOGUS_REG0               0x0000
+#define ROCKER_BOGUS_REG1               0x0004
+#define ROCKER_BOGUS_REG2               0x0008
+#define ROCKER_BOGUS_REG3               0x000c
+
+/*
+ * Rocker test registers
+ */
+#define ROCKER_TEST_REG                 0x0010
+#define ROCKER_TEST_REG64               0x0018  /* 8-byte */
+#define ROCKER_TEST_IRQ                 0x0020
+#define ROCKER_TEST_DMA_ADDR            0x0028  /* 8-byte */
+#define ROCKER_TEST_DMA_SIZE            0x0030
+#define ROCKER_TEST_DMA_CTRL            0x0034
+
+/*
+ * Rocker test register ctrl
+ */
+#define ROCKER_TEST_DMA_CTRL_CLEAR      (1 << 0)
+#define ROCKER_TEST_DMA_CTRL_FILL       (1 << 1)
+#define ROCKER_TEST_DMA_CTRL_INVERT     (1 << 2)
+
+/*
+ * Rocker DMA ring register offsets
+ */
+#define ROCKER_DMA_DESC_BASE            0x1000
+#define ROCKER_DMA_DESC_SIZE            32
+#define ROCKER_DMA_DESC_MASK            0x1F
+#define ROCKER_DMA_DESC_TOTAL_SIZE \
+    (ROCKER_DMA_DESC_SIZE * 64) /* 62 ports + event + cmd */
+#define ROCKER_DMA_DESC_ADDR_OFFSET     0x00     /* 8-byte */
+#define ROCKER_DMA_DESC_SIZE_OFFSET     0x08
+#define ROCKER_DMA_DESC_HEAD_OFFSET     0x0c
+#define ROCKER_DMA_DESC_TAIL_OFFSET     0x10
+#define ROCKER_DMA_DESC_CTRL_OFFSET     0x14
+#define ROCKER_DMA_DESC_CREDITS_OFFSET  0x18
+#define ROCKER_DMA_DESC_RSVD_OFFSET     0x1c
+
+/*
+ * Rocker DMA descriptor ctrl register bits
+ */
+#define ROCKER_DMA_DESC_CTRL_RESET      (1 << 0)
+
+/*
+ * Rocker ring indices
+ */
+#define ROCKER_RING_CMD                 0
+#define ROCKER_RING_EVENT               1
+
+/*
+ * Helper macro to convert a DMA ring register offset
+ * to its ring index, given that the register group
+ * stride is 32 bytes.
+ */
+#define ROCKER_RING_INDEX(reg) (((reg) >> 5) & 0x7F)
+
+/*
+ * Rocker DMA Descriptor
+ */
+
+struct rocker_desc {
+    __le64 buf_addr;
+    uint64_t cookie;
+    __le16 buf_size;
+    __le16 tlv_size;
+    __le16 rsvd[5];   /* pad to 32 bytes */
+    __le16 comp_err;
+} __attribute__((packed, aligned(8)));
+
+/*
+ * Rocker TLV type fields
+ */
+
+struct rocker_tlv {
+    __le32 type;
+    __le16 len;
+    __le16 rsvd;
+} __attribute__((packed, aligned(8)));
+
+/* cmd msg */
+enum {
+    ROCKER_TLV_CMD_UNSPEC,
+    ROCKER_TLV_CMD_TYPE,                /* u16 */
+    ROCKER_TLV_CMD_INFO,                /* nest */
+
+    __ROCKER_TLV_CMD_MAX,
+    ROCKER_TLV_CMD_MAX = __ROCKER_TLV_CMD_MAX - 1,
+};
+
+enum {
+    ROCKER_TLV_CMD_TYPE_UNSPEC,
+    ROCKER_TLV_CMD_TYPE_GET_PORT_SETTINGS,
+    ROCKER_TLV_CMD_TYPE_SET_PORT_SETTINGS,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_ADD,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_MOD,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_DEL,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_GET_STATS,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_ADD,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_MOD,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_DEL,
+    ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_GET_STATS,
+
+    __ROCKER_TLV_CMD_TYPE_MAX,
+    ROCKER_TLV_CMD_TYPE_MAX = __ROCKER_TLV_CMD_TYPE_MAX - 1,
+};
+
+/* cmd info nested for set/get port settings */
+enum {
+    ROCKER_TLV_CMD_PORT_SETTINGS_UNSPEC,
+    ROCKER_TLV_CMD_PORT_SETTINGS_PPORT,         /* u32 */
+    ROCKER_TLV_CMD_PORT_SETTINGS_SPEED,         /* u32 */
+    ROCKER_TLV_CMD_PORT_SETTINGS_DUPLEX,        /* u8 */
+    ROCKER_TLV_CMD_PORT_SETTINGS_AUTONEG,       /* u8 */
+    ROCKER_TLV_CMD_PORT_SETTINGS_MACADDR,       /* binary */
+    ROCKER_TLV_CMD_PORT_SETTINGS_MODE,          /* u8 */
+    ROCKER_TLV_CMD_PORT_SETTINGS_LEARNING,      /* u8 */
+
+    __ROCKER_TLV_CMD_PORT_SETTINGS_MAX,
+    ROCKER_TLV_CMD_PORT_SETTINGS_MAX = __ROCKER_TLV_CMD_PORT_SETTINGS_MAX - 1,
+};
+
+enum {
+    ROCKER_PORT_MODE_OF_DPA,
+};
+
+/* event msg */
+enum {
+    ROCKER_TLV_EVENT_UNSPEC,
+    ROCKER_TLV_EVENT_TYPE,              /* u16 */
+    ROCKER_TLV_EVENT_INFO,              /* nest */
+
+    __ROCKER_TLV_EVENT_MAX,
+    ROCKER_TLV_EVENT_MAX = __ROCKER_TLV_EVENT_MAX - 1,
+};
+
+enum {
+    ROCKER_TLV_EVENT_TYPE_UNSPEC,
+    ROCKER_TLV_EVENT_TYPE_LINK_CHANGED,
+    ROCKER_TLV_EVENT_TYPE_MAC_VLAN_SEEN,
+
+    __ROCKER_TLV_EVENT_TYPE_MAX,
+    ROCKER_TLV_EVENT_TYPE_MAX = __ROCKER_TLV_EVENT_TYPE_MAX - 1,
+};
+
+/* event info nested for link changed */
+enum {
+    ROCKER_TLV_EVENT_LINK_CHANGED_UNSPEC,
+    ROCKER_TLV_EVENT_LINK_CHANGED_PPORT,    /* u32 */
+    ROCKER_TLV_EVENT_LINK_CHANGED_LINKUP,   /* u8 */
+
+    __ROCKER_TLV_EVENT_LINK_CHANGED_MAX,
+    ROCKER_TLV_EVENT_LINK_CHANGED_MAX = __ROCKER_TLV_EVENT_LINK_CHANGED_MAX - 1,
+};
+
+/* event info nested for MAC/VLAN */
+enum {
+    ROCKER_TLV_EVENT_MAC_VLAN_UNSPEC,
+    ROCKER_TLV_EVENT_MAC_VLAN_PPORT,        /* u32 */
+    ROCKER_TLV_EVENT_MAC_VLAN_MAC,          /* binary */
+    ROCKER_TLV_EVENT_MAC_VLAN_VLAN_ID,      /* __be16 */
+
+    __ROCKER_TLV_EVENT_MAC_VLAN_MAX,
+    ROCKER_TLV_EVENT_MAC_VLAN_MAX = __ROCKER_TLV_EVENT_MAC_VLAN_MAX - 1,
+};
+
+/* Rx msg */
+enum {
+    ROCKER_TLV_RX_UNSPEC,
+    ROCKER_TLV_RX_FLAGS,                /* u16, see RX_FLAGS_ */
+    ROCKER_TLV_RX_CSUM,                 /* u16 */
+    ROCKER_TLV_RX_FRAG_ADDR,            /* u64 */
+    ROCKER_TLV_RX_FRAG_MAX_LEN,         /* u16 */
+    ROCKER_TLV_RX_FRAG_LEN,             /* u16 */
+
+    __ROCKER_TLV_RX_MAX,
+    ROCKER_TLV_RX_MAX = __ROCKER_TLV_RX_MAX - 1,
+};
+
+#define ROCKER_RX_FLAGS_IPV4                    (1 << 0)
+#define ROCKER_RX_FLAGS_IPV6                    (1 << 1)
+#define ROCKER_RX_FLAGS_CSUM_CALC               (1 << 2)
+#define ROCKER_RX_FLAGS_IPV4_CSUM_GOOD          (1 << 3)
+#define ROCKER_RX_FLAGS_IP_FRAG                 (1 << 4)
+#define ROCKER_RX_FLAGS_TCP                     (1 << 5)
+#define ROCKER_RX_FLAGS_UDP                     (1 << 6)
+#define ROCKER_RX_FLAGS_TCP_UDP_CSUM_GOOD       (1 << 7)
+
+/* Tx msg */
+enum {
+    ROCKER_TLV_TX_UNSPEC,
+    ROCKER_TLV_TX_OFFLOAD,              /* u8, see TX_OFFLOAD_ */
+    ROCKER_TLV_TX_L3_CSUM_OFF,          /* u16 */
+    ROCKER_TLV_TX_TSO_MSS,              /* u16 */
+    ROCKER_TLV_TX_TSO_HDR_LEN,          /* u16 */
+    ROCKER_TLV_TX_FRAGS,                /* array */
+
+    __ROCKER_TLV_TX_MAX,
+    ROCKER_TLV_TX_MAX = __ROCKER_TLV_TX_MAX - 1,
+};
+
+#define ROCKER_TX_OFFLOAD_NONE          0
+#define ROCKER_TX_OFFLOAD_IP_CSUM       1
+#define ROCKER_TX_OFFLOAD_TCP_UDP_CSUM  2
+#define ROCKER_TX_OFFLOAD_L3_CSUM       3
+#define ROCKER_TX_OFFLOAD_TSO           4
+
+#define ROCKER_TX_FRAGS_MAX             16
+
+enum {
+    ROCKER_TLV_TX_FRAG_UNSPEC,
+    ROCKER_TLV_TX_FRAG,                 /* nest */
+
+    __ROCKER_TLV_TX_FRAG_MAX,
+    ROCKER_TLV_TX_FRAG_MAX = __ROCKER_TLV_TX_FRAG_MAX - 1,
+};
+
+enum {
+    ROCKER_TLV_TX_FRAG_ATTR_UNSPEC,
+    ROCKER_TLV_TX_FRAG_ATTR_ADDR,       /* u64 */
+    ROCKER_TLV_TX_FRAG_ATTR_LEN,        /* u16 */
+
+    __ROCKER_TLV_TX_FRAG_ATTR_MAX,
+    ROCKER_TLV_TX_FRAG_ATTR_MAX = __ROCKER_TLV_TX_FRAG_ATTR_MAX - 1,
+};
+
+/*
+ * cmd info nested for OF-DPA msgs
+ */
+
+enum {
+    ROCKER_TLV_OF_DPA_UNSPEC,
+    ROCKER_TLV_OF_DPA_TABLE_ID,            /* u16 */
+    ROCKER_TLV_OF_DPA_PRIORITY,            /* u32 */
+    ROCKER_TLV_OF_DPA_HARDTIME,            /* u32 */
+    ROCKER_TLV_OF_DPA_IDLETIME,            /* u32 */
+    ROCKER_TLV_OF_DPA_COOKIE,              /* u64 */
+    ROCKER_TLV_OF_DPA_IN_PPORT,            /* u32 */
+    ROCKER_TLV_OF_DPA_IN_PPORT_MASK,       /* u32 */
+    ROCKER_TLV_OF_DPA_OUT_PPORT,           /* u32 */
+    ROCKER_TLV_OF_DPA_GOTO_TABLE_ID,       /* u16 */
+    ROCKER_TLV_OF_DPA_GROUP_ID,            /* u32 */
+    ROCKER_TLV_OF_DPA_GROUP_ID_LOWER,      /* u32 */
+    ROCKER_TLV_OF_DPA_GROUP_COUNT,         /* u16 */
+    ROCKER_TLV_OF_DPA_GROUP_IDS,           /* u32 array */
+    ROCKER_TLV_OF_DPA_VLAN_ID,             /* __be16 */
+    ROCKER_TLV_OF_DPA_VLAN_ID_MASK,        /* __be16 */
+    ROCKER_TLV_OF_DPA_VLAN_PCP,            /* __be16 */
+    ROCKER_TLV_OF_DPA_VLAN_PCP_MASK,       /* __be16 */
+    ROCKER_TLV_OF_DPA_VLAN_PCP_ACTION,     /* u8 */
+    ROCKER_TLV_OF_DPA_NEW_VLAN_ID,         /* __be16 */
+    ROCKER_TLV_OF_DPA_NEW_VLAN_PCP,        /* u8 */
+    ROCKER_TLV_OF_DPA_TUNNEL_ID,           /* u32 */
+    ROCKER_TLV_OF_DPA_TUNNEL_LPORT,        /* u32 */
+    ROCKER_TLV_OF_DPA_ETHERTYPE,           /* __be16 */
+    ROCKER_TLV_OF_DPA_DST_MAC,             /* binary */
+    ROCKER_TLV_OF_DPA_DST_MAC_MASK,        /* binary */
+    ROCKER_TLV_OF_DPA_SRC_MAC,             /* binary */
+    ROCKER_TLV_OF_DPA_SRC_MAC_MASK,        /* binary */
+    ROCKER_TLV_OF_DPA_IP_PROTO,            /* u8 */
+    ROCKER_TLV_OF_DPA_IP_PROTO_MASK,       /* u8 */
+    ROCKER_TLV_OF_DPA_IP_DSCP,             /* u8 */
+    ROCKER_TLV_OF_DPA_IP_DSCP_MASK,        /* u8 */
+    ROCKER_TLV_OF_DPA_IP_DSCP_ACTION,      /* u8 */
+    ROCKER_TLV_OF_DPA_NEW_IP_DSCP,         /* u8 */
+    ROCKER_TLV_OF_DPA_IP_ECN,              /* u8 */
+    ROCKER_TLV_OF_DPA_IP_ECN_MASK,         /* u8 */
+    ROCKER_TLV_OF_DPA_DST_IP,              /* __be32 */
+    ROCKER_TLV_OF_DPA_DST_IP_MASK,         /* __be32 */
+    ROCKER_TLV_OF_DPA_SRC_IP,              /* __be32 */
+    ROCKER_TLV_OF_DPA_SRC_IP_MASK,         /* __be32 */
+    ROCKER_TLV_OF_DPA_DST_IPV6,            /* binary */
+    ROCKER_TLV_OF_DPA_DST_IPV6_MASK,       /* binary */
+    ROCKER_TLV_OF_DPA_SRC_IPV6,            /* binary */
+    ROCKER_TLV_OF_DPA_SRC_IPV6_MASK,       /* binary */
+    ROCKER_TLV_OF_DPA_SRC_ARP_IP,          /* __be32 */
+    ROCKER_TLV_OF_DPA_SRC_ARP_IP_MASK,     /* __be32 */
+    ROCKER_TLV_OF_DPA_L4_DST_PORT,         /* __be16 */
+    ROCKER_TLV_OF_DPA_L4_DST_PORT_MASK,    /* __be16 */
+    ROCKER_TLV_OF_DPA_L4_SRC_PORT,         /* __be16 */
+    ROCKER_TLV_OF_DPA_L4_SRC_PORT_MASK,    /* __be16 */
+    ROCKER_TLV_OF_DPA_ICMP_TYPE,           /* u8 */
+    ROCKER_TLV_OF_DPA_ICMP_TYPE_MASK,      /* u8 */
+    ROCKER_TLV_OF_DPA_ICMP_CODE,           /* u8 */
+    ROCKER_TLV_OF_DPA_ICMP_CODE_MASK,      /* u8 */
+    ROCKER_TLV_OF_DPA_IPV6_LABEL,          /* __be32 */
+    ROCKER_TLV_OF_DPA_IPV6_LABEL_MASK,     /* __be32 */
+    ROCKER_TLV_OF_DPA_QUEUE_ID_ACTION,     /* u8 */
+    ROCKER_TLV_OF_DPA_NEW_QUEUE_ID,        /* u8 */
+    ROCKER_TLV_OF_DPA_CLEAR_ACTIONS,       /* u32 */
+    ROCKER_TLV_OF_DPA_POP_VLAN,            /* u8 */
+    ROCKER_TLV_OF_DPA_TTL_CHECK,           /* u8 */
+    ROCKER_TLV_OF_DPA_COPY_CPU_ACTION,     /* u8 */
+
+    __ROCKER_TLV_OF_DPA_MAX,
+    ROCKER_TLV_OF_DPA_MAX = __ROCKER_TLV_OF_DPA_MAX - 1,
+};
+
+/*
+ * OF-DPA table IDs
+ */
+
+enum rocker_of_dpa_table_id {
+    ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT = 0,
+    ROCKER_OF_DPA_TABLE_ID_VLAN = 10,
+    ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC = 20,
+    ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING = 30,
+    ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING = 40,
+    ROCKER_OF_DPA_TABLE_ID_BRIDGING = 50,
+    ROCKER_OF_DPA_TABLE_ID_ACL_POLICY = 60,
+};
+
+/*
+ * OF-DPA flow stats
+ */
+
+enum {
+    ROCKER_TLV_OF_DPA_FLOW_STAT_UNSPEC,
+    ROCKER_TLV_OF_DPA_FLOW_STAT_DURATION,    /* u32 */
+    ROCKER_TLV_OF_DPA_FLOW_STAT_RX_PKTS,     /* u64 */
+    ROCKER_TLV_OF_DPA_FLOW_STAT_TX_PKTS,     /* u64 */
+
+    __ROCKER_TLV_OF_DPA_FLOW_STAT_MAX,
+    ROCKER_TLV_OF_DPA_FLOW_STAT_MAX = __ROCKER_TLV_OF_DPA_FLOW_STAT_MAX - 1,
+};
+
+/*
+ * OF-DPA group types
+ */
+
+enum rocker_of_dpa_group_type {
+    ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE = 0,
+    ROCKER_OF_DPA_GROUP_TYPE_L2_REWRITE,
+    ROCKER_OF_DPA_GROUP_TYPE_L3_UCAST,
+    ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST,
+    ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD,
+    ROCKER_OF_DPA_GROUP_TYPE_L3_INTERFACE,
+    ROCKER_OF_DPA_GROUP_TYPE_L3_MCAST,
+    ROCKER_OF_DPA_GROUP_TYPE_L3_ECMP,
+    ROCKER_OF_DPA_GROUP_TYPE_L2_OVERLAY,
+};
+
+/*
+ * OF-DPA group L2 overlay types
+ */
+
+enum rocker_of_dpa_overlay_type {
+    ROCKER_OF_DPA_OVERLAY_TYPE_FLOOD_UCAST = 0,
+    ROCKER_OF_DPA_OVERLAY_TYPE_FLOOD_MCAST,
+    ROCKER_OF_DPA_OVERLAY_TYPE_MCAST_UCAST,
+    ROCKER_OF_DPA_OVERLAY_TYPE_MCAST_MCAST,
+};
+
+/*
+ * OF-DPA group ID encoding
+ */
+
+#define ROCKER_GROUP_TYPE_SHIFT 28
+#define ROCKER_GROUP_TYPE_MASK 0xf0000000
+#define ROCKER_GROUP_VLAN_ID_SHIFT 16
+#define ROCKER_GROUP_VLAN_ID_MASK 0x0fff0000
+#define ROCKER_GROUP_PORT_SHIFT 0
+#define ROCKER_GROUP_PORT_MASK 0x0000ffff
+#define ROCKER_GROUP_TUNNEL_ID_SHIFT 12
+#define ROCKER_GROUP_TUNNEL_ID_MASK 0x0ffff000
+#define ROCKER_GROUP_SUBTYPE_SHIFT 10
+#define ROCKER_GROUP_SUBTYPE_MASK 0x00000c00
+#define ROCKER_GROUP_INDEX_SHIFT 0
+#define ROCKER_GROUP_INDEX_MASK 0x0000ffff
+#define ROCKER_GROUP_INDEX_LONG_SHIFT 0
+#define ROCKER_GROUP_INDEX_LONG_MASK 0x0fffffff
+
+#define ROCKER_GROUP_TYPE_GET(group_id) \
+    (((group_id) & ROCKER_GROUP_TYPE_MASK) >> ROCKER_GROUP_TYPE_SHIFT)
+#define ROCKER_GROUP_TYPE_SET(type) \
+    (((type) << ROCKER_GROUP_TYPE_SHIFT) & ROCKER_GROUP_TYPE_MASK)
+#define ROCKER_GROUP_VLAN_GET(group_id) \
+    (((group_id) & ROCKER_GROUP_VLAN_ID_MASK) >> ROCKER_GROUP_VLAN_ID_SHIFT)
+#define ROCKER_GROUP_VLAN_SET(vlan_id) \
+    (((vlan_id) << ROCKER_GROUP_VLAN_ID_SHIFT) & ROCKER_GROUP_VLAN_ID_MASK)
+#define ROCKER_GROUP_PORT_GET(group_id) \
+    (((group_id) & ROCKER_GROUP_PORT_MASK) >> ROCKER_GROUP_PORT_SHIFT)
+#define ROCKER_GROUP_PORT_SET(port) \
+    (((port) << ROCKER_GROUP_PORT_SHIFT) & ROCKER_GROUP_PORT_MASK)
+#define ROCKER_GROUP_INDEX_GET(group_id) \
+    (((group_id) & ROCKER_GROUP_INDEX_MASK) >> ROCKER_GROUP_INDEX_SHIFT)
+#define ROCKER_GROUP_INDEX_SET(index) \
+    (((index) << ROCKER_GROUP_INDEX_SHIFT) & ROCKER_GROUP_INDEX_MASK)
+#define ROCKER_GROUP_INDEX_LONG_GET(group_id) \
+    (((group_id) & ROCKER_GROUP_INDEX_LONG_MASK) >> \
+     ROCKER_GROUP_INDEX_LONG_SHIFT)
+#define ROCKER_GROUP_INDEX_LONG_SET(index) \
+    (((index) << ROCKER_GROUP_INDEX_LONG_SHIFT) & \
+     ROCKER_GROUP_INDEX_LONG_MASK)
+
+#define ROCKER_GROUP_NONE 0
+#define ROCKER_GROUP_L2_INTERFACE(vlan_id, port) \
+    (ROCKER_GROUP_TYPE_SET(ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE) |\
+     ROCKER_GROUP_VLAN_SET(ntohs(vlan_id)) | ROCKER_GROUP_PORT_SET(port))
+#define ROCKER_GROUP_L2_REWRITE(index) \
+    (ROCKER_GROUP_TYPE_SET(ROCKER_OF_DPA_GROUP_TYPE_L2_REWRITE) |\
+     ROCKER_GROUP_INDEX_LONG_SET(index))
+#define ROCKER_GROUP_L2_MCAST(vlan_id, index) \
+    (ROCKER_GROUP_TYPE_SET(ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST) |\
+     ROCKER_GROUP_VLAN_SET(ntohs(vlan_id)) | ROCKER_GROUP_INDEX_SET(index))
+#define ROCKER_GROUP_L2_FLOOD(vlan_id, index) \
+    (ROCKER_GROUP_TYPE_SET(ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD) |\
+     ROCKER_GROUP_VLAN_SET(ntohs(vlan_id)) | ROCKER_GROUP_INDEX_SET(index))
+#define ROCKER_GROUP_L3_UNICAST(index) \
+    (ROCKER_GROUP_TYPE_SET(ROCKER_OF_DPA_GROUP_TYPE_L3_UCAST) |\
+     ROCKER_GROUP_INDEX_LONG_SET(index))
+
+/*
+ * Rocker general purpose registers
+ */
+#define ROCKER_CONTROL                  0x0300
+#define ROCKER_PORT_PHYS_COUNT          0x0304
+#define ROCKER_PORT_PHYS_LINK_STATUS    0x0310 /* 8-byte */
+#define ROCKER_PORT_PHYS_ENABLE         0x0318 /* 8-byte */
+#define ROCKER_SWITCH_ID                0x0320 /* 8-byte */
+
+/*
+ * Rocker control bits
+ */
+#define ROCKER_CONTROL_RESET            (1 << 0)
+
+#endif /* _ROCKER_HW_ */
diff --git a/hw/net/rocker/rocker_of_dpa.c b/hw/net/rocker/rocker_of_dpa.c
new file mode 100644
index 0000000..328f351
--- /dev/null
+++ b/hw/net/rocker/rocker_of_dpa.c
@@ -0,0 +1,2335 @@
+/*
+ * QEMU rocker switch emulation - OF-DPA flow processing support
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "net/eth.h"
+#include "qemu/iov.h"
+#include "qemu/timer.h"
+
+#include "rocker.h"
+#include "rocker_hw.h"
+#include "rocker_fp.h"
+#include "rocker_tlv.h"
+#include "rocker_world.h"
+#include "rocker_desc.h"
+#include "rocker_of_dpa.h"
+
+static const MACAddr zero_mac = { .a = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 } };
+static const MACAddr ff_mac =   { .a = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff } };
+
+struct of_dpa {
+    struct world *world;
+    GHashTable *flow_tbl;
+    GHashTable *group_tbl;
+    unsigned int flow_tbl_max_size;
+    unsigned int group_tbl_max_size;
+};
+
+/* flow_key stolen mostly from OVS
+ *
+ * Note: fields that compare with network packet header fields
+ * are stored in network order (BE) to avoid per-packet field
+ * byte-swaps.
+ */
+
+struct of_dpa_flow_key {
+    uint32_t in_pport;               /* ingress port */
+    uint32_t tunnel_id;              /* overlay tunnel id */
+    uint32_t tbl_id;                 /* table id */
+    struct {
+        __be16 vlan_id;              /* 0 if no VLAN */
+        MACAddr src;                 /* ethernet source address */
+        MACAddr dst;                 /* ethernet destination address */
+        __be16 type;                 /* ethernet frame type */
+    } eth;
+    struct {
+        uint8_t proto;               /* IP protocol or ARP opcode */
+        uint8_t tos;                 /* IP ToS */
+        uint8_t ttl;                 /* IP TTL/hop limit */
+        uint8_t frag;                /* one of FRAG_TYPE_* */
+    } ip;
+    union {
+        struct {
+            struct {
+                __be32 src;          /* IP source address */
+                __be32 dst;          /* IP destination address */
+            } addr;
+            union {
+                struct {
+                    __be16 src;      /* TCP/UDP/SCTP source port */
+                    __be16 dst;      /* TCP/UDP/SCTP destination port */
+                    __be16 flags;    /* TCP flags */
+                } tp;
+                struct {
+                    MACAddr sha;     /* ARP source hardware address */
+                    MACAddr tha;     /* ARP target hardware address */
+                } arp;
+            };
+        } ipv4;
+        struct {
+            struct {
+                ipv6_addr src;       /* IPv6 source address */
+                ipv6_addr dst;       /* IPv6 destination address */
+            } addr;
+            __be32 label;            /* IPv6 flow label */
+            struct {
+                __be16 src;          /* TCP/UDP/SCTP source port */
+                __be16 dst;          /* TCP/UDP/SCTP destination port */
+                __be16 flags;        /* TCP flags */
+            } tp;
+            struct {
+                ipv6_addr target;    /* ND target address */
+                MACAddr sll;         /* ND source link layer address */
+                MACAddr tll;         /* ND target link layer address */
+            } nd;
+        } ipv6;
+    };
+    int width;                       /* how many uint64_t's in key? */
+};
+
+/* Width of key which includes field 'f' in u64s, rounded up */
+#define FLOW_KEY_WIDTH(f) \
+    ((offsetof(struct of_dpa_flow_key, f) + \
+      sizeof(((struct of_dpa_flow_key *)0)->f) + \
+      sizeof(uint64_t) - 1) / sizeof(uint64_t))
+
+struct of_dpa_flow_action {
+    uint32_t goto_tbl;
+    struct {
+        uint32_t group_id;
+        uint32_t tun_log_lport;
+        __be16 vlan_id;
+    } write;
+    struct {
+        __be16 new_vlan_id;
+        uint32_t out_pport;
+        uint8_t copy_to_cpu;
+        __be16 vlan_id;
+    } apply;
+};
+
+struct of_dpa_flow {
+    uint32_t lpm;
+    uint32_t priority;
+    uint32_t hardtime;
+    uint32_t idletime;
+    uint64_t cookie;
+    struct of_dpa_flow_key key;
+    struct of_dpa_flow_key mask;
+    struct of_dpa_flow_action action;
+    struct {
+        uint64_t hits;
+        int64_t install_time;
+        int64_t refresh_time;
+        uint64_t rx_pkts;
+        uint64_t tx_pkts;
+    } stats;
+};
+
+struct of_dpa_flow_pkt_fields {
+    uint32_t tunnel_id;
+    struct eth_header *ethhdr;
+    __be16 *h_proto;
+    struct vlan_header *vlanhdr;
+    struct ip_header *ipv4hdr;
+    struct ip6_header *ipv6hdr;
+    ipv6_addr *ipv6_src_addr;
+    ipv6_addr *ipv6_dst_addr;
+};
+
+struct of_dpa_flow_context {
+    uint32_t in_pport;
+    uint32_t tunnel_id;
+    struct iovec *iov;
+    int iovcnt;
+    struct eth_header ethhdr_rewrite;
+    struct vlan_header vlanhdr_rewrite;
+    struct vlan_header vlanhdr;
+    struct of_dpa *of_dpa;
+    struct of_dpa_flow_pkt_fields fields;
+    struct of_dpa_flow_action action_set;
+};
+
+struct of_dpa_flow_match {
+    struct of_dpa_flow_key value;
+    struct of_dpa_flow *best;
+};
+
+struct of_dpa_group {
+    uint32_t id;
+    union {
+        struct {
+            uint32_t out_pport;
+            uint8_t pop_vlan;
+        } l2_interface;
+        struct {
+            uint32_t group_id;
+            MACAddr src_mac;
+            MACAddr dst_mac;
+            __be16 vlan_id;
+        } l2_rewrite;
+        struct {
+            uint16_t group_count;
+            uint32_t *group_ids;
+        } l2_flood;
+        struct {
+            uint32_t group_id;
+            MACAddr src_mac;
+            MACAddr dst_mac;
+            __be16 vlan_id;
+            uint8_t ttl_check;
+        } l3_unicast;
+    };
+};
+
+static int of_dpa_mask2prefix(__be32 mask)
+{
+    int i;
+    int count = 32;
+
+    for (i = 0; i < 32; i++) {
+        if (!(ntohl(mask) & ((2 << i) - 1))) {
+            count--;
+        }
+    }
+
+    return count;
+}
+
+#if defined(DEBUG_ROCKER)
+static void of_dpa_flow_key_dump(struct of_dpa_flow_key *key,
+                                 struct of_dpa_flow_key *mask)
+{
+    char buf[512], *b = buf, *mac;
+
+    b += sprintf(b, " tbl %2d", key->tbl_id);
+
+    if (key->in_pport || (mask && mask->in_pport)) {
+        b += sprintf(b, " in_pport %2d", key->in_pport);
+        if (mask && mask->in_pport != 0xffffffff) {
+            b += sprintf(b, "/0x%08x", mask->in_pport);
+        }
+    }
+
+    if (key->tunnel_id || (mask && mask->tunnel_id)) {
+        b += sprintf(b, " tun %8d", key->tunnel_id);
+        if (mask && mask->tunnel_id != 0xffffffff) {
+            b += sprintf(b, "/0x%08x", mask->tunnel_id);
+        }
+    }
+
+    if (key->eth.vlan_id || (mask && mask->eth.vlan_id)) {
+        b += sprintf(b, " vlan %4d", ntohs(key->eth.vlan_id));
+        if (mask && mask->eth.vlan_id != 0xffff) {
+            b += sprintf(b, "/0x%04x", ntohs(mask->eth.vlan_id));
+        }
+    }
+
+    if (memcmp(key->eth.src.a, zero_mac.a, ETH_ALEN) ||
+        (mask && memcmp(mask->eth.src.a, zero_mac.a, ETH_ALEN))) {
+        mac = qemu_mac_strdup_printf(key->eth.src.a);
+        b += sprintf(b, " src %s", mac);
+        g_free(mac);
+        if (mask && memcmp(mask->eth.src.a, ff_mac.a, ETH_ALEN)) {
+            mac = qemu_mac_strdup_printf(mask->eth.src.a);
+            b += sprintf(b, "/%s", mac);
+            g_free(mac);
+        }
+    }
+
+    if (memcmp(key->eth.dst.a, zero_mac.a, ETH_ALEN) ||
+        (mask && memcmp(mask->eth.dst.a, zero_mac.a, ETH_ALEN))) {
+        mac = qemu_mac_strdup_printf(key->eth.dst.a);
+        b += sprintf(b, " dst %s", mac);
+        g_free(mac);
+        if (mask && memcmp(mask->eth.dst.a, ff_mac.a, ETH_ALEN)) {
+            mac = qemu_mac_strdup_printf(mask->eth.dst.a);
+            b += sprintf(b, "/%s", mac);
+            g_free(mac);
+        }
+    }
+
+    if (key->eth.type || (mask && mask->eth.type)) {
+        b += sprintf(b, " type 0x%04x", ntohs(key->eth.type));
+        if (mask && mask->eth.type != 0xffff) {
+            b += sprintf(b, "/0x%04x", ntohs(mask->eth.type));
+        }
+        switch (ntohs(key->eth.type)) {
+        case 0x0800:
+        case 0x86dd:
+            if (key->ip.proto || (mask && mask->ip.proto)) {
+                b += sprintf(b, " ip proto %2d", key->ip.proto);
+                if (mask && mask->ip.proto != 0xff) {
+                    b += sprintf(b, "/0x%02x", mask->ip.proto);
+                }
+            }
+            if (key->ip.tos || (mask && mask->ip.tos)) {
+                b += sprintf(b, " ip tos %2d", key->ip.tos);
+                if (mask && mask->ip.tos != 0xff) {
+                    b += sprintf(b, "/0x%02x", mask->ip.tos);
+                }
+            }
+            break;
+        }
+        switch (ntohs(key->eth.type)) {
+        case 0x0800:
+            if (key->ipv4.addr.dst || (mask && mask->ipv4.addr.dst)) {
+                b += sprintf(b, " dst %s",
+                    inet_ntoa(*(struct in_addr *)&key->ipv4.addr.dst));
+                if (mask) {
+                    b += sprintf(b, "/%d",
+                                 of_dpa_mask2prefix(mask->ipv4.addr.dst));
+                }
+            }
+            break;
+        }
+    }
+
+    DPRINTF("%s\n", buf);
+}
+#else
+#define of_dpa_flow_key_dump(k, m)
+#endif
+
+static void _of_dpa_flow_match(void *key, void *value, void *user_data)
+{
+    struct of_dpa_flow *flow = value;
+    struct of_dpa_flow_match *match = user_data;
+    uint64_t *k = (uint64_t *)&flow->key;
+    uint64_t *m = (uint64_t *)&flow->mask;
+    uint64_t *v = (uint64_t *)&match->value;
+    int i;
+
+    if (flow->key.tbl_id == match->value.tbl_id) {
+        of_dpa_flow_key_dump(&flow->key, &flow->mask);
+    }
+
+    if (flow->key.width > match->value.width) {
+        return;
+    }
+
+    for (i = 0; i < flow->key.width; i++, k++, m++, v++) {
+        if ((~*k & *m & *v) | (*k & *m & ~*v)) {
+            return;
+        }
+    }
+
+    DPRINTF("match\n");
+
+    if (!match->best ||
+        flow->priority > match->best->priority ||
+        flow->lpm > match->best->lpm) {
+        match->best = flow;
+    }
+}
+
+static struct of_dpa_flow *of_dpa_flow_match(struct of_dpa *of_dpa,
+                                             struct of_dpa_flow_match *match)
+{
+    DPRINTF("\nnew search\n");
+    of_dpa_flow_key_dump(&match->value, NULL);
+
+    g_hash_table_foreach(of_dpa->flow_tbl, _of_dpa_flow_match, match);
+
+    return match->best;
+}
+
+static struct of_dpa_flow *of_dpa_flow_find(struct of_dpa *of_dpa,
+                                            uint64_t cookie)
+{
+    return g_hash_table_lookup(of_dpa->flow_tbl, &cookie);
+}
+
+static int of_dpa_flow_add(struct of_dpa *of_dpa, struct of_dpa_flow *flow)
+{
+    g_hash_table_insert(of_dpa->flow_tbl, &flow->cookie, flow);
+
+    return 0;
+}
+
+static int of_dpa_flow_mod(struct of_dpa_flow *flow)
+{
+    return 0;
+}
+
+static void of_dpa_flow_del(struct of_dpa *of_dpa, struct of_dpa_flow *flow)
+{
+    g_hash_table_remove(of_dpa->flow_tbl, &flow->cookie);
+}
+
+static struct of_dpa_flow *of_dpa_flow_alloc(uint64_t cookie,
+                                             uint32_t priority,
+                                             uint32_t hardtime,
+                                             uint32_t idletime)
+{
+    struct of_dpa_flow *flow;
+    int64_t now = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) / 1000;
+
+    flow = g_malloc0(sizeof(struct of_dpa_flow));
+    if (!flow) {
+        return NULL;
+    }
+
+    flow->cookie = cookie;
+    flow->priority = priority;
+    flow->hardtime = hardtime;
+    flow->idletime = idletime;
+    flow->mask.tbl_id = 0xffffffff;
+
+    flow->stats.install_time = flow->stats.refresh_time = now;
+
+    return flow;
+}
+
+static void of_dpa_flow_pkt_hdr_reset(struct of_dpa_flow_context *fc)
+{
+    struct of_dpa_flow_pkt_fields *fields = &fc->fields;
+
+    fc->iov[0].iov_base = fields->ethhdr;
+    fc->iov[0].iov_len = sizeof(struct eth_header);
+    fc->iov[1].iov_base = fields->vlanhdr;
+    fc->iov[1].iov_len = fields->vlanhdr ? sizeof(struct vlan_header) : 0;
+}
+
+static void of_dpa_flow_pkt_parse(struct of_dpa_flow_context *fc,
+                                  const struct iovec *iov, int iovcnt)
+{
+    struct of_dpa_flow_pkt_fields *fields = &fc->fields;
+    size_t sofar = 0;
+    int i;
+
+    sofar += sizeof(struct eth_header);
+    if (iov->iov_len < sofar) {
+        DPRINTF("flow_pkt_parse underrun on eth_header\n");
+        return;
+    }
+
+    fields->ethhdr = iov->iov_base;
+    fields->h_proto = &fields->ethhdr->h_proto;
+
+    if (ntohs(*fields->h_proto) == ETH_P_VLAN) {
+        sofar += sizeof(struct vlan_header);
+        if (iov->iov_len < sofar) {
+            DPRINTF("flow_pkt_parse underrun on vlan_header\n");
+            return;
+        }
+        fields->vlanhdr = (struct vlan_header *)(fields->ethhdr + 1);
+        fields->h_proto = &fields->vlanhdr->h_proto;
+    }
+
+    switch (ntohs(*fields->h_proto)) {
+    case ETH_P_IP:
+        sofar += sizeof(struct ip_header);
+        if (iov->iov_len < sofar) {
+            DPRINTF("flow_pkt_parse underrun on ip_header\n");
+            return;
+        }
+        fields->ipv4hdr = (struct ip_header *)(fields->h_proto + 1);
+        break;
+    case ETH_P_IPV6:
+        sofar += sizeof(struct ip6_header);
+        if (iov->iov_len < sofar) {
+            DPRINTF("flow_pkt_parse underrun on ip6_header\n");
+            return;
+        }
+        fields->ipv6hdr = (struct ip6_header *)(fields->h_proto + 1);
+        break;
+    }
+
+    /* To facilitate (potential) VLAN tag insertion, make a
+     * copy of the iov and insert two new vectors at the
+     * beginning for eth hdr and vlan hdr.  No data is copied,
+     * just the vectors.
+     */
+
+    of_dpa_flow_pkt_hdr_reset(fc);
+
+    fc->iov[2].iov_base = fields->h_proto + 1;
+    fc->iov[2].iov_len = iov->iov_len - fc->iov[0].iov_len - fc->iov[1].iov_len;
+
+    for (i = 1; i < iovcnt; i++) {
+        fc->iov[i + 2] = iov[i];
+    }
+
+    fc->iovcnt = iovcnt + 2;
+}
+
+static void of_dpa_flow_pkt_insert_vlan(struct of_dpa_flow_context *fc,
+                                        __be16 vlan_id)
+{
+    struct of_dpa_flow_pkt_fields *fields = &fc->fields;
+    uint16_t h_proto = fields->ethhdr->h_proto;
+
+    if (fields->vlanhdr) {
+        DPRINTF("flow_pkt_insert_vlan packet already has vlan\n");
+        return;
+    }
+
+    fields->ethhdr->h_proto = htons(ETH_P_VLAN);
+    fields->vlanhdr = &fc->vlanhdr;
+    fields->vlanhdr->h_tci = vlan_id;
+    fields->vlanhdr->h_proto = h_proto;
+    fields->h_proto = &fields->vlanhdr->h_proto;
+
+    fc->iov[1].iov_base = fields->vlanhdr;
+    fc->iov[1].iov_len = sizeof(struct vlan_header);
+}
+
+static void of_dpa_flow_pkt_strip_vlan(struct of_dpa_flow_context *fc)
+{
+    struct of_dpa_flow_pkt_fields *fields = &fc->fields;
+
+    if (!fields->vlanhdr) {
+        return;
+    }
+
+    fc->iov[0].iov_len -= sizeof(fields->ethhdr->h_proto);
+    fc->iov[1].iov_base = fields->h_proto;
+    fc->iov[1].iov_len = sizeof(fields->ethhdr->h_proto);
+}
+
+static void of_dpa_flow_pkt_hdr_rewrite(struct of_dpa_flow_context *fc,
+                                        uint8_t *src_mac, uint8_t *dst_mac,
+                                        __be16 vlan_id)
+{
+    struct of_dpa_flow_pkt_fields *fields = &fc->fields;
+
+    if (src_mac || dst_mac) {
+        memcpy(&fc->ethhdr_rewrite, fields->ethhdr, sizeof(struct eth_header));
+        if (src_mac && memcmp(src_mac, zero_mac.a, ETH_ALEN)) {
+            memcpy(fc->ethhdr_rewrite.h_source, src_mac, ETH_ALEN);
+        }
+        if (dst_mac && memcmp(dst_mac, zero_mac.a, ETH_ALEN)) {
+            memcpy(fc->ethhdr_rewrite.h_dest, dst_mac, ETH_ALEN);
+        }
+        fc->iov[0].iov_base = &fc->ethhdr_rewrite;
+    }
+
+    if (vlan_id && fields->vlanhdr) {
+        fc->vlanhdr_rewrite = fc->vlanhdr;
+        fc->vlanhdr_rewrite.h_tci = vlan_id;
+        fc->iov[1].iov_base = &fc->vlanhdr_rewrite;
+    }
+}
+
+static void of_dpa_flow_ig_tbl(struct of_dpa_flow_context *fc, uint32_t tbl_id);
+
+static void of_dpa_ig_port_build_match(struct of_dpa_flow_context *fc,
+                                       struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT;
+    match->value.in_pport = fc->in_pport;
+    match->value.width = FLOW_KEY_WIDTH(tbl_id);
+}
+
+static void of_dpa_ig_port_miss(struct of_dpa_flow_context *fc)
+{
+    uint32_t port;
+
+    /* The default on miss is for packets from physical ports
+     * to go to the VLAN Flow Table. There is no default rule
+     * for packets from logical ports, which are dropped on miss.
+     */
+
+    if (fp_port_from_pport(fc->in_pport, &port)) {
+        of_dpa_flow_ig_tbl(fc, ROCKER_OF_DPA_TABLE_ID_VLAN);
+    }
+}
+
+static void of_dpa_vlan_build_match(struct of_dpa_flow_context *fc,
+                                    struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_VLAN;
+    match->value.in_pport = fc->in_pport;
+    if (fc->fields.vlanhdr) {
+        match->value.eth.vlan_id = fc->fields.vlanhdr->h_tci;
+    }
+    match->value.width = FLOW_KEY_WIDTH(eth.vlan_id);
+}
+
+static void of_dpa_vlan_insert(struct of_dpa_flow_context *fc,
+                               struct of_dpa_flow *flow)
+{
+    if (flow->action.apply.new_vlan_id) {
+        of_dpa_flow_pkt_insert_vlan(fc, flow->action.apply.new_vlan_id);
+    }
+}
+
+static void of_dpa_term_mac_build_match(struct of_dpa_flow_context *fc,
+                                        struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC;
+    match->value.in_pport = fc->in_pport;
+    match->value.eth.type = *fc->fields.h_proto;
+    match->value.eth.vlan_id = fc->fields.vlanhdr->h_tci;
+    memcpy(match->value.eth.dst.a, fc->fields.ethhdr->h_dest,
+           sizeof(match->value.eth.dst.a));
+    match->value.width = FLOW_KEY_WIDTH(eth.type);
+}
+
+static void of_dpa_term_mac_miss(struct of_dpa_flow_context *fc)
+{
+    of_dpa_flow_ig_tbl(fc, ROCKER_OF_DPA_TABLE_ID_BRIDGING);
+}
+
+static void of_dpa_apply_actions(struct of_dpa_flow_context *fc,
+                                 struct of_dpa_flow *flow)
+{
+    fc->action_set.apply.copy_to_cpu = flow->action.apply.copy_to_cpu;
+    fc->action_set.apply.vlan_id = flow->key.eth.vlan_id;
+}
+
+static void of_dpa_bridging_build_match(struct of_dpa_flow_context *fc,
+                                        struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_BRIDGING;
+    if (fc->fields.vlanhdr) {
+        match->value.eth.vlan_id = fc->fields.vlanhdr->h_tci;
+    } else if (fc->tunnel_id) {
+        match->value.tunnel_id = fc->tunnel_id;
+    }
+    memcpy(match->value.eth.dst.a, fc->fields.ethhdr->h_dest,
+           sizeof(match->value.eth.dst.a));
+    match->value.width = FLOW_KEY_WIDTH(eth.dst);
+}
+
+static void of_dpa_bridging_learn(struct of_dpa_flow_context *fc,
+                                  struct of_dpa_flow *dst_flow)
+{
+    struct of_dpa_flow_match match = { { 0, }, };
+    struct of_dpa_flow *flow;
+    uint8_t *addr;
+    uint16_t vlan_id;
+    int64_t now = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) / 1000;
+    int64_t refresh_delay = 1;
+
+    /* Do a lookup in bridge table by src_mac/vlan */
+
+    addr = fc->fields.ethhdr->h_source;
+    vlan_id = fc->fields.vlanhdr->h_tci;
+
+    match.value.tbl_id = ROCKER_OF_DPA_TABLE_ID_BRIDGING;
+    match.value.eth.vlan_id = vlan_id;
+    memcpy(match.value.eth.dst.a, addr, sizeof(match.value.eth.dst.a));
+    match.value.width = FLOW_KEY_WIDTH(eth.dst);
+
+    flow = of_dpa_flow_match(fc->of_dpa, &match);
+    if (flow) {
+        if (!memcmp(flow->mask.eth.dst.a, ff_mac.a,
+                    sizeof(flow->mask.eth.dst.a))) {
+            /* src_mac/vlan already learned; if in_pport and out_pport
+             * don't match, the end station has moved and the port
+             * needs updating */
+            /* XXX implement the in_pport/out_pport check */
+            if (now - flow->stats.refresh_time < refresh_delay) {
+                return;
+            }
+            flow->stats.refresh_time = now;
+        }
+    }
+
+    /* Let driver know about mac/vlan.  This may be a new mac/vlan
+     * or a refresh of existing mac/vlan that's been hit after the
+     * refresh_delay.
+     */
+
+    rocker_event_mac_vlan_seen(world_rocker(fc->of_dpa->world),
+                               fc->in_pport, addr, vlan_id);
+}
+
+static void of_dpa_bridging_miss(struct of_dpa_flow_context *fc)
+{
+    of_dpa_bridging_learn(fc, NULL);
+    of_dpa_flow_ig_tbl(fc, ROCKER_OF_DPA_TABLE_ID_ACL_POLICY);
+}
+
+static void of_dpa_bridging_action_write(struct of_dpa_flow_context *fc,
+                                         struct of_dpa_flow *flow)
+{
+    if (flow->action.write.group_id != ROCKER_GROUP_NONE) {
+        fc->action_set.write.group_id = flow->action.write.group_id;
+    }
+    fc->action_set.write.tun_log_lport = flow->action.write.tun_log_lport;
+}
+
+static void of_dpa_unicast_routing_build_match(struct of_dpa_flow_context *fc,
+                                               struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING;
+    match->value.eth.type = *fc->fields.h_proto;
+    if (fc->fields.ipv4hdr) {
+        match->value.ipv4.addr.dst = fc->fields.ipv4hdr->ip_dst;
+    }
+    if (fc->fields.ipv6_dst_addr) {
+        memcpy(&match->value.ipv6.addr.dst, fc->fields.ipv6_dst_addr,
+               sizeof(match->value.ipv6.addr.dst));
+    }
+    match->value.width = FLOW_KEY_WIDTH(ipv6.addr.dst);
+}
+
+static void of_dpa_unicast_routing_miss(struct of_dpa_flow_context *fc)
+{
+    of_dpa_flow_ig_tbl(fc, ROCKER_OF_DPA_TABLE_ID_ACL_POLICY);
+}
+
+static void of_dpa_unicast_routing_action_write(struct of_dpa_flow_context *fc,
+                                                struct of_dpa_flow *flow)
+{
+    if (flow->action.write.group_id != ROCKER_GROUP_NONE) {
+        fc->action_set.write.group_id = flow->action.write.group_id;
+    }
+}
+
+static void
+of_dpa_multicast_routing_build_match(struct of_dpa_flow_context *fc,
+                                     struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING;
+    match->value.eth.type = *fc->fields.h_proto;
+    match->value.eth.vlan_id = fc->fields.vlanhdr->h_tci;
+    if (fc->fields.ipv4hdr) {
+        match->value.ipv4.addr.src = fc->fields.ipv4hdr->ip_src;
+        match->value.ipv4.addr.dst = fc->fields.ipv4hdr->ip_dst;
+    }
+    if (fc->fields.ipv6_src_addr) {
+        memcpy(&match->value.ipv6.addr.src, fc->fields.ipv6_src_addr,
+               sizeof(match->value.ipv6.addr.src));
+    }
+    if (fc->fields.ipv6_dst_addr) {
+        memcpy(&match->value.ipv6.addr.dst, fc->fields.ipv6_dst_addr,
+               sizeof(match->value.ipv6.addr.dst));
+    }
+    match->value.width = FLOW_KEY_WIDTH(ipv6.addr.dst);
+}
+
+static void of_dpa_multicast_routing_miss(struct of_dpa_flow_context *fc)
+{
+    of_dpa_flow_ig_tbl(fc, ROCKER_OF_DPA_TABLE_ID_ACL_POLICY);
+}
+
+static void
+of_dpa_multicast_routing_action_write(struct of_dpa_flow_context *fc,
+                                      struct of_dpa_flow *flow)
+{
+    if (flow->action.write.group_id != ROCKER_GROUP_NONE) {
+        fc->action_set.write.group_id = flow->action.write.group_id;
+    }
+    fc->action_set.write.vlan_id = flow->action.write.vlan_id;
+}
+
+static void of_dpa_acl_build_match(struct of_dpa_flow_context *fc,
+                                   struct of_dpa_flow_match *match)
+{
+    match->value.tbl_id = ROCKER_OF_DPA_TABLE_ID_ACL_POLICY;
+    match->value.in_pport = fc->in_pport;
+    memcpy(match->value.eth.src.a, fc->fields.ethhdr->h_source,
+           sizeof(match->value.eth.src.a));
+    memcpy(match->value.eth.dst.a, fc->fields.ethhdr->h_dest,
+           sizeof(match->value.eth.dst.a));
+    match->value.eth.type = *fc->fields.h_proto;
+    match->value.eth.vlan_id = fc->fields.vlanhdr->h_tci;
+    match->value.width = FLOW_KEY_WIDTH(eth.type);
+    if (fc->fields.ipv4hdr) {
+        match->value.ip.proto = fc->fields.ipv4hdr->ip_p;
+        match->value.ip.tos = fc->fields.ipv4hdr->ip_tos;
+        match->value.width = FLOW_KEY_WIDTH(ip.tos);
+    } else if (fc->fields.ipv6hdr) {
+        match->value.ip.proto =
+            fc->fields.ipv6hdr->ip6_ctlun.ip6_un1.ip6_un1_nxt;
+        match->value.ip.tos = 0; /* XXX what goes here? */
+        match->value.width = FLOW_KEY_WIDTH(ip.tos);
+    }
+}
+
+static void of_dpa_eg(struct of_dpa_flow_context *fc);
+static void of_dpa_acl_hit(struct of_dpa_flow_context *fc,
+                           struct of_dpa_flow *dst_flow)
+{
+    of_dpa_eg(fc);
+}
+
+static void of_dpa_acl_action_write(struct of_dpa_flow_context *fc,
+                                    struct of_dpa_flow *flow)
+{
+    if (flow->action.write.group_id != ROCKER_GROUP_NONE) {
+        fc->action_set.write.group_id = flow->action.write.group_id;
+    }
+}
+
+static void of_dpa_drop(struct of_dpa_flow_context *fc)
+{
+    /* drop the packet: intentionally empty, as a packet that is
+     * never forwarded is dropped */
+}
+
+static struct of_dpa_group *of_dpa_group_find(struct of_dpa *of_dpa,
+                                              uint32_t group_id)
+{
+    return g_hash_table_lookup(of_dpa->group_tbl, &group_id);
+}
+
+static int of_dpa_group_add(struct of_dpa *of_dpa, struct of_dpa_group *group)
+{
+    g_hash_table_insert(of_dpa->group_tbl, &group->id, group);
+
+    return 0;
+}
+
+#if 0
+static int of_dpa_group_mod(struct of_dpa *of_dpa, struct of_dpa_group *group)
+{
+    struct of_dpa_group *old_group = of_dpa_group_find(of_dpa, group->id);
+
+    if (!old_group) {
+        return -ENOENT;
+    }
+
+    /* XXX */
+
+    return 0;
+}
+#endif
+
+static int of_dpa_group_del(struct of_dpa *of_dpa, struct of_dpa_group *group)
+{
+    g_hash_table_remove(of_dpa->group_tbl, &group->id);
+
+    return 0;
+}
+
+#if 0
+static int of_dpa_group_get_stats(struct of_dpa *of_dpa, uint32_t id)
+{
+    struct of_dpa_group *group = of_dpa_group_find(of_dpa, id);
+
+    if (!group) {
+        return -ENOENT;
+    }
+
+    /* XXX get/return stats */
+
+    return 0;
+}
+#endif
+
+static struct of_dpa_group *of_dpa_group_alloc(uint32_t id)
+{
+    /* g_malloc0() aborts on allocation failure, so no NULL check needed */
+    struct of_dpa_group *group = g_malloc0(sizeof(struct of_dpa_group));
+
+    group->id = id;
+
+    return group;
+}
+
+static void of_dpa_output_l2_interface(struct of_dpa_flow_context *fc,
+                                       struct of_dpa_group *group)
+{
+    if (group->l2_interface.pop_vlan) {
+        of_dpa_flow_pkt_strip_vlan(fc);
+    }
+
+    /* Note: By default, and as per the OpenFlow 1.3.1
+     * specification, a packet cannot be forwarded back
+     * to the IN_PORT from which it came in. An action
+     * bucket that specifies the particular packet's
+     * egress port is not evaluated.
+     */
+
+    if (group->l2_interface.out_pport == 0) {
+        rx_produce(fc->of_dpa->world, fc->in_pport, fc->iov, fc->iovcnt);
+    } else if (group->l2_interface.out_pport != fc->in_pport) {
+        rocker_port_eg(world_rocker(fc->of_dpa->world),
+                       group->l2_interface.out_pport,
+                       fc->iov, fc->iovcnt);
+    }
+}
+
+static void of_dpa_output_l2_rewrite(struct of_dpa_flow_context *fc,
+                                     struct of_dpa_group *group)
+{
+    struct of_dpa_group *l2_group =
+        of_dpa_group_find(fc->of_dpa, group->l2_rewrite.group_id);
+
+    if (!l2_group) {
+        return;
+    }
+
+    of_dpa_flow_pkt_hdr_rewrite(fc, group->l2_rewrite.src_mac.a,
+                                group->l2_rewrite.dst_mac.a,
+                                group->l2_rewrite.vlan_id);
+    of_dpa_output_l2_interface(fc, l2_group);
+}
+
+static void of_dpa_output_l2_flood(struct of_dpa_flow_context *fc,
+                                   struct of_dpa_group *group)
+{
+    struct of_dpa_group *l2_group;
+    int i;
+
+    for (i = 0; i < group->l2_flood.group_count; i++) {
+        of_dpa_flow_pkt_hdr_reset(fc);
+        l2_group = of_dpa_group_find(fc->of_dpa, group->l2_flood.group_ids[i]);
+        if (!l2_group) {
+            continue;
+        }
+        switch (ROCKER_GROUP_TYPE_GET(l2_group->id)) {
+        case ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE:
+            of_dpa_output_l2_interface(fc, l2_group);
+            break;
+        case ROCKER_OF_DPA_GROUP_TYPE_L2_REWRITE:
+            of_dpa_output_l2_rewrite(fc, l2_group);
+            break;
+        }
+    }
+}
+
+static void of_dpa_output_l3_unicast(struct of_dpa_flow_context *fc,
+                                     struct of_dpa_group *group)
+{
+    struct of_dpa_group *l2_group =
+        of_dpa_group_find(fc->of_dpa, group->l3_unicast.group_id);
+
+    if (!l2_group) {
+        return;
+    }
+
+    of_dpa_flow_pkt_hdr_rewrite(fc, group->l3_unicast.src_mac.a,
+                                group->l3_unicast.dst_mac.a,
+                                group->l3_unicast.vlan_id);
+    /* XXX need ttl_check */
+    of_dpa_output_l2_interface(fc, l2_group);
+}
+
+static void of_dpa_eg(struct of_dpa_flow_context *fc)
+{
+    struct of_dpa_flow_action *set = &fc->action_set;
+    struct of_dpa_group *group;
+    uint32_t group_id;
+
+    /* send a copy of pkt to CPU (controller)? */
+
+    if (set->apply.copy_to_cpu) {
+        group_id = ROCKER_GROUP_L2_INTERFACE(set->apply.vlan_id, 0);
+        group = of_dpa_group_find(fc->of_dpa, group_id);
+        if (group) {
+            of_dpa_output_l2_interface(fc, group);
+            of_dpa_flow_pkt_hdr_reset(fc);
+        }
+    }
+
+    /* process group write actions */
+
+    if (!set->write.group_id) {
+        return;
+    }
+
+    group = of_dpa_group_find(fc->of_dpa, set->write.group_id);
+    if (!group) {
+        return;
+    }
+
+    switch (ROCKER_GROUP_TYPE_GET(group->id)) {
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE:
+        of_dpa_output_l2_interface(fc, group);
+        break;
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_REWRITE:
+        of_dpa_output_l2_rewrite(fc, group);
+        break;
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD:
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST:
+        of_dpa_output_l2_flood(fc, group);
+        break;
+    case ROCKER_OF_DPA_GROUP_TYPE_L3_UCAST:
+        of_dpa_output_l3_unicast(fc, group);
+        break;
+    }
+}
+
+static struct of_dpa_flow_tbl_ops {
+    void (*build_match)(struct of_dpa_flow_context *fc,
+                        struct of_dpa_flow_match *match);
+    void (*hit)(struct of_dpa_flow_context *fc, struct of_dpa_flow *flow);
+    void (*miss)(struct of_dpa_flow_context *fc);
+    void (*hit_no_goto)(struct of_dpa_flow_context *fc);
+    void (*action_apply)(struct of_dpa_flow_context *fc,
+                         struct of_dpa_flow *flow);
+    void (*action_write)(struct of_dpa_flow_context *fc,
+                         struct of_dpa_flow *flow);
+} of_dpa_tbl_ops[] = {
+    [ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT] = {
+        .build_match = of_dpa_ig_port_build_match,
+        .miss = of_dpa_ig_port_miss,
+        .hit_no_goto = of_dpa_drop,
+    },
+    [ROCKER_OF_DPA_TABLE_ID_VLAN] = {
+        .build_match = of_dpa_vlan_build_match,
+        .hit_no_goto = of_dpa_drop,
+        .action_apply = of_dpa_vlan_insert,
+    },
+    [ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC] = {
+        .build_match = of_dpa_term_mac_build_match,
+        .miss = of_dpa_term_mac_miss,
+        .hit_no_goto = of_dpa_drop,
+        .action_apply = of_dpa_apply_actions,
+    },
+    [ROCKER_OF_DPA_TABLE_ID_BRIDGING] = {
+        .build_match = of_dpa_bridging_build_match,
+        .hit = of_dpa_bridging_learn,
+        .miss = of_dpa_bridging_miss,
+        .hit_no_goto = of_dpa_drop,
+        .action_apply = of_dpa_apply_actions,
+        .action_write = of_dpa_bridging_action_write,
+    },
+    [ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING] = {
+        .build_match = of_dpa_unicast_routing_build_match,
+        .miss = of_dpa_unicast_routing_miss,
+        .hit_no_goto = of_dpa_drop,
+        .action_write = of_dpa_unicast_routing_action_write,
+    },
+    [ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING] = {
+        .build_match = of_dpa_multicast_routing_build_match,
+        .miss = of_dpa_multicast_routing_miss,
+        .hit_no_goto = of_dpa_drop,
+        .action_write = of_dpa_multicast_routing_action_write,
+    },
+    [ROCKER_OF_DPA_TABLE_ID_ACL_POLICY] = {
+        .build_match = of_dpa_acl_build_match,
+        .hit = of_dpa_acl_hit,
+        .miss = of_dpa_eg,
+        .action_apply = of_dpa_apply_actions,
+        .action_write = of_dpa_acl_action_write,
+    },
+};
+
+static void of_dpa_flow_ig_tbl(struct of_dpa_flow_context *fc, uint32_t tbl_id)
+{
+    struct of_dpa_flow_tbl_ops *ops = &of_dpa_tbl_ops[tbl_id];
+    struct of_dpa_flow_match match = { { 0, }, };
+    struct of_dpa_flow *flow;
+
+    if (ops->build_match) {
+        ops->build_match(fc, &match);
+    } else {
+        return;
+    }
+
+    flow = of_dpa_flow_match(fc->of_dpa, &match);
+    if (!flow) {
+        if (ops->miss) {
+            ops->miss(fc);
+        }
+        return;
+    }
+
+    flow->stats.hits++;
+
+    if (ops->action_apply) {
+        ops->action_apply(fc, flow);
+    }
+
+    if (ops->action_write) {
+        ops->action_write(fc, flow);
+    }
+
+    if (ops->hit) {
+        ops->hit(fc, flow);
+    }
+
+    if (flow->action.goto_tbl) {
+        of_dpa_flow_ig_tbl(fc, flow->action.goto_tbl);
+    } else if (ops->hit_no_goto) {
+        ops->hit_no_goto(fc);
+    }
+
+    /* no goto table and no hit_no_goto handler: drop the packet */
+}
+
+static ssize_t of_dpa_ig(struct world *world, uint32_t pport,
+                         const struct iovec *iov, int iovcnt)
+{
+    struct iovec iov_copy[iovcnt + 2];
+    struct of_dpa_flow_context fc = {
+        .of_dpa = world_private(world),
+        .in_pport = pport,
+        .iov = iov_copy,
+        .iovcnt = iovcnt + 2,
+    };
+
+    of_dpa_flow_pkt_parse(&fc, iov, iovcnt);
+    of_dpa_flow_ig_tbl(&fc, ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT);
+
+    return iov_size(iov, iovcnt);
+}
+
+#define ROCKER_TUNNEL_LPORT 0x00010000
+
+static int of_dpa_cmd_add_ig_port(struct of_dpa_flow *flow,
+                                  struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    bool overlay_tunnel;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]) {
+        return -EINVAL;
+    }
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT;
+    key->width = FLOW_KEY_WIDTH(tbl_id);
+
+    key->in_pport = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT]);
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT_MASK]) {
+        mask->in_pport =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT_MASK]);
+    }
+
+    overlay_tunnel = !!(key->in_pport & ROCKER_TUNNEL_LPORT);
+
+    action->goto_tbl =
+        rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]);
+
+    if (!overlay_tunnel && action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_VLAN) {
+        return -EINVAL;
+    }
+
+    if (overlay_tunnel && action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_BRIDGING) {
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_vlan(struct of_dpa_flow *flow,
+                               struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    uint32_t port;
+    bool untagged;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]) {
+        DPRINTF("Must give in_pport and vlan_id to install VLAN tbl entry\n");
+        return -EINVAL;
+    }
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_VLAN;
+    key->width = FLOW_KEY_WIDTH(eth.vlan_id);
+
+    key->in_pport = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT]);
+    if (!fp_port_from_pport(key->in_pport, &port)) {
+        DPRINTF("in_pport (%d) not a front-panel port\n", key->in_pport);
+        return -EINVAL;
+    }
+    mask->in_pport = 0xffffffff;
+
+    key->eth.vlan_id = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID_MASK]) {
+        mask->eth.vlan_id =
+            rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID_MASK]);
+    }
+
+    /* a zero vlan_id means the packet arrived untagged and needs a new
+     * VLAN assigned; a non-zero vlan_id means we're filtering on a tag
+     */
+    untagged = !key->eth.vlan_id;
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]) {
+        action->goto_tbl =
+            rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]);
+        if (action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC) {
+            DPRINTF("Goto tbl (%d) must be TERM_MAC\n", action->goto_tbl);
+            return -EINVAL;
+        }
+    }
+
+    if (untagged) {
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_NEW_VLAN_ID]) {
+            DPRINTF("Must specify new vlan_id if untagged\n");
+            return -EINVAL;
+        }
+        action->apply.new_vlan_id =
+            rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_NEW_VLAN_ID]);
+        if (ntohs(action->apply.new_vlan_id) < 1 ||
+            ntohs(action->apply.new_vlan_id) > 4095) {
+            DPRINTF("New vlan_id (%d) must be between 1 and 4095\n",
+                    ntohs(action->apply.new_vlan_id));
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_term_mac(struct of_dpa_flow *flow,
+                                   struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    const MACAddr ipv4_mcast = { .a = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x00 } };
+    const MACAddr ipv4_mask =  { .a = { 0xff, 0xff, 0xff, 0x80, 0x00, 0x00 } };
+    const MACAddr ipv6_mcast = { .a = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x00 } };
+    const MACAddr ipv6_mask =  { .a = { 0xff, 0xff, 0x00, 0x00, 0x00, 0x00 } };
+    uint32_t port;
+    bool unicast = false;
+    bool multicast = false;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT_MASK] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC_MASK] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID_MASK]) {
+        return -EINVAL;
+    }
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC;
+    key->width = FLOW_KEY_WIDTH(eth.type);
+
+    key->in_pport = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT]);
+    if (!fp_port_from_pport(key->in_pport, &port)) {
+        return -EINVAL;
+    }
+    mask->in_pport =
+        rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT_MASK]);
+
+    key->eth.type = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE]);
+    if (key->eth.type != htons(0x0800) && key->eth.type != htons(0x86dd)) {
+        return -EINVAL;
+    }
+    mask->eth.type = htons(0xffff);
+
+    memcpy(key->eth.dst.a,
+           rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]),
+           sizeof(key->eth.dst.a));
+    memcpy(mask->eth.dst.a,
+           rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC_MASK]),
+           sizeof(mask->eth.dst.a));
+
+    /* a clear I/G bit in the first octet means a unicast MAC address */
+    if ((key->eth.dst.a[0] & 0x01) == 0x00) {
+        unicast = true;
+    }
+
+    /* only two wildcard rules are acceptable for IPv4 and IPv6 multicast */
+    if (memcmp(key->eth.dst.a, ipv4_mcast.a, sizeof(key->eth.dst.a)) == 0 &&
+        memcmp(mask->eth.dst.a, ipv4_mask.a, sizeof(mask->eth.dst.a)) == 0) {
+        multicast = true;
+    }
+    if (memcmp(key->eth.dst.a, ipv6_mcast.a, sizeof(key->eth.dst.a)) == 0 &&
+        memcmp(mask->eth.dst.a, ipv6_mask.a, sizeof(mask->eth.dst.a)) == 0) {
+        multicast = true;
+    }
+
+    if (!unicast && !multicast) {
+        return -EINVAL;
+    }
+
+    key->eth.vlan_id = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+    mask->eth.vlan_id =
+        rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID_MASK]);
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]) {
+        action->goto_tbl =
+            rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]);
+
+        if (action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING &&
+            action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING) {
+            return -EINVAL;
+        }
+
+        if (unicast &&
+            action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING) {
+            return -EINVAL;
+        }
+
+        if (multicast &&
+            action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING) {
+            return -EINVAL;
+        }
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_COPY_CPU_ACTION]) {
+        action->apply.copy_to_cpu =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_COPY_CPU_ACTION]);
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_bridging(struct of_dpa_flow *flow,
+                                   struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    bool unicast = false;
+    bool dst_mac = false;
+    bool dst_mac_mask = false;
+    enum {
+        BRIDGING_MODE_UNKNOWN,
+        BRIDGING_MODE_VLAN_UCAST,
+        BRIDGING_MODE_VLAN_MCAST,
+        BRIDGING_MODE_VLAN_DFLT,
+        BRIDGING_MODE_TUNNEL_UCAST,
+        BRIDGING_MODE_TUNNEL_MCAST,
+        BRIDGING_MODE_TUNNEL_DFLT,
+    } mode = BRIDGING_MODE_UNKNOWN;
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_BRIDGING;
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]) {
+        key->eth.vlan_id =
+            rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+        mask->eth.vlan_id = 0xffff;
+        key->width = FLOW_KEY_WIDTH(eth.vlan_id);
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_TUNNEL_ID]) {
+        key->tunnel_id =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_TUNNEL_ID]);
+        mask->tunnel_id = 0xffffffff;
+        key->width = FLOW_KEY_WIDTH(tunnel_id);
+    }
+
+    /* VLAN bridging and tunnel bridging are mutually exclusive */
+    if (key->eth.vlan_id && key->tunnel_id) {
+        DPRINTF("can't do VLAN bridging and tunnel bridging at same time\n");
+        return -EINVAL;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]) {
+        memcpy(key->eth.dst.a,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]),
+               sizeof(key->eth.dst.a));
+        key->width = FLOW_KEY_WIDTH(eth.dst);
+        dst_mac = true;
+        unicast = (key->eth.dst.a[0] & 0x01) == 0x00;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC_MASK]) {
+        memcpy(mask->eth.dst.a,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC_MASK]),
+               sizeof(mask->eth.dst.a));
+        key->width = FLOW_KEY_WIDTH(eth.dst);
+        dst_mac_mask = true;
+    } else if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]) {
+        memcpy(mask->eth.dst.a, ff_mac.a, sizeof(mask->eth.dst.a));
+    }
+
+    if (key->eth.vlan_id) {
+        if (dst_mac && !dst_mac_mask) {
+            mode = unicast ? BRIDGING_MODE_VLAN_UCAST :
+                             BRIDGING_MODE_VLAN_MCAST;
+        } else if ((dst_mac && dst_mac_mask) || !dst_mac) {
+            mode = BRIDGING_MODE_VLAN_DFLT;
+        }
+    } else if (key->tunnel_id) {
+        if (dst_mac && !dst_mac_mask) {
+            mode = unicast ? BRIDGING_MODE_TUNNEL_UCAST :
+                             BRIDGING_MODE_TUNNEL_MCAST;
+        } else if ((dst_mac && dst_mac_mask) || !dst_mac) {
+            mode = BRIDGING_MODE_TUNNEL_DFLT;
+        }
+    }
+
+    if (mode == BRIDGING_MODE_UNKNOWN) {
+        DPRINTF("Unknown bridging mode\n");
+        return -EINVAL;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]) {
+        action->goto_tbl =
+            rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]);
+        if (action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_ACL_POLICY) {
+            DPRINTF("Bridging goto tbl must be ACL policy\n");
+            return -EINVAL;
+        }
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]) {
+        action->write.group_id =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]);
+        switch (mode) {
+        case BRIDGING_MODE_VLAN_UCAST:
+            if (ROCKER_GROUP_TYPE_GET(action->write.group_id) !=
+                ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE) {
+                DPRINTF("Bridging mode vlan ucast needs L2 "
+                        "interface group (0x%08x)\n",
+                        action->write.group_id);
+                return -EINVAL;
+            }
+            break;
+        case BRIDGING_MODE_VLAN_MCAST:
+            if (ROCKER_GROUP_TYPE_GET(action->write.group_id) !=
+                ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST) {
+                DPRINTF("Bridging mode vlan mcast needs L2 "
+                        "mcast group (0x%08x)\n",
+                        action->write.group_id);
+                return -EINVAL;
+            }
+            break;
+        case BRIDGING_MODE_VLAN_DFLT:
+            if (ROCKER_GROUP_TYPE_GET(action->write.group_id) !=
+                ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD) {
+                DPRINTF("Bridging mode vlan dflt needs L2 "
+                        "flood group (0x%08x)\n",
+                        action->write.group_id);
+                return -EINVAL;
+            }
+            break;
+        case BRIDGING_MODE_TUNNEL_MCAST:
+            if (ROCKER_GROUP_TYPE_GET(action->write.group_id) !=
+                ROCKER_OF_DPA_GROUP_TYPE_L2_OVERLAY) {
+                DPRINTF("Bridging mode tunnel mcast needs L2 "
+                        "overlay group (0x%08x)\n",
+                        action->write.group_id);
+                return -EINVAL;
+            }
+            break;
+        case BRIDGING_MODE_TUNNEL_DFLT:
+            if (ROCKER_GROUP_TYPE_GET(action->write.group_id) !=
+                ROCKER_OF_DPA_GROUP_TYPE_L2_OVERLAY) {
+                DPRINTF("Bridging mode tunnel dflt needs L2 "
+                        "overlay group (0x%08x)\n",
+                        action->write.group_id);
+                return -EINVAL;
+            }
+            break;
+        default:
+            return -EINVAL;
+        }
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_TUNNEL_LPORT]) {
+        action->write.tun_log_lport =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_TUNNEL_LPORT]);
+        if (mode != BRIDGING_MODE_TUNNEL_UCAST) {
+            DPRINTF("Have tunnel logical port but not "
+                    "in bridging tunnel mode\n");
+            return -EINVAL;
+        }
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_COPY_CPU_ACTION]) {
+        action->apply.copy_to_cpu =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_COPY_CPU_ACTION]);
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_unicast_routing(struct of_dpa_flow *flow,
+                                          struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    enum {
+        UNICAST_ROUTING_MODE_UNKNOWN,
+        UNICAST_ROUTING_MODE_IPV4,
+        UNICAST_ROUTING_MODE_IPV6,
+    } mode = UNICAST_ROUTING_MODE_UNKNOWN;
+    uint8_t type;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE]) {
+        return -EINVAL;
+    }
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING;
+    key->width = FLOW_KEY_WIDTH(ipv6.addr.dst);
+
+    key->eth.type = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE]);
+    switch (ntohs(key->eth.type)) {
+    case 0x0800:
+        mode = UNICAST_ROUTING_MODE_IPV4;
+        break;
+    case 0x86dd:
+        mode = UNICAST_ROUTING_MODE_IPV6;
+        break;
+    default:
+        return -EINVAL;
+    }
+    mask->eth.type = htons(0xffff);
+
+    switch (mode) {
+    case UNICAST_ROUTING_MODE_IPV4:
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_DST_IP]) {
+            return -EINVAL;
+        }
+        key->ipv4.addr.dst =
+            rocker_tlv_get_u32(flow_tlvs[ROCKER_TLV_OF_DPA_DST_IP]);
+        if (ipv4_addr_is_multicast(key->ipv4.addr.dst)) {
+            return -EINVAL;
+        }
+        flow->lpm = of_dpa_mask2prefix(htonl(0xffffffff));
+        if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_IP_MASK]) {
+            mask->ipv4.addr.dst =
+                rocker_tlv_get_u32(flow_tlvs[ROCKER_TLV_OF_DPA_DST_IP_MASK]);
+            flow->lpm = of_dpa_mask2prefix(mask->ipv4.addr.dst);
+        }
+        break;
+    case UNICAST_ROUTING_MODE_IPV6:
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_DST_IPV6]) {
+            return -EINVAL;
+        }
+        memcpy(&key->ipv6.addr.dst,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_IPV6]),
+               sizeof(key->ipv6.addr.dst));
+        if (ipv6_addr_is_multicast(&key->ipv6.addr.dst)) {
+            return -EINVAL;
+        }
+        if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_IPV6_MASK]) {
+            memcpy(&mask->ipv6.addr.dst,
+                   rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_IPV6_MASK]),
+                   sizeof(mask->ipv6.addr.dst));
+        }
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]) {
+        action->goto_tbl =
+            rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]);
+        if (action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_ACL_POLICY) {
+            return -EINVAL;
+        }
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]) {
+        action->write.group_id =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]);
+        type = ROCKER_GROUP_TYPE_GET(action->write.group_id);
+        if (type != ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE &&
+            type != ROCKER_OF_DPA_GROUP_TYPE_L3_UCAST &&
+            type != ROCKER_OF_DPA_GROUP_TYPE_L3_ECMP) {
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_multicast_routing(struct of_dpa_flow *flow,
+                                            struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    enum {
+        MULTICAST_ROUTING_MODE_UNKNOWN,
+        MULTICAST_ROUTING_MODE_IPV4,
+        MULTICAST_ROUTING_MODE_IPV6,
+    } mode = MULTICAST_ROUTING_MODE_UNKNOWN;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]) {
+        return -EINVAL;
+    }
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING;
+    key->width = FLOW_KEY_WIDTH(ipv6.addr.dst);
+
+    key->eth.type = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE]);
+    switch (ntohs(key->eth.type)) {
+    case 0x0800: /* IPv4 */
+        mode = MULTICAST_ROUTING_MODE_IPV4;
+        break;
+    case 0x86dd: /* IPv6 */
+        mode = MULTICAST_ROUTING_MODE_IPV6;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    key->eth.vlan_id = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+
+    switch (mode) {
+    case MULTICAST_ROUTING_MODE_IPV4:
+
+        if (flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IP]) {
+            key->ipv4.addr.src =
+                rocker_tlv_get_u32(flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IP]);
+        }
+
+        if (flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IP_MASK]) {
+            mask->ipv4.addr.src =
+                rocker_tlv_get_u32(flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IP_MASK]);
+        }
+
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IP]) {
+            if (mask->ipv4.addr.src != 0) {
+                return -EINVAL;
+            }
+        }
+
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_DST_IP]) {
+            return -EINVAL;
+        }
+
+        key->ipv4.addr.dst =
+            rocker_tlv_get_u32(flow_tlvs[ROCKER_TLV_OF_DPA_DST_IP]);
+        if (!ipv4_addr_is_multicast(key->ipv4.addr.dst)) {
+            return -EINVAL;
+        }
+
+        break;
+
+    case MULTICAST_ROUTING_MODE_IPV6:
+
+        if (flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IPV6]) {
+            memcpy(&key->ipv6.addr.src,
+                   rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IPV6]),
+                   sizeof(key->ipv6.addr.src));
+        }
+
+        if (flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IPV6_MASK]) {
+            memcpy(&mask->ipv6.addr.src,
+                   rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IPV6_MASK]),
+                   sizeof(mask->ipv6.addr.src));
+        }
+
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_SRC_IPV6]) {
+            /* a src mask without a src addr is invalid, so reject a
+             * mask with any non-zero word
+             */
+            if (mask->ipv6.addr.src.addr32[0] != 0 ||
+                mask->ipv6.addr.src.addr32[1] != 0 ||
+                mask->ipv6.addr.src.addr32[2] != 0 ||
+                mask->ipv6.addr.src.addr32[3] != 0) {
+                return -EINVAL;
+            }
+        }
+
+        if (!flow_tlvs[ROCKER_TLV_OF_DPA_DST_IPV6]) {
+            return -EINVAL;
+        }
+
+        memcpy(&key->ipv6.addr.dst,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_IPV6]),
+               sizeof(key->ipv6.addr.dst));
+        if (!ipv6_addr_is_multicast(&key->ipv6.addr.dst)) {
+            return -EINVAL;
+        }
+
+        break;
+
+    default:
+        return -EINVAL;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]) {
+        action->goto_tbl =
+            rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_GOTO_TABLE_ID]);
+        if (action->goto_tbl != ROCKER_OF_DPA_TABLE_ID_ACL_POLICY) {
+            return -EINVAL;
+        }
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]) {
+        action->write.group_id =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]);
+        if (ROCKER_GROUP_TYPE_GET(action->write.group_id) !=
+            ROCKER_OF_DPA_GROUP_TYPE_L3_MCAST) {
+            return -EINVAL;
+        }
+        action->write.vlan_id = key->eth.vlan_id;
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_acl_ip(struct of_dpa_flow_key *key,
+                                 struct of_dpa_flow_key *mask,
+                                 struct rocker_tlv **flow_tlvs)
+{
+    key->width = FLOW_KEY_WIDTH(ip.tos);
+
+    key->ip.proto = 0;
+    key->ip.tos = 0;
+    mask->ip.proto = 0;
+    mask->ip.tos = 0;
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IP_PROTO]) {
+        key->ip.proto =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_IP_PROTO]);
+    }
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IP_PROTO_MASK]) {
+        mask->ip.proto =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_IP_PROTO_MASK]);
+    }
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IP_DSCP]) {
+        key->ip.tos =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_IP_DSCP]);
+    }
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IP_DSCP_MASK]) {
+        mask->ip.tos =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_IP_DSCP_MASK]);
+    }
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IP_ECN]) {
+        key->ip.tos |=
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_IP_ECN]) << 6;
+    }
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IP_ECN_MASK]) {
+        mask->ip.tos |=
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_IP_ECN_MASK]) << 6;
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_acl(struct of_dpa_flow *flow,
+                              struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_action *action = &flow->action;
+    enum {
+        ACL_MODE_UNKNOWN,
+        ACL_MODE_IPV4_VLAN,
+        ACL_MODE_IPV6_VLAN,
+        ACL_MODE_IPV4_TENANT,
+        ACL_MODE_IPV6_TENANT,
+        ACL_MODE_NON_IP_VLAN,
+        ACL_MODE_NON_IP_TENANT,
+        ACL_MODE_ANY_VLAN,
+        ACL_MODE_ANY_TENANT,
+    } mode = ACL_MODE_UNKNOWN;
+    int err = 0;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE]) {
+        return -EINVAL;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID] &&
+        flow_tlvs[ROCKER_TLV_OF_DPA_TUNNEL_ID]) {
+        return -EINVAL;
+    }
+
+    key->tbl_id = ROCKER_OF_DPA_TABLE_ID_ACL_POLICY;
+    key->width = FLOW_KEY_WIDTH(eth.type);
+
+    key->in_pport = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT]);
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT_MASK]) {
+        mask->in_pport =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IN_PPORT_MASK]);
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC]) {
+        memcpy(key->eth.src.a,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC]),
+               sizeof(key->eth.src.a));
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC_MASK]) {
+        memcpy(mask->eth.src.a,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC_MASK]),
+               sizeof(mask->eth.src.a));
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]) {
+        memcpy(key->eth.dst.a,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]),
+               sizeof(key->eth.dst.a));
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC_MASK]) {
+        memcpy(mask->eth.dst.a,
+               rocker_tlv_data(flow_tlvs[ROCKER_TLV_OF_DPA_DST_MAC_MASK]),
+               sizeof(mask->eth.dst.a));
+    }
+
+    key->eth.type = rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_ETHERTYPE]);
+    if (key->eth.type) {
+        mask->eth.type = 0xffff;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]) {
+        key->eth.vlan_id =
+            rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID_MASK]) {
+        mask->eth.vlan_id =
+            rocker_tlv_get_u16(flow_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID_MASK]);
+    }
+
+    switch (ntohs(key->eth.type)) {
+    case 0x0000: /* wildcard ethertype */
+        mode = (key->eth.vlan_id) ? ACL_MODE_ANY_VLAN : ACL_MODE_ANY_TENANT;
+        break;
+    case 0x0800: /* IPv4 */
+        mode = (key->eth.vlan_id) ? ACL_MODE_IPV4_VLAN : ACL_MODE_IPV4_TENANT;
+        break;
+    case 0x86dd: /* IPv6 */
+        mode = (key->eth.vlan_id) ? ACL_MODE_IPV6_VLAN : ACL_MODE_IPV6_TENANT;
+        break;
+    default:
+        mode = (key->eth.vlan_id) ? ACL_MODE_NON_IP_VLAN :
+                                    ACL_MODE_NON_IP_TENANT;
+        break;
+    }
+
+    /* XXX only supporting VLAN modes for now */
+    if (mode != ACL_MODE_IPV4_VLAN &&
+        mode != ACL_MODE_IPV6_VLAN &&
+        mode != ACL_MODE_NON_IP_VLAN &&
+        mode != ACL_MODE_ANY_VLAN) {
+        return -EINVAL;
+    }
+
+    switch (ntohs(key->eth.type)) {
+    case 0x0800: /* IPv4 */
+    case 0x86dd: /* IPv6 */
+        err = of_dpa_cmd_add_acl_ip(key, mask, flow_tlvs);
+        break;
+    }
+
+    if (err) {
+        return err;
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]) {
+        action->write.group_id =
+            rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]);
+    }
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_COPY_CPU_ACTION]) {
+        action->apply.copy_to_cpu =
+            rocker_tlv_get_u8(flow_tlvs[ROCKER_TLV_OF_DPA_COPY_CPU_ACTION]);
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_flow_add(struct of_dpa *of_dpa, uint64_t cookie,
+                               struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow *flow = of_dpa_flow_find(of_dpa, cookie);
+    enum rocker_of_dpa_table_id tbl;
+    uint32_t priority;
+    uint32_t hardtime;
+    uint32_t idletime = 0;
+    int err = 0;
+
+    if (flow) {
+        return -EEXIST;
+    }
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_TABLE_ID] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_PRIORITY] ||
+        !flow_tlvs[ROCKER_TLV_OF_DPA_HARDTIME]) {
+        return -EINVAL;
+    }
+
+    tbl = rocker_tlv_get_le16(flow_tlvs[ROCKER_TLV_OF_DPA_TABLE_ID]);
+    priority = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_PRIORITY]);
+    hardtime = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_HARDTIME]);
+
+    if (flow_tlvs[ROCKER_TLV_OF_DPA_IDLETIME]) {
+        if (tbl == ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT ||
+            tbl == ROCKER_OF_DPA_TABLE_ID_VLAN ||
+            tbl == ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC) {
+            return -EINVAL;
+        }
+        idletime = rocker_tlv_get_le32(flow_tlvs[ROCKER_TLV_OF_DPA_IDLETIME]);
+    }
+
+    flow = of_dpa_flow_alloc(cookie, priority, hardtime, idletime);
+    if (!flow) {
+        return -ENOMEM;
+    }
+
+    switch (tbl) {
+    case ROCKER_OF_DPA_TABLE_ID_INGRESS_PORT:
+        err = of_dpa_cmd_add_ig_port(flow, flow_tlvs);
+        break;
+    case ROCKER_OF_DPA_TABLE_ID_VLAN:
+        err = of_dpa_cmd_add_vlan(flow, flow_tlvs);
+        break;
+    case ROCKER_OF_DPA_TABLE_ID_TERMINATION_MAC:
+        err = of_dpa_cmd_add_term_mac(flow, flow_tlvs);
+        break;
+    case ROCKER_OF_DPA_TABLE_ID_BRIDGING:
+        err = of_dpa_cmd_add_bridging(flow, flow_tlvs);
+        break;
+    case ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING:
+        err = of_dpa_cmd_add_unicast_routing(flow, flow_tlvs);
+        break;
+    case ROCKER_OF_DPA_TABLE_ID_MULTICAST_ROUTING:
+        err = of_dpa_cmd_add_multicast_routing(flow, flow_tlvs);
+        break;
+    case ROCKER_OF_DPA_TABLE_ID_ACL_POLICY:
+        err = of_dpa_cmd_add_acl(flow, flow_tlvs);
+        break;
+    }
+
+    if (err) {
+        goto err_cmd_add;
+    }
+
+    err = of_dpa_flow_add(of_dpa, flow);
+    if (err) {
+        goto err_cmd_add;
+    }
+
+    return 0;
+
+err_cmd_add:
+    g_free(flow);
+    return err;
+}
+
+static int of_dpa_cmd_flow_mod(struct of_dpa *of_dpa, uint64_t cookie,
+                               struct rocker_tlv **flow_tlvs)
+{
+    struct of_dpa_flow *flow = of_dpa_flow_find(of_dpa, cookie);
+
+    if (!flow) {
+        return -ENOENT;
+    }
+
+    return of_dpa_flow_mod(flow);
+}
+
+static int of_dpa_cmd_flow_del(struct of_dpa *of_dpa, uint64_t cookie)
+{
+    struct of_dpa_flow *flow = of_dpa_flow_find(of_dpa, cookie);
+
+    if (!flow) {
+        return -ENOENT;
+    }
+
+    of_dpa_flow_del(of_dpa, flow);
+
+    return 0;
+}
+
+static int of_dpa_cmd_flow_get_stats(struct of_dpa *of_dpa, uint64_t cookie,
+                                     struct desc_info *info, char *buf)
+{
+    struct of_dpa_flow *flow = of_dpa_flow_find(of_dpa, cookie);
+    size_t tlv_size;
+    int64_t now = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) / 1000;
+    int pos;
+
+    if (!flow) {
+        return -ENOENT;
+    }
+
+    tlv_size = rocker_tlv_total_size(sizeof(uint32_t)) +  /* duration */
+               rocker_tlv_total_size(sizeof(uint64_t)) +  /* rx_pkts */
+               rocker_tlv_total_size(sizeof(uint64_t));   /* tx_pkts */
+
+    if (tlv_size > desc_buf_size(info)) {
+        return -EMSGSIZE;
+    }
+
+    pos = 0;
+    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_OF_DPA_FLOW_STAT_DURATION,
+                        (int32_t)(now - flow->stats.install_time));
+    rocker_tlv_put_le64(buf, &pos, ROCKER_TLV_OF_DPA_FLOW_STAT_RX_PKTS,
+                        flow->stats.rx_pkts);
+    rocker_tlv_put_le64(buf, &pos, ROCKER_TLV_OF_DPA_FLOW_STAT_TX_PKTS,
+                        flow->stats.tx_pkts);
+
+    return desc_set_buf(info, tlv_size);
+}
+
+static int of_dpa_flow_cmd(struct of_dpa *of_dpa, struct desc_info *info,
+                           char *buf, uint16_t cmd,
+                           struct rocker_tlv **flow_tlvs)
+{
+    uint64_t cookie;
+
+    if (!flow_tlvs[ROCKER_TLV_OF_DPA_COOKIE]) {
+        return -EINVAL;
+    }
+
+    cookie = rocker_tlv_get_le64(flow_tlvs[ROCKER_TLV_OF_DPA_COOKIE]);
+
+    switch (cmd) {
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_ADD:
+        return of_dpa_cmd_flow_add(of_dpa, cookie, flow_tlvs);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_MOD:
+        return of_dpa_cmd_flow_mod(of_dpa, cookie, flow_tlvs);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_DEL:
+        return of_dpa_cmd_flow_del(of_dpa, cookie);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_GET_STATS:
+        return of_dpa_cmd_flow_get_stats(of_dpa, cookie, info, buf);
+    }
+
+    return -ENOTSUP;
+}
+
+static int of_dpa_cmd_add_l2_interface(struct of_dpa_group *group,
+                                       struct rocker_tlv **group_tlvs)
+{
+    if (!group_tlvs[ROCKER_TLV_OF_DPA_OUT_PPORT] ||
+        !group_tlvs[ROCKER_TLV_OF_DPA_POP_VLAN]) {
+        return -EINVAL;
+    }
+
+    group->l2_interface.out_pport =
+        rocker_tlv_get_le32(group_tlvs[ROCKER_TLV_OF_DPA_OUT_PPORT]);
+    group->l2_interface.pop_vlan =
+        rocker_tlv_get_u8(group_tlvs[ROCKER_TLV_OF_DPA_POP_VLAN]);
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_l2_rewrite(struct of_dpa *of_dpa,
+                                     struct of_dpa_group *group,
+                                     struct rocker_tlv **group_tlvs)
+{
+    struct of_dpa_group *l2_interface_group;
+
+    if (!group_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID_LOWER]) {
+        return -EINVAL;
+    }
+
+    group->l2_rewrite.group_id =
+        rocker_tlv_get_le32(group_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID_LOWER]);
+
+    l2_interface_group = of_dpa_group_find(of_dpa, group->l2_rewrite.group_id);
+    if (!l2_interface_group ||
+        ROCKER_GROUP_TYPE_GET(l2_interface_group->id) !=
+                              ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE) {
+        DPRINTF("l2 rewrite group needs a valid l2 interface group\n");
+        return -EINVAL;
+    }
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC]) {
+        memcpy(group->l2_rewrite.src_mac.a,
+               rocker_tlv_data(group_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC]),
+               sizeof(group->l2_rewrite.src_mac.a));
+    }
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]) {
+        memcpy(group->l2_rewrite.dst_mac.a,
+               rocker_tlv_data(group_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]),
+               sizeof(group->l2_rewrite.dst_mac.a));
+    }
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]) {
+        group->l2_rewrite.vlan_id =
+            rocker_tlv_get_u16(group_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+        if (ROCKER_GROUP_VLAN_GET(l2_interface_group->id) !=
+            (ntohs(group->l2_rewrite.vlan_id) & VLAN_VID_MASK)) {
+            DPRINTF("Set VLAN ID must be same as L2 interface group\n");
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_add_l2_flood(struct of_dpa *of_dpa,
+                                   struct of_dpa_group *group,
+                                   struct rocker_tlv **group_tlvs)
+{
+    struct of_dpa_group *l2_group;
+    struct rocker_tlv **tlvs;
+    int err = 0;
+    int i;
+
+    if (!group_tlvs[ROCKER_TLV_OF_DPA_GROUP_COUNT] ||
+        !group_tlvs[ROCKER_TLV_OF_DPA_GROUP_IDS]) {
+        return -EINVAL;
+    }
+
+    group->l2_flood.group_count =
+        rocker_tlv_get_le16(group_tlvs[ROCKER_TLV_OF_DPA_GROUP_COUNT]);
+
+    tlvs = g_malloc0((group->l2_flood.group_count + 1) *
+                     sizeof(struct rocker_tlv *));
+    if (!tlvs) {
+        return -ENOMEM;
+    }
+
+    g_free(group->l2_flood.group_ids);
+    group->l2_flood.group_ids =
+        g_malloc0(group->l2_flood.group_count * sizeof(uint32_t));
+    if (!group->l2_flood.group_ids) {
+        err = -ENOMEM;
+        goto err_out;
+    }
+
+    rocker_tlv_parse_nested(tlvs, group->l2_flood.group_count,
+                            group_tlvs[ROCKER_TLV_OF_DPA_GROUP_IDS]);
+
+    for (i = 0; i < group->l2_flood.group_count; i++) {
+        group->l2_flood.group_ids[i] = rocker_tlv_get_le32(tlvs[i + 1]);
+    }
+
+    /* All of the L2 interface groups referenced by the L2 flood
+     * group must have the same VLAN
+     */
+
+    for (i = 0; i < group->l2_flood.group_count; i++) {
+        l2_group = of_dpa_group_find(of_dpa, group->l2_flood.group_ids[i]);
+        if (!l2_group) {
+            continue;
+        }
+        if ((ROCKER_GROUP_TYPE_GET(l2_group->id) ==
+             ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE) &&
+            (ROCKER_GROUP_VLAN_GET(l2_group->id) !=
+             ROCKER_GROUP_VLAN_GET(group->id))) {
+            DPRINTF("l2 interface group 0x%08x VLAN doesn't match l2 "
+                    "flood group 0x%08x\n",
+                    group->l2_flood.group_ids[i], group->id);
+            err = -EINVAL;
+            goto err_out;
+        }
+    }
+
+    g_free(tlvs);
+    return 0;
+
+err_out:
+    group->l2_flood.group_count = 0;
+    g_free(group->l2_flood.group_ids);
+    g_free(tlvs);
+
+    return err;
+}
+
+static int of_dpa_cmd_add_l3_unicast(struct of_dpa_group *group,
+                                     struct rocker_tlv **group_tlvs)
+{
+    if (!group_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID_LOWER]) {
+        return -EINVAL;
+    }
+
+    group->l3_unicast.group_id =
+        rocker_tlv_get_le32(group_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID_LOWER]);
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC]) {
+        memcpy(group->l3_unicast.src_mac.a,
+               rocker_tlv_data(group_tlvs[ROCKER_TLV_OF_DPA_SRC_MAC]),
+               sizeof(group->l3_unicast.src_mac.a));
+    }
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]) {
+        memcpy(group->l3_unicast.dst_mac.a,
+               rocker_tlv_data(group_tlvs[ROCKER_TLV_OF_DPA_DST_MAC]),
+               sizeof(group->l3_unicast.dst_mac.a));
+    }
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]) {
+        group->l3_unicast.vlan_id =
+            rocker_tlv_get_u16(group_tlvs[ROCKER_TLV_OF_DPA_VLAN_ID]);
+    }
+
+    if (group_tlvs[ROCKER_TLV_OF_DPA_TTL_CHECK]) {
+        group->l3_unicast.ttl_check =
+            rocker_tlv_get_u8(group_tlvs[ROCKER_TLV_OF_DPA_TTL_CHECK]);
+    }
+
+    return 0;
+}
+
+static int of_dpa_cmd_group_do(struct of_dpa *of_dpa, uint32_t group_id,
+                               struct of_dpa_group *group,
+                               struct rocker_tlv **group_tlvs)
+{
+    uint8_t type = ROCKER_GROUP_TYPE_GET(group_id);
+
+    switch (type) {
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE:
+        return of_dpa_cmd_add_l2_interface(group, group_tlvs);
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_REWRITE:
+        return of_dpa_cmd_add_l2_rewrite(of_dpa, group, group_tlvs);
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD:
+    /* Treat an L2 multicast group the same as an L2 flood group */
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST:
+        return of_dpa_cmd_add_l2_flood(of_dpa, group, group_tlvs);
+    case ROCKER_OF_DPA_GROUP_TYPE_L3_UCAST:
+        return of_dpa_cmd_add_l3_unicast(group, group_tlvs);
+    }
+
+    return -ENOTSUP;
+}
+
+static int of_dpa_cmd_group_add(struct of_dpa *of_dpa, uint32_t group_id,
+                                struct rocker_tlv **group_tlvs)
+{
+    struct of_dpa_group *group = of_dpa_group_find(of_dpa, group_id);
+    int err = 0;
+
+    if (group) {
+        return -EEXIST;
+    }
+
+    group = of_dpa_group_alloc(group_id);
+    if (!group) {
+        return -ENOMEM;
+    }
+
+    err = of_dpa_cmd_group_do(of_dpa, group_id, group, group_tlvs);
+    if (err) {
+        goto err_cmd_add;
+    }
+
+    err = of_dpa_group_add(of_dpa, group);
+    if (err) {
+        goto err_cmd_add;
+    }
+
+    return 0;
+
+err_cmd_add:
+    g_free(group);
+    return err;
+}
+
+static int of_dpa_cmd_group_mod(struct of_dpa *of_dpa, uint32_t group_id,
+                                struct rocker_tlv **group_tlvs)
+{
+    struct of_dpa_group *group = of_dpa_group_find(of_dpa, group_id);
+
+    if (!group) {
+        return -ENOENT;
+    }
+
+    return of_dpa_cmd_group_do(of_dpa, group_id, group, group_tlvs);
+}
+
+static int of_dpa_cmd_group_del(struct of_dpa *of_dpa, uint32_t group_id)
+{
+    struct of_dpa_group *group = of_dpa_group_find(of_dpa, group_id);
+
+    if (!group) {
+        return -ENOENT;
+    }
+
+    return of_dpa_group_del(of_dpa, group);
+}
+
+static int of_dpa_cmd_group_get_stats(struct of_dpa *of_dpa,
+                                      uint32_t group_id,
+                                      struct desc_info *info, char *buf)
+{
+    return -ENOTSUP;
+}
+
+static int of_dpa_group_cmd(struct of_dpa *of_dpa, struct desc_info *info,
+                            char *buf, uint16_t cmd,
+                            struct rocker_tlv **group_tlvs)
+{
+    uint32_t group_id;
+
+    if (!group_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]) {
+        return -EINVAL;
+    }
+
+    group_id = rocker_tlv_get_le32(group_tlvs[ROCKER_TLV_OF_DPA_GROUP_ID]);
+
+    switch (cmd) {
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_ADD:
+        return of_dpa_cmd_group_add(of_dpa, group_id, group_tlvs);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_MOD:
+        return of_dpa_cmd_group_mod(of_dpa, group_id, group_tlvs);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_DEL:
+        return of_dpa_cmd_group_del(of_dpa, group_id);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_GET_STATS:
+        return of_dpa_cmd_group_get_stats(of_dpa, group_id, info, buf);
+    }
+
+    return -ENOTSUP;
+}
+
+static int of_dpa_cmd(struct world *world, struct desc_info *info,
+                      char *buf, uint16_t cmd,
+                      struct rocker_tlv *cmd_info_tlv)
+{
+    struct of_dpa *of_dpa = world_private(world);
+    struct rocker_tlv *tlvs[ROCKER_TLV_OF_DPA_MAX + 1];
+
+    rocker_tlv_parse_nested(tlvs, ROCKER_TLV_OF_DPA_MAX, cmd_info_tlv);
+
+    switch (cmd) {
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_ADD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_MOD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_DEL:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_FLOW_GET_STATS:
+        return of_dpa_flow_cmd(of_dpa, info, buf, cmd, tlvs);
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_ADD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_MOD:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_DEL:
+    case ROCKER_TLV_CMD_TYPE_OF_DPA_GROUP_GET_STATS:
+        return of_dpa_group_cmd(of_dpa, info, buf, cmd, tlvs);
+    }
+
+    return -ENOTSUP;
+}
+
+static int of_dpa_init(struct world *world)
+{
+    struct of_dpa *of_dpa = world_private(world);
+
+    of_dpa->world = world;
+
+    of_dpa->flow_tbl = g_hash_table_new_full(g_int64_hash, g_int64_equal,
+                                             NULL, g_free);
+    if (!of_dpa->flow_tbl) {
+        return -ENOMEM;
+    }
+
+    of_dpa->group_tbl = g_hash_table_new_full(g_int_hash, g_int_equal,
+                                              NULL, g_free);
+    if (!of_dpa->group_tbl) {
+        goto err_group_tbl;
+    }
+
+    /* XXX hardcode some artificial table max values */
+    of_dpa->flow_tbl_max_size = 100;
+    of_dpa->group_tbl_max_size = 100;
+
+    return 0;
+
+err_group_tbl:
+    g_hash_table_destroy(of_dpa->flow_tbl);
+    return -ENOMEM;
+}
+
+static void of_dpa_uninit(struct world *world)
+{
+    struct of_dpa *of_dpa = world_private(world);
+
+    g_hash_table_destroy(of_dpa->group_tbl);
+    g_hash_table_destroy(of_dpa->flow_tbl);
+}
+
+static struct world_ops of_dpa_ops = {
+    .init = of_dpa_init,
+    .uninit = of_dpa_uninit,
+    .ig = of_dpa_ig,
+    .cmd = of_dpa_cmd,
+};
+
+struct world *of_dpa_world_alloc(struct rocker *r)
+{
+    return world_alloc(r, sizeof(struct of_dpa),
+                       ROCKER_WORLD_TYPE_OF_DPA, &of_dpa_ops);
+}
diff --git a/hw/net/rocker/rocker_of_dpa.h b/hw/net/rocker/rocker_of_dpa.h
new file mode 100644
index 0000000..1b7ef3f
--- /dev/null
+++ b/hw/net/rocker/rocker_of_dpa.h
@@ -0,0 +1,25 @@
+/*
+ * QEMU rocker switch emulation - OF-DPA flow processing support
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ROCKER_OF_DPA_H_
+#define _ROCKER_OF_DPA_H_
+
+struct rocker;
+struct world;
+
+struct world *of_dpa_world_alloc(struct rocker *r);
+
+#endif /* _ROCKER_OF_DPA_H_ */
diff --git a/hw/net/rocker/rocker_tlv.h b/hw/net/rocker/rocker_tlv.h
new file mode 100644
index 0000000..ca6aa61
--- /dev/null
+++ b/hw/net/rocker/rocker_tlv.h
@@ -0,0 +1,247 @@
+/*
+ * QEMU rocker switch emulation - TLV parsing and composing
+ *
+ * Copyright (c) 2014 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ROCKER_TLV_H_
+#define _ROCKER_TLV_H_
+
+#define ROCKER_TLV_ALIGNTO 8U
+#define ROCKER_TLV_ALIGN(len) \
+    (((len) + ROCKER_TLV_ALIGNTO - 1) & ~(ROCKER_TLV_ALIGNTO - 1))
+#define ROCKER_TLV_HDRLEN ROCKER_TLV_ALIGN(sizeof(struct rocker_tlv))
+
+/*
+ *  <------- ROCKER_TLV_HDRLEN -------> <--- ROCKER_TLV_ALIGN(payload) --->
+ * +-----------------------------+- - -+- - - - - - - - - - - - - - -+- - -+
+ * |             Header          | Pad |           Payload           | Pad |
+ * |     (struct rocker_tlv)     | ing |                             | ing |
+ * +-----------------------------+- - -+- - - - - - - - - - - - - - -+- - -+
+ *  <--------------------------- tlv->len -------------------------->
+ */
+
+static inline struct rocker_tlv *rocker_tlv_next(const struct rocker_tlv *tlv,
+                                                 int *remaining)
+{
+    int totlen = ROCKER_TLV_ALIGN(le16_to_cpu(tlv->len));
+
+    *remaining -= totlen;
+    return (struct rocker_tlv *) ((char *) tlv + totlen);
+}
+
+static inline int rocker_tlv_ok(const struct rocker_tlv *tlv, int remaining)
+{
+    return remaining >= (int) ROCKER_TLV_HDRLEN &&
+           le16_to_cpu(tlv->len) >= ROCKER_TLV_HDRLEN &&
+           le16_to_cpu(tlv->len) <= remaining;
+}
+
+#define rocker_tlv_for_each(pos, head, len, rem) \
+    for (pos = head, rem = len; \
+         rocker_tlv_ok(pos, rem); \
+         pos = rocker_tlv_next(pos, &(rem)))
+
+#define rocker_tlv_for_each_nested(pos, tlv, rem) \
+        rocker_tlv_for_each(pos, rocker_tlv_data(tlv), rocker_tlv_len(tlv), rem)
+
+static inline int rocker_tlv_size(int payload)
+{
+    return ROCKER_TLV_HDRLEN + payload;
+}
+
+static inline int rocker_tlv_total_size(int payload)
+{
+    return ROCKER_TLV_ALIGN(rocker_tlv_size(payload));
+}
+
+static inline int rocker_tlv_padlen(int payload)
+{
+    return rocker_tlv_total_size(payload) - rocker_tlv_size(payload);
+}
+
+static inline int rocker_tlv_type(const struct rocker_tlv *tlv)
+{
+    return le32_to_cpu(tlv->type);
+}
+
+static inline void *rocker_tlv_data(const struct rocker_tlv *tlv)
+{
+    return (char *) tlv + ROCKER_TLV_HDRLEN;
+}
+
+static inline int rocker_tlv_len(const struct rocker_tlv *tlv)
+{
+    return le16_to_cpu(tlv->len) - ROCKER_TLV_HDRLEN;
+}
+
+static inline uint8_t rocker_tlv_get_u8(const struct rocker_tlv *tlv)
+{
+    return *(uint8_t *) rocker_tlv_data(tlv);
+}
+
+static inline uint16_t rocker_tlv_get_u16(const struct rocker_tlv *tlv)
+{
+    return *(uint16_t *) rocker_tlv_data(tlv);
+}
+
+static inline uint32_t rocker_tlv_get_u32(const struct rocker_tlv *tlv)
+{
+    return *(uint32_t *) rocker_tlv_data(tlv);
+}
+
+static inline uint64_t rocker_tlv_get_u64(const struct rocker_tlv *tlv)
+{
+    return *(uint64_t *) rocker_tlv_data(tlv);
+}
+
+static inline uint16_t rocker_tlv_get_le16(const struct rocker_tlv *tlv)
+{
+    return le16_to_cpup((uint16_t *) rocker_tlv_data(tlv));
+}
+
+static inline uint32_t rocker_tlv_get_le32(const struct rocker_tlv *tlv)
+{
+    return le32_to_cpup((uint32_t *) rocker_tlv_data(tlv));
+}
+
+static inline uint64_t rocker_tlv_get_le64(const struct rocker_tlv *tlv)
+{
+    return le64_to_cpup((uint64_t *) rocker_tlv_data(tlv));
+}
+
+static inline void rocker_tlv_parse(struct rocker_tlv **tb, int maxtype,
+                                    const char *buf, int buf_len)
+{
+    const struct rocker_tlv *tlv;
+    const struct rocker_tlv *head = (const struct rocker_tlv *) buf;
+    int rem;
+
+    memset(tb, 0, sizeof(struct rocker_tlv *) * (maxtype + 1));
+
+    rocker_tlv_for_each(tlv, head, buf_len, rem) {
+        uint32_t type = rocker_tlv_type(tlv);
+
+        if (type > 0 && type <= maxtype) {
+            tb[type] = (struct rocker_tlv *) tlv;
+        }
+    }
+}
+
+static inline void rocker_tlv_parse_nested(struct rocker_tlv **tb,
+                                           int maxtype,
+                                           const struct rocker_tlv *tlv)
+{
+    rocker_tlv_parse(tb, maxtype, rocker_tlv_data(tlv), rocker_tlv_len(tlv));
+}
+
+static inline struct rocker_tlv *
+rocker_tlv_start(char *buf, int buf_pos)
+{
+    return (struct rocker_tlv *) (buf + buf_pos);
+}
+
+static inline void rocker_tlv_put_iov(char *buf, int *buf_pos,
+                                      int type, const struct iovec *iov,
+                                      const unsigned int iovcnt)
+{
+    size_t len = iov_size(iov, iovcnt);
+    int total_size = rocker_tlv_total_size(len);
+    struct rocker_tlv *tlv;
+
+    tlv = rocker_tlv_start(buf, *buf_pos);
+    *buf_pos += total_size;
+    tlv->type = cpu_to_le32(type);
+    tlv->len = cpu_to_le16(rocker_tlv_size(len));
+    iov_to_buf(iov, iovcnt, 0, rocker_tlv_data(tlv), len);
+    memset((char *) tlv + le16_to_cpu(tlv->len), 0, rocker_tlv_padlen(len));
+}
+
+static inline void rocker_tlv_put(char *buf, int *buf_pos,
+                                  int type, int len, void *data)
+{
+    struct iovec iov = {
+        .iov_base = data,
+        .iov_len = len,
+    };
+
+    rocker_tlv_put_iov(buf, buf_pos, type, &iov, 1);
+}
+
+static inline void rocker_tlv_put_u8(char *buf, int *buf_pos,
+                                     int type, uint8_t value)
+{
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint8_t), &value);
+}
+
+static inline void rocker_tlv_put_u16(char *buf, int *buf_pos,
+                                      int type, uint16_t value)
+{
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint16_t), &value);
+}
+
+static inline void rocker_tlv_put_u32(char *buf, int *buf_pos,
+                                      int type, uint32_t value)
+{
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint32_t), &value);
+}
+
+static inline void rocker_tlv_put_u64(char *buf, int *buf_pos,
+                                      int type, uint64_t value)
+{
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint64_t), &value);
+}
+
+static inline void rocker_tlv_put_le16(char *buf, int *buf_pos,
+                                      int type, uint16_t value)
+{
+    value = cpu_to_le16(value);
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint16_t), &value);
+}
+
+static inline void rocker_tlv_put_le32(char *buf, int *buf_pos,
+                                      int type, uint32_t value)
+{
+    value = cpu_to_le32(value);
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint32_t), &value);
+}
+
+static inline void rocker_tlv_put_le64(char *buf, int *buf_pos,
+                                      int type, uint64_t value)
+{
+    value = cpu_to_le64(value);
+    rocker_tlv_put(buf, buf_pos, type, sizeof(uint64_t), &value);
+}
+
+static inline struct rocker_tlv *rocker_tlv_nest_start(char *buf, int *buf_pos,
+                                                       int type)
+{
+    struct rocker_tlv *start = rocker_tlv_start(buf, *buf_pos);
+
+    rocker_tlv_put(buf, buf_pos, type, 0, NULL);
+    return start;
+}
+
+static inline void rocker_tlv_nest_end(char *buf, int *buf_pos,
+                                       struct rocker_tlv *start)
+{
+    start->len = cpu_to_le16(buf + *buf_pos - (char *) start);
+}
+
+static inline void rocker_tlv_nest_cancel(char *buf, int *buf_pos,
+                                          struct rocker_tlv *start)
+{
+    *buf_pos = (char *) start - buf;
+}
+
+#endif
diff --git a/hw/net/rocker/rocker_world.c b/hw/net/rocker/rocker_world.c
new file mode 100644
index 0000000..23cb11e
--- /dev/null
+++ b/hw/net/rocker/rocker_world.c
@@ -0,0 +1,108 @@
+/*
+ * QEMU rocker switch emulation - switch worlds
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "qemu/iov.h"
+
+#include "rocker.h"
+#include "rocker_world.h"
+
+struct rocker;
+
+struct world {
+    struct rocker *r;
+    enum rocker_world_type type;
+    struct world_ops *ops;
+};
+
+ssize_t world_ingress(struct world *world, uint32_t pport,
+                      const struct iovec *iov, int iovcnt)
+{
+    if (world->ops->ig) {
+        return world->ops->ig(world, pport, iov, iovcnt);
+    }
+
+    return iov_size(iov, iovcnt);
+}
+
+int world_do_cmd(struct world *world, struct desc_info *info,
+                 char *buf, uint16_t cmd, struct rocker_tlv *cmd_info_tlv)
+{
+    if (world->ops->cmd) {
+        return world->ops->cmd(world, info, buf, cmd, cmd_info_tlv);
+    }
+
+    return -ENOTSUP;
+}
+
+struct world *world_alloc(struct rocker *r, size_t sizeof_private,
+                          enum rocker_world_type type, struct world_ops *ops)
+{
+    struct world *w = g_malloc0(sizeof(struct world) + sizeof_private);
+
+    if (w) {
+        w->r = r;
+        w->type = type;
+        w->ops = ops;
+        if (w->ops->init) {
+            w->ops->init(w);
+        }
+    }
+
+    return w;
+}
+
+void world_free(struct world *world)
+{
+    if (world->ops->uninit) {
+        world->ops->uninit(world);
+    }
+    g_free(world);
+}
+
+void world_reset(struct world *world)
+{
+    if (world->ops->uninit) {
+        world->ops->uninit(world);
+    }
+    if (world->ops->init) {
+        world->ops->init(world);
+    }
+}
+
+void *world_private(struct world *world)
+{
+    return world + 1;
+}
+
+struct rocker *world_rocker(struct world *world)
+{
+    return world->r;
+}
+
+enum rocker_world_type world_type(struct world *world)
+{
+    return world->type;
+}
+
+const char *world_name(struct world *world)
+{
+    switch (world->type) {
+    case ROCKER_WORLD_TYPE_OF_DPA:
+        return "OF_DPA";
+    default:
+        return "unknown";
+    }
+}
diff --git a/hw/net/rocker/rocker_world.h b/hw/net/rocker/rocker_world.h
new file mode 100644
index 0000000..d756908
--- /dev/null
+++ b/hw/net/rocker/rocker_world.h
@@ -0,0 +1,63 @@
+/*
+ * QEMU rocker switch emulation - switch worlds
+ *
+ * Copyright (c) 2014 Scott Feldman <sfeldma@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ROCKER_WORLD_H_
+#define _ROCKER_WORLD_H_
+
+#include "rocker_hw.h"
+
+struct world;
+struct rocker;
+struct rocker_tlv;
+struct desc_info;
+
+enum rocker_world_type {
+    ROCKER_WORLD_TYPE_OF_DPA = ROCKER_PORT_MODE_OF_DPA,
+    ROCKER_WORLD_TYPE_MAX,
+};
+
+typedef int (world_init)(struct world *world);
+typedef void (world_uninit)(struct world *world);
+typedef ssize_t (world_ig)(struct world *world, uint32_t pport,
+                           const struct iovec *iov, int iovcnt);
+typedef int (world_cmd)(struct world *world, struct desc_info *info,
+                        char *buf, uint16_t cmd,
+                        struct rocker_tlv *cmd_info_tlv);
+
+struct world_ops {
+    world_init *init;
+    world_uninit *uninit;
+    world_ig *ig;
+    world_cmd *cmd;
+};
+
+ssize_t world_ingress(struct world *world, uint32_t pport,
+                      const struct iovec *iov, int iovcnt);
+int world_do_cmd(struct world *world, struct desc_info *info,
+                 char *buf, uint16_t cmd, struct rocker_tlv *cmd_info_tlv);
+
+struct world *world_alloc(struct rocker *r, size_t sizeof_private,
+                          enum rocker_world_type type, struct world_ops *ops);
+void world_free(struct world *world);
+void world_reset(struct world *world);
+
+void *world_private(struct world *world);
+struct rocker *world_rocker(struct world *world);
+
+enum rocker_world_type world_type(struct world *world);
+const char *world_name(struct world *world);
+
+#endif /* _ROCKER_WORLD_H_ */
-- 
1.7.10.4
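The TLV helpers in rocker_tlv.h above can be modeled out-of-tree to sanity-check the wire layout. The Python sketch below mirrors what the accessors imply (little-endian u32 type and u16 len, payload padded to an alignment boundary); the 8-byte header size and 8-byte alignment are assumptions for illustration, since ROCKER_TLV_HDRLEN and the alignment macro are defined outside this hunk:

```python
import struct

# Assumed constants: the real ROCKER_TLV_HDRLEN and alignment come from
# rocker_tlv.h, which is outside this hunk.
TLV_ALIGNTO = 8
TLV_HDRLEN = 8  # u32 type + u16 len + 2 pad bytes


def tlv_align(n):
    return (n + TLV_ALIGNTO - 1) & ~(TLV_ALIGNTO - 1)


def tlv_put(buf, attr_type, payload):
    """Append one attribute, like rocker_tlv_put(): header, payload, padding."""
    size = TLV_HDRLEN + len(payload)              # matches rocker_tlv_size()
    buf += struct.pack("<IH2x", attr_type, size)  # little-endian header
    buf += payload
    buf += b"\0" * (tlv_align(size) - size)       # matches rocker_tlv_padlen()
    return buf


def tlv_parse(buf):
    """Walk a TLV stream into {type: payload}, like rocker_tlv_parse()."""
    tb = {}
    pos = 0
    while pos + TLV_HDRLEN <= len(buf):
        attr_type, size = struct.unpack_from("<IH", buf, pos)
        tb[attr_type] = buf[pos + TLV_HDRLEN:pos + size]
        pos += tlv_align(size)
    return tb
```

Nested attributes fall out of the same scheme: a nest is simply an attribute whose payload is itself a TLV stream, which is what rocker_tlv_parse_nested() exploits.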


* [Qemu-devel] [PATCH v2 08/10] qmp: add rocker device support
  2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
                   ` (7 preceding siblings ...)
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device sfeldma
@ 2015-01-06  2:24 ` sfeldma
  2015-01-06 15:19   ` Stefan Hajnoczi
  2015-01-06  2:25 ` [Qemu-devel] [PATCH v2 09/10] rocker: add tests sfeldma
  2015-01-06  2:25 ` [Qemu-devel] [PATCH v2 10/10] MAINTAINERS: add rocker sfeldma
  10 siblings, 1 reply; 23+ messages in thread
From: sfeldma @ 2015-01-06  2:24 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Add QMP/HMP support for rocker devices.  This is mostly for debugging purposes
to see inside the device's tables and port configurations.  Some examples:

(qemu) rocker sw1
name: sw1
id: 0x0000013512005452
ports: 4

(qemu) rocker-ports sw1
            ena/    speed/ auto
      port  link    duplex neg?
     sw1.1  up     10G  FD  No
     sw1.2  up     10G  FD  No
     sw1.3  !ena   10G  FD  No
     sw1.4  !ena   10G  FD  No

(qemu) rocker-of-dpa-flows sw1
prio tbl hits key(mask) --> actions
2    60       lport 1 vlan 1 LLDP src 00:02:00:00:02:00 dst 01:80:c2:00:00:0e
2    60       lport 1 vlan 1 ARP src 00:02:00:00:02:00 dst 00:02:00:00:03:00
2    60       lport 2 vlan 2 IPv6 src 00:02:00:00:03:00 dst 33:33:ff:00:00:02 proto 58
3    50       vlan 2 dst 33:33:ff:00:00:02 --> write group 0x32000001 goto tbl 60
2    60       lport 2 vlan 2 IPv6 src 00:02:00:00:03:00 dst 33:33:ff:00:03:00 proto 58
3    50  1    vlan 2 dst 33:33:ff:00:03:00 --> write group 0x32000001 goto tbl 60
2    60       lport 2 vlan 2 ARP src 00:02:00:00:03:00 dst 00:02:00:00:02:00
3    50  2    vlan 2 dst 00:02:00:00:02:00 --> write group 0x02000001 goto tbl 60
2    60  1    lport 2 vlan 2 IP src 00:02:00:00:03:00 dst 00:02:00:00:02:00 proto 1
3    50  2    vlan 1 dst 00:02:00:00:03:00 --> write group 0x01000002 goto tbl 60
2    60  1    lport 1 vlan 1 IP src 00:02:00:00:02:00 dst 00:02:00:00:03:00 proto 1
2    60       lport 1 vlan 1 IPv6 src 00:02:00:00:02:00 dst 33:33:ff:00:00:01 proto 58
3    50       vlan 1 dst 33:33:ff:00:00:01 --> write group 0x31000000 goto tbl 60
2    60       lport 1 vlan 1 IPv6 src 00:02:00:00:02:00 dst 33:33:ff:00:02:00 proto 58
3    50  1    vlan 1 dst 33:33:ff:00:02:00 --> write group 0x31000000 goto tbl 60
1    60  173  lport 2 vlan 2 LLDP src <any> dst 01:80:c2:00:00:0e --> write group 0x02000000
1    60  6    lport 2 vlan 2 IPv6 src <any> dst <any> --> write group 0x02000000
1    60  174  lport 1 vlan 1 LLDP src <any> dst 01:80:c2:00:00:0e --> write group 0x01000000
1    60  174  lport 2 vlan 2 IP src <any> dst <any> --> write group 0x02000000
1    60  6    lport 1 vlan 1 IPv6 src <any> dst <any> --> write group 0x01000000
1    60  181  lport 2 vlan 2 ARP src <any> dst <any> --> write group 0x02000000
1    10  715  lport 2 --> apply new vlan 2 goto tbl 20
1    60  177  lport 1 vlan 1 ARP src <any> dst <any> --> write group 0x01000000
1    60  174  lport 1 vlan 1 IP src <any> dst <any> --> write group 0x01000000
1    10  717  lport 1 --> apply new vlan 1 goto tbl 20
1    0   1432 lport 0(0xffff) --> goto tbl 10

(qemu) rocker-of-dpa-groups sw1
id (decode) --> buckets
0x32000001 (type L2 multicast vlan 2 index 1) --> groups [0x02000001,0x02000000]
0x02000001 (type L2 interface vlan 2 lport 1) --> pop vlan out lport 1
0x01000002 (type L2 interface vlan 1 lport 2) --> pop vlan out lport 2
0x02000000 (type L2 interface vlan 2 lport 0) --> pop vlan out lport 0
0x01000000 (type L2 interface vlan 1 lport 0) --> pop vlan out lport 0
0x31000000 (type L2 multicast vlan 1 index 0) --> groups [0x01000002,0x01000000]
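The same tables can be queried programmatically over QMP rather than through the HMP monitor. A minimal client sketch follows; the socket path is an example, and the command name "rocker" is inferred from the qmp_rocker() handler added in this patch — the authoritative command names live in qapi/rocker.json:

```python
import json
import socket


def qmp_message(command, arguments=None):
    """Build one line of QMP's newline-delimited JSON wire protocol."""
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg) + "\n"


def qmp_execute(sock_path, command, arguments=None):
    """Connect to a -qmp unix socket, negotiate capabilities, run one command."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rw")
    json.loads(f.readline())                 # server greeting banner
    f.write(qmp_message("qmp_capabilities"))
    f.flush()
    json.loads(f.readline())                 # empty success reply
    f.write(qmp_message(command, arguments))
    f.flush()
    reply = json.loads(f.readline())
    s.close()
    return reply

# Usage, against a QEMU started with e.g. -qmp unix:/tmp/qmp.sock,server:
#   qmp_execute("/tmp/qmp.sock", "rocker", {"name": "sw1"})
```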

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 hmp-commands.hx               |   56 ++++++++
 hmp.c                         |  303 ++++++++++++++++++++++++++++++++++++++++
 hmp.h                         |    4 +
 hw/net/rocker/rocker.c        |   46 ++++++
 hw/net/rocker/rocker_of_dpa.c |  309 +++++++++++++++++++++++++++++++++++++++++
 qapi-schema.json              |    3 +
 qapi/rocker.json              |  249 +++++++++++++++++++++++++++++++++
 qmp-commands.hx               |   97 +++++++++++++
 8 files changed, 1067 insertions(+)
 create mode 100644 qapi/rocker.json

diff --git a/hmp-commands.hx b/hmp-commands.hx
index e37bc8b..035a872 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1786,6 +1786,62 @@ STEXI
 show available trace events and their state
 ETEXI
 
+    {
+        .name       = "rocker",
+        .args_type  = "name:s",
+        .params     = "rocker name",
+        .help       = "Show rocker switch",
+        .mhandler.cmd = hmp_rocker,
+    },
+
+STEXI
+@item rocker @var{name}
+@findex rocker
+Show Rocker(s)
+ETEXI
+
+    {
+        .name       = "rocker-ports",
+        .args_type  = "name:s",
+        .params     = "rocker_ports name",
+        .help       = "Show rocker ports",
+        .mhandler.cmd = hmp_rocker_ports,
+    },
+
+STEXI
+@item rocker_ports @var{name}
+@findex rocker_ports
+Show Rocker ports
+ETEXI
+
+    {
+        .name       = "rocker-of-dpa-flows",
+        .args_type  = "name:s,tbl_id:i?",
+        .params     = "rocker_of_dpa_flows name [tbl_id]",
+        .help       = "Show rocker OF-DPA flow tables",
+        .mhandler.cmd = hmp_rocker_of_dpa_flows,
+    },
+
+STEXI
+@item rocker_of_dpa_flows @var{name} [@var{tbl_id}]
+@findex rocker_of_dpa_flows
+Show Rocker OF-DPA flow tables
+ETEXI
+
+    {
+        .name       = "rocker-of-dpa-groups",
+        .args_type  = "name:s,type:i?",
+        .params     = "rocker_of_dpa_groups name [type]",
+        .help       = "Show rocker OF-DPA groups",
+        .mhandler.cmd = hmp_rocker_of_dpa_groups,
+    },
+
+STEXI
+@item rocker_of_dpa_groups @var{name} [@var{type}]
+@findex rocker_of_dpa_groups
+Show Rocker OF-DPA groups
+ETEXI
+
 STEXI
 @end table
 ETEXI
diff --git a/hmp.c b/hmp.c
index 481be80..2290d10 100644
--- a/hmp.c
+++ b/hmp.c
@@ -15,6 +15,7 @@
 
 #include "hmp.h"
 #include "net/net.h"
+#include "net/eth.h"
 #include "sysemu/char.h"
 #include "qemu/option.h"
 #include "qemu/timer.h"
@@ -1813,3 +1814,305 @@ void hmp_info_memory_devices(Monitor *mon, const QDict *qdict)
 
     qapi_free_MemoryDeviceInfoList(info_list);
 }
+
+void hmp_rocker(Monitor *mon, const QDict *qdict)
+{
+    const char *name = qdict_get_str(qdict, "name");
+    Rocker *rocker;
+    Error *errp = NULL;
+
+    rocker = qmp_rocker(name, &errp);
+    if (errp != NULL) {
+        hmp_handle_error(mon, &errp);
+        return;
+    }
+
+    monitor_printf(mon, "name: %s\n", rocker->name);
+    monitor_printf(mon, "id: 0x%016" PRIx64 "\n", rocker->id);
+    monitor_printf(mon, "ports: %d\n", rocker->ports);
+
+    qapi_free_Rocker(rocker);
+}
+
+void hmp_rocker_ports(Monitor *mon, const QDict *qdict)
+{
+    RockerPortList *list, *port;
+    const char *name = qdict_get_str(qdict, "name");
+    Error *errp = NULL;
+
+    list = qmp_rocker_ports(name, &errp);
+    if (errp != NULL) {
+        hmp_handle_error(mon, &errp);
+        return;
+    }
+
+    monitor_printf(mon, "            ena/    speed/ auto\n");
+    monitor_printf(mon, "      port  link    duplex neg?\n");
+
+    for (port = list; port; port = port->next) {
+        monitor_printf(mon, "%10s  %-4s   %-3s  %2s  %-3s\n",
+                       port->value->name, port->value->enabled ?
+                       port->value->link_up ? "up" : "down" : "!ena",
+                       port->value->speed == 10000 ? "10G" : "??",
+                       port->value->duplex ? "FD" : "HD",
+                       port->value->autoneg ? "Yes" : "No");
+    }
+
+    qapi_free_RockerPortList(list);
+}
+
+void hmp_rocker_of_dpa_flows(Monitor *mon, const QDict *qdict)
+{
+    RockerOfDpaFlowList *list, *info;
+    const char *name = qdict_get_str(qdict, "name");
+    uint32_t tbl_id = qdict_get_try_int(qdict, "tbl_id", -1);
+    Error *errp = NULL;
+
+    list = qmp_rocker_of_dpa_flows(name, tbl_id != -1, tbl_id, &errp);
+    if (errp != NULL) {
+        hmp_handle_error(mon, &errp);
+        return;
+    }
+
+    monitor_printf(mon, "prio tbl hits key(mask) --> actions\n");
+
+    for (info = list; info; info = info->next) {
+        RockerOfDpaFlow *flow = info->value;
+        RockerOfDpaFlowKey *key = flow->key;
+        RockerOfDpaFlowMask *mask = flow->mask;
+        RockerOfDpaFlowAction *action = flow->action;
+
+        if (flow->hits) {
+            monitor_printf(mon, "%-4d %-3d %-4" PRId64,
+                           key->priority, key->tbl_id, flow->hits);
+        } else {
+            monitor_printf(mon, "%-4d %-3d     ",
+                           key->priority, key->tbl_id);
+        }
+
+        if (key->has_in_pport) {
+            monitor_printf(mon, " pport %d", key->in_pport);
+            if (mask->has_in_pport) {
+                monitor_printf(mon, "(0x%x)", mask->in_pport);
+            }
+        }
+
+        if (key->has_vlan_id) {
+            monitor_printf(mon, " vlan %d",
+                           key->vlan_id & VLAN_VID_MASK);
+            if (mask->has_vlan_id) {
+                monitor_printf(mon, "(0x%x)", mask->vlan_id);
+            }
+        }
+
+        if (key->has_tunnel_id) {
+            monitor_printf(mon, " tunnel %d", key->tunnel_id);
+            if (mask->has_tunnel_id) {
+                monitor_printf(mon, "(0x%x)", mask->tunnel_id);
+            }
+        }
+
+        if (key->has_eth_type) {
+            switch (key->eth_type) {
+            case 0x0806:
+                monitor_printf(mon, " ARP");
+                break;
+            case 0x0800:
+                monitor_printf(mon, " IP");
+                break;
+            case 0x86dd:
+                monitor_printf(mon, " IPv6");
+                break;
+            case 0x8809:
+                monitor_printf(mon, " LACP");
+                break;
+            case 0x88cc:
+                monitor_printf(mon, " LLDP");
+                break;
+            default:
+                monitor_printf(mon, " eth type 0x%04x", key->eth_type);
+                break;
+            }
+        }
+
+        if (key->has_eth_src) {
+            if ((strcmp(key->eth_src, "01:00:00:00:00:00") == 0) &&
+                (mask->has_eth_src) &&
+                (strcmp(mask->eth_src, "01:00:00:00:00:00") == 0)) {
+                monitor_printf(mon, " src <any mcast/bcast>");
+            } else if ((strcmp(key->eth_src, "00:00:00:00:00:00") == 0) &&
+                (mask->has_eth_src) &&
+                (strcmp(mask->eth_src, "01:00:00:00:00:00") == 0)) {
+                monitor_printf(mon, " src <any ucast>");
+            } else {
+                monitor_printf(mon, " src %s", key->eth_src);
+                if (mask->has_eth_src) {
+                    monitor_printf(mon, "(%s)", mask->eth_src);
+                }
+            }
+        }
+
+        if (key->has_eth_dst) {
+            if ((strcmp(key->eth_dst, "01:00:00:00:00:00") == 0) &&
+                (mask->has_eth_dst) &&
+                (strcmp(mask->eth_dst, "01:00:00:00:00:00") == 0)) {
+                monitor_printf(mon, " dst <any mcast/bcast>");
+            } else if ((strcmp(key->eth_dst, "00:00:00:00:00:00") == 0) &&
+                (mask->has_eth_dst) &&
+                (strcmp(mask->eth_dst, "01:00:00:00:00:00") == 0)) {
+                monitor_printf(mon, " dst <any ucast>");
+            } else {
+                monitor_printf(mon, " dst %s", key->eth_dst);
+                if (mask->has_eth_dst) {
+                    monitor_printf(mon, "(%s)", mask->eth_dst);
+                }
+            }
+        }
+
+        if (key->has_ip_proto) {
+            monitor_printf(mon, " proto %d", key->ip_proto);
+            if (mask->has_ip_proto) {
+                monitor_printf(mon, "(0x%x)", mask->ip_proto);
+            }
+        }
+
+        if (key->has_ip_tos) {
+            monitor_printf(mon, " TOS %d", key->ip_tos);
+            if (mask->has_ip_tos) {
+                monitor_printf(mon, "(0x%x)", mask->ip_tos);
+            }
+        }
+
+        if (key->has_ip_dst) {
+            monitor_printf(mon, " dst %s", key->ip_dst);
+        }
+
+        if (action->has_goto_tbl || action->has_group_id ||
+            action->has_new_vlan_id) {
+            monitor_printf(mon, " -->");
+        }
+
+        if (action->has_new_vlan_id) {
+            monitor_printf(mon, " apply new vlan %d",
+                           ntohs(action->new_vlan_id));
+        }
+
+        if (action->has_group_id) {
+            monitor_printf(mon, " write group 0x%08x", action->group_id);
+        }
+
+        if (action->has_goto_tbl) {
+            monitor_printf(mon, " goto tbl %d", action->goto_tbl);
+        }
+
+        monitor_printf(mon, "\n");
+    }
+
+    qapi_free_RockerOfDpaFlowList(list);
+}
+
+void hmp_rocker_of_dpa_groups(Monitor *mon, const QDict *qdict)
+{
+    RockerOfDpaGroupList *list, *g;
+    const char *name = qdict_get_str(qdict, "name");
+    uint8_t type = qdict_get_try_int(qdict, "type", 9);
+    Error *errp = NULL;
+    bool set = false;
+
+    list = qmp_rocker_of_dpa_groups(name, type != 9, type, &errp);
+    if (errp != NULL) {
+        hmp_handle_error(mon, &errp);
+        return;
+    }
+
+    monitor_printf(mon, "id (decode) --> buckets\n");
+
+    for (g = list; g; g = g->next) {
+        RockerOfDpaGroup *group = g->value;
+
+        monitor_printf(mon, "0x%08x", group->id);
+
+        monitor_printf(mon, " (type %s", group->type == 0 ? "L2 interface" :
+                                         group->type == 1 ? "L2 rewrite" :
+                                         group->type == 2 ? "L3 unicast" :
+                                         group->type == 3 ? "L2 multicast" :
+                                         group->type == 4 ? "L2 flood" :
+                                         group->type == 5 ? "L3 interface" :
+                                         group->type == 6 ? "L3 multicast" :
+                                         group->type == 7 ? "L3 ECMP" :
+                                         group->type == 8 ? "L2 overlay" :
+                                         "unknown");
+
+        if (group->has_vlan_id) {
+            monitor_printf(mon, " vlan %d", group->vlan_id);
+        }
+
+        if (group->has_pport) {
+            monitor_printf(mon, " pport %d", group->pport);
+        }
+
+        if (group->has_index) {
+            monitor_printf(mon, " index %d", group->index);
+        }
+
+        monitor_printf(mon, ") -->");
+
+        if (group->has_set_vlan_id && group->set_vlan_id) {
+            set = true;
+            monitor_printf(mon, " set vlan %d",
+                           group->set_vlan_id & VLAN_VID_MASK);
+        }
+
+        if (group->has_set_eth_src) {
+            if (!set) {
+                set = true;
+                monitor_printf(mon, " set");
+            }
+            monitor_printf(mon, " src %s", group->set_eth_src);
+        }
+
+        if (group->has_set_eth_dst) {
+            if (!set) {
+                set = true;
+                monitor_printf(mon, " set");
+            }
+            monitor_printf(mon, " dst %s", group->set_eth_dst);
+        }
+
+        set = false;
+
+        if (group->has_ttl_check && group->ttl_check) {
+            monitor_printf(mon, " check TTL");
+        }
+
+        if (group->has_group_id && group->group_id) {
+            monitor_printf(mon, " group id 0x%08x", group->group_id);
+        }
+
+        if (group->has_pop_vlan && group->pop_vlan) {
+            monitor_printf(mon, " pop vlan");
+        }
+
+        if (group->has_out_pport) {
+            monitor_printf(mon, " out pport %d", group->out_pport);
+        }
+
+        if (group->has_group_ids) {
+            struct uint32List *id;
+
+            monitor_printf(mon, " groups [");
+            for (id = group->group_ids; id; id = id->next) {
+                monitor_printf(mon, "0x%08x", id->value);
+                if (id->next) {
+                    monitor_printf(mon, ",");
+                }
+            }
+            monitor_printf(mon, "]");
+        }
+
+        monitor_printf(mon, "\n");
+    }
+
+    qapi_free_RockerOfDpaGroupList(list);
+}
+
diff --git a/hmp.h b/hmp.h
index 4bb5dca..dc8a15c 100644
--- a/hmp.h
+++ b/hmp.h
@@ -116,5 +116,9 @@ void host_net_remove_completion(ReadLineState *rs, int nb_args,
                                 const char *str);
 void delvm_completion(ReadLineState *rs, int nb_args, const char *str);
 void loadvm_completion(ReadLineState *rs, int nb_args, const char *str);
+void hmp_rocker(Monitor *mon, const QDict *qdict);
+void hmp_rocker_ports(Monitor *mon, const QDict *qdict);
+void hmp_rocker_of_dpa_flows(Monitor *mon, const QDict *qdict);
+void hmp_rocker_of_dpa_groups(Monitor *mon, const QDict *qdict);
 
 #endif
diff --git a/hw/net/rocker/rocker.c b/hw/net/rocker/rocker.c
index b410552..278894e 100644
--- a/hw/net/rocker/rocker.c
+++ b/hw/net/rocker/rocker.c
@@ -22,6 +22,7 @@
 #include "net/eth.h"
 #include "qemu/iov.h"
 #include "qemu/bitops.h"
+#include "qmp-commands.h"
 
 #include "rocker.h"
 #include "rocker_hw.h"
@@ -92,6 +93,51 @@ struct world *rocker_get_world(struct rocker *r, enum rocker_world_type type)
     return NULL;
 }
 
+Rocker *qmp_rocker(const char *name, Error **errp)
+{
+    Rocker *rocker = g_malloc0(sizeof(*rocker));
+    struct rocker *r;
+
+    r = rocker_find(name);
+    if (!r) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "rocker %s not found", name);
+        return NULL;
+    }
+
+    rocker->name = g_strdup(r->name);
+    rocker->id = r->switch_id;
+    rocker->ports = r->fp_ports;
+
+    return rocker;
+}
+
+RockerPortList *qmp_rocker_ports(const char *name, Error **errp)
+{
+    RockerPortList *list = NULL;
+    struct rocker *r;
+    int i;
+
+    r = rocker_find(name);
+    if (!r) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "rocker %s not found", name);
+        return NULL;
+    }
+
+    for (i = r->fp_ports - 1; i >= 0; i--) {
+        RockerPortList *info = g_malloc0(sizeof(*info));
+        info->value = g_malloc0(sizeof(*info->value));
+        struct fp_port *port = r->fp_port[i];
+
+        fp_port_get_info(port, info);
+        info->next = list;
+        list = info;
+    }
+
+    return list;
+}
+
 uint32_t rocker_fp_ports(struct rocker *r)
 {
     return r->fp_ports;
diff --git a/hw/net/rocker/rocker_of_dpa.c b/hw/net/rocker/rocker_of_dpa.c
index 328f351..432665b 100644
--- a/hw/net/rocker/rocker_of_dpa.c
+++ b/hw/net/rocker/rocker_of_dpa.c
@@ -17,6 +17,7 @@
 #include "net/eth.h"
 #include "qemu/iov.h"
 #include "qemu/timer.h"
+#include "qmp-commands.h"
 
 #include "rocker.h"
 #include "rocker_hw.h"
@@ -2321,6 +2322,314 @@ static void of_dpa_uninit(struct world *world)
     g_hash_table_destroy(of_dpa->flow_tbl);
 }
 
+struct of_dpa_flow_fill_context {
+    RockerOfDpaFlowList *list;
+    uint32_t tbl_id;
+};
+
+static void of_dpa_flow_fill(void *cookie, void *value, void *user_data)
+{
+    struct of_dpa_flow *flow = value;
+    struct of_dpa_flow_key *key = &flow->key;
+    struct of_dpa_flow_key *mask = &flow->mask;
+    struct of_dpa_flow_fill_context *flow_context = user_data;
+    RockerOfDpaFlowList *new;
+    RockerOfDpaFlow *nflow;
+    RockerOfDpaFlowKey *nkey;
+    RockerOfDpaFlowMask *nmask;
+    RockerOfDpaFlowAction *naction;
+
+    if (flow_context->tbl_id != -1 &&
+        flow_context->tbl_id != key->tbl_id) {
+        return;
+    }
+
+    new = g_malloc0(sizeof(*new));
+    nflow = new->value = g_malloc0(sizeof(*nflow));
+    nkey = nflow->key = g_malloc0(sizeof(*nkey));
+    nmask = nflow->mask = g_malloc0(sizeof(*nmask));
+    naction = nflow->action = g_malloc0(sizeof(*naction));
+
+    nflow->cookie = flow->cookie;
+    nflow->hits = flow->stats.hits;
+    nkey->priority = flow->priority;
+    nkey->tbl_id = key->tbl_id;
+
+    if (key->in_pport || mask->in_pport) {
+        nkey->has_in_pport = true;
+        nkey->in_pport = key->in_pport;
+    }
+
+    if (nkey->has_in_pport && mask->in_pport != 0xffffffff) {
+        nmask->has_in_pport = true;
+        nmask->in_pport = mask->in_pport;
+    }
+
+    if (key->eth.vlan_id || mask->eth.vlan_id) {
+        nkey->has_vlan_id = true;
+        nkey->vlan_id = ntohs(key->eth.vlan_id);
+    }
+
+    if (nkey->has_vlan_id && mask->eth.vlan_id != 0xffff) {
+        nmask->has_vlan_id = true;
+        nmask->vlan_id = ntohs(mask->eth.vlan_id);
+    }
+
+    if (key->tunnel_id || mask->tunnel_id) {
+        nkey->has_tunnel_id = true;
+        nkey->tunnel_id = key->tunnel_id;
+    }
+
+    if (nkey->has_tunnel_id && mask->tunnel_id != 0xffffffff) {
+        nmask->has_tunnel_id = true;
+        nmask->tunnel_id = mask->tunnel_id;
+    }
+
+    if (memcmp(key->eth.src.a, zero_mac.a, ETH_ALEN) ||
+        memcmp(mask->eth.src.a, zero_mac.a, ETH_ALEN)) {
+        nkey->has_eth_src = true;
+        nkey->eth_src = qemu_mac_strdup_printf(key->eth.src.a);
+    }
+
+    if (nkey->has_eth_src && memcmp(mask->eth.src.a, ff_mac.a, ETH_ALEN)) {
+        nmask->has_eth_src = true;
+        nmask->eth_src = qemu_mac_strdup_printf(mask->eth.src.a);
+    }
+
+    if (memcmp(key->eth.dst.a, zero_mac.a, ETH_ALEN) ||
+        memcmp(mask->eth.dst.a, zero_mac.a, ETH_ALEN)) {
+        nkey->has_eth_dst = true;
+        nkey->eth_dst = qemu_mac_strdup_printf(key->eth.dst.a);
+    }
+
+    if (nkey->has_eth_dst && memcmp(mask->eth.dst.a, ff_mac.a, ETH_ALEN)) {
+        nmask->has_eth_dst = true;
+        nmask->eth_dst = qemu_mac_strdup_printf(mask->eth.dst.a);
+    }
+
+    if (key->eth.type) {
+
+        nkey->has_eth_type = true;
+        nkey->eth_type = ntohs(key->eth.type);
+
+        switch (ntohs(key->eth.type)) {
+        case 0x0800:
+        case 0x86dd:
+            if (key->ip.proto || mask->ip.proto) {
+                nkey->has_ip_proto = true;
+                nkey->ip_proto = key->ip.proto;
+            }
+            if (nkey->has_ip_proto && mask->ip.proto != 0xff) {
+                nmask->has_ip_proto = true;
+                nmask->ip_proto = mask->ip.proto;
+            }
+            if (key->ip.tos || mask->ip.tos) {
+                nkey->has_ip_tos = true;
+                nkey->ip_tos = key->ip.tos;
+            }
+            if (nkey->has_ip_tos && mask->ip.tos != 0xff) {
+                nmask->has_ip_tos = true;
+                nmask->ip_tos = mask->ip.tos;
+            }
+            break;
+        }
+
+        switch (ntohs(key->eth.type)) {
+        case 0x0800:
+            if (key->ipv4.addr.dst || mask->ipv4.addr.dst) {
+                char *dst = inet_ntoa(*(struct in_addr *)&key->ipv4.addr.dst);
+                int dst_len = of_dpa_mask2prefix(mask->ipv4.addr.dst);
+                nkey->has_ip_dst = true;
+                nkey->ip_dst = g_strdup_printf("%s/%d", dst, dst_len);
+            }
+            break;
+        }
+    }
+
+    if (flow->action.goto_tbl) {
+        naction->has_goto_tbl = true;
+        naction->goto_tbl = flow->action.goto_tbl;
+    }
+
+    if (flow->action.write.group_id) {
+        naction->has_group_id = true;
+        naction->group_id = flow->action.write.group_id;
+    }
+
+    if (flow->action.apply.new_vlan_id) {
+        naction->has_new_vlan_id = true;
+        naction->new_vlan_id = flow->action.apply.new_vlan_id;
+    }
+
+    new->next = flow_context->list;
+    flow_context->list = new;
+}
+
+RockerOfDpaFlowList *qmp_rocker_of_dpa_flows(const char *name, bool has_tbl_id,
+                                             uint32_t tbl_id, Error **errp)
+{
+    struct rocker *r;
+    struct world *w;
+    struct of_dpa *of_dpa;
+    struct of_dpa_flow_fill_context fill_context = {
+        .list = NULL,
+        .tbl_id = tbl_id,
+    };
+
+    r = rocker_find(name);
+    if (!r) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "rocker %s not found", name);
+        return NULL;
+    }
+
+    w = rocker_get_world(r, ROCKER_WORLD_TYPE_OF_DPA);
+    if (!w) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "rocker %s doesn't have OF-DPA world", name);
+        return NULL;
+    }
+
+    of_dpa = world_private(w);
+
+    g_hash_table_foreach(of_dpa->flow_tbl, of_dpa_flow_fill, &fill_context);
+
+    return fill_context.list;
+}
+
+struct of_dpa_group_fill_context {
+    RockerOfDpaGroupList *list;
+    uint8_t type;
+};
+
+static void of_dpa_group_fill(void *key, void *value, void *user_data)
+{
+    struct of_dpa_group *group = value;
+    struct of_dpa_group_fill_context *flow_context = user_data;
+    RockerOfDpaGroupList *new;
+    RockerOfDpaGroup *ngroup;
+    struct uint32List *id;
+    int i;
+
+    /* type 9 is a wildcard: match groups of any type */
+    if (flow_context->type != 9 &&
+        flow_context->type != ROCKER_GROUP_TYPE_GET(group->id)) {
+        return;
+    }
+
+    new = g_malloc0(sizeof(*new));
+    ngroup = new->value = g_malloc0(sizeof(*ngroup));
+
+    ngroup->id = group->id;
+
+    ngroup->type = ROCKER_GROUP_TYPE_GET(group->id);
+
+    switch (ngroup->type) {
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_INTERFACE:
+        ngroup->has_vlan_id = true;
+        ngroup->vlan_id = ROCKER_GROUP_VLAN_GET(group->id);
+        ngroup->has_pport = true;
+        ngroup->pport = ROCKER_GROUP_PORT_GET(group->id);
+        ngroup->has_out_pport = true;
+        ngroup->out_pport = group->l2_interface.out_pport;
+        ngroup->has_pop_vlan = true;
+        ngroup->pop_vlan = group->l2_interface.pop_vlan;
+        break;
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_REWRITE:
+        ngroup->has_index = true;
+        ngroup->index = ROCKER_GROUP_INDEX_LONG_GET(group->id);
+        ngroup->has_group_id = true;
+        ngroup->group_id = group->l2_rewrite.group_id;
+        if (group->l2_rewrite.vlan_id) {
+            ngroup->has_set_vlan_id = true;
+            ngroup->set_vlan_id = ntohs(group->l2_rewrite.vlan_id);
+        }
+        if (memcmp(group->l2_rewrite.src_mac.a, zero_mac.a, ETH_ALEN)) {
+            ngroup->has_set_eth_src = true;
+            ngroup->set_eth_src =
+                qemu_mac_strdup_printf(group->l2_rewrite.src_mac.a);
+        }
+        if (memcmp(group->l2_rewrite.dst_mac.a, zero_mac.a, ETH_ALEN)) {
+            ngroup->has_set_eth_dst = true;
+            ngroup->set_eth_dst =
+                qemu_mac_strdup_printf(group->l2_rewrite.dst_mac.a);
+        }
+        break;
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_FLOOD:
+    case ROCKER_OF_DPA_GROUP_TYPE_L2_MCAST:
+        ngroup->has_vlan_id = true;
+        ngroup->vlan_id = ROCKER_GROUP_VLAN_GET(group->id);
+        ngroup->has_index = true;
+        ngroup->index = ROCKER_GROUP_INDEX_GET(group->id);
+        for (i = 0; i < group->l2_flood.group_count; i++) {
+            ngroup->has_group_ids = true;
+            id = g_malloc0(sizeof(*id));
+            id->value = group->l2_flood.group_ids[i];
+            id->next = ngroup->group_ids;
+            ngroup->group_ids = id;
+        }
+        break;
+    case ROCKER_OF_DPA_GROUP_TYPE_L3_UCAST:
+        ngroup->has_index = true;
+        ngroup->index = ROCKER_GROUP_INDEX_LONG_GET(group->id);
+        ngroup->has_group_id = true;
+        ngroup->group_id = group->l3_unicast.group_id;
+        if (group->l3_unicast.vlan_id) {
+            ngroup->has_set_vlan_id = true;
+            ngroup->set_vlan_id = ntohs(group->l3_unicast.vlan_id);
+        }
+        if (memcmp(group->l3_unicast.src_mac.a, zero_mac.a, ETH_ALEN)) {
+            ngroup->has_set_eth_src = true;
+            ngroup->set_eth_src =
+                qemu_mac_strdup_printf(group->l3_unicast.src_mac.a);
+        }
+        if (memcmp(group->l3_unicast.dst_mac.a, zero_mac.a, ETH_ALEN)) {
+            ngroup->has_set_eth_dst = true;
+            ngroup->set_eth_dst =
+                qemu_mac_strdup_printf(group->l3_unicast.dst_mac.a);
+        }
+        if (group->l3_unicast.ttl_check) {
+            ngroup->has_ttl_check = true;
+            ngroup->ttl_check = group->l3_unicast.ttl_check;
+        }
+        break;
+    }
+
+    new->next = flow_context->list;
+    flow_context->list = new;
+}
+
+RockerOfDpaGroupList *qmp_rocker_of_dpa_groups(const char *name, bool has_type,
+                                               uint8_t type, Error **errp)
+{
+    struct rocker *r;
+    struct world *w;
+    struct of_dpa *of_dpa;
+    struct of_dpa_group_fill_context fill_context = {
+        .list = NULL,
+        .type = has_type ? type : 9,    /* 9 matches all group types */
+    };
+
+    r = rocker_find(name);
+    if (!r) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "rocker %s not found", name);
+        return NULL;
+    }
+
+    w = rocker_get_world(r, ROCKER_WORLD_TYPE_OF_DPA);
+    if (!w) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "rocker %s doesn't have OF-DPA world", name);
+        return NULL;
+    }
+
+    of_dpa = world_private(w);
+
+    g_hash_table_foreach(of_dpa->group_tbl, of_dpa_group_fill, &fill_context);
+
+    return fill_context.list;
+}
+
 static struct world_ops of_dpa_ops = {
     .init = of_dpa_init,
     .uninit = of_dpa_uninit,
diff --git a/qapi-schema.json b/qapi-schema.json
index 563b4ad..54a180b 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3515,3 +3515,6 @@
 # Since: 2.1
 ##
 { 'command': 'rtc-reset-reinjection' }
+
+# Rocker ethernet network switch
+{ 'include': 'qapi/rocker.json' }
diff --git a/qapi/rocker.json b/qapi/rocker.json
new file mode 100644
index 0000000..87406f9
--- /dev/null
+++ b/qapi/rocker.json
@@ -0,0 +1,249 @@
+##
+# @Rocker:
+#
+# Rocker switch information.
+#
+# @name: switch name
+#
+# @id: switch ID
+#
+# @ports: number of front-panel ports
+##
+{ 'type': 'Rocker',
+  'data': { 'name': 'str', 'id': 'uint64', 'ports': 'uint32' } }
+
+##
+# @rocker:
+#
+# Return rocker switch information.
+#
+# Returns: @Rocker information
+##
+{ 'command': 'rocker',
+  'data': { 'name': 'str' },
+  'returns': 'Rocker' }
+
+##
+# @RockerPortDuplex:
+#
+# An enumeration of port duplex states.
+#
+# @half: half duplex
+#
+# @full: full duplex
+##
+{ 'enum': 'RockerPortDuplex', 'data': [ 'half', 'full' ] }
+
+##
+# @RockerPortAutoneg:
+#
+# An enumeration of port autoneg states.
+#
+# @off: autoneg is off
+#
+# @on: autoneg is on
+##
+{ 'enum': 'RockerPortAutoneg', 'data': [ 'off', 'on' ] }
+
+##
+# @RockerPort:
+#
+# Rocker switch port information.
+#
+# @name: port name
+#
+# @enabled: port is enabled for I/O
+#
+# @link-up: physical link is UP on port
+#
+# @speed: port link speed in Mbps
+#
+# @duplex: port link duplex
+#
+# @autoneg: port link autoneg
+##
+{ 'type': 'RockerPort',
+  'data': { 'name': 'str', 'enabled': 'bool', 'link-up': 'bool',
+            'speed': 'uint32', 'duplex': 'RockerPortDuplex',
+            'autoneg': 'RockerPortAutoneg' } }
+
+##
+# @rocker-ports:
+#
+# Return rocker switch port information.
+#
+# Returns: a list of @RockerPort information
+##
+{ 'command': 'rocker-ports',
+  'data': { 'name': 'str' },
+  'returns': ['RockerPort'] }
+
+##
+# @RockerOfDpaFlowKey:
+#
+# Rocker switch OF-DPA flow key
+#
+# @priority: key priority, 0 being lowest priority
+#
+# @tbl-id: flow table ID
+#
+# @in-pport: physical input port
+#
+# @tunnel-id: tunnel ID
+#
+# @vlan-id: VLAN ID
+#
+# @eth-type: Ethernet header type
+#
+# @eth-src: Ethernet header source MAC address
+#
+# @eth-dst: Ethernet header destination MAC address
+#
+# @ip-proto: IP Header protocol field
+#
+# @ip-tos: IP header TOS field
+#
+# @ip-dst: IP header destination address
+##
+{ 'type': 'RockerOfDpaFlowKey',
+  'data' : { 'priority': 'uint32', 'tbl-id': 'uint32', '*in-pport': 'uint32',
+             '*tunnel-id': 'uint32', '*vlan-id': 'uint16',
+             '*eth-type': 'uint16', '*eth-src': 'str', '*eth-dst': 'str',
+             '*ip-proto': 'uint8', '*ip-tos': 'uint8', '*ip-dst': 'str' } }
+
+##
+# @RockerOfDpaFlowMask:
+#
+# Rocker switch OF-DPA flow mask
+#
+# @in-pport: physical input port
+#
+# @tunnel-id: tunnel ID
+#
+# @vlan-id: VLAN ID
+#
+# @eth-src: Ethernet header source MAC address
+#
+# @eth-dst: Ethernet header destination MAC address
+#
+# @ip-proto: IP Header protocol field
+#
+# @ip-tos: IP header TOS field
+##
+{ 'type': 'RockerOfDpaFlowMask',
+  'data' : { '*in-pport': 'uint32', '*tunnel-id': 'uint32',
+             '*vlan-id': 'uint16', '*eth-src': 'str', '*eth-dst': 'str',
+             '*ip-proto': 'uint8', '*ip-tos': 'uint8' } }
+
+##
+# @RockerOfDpaFlowAction:
+#
+# Rocker switch OF-DPA flow action
+#
+# @goto-tbl: next table ID
+#
+# @group-id: group ID
+#
+# @tunnel-lport: tunnel logical port ID
+#
+# @vlan-id: VLAN ID
+#
+# @new-vlan-id: new VLAN ID
+#
+# @out-pport: physical output port
+##
+{ 'type': 'RockerOfDpaFlowAction',
+  'data' : { '*goto-tbl': 'uint32', '*group-id': 'uint32',
+             '*tunnel-lport': 'uint32', '*vlan-id': 'uint16',
+             '*new-vlan-id': 'uint16', '*out-pport': 'uint32' } }
+
+##
+# @RockerOfDpaFlow:
+#
+# Rocker switch OF-DPA flow
+#
+# @cookie: flow unique cookie ID
+#
+# @hits: count of matches (hits) on flow
+#
+# @key: flow key
+#
+# @mask: flow mask
+#
+# @action: flow action
+##
+{ 'type': 'RockerOfDpaFlow',
+  'data': { 'cookie': 'uint64', 'hits': 'uint64', 'key': 'RockerOfDpaFlowKey',
+            'mask': 'RockerOfDpaFlowMask', 'action': 'RockerOfDpaFlowAction' } }
+
+##
+# @rocker-of-dpa-flows:
+#
+# Return rocker OF-DPA flow information.
+#
+# @name: switch name
+#
+# @tbl-id: (optional) flow table ID.  If tbl-id is not specified, returns
+# flow information for all tables.
+#
+# Returns: a list of @RockerOfDpaFlow information
+##
+{ 'command': 'rocker-of-dpa-flows',
+  'data': { 'name': 'str', '*tbl-id': 'uint32' },
+  'returns': ['RockerOfDpaFlow'] }
+
+##
+# @RockerOfDpaGroup:
+#
+# Rocker switch OF-DPA group
+#
+# @id: group unique ID
+#
+# @type: group type
+#
+# @vlan-id: VLAN ID
+#
+# @pport: physical port number
+#
+# @index: group index, unique within group type
+#
+# @out-pport: output physical port number
+#
+# @group-id: next group ID
+#
+# @set-vlan-id: VLAN ID to set
+#
+# @pop-vlan: pop VLAN header from packet
+#
+# @group-ids: list of next group IDs
+#
+# @set-eth-src: set source MAC address in Ethernet header
+#
+# @set-eth-dst: set destination MAC address in Ethernet header
+#
+# @ttl-check: perform TTL check
+##
+{ 'type': 'RockerOfDpaGroup',
+  'data': { 'id': 'uint32',  'type': 'uint8', '*vlan-id': 'uint16',
+            '*pport': 'uint32', '*index': 'uint32', '*out-pport': 'uint32',
+            '*group-id': 'uint32', '*set-vlan-id': 'uint16',
+            '*pop-vlan': 'uint8', '*group-ids': ['uint32'],
+            '*set-eth-src': 'str', '*set-eth-dst': 'str',
+            '*ttl-check': 'uint8' } }
+
+##
+# @rocker-of-dpa-groups:
+#
+# Return rocker OF-DPA group information.
+#
+# @name: switch name
+#
+# @type: (optional) group type.  If type is not specified, returns
+# group information for all group types.
+#
+# Returns: a list of @RockerOfDpaGroup information
+##
+{ 'command': 'rocker-of-dpa-groups',
+  'data': { 'name': 'str', '*type': 'uint8' },
+  'returns': ['RockerOfDpaGroup'] }
+
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 6945d30..2753d37 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -3860,3 +3860,100 @@ Move mouse pointer to absolute coordinates (20000, 400).
 <- { "return": {} }
 
 EQMP
+
+    {
+        .name       = "rocker",
+        .args_type  = "name:s",
+        .mhandler.cmd_new = qmp_marshal_input_rocker,
+    },
+
+SQMP
+Show rocker switch
+------------------
+
+Arguments:
+
+- "name": switch name
+
+Example:
+
+-> { "execute": "rocker", "arguments": { "name": "sw1" } }
+<- { "return": {"name": "sw1", "ports": 2, "id": 1327446905938}}
+
+EQMP
+
+    {
+        .name       = "rocker-ports",
+        .args_type  = "name:s",
+        .mhandler.cmd_new = qmp_marshal_input_rocker_ports,
+    },
+
+SQMP
+Show rocker switch ports
+------------------------
+
+Arguments:
+
+- "name": switch name
+
+Example:
+
+-> { "execute": "rocker-ports", "arguments": { "name": "sw1" } }
+<- { "return": [ {"duplex": "full", "enabled": true, "name": "sw1.1", "autoneg": "off", "link-up": true, "speed": 10000},
+                 {"duplex": "full", "enabled": true, "name": "sw1.2", "autoneg": "off", "link-up": true, "speed": 10000}
+   ]}
+
+EQMP
+
+    {
+        .name       = "rocker-of-dpa-flows",
+        .args_type  = "name:s,tbl-id:i?",
+        .mhandler.cmd_new = qmp_marshal_input_rocker_of_dpa_flows,
+    },
+
+SQMP
+Show rocker switch OF-DPA flow tables
+-------------------------------------
+
+Arguments:
+
+- "name": switch name
+- "tbl-id": (optional) flow table ID
+
+Example:
+
+-> { "execute": "rocker-of-dpa-flows", "arguments": { "name": "sw1" } }
+<- { "return": [ {"key": {"in-pport": 0, "priority": 1, "tbl-id": 0},
+                  "hits": 138,
+                  "cookie": 0,
+                  "action": {"goto-tbl": 10},
+                  "mask": {"in-pport": 4294901760}
+                 },
+                 {...more...},
+   ]}
+
+EQMP
+
+    {
+        .name       = "rocker-of-dpa-groups",
+        .args_type  = "name:s,type:i?",
+        .mhandler.cmd_new = qmp_marshal_input_rocker_of_dpa_groups,
+    },
+
+SQMP
+Show rocker OF-DPA group tables
+-------------------------------
+
+Arguments:
+
+- "name": switch name
+- "type": (optional) group type
+
+Example:
+
+-> { "execute": "rocker-of-dpa-groups", "arguments": { "name": "sw1" } }
+<- { "return": [ {"type": 0, "out-pport": 2, "pport": 2, "vlan-id": 3841, "pop-vlan": 1, "id": 251723778},
+                 {"type": 0, "out-pport": 0, "pport": 0, "vlan-id": 3841, "pop-vlan": 1, "id": 251723776},
+                 {"type": 0, "out-pport": 1, "pport": 1, "vlan-id": 3840, "pop-vlan": 1, "id": 251658241},
+                 {"type": 0, "out-pport": 0, "pport": 0, "vlan-id": 3840, "pop-vlan": 1, "id": 251658240}
+   ]}
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread
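[Editorial aside] The group IDs in the `rocker-of-dpa-groups` example above pack the group's type and parameters into a single 32-bit word, which `of_dpa_group_fill()` unpacks with the `ROCKER_GROUP_*_GET` macros. A minimal Python sketch of the L2-interface decoding, assuming the OF-DPA-style layout implied here (type in bits 31:28, VLAN in bits 27:16, pport in bits 15:0) — a layout that is consistent with the example values, not taken from the macros themselves:

```python
def decode_l2_interface_group_id(group_id):
    """Unpack an OF-DPA L2-interface group ID into (type, vlan, pport)."""
    gtype = (group_id >> 28) & 0xf    # cf. ROCKER_GROUP_TYPE_GET
    vlan = (group_id >> 16) & 0xfff   # cf. ROCKER_GROUP_VLAN_GET
    pport = group_id & 0xffff         # cf. ROCKER_GROUP_PORT_GET
    return gtype, vlan, pport

# IDs taken from the rocker-of-dpa-groups example output above:
print(decode_l2_interface_group_id(251723778))  # -> (0, 3841, 2)
print(decode_l2_interface_group_id(251658240))  # -> (0, 3840, 0)
```

Decoding `251723778` (0x0F010002) yields type 0 (L2 interface), vlan-id 3841, and pport 2 — matching the fields QMP reports for that group.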

* [Qemu-devel] [PATCH v2 09/10] rocker: add tests
  2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
                   ` (8 preceding siblings ...)
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 08/10] qmp: add rocker device support sfeldma
@ 2015-01-06  2:25 ` sfeldma
  2015-01-06  2:25 ` [Qemu-devel] [PATCH v2 10/10] MAINTAINERS: add rocker sfeldma
  10 siblings, 0 replies; 23+ messages in thread
From: sfeldma @ 2015-01-06  2:25 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Add some basic tests for rocker to exercise L2/L3/L4 functionality.  Requires
an external test environment, simp, located here:

https://github.com/scottfeldman/simp

To run the tests, the simp environment must be installed, and a suitable VM
image built and installed with a Linux 3.18 (or later) kernel with rocker
driver support enabled.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
---
 hw/net/rocker/test/README          |    5 +++
 hw/net/rocker/test/all             |   19 +++++++++++
 hw/net/rocker/test/bridge          |   43 ++++++++++++++++++++++++
 hw/net/rocker/test/bridge-stp      |   52 +++++++++++++++++++++++++++++
 hw/net/rocker/test/bridge-vlan     |   52 +++++++++++++++++++++++++++++
 hw/net/rocker/test/bridge-vlan-stp |   64 ++++++++++++++++++++++++++++++++++++
 hw/net/rocker/test/port            |   22 +++++++++++++
 hw/net/rocker/test/tut.dot         |    8 +++++
 8 files changed, 265 insertions(+)
 create mode 100644 hw/net/rocker/test/README
 create mode 100755 hw/net/rocker/test/all
 create mode 100755 hw/net/rocker/test/bridge
 create mode 100755 hw/net/rocker/test/bridge-stp
 create mode 100755 hw/net/rocker/test/bridge-vlan
 create mode 100755 hw/net/rocker/test/bridge-vlan-stp
 create mode 100755 hw/net/rocker/test/port
 create mode 100644 hw/net/rocker/test/tut.dot

diff --git a/hw/net/rocker/test/README b/hw/net/rocker/test/README
new file mode 100644
index 0000000..531e673
--- /dev/null
+++ b/hw/net/rocker/test/README
@@ -0,0 +1,5 @@
+Tests require simp (simple network simulator) found here:
+
+https://github.com/scottfeldman/simp
+
+Run 'all' to run all tests.
diff --git a/hw/net/rocker/test/all b/hw/net/rocker/test/all
new file mode 100755
index 0000000..d5ae963
--- /dev/null
+++ b/hw/net/rocker/test/all
@@ -0,0 +1,19 @@
+echo -n "Running port test...              "
+./port
+if [ $? -eq 0 ]; then echo "pass"; else echo "FAILED"; exit 1; fi
+
+echo -n "Running bridge test...            "
+./bridge
+if [ $? -eq 0 ]; then echo "pass"; else echo "FAILED"; exit 1; fi
+
+echo -n "Running bridge STP test...        "
+./bridge-stp
+if [ $? -eq 0 ]; then echo "pass"; else echo "FAILED"; exit 1; fi
+
+echo -n "Running bridge VLAN test...       "
+./bridge-vlan
+if [ $? -eq 0 ]; then echo "pass"; else echo "FAILED"; exit 1; fi
+
+echo -n "Running bridge VLAN STP test...   "
+./bridge-vlan-stp
+if [ $? -eq 0 ]; then echo "pass"; else echo "FAILED"; exit 1; fi
diff --git a/hw/net/rocker/test/bridge b/hw/net/rocker/test/bridge
new file mode 100755
index 0000000..f1f555a
--- /dev/null
+++ b/hw/net/rocker/test/bridge
@@ -0,0 +1,43 @@
+simp destroy ".*"
+simp create -o sw1:rocker:sw1 tut tut.dot
+simp start tut
+sleep 10
+while ! simp ssh tut sw1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h2 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+
+# configure a 2-port bridge
+
+simp ssh tut sw1 --cmd "sudo /sbin/ip link add name br0 type bridge"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp1 master br0"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp2 master br0"
+
+# turn off vlan default_pvid on br0
+
+simp ssh tut sw1 --cmd "echo 0 | sudo dd of=/sys/class/net/br0/bridge/default_pvid 2> /dev/null"
+
+# turn off learning and flooding in SW
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 learning off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 learning off"
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 flood off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 flood off"
+
+# bring up bridge and ports
+
+simp ssh tut sw1 --cmd "sudo ifconfig br0 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp1 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp2 up"
+simp ssh tut sw1 --cmd "sudo ifconfig br0 11.0.0.3/24"
+
+# config IP on hosts
+
+simp ssh tut h1 --cmd "sudo ifconfig swp1 11.0.0.1/24"
+simp ssh tut h2 --cmd "sudo ifconfig swp1 11.0.0.2/24"
+
+# test...
+
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+if [ $? -ne 0 ]; then exit 1; fi
+simp ssh tut h1 --cmd "ping -c10 11.0.0.3 >/dev/null"
diff --git a/hw/net/rocker/test/bridge-stp b/hw/net/rocker/test/bridge-stp
new file mode 100755
index 0000000..60f4483
--- /dev/null
+++ b/hw/net/rocker/test/bridge-stp
@@ -0,0 +1,52 @@
+simp destroy ".*"
+simp create -o sw1:rocker:sw1 tut tut.dot
+simp start tut
+sleep 10
+while ! simp ssh tut sw1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h2 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+
+# configure a 2-port bridge
+
+simp ssh tut sw1 --cmd "sudo /sbin/ip link add name br0 type bridge"
+simp ssh tut sw1 --cmd "sudo brctl stp br0 on"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp1 master br0"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp2 master br0"
+
+# turn off vlan default_pvid on br0
+
+simp ssh tut sw1 --cmd "echo 0 | sudo dd of=/sys/class/net/br0/bridge/default_pvid 2> /dev/null"
+
+# turn off learning and flooding in SW
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 learning off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 learning off"
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 flood off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 flood off"
+
+# config IP on hosts
+
+simp ssh tut h1 --cmd "sudo ifconfig swp1 11.0.0.1/24"
+simp ssh tut h2 --cmd "sudo ifconfig swp1 11.0.0.2/24"
+
+# bring up bridge and ports
+
+simp ssh tut sw1 --cmd "sudo ifconfig br0 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp1 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp2 up"
+
+# test...
+
+simp ssh tut h1 --cmd "ping -w 1 -c1 11.0.0.2 >/dev/null"
+if [ $? -eq 0 ]; then exit 1; fi
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
diff --git a/hw/net/rocker/test/bridge-vlan b/hw/net/rocker/test/bridge-vlan
new file mode 100755
index 0000000..faef827
--- /dev/null
+++ b/hw/net/rocker/test/bridge-vlan
@@ -0,0 +1,52 @@
+simp destroy ".*"
+simp create -o sw1:rocker:sw1 tut tut.dot
+simp start tut
+sleep 10
+while ! simp ssh tut sw1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h2 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+
+# configure a 2-port bridge
+
+simp ssh tut sw1 --cmd "sudo /sbin/ip link add name br0 type bridge"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp1 master br0"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp2 master br0"
+
+# turn off vlan default_pvid on br0
+# turn on vlan filtering on br0
+
+simp ssh tut sw1 --cmd "echo 0 | sudo dd of=/sys/class/net/br0/bridge/default_pvid 2> /dev/null"
+simp ssh tut sw1 --cmd "echo 1 | sudo dd of=/sys/class/net/br0/bridge/vlan_filtering 2> /dev/null"
+
+# add both ports to VLAN 57
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge vlan add vid 57 dev swp1 master"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge vlan add vid 57 dev swp2 master"
+
+# turn off learning and flooding in SW
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 learning off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 learning off"
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 flood off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 flood off"
+
+# bring up bridge and ports
+
+simp ssh tut sw1 --cmd "sudo ifconfig br0 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp1 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp2 up"
+
+# config IP on host VLANs
+
+simp ssh tut h1 --cmd "sudo vconfig add swp1 57 >/dev/null 2>&1"
+simp ssh tut h1 --cmd "sudo ifconfig swp1 up"
+simp ssh tut h1 --cmd "sudo ifconfig swp1.57 11.0.0.1/24"
+
+simp ssh tut h2 --cmd "sudo vconfig add swp1 57 >/dev/null 2>&1"
+simp ssh tut h2 --cmd "sudo ifconfig swp1 up"
+simp ssh tut h2 --cmd "sudo ifconfig swp1.57 11.0.0.2/24"
+
+# test...
+
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
diff --git a/hw/net/rocker/test/bridge-vlan-stp b/hw/net/rocker/test/bridge-vlan-stp
new file mode 100755
index 0000000..2e4e334
--- /dev/null
+++ b/hw/net/rocker/test/bridge-vlan-stp
@@ -0,0 +1,64 @@
+simp destroy ".*"
+simp create -o sw1:rocker:sw1 tut tut.dot
+simp start tut
+sleep 10
+while ! simp ssh tut sw1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h2 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+
+# configure a 2-port bridge
+
+simp ssh tut sw1 --cmd "sudo /sbin/ip link add name br0 type bridge"
+simp ssh tut sw1 --cmd "sudo brctl stp br0 on"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp1 master br0"
+simp ssh tut sw1 --cmd "sudo /sbin/ip link set dev swp2 master br0"
+
+# turn off vlan default_pvid on br0
+# turn on vlan filtering on br0
+
+simp ssh tut sw1 --cmd "echo 0 | sudo dd of=/sys/class/net/br0/bridge/default_pvid 2> /dev/null"
+simp ssh tut sw1 --cmd "echo 1 | sudo dd of=/sys/class/net/br0/bridge/vlan_filtering 2> /dev/null"
+
+# add both ports to VLAN 57
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge vlan add vid 57 dev swp1 master"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge vlan add vid 57 dev swp2 master"
+
+# turn off learning and flooding in SW
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 learning off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 learning off"
+
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp1 flood off"
+simp ssh tut sw1 --cmd "sudo /sbin/bridge link set dev swp2 flood off"
+
+# config IP on host VLANs
+
+simp ssh tut h1 --cmd "sudo vconfig add swp1 57 >/dev/null 2>&1"
+simp ssh tut h1 --cmd "sudo ifconfig swp1 up"
+simp ssh tut h1 --cmd "sudo ifconfig swp1.57 11.0.0.1/24"
+
+simp ssh tut h2 --cmd "sudo vconfig add swp1 57 >/dev/null 2>&1"
+simp ssh tut h2 --cmd "sudo ifconfig swp1 up"
+simp ssh tut h2 --cmd "sudo ifconfig swp1.57 11.0.0.2/24"
+
+# bring up bridge and ports
+
+simp ssh tut sw1 --cmd "sudo ifconfig br0 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp1 up"
+simp ssh tut sw1 --cmd "sudo ifconfig swp2 up"
+
+# test...
+
+simp ssh tut h1 --cmd "ping -w 1 -c1 11.0.0.2 >/dev/null"
+if [ $? -eq 0 ]; then exit 1; fi
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
+sleep 10
+simp ssh tut h1 --cmd "ping -c10 11.0.0.2 >/dev/null"
diff --git a/hw/net/rocker/test/port b/hw/net/rocker/test/port
new file mode 100755
index 0000000..3437f7d
--- /dev/null
+++ b/hw/net/rocker/test/port
@@ -0,0 +1,22 @@
+simp destroy ".*"
+simp create -o sw1:rocker:sw1 tut tut.dot
+simp start tut
+while ! simp ssh tut sw1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h1 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+while ! simp ssh tut h2 --cmd "ping -c 1 localhost >/dev/null"; do sleep 1; done
+
+# bring up DUT ports
+
+simp ssh tut sw1 --cmd "sudo ifconfig swp1 11.0.0.1/24"
+simp ssh tut sw1 --cmd "sudo ifconfig swp2 12.0.0.1/24"
+
+# config IP on hosts
+
+simp ssh tut h1 --cmd "sudo ifconfig swp1 11.0.0.2/24"
+simp ssh tut h2 --cmd "sudo ifconfig swp1 12.0.0.2/24"
+
+# test...
+
+simp ssh tut h1 --cmd "ping -c10 11.0.0.1 >/dev/null"
+if [ $? -ne 0 ]; then exit 1; fi
+simp ssh tut h2 --cmd "ping -c10 12.0.0.1 >/dev/null"
diff --git a/hw/net/rocker/test/tut.dot b/hw/net/rocker/test/tut.dot
new file mode 100644
index 0000000..87f7266
--- /dev/null
+++ b/hw/net/rocker/test/tut.dot
@@ -0,0 +1,8 @@
+graph G {
+	graph [hostidtype="hostname", version="1:0", date="04/12/2013"];
+	edge [dir=none, notify="log"];
+	sw1:swp1 -- h1:swp1;
+	sw1:swp2 -- h2:swp1;
+	sw1:swp3 -- h3:swp1;
+	sw1:swp4 -- h4:swp1;
+}
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread
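[Editorial aside] Each test script above polls its VMs with `while ! simp ssh tut <host> --cmd "ping -c 1 localhost" ...; do sleep 1; done` until they answer. For readers adapting the tests, the same readiness idiom in a reusable form — a sketch with a generic argv list, since `simp` itself is the external dependency:

```python
import subprocess
import time

def wait_ready(cmd, timeout=60):
    """Retry cmd (an argv list) once per second until it exits 0.

    Mirrors the wait loops the rocker test scripts use to let each
    simp VM finish booting before configuration begins.  Returns True
    on success, False if timeout seconds elapse first.
    """
    deadline = time.time() + timeout
    while True:
        rc = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode
        if rc == 0:
            return True
        if time.time() >= deadline:
            return False
        time.sleep(1)
```

A bounded timeout (unlike the scripts' unbounded loops) keeps a broken VM image from hanging a CI run indefinitely.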

* [Qemu-devel] [PATCH v2 10/10] MAINTAINERS: add rocker
  2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
                   ` (9 preceding siblings ...)
  2015-01-06  2:25 ` [Qemu-devel] [PATCH v2 09/10] rocker: add tests sfeldma
@ 2015-01-06  2:25 ` sfeldma
  10 siblings, 0 replies; 23+ messages in thread
From: sfeldma @ 2015-01-06  2:25 UTC (permalink / raw)
  To: qemu-devel, jiri, roopa, john.fastabend, eblake

From: Scott Feldman <sfeldma@gmail.com>

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
---
 MAINTAINERS |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 01cfb05..287b147 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -730,6 +730,12 @@ S: Maintained
 F: hw/net/vmxnet*
 F: hw/scsi/vmw_pvscsi*
 
+Rocker
+M: Scott Feldman <sfeldma@gmail.com>
+M: Jiri Pirko <jiri@resnulli.us>
+S: Maintained
+F: hw/net/rocker/
+
 Subsystems
 ----------
 Audio
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v2 04/10] rocker: add register programming guide
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 04/10] rocker: add register programming guide sfeldma
@ 2015-01-06  5:16   ` Jason Wang
  2015-01-06  7:50     ` Scott Feldman
  0 siblings, 1 reply; 23+ messages in thread
From: Jason Wang @ 2015-01-06  5:16 UTC (permalink / raw)
  To: sfeldma, qemu-devel, jiri, roopa, john.fastabend, eblake


On 01/06/2015 10:24 AM, sfeldma@gmail.com wrote:
> From: Scott Feldman <sfeldma@gmail.com>
>
> This is the register programming guide for the Rocker device.  It's intended
> for driver writers and device writers.  It covers the device's PCI space,
> the register set, DMA interface, and interrupts.
>
> Signed-off-by: Scott Feldman <sfeldma@gmail.com>
> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
> ---
>  hw/net/rocker/reg_guide.txt |  961 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 961 insertions(+)
>  create mode 100644 hw/net/rocker/reg_guide.txt

Looks like docs/specs is a better place for this. And do you want to
keep this spec officially in qemu.git? I'm asking since if there's
another place to keep the official version, you may do a lot of syncs in
the future.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v2 04/10] rocker: add register programming guide
  2015-01-06  5:16   ` Jason Wang
@ 2015-01-06  7:50     ` Scott Feldman
  0 siblings, 0 replies; 23+ messages in thread
From: Scott Feldman @ 2015-01-06  7:50 UTC (permalink / raw)
  To: Jason Wang
  Cc: john fastabend, Roopa Prabhu, Jiří Pírko, qemu-devel

I would like to keep it with qemu.git, if possible.  Since the device
doesn't really exist in the real world, and there is no particular
company behind it, there isn't really a place for the spec to live
other than beside the implementation.  docs/specs is fine by me.  I
had a test/ directory in there also; not sure where these things live
for device tests in qemu.

I also struggled with the QMP/HMP code as there really aren't examples
of individual devices with qmp/hmp interfaces.  Although it sure is
handy for dumping device state and otherwise having a back-door into
device.

On Mon, Jan 5, 2015 at 9:16 PM, Jason Wang <jasowang@redhat.com> wrote:
>
> On 01/06/2015 10:24 AM, sfeldma@gmail.com wrote:
>> From: Scott Feldman <sfeldma@gmail.com>
>>
>> This is the register programming guide for the Rocker device.  It's intended
>> for driver writers and device writers.  It covers the device's PCI space,
>> the register set, DMA interface, and interrupts.
>>
>> Signed-off-by: Scott Feldman <sfeldma@gmail.com>
>> Signed-off-by: Jiri Pirko <jiri@resnulli.us>
>> ---
>>  hw/net/rocker/reg_guide.txt |  961 +++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 961 insertions(+)
>>  create mode 100644 hw/net/rocker/reg_guide.txt
>
> Looks like docs/specs is a better place for this. And do you want to
> keep this spec officially in qemu.git? I'm asking since if there's
> another place to keep the official version, you may do a lot of syncs in
> the future.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker sfeldma
@ 2015-01-06  9:52   ` Peter Maydell
  2015-01-07  7:47     ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Maydell @ 2015-01-06  9:52 UTC (permalink / raw)
  To: Scott Feldman
  Cc: Jiří Pírko, roopa, john fastabend,
	QEMU Developers, Paolo Bonzini

On 6 January 2015 at 02:24,  <sfeldma@gmail.com> wrote:
> From: Scott Feldman <sfeldma@gmail.com>
>
> The rocker device uses the same PCI device ID as sdhci.  Since the rocker
> device driver has already been accepted into Linux 3.18, and the REDHAT_SDHCI
> device ID isn't used by any drivers, it's safe to move the REDHAT_SDHCI
> device ID, avoiding a conflict with rocker.

Same remarks apply as I made on v1 of this patch -- I don't want
to take this series until we have an answer for who's the
authoritative source for handing out IDs in this space and
why we ended up with this conflict.

thanks
-- PMM


* Re: [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device sfeldma
@ 2015-01-06 15:12   ` Stefan Hajnoczi
  2015-01-06 16:45     ` Scott Feldman
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2015-01-06 15:12 UTC (permalink / raw)
  To: sfeldma; +Cc: john.fastabend, roopa, jiri, qemu-devel


On Mon, Jan 05, 2015 at 06:24:58PM -0800, sfeldma@gmail.com wrote:
> From: Scott Feldman <sfeldma@gmail.com>
> 
> Rocker is a simulated ethernet switch device.  The device supports up to 62
> front-panel ports and supports L2 switching and L3 routing functions, as well
> as L2/L3/L4 ACLs.  The device presents a single PCI device for each switch,
> with a memory-mapped register space for device driver access.
> 
> The rocker device is created with -device; for example, a 4-port switch:
> 
>   -device rocker,name=sw1,len-ports=4,ports[0]=dev0,ports[1]=dev1, \
>          ports[2]=dev2,ports[3]=dev3
> 
> Each port is a netdev and is paired with a backend using -netdev id=<port name>.
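[Editorial note: the quoted example can be expanded into a fuller, hypothetical command line; the tap interface names and disk image below are placeholders, and each -netdev id matches the corresponding ports[N] property:]

```shell
# Hypothetical full invocation of a 4-port rocker switch; tap ifnames
# and the disk image are placeholders, not taken from the patch.
qemu-system-x86_64 disk.img \
    -netdev tap,id=dev0,ifname=rocker0 \
    -netdev tap,id=dev1,ifname=rocker1 \
    -netdev tap,id=dev2,ifname=rocker2 \
    -netdev tap,id=dev3,ifname=rocker3 \
    -device rocker,name=sw1,len-ports=4,ports[0]=dev0,ports[1]=dev1,ports[2]=dev2,ports[3]=dev3
```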

This design looks good, it fits the QEMU network subsystem.

Please follow QEMU coding style, for example, using typedefs for structs
instead of "struct tag".  Details are in ./HACKING, ./CODING_STYLE, and
you can scan patches with QEMU's scripts/checkpatch.pl.



* Re: [Qemu-devel] [PATCH v2 08/10] qmp: add rocker device support
  2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 08/10] qmp: add rocker device support sfeldma
@ 2015-01-06 15:19   ` Stefan Hajnoczi
  0 siblings, 0 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2015-01-06 15:19 UTC (permalink / raw)
  To: sfeldma; +Cc: john.fastabend, roopa, jiri, qemu-devel


On Mon, Jan 05, 2015 at 06:24:59PM -0800, sfeldma@gmail.com wrote:
> From: Scott Feldman <sfeldma@gmail.com>
> 
> Add QMP/HMP support for rocker devices.  This is mostly for debugging purposes
> to see inside the device's tables and port configurations.  Some examples:
> 
> (qemu) rocker sw1
> name: sw1
> id: 0x0000013512005452
> ports: 4

The convention is for HMP commands that show information to be "info"
sub-commands.  So this example would be "info rocker sw1".  See
monitor.c:info_cmds[].

The convention for QMP is to call the command "query-rocker".
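[Editorial note: on the wire, the naming convention described above would look roughly like this; the argument name and every field in the response are illustrative guesses modeled on the HMP output quoted above, not taken from the actual schema:]

```json
{ "execute": "query-rocker", "arguments": { "name": "sw1" } }

{ "return": { "name": "sw1", "id": "0x0000013512005452", "ports": 4 } }
```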

> 
> (qemu) rocker-ports sw1
>             ena/    speed/ auto
>       port  link    duplex neg?
>      sw1.1  up     10G  FD  No
>      sw1.2  up     10G  FD  No
>      sw1.3  !ena   10G  FD  No
>      sw1.4  !ena   10G  FD  No

HMP: info rocker-ports sw1
QMP: query-rocker-ports

and so on.

> +##
> +# @RockerOfDpaFlow:
> +#
> +# Rocker switch OF-DPA flow
> +#
> +# @cookie: flow unique cookie ID
> +#
> +# @hits: count of matches (hits) on flow
> +#
> +# @key: flow key
> +#
> +# @mask: flow mask
> +#
> +# @action: flow action
> +##

For versioning, the schema should mention which QEMU release a command
was added in:

 # Since: 2.3

That way QAPI consumers can plan accordingly and be aware when older
QEMU binaries may not implement a command.
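[Editorial note: applied to the quoted schema comment, the annotation slots in at the end, roughly like this — "2.3" is assumed here to match Stefan's example, and the release the series actually lands in may differ:]

```
##
# @RockerOfDpaFlow:
#
# Rocker switch OF-DPA flow
#
# @cookie: flow unique cookie ID
#
# @hits: count of matches (hits) on flow
#
# @key: flow key
#
# @mask: flow mask
#
# @action: flow action
#
# Since: 2.3
##
```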



* Re: [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device
  2015-01-06 15:12   ` Stefan Hajnoczi
@ 2015-01-06 16:45     ` Scott Feldman
  2015-01-07 12:55       ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Scott Feldman @ 2015-01-06 16:45 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: john fastabend, Roopa Prabhu, Jiří Pírko, qemu-devel

On Tue, Jan 6, 2015 at 7:12 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Mon, Jan 05, 2015 at 06:24:58PM -0800, sfeldma@gmail.com wrote:
>> From: Scott Feldman <sfeldma@gmail.com>
>>
>> Rocker is a simulated ethernet switch device.  The device supports up to 62
>> front-panel ports and supports L2 switching and L3 routing functions, as well
>> as L2/L3/L4 ACLs.  The device presents a single PCI device for each switch,
>> with a memory-mapped register space for device driver access.
>>
>> The rocker device is created with -device; for example, a 4-port switch:
>>
>>   -device rocker,name=sw1,len-ports=4,ports[0]=dev0,ports[1]=dev1, \
>>          ports[2]=dev2,ports[3]=dev3
>>
>> Each port is a netdev and is paired with a backend using -netdev id=<port name>.
>
> This design looks good, it fits the QEMU network subsystem.
>
> Please follow QEMU coding style, for example, using typedefs for structs
> instead of "struct tag".  Details are in ./HACKING, ./CODING_STYLE, and
> you can scan patches with QEMU's scripts/checkpatch.pl.

The patches are already scripts/checkpatch.pl clean.

And we did follow the HACKING and CODING_STYLE guidelines, with the
exception of typedefs for structs.  Did you spot anything else
out-of-compliance?

On typedefs for structs, there are plenty of examples in QEMU of not
following that rule.  Perhaps this rule could be enforced by
checkpatch.pl?  Personally, I feel that a typedef on a struct makes the
code harder to read, as it obfuscates the type.  For example, with
typedef union {...} foo, typedef struct {...} foo, or typedef enum {...}
foo, there is no way to tell at a glance what bar is in "foo bar",
whereas "union foo bar", "struct foo bar", or "enum foo bar" is clear.
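[Editorial note: the two styles being debated can be sketched in a few lines of C; the type and field names here are invented for illustration and are not from the rocker sources:]

```c
/* A minimal sketch of the two declaration styles under discussion. */
#include <assert.h>

/* QEMU coding style: a CamelCase typedef, so use sites drop the
 * struct keyword entirely. */
typedef struct RockerPort {
    int number;
    int link_up;
} RockerPort;

/* The style the rocker patches used: a bare struct tag, so every use
 * site spells out that the object is a struct. */
struct rocker_port {
    int number;
    int link_up;
};

int port_is_up(RockerPort *p)
{
    return p->link_up;          /* nothing here says "struct" */
}

int tag_port_is_up(struct rocker_port *p)
{
    return p->link_up;          /* the aggregate kind stays visible */
}
```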

We can make the typedef change for v3 if it's a hard requirement for inclusion.

-scott


* Re: [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker
  2015-01-06  9:52   ` Peter Maydell
@ 2015-01-07  7:47     ` Paolo Bonzini
  2015-01-07 10:39       ` Peter Maydell
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-01-07  7:47 UTC (permalink / raw)
  To: Peter Maydell, Scott Feldman
  Cc: john fastabend, roopa, Jiří Pírko, QEMU Developers



On 06/01/2015 10:52, Peter Maydell wrote:
> On 6 January 2015 at 02:24,  <sfeldma@gmail.com> wrote:
>> From: Scott Feldman <sfeldma@gmail.com>
>>
>> The rocker device uses the same PCI device ID as sdhci.  Since the rocker
>> device driver has already been accepted into Linux 3.18, and the REDHAT_SDHCI
>> device ID isn't used by any drivers, it's safe to move the REDHAT_SDHCI
>> device ID, avoiding a conflict with rocker.
> 
> Same remarks apply as I made on v1 of this patch -- I don't want
> to take this series until we have an answer for who's the
> authoritative source for handing out IDs in this space and
> why we ended up with this conflict.

Within the virt team, we have always considered the authoritative source
to be qemu.git and Gerd to be the maintainer.  Jiri is a Red Hatter but
not in the virt team, hence the confusion.

Paolo


* Re: [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker
  2015-01-07  7:47     ` Paolo Bonzini
@ 2015-01-07 10:39       ` Peter Maydell
  2015-01-07 10:55         ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Maydell @ 2015-01-07 10:39 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jiří Pírko, roopa, john fastabend,
	QEMU Developers, Scott Feldman

On 7 January 2015 at 07:47, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 06/01/2015 10:52, Peter Maydell wrote:
>> On 6 January 2015 at 02:24,  <sfeldma@gmail.com> wrote:
>>> From: Scott Feldman <sfeldma@gmail.com>
>>>
>>> The rocker device uses the same PCI device ID as sdhci.  Since the rocker
>>> device driver has already been accepted into Linux 3.18, and the REDHAT_SDHCI
>>> device ID isn't used by any drivers, it's safe to move the REDHAT_SDHCI
>>> device ID, avoiding a conflict with rocker.
>>
>> Same remarks apply as I made on v1 of this patch -- I don't want
>> to take this series until we have an answer for who's the
>> authoritative source for handing out IDs in this space and
>> why we ended up with this conflict.
>
> Within the virt team, we have always considered the authoritative source
> to be qemu.git and Gerd to be the maintainer.  Jiri is a Red Hatter but
> not in the virt team, hence the confusion.

OK, so do we:
 (1) say that the authoritative list says this ID is SDHCI,
 so rocker needs to renumber
 (2) as this patch suggests, renumber SDHCI as a one-off fixing
 of an error?

-- PMM


* Re: [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker
  2015-01-07 10:39       ` Peter Maydell
@ 2015-01-07 10:55         ` Paolo Bonzini
  2015-01-07 20:28           ` Scott Feldman
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2015-01-07 10:55 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Jiří Pírko, roopa, john fastabend,
	QEMU Developers, Scott Feldman



On 07/01/2015 11:39, Peter Maydell wrote:
>> > Within the virt team, we have always considered the authoritative source
>> > to be qemu.git and Gerd to be the maintainer.  Jiri is a Red Hatter but
>> > not in the virt team, hence the confusion.
> OK, so do we:
>  (1) say that the authoritative list says this ID is SDHCI,
>  so rocker needs to renumber
>  (2) as this patch suggests, renumber SDHCI as a one-off fixing
>  of an error?

I already sent a patch for (2) in a pull request.  SDHCI was never in a
released version, unlike rocker, which was in Linux 3.18, so I don't
think there's even a choice. :)

I'll send a patch to Linux saying that the authoritative list of device
IDs for 0x1b36 resides in qemu.git.

Paolo


* Re: [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device
  2015-01-06 16:45     ` Scott Feldman
@ 2015-01-07 12:55       ` Stefan Hajnoczi
  0 siblings, 0 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2015-01-07 12:55 UTC (permalink / raw)
  To: Scott Feldman
  Cc: john fastabend, Roopa Prabhu, Jiří Pírko, qemu-devel


On Tue, Jan 06, 2015 at 08:45:44AM -0800, Scott Feldman wrote:
> On Tue, Jan 6, 2015 at 7:12 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > On Mon, Jan 05, 2015 at 06:24:58PM -0800, sfeldma@gmail.com wrote:
> >> From: Scott Feldman <sfeldma@gmail.com>
> >>
> >> Rocker is a simulated ethernet switch device.  The device supports up to 62
> >> front-panel ports and supports L2 switching and L3 routing functions, as well
> >> as L2/L3/L4 ACLs.  The device presents a single PCI device for each switch,
> >> with a memory-mapped register space for device driver access.
> >>
> >> The rocker device is created with -device; for example, a 4-port switch:
> >>
> >>   -device rocker,name=sw1,len-ports=4,ports[0]=dev0,ports[1]=dev1, \
> >>          ports[2]=dev2,ports[3]=dev3
> >>
> >> Each port is a netdev and is paired with a backend using -netdev id=<port name>.
> >
> > This design looks good, it fits the QEMU network subsystem.
> >
> > Please follow QEMU coding style, for example, using typedefs for structs
> > instead of "struct tag".  Details are in ./HACKING, ./CODING_STYLE, and
> > you can scan patches with QEMU's scripts/checkpatch.pl.
> 
> The patches are already scripts/checkpatch.pl clean.
> 
> And we did follow the HACKING and CODING_STYLE guidelines, with the
> exception of typedefs for structs.  Did you spot anything else
> out-of-compliance?

No, just the lack of typedef struct caught my eye.

> On typedefs for structs, there are plenty of examples in QEMU of not
> following that rule.  Perhaps this rule could be enforced by
> checkpatch.pl?  Personally, I feel that a typedef on a struct makes the
> code harder to read, as it obfuscates the type.  For example, with
> typedef union {...} foo, typedef struct {...} foo, or typedef enum {...}
> foo, there is no way to tell at a glance what bar is in "foo bar",
> whereas "union foo bar", "struct foo bar", or "enum foo bar" is clear.

It seems checkpatch.pl doesn't enforce the rule.

There is old code that doesn't follow the coding standard, but new code
should.

> We can make the typedef change for v3 if it's a hard requirement for inclusion.

Thank you!



* Re: [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker
  2015-01-07 10:55         ` Paolo Bonzini
@ 2015-01-07 20:28           ` Scott Feldman
  0 siblings, 0 replies; 23+ messages in thread
From: Scott Feldman @ 2015-01-07 20:28 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Maydell, Jiří Pírko, Roopa Prabhu,
	john fastabend, QEMU Developers

On Wed, Jan 7, 2015 at 2:55 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 07/01/2015 11:39, Peter Maydell wrote:
>>> > Within the virt team, we have always considered the authoritative source
>>> > to be qemu.git and Gerd to be the maintainer.  Jiri is a Red Hatter but
>>> > not in the virt team, hence the confusion.
>> OK, so do we:
>>  (1) say that the authoritative list says this ID is SDHCI,
>>  so rocker needs to renumber
>>  (2) as this patch suggests, renumber SDHCI as a one-off fixing
>>  of an error?
>
> I already sent a patch for (2) in a pull request.  SDHCI was never in a
> released version, unlike rocker which was in Linux 3.18, so I don't
> think there's even a choice. :)
>
> I'll send a patch to Linux saying that the authoritative list of device
> IDs for 0x1b36 resides in qemu.git.

Thanks Paolo.  I'll resume the QEMU rocker submission process, assuming
the rocker ID stays where it is and SDHCI moves.

-scott


end of thread, other threads:[~2015-01-07 20:28 UTC | newest]

Thread overview: 23+ messages
2015-01-06  2:24 [Qemu-devel] [PATCH v2 00/10] rocker: add new rocker ethernet switch device sfeldma
2015-01-06  2:24 ` sfeldma
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 01/10] pci: move REDHAT_SDHCI device ID to make room for Rocker sfeldma
2015-01-06  9:52   ` Peter Maydell
2015-01-07  7:47     ` Paolo Bonzini
2015-01-07 10:39       ` Peter Maydell
2015-01-07 10:55         ` Paolo Bonzini
2015-01-07 20:28           ` Scott Feldman
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 02/10] net: add MAC address string printer sfeldma
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 03/10] virtio-net: use qemu_mac_strdup_printf sfeldma
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 04/10] rocker: add register programming guide sfeldma
2015-01-06  5:16   ` Jason Wang
2015-01-06  7:50     ` Scott Feldman
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 05/10] pci: add rocker device ID sfeldma
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 06/10] pci: add network device class 'other' for network switches sfeldma
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 07/10] rocker: add new rocker switch device sfeldma
2015-01-06 15:12   ` Stefan Hajnoczi
2015-01-06 16:45     ` Scott Feldman
2015-01-07 12:55       ` Stefan Hajnoczi
2015-01-06  2:24 ` [Qemu-devel] [PATCH v2 08/10] qmp: add rocker device support sfeldma
2015-01-06 15:19   ` Stefan Hajnoczi
2015-01-06  2:25 ` [Qemu-devel] [PATCH v2 09/10] rocker: add tests sfeldma
2015-01-06  2:25 ` [Qemu-devel] [PATCH v2 10/10] MAINTAINERS: add rocker sfeldma
