* [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver
@ 2019-03-24  3:23 Vladimir Oltean
  2019-03-24  3:23 ` [RFC PATCH net-next 01/13] lib: Add support for generic packing operations Vladimir Oltean
                   ` (14 more replies)
  0 siblings, 15 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

This patchset adds a DSA driver for the SPI-managed NXP SJA1105 switch.
Due to the hardware's unfriendliness, most of its state needs to be
shadowed in kernel memory by the driver. To support this and keep a
decent amount of cleanliness in the code, a new generic API for
converting between CPU-accessible ("unpacked") structures and
hardware-accessible ("packed") structures is proposed and used.

Then several small modifications are done to the DSA core, like changing
the order of two calls during initialization, or permitting driver
access to the dp->vlan_filtering property.

These small modifications are done for the greater goal of adding
support for 802.1Q pseudo-switch tagging. The limitations of this type
of tagging are discussed in the commit that adds it, and in the code
comments.

The SJA1105 driver then extends this 802.1Q switch tagging protocol
with its own tagger (tag_sja1105). This is done because the SJA1105
needs SPI intervention during transmission of link-local traffic,
which cannot be performed from the xmit handler and therefore requires
a deferred worker thread.

The driver is GPL-2.0 licensed. The source code files licensed as
BSD-3-Clause are hardware support files, derived from the userspace
NXP sja1105-tool program, which is itself BSD-3-Clause licensed.

TODO items:
* Add full support for the P/Q/R/S series. The patches were mostly
  tested on a first-generation T device.
* Add timestamping support and PTP clock manipulation.
* Figure out what the current state of tc-taprio hw offload is, and
  attempt to configure the switch's time-aware scheduler using that.

Vladimir Oltean (13):
  lib: Add support for generic packing operations
  net: dsa: Store vlan_filtering as a property of dsa_port
  net: dsa: Create a more convenient function for installing port VLANs
  net: dsa: Call driver's setup callback after setting up its switchdev
    notifier
  net: dsa: Optional VLAN-based port separation for switches without
    tagging
  net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch
  net: dsa: sja1105: Add support for FDB and MDB management
  net: dsa: sja1105: Add support for VLAN operations
  net: dsa: sja1105: Add support for ethtool port counters
  net: dsa: sja1105: Add support for traffic through standalone ports
  net: dsa: sja1105: Add support for Spanning Tree Protocol
  Documentation: networking: dsa: Add details about NXP SJA1105 driver
  dt-bindings: net: dsa: Add documentation for NXP SJA1105 driver

 .../devicetree/bindings/net/dsa/sja1105.txt   |  123 ++
 Documentation/networking/dsa/sja1105.txt      |   83 +
 Documentation/packing.txt                     |  150 ++
 MAINTAINERS                                   |   14 +
 drivers/net/dsa/Kconfig                       |    2 +
 drivers/net/dsa/Makefile                      |    1 +
 drivers/net/dsa/sja1105/Kconfig               |   17 +
 drivers/net/dsa/sja1105/Makefile              |   10 +
 drivers/net/dsa/sja1105/sja1105.h             |  148 ++
 drivers/net/dsa/sja1105/sja1105_clocking.c    |  677 ++++++
 .../net/dsa/sja1105/sja1105_dynamic_config.c  |  607 ++++++
 .../net/dsa/sja1105/sja1105_dynamic_config.h  |   40 +
 drivers/net/dsa/sja1105/sja1105_ethtool.c     |  420 ++++
 drivers/net/dsa/sja1105/sja1105_main.c        | 1580 ++++++++++++++
 drivers/net/dsa/sja1105/sja1105_spi.c         |  667 ++++++
 .../net/dsa/sja1105/sja1105_static_config.c   | 1810 +++++++++++++++++
 .../net/dsa/sja1105/sja1105_static_config.h   |  500 +++++
 include/linux/dsa/sja1105.h                   |   52 +
 include/linux/packing.h                       |   49 +
 include/net/dsa.h                             |    6 +
 lib/Makefile                                  |    2 +-
 lib/packing.c                                 |  211 ++
 net/dsa/Kconfig                               |   12 +
 net/dsa/Makefile                              |    2 +
 net/dsa/dsa.c                                 |    6 +
 net/dsa/dsa2.c                                |    8 +-
 net/dsa/dsa_priv.h                            |   15 +
 net/dsa/port.c                                |   36 +-
 net/dsa/slave.c                               |   16 +-
 net/dsa/tag_8021q.c                           |  185 ++
 net/dsa/tag_sja1105.c                         |  142 ++
 31 files changed, 7568 insertions(+), 23 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/net/dsa/sja1105.txt
 create mode 100644 Documentation/networking/dsa/sja1105.txt
 create mode 100644 Documentation/packing.txt
 create mode 100644 drivers/net/dsa/sja1105/Kconfig
 create mode 100644 drivers/net/dsa/sja1105/Makefile
 create mode 100644 drivers/net/dsa/sja1105/sja1105.h
 create mode 100644 drivers/net/dsa/sja1105/sja1105_clocking.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_dynamic_config.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_dynamic_config.h
 create mode 100644 drivers/net/dsa/sja1105/sja1105_ethtool.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_main.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_spi.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_static_config.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_static_config.h
 create mode 100644 include/linux/dsa/sja1105.h
 create mode 100644 include/linux/packing.h
 create mode 100644 lib/packing.c
 create mode 100644 net/dsa/tag_8021q.c
 create mode 100644 net/dsa/tag_sja1105.c

-- 
2.17.1


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 01/13] lib: Add support for generic packing operations
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-24 19:02   ` Richard Cochran
  2019-03-24  3:23 ` [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port Vladimir Oltean
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

This provides a unified API for accessing register bit fields
regardless of memory layout. The basic unit of data for these API
functions is the u64. The process of transforming a u64 from native CPU
encoding into the peripheral's encoding is called 'pack', and
transforming it from the peripheral's encoding back to native CPU
encoding is 'unpack'.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 Documentation/packing.txt | 150 +++++++++++++++++++++++++++
 MAINTAINERS               |   8 ++
 include/linux/packing.h   |  49 +++++++++
 lib/Makefile              |   2 +-
 lib/packing.c             | 211 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 419 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/packing.txt
 create mode 100644 include/linux/packing.h
 create mode 100644 lib/packing.c

diff --git a/Documentation/packing.txt b/Documentation/packing.txt
new file mode 100644
index 000000000000..32eba9d23611
--- /dev/null
+++ b/Documentation/packing.txt
@@ -0,0 +1,150 @@
+=============================================
+Generic field packing and unpacking functions
+=============================================
+
+Problem statement
+-----------------
+
+When working with hardware, one has to choose between several approaches of
+interfacing with it.
+One can memory-map a pointer to a carefully crafted struct over the hardware
+device's memory region, and access its fields as struct members (potentially
+declared as bit fields). But writing code this way would make it less portable,
+due to potential endianness mismatches between the CPU and the hardware device.
+Additionally, one has to pay close attention when translating register
+definitions from the hardware documentation into bit field indices for the
+structs. Also, some hardware (typically networking equipment) tends to group
+its register fields in ways that violate any reasonable word boundaries
+(sometimes even 64 bit ones). This creates the inconvenience of having to
+define "high" and "low" portions of register fields within the struct.
+A more robust alternative to struct field definitions would be to extract the
+required fields by shifting the appropriate number of bits. But this would
+still not protect from endianness mismatches, except if all memory accesses
+were performed byte-by-byte. Also the code can easily get cluttered, and the
+high-level idea might get lost among the many bit shifts required.
+Many drivers take the bit-shifting approach and then attempt to reduce the
+clutter with tailored macros, but more often than not these macros take
+shortcuts that still prevent the code from being truly portable.
+
+The solution
+------------
+
+This API deals with 2 basic operations:
+  - Packing a CPU-usable number into a memory buffer (with hardware
+    constraints/quirks)
+  - Unpacking a memory buffer (which has hardware constraints/quirks)
+    into a CPU-usable number.
+
+The API offers an abstraction over said hardware constraints and quirks,
+and over CPU endianness, and therefore insulates the caller from any
+possible mismatches between the two.
+
+The basic unit of these API functions is the u64. From the CPU's
+perspective, bit 63 always means bit offset 7 of byte 7, albeit only
+logically. The question is: where do we lay this bit out in memory?
+
+The following examples cover the memory layout of a packed u64 field.
+The byte offsets in the packed buffer are always implicitly 0, 1, ... 7.
+What the examples show is where the logical bytes and bits sit.
+
+1. Normally (no quirks), we would do it like this:
+
+63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32
+7                       6                       5                        4
+31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
+3                       2                       1                        0
+
+That is, the MSByte (7) of the CPU-usable u64 sits at memory offset 0, and the
+LSByte (0) of the u64 sits at memory offset 7.
+This corresponds to what most folks would regard as "big endian", where
+bit i corresponds to the number 2^i. This is also referred to in the code
+comments as "logical" notation.
+
+
+2. If QUIRK_MSB_ON_THE_RIGHT is set, we do it like this:
+
+56 57 58 59 60 61 62 63 48 49 50 51 52 53 54 55 40 41 42 43 44 45 46 47 32 33 34 35 36 37 38 39
+7                       6                        5                       4
+24 25 26 27 28 29 30 31 16 17 18 19 20 21 22 23  8  9 10 11 12 13 14 15  0  1  2  3  4  5  6  7
+3                       2                        1                       0
+
+That is, QUIRK_MSB_ON_THE_RIGHT does not affect byte positioning, but
+inverts bit offsets inside a byte.
+
+
+3. If QUIRK_LITTLE_ENDIAN is set, we do it like this:
+
+39 38 37 36 35 34 33 32 47 46 45 44 43 42 41 40 55 54 53 52 51 50 49 48 63 62 61 60 59 58 57 56
+4                       5                       6                       7
+7  6  5  4  3  2  1  0  15 14 13 12 11 10  9  8 23 22 21 20 19 18 17 16 31 30 29 28 27 26 25 24
+0                       1                       2                       3
+
+Therefore, QUIRK_LITTLE_ENDIAN means that inside the memory region, every
+byte from each 4-byte word is placed at its mirrored position compared to
+the boundary of that word.
+
+4. If QUIRK_MSB_ON_THE_RIGHT and QUIRK_LITTLE_ENDIAN are both set, we do it
+   like this:
+
+32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
+4                       5                       6                       7
+0  1  2  3  4  5  6  7  8   9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
+0                       1                       2                       3
+
+
+5. If just QUIRK_LSW32_IS_FIRST is set, we do it like this:
+
+31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
+3                       2                       1                        0
+63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32
+7                       6                       5                        4
+
+In this case the 8 byte memory region is interpreted as follows: first
+4 bytes correspond to the least significant 4-byte word, next 4 bytes to
+the more significant 4-byte word.
+
+
+6. If QUIRK_LSW32_IS_FIRST and QUIRK_MSB_ON_THE_RIGHT are set, we do it like
+   this:
+
+24 25 26 27 28 29 30 31 16 17 18 19 20 21 22 23  8  9 10 11 12 13 14 15  0  1  2  3  4  5  6  7
+3                       2                        1                       0
+56 57 58 59 60 61 62 63 48 49 50 51 52 53 54 55 40 41 42 43 44 45 46 47 32 33 34 35 36 37 38 39
+7                       6                        5                       4
+
+
+7. If QUIRK_LSW32_IS_FIRST and QUIRK_LITTLE_ENDIAN are set, it looks like
+   this:
+
+7  6  5  4  3  2  1  0  15 14 13 12 11 10  9  8 23 22 21 20 19 18 17 16 31 30 29 28 27 26 25 24
+0                       1                       2                       3
+39 38 37 36 35 34 33 32 47 46 45 44 43 42 41 40 55 54 53 52 51 50 49 48 63 62 61 60 59 58 57 56
+4                       5                       6                       7
+
+
+8. If QUIRK_LSW32_IS_FIRST, QUIRK_LITTLE_ENDIAN and QUIRK_MSB_ON_THE_RIGHT
+   are set, it looks like this:
+
+0  1  2  3  4  5  6  7  8   9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
+0                       1                       2                       3
+32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
+4                       5                       6                       7
+
+
+We always think of our offsets as if there were no quirk, and we translate
+them afterwards, before accessing the memory region.
+
+Intended use
+------------
+
+Drivers that opt to use this API first need to identify which combination of
+the above 3 quirks (8 possibilities in total) matches what the hardware
+documentation describes. Then they should wrap the packing() function in a
+new xxx_packing() that calls it with the proper QUIRK_* one-hot bits set.
+
+The packing() function returns an int-encoded error code, which protects the
+programmer against incorrect API use. The errors are not expected to occur
+at runtime, therefore it is reasonable for xxx_packing() to return void
+and simply swallow those errors. Optionally it can dump stack or print the
+error description.
+
diff --git a/MAINTAINERS b/MAINTAINERS
index f8ff9ae52c21..89315bb1cb83 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11670,6 +11670,14 @@ L:	linux-i2c@vger.kernel.org
 S:	Orphan
 F:	drivers/i2c/busses/i2c-pasemi.c
 
+PACKING
+M:	Vladimir Oltean <olteanv@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	lib/packing.c
+F:	include/linux/packing.h
+F:	Documentation/packing.txt
+
 PADATA PARALLEL EXECUTION MECHANISM
 M:	Steffen Klassert <steffen.klassert@secunet.com>
 L:	linux-crypto@vger.kernel.org
diff --git a/include/linux/packing.h b/include/linux/packing.h
new file mode 100644
index 000000000000..cc646e4f5df1
--- /dev/null
+++ b/include/linux/packing.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#ifndef _LINUX_PACKING_H
+#define _LINUX_PACKING_H
+
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#define QUIRK_MSB_ON_THE_RIGHT BIT(0)
+#define QUIRK_LITTLE_ENDIAN    BIT(1)
+#define QUIRK_LSW32_IS_FIRST   BIT(2)
+
+enum packing_op {
+	PACK,
+	UNPACK,
+};
+
+/**
+ * packing - Convert numbers (currently u64) between a packed and an unpacked
+ *	     format. Unpacked means laid out in memory in the CPU's native
+ *	     understanding of integers, while packed means anything else that
+ *	     requires translation.
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @uval: Pointer to a u64 holding the unpacked value.
+ * @startbit: The index (in logical notation, compensated for quirks) where
+ *	      the packed value starts within pbuf. Must be larger than, or
+ *	      equal to, endbit.
+ * @endbit: The index (in logical notation, compensated for quirks) where
+ *	    the packed value ends within pbuf. Must be smaller than, or equal
+ *	    to, startbit.
+ * @op: If PACK, then uval will be treated as const pointer and copied (packed)
+ *	into pbuf, between startbit and endbit.
+ *	If UNPACK, then pbuf will be treated as const pointer and the logical
+ *	value between startbit and endbit will be copied (unpacked) to uval.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ *	    QUIRK_MSB_ON_THE_RIGHT.
+ *
+ * Return: 0 on success, -EINVAL or -ERANGE if called incorrectly. Assuming
+ *	   correct usage, return code may be discarded.
+ *	   If op is PACK, pbuf is modified.
+ *	   If op is UNPACK, uval is modified.
+ */
+int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen,
+	    enum packing_op op, u8 quirks);
+
+#endif
diff --git a/lib/Makefile b/lib/Makefile
index 4e066120a0d6..d5780689f7ef 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -37,7 +37,7 @@ obj-y += bcd.o div64.o sort.o parser.o debug_locks.o random32.o \
 	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
 	 gcd.o lcm.o list_sort.o uuid.o iov_iter.o clz_ctz.o \
 	 bsearch.o find_bit.o llist.o memweight.o kfifo.o \
-	 percpu-refcount.o rhashtable.o reciprocal_div.o \
+	 packing.o percpu-refcount.o rhashtable.o reciprocal_div.o \
 	 once.o refcount.o usercopy.o errseq.o bucket_locks.o \
 	 generic-radix-tree.o
 obj-$(CONFIG_STRING_SELFTEST) += test_string.o
diff --git a/lib/packing.c b/lib/packing.c
new file mode 100644
index 000000000000..2d0bfd78bfe9
--- /dev/null
+++ b/lib/packing.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include <linux/packing.h>
+#include <linux/module.h>
+#include <linux/bitops.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+
+static int get_le_offset(int offset)
+{
+	int closest_multiple_of_4;
+
+	closest_multiple_of_4 = (offset / 4) * 4;
+	offset -= closest_multiple_of_4;
+	return closest_multiple_of_4 + (3 - offset);
+}
+
+static int get_reverse_lsw32_offset(int offset, size_t len)
+{
+	int closest_multiple_of_4;
+	int word_index;
+
+	word_index = offset / 4;
+	closest_multiple_of_4 = word_index * 4;
+	offset -= closest_multiple_of_4;
+	word_index = (len / 4) - word_index - 1;
+	return word_index * 4 + offset;
+}
+
+static u64 bit_reverse(u64 val, unsigned int width)
+{
+	u64 new_val = 0;
+	unsigned int bit;
+	unsigned int i;
+
+	for (i = 0; i < width; i++) {
+		bit = (val & (1ull << i)) != 0;
+		new_val |= ((u64)bit << (width - i - 1));
+	}
+	return new_val;
+}
+
+static void adjust_for_msb_right_quirk(u64 *to_write, int *box_start_bit,
+				       int *box_end_bit, u8 *box_mask)
+{
+	int box_bit_width = *box_start_bit - *box_end_bit + 1;
+	int new_box_start_bit, new_box_end_bit;
+
+	*to_write >>= *box_end_bit;
+	*to_write = bit_reverse(*to_write, box_bit_width);
+	*to_write <<= *box_end_bit;
+
+	new_box_end_bit   = box_bit_width - *box_start_bit - 1;
+	new_box_start_bit = box_bit_width - *box_end_bit - 1;
+	*box_mask = GENMASK_ULL(new_box_start_bit, new_box_end_bit);
+	*box_start_bit = new_box_start_bit;
+	*box_end_bit   = new_box_end_bit;
+}
+
+/**
+ * packing - Convert numbers (currently u64) between a packed and an unpacked
+ *	     format. Unpacked means laid out in memory in the CPU's native
+ *	     understanding of integers, while packed means anything else that
+ *	     requires translation.
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @uval: Pointer to a u64 holding the unpacked value.
+ * @startbit: The index (in logical notation, compensated for quirks) where
+ *	      the packed value starts within pbuf. Must be larger than, or
+ *	      equal to, endbit.
+ * @endbit: The index (in logical notation, compensated for quirks) where
+ *	    the packed value ends within pbuf. Must be smaller than, or equal
+ *	    to, startbit.
+ * @op: If PACK, then uval will be treated as const pointer and copied (packed)
+ *	into pbuf, between startbit and endbit.
+ *	If UNPACK, then pbuf will be treated as const pointer and the logical
+ *	value between startbit and endbit will be copied (unpacked) to uval.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ *	    QUIRK_MSB_ON_THE_RIGHT.
+ *
+ * Return: 0 on success, -EINVAL or -ERANGE if called incorrectly. Assuming
+ *	   correct usage, return code may be discarded.
+ *	   If op is PACK, pbuf is modified.
+ *	   If op is UNPACK, uval is modified.
+ */
+int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen,
+	    enum packing_op op, u8 quirks)
+{
+	/* Number of bits for storing "uval"
+	 * also width of the field to access in the pbuf
+	 */
+	u64 value_width;
+	/* Logical byte indices corresponding to the
+	 * start and end of the field.
+	 */
+	int plogical_first_u8, plogical_last_u8, box;
+
+	/* startbit is expected to be larger than endbit */
+	if (startbit < endbit)
+		/* Invalid function call */
+		return -EINVAL;
+
+	value_width = startbit - endbit + 1;
+	if (value_width > 64)
+		return -ERANGE;
+
+	/* Check if "uval" fits in "value_width" bits.
+	 * If value_width is 64, the check will fail, but any
+	 * 64-bit uval will surely fit.
+	 */
+	if (op == PACK && value_width < 64 && (*uval >= (1ull << value_width)))
+		/* Cannot store "uval" inside "value_width" bits.
+		 * Truncating "uval" is most certainly not desirable,
+		 * so simply erroring out is appropriate.
+		 */
+		return -ERANGE;
+
+	/* Initialize parameter */
+	if (op == UNPACK)
+		*uval = 0;
+
+	/* Iterate through an idealized view of the pbuf as a u64 with
+	 * no quirks, u8 by u8 (aligned at u8 boundaries), from high to low
+	 * logical bit significance. "box" denotes the current logical u8.
+	 */
+	plogical_first_u8 = startbit / 8;
+	plogical_last_u8  = endbit / 8;
+
+	for (box = plogical_first_u8; box >= plogical_last_u8; box--) {
+		/* Bit indices into the currently accessed 8-bit box */
+		int box_start_bit, box_end_bit, box_addr;
+		u8  box_mask;
+		/* Corresponding bits from the unpacked u64 parameter */
+		int proj_start_bit, proj_end_bit;
+		u64 proj_mask;
+
+		/* This u8 may need to be accessed in its entirety
+		 * (from bit 7 to bit 0), or not, depending on the
+		 * input arguments startbit and endbit.
+		 */
+		if (box == plogical_first_u8)
+			box_start_bit = startbit % 8;
+		else
+			box_start_bit = 7;
+		if (box == plogical_last_u8)
+			box_end_bit = endbit % 8;
+		else
+			box_end_bit = 0;
+
+		/* We have determined the box bit start and end.
+		 * Now we calculate where this (masked) u8 box would fit
+		 * in the unpacked (CPU-readable) u64 - the u8 box's
+		 * projection onto the unpacked u64. Though the
+		 * box is u8, the projection is u64 because it may fall
+		 * anywhere within the unpacked u64.
+		 */
+		proj_start_bit = ((box * 8) + box_start_bit) - endbit;
+		proj_end_bit   = ((box * 8) + box_end_bit) - endbit;
+		proj_mask = GENMASK_ULL(proj_start_bit, proj_end_bit);
+		box_mask  = GENMASK_ULL(box_start_bit, box_end_bit);
+
+		/* Determine the offset of the u8 box inside the pbuf,
+		 * adjusted for quirks. The adjusted box_addr will be used for
+		 * effective addressing inside the pbuf (so it's not
+		 * logical any longer).
+		 */
+		box_addr = pbuflen - box - 1;
+		if (quirks & QUIRK_LITTLE_ENDIAN)
+			box_addr = get_le_offset(box_addr);
+		if (quirks & QUIRK_LSW32_IS_FIRST)
+			box_addr = get_reverse_lsw32_offset(box_addr,
+							    pbuflen);
+
+		if (op == UNPACK) {
+			u64 pval;
+
+			/* Read from pbuf, write to uval */
+			pval = ((u8 *)pbuf)[box_addr] & box_mask;
+			if (quirks & QUIRK_MSB_ON_THE_RIGHT)
+				adjust_for_msb_right_quirk(&pval,
+							   &box_start_bit,
+							   &box_end_bit,
+							   &box_mask);
+
+			pval >>= box_end_bit;
+			pval <<= proj_end_bit;
+			*uval &= ~proj_mask;
+			*uval |= pval;
+		} else {
+			u64 pval;
+
+			/* Write to pbuf, read from uval */
+			pval = (*uval) & proj_mask;
+			pval >>= proj_end_bit;
+			if (quirks & QUIRK_MSB_ON_THE_RIGHT)
+				adjust_for_msb_right_quirk(&pval,
+							   &box_start_bit,
+							   &box_end_bit,
+							   &box_mask);
+
+			pval <<= box_end_bit;
+			((u8 *)pbuf)[box_addr] &= ~box_mask;
+			((u8 *)pbuf)[box_addr] |= pval;
+		}
+	}
+	return 0;
+}
+EXPORT_SYMBOL(packing);
+
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
  2019-03-24  3:23 ` [RFC PATCH net-next 01/13] lib: Add support for generic packing operations Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-24 20:34   ` Andrew Lunn
  2019-03-25 16:46   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs Vladimir Oltean
                   ` (12 subsequent siblings)
  14 siblings, 2 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

This allows drivers to query the VLAN filtering setting imposed by the
bridge driver directly from DSA, instead of keeping their own state
based on the .port_vlan_filtering callback.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 include/net/dsa.h |  1 +
 net/dsa/port.c    | 12 ++++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/net/dsa.h b/include/net/dsa.h
index ae480bba11f5..a16fd577349b 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -142,6 +142,7 @@ struct dsa_port {
 	const struct dsa_port	*cpu_dp;
 	struct device_node	*dn;
 	unsigned int		ageing_time;
+	bool			vlan_filtering;
 	u8			stp_state;
 	struct net_device	*bridge_dev;
 	struct devlink_port	devlink_port;
diff --git a/net/dsa/port.c b/net/dsa/port.c
index caeef4c99dc0..a86fe3be1261 100644
--- a/net/dsa/port.c
+++ b/net/dsa/port.c
@@ -158,15 +158,19 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
 			    struct switchdev_trans *trans)
 {
 	struct dsa_switch *ds = dp->ds;
+	int err;
 
 	/* bridge skips -EOPNOTSUPP, so skip the prepare phase */
 	if (switchdev_trans_ph_prepare(trans))
 		return 0;
 
-	if (ds->ops->port_vlan_filtering)
-		return ds->ops->port_vlan_filtering(ds, dp->index,
-						    vlan_filtering);
-
+	if (ds->ops->port_vlan_filtering) {
+		err = ds->ops->port_vlan_filtering(ds, dp->index,
+						   vlan_filtering);
+		if (err)
+			return err;
+		dp->vlan_filtering = vlan_filtering;
+	}
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
  2019-03-24  3:23 ` [RFC PATCH net-next 01/13] lib: Add support for generic packing operations Vladimir Oltean
  2019-03-24  3:23 ` [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-25 17:06   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 04/13] net: dsa: Call driver's setup callback after setting up its switchdev notifier Vladimir Oltean
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

This refactors the two-phase transaction from dsa_slave_vlan_rx_add_vid
and makes that code available to other callers within DSA.
The newly exposed function either adds or deletes the specified VLAN
entry based on a boolean argument.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 net/dsa/dsa_priv.h |  2 ++
 net/dsa/port.c     | 24 ++++++++++++++++++++++++
 net/dsa/slave.c    | 16 ++--------------
 3 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 093b7d145eb1..8048ced3708f 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -164,6 +164,8 @@ int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags,
 			      struct switchdev_trans *trans);
 int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags,
 			  struct switchdev_trans *trans);
+int dsa_port_trans_vlan_apply(struct dsa_port *dp, u16 vid, u16 flags,
+			      bool enabled);
 int dsa_port_vlan_add(struct dsa_port *dp,
 		      const struct switchdev_obj_port_vlan *vlan,
 		      struct switchdev_trans *trans);
diff --git a/net/dsa/port.c b/net/dsa/port.c
index a86fe3be1261..9c7358f98004 100644
--- a/net/dsa/port.c
+++ b/net/dsa/port.c
@@ -326,6 +326,30 @@ int dsa_port_vlan_del(struct dsa_port *dp,
 	return 0;
 }
 
+int dsa_port_trans_vlan_apply(struct dsa_port *dp, u16 vid, u16 flags,
+			      bool enabled)
+{
+	struct switchdev_obj_port_vlan vlan = {
+		.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
+		.flags = flags,
+		.vid_begin = vid,
+		.vid_end = vid,
+	};
+	struct switchdev_trans trans;
+	int err;
+
+	if (!enabled)
+		return dsa_port_vlan_del(dp, &vlan);
+
+	trans.ph_prepare = true;
+	err = dsa_port_vlan_add(dp, &vlan, &trans);
+	if (err == -EOPNOTSUPP)
+		return 0;
+
+	trans.ph_prepare = false;
+	return dsa_port_vlan_add(dp, &vlan, &trans);
+}
+
 static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
 {
 	struct device_node *phy_dn;
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 093eef6f2599..3191ef74f6a1 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -987,13 +987,6 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
 				     u16 vid)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
-	struct switchdev_obj_port_vlan vlan = {
-		.vid_begin = vid,
-		.vid_end = vid,
-		/* This API only allows programming tagged, non-PVID VIDs */
-		.flags = 0,
-	};
-	struct switchdev_trans trans;
 	struct bridge_vlan_info info;
 	int ret;
 
@@ -1010,13 +1003,8 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
 			return -EBUSY;
 	}
 
-	trans.ph_prepare = true;
-	ret = dsa_port_vlan_add(dp, &vlan, &trans);
-	if (ret == -EOPNOTSUPP)
-		return 0;
-
-	trans.ph_prepare = false;
-	return dsa_port_vlan_add(dp, &vlan, &trans);
+	/* This API only allows programming tagged, non-PVID VIDs */
+	return dsa_port_trans_vlan_apply(dp, vid, 0, true);
 }
 
 static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 04/13] net: dsa: Call driver's setup callback after setting up its switchdev notifier
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (2 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-25 16:47   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 05/13] net: dsa: Optional VLAN-based port separation for switches without tagging Vladimir Oltean
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

This allows the driver to perform some manipulations of its own during
setup, using generic code.
One current usage scenario is for the driver to request DSA to set up
802.1Q-based switch tagging for its ports.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 net/dsa/dsa2.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
index c00ee464afc7..5beceb18b7e2 100644
--- a/net/dsa/dsa2.c
+++ b/net/dsa/dsa2.c
@@ -360,14 +360,14 @@ static int dsa_switch_setup(struct dsa_switch *ds)
 	if (err)
 		return err;
 
-	err = ds->ops->setup(ds);
-	if (err < 0)
-		return err;
-
 	err = dsa_switch_register_notifier(ds);
 	if (err)
 		return err;
 
+	err = ds->ops->setup(ds);
+	if (err < 0)
+		return err;
+
 	if (!ds->slave_mii_bus && ds->ops->phy_read) {
 		ds->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
 		if (!ds->slave_mii_bus)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 05/13] net: dsa: Optional VLAN-based port separation for switches without tagging
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (3 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 04/13] net: dsa: Call driver's setup callback after setting up its switchdev notifier Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:21   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch Vladimir Oltean
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

This patch provides generic DSA code for using VLAN (802.1Q) tags for
the same purpose as a dedicated switch tag for injection/extraction.
It is based on the discussion and interest expressed so far in
https://www.spinics.net/lists/netdev/msg556125.html.

Unlike all other DSA-supported tagging protocols, CONFIG_NET_DSA_TAG_8021Q
does not offer a complete solution for drivers (nor can it). Instead, it
provides generic code that drivers can opt into calling:
- dsa_8021q_xmit: Inserts a VLAN header with the specified contents.
  Currently a few drivers insert headers that are simply 802.1Q
  with custom fields. Can be called from another tagging protocol's xmit
  function.
- dsa_8021q_rcv: Retrieves the TPID and TCI from a VLAN-tagged skb.
  Removing the VLAN header is left as a decision for the caller to make.
- dsa_port_setup_8021q_tagging: For each user port, installs an Rx VID
  and a Tx VID, for proper untagged traffic identification on ingress
  and steering on egress. Also sets up the VLAN trunk on the upstream
  (CPU or DSA) port. Drivers are intentionally left to call this
  function explicitly, depending on the context and hardware support.
  The expected switch behavior and VLAN semantics should not be violated
  under any conditions. That is, after calling
  dsa_port_setup_8021q_tagging, the hardware should still pass all
  ingress traffic, be it tagged or untagged.

This only works when switch ports are standalone, or when they are added
to a VLAN-unaware bridge. It will probably remain this way for the
reasons below.

When added to a bridge that has vlan_filtering 1, the bridge core will
install its own VLANs and reset the pvids through switchdev. For the
bridge core, switchdev is a write-only pipe. All VLAN-related state is
kept in the bridge core and nothing is read from DSA/switchdev or from
the driver. So the bridge core will break this port separation because
it will install the vlan_default_pvid into all switchdev ports.

Even if we could teach the bridge driver about a switchdev port's
preference for a certain vlan_default_pvid, many other challenges would
remain.

Firstly, in the DSA rcv callback, a driver would have to perform an
iterative reverse lookup to find the correct switch port. That is
because the port is a bridge slave, so its Rx VID (port PVID) is subject
to user configuration. How would we ensure that the user doesn't reset
the pvid to a different value, or to a non-unique value within this DSA
switch tree?

Finally, not all switch ports are equal in DSA, and that makes it
difficult for the bridge to be completely aware of this anyway.
The CPU port needs to transmit tagged packets (VLAN trunk) in order for
the DSA rcv code to be able to decode source information.
But the bridge code has absolutely no idea which switch port is the CPU
port, if for no other reason than that there is no netdevice registered
by DSA for the CPU port.
Also, DSA does not currently allow the user to specify that they want
the CPU port to do VLAN trunking anyway. VLANs are added to the CPU
port with the same flags as on the user port.

So the VLANs installed by dsa_port_setup_8021q_tagging per driver
request should remain private from the bridge's and user's perspective,
and should not alter the hardware's behavior with VLAN-tagged traffic.
If the hardware cannot handle VLAN tag stacking, it should also disable
this port separation when added as slave to a vlan_filtering bridge.
If the hardware does support VLAN tag stacking, it should somehow back
up its private VLAN settings when the bridge tries to override them.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 include/net/dsa.h   |   4 +
 net/dsa/Kconfig     |   9 +++
 net/dsa/Makefile    |   1 +
 net/dsa/dsa_priv.h  |  10 +++
 net/dsa/tag_8021q.c | 185 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 209 insertions(+)
 create mode 100644 net/dsa/tag_8021q.c

diff --git a/include/net/dsa.h b/include/net/dsa.h
index a16fd577349b..b22c350c40f0 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -574,5 +574,9 @@ int dsa_port_get_phy_strings(struct dsa_port *dp, uint8_t *data);
 int dsa_port_get_ethtool_phy_stats(struct dsa_port *dp, uint64_t *data);
 int dsa_port_get_phy_sset_count(struct dsa_port *dp);
 void dsa_port_phylink_mac_change(struct dsa_switch *ds, int port, bool up);
+#ifdef CONFIG_NET_DSA_TAG_8021Q
+int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
+				 bool enabled);
+#endif
 
 #endif
diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig
index fab49132345f..2f3a103d7d1a 100644
--- a/net/dsa/Kconfig
+++ b/net/dsa/Kconfig
@@ -26,6 +26,15 @@ config NET_DSA_LEGACY
 	  This feature is scheduled for removal in 4.17.
 
 # tagging formats
+config NET_DSA_TAG_8021Q
+	bool
+	help
+	  Unlike the other tagging protocols, the 802.1Q config option simply
+	  provides helpers for other tagging implementations that might rely on
+	  VLAN in one way or another. It is not a complete solution.
+
+	  Drivers which use these helpers should select this as a dependency.
+
 config NET_DSA_TAG_BRCM
 	bool
 
diff --git a/net/dsa/Makefile b/net/dsa/Makefile
index 6e721f7a2947..d7fc3253d497 100644
--- a/net/dsa/Makefile
+++ b/net/dsa/Makefile
@@ -5,6 +5,7 @@ dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o
 dsa_core-$(CONFIG_NET_DSA_LEGACY) += legacy.o
 
 # tagging formats
+dsa_core-$(CONFIG_NET_DSA_TAG_8021Q) += tag_8021q.o
 dsa_core-$(CONFIG_NET_DSA_TAG_BRCM) += tag_brcm.o
 dsa_core-$(CONFIG_NET_DSA_TAG_BRCM_PREPEND) += tag_brcm.o
 dsa_core-$(CONFIG_NET_DSA_TAG_DSA) += tag_dsa.o
diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 8048ced3708f..105058450621 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -203,6 +203,16 @@ dsa_slave_to_master(const struct net_device *dev)
 int dsa_switch_register_notifier(struct dsa_switch *ds);
 void dsa_switch_unregister_notifier(struct dsa_switch *ds);
 
+/* tag_8021q.c */
+struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
+			       u16 tpid, u16 tci);
+struct sk_buff *dsa_8021q_rcv(struct sk_buff *skb, struct net_device *netdev,
+			      struct packet_type *pt, u16 *tpid, u16 *tci);
+u16 dsa_tagging_tx_vid(struct dsa_switch *ds, int port);
+u16 dsa_tagging_rx_vid(struct dsa_switch *ds, int port);
+int dsa_tagging_rx_switch_id(u16 vid);
+int dsa_tagging_rx_source_port(u16 vid);
+
 /* tag_brcm.c */
 extern const struct dsa_device_ops brcm_netdev_ops;
 extern const struct dsa_device_ops brcm_prepend_netdev_ops;
diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
new file mode 100644
index 000000000000..221299b264f5
--- /dev/null
+++ b/net/dsa/tag_8021q.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include <linux/if_bridge.h>
+#include <linux/if_vlan.h>
+
+#include "dsa_priv.h"
+
+#define DSA_TAGGING_VID_RANGE    (DSA_MAX_SWITCHES * DSA_MAX_PORTS)
+#define DSA_TAGGING_VID_BASE     (VLAN_N_VID - 2 * DSA_TAGGING_VID_RANGE - 1)
+#define DSA_TAGGING_RX_VID_BASE  (DSA_TAGGING_VID_BASE)
+#define DSA_TAGGING_TX_VID_BASE  (DSA_TAGGING_VID_BASE + DSA_TAGGING_VID_RANGE)
+
+u16 dsa_tagging_tx_vid(struct dsa_switch *ds, int port)
+{
+	return DSA_TAGGING_TX_VID_BASE + (DSA_MAX_PORTS * ds->index) + port;
+}
+
+u16 dsa_tagging_rx_vid(struct dsa_switch *ds, int port)
+{
+	return DSA_TAGGING_RX_VID_BASE + (DSA_MAX_PORTS * ds->index) + port;
+}
+
+int dsa_tagging_rx_switch_id(u16 vid)
+{
+	return ((vid - DSA_TAGGING_RX_VID_BASE) / DSA_MAX_PORTS);
+}
+
+int dsa_tagging_rx_source_port(u16 vid)
+{
+	return ((vid - DSA_TAGGING_RX_VID_BASE) % DSA_MAX_PORTS);
+}
+
+/* Rx VLAN tagging (left) and Tx VLAN tagging (right) setup shown for a single
+ * front-panel switch port (here swp0).
+ *
+ * Port identification through VLAN (802.1Q) tags has different requirements
+ * for it to work effectively:
+ *  - On Rx (ingress from network): each front-panel port must have a pvid
+ *    that uniquely identifies it, and the egress of this pvid must be tagged
+ *    towards the CPU port, so that software can recover the source port based
+ *    on the VID in the frame. But this would only work for standalone ports;
+ *    if bridged, this VLAN setup would break autonomous forwarding and would
+ *    force all switched traffic to pass through the CPU. So we must also make
+ *    the other front-panel ports members of this VID we're adding, though
+ *    we're not making it their PVID (they'll still have their own).
+ *    By the way - just because we're installing the same VID in multiple
+ *    switch ports doesn't mean that they'll start to talk to one another, even
+ *    while not bridged: the final forwarding decision is still an AND between
+ *    the L2 forwarding information (which is limiting forwarding in this case)
+ *    and the VLAN-based restrictions (of which there are none in this case,
+ *    since all ports are members).
+ *  - On Tx (ingress from CPU and towards network) we are faced with a problem.
+ *    If we were to tag traffic (from within DSA) with the port's pvid, all
+ *    would be well, assuming the switch ports were standalone. Frames would
+ *    have no choice but to be directed towards the correct front-panel port.
+ *    But because we also want the Rx VLAN to not break bridging, then
+ *    inevitably that means that we have to give them a choice (of what
+ *    front-panel port to go out on), and therefore we cannot steer traffic
+ *    based on the Rx VID. So what we do is simply install one more VID on the
+ *    front-panel and CPU ports, and profit off of the fact that steering will
+ *    work just by virtue of the fact that there is only one other port that's
+ *    a member of the VID we're tagging the traffic with - the desired one.
+ *
+ * So at the end, each front-panel port will have one Rx VID (also the PVID),
+ * the Rx VID of all other front-panel ports, and one Tx VID. Whereas the CPU
+ * port will have the Rx and Tx VIDs of all front-panel ports, and on top of
+ * that, is also tagged-input and tagged-output (VLAN trunk).
+ *
+ *               CPU port                               CPU port
+ * +-------------+-----+-------------+    +-------------+-----+-------------+
+ * |  Rx VID     |     |             |    |  Tx VID     |     |             |
+ * |  of swp0    |     |             |    |  of swp0    |     |             |
+ * |             +-----+             |    |             +-----+             |
+ * |                ^ T              |    |                | Tagged         |
+ * |                |                |    |                | ingress        |
+ * |    +-------+---+---+-------+    |    |    +-----------+                |
+ * |    |       |       |       |    |    |    | Untagged                   |
+ * |    |     U v     U v     U v    |    |    v egress                     |
+ * | +-----+ +-----+ +-----+ +-----+ |    | +-----+ +-----+ +-----+ +-----+ |
+ * | |     | |     | |     | |     | |    | |     | |     | |     | |     | |
+ * | |PVID | |     | |     | |     | |    | |     | |     | |     | |     | |
+ * +-+-----+-+-----+-+-----+-+-----+-+    +-+-----+-+-----+-+-----+-+-----+-+
+ *   swp0    swp1    swp2    swp3           swp0    swp1    swp2    swp3
+ */
+int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int port, bool enabled)
+{
+	int upstream = dsa_upstream_port(ds, port);
+	struct dsa_port *dp = &ds->ports[port];
+	struct dsa_port *upstream_dp = &ds->ports[upstream];
+	u16 rx_vid = dsa_tagging_rx_vid(ds, port);
+	u16 tx_vid = dsa_tagging_tx_vid(ds, port);
+	int i, err;
+
+	/* The CPU port is implicitly configured by
+	 * configuring the front-panel ports
+	 */
+	if (!dsa_is_user_port(ds, port))
+		return 0;
+
+	/* Add this user port's Rx VID to the membership list of all others
+	 * (including itself). This is so that bridging will not be hindered.
+	 * L2 forwarding rules still take precedence when there are no VLAN
+	 * restrictions, so there are no concerns about leaking traffic.
+	 */
+	for (i = 0; i < ds->num_ports; i++) {
+		struct dsa_port *other_dp = &ds->ports[i];
+		u16 flags;
+
+		if (i == upstream)
+			/* CPU port needs to see this port's Rx VID
+			 * as tagged egress.
+			 */
+			flags = 0;
+		else if (i == port)
+			/* The Rx VID is pvid on this port */
+			flags = BRIDGE_VLAN_INFO_UNTAGGED |
+				BRIDGE_VLAN_INFO_PVID;
+		else
+			/* The Rx VID is a regular VLAN on all others */
+			flags = BRIDGE_VLAN_INFO_UNTAGGED;
+
+		err = dsa_port_trans_vlan_apply(other_dp, rx_vid, flags,
+						enabled);
+		if (err) {
+			dev_err(ds->dev, "Failed to apply Rx VID %d to port %d: %d\n",
+				rx_vid, i, err);
+			return err;
+		}
+	}
+	/* Finally apply the Tx VID on this port and on the CPU port */
+	err = dsa_port_trans_vlan_apply(dp, tx_vid, BRIDGE_VLAN_INFO_UNTAGGED,
+					enabled);
+	if (err) {
+		dev_err(ds->dev, "Failed to apply Tx VID %d on port %d: %d\n",
+			tx_vid, port, err);
+		return err;
+	}
+	err = dsa_port_trans_vlan_apply(upstream_dp, tx_vid, 0, enabled);
+	if (err) {
+		dev_err(ds->dev, "Failed to apply Tx VID %d on port %d: %d\n",
+			tx_vid, upstream, err);
+		return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dsa_port_setup_8021q_tagging);
+
+struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
+			       u16 tpid, u16 tci)
+{
+	/* skb->data points at skb_mac_header, which
+	 * is fine for vlan_insert_tag.
+	 */
+	return vlan_insert_tag(skb, tpid, tci);
+}
+EXPORT_SYMBOL_GPL(dsa_8021q_xmit);
+
+struct sk_buff *dsa_8021q_rcv(struct sk_buff *skb, struct net_device *netdev,
+			      struct packet_type *pt, u16 *tpid, u16 *tci)
+{
+	struct vlan_ethhdr *tag;
+
+	if (unlikely(!pskb_may_pull(skb, VLAN_HLEN)))
+		return NULL;
+
+	tag = vlan_eth_hdr(skb);
+	*tpid = ntohs(tag->h_vlan_proto);
+	*tci = ntohs(tag->h_vlan_TCI);
+
+	/* skb->data points in the middle of the VLAN tag,
+	 * after tpid and before tci. This is because so far,
+	 * ETH_HLEN (DMAC, SMAC, EtherType) bytes were pulled.
+	 * There are 2 bytes of VLAN tag left in skb->data, and upper
+	 * layers expect the 'real' EtherType to be consumed as well.
+	 * Coincidentally, a VLAN header is also of the same size as
+	 * the number of bytes that need to be pulled.
+	 */
+	skb_pull_rcsum(skb, VLAN_HLEN);
+
+	return skb;
+}
+EXPORT_SYMBOL_GPL(dsa_8021q_rcv);
+
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (4 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 05/13] net: dsa: Optional VLAN-based port separation for switches without tagging Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26 13:02   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 07/13] net: dsa: sja1105: Add support for FDB and MDB management Vladimir Oltean
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij,
	Vladimir Oltean, Georg Waibel

At this moment the following is supported:
* Link state management through phylib
* Autonomous L2 forwarding managed through iproute2 bridge commands. The
  switch ports are initialized in a mode where they can only talk to the
  CPU port. However, IP termination must currently be done through the
  master netdevice.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Georg Waibel <georg.waibel@sensor-technik.de>
---
 MAINTAINERS                                   |    6 +
 drivers/net/dsa/Kconfig                       |    2 +
 drivers/net/dsa/Makefile                      |    1 +
 drivers/net/dsa/sja1105/Kconfig               |   17 +
 drivers/net/dsa/sja1105/Makefile              |    9 +
 drivers/net/dsa/sja1105/sja1105.h             |  134 ++
 drivers/net/dsa/sja1105/sja1105_clocking.c    |  677 ++++++
 .../net/dsa/sja1105/sja1105_dynamic_config.c  |  607 ++++++
 .../net/dsa/sja1105/sja1105_dynamic_config.h  |   40 +
 drivers/net/dsa/sja1105/sja1105_main.c        |  904 ++++++++
 drivers/net/dsa/sja1105/sja1105_spi.c         |  667 ++++++
 .../net/dsa/sja1105/sja1105_static_config.c   | 1810 +++++++++++++++++
 .../net/dsa/sja1105/sja1105_static_config.h   |  500 +++++
 13 files changed, 5374 insertions(+)
 create mode 100644 drivers/net/dsa/sja1105/Kconfig
 create mode 100644 drivers/net/dsa/sja1105/Makefile
 create mode 100644 drivers/net/dsa/sja1105/sja1105.h
 create mode 100644 drivers/net/dsa/sja1105/sja1105_clocking.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_dynamic_config.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_dynamic_config.h
 create mode 100644 drivers/net/dsa/sja1105/sja1105_main.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_spi.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_static_config.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_static_config.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 89315bb1cb83..d808520b4fa3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11117,6 +11117,12 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/sound/sgtl5000.txt
 F:	sound/soc/codecs/sgtl5000*
 
+NXP SJA1105 ETHERNET SWITCH DRIVER
+M:	Vladimir Oltean <olteanv@gmail.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/net/dsa/sja1105
+
 NXP TDA998X DRM DRIVER
 M:	Russell King <linux@armlinux.org.uk>
 S:	Maintained
diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
index 71bb3aebded4..d38e7e00c4e8 100644
--- a/drivers/net/dsa/Kconfig
+++ b/drivers/net/dsa/Kconfig
@@ -51,6 +51,8 @@ source "drivers/net/dsa/microchip/Kconfig"
 
 source "drivers/net/dsa/mv88e6xxx/Kconfig"
 
+source "drivers/net/dsa/sja1105/Kconfig"
+
 config NET_DSA_QCA8K
 	tristate "Qualcomm Atheros QCA8K Ethernet switch family support"
 	depends on NET_DSA
diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile
index 82e5d794c41f..fefb6aaa82ba 100644
--- a/drivers/net/dsa/Makefile
+++ b/drivers/net/dsa/Makefile
@@ -18,3 +18,4 @@ obj-$(CONFIG_NET_DSA_VITESSE_VSC73XX) += vitesse-vsc73xx.o
 obj-y				+= b53/
 obj-y				+= microchip/
 obj-y				+= mv88e6xxx/
+obj-y				+= sja1105/
diff --git a/drivers/net/dsa/sja1105/Kconfig b/drivers/net/dsa/sja1105/Kconfig
new file mode 100644
index 000000000000..1fb0e504f055
--- /dev/null
+++ b/drivers/net/dsa/sja1105/Kconfig
@@ -0,0 +1,17 @@
+config NET_DSA_SJA1105
+	tristate "NXP SJA1105 Ethernet switch family support"
+	depends on NET_DSA
+	select NET_DSA_TAG_SJA1105
+	select NET_DSA_TAG_8021Q
+	help
+	  This is the driver for the NXP SJA1105 automotive Ethernet switch
+	  family. These are 5-port devices and are managed over an SPI
+	  interface. Probing is handled based on OF bindings and so is the
+	  linkage to phylib. The driver supports the following revisions:
+	    - SJA1105E (Gen. 1, No TT-Ethernet)
+	    - SJA1105T (Gen. 1, TT-Ethernet)
+	    - SJA1105P (Gen. 2, No SGMII, No TT-Ethernet)
+	    - SJA1105Q (Gen. 2, No SGMII, TT-Ethernet)
+	    - SJA1105R (Gen. 2, SGMII, No TT-Ethernet)
+	    - SJA1105S (Gen. 2, SGMII, TT-Ethernet)
+
diff --git a/drivers/net/dsa/sja1105/Makefile b/drivers/net/dsa/sja1105/Makefile
new file mode 100644
index 000000000000..ed00840802f4
--- /dev/null
+++ b/drivers/net/dsa/sja1105/Makefile
@@ -0,0 +1,9 @@
+obj-$(CONFIG_NET_DSA_SJA1105) += sja1105.o
+
+sja1105-objs := \
+    sja1105_spi.o \
+    sja1105_main.o \
+    sja1105_clocking.o \
+    sja1105_static_config.o \
+    sja1105_dynamic_config.o
+
diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h
new file mode 100644
index 000000000000..f8cac518a30a
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#ifndef _SJA1105_H
+#define _SJA1105_H
+
+#include <net/dsa.h>
+#include "sja1105_static_config.h"
+
+/* IEEE 802.3 Annex 57A: Slow Protocols PDUs */
+#define SJA1105_LINKLOCAL_FILTER_A	0x0180C2000000
+#define SJA1105_LINKLOCAL_FILTER_A_MASK	0xFFFFFF000000
+/* IEEE 1588 Annex F: Transport of PTP over Ethernet */
+#define SJA1105_LINKLOCAL_FILTER_B	0x011B19000000
+#define SJA1105_LINKLOCAL_FILTER_B_MASK	0xFFFFFF000000
+
+#define SJA1105_NUM_PORTS 5
+#define SJA1105_NUM_TC    8
+#define SJA1105ET_FDB_BIN_SIZE 4
+
+/* Keeps the different addresses between E/T and P/Q/R/S */
+struct sja1105_regs {
+	u64 general_status;
+	u64 rgu;
+	u64 config;
+	u64 rmii_pll1;
+	u64 pad_mii_tx[SJA1105_NUM_PORTS];
+	u64 cgu_idiv[SJA1105_NUM_PORTS];
+	u64 rgmii_pad_mii_tx[SJA1105_NUM_PORTS];
+	u64 mii_tx_clk[SJA1105_NUM_PORTS];
+	u64 mii_rx_clk[SJA1105_NUM_PORTS];
+	u64 mii_ext_tx_clk[SJA1105_NUM_PORTS];
+	u64 mii_ext_rx_clk[SJA1105_NUM_PORTS];
+	u64 rgmii_txc[SJA1105_NUM_PORTS];
+	u64 rmii_ref_clk[SJA1105_NUM_PORTS];
+	u64 rmii_ext_tx_clk[SJA1105_NUM_PORTS];
+	u64 mac[SJA1105_NUM_PORTS];
+	u64 mac_hl1[SJA1105_NUM_PORTS];
+	u64 mac_hl2[SJA1105_NUM_PORTS];
+	u64 qlevel[SJA1105_NUM_PORTS];
+};
+
+struct sja1105_private {
+	const struct sja1105_dynamic_table_ops *dyn_ops;
+	struct sja1105_static_config static_config;
+	struct gpio_desc *reset_gpio;
+	struct spi_device *spidev;
+	struct sja1105_regs *regs;
+	struct dsa_switch *ds;
+	u64 device_id;
+	u64 part_nr; /* Needed for P/R distinction (same switch core) */
+};
+
+#include "sja1105_dynamic_config.h"
+
+struct sja1105_spi_message {
+	u64 access;
+	u64 read_count;
+	u64 address;
+};
+
+enum sja1105_spi_access_mode {
+	SPI_READ = 0,
+	SPI_WRITE = 1,
+};
+
+/* From sja1105_spi.c */
+int
+sja1105_spi_send_packed_buf(const struct sja1105_private *priv,
+			    enum sja1105_spi_access_mode read_or_write,
+			    u64 reg_addr, void *packed_buf, size_t size_bytes);
+int sja1105_spi_send_int(const struct sja1105_private *priv,
+			 enum sja1105_spi_access_mode read_or_write,
+			 u64 reg_addr, u64 *value, u64 size_bytes);
+int
+sja1105_spi_send_long_packed_buf(const struct sja1105_private *priv,
+				 enum sja1105_spi_access_mode read_or_write,
+				 u64 base_addr, void *packed_buf, u64 buf_len);
+int sja1105_static_config_upload(struct sja1105_private *priv);
+int sja1105_device_id_get(struct sja1105_private *priv);
+const char *sja1105_device_id_string_get(u64 device_id, u64 part_nr);
+
+#define SIZE_SPI_MSG_HEADER    4
+#define SIZE_SPI_MSG_MAXLEN    (64 * 4)
+
+/* From sja1105_clocking.c */
+
+#define XMII_MAC               0ull
+#define XMII_PHY               1ull
+#define XMII_MODE_MII          0ull
+#define XMII_MODE_RMII         1ull
+#define XMII_MODE_RGMII        2ull
+#define XMII_MODE_SGMII        3ull /* Only available for port 4 on R/S */
+#define XMII_MODE_TRISTATE     3ull
+
+#define SJA1105_SPEED_10MBPS   3ull
+#define SJA1105_SPEED_100MBPS  2ull
+#define SJA1105_SPEED_1000MBPS 1ull
+#define SJA1105_SPEED_AUTO     0ull
+
+int sja1105_clocking_setup_port(struct sja1105_private *priv, int port);
+int sja1105_clocking_setup(struct sja1105_private *priv);
+
+/* From sja1105_dynamic_config.c */
+
+int sja1105_dynamic_config_read(struct sja1105_private *priv,
+				enum sja1105_blk_idx blk_idx,
+				int index, void *entry);
+int sja1105_dynamic_config_write(struct sja1105_private *priv,
+				 enum sja1105_blk_idx blk_idx,
+				 int index, void *entry, bool keep);
+int sja1105_dynamic_config_init(struct sja1105_private *priv);
+
+u8 sja1105_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid);
+
+/* Common implementations for the static and dynamic configs */
+size_t sja1105_l2_forwarding_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op);
+size_t sja1105pqrs_l2_lookup_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op);
+size_t sja1105et_l2_lookup_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op);
+size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op);
+size_t sja1105_retagging_entry_packing(void *buf, void *entry_ptr,
+				       enum packing_op op);
+size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
+					    enum packing_op op);
+size_t sja1105_vl_lookup_entry_packing(void *buf, void *entry_ptr,
+				       enum packing_op op);
+
+#endif
+
diff --git a/drivers/net/dsa/sja1105/sja1105_clocking.c b/drivers/net/dsa/sja1105/sja1105_clocking.c
new file mode 100644
index 000000000000..adfa3a51b46c
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_clocking.c
@@ -0,0 +1,677 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include <linux/packing.h>
+#include "sja1105.h"
+
+struct sja1105_cfg_pad_mii_tx {
+	u64 d32_os;
+	u64 d32_ipud;
+	u64 d10_os;
+	u64 d10_ipud;
+	u64 ctrl_os;
+	u64 ctrl_ipud;
+	u64 clk_os;
+	u64 clk_ih;
+	u64 clk_ipud;
+};
+
+/* UM10944 Table 82.
+ * IDIV_0_C to IDIV_4_C control registers
+ * (addr. 10000Bh to 10000Fh)
+ */
+struct sja1105_cgu_idiv {
+	u64 clksrc;
+	u64 autoblock;
+	u64 idiv;
+	u64 pd;
+};
+
+/* UM10944 Table 80.
+ * PLL_x_S clock status registers 0 and 1
+ * (address 100007h and 100009h)
+ */
+struct sja1105_cgu_pll_status {
+	u64 lock;
+};
+
+/* PLL_1_C control register
+ *
+ * SJA1105 E/T: UM10944 Table 81 (address 10000Ah)
+ * SJA1105 P/Q/R/S: UM11040 Table 116 (address 10000Ah)
+ */
+struct sja1105_cgu_pll_ctrl {
+	u64 pllclksrc;
+	u64 msel;
+	u64 nsel; /* Only for P/Q/R/S series */
+	u64 autoblock;
+	u64 psel;
+	u64 direct;
+	u64 fbsel;
+	u64 p23en; /* Only for P/Q/R/S series */
+	u64 bypass;
+	u64 pd;
+};
+
+#define CLKSRC_MII0_TX_CLK 0x00
+#define CLKSRC_MII0_RX_CLK 0x01
+#define CLKSRC_MII1_TX_CLK 0x02
+#define CLKSRC_MII1_RX_CLK 0x03
+#define CLKSRC_MII2_TX_CLK 0x04
+#define CLKSRC_MII2_RX_CLK 0x05
+#define CLKSRC_MII3_TX_CLK 0x06
+#define CLKSRC_MII3_RX_CLK 0x07
+#define CLKSRC_MII4_TX_CLK 0x08
+#define CLKSRC_MII4_RX_CLK 0x09
+#define CLKSRC_PLL0        0x0B
+#define CLKSRC_PLL1        0x0E
+#define CLKSRC_IDIV0       0x11
+#define CLKSRC_IDIV1       0x12
+#define CLKSRC_IDIV2       0x13
+#define CLKSRC_IDIV3       0x14
+#define CLKSRC_IDIV4       0x15
+
+/* UM10944 Table 83.
+ * MIIx clock control registers 1 to 30
+ * (addresses 100013h to 100035h)
+ */
+struct sja1105_cgu_mii_ctrl {
+	u64 clksrc;
+	u64 autoblock;
+	u64 pd;
+};
+
+static void sja1105_cgu_idiv_packing(void *buf, struct sja1105_cgu_idiv *idiv,
+				     enum packing_op op)
+{
+	const int size = 4;
+
+	if (op == UNPACK)
+		memset(idiv, 0, sizeof(*idiv));
+	else
+		memset(buf, 0, size);
+
+	sja1105_packing(buf, &idiv->clksrc,    28, 24, size, op);
+	sja1105_packing(buf, &idiv->autoblock, 11, 11, size, op);
+	sja1105_packing(buf, &idiv->idiv,       5,  2, size, op);
+	sja1105_packing(buf, &idiv->pd,         0,  0, size, op);
+}
+
+static int sja1105_cgu_idiv_config(struct sja1105_private *priv, int port,
+				   bool enabled, int factor)
+{
+#define BUF_LEN 4
+	struct device *dev = priv->ds->dev;
+	struct sja1105_cgu_idiv idiv;
+	u8 packed_buf[BUF_LEN];
+
+	if (enabled && factor != 1 && factor != 10) {
+		dev_err(dev, "idiv factor must be 1 or 10\n");
+		return -ERANGE;
+	}
+
+	/* Payload for packed_buf */
+	idiv.clksrc    = 0x0A;            /* 25MHz */
+	idiv.autoblock = 1;               /* Block clk automatically */
+	idiv.idiv      = factor - 1;      /* Divide by 1 or 10 */
+	idiv.pd        = enabled ? 0 : 1; /* Power down? */
+	sja1105_cgu_idiv_packing(packed_buf, &idiv, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->cgu_idiv[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static void
+sja1105_cgu_mii_control_packing(void *buf, struct sja1105_cgu_mii_ctrl *cmd,
+				enum packing_op op)
+{
+	const int size = 4;
+
+	if (op == UNPACK)
+		memset(cmd, 0, sizeof(*cmd));
+	else
+		memset(buf, 0, size);
+
+	sja1105_packing(buf, &cmd->clksrc,    28, 24, size, op);
+	sja1105_packing(buf, &cmd->autoblock, 11, 11, size, op);
+	sja1105_packing(buf, &cmd->pd,         0,  0, size, op);
+}
+
+static int sja1105_cgu_mii_tx_clk_config(struct sja1105_private *priv,
+					 int port, int mii_mode)
+{
+#define BUF_LEN 4
+	u8 packed_buf[BUF_LEN];
+	struct sja1105_cgu_mii_ctrl mii_tx_clk;
+	const int mac_clk_sources[] = {
+		CLKSRC_MII0_TX_CLK,
+		CLKSRC_MII1_TX_CLK,
+		CLKSRC_MII2_TX_CLK,
+		CLKSRC_MII3_TX_CLK,
+		CLKSRC_MII4_TX_CLK,
+	};
+	const int phy_clk_sources[] = {
+		CLKSRC_IDIV0,
+		CLKSRC_IDIV1,
+		CLKSRC_IDIV2,
+		CLKSRC_IDIV3,
+		CLKSRC_IDIV4,
+	};
+	int clksrc;
+
+	if (mii_mode == XMII_MAC)
+		clksrc = mac_clk_sources[port];
+	else
+		clksrc = phy_clk_sources[port];
+
+	/* Payload for packed_buf */
+	mii_tx_clk.clksrc    = clksrc;
+	mii_tx_clk.autoblock = 1;  /* Autoblock clk while changing clksrc */
+	mii_tx_clk.pd        = 0;  /* Power Down off => enabled */
+	sja1105_cgu_mii_control_packing(packed_buf, &mii_tx_clk, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->mii_tx_clk[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int
+sja1105_cgu_mii_rx_clk_config(struct sja1105_private *priv, int port)
+{
+#define BUF_LEN 4
+	u8 packed_buf[BUF_LEN];
+	struct sja1105_cgu_mii_ctrl mii_rx_clk;
+	const int clk_sources[] = {
+		CLKSRC_MII0_RX_CLK,
+		CLKSRC_MII1_RX_CLK,
+		CLKSRC_MII2_RX_CLK,
+		CLKSRC_MII3_RX_CLK,
+		CLKSRC_MII4_RX_CLK,
+	};
+
+	/* Payload for packed_buf */
+	mii_rx_clk.clksrc    = clk_sources[port];
+	mii_rx_clk.autoblock = 1;  /* Autoblock clk while changing clksrc */
+	mii_rx_clk.pd        = 0;  /* Power Down off => enabled */
+	sja1105_cgu_mii_control_packing(packed_buf, &mii_rx_clk, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->mii_rx_clk[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int
+sja1105_cgu_mii_ext_tx_clk_config(struct sja1105_private *priv, int port)
+{
+#define BUF_LEN 4
+	u8 packed_buf[BUF_LEN];
+	struct sja1105_cgu_mii_ctrl mii_ext_tx_clk;
+	const int clk_sources[] = {
+		CLKSRC_IDIV0,
+		CLKSRC_IDIV1,
+		CLKSRC_IDIV2,
+		CLKSRC_IDIV3,
+		CLKSRC_IDIV4,
+	};
+
+	/* Payload for packed_buf */
+	mii_ext_tx_clk.clksrc    = clk_sources[port];
+	mii_ext_tx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */
+	mii_ext_tx_clk.pd        = 0; /* Power Down off => enabled */
+	sja1105_cgu_mii_control_packing(packed_buf, &mii_ext_tx_clk, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->mii_ext_tx_clk[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int
+sja1105_cgu_mii_ext_rx_clk_config(struct sja1105_private *priv, int port)
+{
+#define BUF_LEN 4
+	u8 packed_buf[BUF_LEN];
+	struct sja1105_cgu_mii_ctrl mii_ext_rx_clk;
+	const int clk_sources[] = {
+		CLKSRC_IDIV0,
+		CLKSRC_IDIV1,
+		CLKSRC_IDIV2,
+		CLKSRC_IDIV3,
+		CLKSRC_IDIV4,
+	};
+
+	/* Payload for packed_buf */
+	mii_ext_rx_clk.clksrc    = clk_sources[port];
+	mii_ext_rx_clk.autoblock = 1; /* Autoblock clk while changing clksrc */
+	mii_ext_rx_clk.pd        = 0; /* Power Down off => enabled */
+	sja1105_cgu_mii_control_packing(packed_buf, &mii_ext_rx_clk, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->mii_ext_rx_clk[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int mii_clocking_setup(struct sja1105_private *priv, int port,
+			      int mii_mode)
+{
+	struct device *dev = priv->ds->dev;
+	int rc;
+
+	if (mii_mode != XMII_MAC && mii_mode != XMII_PHY)
+		return -EINVAL;
+
+	dev_dbg(dev, "Configuring MII-%s clocking\n",
+		(mii_mode == XMII_MAC) ? "MAC" : "PHY");
+	/* If mii_mode is MAC, disable IDIV.
+	 * If mii_mode is PHY, enable IDIV and configure for a 1/1 divider.
+	 */
+	rc = sja1105_cgu_idiv_config(priv, port, (mii_mode == XMII_PHY), 1);
+	if (rc < 0)
+		return rc;
+
+	/* Configure CLKSRC of MII_TX_CLK_n:
+	 *   * If mii_mode is MAC, select TX_CLK_n
+	 *   * If mii_mode is PHY, select IDIV_n
+	 */
+	rc = sja1105_cgu_mii_tx_clk_config(priv, port, mii_mode);
+	if (rc < 0)
+		return rc;
+
+	/* Configure CLKSRC of MII_RX_CLK_n: select RX_CLK_n */
+	rc = sja1105_cgu_mii_rx_clk_config(priv, port);
+	if (rc < 0)
+		return rc;
+
+	if (mii_mode == XMII_PHY) {
+		/* Per the MII spec, the PHY (which is us) drives the
+		 * TX_CLK pin.
+		 */
+
+		/* Configure CLKSRC of EXT_TX_CLK_n: select IDIV_n */
+		rc = sja1105_cgu_mii_ext_tx_clk_config(priv, port);
+		if (rc < 0)
+			return rc;
+
+		/* Configure CLKSRC of EXT_RX_CLK_n: select IDIV_n */
+		rc = sja1105_cgu_mii_ext_rx_clk_config(priv, port);
+		if (rc < 0)
+			return rc;
+	}
+	return 0;
+}
+
+static void
+sja1105_cgu_pll_control_packing(void *buf, struct sja1105_cgu_pll_ctrl *cmd,
+				enum packing_op op)
+{
+	const int size = 4;
+
+	if (op == UNPACK)
+		memset(cmd, 0, sizeof(*cmd));
+	else
+		memset(buf, 0, size);
+
+	sja1105_packing(buf, &cmd->pllclksrc, 28, 24, size, op);
+	sja1105_packing(buf, &cmd->msel,      23, 16, size, op);
+	sja1105_packing(buf, &cmd->autoblock, 11, 11, size, op);
+	sja1105_packing(buf, &cmd->psel,       9,  8, size, op);
+	sja1105_packing(buf, &cmd->direct,     7,  7, size, op);
+	sja1105_packing(buf, &cmd->fbsel,      6,  6, size, op);
+	sja1105_packing(buf, &cmd->bypass,     1,  1, size, op);
+	sja1105_packing(buf, &cmd->pd,         0,  0, size, op);
+	/* P/Q/R/S only, but packing zeroes for E/T doesn't hurt */
+	sja1105_packing(buf, &cmd->nsel,      13, 12, size, op);
+	sja1105_packing(buf, &cmd->p23en,      2,  2, size, op);
+}
+
+static int sja1105_cgu_rgmii_tx_clk_config(struct sja1105_private *priv,
+					   int port, int speed)
+{
+#define BUF_LEN 4
+	struct sja1105_cgu_mii_ctrl txc;
+	u8 packed_buf[BUF_LEN];
+	int clksrc;
+
+	if (speed == SJA1105_SPEED_1000MBPS) {
+		clksrc = CLKSRC_PLL0;
+	} else {
+		int clk_sources[] = {CLKSRC_IDIV0, CLKSRC_IDIV1, CLKSRC_IDIV2,
+				     CLKSRC_IDIV3, CLKSRC_IDIV4};
+		clksrc = clk_sources[port];
+	}
+
+	/* RGMII: 125MHz for 1000, 25MHz for 100, 2.5MHz for 10 */
+	txc.clksrc = clksrc;
+	/* Autoblock clk while changing clksrc */
+	txc.autoblock = 1;
+	/* Power Down off => enabled */
+	txc.pd = 0;
+	sja1105_cgu_mii_control_packing(packed_buf, &txc, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->rgmii_txc[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+/* AGU */
+static void
+sja1105_cfg_pad_mii_tx_packing(void *buf, struct sja1105_cfg_pad_mii_tx *cmd,
+			       enum packing_op op)
+{
+	const int size = 4;
+
+	if (op == UNPACK)
+		memset(cmd, 0, sizeof(*cmd));
+	else
+		memset(buf, 0, size);
+
+	sja1105_packing(buf, &cmd->d32_os,   28, 27, size, op);
+	sja1105_packing(buf, &cmd->d32_ipud, 25, 24, size, op);
+	sja1105_packing(buf, &cmd->d10_os,   20, 19, size, op);
+	sja1105_packing(buf, &cmd->d10_ipud, 17, 16, size, op);
+	sja1105_packing(buf, &cmd->ctrl_os,  12, 11, size, op);
+	sja1105_packing(buf, &cmd->ctrl_ipud, 9,  8, size, op);
+	sja1105_packing(buf, &cmd->clk_os,    4,  3, size, op);
+	sja1105_packing(buf, &cmd->clk_ih,    2,  2, size, op);
+	sja1105_packing(buf, &cmd->clk_ipud,  1,  0, size, op);
+}
+
+static int sja1105_rgmii_cfg_pad_tx_config(struct sja1105_private *priv,
+					   int port)
+{
+#define BUF_LEN 4
+	u8 packed_buf[BUF_LEN];
+	struct sja1105_cfg_pad_mii_tx pad_mii_tx;
+
+	/* Payload */
+	pad_mii_tx.d32_os    = 3; /* TXD[3:2] output stage: */
+				  /*          high noise/high speed */
+	pad_mii_tx.d32_ipud  = 2; /* TXD[3:2] input stage: */
+				  /*          plain input (default) */
+	pad_mii_tx.d10_os    = 3; /* TXD[1:0] output stage: */
+				  /*          high noise/high speed */
+	pad_mii_tx.d10_ipud  = 2; /* TXD[1:0] input stage: */
+				  /*          plain input (default) */
+	pad_mii_tx.ctrl_os   = 3; /* TX_CTL / TX_ER output stage */
+	pad_mii_tx.ctrl_ipud = 2; /* TX_CTL / TX_ER input stage (default) */
+	pad_mii_tx.clk_os    = 3; /* TX_CLK output stage */
+	pad_mii_tx.clk_ih    = 0; /* TX_CLK input hysteresis (default) */
+	pad_mii_tx.clk_ipud  = 2; /* TX_CLK input stage (default) */
+	sja1105_cfg_pad_mii_tx_packing(packed_buf, &pad_mii_tx, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->rgmii_pad_mii_tx[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int rgmii_clocking_setup(struct sja1105_private *priv, int port)
+{
+	struct device *dev = priv->ds->dev;
+	struct sja1105_table *mac;
+	int speed;
+	int rc;
+
+	mac = &priv->static_config.tables[BLK_IDX_MAC_CONFIG];
+	speed = ((struct sja1105_mac_config_entry *)mac->entries)[port].speed;
+
+	dev_dbg(dev, "Configuring port %d RGMII at speed %dMbps\n",
+		port, speed);
+
+	switch (speed) {
+	case SJA1105_SPEED_1000MBPS:
+		/* 1000Mbps, IDIV disabled, divide by 1 */
+		rc = sja1105_cgu_idiv_config(priv, port, false, 1);
+		break;
+	case SJA1105_SPEED_100MBPS:
+		/* 100Mbps, IDIV enabled, divide by 1 */
+		rc = sja1105_cgu_idiv_config(priv, port, true, 1);
+		break;
+	case SJA1105_SPEED_10MBPS:
+		/* 10Mbps, IDIV enabled, divide by 10 */
+		rc = sja1105_cgu_idiv_config(priv, port, true, 10);
+		break;
+	case SJA1105_SPEED_AUTO:
+		/* Skip CGU configuration if there is no speed available
+		 * (e.g. link is not established yet)
+		 */
+		dev_dbg(dev, "Speed not available, skipping CGU config\n");
+		rc = 0;
+		goto out;
+	default:
+		rc = -EINVAL;
+	}
+
+	if (rc < 0) {
+		dev_err(dev, "Failed to configure idiv\n");
+		goto out;
+	}
+	rc = sja1105_cgu_rgmii_tx_clk_config(priv, port, speed);
+	if (rc < 0) {
+		dev_err(dev, "Failed to configure RGMII Tx clock\n");
+		goto out;
+	}
+	rc = sja1105_rgmii_cfg_pad_tx_config(priv, port);
+	if (rc < 0) {
+		dev_err(dev, "Failed to configure Tx pad registers\n");
+		goto out;
+	}
+out:
+	return rc;
+}
+
+static int sja1105_cgu_rmii_ref_clk_config(struct sja1105_private *priv,
+					   int port)
+{
+#define BUF_LEN 4
+	struct sja1105_cgu_mii_ctrl ref_clk;
+	u8 packed_buf[BUF_LEN];
+	const int clk_sources[] = {
+		CLKSRC_MII0_TX_CLK,
+		CLKSRC_MII1_TX_CLK,
+		CLKSRC_MII2_TX_CLK,
+		CLKSRC_MII3_TX_CLK,
+		CLKSRC_MII4_TX_CLK,
+	};
+
+	/* Payload for packed_buf */
+	ref_clk.clksrc    = clk_sources[port];
+	ref_clk.autoblock = 1;      /* Autoblock clk while changing clksrc */
+	ref_clk.pd        = 0;      /* Power Down off => enabled */
+	sja1105_cgu_mii_control_packing(packed_buf, &ref_clk, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->rmii_ref_clk[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int
+sja1105_cgu_rmii_ext_tx_clk_config(struct sja1105_private *priv, int port)
+{
+#define BUF_LEN 4
+	struct sja1105_cgu_mii_ctrl ext_tx_clk;
+	u8 packed_buf[BUF_LEN];
+
+	/* Payload for packed_buf */
+	ext_tx_clk.clksrc    = CLKSRC_PLL1;
+	ext_tx_clk.autoblock = 1;   /* Autoblock clk while changing clksrc */
+	ext_tx_clk.pd        = 0;   /* Power Down off => enabled */
+	sja1105_cgu_mii_control_packing(packed_buf, &ext_tx_clk, PACK);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					   priv->regs->rmii_ext_tx_clk[port],
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int sja1105_cgu_rmii_pll_config(struct sja1105_private *priv)
+{
+#define BUF_LEN 4
+	struct device *dev = priv->ds->dev;
+	struct sja1105_cgu_pll_ctrl pll;
+	u8 packed_buf[BUF_LEN];
+	int rc;
+
+	/* PLL1 must be enabled and output 50 MHz.
+	 * This is done by first writing 0x0A010941 to
+	 * the PLL_1_C register and then deasserting
+	 * power down (PD) with 0x0A010940.
+	 */
+
+	/* Step 1: PLL1 setup for 50 MHz */
+	pll.pllclksrc = 0xA;
+	pll.msel      = 0x1;
+	pll.autoblock = 0x1;
+	pll.psel      = 0x1;
+	pll.direct    = 0x0;
+	pll.fbsel     = 0x1;
+	pll.bypass    = 0x0;
+	pll.pd        = 0x1;
+	/* P/Q/R/S only */
+	pll.nsel      = 0x0; /* PLL pre-divider is 1 (nsel + 1) */
+	pll.p23en     = 0x0; /* disable 120 and 240 degree phase PLL outputs */
+
+	sja1105_cgu_pll_control_packing(packed_buf, &pll, PACK);
+	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					 priv->regs->rmii_pll1,
+					 packed_buf, BUF_LEN);
+	if (rc < 0) {
+		dev_err(dev, "failed to configure PLL1 for 50MHz\n");
+		goto out;
+	}
+
+	/* Step 2: Enable PLL1 */
+	pll.pd = 0x0;
+
+	sja1105_cgu_pll_control_packing(packed_buf, &pll, PACK);
+	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+					 priv->regs->rmii_pll1,
+					 packed_buf, BUF_LEN);
+	if (rc < 0) {
+		dev_err(dev, "failed to enable PLL1\n");
+		goto out;
+	}
+out:
+	return rc;
+#undef BUF_LEN
+}
+
+static int rmii_clocking_setup(struct sja1105_private *priv, int port,
+			       int rmii_mode)
+{
+	struct device *dev = priv->ds->dev;
+	int rc;
+
+	if (rmii_mode != XMII_MAC && rmii_mode != XMII_PHY) {
+		dev_err(dev, "RMII mode must either be MAC or PHY\n");
+		return -EINVAL;
+	}
+	dev_dbg(dev, "Configuring RMII-%s clocking\n",
+		(rmii_mode == XMII_MAC) ? "MAC" : "PHY");
+	/* AH1601.pdf chapter 2.5.1. Sources */
+	if (rmii_mode == XMII_MAC) {
+		/* Configure and enable PLL1 for 50 MHz output */
+		rc = sja1105_cgu_rmii_pll_config(priv);
+		if (rc < 0)
+			return rc;
+	}
+	/* Disable IDIV for this port */
+	rc = sja1105_cgu_idiv_config(priv, port, false, 1);
+	if (rc < 0)
+		return rc;
+	/* Source to sink mappings */
+	rc = sja1105_cgu_rmii_ref_clk_config(priv, port);
+	if (rc < 0)
+		return rc;
+	if (rmii_mode == XMII_MAC) {
+		rc = sja1105_cgu_rmii_ext_tx_clk_config(priv, port);
+		if (rc < 0)
+			return rc;
+	}
+	return 0;
+}
+
+/* TODO:
+ * Standard clause 22 registers for the internal SGMII PCS are
+ * memory-mapped starting at SPI address 0x1F0000.
+ * The SGMII port should already have a basic initialization done
+ * through the static configuration tables.
+ * If any further SGMII initialization steps (autonegotiation or checking the
+ * link status) need to be done, they might as well be added here.
+ */
+static int sgmii_clocking_setup(struct sja1105_private *priv, int port)
+{
+	struct device *dev = priv->ds->dev;
+
+	dev_err(dev, "TODO: Configure SGMII clocking\n");
+	return 0;
+}
+
+int sja1105_clocking_setup_port(struct sja1105_private *priv, int port)
+{
+	struct sja1105_xmii_params_entry *mii;
+	struct device *dev = priv->ds->dev;
+	int rc = 0;
+
+	mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
+
+	switch (mii->xmii_mode[port]) {
+	case XMII_MODE_MII:
+		rc = mii_clocking_setup(priv, port, mii->phy_mac[port]);
+		break;
+	case XMII_MODE_RMII:
+		rc = rmii_clocking_setup(priv, port, mii->phy_mac[port]);
+		break;
+	case XMII_MODE_RGMII:
+		rc = rgmii_clocking_setup(priv, port);
+		break;
+	case XMII_MODE_SGMII:
+		if (!IS_PQRS(priv->device_id)) {
+			dev_err(dev, "SGMII mode not supported!\n");
+			rc = -EINVAL;
+			goto out;
+		}
+		if ((IS_R(priv->device_id, priv->part_nr) ||
+		     IS_S(priv->device_id, priv->part_nr)) && port == 4)
+			rc = sgmii_clocking_setup(priv, port);
+		else
+			dev_info(dev, "port is tri-stated\n");
+		break;
+	default:
+		dev_err(dev, "Invalid MII mode specified: %llx\n",
+			mii->xmii_mode[port]);
+		rc = -EINVAL;
+	}
+out:
+	if (rc)
+		dev_err(dev, "Clocking setup for port %d failed: %d\n",
+			port, rc);
+	return rc;
+}
+
+int sja1105_clocking_setup(struct sja1105_private *priv)
+{
+	int port, rc;
+
+	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
+		rc = sja1105_clocking_setup_port(priv, port);
+		if (rc < 0)
+			return rc;
+	}
+	return 0;
+}
+
diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
new file mode 100644
index 000000000000..3dc928e5a40a
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
@@ -0,0 +1,607 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include "sja1105.h"
+
+#define SIZE_DYN_CMD                     4
+#define SIZE_MAC_CONFIG_DYN_ENTRY_ET     SIZE_DYN_CMD
+#define SIZE_VL_LOOKUP_DYN_CMD_ET        SIZE_DYN_CMD
+#define SIZE_VL_LOOKUP_DYN_CMD_PQRS     (SIZE_DYN_CMD + SIZE_VL_LOOKUP_ENTRY)
+#define SIZE_L2_LOOKUP_DYN_CMD_ET       (SIZE_DYN_CMD + SIZE_L2_LOOKUP_ENTRY_ET)
+#define SIZE_L2_LOOKUP_DYN_CMD_PQRS     (SIZE_DYN_CMD + SIZE_L2_LOOKUP_ENTRY_PQRS)
+#define SIZE_VLAN_LOOKUP_DYN_CMD        (SIZE_DYN_CMD + 4 + SIZE_VLAN_LOOKUP_ENTRY)
+#define SIZE_L2_FORWARDING_DYN_CMD      (SIZE_DYN_CMD + SIZE_L2_FORWARDING_ENTRY)
+#define SIZE_MAC_CONFIG_DYN_CMD_ET      (SIZE_DYN_CMD + SIZE_MAC_CONFIG_DYN_ENTRY_ET)
+#define SIZE_MAC_CONFIG_DYN_CMD_PQRS    (SIZE_DYN_CMD + SIZE_MAC_CONFIG_ENTRY_PQRS)
+#define SIZE_L2_LOOKUP_PARAMS_DYN_CMD_ET SIZE_DYN_CMD
+#define SIZE_GENERAL_PARAMS_DYN_CMD_ET   SIZE_DYN_CMD
+#define SIZE_RETAGGING_DYN_CMD_ET       (SIZE_DYN_CMD + SIZE_RETAGGING_ENTRY)
+#define MAX_DYN_CMD_SIZE                 SIZE_MAC_CONFIG_DYN_CMD_PQRS
+
+static void
+sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+			      enum packing_op op)
+{
+	sja1105_packing(buf, &cmd->valid,   31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(buf, &cmd->errors,  30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(buf, &cmd->rdwrset, 29, 29, SIZE_DYN_CMD, op);
+	sja1105_packing(buf, &cmd->index,    9,  0, SIZE_DYN_CMD, op);
+}
+
+static size_t sja1105et_vl_lookup_entry_packing(void *buf, void *entry_ptr,
+						enum packing_op op)
+{
+	struct sja1105_vl_lookup_entry *entry = entry_ptr;
+	const int size = SIZE_VL_LOOKUP_DYN_CMD_ET;
+
+	sja1105_packing(buf, &entry->egrmirr,  21, 17, size, op);
+	sja1105_packing(buf, &entry->ingrmirr, 16, 16, size, op);
+	return size;
+}
+
+static void
+sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				  enum packing_op op)
+{
+	u8 *p = buf + SIZE_L2_LOOKUP_ENTRY_PQRS;
+
+	sja1105_packing(p, &cmd->valid,    31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->rdwrset,  30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->errors,   29, 29, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->valident, 27, 27, SIZE_DYN_CMD, op);
+	/* Hack - The hardware takes the 'index' field within
+	 * struct sja1105_l2_lookup_entry as the index on which this command
+	 * will operate. However, it will ignore everything else, so 'index'
+	 * is logically part of the command, but physically part of the entry.
+	 * Populate the 'index' entry field from within the command callback,
+	 * so that our API doesn't need to ask for a full-blown entry
+	 * structure when e.g. a delete is requested.
+	 */
+	sja1105_packing(buf, &cmd->index, 29, 20, SIZE_L2_LOOKUP_ENTRY_PQRS, op);
+	/* TODO hostcmd */
+}
+
+static void
+sja1105et_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				enum packing_op op)
+{
+	u8 *p = buf + SIZE_L2_LOOKUP_ENTRY_ET;
+
+	sja1105_packing(p, &cmd->valid,    31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->rdwrset,  30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->errors,   29, 29, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->valident, 27, 27, SIZE_DYN_CMD, op);
+	/* Hack - see comments above. */
+	sja1105_packing(buf, &cmd->index, 29, 20, SIZE_L2_LOOKUP_ENTRY_ET, op);
+}
+
+static void
+sja1105et_mgmt_route_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				 enum packing_op op)
+{
+	u8 *p = buf + SIZE_L2_LOOKUP_ENTRY_ET;
+	u64 mgmtroute = 1;
+
+	sja1105et_l2_lookup_cmd_packing(buf, cmd, op);
+	if (op == PACK)
+		sja1105_pack(p, &mgmtroute, 26, 26, SIZE_DYN_CMD);
+}
+
+static size_t sja1105et_mgmt_route_entry_packing(void *buf, void *entry_ptr,
+						 enum packing_op op)
+{
+	struct sja1105_mgmt_entry *entry = entry_ptr;
+	const size_t size = SIZE_L2_LOOKUP_ENTRY_ET;
+
+	/* UM10944: To specify if a PTP egress timestamp shall be captured on
+	 * each port upon transmission of the frame, the LSB of VLANID in the
+	 * ENTRY field provided by the host must be set.
+	 * Bit 1 of VLANID then specifies the register in which the timestamp
+	 * for this port is stored.
+	 */
+	sja1105_packing(buf, &entry->tsreg,     85, 85, size, op);
+	sja1105_packing(buf, &entry->takets,    84, 84, size, op);
+	sja1105_packing(buf, &entry->macaddr,   83, 36, size, op);
+	sja1105_packing(buf, &entry->destports, 35, 31, size, op);
+	sja1105_packing(buf, &entry->enfport,   30, 30, size, op);
+	return size;
+}
+
+/* In E/T, the entry is at addresses 0x27-0x28. There is a 4-byte gap at
+ * 0x29, and the command is at 0x2a. Similarly, in P/Q/R/S there is a
+ * one-register gap between the entry (0x2d, 0x2e) and the command (0x30).
+ */
+static void
+sja1105_vlan_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				enum packing_op op)
+{
+	u8 *p = buf + SIZE_VLAN_LOOKUP_ENTRY + 4;
+
+	sja1105_packing(p, &cmd->valid,    31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->rdwrset,  30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->valident, 27, 27, SIZE_DYN_CMD, op);
+	/* Hack - see comments above, applied for 'vlanid' field of
+	 * struct sja1105_vlan_lookup_entry.
+	 */
+	sja1105_packing(buf, &cmd->index, 38, 27, SIZE_VLAN_LOOKUP_ENTRY, op);
+}
+
+static void
+sja1105_l2_forwarding_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				  enum packing_op op)
+{
+	u8 *p = buf + SIZE_L2_FORWARDING_ENTRY;
+
+	sja1105_packing(p, &cmd->valid,   31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->errors,  30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->rdwrset, 29, 29, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->index,    4,  0, SIZE_DYN_CMD, op);
+}
+
+static void
+sja1105et_mac_config_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				 enum packing_op op)
+{
+	/* Yup, user manual definitions are reversed */
+	u8 *reg1 = buf + 4;
+
+	sja1105_packing(reg1, &cmd->valid, 31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(reg1, &cmd->index, 26, 24, SIZE_DYN_CMD, op);
+}
+
+static size_t sja1105et_mac_config_entry_packing(void *buf, void *entry_ptr,
+						 enum packing_op op)
+{
+	struct sja1105_mac_config_entry *entry = entry_ptr;
+	const int size = SIZE_MAC_CONFIG_DYN_ENTRY_ET;
+	/* Yup, user manual definitions are reversed */
+	u8 *reg1 = buf + 4;
+	u8 *reg2 = buf;
+
+	sja1105_packing(reg1, &entry->speed,     30, 29, size, op);
+	sja1105_packing(reg1, &entry->drpdtag,   23, 23, size, op);
+	sja1105_packing(reg1, &entry->drpuntag,  22, 22, size, op);
+	sja1105_packing(reg1, &entry->retag,     21, 21, size, op);
+	sja1105_packing(reg1, &entry->dyn_learn, 20, 20, size, op);
+	sja1105_packing(reg1, &entry->egress,    19, 19, size, op);
+	sja1105_packing(reg1, &entry->ingress,   18, 18, size, op);
+	sja1105_packing(reg1, &entry->ing_mirr,  17, 17, size, op);
+	sja1105_packing(reg1, &entry->egr_mirr,  16, 16, size, op);
+	sja1105_packing(reg1, &entry->vlanprio,  14, 12, size, op);
+	sja1105_packing(reg1, &entry->vlanid,    11,  0, size, op);
+	sja1105_packing(reg2, &entry->tp_delin,  31, 16, size, op);
+	sja1105_packing(reg2, &entry->tp_delout, 15,  0, size, op);
+	/* MAC configuration table entries which can't be reconfigured:
+	 * top, base, enabled, ifg, maxage, drpnona664
+	 */
+	/* Bogus return value, not used anywhere */
+	return 0;
+}
+
+static void
+sja1105pqrs_mac_config_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				   enum packing_op op)
+{
+	u8 *p = buf + SIZE_MAC_CONFIG_ENTRY_PQRS;
+
+	sja1105_packing(p, &cmd->valid,   31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->errors,  30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->rdwrset, 29, 29, SIZE_DYN_CMD, op);
+	sja1105_packing(p, &cmd->index,    2,  0, SIZE_DYN_CMD, op);
+}
+
+static void
+sja1105et_l2_lookup_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				       enum packing_op op)
+{
+	sja1105_packing(buf, &cmd->valid, 31, 31,
+			SIZE_L2_LOOKUP_PARAMS_DYN_CMD_ET, op);
+}
+
+static size_t
+sja1105et_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op)
+{
+	struct sja1105_l2_lookup_params_entry *entry = entry_ptr;
+
+	sja1105_packing(buf, &entry->poly, 7, 0,
+			SIZE_L2_LOOKUP_PARAMS_DYN_CMD_ET, op);
+	/* Bogus return value, not used anywhere */
+	return 0;
+}
+
+static void
+sja1105et_general_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				     enum packing_op op)
+{
+	const int size = SIZE_GENERAL_PARAMS_DYN_CMD_ET;
+
+	sja1105_packing(buf, &cmd->valid,  31, 31, size, op);
+	sja1105_packing(buf, &cmd->errors, 30, 30, size, op);
+}
+
+static size_t
+sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
+				       enum packing_op op)
+{
+	struct sja1105_general_params_entry *entry = entry_ptr;
+	const int size = SIZE_GENERAL_PARAMS_DYN_CMD_ET;
+
+	sja1105_packing(buf, &entry->mirr_port, 2, 0, size, op);
+	/* Bogus return value, not used anywhere */
+	return 0;
+}
+
+static void
+sja1105_retagging_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+			      enum packing_op op)
+{
+	sja1105_packing(buf, &cmd->valid,    31, 31, SIZE_DYN_CMD, op);
+	sja1105_packing(buf, &cmd->errors,   30, 30, SIZE_DYN_CMD, op);
+	sja1105_packing(buf, &cmd->valident, 29, 29, SIZE_DYN_CMD, op);
+	sja1105_packing(buf, &cmd->index,     5,  0, SIZE_DYN_CMD, op);
+}
+
+#define OP_READ  BIT(0)
+#define OP_WRITE BIT(1)
+#define OP_DEL   BIT(2)
+
+/* SJA1105E/T: First generation */
+static struct sja1105_dynamic_table_ops sja1105et_table_ops[BLK_IDX_MAX_DYN] = {
+	[BLK_IDX_SCHEDULE] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = { 0 },
+	[BLK_IDX_VL_LOOKUP] = {
+		.entry_packing = sja1105et_vl_lookup_entry_packing,
+		.cmd_packing = sja1105_vl_lookup_cmd_packing,
+		.access = OP_WRITE,
+		.max_entry_count = MAX_VL_LOOKUP_COUNT,
+		.packed_size = SIZE_VL_LOOKUP_DYN_CMD_ET,
+		.addr = 0x35,
+	},
+	[BLK_IDX_VL_POLICING] = { 0 },
+	[BLK_IDX_VL_FORWARDING] = { 0 },
+	[BLK_IDX_L2_LOOKUP] = {
+		.entry_packing = sja1105et_l2_lookup_entry_packing,
+		.cmd_packing = sja1105et_l2_lookup_cmd_packing,
+		.access = (OP_READ | OP_WRITE | OP_DEL),
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+		.packed_size = SIZE_L2_LOOKUP_DYN_CMD_ET,
+		.addr = 0x20,
+	},
+	[BLK_IDX_MGMT_ROUTE] = {
+		.entry_packing = sja1105et_mgmt_route_entry_packing,
+		.cmd_packing = sja1105et_mgmt_route_cmd_packing,
+		.access = (OP_READ | OP_WRITE),
+		.max_entry_count = SJA1105_NUM_PORTS,
+		.packed_size = SIZE_L2_LOOKUP_DYN_CMD_ET,
+		.addr = 0x20,
+	},
+	[BLK_IDX_L2_POLICING] = { 0 },
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.entry_packing = sja1105_vlan_lookup_entry_packing,
+		.cmd_packing = sja1105_vlan_lookup_cmd_packing,
+		.access = (OP_WRITE | OP_DEL),
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+		.packed_size = SIZE_VLAN_LOOKUP_DYN_CMD,
+		.addr = 0x27,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.entry_packing = sja1105_l2_forwarding_entry_packing,
+		.cmd_packing = sja1105_l2_forwarding_cmd_packing,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+		.access = OP_WRITE,
+		.packed_size = SIZE_L2_FORWARDING_DYN_CMD,
+		.addr = 0x24,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.entry_packing = sja1105et_mac_config_entry_packing,
+		.cmd_packing = sja1105et_mac_config_cmd_packing,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+		.access = OP_WRITE,
+		.packed_size = SIZE_MAC_CONFIG_DYN_CMD_ET,
+		.addr = 0x36,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { 0 },
+	[BLK_IDX_VL_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.entry_packing = sja1105et_l2_lookup_params_entry_packing,
+		.cmd_packing = sja1105et_l2_lookup_params_cmd_packing,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+		.access = OP_WRITE,
+		.packed_size = SIZE_L2_LOOKUP_PARAMS_DYN_CMD_ET,
+		.addr = 0x38,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = { 0 },
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.entry_packing = sja1105et_general_params_entry_packing,
+		.cmd_packing = sja1105et_general_params_cmd_packing,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+		.access = OP_WRITE,
+		.packed_size = SIZE_GENERAL_PARAMS_DYN_CMD_ET,
+		.addr = 0x34,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.entry_packing = sja1105_retagging_entry_packing,
+		.cmd_packing = sja1105_retagging_cmd_packing,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+		.access = (OP_WRITE | OP_DEL),
+		.packed_size = SIZE_RETAGGING_DYN_CMD_ET,
+		.addr = 0x31,
+	},
+	[BLK_IDX_XMII_PARAMS] = { 0 },
+	[BLK_IDX_SGMII] = { 0 },
+};
+
+/* SJA1105P/Q/R/S: Second generation: TODO */
+static struct sja1105_dynamic_table_ops sja1105pqrs_table_ops[BLK_IDX_MAX_DYN] = {
+	[BLK_IDX_SCHEDULE] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = { 0 },
+	[BLK_IDX_VL_LOOKUP] = {
+		.entry_packing = sja1105_vl_lookup_entry_packing,
+		.cmd_packing = sja1105_vl_lookup_cmd_packing,
+		.access = (OP_READ | OP_WRITE),
+		.max_entry_count = MAX_VL_LOOKUP_COUNT,
+		.packed_size = SIZE_VL_LOOKUP_DYN_CMD_PQRS,
+		.addr = 0x47,
+	},
+	[BLK_IDX_VL_POLICING] = { 0 },
+	[BLK_IDX_VL_FORWARDING] = { 0 },
+	[BLK_IDX_L2_LOOKUP] = {
+		.entry_packing = sja1105pqrs_l2_lookup_entry_packing,
+		.cmd_packing = sja1105pqrs_l2_lookup_cmd_packing,
+		.access = (OP_READ | OP_WRITE | OP_DEL),
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+		.packed_size = SIZE_L2_LOOKUP_DYN_CMD_ET,
+		.addr = 0x24,
+	},
+	[BLK_IDX_L2_POLICING] = { 0 },
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.entry_packing = sja1105_vlan_lookup_entry_packing,
+		.cmd_packing = sja1105_vlan_lookup_cmd_packing,
+		.access = (OP_READ | OP_WRITE | OP_DEL),
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+		.packed_size = SIZE_VLAN_LOOKUP_DYN_CMD,
+		.addr = 0x2D,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.entry_packing = sja1105_l2_forwarding_entry_packing,
+		.cmd_packing = sja1105_l2_forwarding_cmd_packing,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+		.access = OP_WRITE,
+		.packed_size = SIZE_L2_FORWARDING_DYN_CMD,
+		.addr = 0x2A,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.entry_packing = sja1105pqrs_mac_config_entry_packing,
+		.cmd_packing = sja1105pqrs_mac_config_cmd_packing,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+		.access = (OP_READ | OP_WRITE),
+		.packed_size = SIZE_MAC_CONFIG_DYN_CMD_PQRS,
+		.addr = 0x4B,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { 0 },
+	[BLK_IDX_VL_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.entry_packing = sja1105et_l2_lookup_params_entry_packing,
+		.cmd_packing = sja1105et_l2_lookup_params_cmd_packing,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+		.access = (OP_READ | OP_WRITE),
+		.packed_size = SIZE_L2_LOOKUP_PARAMS_DYN_CMD_ET,
+		.addr = 0x38,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = { 0 },
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.entry_packing = sja1105et_general_params_entry_packing,
+		.cmd_packing = sja1105et_general_params_cmd_packing,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+		.access = OP_WRITE,
+		.packed_size = SIZE_GENERAL_PARAMS_DYN_CMD_ET,
+		.addr = 0x34,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.entry_packing = sja1105_retagging_entry_packing,
+		.cmd_packing = sja1105_retagging_cmd_packing,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+		.access = (OP_WRITE | OP_DEL),
+		.packed_size = SIZE_RETAGGING_DYN_CMD_ET,
+		.addr = 0x31,
+	},
+	[BLK_IDX_XMII_PARAMS] = { 0 },
+	[BLK_IDX_SGMII] = { 0 },
+};
+
+int sja1105_dynamic_config_read(struct sja1105_private *priv,
+				enum sja1105_blk_idx blk_idx,
+				int index, void *entry)
+{
+	const struct sja1105_dynamic_table_ops *ops;
+	struct sja1105_dyn_cmd cmd = { 0 };
+	/* SPI payload buffer */
+	u8 packed_buf[MAX_DYN_CMD_SIZE];
+	int retries = 3;
+	int rc;
+
+	if (blk_idx >= BLK_IDX_MAX_DYN)
+		return -ERANGE;
+
+	ops = &priv->dyn_ops[blk_idx];
+
+	if (index >= ops->max_entry_count)
+		return -ERANGE;
+	if (!(ops->access & OP_READ))
+		return -EOPNOTSUPP;
+	if (ops->packed_size > MAX_DYN_CMD_SIZE)
+		return -ERANGE;
+	if (!ops->cmd_packing)
+		return -EOPNOTSUPP;
+	if (!ops->entry_packing)
+		return -EOPNOTSUPP;
+
+	memset(packed_buf, 0, ops->packed_size);
+
+	cmd.valid = true; /* Trigger action on table entry */
+	cmd.rdwrset = SPI_READ; /* Action is read */
+	cmd.index = index;
+	ops->cmd_packing(packed_buf, &cmd, PACK);
+
+	/* Send SPI write operation: read config table entry */
+	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, ops->addr,
+					 packed_buf, ops->packed_size);
+	if (rc < 0)
+		return rc;
+
+	/* Loop until we have confirmation that hardware has finished
+	 * processing the command and has cleared the VALID field
+	 */
+	do {
+		memset(packed_buf, 0, ops->packed_size);
+
+		/* Retrieve the read operation's result */
+		rc = sja1105_spi_send_packed_buf(priv, SPI_READ, ops->addr,
+						 packed_buf, ops->packed_size);
+		if (rc < 0)
+			return rc;
+
+		memset(&cmd, 0, sizeof(cmd));
+		ops->cmd_packing(packed_buf, &cmd, UNPACK);
+		/* UM10944: [valident] will always be found cleared
+		 * during a read access with MGMTROUTE set.
+		 * So don't error out in that case.
+		 */
+		if (!cmd.valident && blk_idx != BLK_IDX_MGMT_ROUTE)
+			return -EINVAL;
+		cpu_relax();
+	} while (cmd.valid && --retries);
+
+	if (cmd.valid)
+		return -ETIMEDOUT;
+
+	/* Don't dereference a possibly NULL pointer - maybe the caller
+	 * only wanted to see whether the entry existed or not.
+	 */
+	if (entry)
+		ops->entry_packing(packed_buf, entry, UNPACK);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(sja1105_dynamic_config_read);
+
+int sja1105_dynamic_config_write(struct sja1105_private *priv,
+				 enum sja1105_blk_idx blk_idx,
+				 int index, void *entry, bool keep)
+{
+	const struct sja1105_dynamic_table_ops *ops;
+	struct sja1105_dyn_cmd cmd = { 0 };
+	/* SPI payload buffer */
+	u8 packed_buf[MAX_DYN_CMD_SIZE];
+	int rc;
+
+	if (blk_idx >= BLK_IDX_MAX_DYN)
+		return -ERANGE;
+
+	ops = &priv->dyn_ops[blk_idx];
+
+	if (index >= ops->max_entry_count)
+		return -ERANGE;
+	if (!(ops->access & OP_WRITE))
+		return -EOPNOTSUPP;
+	if (!keep && !(ops->access & OP_DEL))
+		return -EOPNOTSUPP;
+	if (ops->packed_size > MAX_DYN_CMD_SIZE)
+		return -ERANGE;
+
+	memset(packed_buf, 0, ops->packed_size);
+
+	cmd.valident = keep; /* If false, deletes entry */
+	cmd.valid = true; /* Trigger action on table entry */
+	cmd.rdwrset = SPI_WRITE; /* Action is write */
+	cmd.index = index;
+
+	if (!ops->cmd_packing)
+		return -EOPNOTSUPP;
+	ops->cmd_packing(packed_buf, &cmd, PACK);
+
+	if (!ops->entry_packing)
+		return -EOPNOTSUPP;
+	/* Don't dereference the potentially NULL entry pointer if the
+	 * request was just to delete a table entry. For cases where the
+	 * 'index' field is physically part of the entry structure and is
+	 * needed here, we deal with that in the cmd_packing callback.
+	 */
+	if (keep)
+		ops->entry_packing(packed_buf, entry, PACK);
+
+	/* Send SPI write operation: write config table entry */
+	rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, ops->addr,
+					 packed_buf, ops->packed_size);
+	if (rc < 0)
+		return rc;
+
+	memset(&cmd, 0, sizeof(cmd));
+	ops->cmd_packing(packed_buf, &cmd, UNPACK);
+	if (cmd.errors)
+		return -EINVAL;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(sja1105_dynamic_config_write);
+
+int sja1105_dynamic_config_init(struct sja1105_private *priv)
+{
+	const struct sja1105_dynamic_table_ops *ops;
+
+	if (IS_ET(priv->device_id))
+		ops = sja1105et_table_ops;
+	else if (IS_PQRS(priv->device_id))
+		ops = sja1105pqrs_table_ops;
+	else
+		return -EINVAL;
+
+	priv->dyn_ops = ops;
+	return 0;
+}
+
+static u8 crc8_add(u8 crc, u8 byte, u8 poly)
+{
+	int i;
+
+	for (i = 0; i < 8; i++) {
+		if ((crc ^ byte) & (1 << 7)) {
+			crc <<= 1;
+			crc ^= poly;
+		} else {
+			crc <<= 1;
+		}
+		byte <<= 1;
+	}
+	return crc;
+}
+
+/* CRC8 algorithm with non-reversed input, non-reversed output,
+ * no input xor and no output xor. Code customized for receiving
+ * the SJA1105 E/T FDB keys (vlanid, macaddr) as input. CRC polynomial
+ * is also received as argument in the Koopman notation that the switch
+ * hardware stores it in.
+ */
+u8 sja1105_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid)
+{
+	struct sja1105_l2_lookup_params_entry *l2_lookup_params =
+		priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS].entries;
+	u64 poly_koopman = l2_lookup_params->poly;
+	/* Convert polynomial from Koopman to 'normal' notation */
+	u8 poly = (u8)(1 + (poly_koopman << 1));
+	u64 vlanid = l2_lookup_params->shared_learn ? 0 : vid;
+	u64 input = (vlanid << 48) | ether_addr_to_u64(addr);
+	u8 crc = 0; /* seed */
+	int i;
+
+	/* Mask the eight bytes starting from MSB one at a time */
+	for (i = 56; i >= 0; i -= 8)
+		crc = crc8_add(crc, (input & (0xffull << i)) >> i, poly);
+	return crc;
+}
diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.h b/drivers/net/dsa/sja1105/sja1105_dynamic_config.h
new file mode 100644
index 000000000000..afc6d41f0330
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#ifndef _SJA1105_DYNAMIC_CONFIG_H
+#define _SJA1105_DYNAMIC_CONFIG_H
+
+#include "sja1105.h"
+#include <linux/packing.h>
+
+struct sja1105_dyn_cmd {
+	u64 valid;
+	u64 rdwrset;
+	u64 errors;
+	u64 valident;
+	u64 index;
+};
+
+struct sja1105_dynamic_table_ops {
+	/* This returns size_t just to keep the same prototype as the
+	 * static config ops, some of whose functions we are reusing.
+	 */
+	size_t (*entry_packing)(void *buf, void *entry_ptr, enum packing_op op);
+	void (*cmd_packing)(void *buf, struct sja1105_dyn_cmd *cmd,
+			    enum packing_op op);
+	size_t max_entry_count;
+	size_t packed_size;
+	u64 addr;
+	u8 access;
+};
+
+struct sja1105_mgmt_entry {
+	u64 tsreg;
+	u64 takets;
+	u64 macaddr;
+	u64 destports;
+	u64 enfport;
+	u64 index;
+};
+
+#endif
diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
new file mode 100644
index 000000000000..78bdb577c16b
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -0,0 +1,904 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+#include <linux/spi/spi.h>
+#include <linux/errno.h>
+#include <linux/gpio/consumer.h>
+#include <linux/of.h>
+#include <linux/of_net.h>
+#include <linux/of_mdio.h>
+#include <linux/netdev_features.h>
+#include <linux/netdevice.h>
+#include <linux/if_bridge.h>
+#include <linux/if_ether.h>
+#include "sja1105.h"
+
+static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len,
+			     unsigned int startup_delay)
+{
+	gpiod_set_value_cansleep(gpio, 1);
+	/* Wait for minimum reset pulse length */
+	msleep(pulse_len);
+	gpiod_set_value_cansleep(gpio, 0);
+	/* Wait until chip is ready after reset */
+	msleep(startup_delay);
+}
+
+static void
+sja1105_port_allow_traffic(struct sja1105_l2_forwarding_entry *l2_fwd,
+			   int from, int to, bool allow)
+{
+	if (allow) {
+		l2_fwd[from].bc_domain  |= BIT(to);
+		l2_fwd[from].reach_port |= BIT(to);
+		l2_fwd[from].fl_domain  |= BIT(to);
+	} else {
+		l2_fwd[from].bc_domain  &= ~BIT(to);
+		l2_fwd[from].reach_port &= ~BIT(to);
+		l2_fwd[from].fl_domain  &= ~BIT(to);
+	}
+}
+
+/* Structure used to temporarily transport device tree
+ * settings into sja1105_setup
+ */
+struct sja1105_dt_port {
+	phy_interface_t phy_mode;
+	int xmii_mode;
+};
+
+static int sja1105_init_mac_settings(struct sja1105_private *priv)
+{
+	struct sja1105_mac_config_entry default_mac = {
+		/* Enable all 8 priority queues on egress.
+		 * Every queue i holds top[i] - base[i] frames.
+		 * Sum of top[i] - base[i] is 511 (max hardware limit).
+		 */
+		.top  = {0x3F, 0x7F, 0xBF, 0xFF, 0x13F, 0x17F, 0x1BF, 0x1FF},
+		.base = {0x0, 0x40, 0x80, 0xC0, 0x100, 0x140, 0x180, 0x1C0},
+		.enabled = {true, true, true, true, true, true, true, true},
+		/* Keep standard IFG of 12 bytes on egress. */
+		.ifg = 0,
+		/* Always put the MAC speed in automatic mode, where it can be
+		 * retrieved from the PHY object through phylib and
+		 * sja1105_adjust_port_config.
+		 */
+		.speed = SJA1105_SPEED_AUTO,
+		/* No static correction for 1-step 1588 events */
+		.tp_delin = 0,
+		.tp_delout = 0,
+		/* Disable aging for critical TTEthernet traffic */
+		.maxage = 0xFF,
+		/* Internal VLAN (pvid) to apply to untagged ingress */
+		.vlanprio = 0,
+		.vlanid = 0,
+		.ing_mirr = false,
+		.egr_mirr = false,
+		/* Don't drop traffic with other EtherType than 800h */
+		.drpnona664 = false,
+		/* Don't drop double-tagged traffic */
+		.drpdtag = false,
+		/* Don't drop VLAN with single outer tag - P/Q/R/S only */
+		.drpsotag = false,
+		/* Don't drop VLAN with single inner tag - P/Q/R/S only */
+		.drpsitag = false,
+		/* Don't drop untagged traffic */
+		.drpuntag = false,
+		/* Don't retag 802.1p (VID 0) traffic with the pvid */
+		.retag = false,
+		/* Enable learning and I/O on user ports by default. */
+		.dyn_learn = true,
+		.egress = false,
+		.ingress = false,
+		.mirrcie = 0,
+		.mirrcetag = 0,
+		.ingmirrvid = 0,
+		.ingmirrpcp = 0,
+		.ingmirrdei = 0,
+	};
+	struct sja1105_mac_config_entry *mac;
+	struct sja1105_table *table;
+	int i;
+
+	table = &priv->static_config.tables[BLK_IDX_MAC_CONFIG];
+
+	/* Discard previous MAC Configuration Table */
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(SJA1105_NUM_PORTS,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = SJA1105_NUM_PORTS;
+
+	mac = table->entries;
+
+	for (i = 0; i < SJA1105_NUM_PORTS; i++)
+		mac[i] = default_mac;
+
+	return 0;
+}
+
+static int sja1105_init_mii_settings(struct sja1105_private *priv,
+				     struct sja1105_dt_port *ports)
+{
+	struct device *dev = &priv->spidev->dev;
+	struct sja1105_xmii_params_entry *mii;
+	struct sja1105_table *table;
+	int i;
+
+	table = &priv->static_config.tables[BLK_IDX_XMII_PARAMS];
+
+	/* Discard previous xMII Mode Parameters Table */
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(MAX_XMII_PARAMS_COUNT,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	/* Override table based on phylib DT bindings */
+	table->entry_count = MAX_XMII_PARAMS_COUNT;
+
+	mii = table->entries;
+
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		switch (ports[i].phy_mode) {
+		case PHY_INTERFACE_MODE_MII:
+			mii->xmii_mode[i] = XMII_MODE_MII;
+			break;
+		case PHY_INTERFACE_MODE_RMII:
+			mii->xmii_mode[i] = XMII_MODE_RMII;
+			break;
+		case PHY_INTERFACE_MODE_RGMII:
+		case PHY_INTERFACE_MODE_RGMII_ID:
+		case PHY_INTERFACE_MODE_RGMII_RXID:
+		case PHY_INTERFACE_MODE_RGMII_TXID:
+			mii->xmii_mode[i] = XMII_MODE_RGMII;
+			break;
+		case PHY_INTERFACE_MODE_SGMII:
+			mii->xmii_mode[i] = XMII_MODE_SGMII;
+			break;
+		default:
+			dev_err(dev, "Unsupported PHY mode %s!\n",
+				phy_modes(ports[i].phy_mode));
+		}
+
+		mii->phy_mac[i] = ports[i].xmii_mode;
+	}
+	return 0;
+}
+
+static int sja1105_init_static_fdb(struct sja1105_private *priv)
+{
+	struct sja1105_table *table;
+
+	table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP];
+
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+	return 0;
+}
+
+static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
+{
+	struct sja1105_table *table;
+	struct sja1105_l2_lookup_params_entry default_l2_lookup_params = {
+		/* TODO Learned FDB entries are never forgotten */
+		.maxage = 0,
+		/* All entries within a FDB bin are available for learning */
+		.dyn_tbsz = SJA1105ET_FDB_BIN_SIZE,
+		/* 2^8 + 2^5 + 2^3 + 2^2 + 2^1 + 1 in Koopman notation */
+		.poly = 0x97,
+		/* This selects between Independent VLAN Learning (IVL) and
+		 * Shared VLAN Learning (SVL)
+		 */
+		.shared_learn = false,
+		/* Don't discard management traffic based on ENFPORT -
+		 * we don't perform SMAC port enforcement anyway, so
+		 * what we are setting here doesn't matter.
+		 */
+		.no_enf_hostprt = false,
+		/* Don't learn SMAC for mac_fltres1 and mac_fltres0.
+		 * TODO Maybe correlate with no_linklocal_learn from bridge
+		 * driver?
+		 */
+		.no_mgmt_learn = true,
+	};
+
+	table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
+
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(MAX_L2_LOOKUP_PARAMS_COUNT,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = MAX_L2_LOOKUP_PARAMS_COUNT;
+
+	/* This table only has a single entry */
+	((struct sja1105_l2_lookup_params_entry *)table->entries)[0] =
+				default_l2_lookup_params;
+
+	return 0;
+}
+
+static int sja1105_init_static_vlan(struct sja1105_private *priv)
+{
+	struct sja1105_table *table;
+	struct sja1105_vlan_lookup_entry pvid = {
+		.ving_mirr = 0,
+		.vegr_mirr = 0,
+		.vmemb_port = 0,
+		.vlan_bc = 0,
+		.tag_port = 0,
+		.vlanid = 0,
+	};
+	int i;
+
+	table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
+
+	/* The static VLAN table will only contain the initial pvid of 0 */
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(1, table->ops->unpacked_entry_size,
+				 GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = 1;
+
+	/* VLAN ID 0: all DT-defined ports are members; no restrictions on
+	 * forwarding; always transmit priority-tagged frames as untagged.
+	 */
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		pvid.vmemb_port |= BIT(i);
+		pvid.vlan_bc |= BIT(i);
+		pvid.tag_port &= ~BIT(i);
+	}
+
+	((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
+	return 0;
+}
+
+static int sja1105_init_l2_forwarding(struct sja1105_private *priv)
+{
+	struct sja1105_l2_forwarding_entry *l2fwd;
+	struct sja1105_table *table;
+	int i, j;
+
+	table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING];
+
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(MAX_L2_FORWARDING_COUNT,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = MAX_L2_FORWARDING_COUNT;
+
+	l2fwd = table->entries;
+
+	/* First 5 entries define the forwarding rules */
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		unsigned int upstream = dsa_upstream_port(priv->ds, i);
+
+		for (j = 0; j < SJA1105_NUM_TC; j++)
+			l2fwd[i].vlan_pmap[j] = j;
+
+		if (i == upstream)
+			continue;
+
+		sja1105_port_allow_traffic(l2fwd, i, upstream, true);
+		sja1105_port_allow_traffic(l2fwd, upstream, i, true);
+	}
+	/* Next 8 entries define VLAN PCP mapping from ingress to egress.
+	 * Create a one-to-one mapping.
+	 */
+	for (i = 0; i < SJA1105_NUM_TC; i++)
+		for (j = 0; j < SJA1105_NUM_PORTS; j++)
+			l2fwd[SJA1105_NUM_PORTS + i].vlan_pmap[j] = i;
+
+	return 0;
+}
+
+static int sja1105_init_l2_forwarding_params(struct sja1105_private *priv)
+{
+	struct sja1105_l2_forwarding_params_entry default_l2fwd_params = {
+		/* Disallow dynamic reconfiguration of vlan_pmap */
+		.max_dynp = 0,
+		/* Use a single memory partition for all ingress queues */
+		.part_spc = { MAX_FRAME_MEMORY, 0, 0, 0, 0, 0, 0, 0 },
+	};
+	struct sja1105_table *table;
+
+	table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING_PARAMS];
+
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(MAX_L2_FORWARDING_PARAMS_COUNT,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = MAX_L2_FORWARDING_PARAMS_COUNT;
+
+	/* This table only has a single entry */
+	((struct sja1105_l2_forwarding_params_entry *)table->entries)[0] =
+				default_l2fwd_params;
+
+	return 0;
+}
+
+static int sja1105_init_general_params(struct sja1105_private *priv)
+{
+	struct sja1105_general_params_entry default_general_params = {
+		/* Disallow dynamic changing of the mirror port */
+		.mirr_ptacu = 0,
+		.switchid = priv->ds->index,
+		/* Priority queue for link-local frames trapped to CPU */
+		.hostprio = 0,
+		.mac_fltres1 = SJA1105_LINKLOCAL_FILTER_A,
+		.mac_flt1    = SJA1105_LINKLOCAL_FILTER_A_MASK,
+		.incl_srcpt1 = true,
+		.send_meta1  = false,
+		.mac_fltres0 = SJA1105_LINKLOCAL_FILTER_B,
+		.mac_flt0    = SJA1105_LINKLOCAL_FILTER_B_MASK,
+		.incl_srcpt0 = true,
+		.send_meta0  = false,
+		/* The destination for traffic matching mac_fltres1 and
+		 * mac_fltres0 on all ports except host_port. Such traffic
+		 * received on host_port itself would be dropped, except
+		 * by installing a temporary 'management route'
+		 */
+		.host_port = dsa_upstream_port(priv->ds, 0),
+		/* Same as host port */
+		.mirr_port = dsa_upstream_port(priv->ds, 0),
+		/* Link-local traffic received on casc_port will be forwarded
+		 * to host_port without embedding the source port and device ID
+		 * info in the destination MAC address (presumably because it
+		 * is a cascaded port and a downstream SJA switch already did
+		 * that). Default to an invalid port (to disable the feature)
+		 * and overwrite this if we find any DSA (cascaded) ports.
+		 */
+		.casc_port = SJA1105_NUM_PORTS,
+		/* No TTEthernet */
+		.vllupformat = 0,
+		.vlmarker = 0,
+		.vlmask = 0,
+		/* Only update correctionField for 1-step PTP (L2 transport) */
+		.ignore2stf = 0,
+		.tpid = ETH_P_8021Q,
+		.tpid2 = ETH_P_8021Q,
+		/* P/Q/R/S only */
+		.queue_ts = 0,
+		.egrmirrvid = 0,
+		.egrmirrpcp = 0,
+		.egrmirrdei = 0,
+		.replay_port = 0,
+	};
+	struct sja1105_table *table;
+	int i;
+
+	for (i = 0; i < SJA1105_NUM_PORTS; i++)
+		if (dsa_is_dsa_port(priv->ds, i))
+			default_general_params.casc_port = i;
+
+	table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
+
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(MAX_GENERAL_PARAMS_COUNT,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = MAX_GENERAL_PARAMS_COUNT;
+
+	/* This table only has a single entry */
+	((struct sja1105_general_params_entry *)table->entries)[0] =
+				default_general_params;
+
+	return 0;
+}
+
+static inline void
+sja1105_setup_policer(struct sja1105_l2_policing_entry *policing,
+		      int index)
+{
+#define RATE_MBPS(speed) (((speed) * 64000) / 1000)
+	policing[index].sharindx = index;
+	policing[index].smax = 65535; /* Burst size in bytes */
+	policing[index].rate = RATE_MBPS(1000);
+	policing[index].maxlen = ETH_FRAME_LEN + VLAN_HLEN + ETH_FCS_LEN;
+	policing[index].partition = 0;
+#undef RATE_MBPS
+}
+
+static int sja1105_init_l2_policing(struct sja1105_private *priv)
+{
+	struct sja1105_l2_policing_entry *policing;
+	struct sja1105_table *table;
+	int i, j, k;
+
+	table = &priv->static_config.tables[BLK_IDX_L2_POLICING];
+
+	/* Discard previous L2 Policing Table */
+	if (table->entry_count) {
+		kfree(table->entries);
+		table->entry_count = 0;
+	}
+
+	table->entries = kcalloc(MAX_L2_POLICING_COUNT,
+				 table->ops->unpacked_entry_size, GFP_KERNEL);
+	if (!table->entries)
+		return -ENOMEM;
+
+	table->entry_count = MAX_L2_POLICING_COUNT;
+
+	policing = table->entries;
+
+	/* k sweeps through all unicast policers (0-39).
+	 * bcast sweeps through policers 40-44.
+	 */
+	for (i = 0, k = 0; i < SJA1105_NUM_PORTS; i++) {
+		int bcast = (SJA1105_NUM_PORTS * SJA1105_NUM_TC) + i;
+
+		for (j = 0; j < SJA1105_NUM_TC; j++, k++)
+			sja1105_setup_policer(policing, k);
+
+		/* Set up this port's policer for broadcast traffic */
+		sja1105_setup_policer(policing, bcast);
+	}
+	return 0;
+}
+
+static int sja1105_static_config_load(struct sja1105_private *priv,
+				      struct sja1105_dt_port *ports)
+{
+	int rc;
+
+	sja1105_static_config_free(&priv->static_config);
+	rc = sja1105_static_config_init(&priv->static_config,
+					priv->device_id, priv->part_nr);
+	if (rc)
+		return rc;
+
+	/* Build static configuration */
+	rc = sja1105_init_mac_settings(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_mii_settings(priv, ports);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_static_fdb(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_static_vlan(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_l2_lookup_params(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_l2_forwarding(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_l2_forwarding_params(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_l2_policing(priv);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_init_general_params(priv);
+	if (rc < 0)
+		return rc;
+
+	/* Send initial configuration to hardware via SPI */
+	return sja1105_static_config_upload(priv);
+}
+
+static int sja1105_parse_ports_node(struct sja1105_dt_port *ports,
+				    struct device_node *ports_node)
+{
+	struct device_node *child;
+
+	for_each_child_of_node(ports_node, child) {
+		struct device_node *phy_node;
+		int phy_mode;
+		u32 index;
+
+		/* Get switch port number from DT */
+		if (of_property_read_u32(child, "reg", &index) < 0) {
+			pr_err("Port number not defined in device tree (property \"reg\")\n");
+			return -ENODEV;
+		}
+
+		/* Get PHY mode from DT */
+		phy_mode = of_get_phy_mode(child);
+		if (phy_mode < 0) {
+			pr_err("Failed to read phy-mode or phy-interface-type property for port %u\n",
+			       index);
+			return -ENODEV;
+		}
+		ports[index].phy_mode = phy_mode;
+
+		phy_node = of_parse_phandle(child, "phy-handle", 0);
+		if (!phy_node) {
+			if (!of_phy_is_fixed_link(child)) {
+				pr_err("phy-handle or fixed-link properties missing!\n");
+				return -ENODEV;
+			}
+			/* phy-handle is missing, but fixed-link isn't.
+			 * So it's a fixed link. Default to PHY mode.
+			 */
+			ports[index].xmii_mode = XMII_PHY;
+		} else {
+			/* phy-handle present => put port in MAC mode */
+			ports[index].xmii_mode = XMII_MAC;
+			of_node_put(phy_node);
+		}
+
+		/* The MAC/PHY role can be overridden with explicit bindings */
+		if (of_property_read_bool(child, "sja1105,mac-mode"))
+			ports[index].xmii_mode = XMII_MAC;
+		else if (of_property_read_bool(child, "sja1105,phy-mode"))
+			ports[index].xmii_mode = XMII_PHY;
+	}
+
+	return 0;
+}
+
+static int sja1105_parse_dt(struct sja1105_private *priv,
+			    struct sja1105_dt_port *ports)
+{
+	struct device *dev = &priv->spidev->dev;
+	struct device_node *switch_node = dev->of_node;
+	struct device_node *ports_node;
+	int rc;
+
+	ports_node = of_get_child_by_name(switch_node, "ports");
+	if (!ports_node) {
+		dev_err(dev, "Incorrect bindings: absent \"ports\" node\n");
+		return -ENODEV;
+	}
+
+	rc = sja1105_parse_ports_node(ports, ports_node);
+	of_node_put(ports_node);
+
+	return rc;
+}
+
+/* Convert MAC speed back and forth between Mbps and the SJA1105 encoding */
+static int sja1105_speed[] = {
+	[SJA1105_SPEED_AUTO]     = 0,
+	[SJA1105_SPEED_10MBPS]   = 10,
+	[SJA1105_SPEED_100MBPS]  = 100,
+	[SJA1105_SPEED_1000MBPS] = 1000,
+};
+
+static int sja1105_get_speed_cfg(unsigned int speed_mbps)
+{
+	int i;
+
+	for (i = SJA1105_SPEED_AUTO; i <= SJA1105_SPEED_1000MBPS; i++)
+		if (sja1105_speed[i] == speed_mbps)
+			return i;
+	return -EINVAL;
+}
+
+/* Set link speed and enable/disable traffic I/O in the MAC configuration
+ * for a specific port.
+ *
+ * @speed_mbps: If 0, revert the MAC to SJA1105_SPEED_AUTO, else adapt it to
+ *		the given PHY speed.
+ * @enabled: Manage Rx and Tx settings for this port. Overrides the static
+ *	     configuration settings.
+ */
+static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
+				      int speed_mbps, bool enabled)
+{
+	struct sja1105_xmii_params_entry *mii;
+	struct sja1105_mac_config_entry *mac;
+	struct device *dev = priv->ds->dev;
+	int xmii_mode;
+	int speed;
+	int rc;
+
+	mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
+	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+
+	speed = sja1105_get_speed_cfg(speed_mbps);
+	if (speed_mbps && speed < 0) {
+		dev_err(dev, "Invalid speed %iMbps\n", speed_mbps);
+		return -EINVAL;
+	}
+
+	/* If requested, overwrite SJA1105_SPEED_AUTO from the static MAC
+	 * configuration table, since this will be used for the clocking setup,
+	 * and we no longer need to store it in the static config (already told
+	 * hardware we want auto during upload phase).
+	 */
+	if (speed_mbps)
+		mac[port].speed = speed;
+	else
+		mac[port].speed = SJA1105_SPEED_AUTO;
+
+	/* On P/Q/R/S, the MAC configuration can be read back from the device
+	 * via the MAC reconfiguration tables. On E/T those tables are
+	 * write-only, so the driver must keep track of the MAC state itself.
+	 * For the sake of keeping the code common, we'll use the static
+	 * configuration tables as a reasonable approximation for both E/T
+	 * and P/Q/R/S.
+	 */
+	mac[port].ingress = enabled;
+	mac[port].egress  = enabled;
+
+	/* Write to the dynamic reconfiguration tables */
+	rc = sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG,
+					  port, &mac[port], true);
+	if (rc < 0) {
+		dev_err(dev, "Failed to write MAC config: %d\n", rc);
+		return rc;
+	}
+
+	/* Reconfigure the CGU only for RGMII and SGMII interfaces.
+	 * xmii_mode and mac_phy setting cannot change at this point, only
+	 * speed does. For MII and RMII no change of the clock setup is
+	 * required. Actually, changing the clock setup does interrupt the
+	 * clock signal for a certain time which causes trouble for all PHYs
+	 * relying on this signal.
+	 */
+	if (!enabled)
+		return 0;
+
+	xmii_mode = mii->xmii_mode[port];
+	if (xmii_mode != XMII_MODE_RGMII && xmii_mode != XMII_MODE_SGMII)
+		return 0;
+
+	return sja1105_clocking_setup_port(priv, port);
+}
+
+static void sja1105_adjust_link(struct dsa_switch *ds, int port,
+				struct phy_device *phydev)
+{
+	struct sja1105_private *priv = ds->priv;
+
+	if (!phydev->link)
+		sja1105_adjust_port_config(priv, port, 0, false);
+	else
+		sja1105_adjust_port_config(priv, port, phydev->speed, true);
+}
+
+static int sja1105_bridge_member(struct dsa_switch *ds, int port,
+				 struct net_device *br, bool member)
+{
+	struct sja1105_l2_forwarding_entry *l2_fwd;
+	struct sja1105_private *priv = ds->priv;
+	int i, rc;
+
+	l2_fwd = priv->static_config.tables[BLK_IDX_L2_FORWARDING].entries;
+
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		/* Add this port to the forwarding matrix of the
+		 * other ports in the same bridge, and vice versa.
+		 */
+		if (!dsa_is_user_port(ds, i))
+			continue;
+		if (i == port)
+			continue;
+		if (dsa_to_port(ds, i)->bridge_dev != br)
+			continue;
+		sja1105_port_allow_traffic(l2_fwd, i, port, member);
+		sja1105_port_allow_traffic(l2_fwd, port, i, member);
+
+		rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_FORWARDING,
+						  i, &l2_fwd[i], true);
+		if (rc < 0)
+			return rc;
+	}
+
+	return sja1105_dynamic_config_write(priv, BLK_IDX_L2_FORWARDING,
+					    port, &l2_fwd[port], true);
+}
+
+static int sja1105_bridge_join(struct dsa_switch *ds, int port,
+			       struct net_device *br)
+{
+	return sja1105_bridge_member(ds, port, br, true);
+}
+
+static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
+				 struct net_device *br)
+{
+	sja1105_bridge_member(ds, port, br, false);
+}
+
+static enum dsa_tag_protocol
+sja1105_get_tag_protocol(struct dsa_switch *ds, int port)
+{
+	return DSA_TAG_PROTO_NONE;
+}
+
+/* The programming model for the SJA1105 switch is "all-at-once" via static
+ * configuration tables. Some of these can be dynamically modified at runtime,
+ * but not the xMII mode parameters table.
+ * Furthermore, some PHYs may not have crystals for generating their clocks
+ * (e.g. RMII). Instead, their 50MHz clock is supplied via the SJA1105 port's
+ * ref_clk pin. So port clocking needs to be initialized early, before
+ * connecting to the PHYs is attempted, otherwise they won't respond through
+ * MDIO. Setting the correct PHY link speed does not matter at this point.
+ * However, dsa_slave_phy_setup is called later than sja1105_setup, so the PHY
+ * bindings are not yet parsed by the DSA core. We need to parse them early so
+ * that we can populate the xMII mode parameters table.
+ */
+static int sja1105_setup(struct dsa_switch *ds)
+{
+	struct sja1105_dt_port ports[SJA1105_NUM_PORTS];
+	struct sja1105_private *priv = ds->priv;
+	int rc;
+
+	rc = sja1105_parse_dt(priv, ports);
+	if (rc < 0) {
+		dev_err(ds->dev, "Failed to parse DT: %d\n", rc);
+		return rc;
+	}
+	/* Create and send configuration down to device */
+	rc = sja1105_static_config_load(priv, ports);
+	if (rc < 0) {
+		dev_err(ds->dev, "Failed to load static config: %d\n", rc);
+		return rc;
+	}
+	/* Configure the CGU (PHY link modes and speeds) */
+	rc = sja1105_clocking_setup(priv);
+	if (rc < 0) {
+		dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc);
+		return rc;
+	}
+
+	return 0;
+}
+
+static const struct dsa_switch_ops sja1105_switch_ops = {
+	.get_tag_protocol	= sja1105_get_tag_protocol,
+	.setup			= sja1105_setup,
+	.adjust_link		= sja1105_adjust_link,
+	.port_bridge_join	= sja1105_bridge_join,
+	.port_bridge_leave	= sja1105_bridge_leave,
+};
+
+static int sja1105_probe(struct spi_device *spi)
+{
+	struct device *dev = &spi->dev;
+	struct sja1105_private *priv;
+	struct dsa_switch *ds;
+	int rc;
+
+	if (!dev->of_node) {
+		dev_err(dev, "No DTS bindings for SJA1105 driver\n");
+		return -EINVAL;
+	}
+
+	priv = devm_kzalloc(dev, sizeof(struct sja1105_private), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	ds = dsa_switch_alloc(dev, SJA1105_NUM_PORTS);
+	if (!ds)
+		return -ENOMEM;
+
+	ds->ops = &sja1105_switch_ops;
+	ds->priv = priv;
+	priv->ds = ds;
+
+	/* Populate our driver private structure (priv) based on
+	 * the device tree node that was probed (spi)
+	 */
+	priv->spidev = spi;
+	spi_set_drvdata(spi, priv);
+
+	/* Configure the SPI bus */
+	spi->mode = SPI_CPHA;
+	spi->bits_per_word = 8;
+	rc = spi_setup(spi);
+	if (rc < 0) {
+		dev_err(dev, "Could not init SPI\n");
+		return rc;
+	}
+
+	/* Configure the optional reset pin and bring up switch */
+	priv->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+	if (IS_ERR(priv->reset_gpio))
+		dev_dbg(dev, "reset-gpios not defined, ignoring\n");
+	else
+		sja1105_hw_reset(priv->reset_gpio, 1, 1);
+
+	/* Detect hardware device */
+	rc = sja1105_device_id_get(priv);
+	if (rc < 0) {
+		dev_err(dev, "Failed to read device ID\n");
+		return rc;
+	}
+
+	dev_dbg(dev, "Probed switch chip: %s\n",
+		sja1105_device_id_string_get(priv->device_id, priv->part_nr));
+
+	rc = sja1105_dynamic_config_init(priv);
+	if (rc < 0) {
+		dev_err(dev, "Failed to initialize dynamic config\n");
+		return rc;
+	}
+
+	rc = dsa_register_switch(priv->ds);
+	if (rc < 0) {
+		dev_err(dev, "Failed to register DSA driver\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+static int sja1105_remove(struct spi_device *spi)
+{
+	struct sja1105_private *priv = spi_get_drvdata(spi);
+
+	dsa_unregister_switch(priv->ds);
+	sja1105_static_config_free(&priv->static_config);
+	return 0;
+}
+
+static const struct of_device_id sja1105_dt_ids[] = {
+	{ .compatible = "nxp,sja1105" },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, sja1105_dt_ids);
+
+static struct spi_driver sja1105_driver = {
+	.driver = {
+		.name  = "sja1105",
+		.owner = THIS_MODULE,
+		.of_match_table = of_match_ptr(sja1105_dt_ids),
+	},
+	.probe  = sja1105_probe,
+	.remove = sja1105_remove,
+};
+
+module_spi_driver(sja1105_driver);
+
+MODULE_AUTHOR("Vladimir Oltean <olteanv@gmail.com>");
+MODULE_AUTHOR("Georg Waibel <georg.waibel@sensor-technik.de>");
+MODULE_DESCRIPTION("SJA1105 Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/dsa/sja1105/sja1105_spi.c b/drivers/net/dsa/sja1105/sja1105_spi.c
new file mode 100644
index 000000000000..ef4bc24f0839
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_spi.c
@@ -0,0 +1,667 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright (c) 2018, Sensor-Technik Wiedemann GmbH
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include <linux/spi/spi.h>
+#include <linux/packing.h>
+#include "sja1105.h"
+
+#define SPI_TRANSFER_SIZE_MAX  (SIZE_SPI_MSG_HEADER + SIZE_SPI_MSG_MAXLEN)
+
+static int sja1105_spi_transfer(const struct sja1105_private *priv,
+				const void *tx, void *rx, int size)
+{
+	struct spi_device *spi = priv->spidev;
+	struct spi_transfer transfer = {
+		.tx_buf = tx,
+		.rx_buf = rx,
+		.len = size,
+	};
+	struct spi_message msg;
+	int rc;
+
+	if (size > SPI_TRANSFER_SIZE_MAX) {
+		dev_err(&spi->dev, "SPI message (%d) longer than max of %d\n",
+			size, SPI_TRANSFER_SIZE_MAX);
+		return -EMSGSIZE;
+	}
+
+	spi_message_init(&msg);
+	spi_message_add_tail(&transfer, &msg);
+
+	rc = spi_sync(spi, &msg);
+	if (rc < 0) {
+		dev_err(&spi->dev, "SPI transfer failed: %d\n", rc);
+		return rc;
+	}
+
+	return rc;
+}
+
+static void
+sja1105_spi_message_pack(void *buf, const struct sja1105_spi_message *msg)
+{
+	const int size = SIZE_SPI_MSG_HEADER;
+
+	memset(buf, 0, size);
+
+	sja1105_pack(buf, &msg->access,     31, 31, size);
+	sja1105_pack(buf, &msg->read_count, 30, 25, size);
+	sja1105_pack(buf, &msg->address,    24,  4, size);
+}
+
+/* If read_or_write is:
+ *     * SPI_WRITE: creates and sends an SPI write message at absolute
+ *                  address reg_addr, taking size_bytes from *packed_buf
+ *     * SPI_READ: creates and sends an SPI read message from absolute
+ *                 address reg_addr, writing size_bytes into *packed_buf
+ *
+ * This function should only be called when it is known in advance that
+ * size_bytes does not exceed SIZE_SPI_MSG_MAXLEN. Larger packed buffers
+ * are chunked into smaller pieces by sja1105_spi_send_long_packed_buf below.
+ */
+int
+sja1105_spi_send_packed_buf(const struct sja1105_private *priv,
+			    enum sja1105_spi_access_mode read_or_write,
+			    u64 reg_addr, void *packed_buf, size_t size_bytes)
+{
+	const int msg_len = size_bytes + SIZE_SPI_MSG_HEADER;
+	struct sja1105_spi_message msg;
+	u8 tx_buf[SIZE_SPI_MSG_HEADER + SIZE_SPI_MSG_MAXLEN];
+	u8 rx_buf[SIZE_SPI_MSG_HEADER + SIZE_SPI_MSG_MAXLEN];
+	int rc;
+
+	if (msg_len > SIZE_SPI_MSG_HEADER + SIZE_SPI_MSG_MAXLEN)
+		return -ERANGE;
+
+	memset(rx_buf, 0, msg_len);
+
+	msg.access     = read_or_write;
+	msg.read_count = (read_or_write == SPI_READ) ? (size_bytes / 4) : 0;
+	msg.address    = reg_addr;
+	sja1105_spi_message_pack(tx_buf, &msg);
+
+	if (read_or_write == SPI_READ)
+		memset(tx_buf + SIZE_SPI_MSG_HEADER, 0, size_bytes);
+	else if (read_or_write == SPI_WRITE)
+		memcpy(tx_buf + SIZE_SPI_MSG_HEADER, packed_buf, size_bytes);
+	else
+		return -EINVAL;
+
+	rc = sja1105_spi_transfer(priv, tx_buf, rx_buf, msg_len);
+	if (rc < 0)
+		return rc;
+
+	if (read_or_write == SPI_READ)
+		memcpy(packed_buf, rx_buf + SIZE_SPI_MSG_HEADER, size_bytes);
+
+	return 0;
+}
+
+/* If read_or_write is:
+ *     * SPI_WRITE: creates and sends an SPI write message at absolute
+ *                  address reg_addr, taking size_bytes from *value
+ *     * SPI_READ: creates and sends an SPI read message from absolute
+ *                 address reg_addr, writing size_bytes into *value
+ *
+ * The u64 *value is unpacked, meaning that it's stored in the native
+ * CPU endianness and directly usable by software running on the core.
+ *
+ * This is a wrapper around sja1105_spi_send_packed_buf().
+ */
+int sja1105_spi_send_int(const struct sja1105_private *priv,
+			 enum sja1105_spi_access_mode read_or_write,
+			 u64 reg_addr, u64 *value, u64 size_bytes)
+{
+	u8 packed_buf[SIZE_SPI_MSG_MAXLEN];
+	int rc;
+
+	if (size_bytes > SIZE_SPI_MSG_MAXLEN)
+		return -ERANGE;
+
+	if (read_or_write == SPI_WRITE)
+		sja1105_pack(packed_buf, value, 8 * size_bytes - 1, 0,
+			     size_bytes);
+
+	rc = sja1105_spi_send_packed_buf(priv, read_or_write, reg_addr,
+					 packed_buf, size_bytes);
+
+	if (read_or_write == SPI_READ)
+		sja1105_unpack(packed_buf, value, 8 * size_bytes - 1, 0,
+			       size_bytes);
+
+	return rc;
+}
+
+/* Should be used if a packed_buf larger than SIZE_SPI_MSG_MAXLEN must be
+ * sent/received. This function automatically splits the buffer into chunks
+ * and assembles them into SPI messages.
+ */
+int
+sja1105_spi_send_long_packed_buf(const struct sja1105_private *priv,
+				 enum sja1105_spi_access_mode read_or_write,
+				 u64 base_addr, void *packed_buf, u64 buf_len)
+{
+	struct chunk {
+		void *buf_ptr;
+		int len;
+		u64 spi_address;
+	} chunk;
+	int distance_to_end;
+	int rc = 0;
+
+	/* Initialize chunk */
+	chunk.buf_ptr = packed_buf;
+	chunk.spi_address = base_addr;
+	chunk.len = min_t(int, buf_len, SIZE_SPI_MSG_MAXLEN);
+
+	while (chunk.len) {
+		rc = sja1105_spi_send_packed_buf(priv, read_or_write,
+						 chunk.spi_address,
+						 chunk.buf_ptr, chunk.len);
+		if (rc < 0)
+			return rc;
+
+		chunk.buf_ptr += chunk.len;
+		chunk.spi_address += chunk.len / 4;
+		distance_to_end = (uintptr_t)((packed_buf + buf_len) -
+					chunk.buf_ptr);
+		chunk.len = min(distance_to_end, SIZE_SPI_MSG_MAXLEN);
+	}
+
+	return 0;
+}
+
+/* Back-ported structure from UM11040 Table 112.
+ * Reset control register (addr. 100440h)
+ * In the SJA1105 E/T, only warm_rst and cold_rst are
+ * supported (exposed in UM10944 as rst_ctrl), but the bit
+ * offsets of warm_rst and cold_rst are actually reversed.
+ */
+struct sja1105_reset_cmd {
+	u64 switch_rst;
+	u64 cfg_rst;
+	u64 car_rst;
+	u64 otp_rst;
+	u64 warm_rst;
+	u64 cold_rst;
+	u64 por_rst;
+};
+
+static void
+sja1105et_reset_cmd_pack(void *buf, const struct sja1105_reset_cmd *reset)
+{
+	const int size = 4;
+
+	memset(buf, 0, size);
+
+	sja1105_pack(buf, &reset->cold_rst, 3, 3, size);
+	sja1105_pack(buf, &reset->warm_rst, 2, 2, size);
+}
+
+static void
+sja1105pqrs_reset_cmd_pack(void *buf, const struct sja1105_reset_cmd *reset)
+{
+	const int size = 4;
+
+	memset(buf, 0, size);
+
+	sja1105_pack(buf, &reset->switch_rst, 8, 8, size);
+	sja1105_pack(buf, &reset->cfg_rst,    7, 7, size);
+	sja1105_pack(buf, &reset->car_rst,    5, 5, size);
+	sja1105_pack(buf, &reset->otp_rst,    4, 4, size);
+	sja1105_pack(buf, &reset->warm_rst,   3, 3, size);
+	sja1105_pack(buf, &reset->cold_rst,   2, 2, size);
+	sja1105_pack(buf, &reset->por_rst,    1, 1, size);
+}
+
+static int sja1105_reset_cmd_commit(const struct sja1105_private *priv,
+				    const struct sja1105_reset_cmd *reset)
+{
+#define BUF_LEN 4
+	struct device *dev = priv->ds->dev;
+	u8 packed_buf[BUF_LEN];
+
+	if (reset->switch_rst)
+		dev_dbg(dev, "Main reset for all functional modules requested\n");
+	if (reset->cfg_rst)
+		dev_dbg(dev, "Chip configuration reset requested\n");
+	if (reset->car_rst)
+		dev_dbg(dev, "Clock and reset control logic reset requested\n");
+	if (reset->otp_rst)
+		dev_dbg(dev, "OTP read cycle for reading product config settings requested\n");
+	if (reset->warm_rst)
+		dev_dbg(dev, "Warm reset requested\n");
+	if (reset->cold_rst)
+		dev_dbg(dev, "Cold reset requested\n");
+	if (reset->por_rst)
+		dev_dbg(dev, "Power-on reset requested\n");
+
+	if ((reset->switch_rst || reset->cfg_rst || reset->car_rst ||
+	     reset->otp_rst || reset->por_rst) && IS_ET(priv->device_id)) {
+		dev_err(dev, "Only warm and cold reset are supported for SJA1105 E/T!\n");
+		return -EINVAL;
+	}
+	if (IS_ET(priv->device_id))
+		sja1105et_reset_cmd_pack(packed_buf, reset);
+	else
+		sja1105pqrs_reset_cmd_pack(packed_buf, reset);
+
+	return sja1105_spi_send_packed_buf(priv, SPI_WRITE, priv->regs->rgu,
+					   packed_buf, BUF_LEN);
+#undef BUF_LEN
+}
+
+static int sja1105_cold_reset(const struct sja1105_private *priv)
+{
+	struct sja1105_reset_cmd reset = {0};
+
+	reset.cold_rst = 1;
+	return sja1105_reset_cmd_commit(priv, &reset);
+}
+
+static const char *SJA1105E_DEVICE_ID_STR   = "SJA1105E";
+static const char *SJA1105T_DEVICE_ID_STR   = "SJA1105T";
+static const char *SJA1105P_DEVICE_ID_STR   = "SJA1105P";
+static const char *SJA1105Q_DEVICE_ID_STR   = "SJA1105Q";
+static const char *SJA1105R_DEVICE_ID_STR   = "SJA1105R";
+static const char *SJA1105S_DEVICE_ID_STR   = "SJA1105S";
+static const char *SJA1105PR_DEVICE_ID_STR  = "SJA1105P or SJA1105R";
+static const char *SJA1105QS_DEVICE_ID_STR  = "SJA1105Q or SJA1105S";
+static const char *SJA1105_NO_DEVICE_ID_STR = "None";
+
+const char *sja1105_device_id_string_get(u64 device_id, u64 part_nr)
+{
+	if (device_id == SJA1105E_DEVICE_ID)
+		return SJA1105E_DEVICE_ID_STR;
+	if (device_id == SJA1105T_DEVICE_ID)
+		return SJA1105T_DEVICE_ID_STR;
+	/* P and R have the same device ID and differ only by part number.
+	 * So do Q and S.
+	 */
+	if (IS_P(device_id, part_nr))
+		return SJA1105P_DEVICE_ID_STR;
+	if (IS_Q(device_id, part_nr))
+		return SJA1105Q_DEVICE_ID_STR;
+	if (IS_R(device_id, part_nr))
+		return SJA1105R_DEVICE_ID_STR;
+	if (IS_S(device_id, part_nr))
+		return SJA1105S_DEVICE_ID_STR;
+	/* Fallback: if we don't know or care what the part_nr is, we can
+	 * simply pass -1 to part_nr and have this function report the ID
+	 * as "P or R" (or "Q or S") instead of reporting it as invalid.
+	 */
+	if (device_id == SJA1105PR_DEVICE_ID)
+		return SJA1105PR_DEVICE_ID_STR;
+	if (device_id == SJA1105QS_DEVICE_ID)
+		return SJA1105QS_DEVICE_ID_STR;
+
+	return SJA1105_NO_DEVICE_ID_STR;
+}
+
+struct sja1105_regs sja1105et_regs = {
+	.rgu = 0x100440,
+	.config = 0x020000,
+	.pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
+	.rmii_pll1 = 0x10000A,
+	.cgu_idiv = {0x10000B, 0x10000C, 0x10000D, 0x10000E, 0x10000F},
+	/* UM10944.pdf, Table 86, ACU Register overview */
+	.rgmii_pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
+	/* The base address is off-by-1 compared to UM10944,
+	 * because we are skipping device_id from the readout.
+	 */
+	.general_status = 0x1,
+	.mac = {0x200, 0x202, 0x204, 0x206, 0x208},
+	.mac_hl1 = {0x400, 0x410, 0x420, 0x430, 0x440},
+	.mac_hl2 = {0x600, 0x610, 0x620, 0x630, 0x640},
+	/* UM10944.pdf, Table 78, CGU Register overview */
+	.mii_tx_clk = {0x100013, 0x10001A, 0x100021, 0x100028, 0x10002F},
+	.mii_rx_clk = {0x100014, 0x10001B, 0x100022, 0x100029, 0x100030},
+	.mii_ext_tx_clk = {0x100018, 0x10001F, 0x100026, 0x10002D, 0x100034},
+	.mii_ext_rx_clk = {0x100019, 0x100020, 0x100027, 0x10002E, 0x100035},
+	.rgmii_txc = {0x100016, 0x10001D, 0x100024, 0x10002B, 0x100032},
+	.rmii_ref_clk = {0x100015, 0x10001C, 0x100023, 0x10002A, 0x100031},
+	.rmii_ext_tx_clk = {0x100018, 0x10001F, 0x100026, 0x10002D, 0x100034},
+};
+
+struct sja1105_regs sja1105pqrs_regs = {
+	.rgu = 0x100440,
+	.config = 0x020000,
+	.pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
+	.rmii_pll1 = 0x10000A,
+	.cgu_idiv = {0x10000B, 0x10000C, 0x10000D, 0x10000E, 0x10000F},
+	/* UM10944.pdf, Table 86, ACU Register overview */
+	.rgmii_pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
+	/* The base address is off-by-1 compared to UM10944,
+	 * because we are skipping device_id from the readout.
+	 */
+	.general_status = 0x1,
+	.mac = {0x200, 0x202, 0x204, 0x206, 0x208},
+	.mac_hl1 = {0x400, 0x410, 0x420, 0x430, 0x440},
+	.mac_hl2 = {0x600, 0x610, 0x620, 0x630, 0x640},
+	/* UM11040.pdf, Table 114 */
+	.mii_tx_clk = {0x100013, 0x100019, 0x10001F, 0x100025, 0x10002B},
+	.mii_rx_clk = {0x100014, 0x10001A, 0x100020, 0x100026, 0x10002C},
+	.mii_ext_tx_clk = {0x100017, 0x10001D, 0x100023, 0x100029, 0x10002F},
+	.mii_ext_rx_clk = {0x100018, 0x10001E, 0x100024, 0x10002A, 0x100030},
+	.rgmii_txc = {0x100016, 0x10001C, 0x100022, 0x100028, 0x10002E},
+	.rmii_ref_clk = {0x100015, 0x10001B, 0x100021, 0x100027, 0x10002D},
+	.rmii_ext_tx_clk = {0x100017, 0x10001D, 0x100023, 0x100029, 0x10002F},
+	.qlevel = {0x604, 0x614, 0x624, 0x634, 0x644},
+};
+
+/* Populates the device_id, part_nr and regs fields of priv */
+int sja1105_device_id_get(struct sja1105_private *priv)
+{
+#define DEVICE_ID_ADDR	0x0
+#define PROD_ID_ADDR	0x100BC3
+	/* These can't be part of regs, because otherwise we'd have
+	 * a chicken and egg problem
+	 */
+	u64 compatible_device_ids[] = {
+		SJA1105E_DEVICE_ID,
+		SJA1105T_DEVICE_ID,
+		SJA1105PR_DEVICE_ID,
+		SJA1105QS_DEVICE_ID,
+	};
+	struct device *dev = priv->ds->dev;
+	u64 tmp_device_id;
+	u64 tmp_part_nr;
+	unsigned int i;
+	int rc;
+
+	rc = sja1105_spi_send_int(priv, SPI_READ, DEVICE_ID_ADDR,
+				  &tmp_device_id, SIZE_SJA1105_DEVICE_ID);
+	if (rc < 0)
+		return rc;
+
+	priv->device_id = SJA1105_NO_DEVICE_ID;
+	for (i = 0; i < ARRAY_SIZE(compatible_device_ids); i++) {
+		if (tmp_device_id == compatible_device_ids[i]) {
+			priv->device_id = compatible_device_ids[i];
+			break;
+		}
+	}
+	if (priv->device_id == SJA1105_NO_DEVICE_ID) {
+		dev_err(dev, "Unrecognized Device ID 0x%llx\n", tmp_device_id);
+		return -EINVAL;
+	}
+	if (IS_PQRS(priv->device_id)) {
+		rc = sja1105_spi_send_int(priv, SPI_READ, PROD_ID_ADDR,
+					  &tmp_part_nr, 4);
+		if (rc < 0)
+			return rc;
+
+		sja1105_unpack(&tmp_part_nr, &priv->part_nr, 19, 4, 4);
+	}
+	dev_dbg(dev, "%s Device ID detected.\n",
+		sja1105_device_id_string_get(priv->device_id, priv->part_nr));
+
+	if (IS_ET(priv->device_id))
+		priv->regs = &sja1105et_regs;
+	else if (IS_PQRS(priv->device_id))
+		priv->regs = &sja1105pqrs_regs;
+
+	return 0;
+#undef PROD_ID_ADDR
+#undef DEVICE_ID_ADDR
+}
+
+struct sja1105_general_status {
+	u64 configs;
+	u64 crcchkl;
+	u64 ids;
+	u64 crcchkg;
+	u64 nslot;
+	u64 vlind;
+	u64 vlparind;
+	u64 vlroutes;
+	u64 vlparts;
+	u64 macaddl;
+	u64 portenf;
+	u64 fwds_03h;
+	u64 macfds;
+	u64 enffds;
+	u64 l2busyfds;
+	u64 l2busys;
+	u64 macaddu;
+	u64 macaddhcl;
+	u64 vlanidhc;
+	u64 hashconfs;
+	u64 macaddhcu;
+	u64 wpvlanid;
+	u64 port_07h;
+	u64 vlanbusys;
+	u64 wrongports;
+	u64 vnotfounds;
+	u64 vlid;
+	u64 portvl;
+	u64 vlnotfound;
+	u64 emptys;
+	u64 buffers;
+	u64 buflwmark; /* Only on P/Q/R/S */
+	u64 port_0ah;
+	u64 fwds_0ah;
+	u64 parts;
+	u64 ramparerrl;
+	u64 ramparerru;
+};
+
+static void
+sja1105_general_status_unpack(void *buf, struct sja1105_general_status *status,
+			      u64 device_id)
+{
+	/* So that pointer arithmetic advances in 4-byte steps */
+	u32 *p = (u32 *)buf;
+
+	memset(status, 0, sizeof(*status));
+	/* device_id is missing from the buffer, but we don't
+	 * want to diverge from the manual definition of the
+	 * register addresses, so we'll back off one step with
+	 * the register pointer, and never access p[0].
+	 */
+	p--;
+	sja1105_unpack(p + 0x1, &status->configs,   31, 31, 4);
+	sja1105_unpack(p + 0x1, &status->crcchkl,   30, 30, 4);
+	sja1105_unpack(p + 0x1, &status->ids,       29, 29, 4);
+	sja1105_unpack(p + 0x1, &status->crcchkg,   28, 28, 4);
+	sja1105_unpack(p + 0x1, &status->nslot,      3,  0, 4);
+	sja1105_unpack(p + 0x2, &status->vlind,     31, 16, 4);
+	sja1105_unpack(p + 0x2, &status->vlparind,  15,  8, 4);
+	sja1105_unpack(p + 0x2, &status->vlroutes,   1,  1, 4);
+	sja1105_unpack(p + 0x2, &status->vlparts,    0,  0, 4);
+	sja1105_unpack(p + 0x3, &status->macaddl,   31, 16, 4);
+	sja1105_unpack(p + 0x3, &status->portenf,   15,  8, 4);
+	sja1105_unpack(p + 0x3, &status->fwds_03h,   4,  4, 4);
+	sja1105_unpack(p + 0x3, &status->macfds,     3,  3, 4);
+	sja1105_unpack(p + 0x3, &status->enffds,     2,  2, 4);
+	sja1105_unpack(p + 0x3, &status->l2busyfds,  1,  1, 4);
+	sja1105_unpack(p + 0x3, &status->l2busys,    0,  0, 4);
+	sja1105_unpack(p + 0x4, &status->macaddu,   31,  0, 4);
+	sja1105_unpack(p + 0x5, &status->macaddhcl, 31, 16, 4);
+	sja1105_unpack(p + 0x5, &status->vlanidhc,  15,  4, 4);
+	sja1105_unpack(p + 0x5, &status->hashconfs,  0,  0, 4);
+	sja1105_unpack(p + 0x6, &status->macaddhcu, 31,  0, 4);
+	sja1105_unpack(p + 0x7, &status->wpvlanid,  31, 16, 4);
+	sja1105_unpack(p + 0x7, &status->port_07h,  15,  8, 4);
+	sja1105_unpack(p + 0x7, &status->vlanbusys,  4,  4, 4);
+	sja1105_unpack(p + 0x7, &status->wrongports, 3,  3, 4);
+	sja1105_unpack(p + 0x7, &status->vnotfounds, 2,  2, 4);
+	sja1105_unpack(p + 0x8, &status->vlid,      31, 16, 4);
+	sja1105_unpack(p + 0x8, &status->portvl,    15,  8, 4);
+	sja1105_unpack(p + 0x8, &status->vlnotfound, 0,  0, 4);
+	sja1105_unpack(p + 0x9, &status->emptys,    31, 31, 4);
+	sja1105_unpack(p + 0x9, &status->buffers,   30,  0, 4);
+	if (IS_ET(device_id)) {
+		sja1105_unpack(p + 0xA, &status->port_0ah,   15,  8, 4);
+		sja1105_unpack(p + 0xA, &status->fwds_0ah,    1,  1, 4);
+		sja1105_unpack(p + 0xA, &status->parts,       0,  0, 4);
+		sja1105_unpack(p + 0xB, &status->ramparerrl, 20,  0, 4);
+		sja1105_unpack(p + 0xC, &status->ramparerru,  4,  0, 4);
+	} else {
+		sja1105_unpack(p + 0xA, &status->buflwmark,  30,  0, 4);
+		sja1105_unpack(p + 0xB, &status->port_0ah,   15,  8, 4);
+		sja1105_unpack(p + 0xB, &status->fwds_0ah,    1,  1, 4);
+		sja1105_unpack(p + 0xB, &status->parts,       0,  0, 4);
+		sja1105_unpack(p + 0xC, &status->ramparerrl, 22,  0, 4);
+		sja1105_unpack(p + 0xD, &status->ramparerru,  4,  0, 4);
+	}
+}
+
+static int sja1105_general_status_get(struct sja1105_private *priv,
+				      struct sja1105_general_status *status)
+{
+#define SIZE_ET   (0x0C * 4) /* 0x01 to 0x0C */
+#define SIZE_PQRS (0x0D * 4) /* 0x01 to 0x0D */
+#define MAX_SIZE SIZE_PQRS /* the larger of the two; a compile-time constant */
+	u8 packed_buf[MAX_SIZE];
+	const int size = IS_ET(priv->device_id) ? SIZE_ET : SIZE_PQRS;
+	int rc;
+
+	rc = sja1105_spi_send_packed_buf(priv, SPI_READ,
+					 priv->regs->general_status,
+					 packed_buf, size);
+	if (rc < 0)
+		return rc;
+
+	sja1105_general_status_unpack(packed_buf, status, priv->device_id);
+
+	return 0;
+#undef MAX_SIZE
+#undef SIZE_PQRS
+#undef SIZE_ET
+}
+
+/* Not const because packing priv->static_config into buffers and preparing
+ * it for upload requires recalculating the table CRCs and updating the
+ * structures with them.
+ */
+static int
+static_config_buf_prepare_for_upload(struct sja1105_private *priv,
+				     void *config_buf, int buf_len)
+{
+	struct sja1105_static_config *config = &priv->static_config;
+	enum sja1105_static_config_validity valid;
+	struct sja1105_table_header final_header;
+	char *final_header_ptr;
+	int crc_len;
+
+	valid = sja1105_static_config_check_valid(config);
+	if (valid != SJA1105_CONFIG_OK) {
+		dev_err(&priv->spidev->dev, "%s\n",
+			sja1105_static_config_error_msg[valid]);
+		return -EINVAL;
+	}
+
+	if (config->device_id != priv->device_id) {
+		dev_err(&priv->spidev->dev,
+			"The static config is for device ID 0x%llx, but the chip is %s (0x%llx)\n",
+			config->device_id,
+			sja1105_device_id_string_get(priv->device_id,
+						     priv->part_nr),
+			priv->device_id);
+		return -EINVAL;
+	}
+
+	/* Write Device ID and config tables to config_buf */
+	sja1105_static_config_pack(config_buf, config);
+	/* Recalculate CRC of the last header (right now 0xDEADBEEF).
+	 * Don't include the CRC field itself.
+	 */
+	crc_len = buf_len - 4;
+	/* Read the whole table header */
+	final_header_ptr = config_buf + buf_len - SIZE_TABLE_HEADER;
+	sja1105_table_header_packing(final_header_ptr, &final_header, UNPACK);
+	/* Modify */
+	final_header.crc = sja1105_crc32(config_buf, crc_len);
+	/* Rewrite */
+	sja1105_table_header_packing(final_header_ptr, &final_header, PACK);
+
+	return 0;
+}
+
+int sja1105_static_config_upload(struct sja1105_private *priv)
+{
+#define RETRIES 10
+	struct sja1105_static_config *config = &priv->static_config;
+	struct device *dev = &priv->spidev->dev;
+	struct sja1105_general_status status = {0};
+	int rc, retries = RETRIES;
+	u8 *config_buf;
+	int buf_len;
+
+	buf_len = sja1105_static_config_get_length(config);
+	config_buf = kzalloc(buf_len, GFP_KERNEL);
+	if (!config_buf)
+		return -ENOMEM;
+
+	rc = static_config_buf_prepare_for_upload(priv, config_buf, buf_len);
+	if (rc < 0) {
+		dev_err(dev, "Invalid config, cannot upload\n");
+		rc = -EINVAL;
+		goto out;
+	}
+	do {
+		/* Put the SJA1105 in programming mode */
+		rc = sja1105_cold_reset(priv);
+		if (rc < 0) {
+			dev_err(dev, "Failed to reset switch, retrying...\n");
+			continue;
+		}
+		/* Wait for the switch to come out of reset */
+		usleep_range(1000, 5000);
+		/* Upload the static config to the device */
+		rc = sja1105_spi_send_long_packed_buf(priv, SPI_WRITE,
+						      priv->regs->config,
+						      config_buf, buf_len);
+		if (rc < 0) {
+			dev_err(dev, "Failed to upload config, retrying...\n");
+			continue;
+		}
+		/* Check that SJA1105 responded well to the config upload */
+		rc = sja1105_general_status_get(priv, &status);
+		if (rc < 0)
+			continue;
+
+		if (status.ids == 1) {
+			dev_err(dev, "Mismatch between hardware and staging area device ID. Wrote 0x%llx, wants 0x%llx\n",
+				config->device_id, priv->device_id);
+			continue;
+		}
+		if (status.crcchkl == 1) {
+			dev_err(dev, "Switch reported invalid local CRC on the uploaded config, retrying...\n");
+			continue;
+		}
+		if (status.crcchkg == 1) {
+			dev_err(dev, "Switch reported invalid global CRC on the uploaded config, retrying...\n");
+			continue;
+		}
+		if (status.configs == 0) {
+			dev_err(dev, "Switch reported that configuration is invalid, retrying...\n");
+			continue;
+		}
+	} while (--retries && (status.crcchkl == 1 || status.crcchkg == 1 ||
+		 status.configs == 0 || status.ids == 1));
+
+	if (!retries) {
+		rc = -EIO;
+		dev_err(dev, "Failed to upload config to device, giving up\n");
+		goto out;
+	} else if (retries != RETRIES - 1) {
+		dev_info(dev, "Succeeded after %d tries\n", RETRIES - retries);
+	}
+
+	dev_info(dev, "Reset switch and programmed static config\n");
+out:
+	kfree(config_buf);
+	return rc;
+#undef RETRIES
+}
+
diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c
new file mode 100644
index 000000000000..c9de28abfba7
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_static_config.c
@@ -0,0 +1,1810 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/* Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include "sja1105_static_config.h"
+#include <linux/crc32.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+
+/* Convenience wrappers over the generic packing functions. These take into
+ * account the SJA1105 memory layout quirks and provide some level of
+ * programmer protection against incorrect API use. The errors are not
+ * expected to occur during runtime, therefore printing and swallowing them
+ * here is appropriate instead of cluttering up higher-level code.
+ */
+void sja1105_pack(void *buf, const u64 *val, int start, int end, size_t len)
+{
+	int rc = packing(buf, (u64 *)val, start, end, len,
+			 PACK, QUIRK_LSW32_IS_FIRST);
+
+	if (likely(!rc))
+		return;
+
+	if (rc == -EINVAL) {
+		pr_err("Start bit (%d) expected to be larger than end (%d)\n",
+		       start, end);
+	} else if (rc == -ERANGE) {
+		if ((start - end + 1) > 64)
+			pr_err("Field %d-%d too large for 64 bits!\n",
+			       start, end);
+		else
+			pr_err("Cannot store %llx inside bits %d-%d (would truncate)\n",
+			       *val, start, end);
+	}
+	dump_stack();
+}
+
+void sja1105_unpack(const void *buf, u64 *val, int start, int end, size_t len)
+{
+	int rc = packing((void *)buf, val, start, end, len,
+			 UNPACK, QUIRK_LSW32_IS_FIRST);
+
+	if (likely(!rc))
+		return;
+
+	if (rc == -EINVAL)
+		pr_err("Start bit (%d) expected to be larger than end (%d)\n",
+		       start, end);
+	else if (rc == -ERANGE)
+		pr_err("Field %d-%d too large for 64 bits!\n",
+		       start, end);
+	dump_stack();
+}
+
+void sja1105_packing(void *buf, u64 *val, int start, int end,
+		     size_t len, enum packing_op op)
+{
+	int rc = packing(buf, val, start, end, len, op, QUIRK_LSW32_IS_FIRST);
+
+	if (likely(!rc))
+		return;
+
+	if (rc == -EINVAL) {
+		pr_err("Start bit (%d) expected to be larger than end (%d)\n",
+		       start, end);
+	} else if (rc == -ERANGE) {
+		if ((start - end + 1) > 64)
+			pr_err("Field %d-%d too large for 64 bits!\n",
+			       start, end);
+		else
+			pr_err("Cannot store %llx inside bits %d-%d (would truncate)\n",
+			       *val, start, end);
+	}
+	dump_stack();
+}
+
+/* Little-endian Ethernet CRC32 of data packed as big-endian u32 words */
+u32 sja1105_crc32(const void *buf, size_t len)
+{
+	unsigned int i;
+	u64 word;
+	u32 crc;
+
+	/* seed */
+	crc = ~0;
+	for (i = 0; i < len; i += 4) {
+		sja1105_unpack((void *)buf + i, &word, 31, 0, 4);
+		crc = crc32_le(crc, (u8 *)&word, 4);
+	}
+	return ~crc;
+}
+
+static size_t sja1105et_avb_params_entry_packing(void *buf, void *entry_ptr,
+						 enum packing_op op)
+{
+	const size_t size = SIZE_AVB_PARAMS_ENTRY_ET;
+	struct sja1105_avb_params_entry *entry;
+
+	entry = (struct sja1105_avb_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->destmeta, 95, 48, size, op);
+	sja1105_packing(buf, &entry->srcmeta,  47,  0, size, op);
+	return size;
+}
+
+static size_t sja1105pqrs_avb_params_entry_packing(void *buf, void *entry_ptr,
+						   enum packing_op op)
+{
+	const size_t size = SIZE_AVB_PARAMS_ENTRY_PQRS;
+	struct sja1105_avb_params_entry *entry;
+
+	entry = (struct sja1105_avb_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->l2cbs,      127, 127, size, op);
+	sja1105_packing(buf, &entry->cas_master, 126, 126, size, op);
+	sja1105_packing(buf, &entry->destmeta,   125,  78, size, op);
+	sja1105_packing(buf, &entry->srcmeta,     77,  33, size, op);
+	return size;
+}
+
+static size_t sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
+						     enum packing_op op)
+{
+	const size_t size = SIZE_GENERAL_PARAMS_ENTRY_ET;
+	struct sja1105_general_params_entry *entry;
+
+	entry = (struct sja1105_general_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->vllupformat, 319, 319, size, op);
+	sja1105_packing(buf, &entry->mirr_ptacu,  318, 318, size, op);
+	sja1105_packing(buf, &entry->switchid,    317, 315, size, op);
+	sja1105_packing(buf, &entry->hostprio,    314, 312, size, op);
+	sja1105_packing(buf, &entry->mac_fltres1, 311, 264, size, op);
+	sja1105_packing(buf, &entry->mac_fltres0, 263, 216, size, op);
+	sja1105_packing(buf, &entry->mac_flt1,    215, 168, size, op);
+	sja1105_packing(buf, &entry->mac_flt0,    167, 120, size, op);
+	sja1105_packing(buf, &entry->incl_srcpt1, 119, 119, size, op);
+	sja1105_packing(buf, &entry->incl_srcpt0, 118, 118, size, op);
+	sja1105_packing(buf, &entry->send_meta1,  117, 117, size, op);
+	sja1105_packing(buf, &entry->send_meta0,  116, 116, size, op);
+	sja1105_packing(buf, &entry->casc_port,   115, 113, size, op);
+	sja1105_packing(buf, &entry->host_port,   112, 110, size, op);
+	sja1105_packing(buf, &entry->mirr_port,   109, 107, size, op);
+	sja1105_packing(buf, &entry->vlmarker,    106,  75, size, op);
+	sja1105_packing(buf, &entry->vlmask,       74,  43, size, op);
+	sja1105_packing(buf, &entry->tpid,         42,  27, size, op);
+	sja1105_packing(buf, &entry->ignore2stf,   26,  26, size, op);
+	sja1105_packing(buf, &entry->tpid2,        25,  10, size, op);
+	return size;
+}
+
+static size_t
+sja1105pqrs_general_params_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op)
+{
+	const size_t size = SIZE_GENERAL_PARAMS_ENTRY_PQRS;
+	struct sja1105_general_params_entry *entry;
+
+	entry = (struct sja1105_general_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->vllupformat, 351, 351, size, op);
+	sja1105_packing(buf, &entry->mirr_ptacu,  350, 350, size, op);
+	sja1105_packing(buf, &entry->switchid,    349, 347, size, op);
+	sja1105_packing(buf, &entry->hostprio,    346, 344, size, op);
+	sja1105_packing(buf, &entry->mac_fltres1, 343, 296, size, op);
+	sja1105_packing(buf, &entry->mac_fltres0, 295, 248, size, op);
+	sja1105_packing(buf, &entry->mac_flt1,    247, 200, size, op);
+	sja1105_packing(buf, &entry->mac_flt0,    199, 152, size, op);
+	sja1105_packing(buf, &entry->incl_srcpt1, 151, 151, size, op);
+	sja1105_packing(buf, &entry->incl_srcpt0, 150, 150, size, op);
+	sja1105_packing(buf, &entry->send_meta1,  149, 149, size, op);
+	sja1105_packing(buf, &entry->send_meta0,  148, 148, size, op);
+	sja1105_packing(buf, &entry->casc_port,   147, 145, size, op);
+	sja1105_packing(buf, &entry->host_port,   144, 142, size, op);
+	sja1105_packing(buf, &entry->mirr_port,   141, 139, size, op);
+	sja1105_packing(buf, &entry->vlmarker,    138, 107, size, op);
+	sja1105_packing(buf, &entry->vlmask,      106,  75, size, op);
+	sja1105_packing(buf, &entry->tpid,         74,  59, size, op);
+	sja1105_packing(buf, &entry->ignore2stf,   58,  58, size, op);
+	sja1105_packing(buf, &entry->tpid2,        57,  42, size, op);
+	sja1105_packing(buf, &entry->queue_ts,     41,  41, size, op);
+	sja1105_packing(buf, &entry->egrmirrvid,   40,  29, size, op);
+	sja1105_packing(buf, &entry->egrmirrpcp,   28,  26, size, op);
+	sja1105_packing(buf, &entry->egrmirrdei,   25,  25, size, op);
+	sja1105_packing(buf, &entry->replay_port,  24,  22, size, op);
+	return size;
+}
+
+static size_t
+sja1105_l2_forwarding_params_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op)
+{
+	const size_t size = SIZE_L2_FORWARDING_PARAMS_ENTRY;
+	struct sja1105_l2_forwarding_params_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_l2_forwarding_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->max_dynp, 95, 93, size, op);
+	for (i = 0, offset = 13; i < 8; i++, offset += 10)
+		sja1105_packing(buf, &entry->part_spc[i],
+				offset + 9, offset + 0, size, op);
+	return size;
+}
+
+size_t sja1105_l2_forwarding_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op)
+{
+	const size_t size = SIZE_L2_FORWARDING_ENTRY;
+	struct sja1105_l2_forwarding_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_l2_forwarding_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->bc_domain,  63, 59, size, op);
+	sja1105_packing(buf, &entry->reach_port, 58, 54, size, op);
+	sja1105_packing(buf, &entry->fl_domain,  53, 49, size, op);
+	for (i = 0, offset = 25; i < 8; i++, offset += 3)
+		sja1105_packing(buf, &entry->vlan_pmap[i],
+				offset + 2, offset + 0, size, op);
+	return size;
+}
+
+static size_t
+sja1105et_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op)
+{
+	const size_t size = SIZE_L2_LOOKUP_PARAMS_ENTRY_ET;
+	struct sja1105_l2_lookup_params_entry *entry;
+
+	entry = (struct sja1105_l2_lookup_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->maxage,         31, 17, size, op);
+	sja1105_packing(buf, &entry->dyn_tbsz,       16, 14, size, op);
+	sja1105_packing(buf, &entry->poly,           13,  6, size, op);
+	sja1105_packing(buf, &entry->shared_learn,    5,  5, size, op);
+	sja1105_packing(buf, &entry->no_enf_hostprt,  4,  4, size, op);
+	sja1105_packing(buf, &entry->no_mgmt_learn,   3,  3, size, op);
+	return size;
+}
+
+static size_t
+sja1105pqrs_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op)
+{
+	const size_t size = SIZE_L2_LOOKUP_PARAMS_ENTRY_PQRS;
+	struct sja1105_l2_lookup_params_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_l2_lookup_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->drpbc,         127, 123, size, op);
+	sja1105_packing(buf, &entry->drpmc,         122, 118, size, op);
+	sja1105_packing(buf, &entry->drpuni,        117, 113, size, op);
+	for (i = 0, offset = 58; i < 5; i++, offset += 11)
+		sja1105_packing(buf, &entry->maxaddrp[i],
+				offset + 10, offset + 0, size, op);
+	sja1105_packing(buf, &entry->maxage,         57,  43, size, op);
+	sja1105_packing(buf, &entry->start_dynspc,   42,  33, size, op);
+	sja1105_packing(buf, &entry->drpnolearn,     32,  28, size, op);
+	sja1105_packing(buf, &entry->shared_learn,   27,  27, size, op);
+	sja1105_packing(buf, &entry->no_enf_hostprt, 26,  26, size, op);
+	sja1105_packing(buf, &entry->no_mgmt_learn,  25,  25, size, op);
+	sja1105_packing(buf, &entry->use_static,     24,  24, size, op);
+	sja1105_packing(buf, &entry->owr_dyn,        23,  23, size, op);
+	sja1105_packing(buf, &entry->learn_once,     22,  22, size, op);
+	return size;
+}
+
+size_t sja1105et_l2_lookup_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op)
+{
+	const size_t size = SIZE_L2_LOOKUP_ENTRY_ET;
+	struct sja1105_l2_lookup_entry *entry;
+
+	entry = (struct sja1105_l2_lookup_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->vlanid,    95, 84, size, op);
+	sja1105_packing(buf, &entry->macaddr,   83, 36, size, op);
+	sja1105_packing(buf, &entry->destports, 35, 31, size, op);
+	sja1105_packing(buf, &entry->enfport,   30, 30, size, op);
+	sja1105_packing(buf, &entry->index,     29, 20, size, op);
+	return size;
+}
+
+size_t sja1105pqrs_l2_lookup_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op)
+{
+	const size_t size = SIZE_L2_LOOKUP_ENTRY_PQRS;
+	struct sja1105_l2_lookup_entry *entry;
+
+	entry = (struct sja1105_l2_lookup_entry *)entry_ptr;
+
+	/* These are static L2 lookup entries, so the structure
+	 * should match UM11040 Table 16/17 definitions when
+	 * LOCKEDS is 1.
+	 */
+	sja1105_packing(buf, &entry->mirrvlan,     158, 147, size, op);
+	sja1105_packing(buf, &entry->mirr,         145, 145, size, op);
+	sja1105_packing(buf, &entry->retag,        144, 144, size, op);
+	sja1105_packing(buf, &entry->mask_iotag,   143, 143, size, op);
+	sja1105_packing(buf, &entry->mask_vlanid,  142, 131, size, op);
+	sja1105_packing(buf, &entry->mask_macaddr, 130,  83, size, op);
+	sja1105_packing(buf, &entry->iotag,         82,  82, size, op);
+	sja1105_packing(buf, &entry->vlanid,        81,  70, size, op);
+	sja1105_packing(buf, &entry->macaddr,       69,  22, size, op);
+	sja1105_packing(buf, &entry->destports,     21,  17, size, op);
+	sja1105_packing(buf, &entry->enfport,       16,  16, size, op);
+	sja1105_packing(buf, &entry->index,         15,   6, size, op);
+	return size;
+}
+
+static size_t sja1105_l2_policing_entry_packing(void *buf, void *entry_ptr,
+						enum packing_op op)
+{
+	const size_t size = SIZE_L2_POLICING_ENTRY;
+	struct sja1105_l2_policing_entry *entry;
+
+	entry = (struct sja1105_l2_policing_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->sharindx,  63, 58, size, op);
+	sja1105_packing(buf, &entry->smax,      57, 42, size, op);
+	sja1105_packing(buf, &entry->rate,      41, 26, size, op);
+	sja1105_packing(buf, &entry->maxlen,    25, 15, size, op);
+	sja1105_packing(buf, &entry->partition, 14, 12, size, op);
+	return size;
+}
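All of the packing helpers in this file follow the addressing convention of the generic API from patch 01/13 ("lib: Add support for generic packing operations"): a field is identified by the bit positions of its MSB and LSB within a big-endian buffer, where bit 0 is the least significant bit of the last byte. A minimal, self-contained sketch of the PACK direction (names here are illustrative; the real lib/packing.c implementation additionally handles quirks and error checking):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack the low (msb - lsb + 1) bits of val into bit positions
 * [msb:lsb] of a len-byte big-endian buffer. Bit 0 is the LSB of
 * buf[len - 1]; bit (8 * len - 1) is the MSB of buf[0].
 */
static void pack_field(uint8_t *buf, uint64_t val, int msb, int lsb,
		       size_t len)
{
	int i;

	for (i = lsb; i <= msb; i++) {
		size_t byte = len - 1 - (i / 8); /* big-endian byte order */
		int bit = i % 8;

		if (val & (1ull << (i - lsb)))
			buf[byte] |= (uint8_t)(1u << bit);
		else
			buf[byte] &= (uint8_t)~(1u << bit);
	}
}
```

For example, a call like `sja1105_packing(buf, &entry->vlanid, 38, 27, size, op)` above places a 12-bit field spanning two bytes of an 8-byte packed entry.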
+
+static size_t sja1105et_mac_config_entry_packing(void *buf, void *entry_ptr,
+						 enum packing_op op)
+{
+	const size_t size = SIZE_MAC_CONFIG_ENTRY_ET;
+	struct sja1105_mac_config_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_mac_config_entry *)entry_ptr;
+
+	for (i = 0, offset = 72; i < 8; i++, offset += 19) {
+		sja1105_packing(buf, &entry->enabled[i],
+				offset +  0, offset +  0, size, op);
+		sja1105_packing(buf, &entry->base[i],
+				offset +  9, offset +  1, size, op);
+		sja1105_packing(buf, &entry->top[i],
+				offset + 18, offset + 10, size, op);
+	}
+	sja1105_packing(buf, &entry->ifg,       71, 67, size, op);
+	sja1105_packing(buf, &entry->speed,     66, 65, size, op);
+	sja1105_packing(buf, &entry->tp_delin,  64, 49, size, op);
+	sja1105_packing(buf, &entry->tp_delout, 48, 33, size, op);
+	sja1105_packing(buf, &entry->maxage,    32, 25, size, op);
+	sja1105_packing(buf, &entry->vlanprio,  24, 22, size, op);
+	sja1105_packing(buf, &entry->vlanid,    21, 10, size, op);
+	sja1105_packing(buf, &entry->ing_mirr,   9,  9, size, op);
+	sja1105_packing(buf, &entry->egr_mirr,   8,  8, size, op);
+	sja1105_packing(buf, &entry->drpnona664, 7,  7, size, op);
+	sja1105_packing(buf, &entry->drpdtag,    6,  6, size, op);
+	sja1105_packing(buf, &entry->drpuntag,   5,  5, size, op);
+	sja1105_packing(buf, &entry->retag,      4,  4, size, op);
+	sja1105_packing(buf, &entry->dyn_learn,  3,  3, size, op);
+	sja1105_packing(buf, &entry->egress,     2,  2, size, op);
+	sja1105_packing(buf, &entry->ingress,    1,  1, size, op);
+	return size;
+}
+
+size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
+					    enum packing_op op)
+{
+	const size_t size = SIZE_MAC_CONFIG_ENTRY_PQRS;
+	struct sja1105_mac_config_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_mac_config_entry *)entry_ptr;
+
+	for (i = 0, offset = 104; i < 8; i++, offset += 19) {
+		sja1105_packing(buf, &entry->enabled[i],
+				offset +  0, offset +  0, size, op);
+		sja1105_packing(buf, &entry->base[i],
+				offset +  9, offset +  1, size, op);
+		sja1105_packing(buf, &entry->top[i],
+				offset + 18, offset + 10, size, op);
+	}
+	sja1105_packing(buf, &entry->ifg,       103, 99, size, op);
+	sja1105_packing(buf, &entry->speed,      98, 97, size, op);
+	sja1105_packing(buf, &entry->tp_delin,   96, 81, size, op);
+	sja1105_packing(buf, &entry->tp_delout,  80, 65, size, op);
+	sja1105_packing(buf, &entry->maxage,     64, 57, size, op);
+	sja1105_packing(buf, &entry->vlanprio,   56, 54, size, op);
+	sja1105_packing(buf, &entry->vlanid,     53, 42, size, op);
+	sja1105_packing(buf, &entry->ing_mirr,   41, 41, size, op);
+	sja1105_packing(buf, &entry->egr_mirr,   40, 40, size, op);
+	sja1105_packing(buf, &entry->drpnona664, 39, 39, size, op);
+	sja1105_packing(buf, &entry->drpdtag,    38, 38, size, op);
+	sja1105_packing(buf, &entry->drpsotag,   37, 37, size, op);
+	sja1105_packing(buf, &entry->drpsitag,   36, 36, size, op);
+	sja1105_packing(buf, &entry->drpuntag,   35, 35, size, op);
+	sja1105_packing(buf, &entry->retag,      34, 34, size, op);
+	sja1105_packing(buf, &entry->dyn_learn,  33, 33, size, op);
+	sja1105_packing(buf, &entry->egress,     32, 32, size, op);
+	sja1105_packing(buf, &entry->ingress,    31, 31, size, op);
+	sja1105_packing(buf, &entry->mirrcie,    30, 30, size, op);
+	sja1105_packing(buf, &entry->mirrcetag,  29, 29, size, op);
+	sja1105_packing(buf, &entry->ingmirrvid, 28, 17, size, op);
+	sja1105_packing(buf, &entry->ingmirrpcp, 16, 14, size, op);
+	sja1105_packing(buf, &entry->ingmirrdei, 13, 13, size, op);
+	return size;
+}
+
+static size_t
+sja1105_schedule_entry_points_params_entry_packing(void *buf, void *entry_ptr,
+						   enum packing_op op)
+{
+	struct sja1105_schedule_entry_points_params_entry *entry;
+	const size_t size = SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY;
+
+	entry = (struct sja1105_schedule_entry_points_params_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->clksrc,    31, 30, size, op);
+	sja1105_packing(buf, &entry->actsubsch, 29, 27, size, op);
+	return size;
+}
+
+static size_t
+sja1105_schedule_entry_points_entry_packing(void *buf, void *entry_ptr,
+					    enum packing_op op)
+{
+	struct sja1105_schedule_entry_points_entry *entry;
+	const size_t size = SIZE_SCHEDULE_ENTRY_POINTS_ENTRY;
+
+	entry = (struct sja1105_schedule_entry_points_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->subschindx, 31, 29, size, op);
+	sja1105_packing(buf, &entry->delta,      28, 11, size, op);
+	sja1105_packing(buf, &entry->address,    10, 1,  size, op);
+	return size;
+}
+
+static size_t sja1105_schedule_params_entry_packing(void *buf, void *entry_ptr,
+						    enum packing_op op)
+{
+	const size_t size = SIZE_SCHEDULE_PARAMS_ENTRY;
+	struct sja1105_schedule_params_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_schedule_params_entry *)entry_ptr;
+
+	for (i = 0, offset = 16; i < 8; i++, offset += 10)
+		sja1105_packing(buf, &entry->subscheind[i],
+				offset + 9, offset + 0, size, op);
+	return size;
+}
+
+static size_t sja1105_schedule_entry_packing(void *buf, void *entry_ptr,
+					     enum packing_op op)
+{
+	const size_t size = SIZE_SCHEDULE_ENTRY;
+	struct sja1105_schedule_entry *entry;
+
+	entry = (struct sja1105_schedule_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->winstindex,  63, 54, size, op);
+	sja1105_packing(buf, &entry->winend,      53, 53, size, op);
+	sja1105_packing(buf, &entry->winst,       52, 52, size, op);
+	sja1105_packing(buf, &entry->destports,   51, 47, size, op);
+	sja1105_packing(buf, &entry->setvalid,    46, 46, size, op);
+	sja1105_packing(buf, &entry->txen,        45, 45, size, op);
+	sja1105_packing(buf, &entry->resmedia_en, 44, 44, size, op);
+	sja1105_packing(buf, &entry->resmedia,    43, 36, size, op);
+	sja1105_packing(buf, &entry->vlindex,     35, 26, size, op);
+	sja1105_packing(buf, &entry->delta,       25, 8,  size, op);
+	return size;
+}
+
+static size_t sja1105_sgmii_entry_packing(void *buf, void *entry_ptr,
+					  enum packing_op op)
+{
+	const size_t size = SIZE_SGMII_ENTRY;
+	struct sja1105_sgmii_entry *entry;
+	u64 tmp;
+
+	entry = (struct sja1105_sgmii_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->digital_error_cnt,
+			1151, 1120, size, op);
+	sja1105_packing(buf, &entry->digital_control_2,
+			1119, 1088, size, op);
+	sja1105_packing(buf, &entry->debug_control,
+			383,  352, size, op);
+	sja1105_packing(buf, &entry->test_control,
+			351,  320, size, op);
+	sja1105_packing(buf, &entry->autoneg_control,
+			287,  256, size, op);
+	sja1105_packing(buf, &entry->digital_control_1,
+			255,  224, size, op);
+	sja1105_packing(buf, &entry->autoneg_adv,
+			223,  192, size, op);
+	sja1105_packing(buf, &entry->basic_control,
+			191,  160, size, op);
+	/* Reserved areas */
+	if (op == PACK) {
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp, 1087, 1056, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp, 1055, 1024, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp, 1023,  992, size);
+		tmp = 0x0100ull; sja1105_pack(buf, &tmp,  991,  960, size);
+		tmp = 0x023Full; sja1105_pack(buf, &tmp,  959,  928, size);
+		tmp = 0x000Aull; sja1105_pack(buf, &tmp,  927,  896, size);
+		tmp = 0x1C22ull; sja1105_pack(buf, &tmp,  895,  864, size);
+		tmp = 0x0001ull; sja1105_pack(buf, &tmp,  863,  832, size);
+		tmp = 0x0003ull; sja1105_pack(buf, &tmp,  831,  800, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  799,  768, size);
+		tmp = 0x0001ull; sja1105_pack(buf, &tmp,  767,  736, size);
+		tmp = 0x0005ull; sja1105_pack(buf, &tmp,  735,  704, size);
+		tmp = 0x0101ull; sja1105_pack(buf, &tmp,  703,  672, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  671,  640, size);
+		tmp = 0x0001ull; sja1105_pack(buf, &tmp,  639,  608, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  607,  576, size);
+		tmp = 0x000Aull; sja1105_pack(buf, &tmp,  575,  544, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  543,  512, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  511,  480, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  479,  448, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  447,  416, size);
+		tmp = 0x899Cull; sja1105_pack(buf, &tmp,  415,  384, size);
+		tmp = 0x000Aull; sja1105_pack(buf, &tmp,  319,  288, size);
+		tmp = 0x0004ull; sja1105_pack(buf, &tmp,  159,  128, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,  127,   96, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,   95,   64, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,   63,   32, size);
+		tmp = 0x0000ull; sja1105_pack(buf, &tmp,   31,    0, size);
+	}
+	return size;
+}
+
+static size_t
+sja1105_vl_forwarding_params_entry_packing(void *buf, void *entry_ptr,
+					   enum packing_op op)
+{
+	const size_t size = SIZE_VL_FORWARDING_PARAMS_ENTRY;
+	struct sja1105_vl_forwarding_params_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_vl_forwarding_params_entry *)entry_ptr;
+
+	for (i = 0, offset = 16; i < 8; i++, offset += 10)
+		sja1105_packing(buf, &entry->partspc[i],
+				offset + 9, offset + 0, size, op);
+	sja1105_packing(buf, &entry->debugen, 15, 15, size, op);
+	return size;
+}
+
+static size_t sja1105_vl_forwarding_entry_packing(void *buf, void *entry_ptr,
+						  enum packing_op op)
+{
+	const size_t size = SIZE_VL_FORWARDING_ENTRY;
+	struct sja1105_vl_forwarding_entry *entry;
+
+	entry = (struct sja1105_vl_forwarding_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->type,      31, 31, size, op);
+	sja1105_packing(buf, &entry->priority,  30, 28, size, op);
+	sja1105_packing(buf, &entry->partition, 27, 25, size, op);
+	sja1105_packing(buf, &entry->destports, 24, 20, size, op);
+	return size;
+}
+
+size_t sja1105_vl_lookup_entry_packing(void *buf, void *entry_ptr,
+				       enum packing_op op)
+{
+	const size_t size = SIZE_VL_LOOKUP_ENTRY;
+	struct sja1105_vl_lookup_entry *entry;
+
+	entry = (struct sja1105_vl_lookup_entry *)entry_ptr;
+
+	if (entry->format == 0) {
+		/* Interpreting vllupformat as 0 */
+		sja1105_packing(buf, &entry->destports,
+				95, 91, size, op);
+		sja1105_packing(buf, &entry->iscritical,
+				90, 90, size, op);
+		sja1105_packing(buf, &entry->macaddr,
+				89, 42, size, op);
+		sja1105_packing(buf, &entry->vlanid,
+				41, 30, size, op);
+		sja1105_packing(buf, &entry->port,
+				29, 27, size, op);
+		sja1105_packing(buf, &entry->vlanprior,
+				26, 24, size, op);
+	} else {
+		/* Interpreting vllupformat as 1 */
+		sja1105_packing(buf, &entry->egrmirr,
+				95, 91, size, op);
+		sja1105_packing(buf, &entry->ingrmirr,
+				90, 90, size, op);
+		sja1105_packing(buf, &entry->vlid,
+				57, 42, size, op);
+		sja1105_packing(buf, &entry->port,
+				29, 27, size, op);
+	}
+	return size;
+}
+
+static size_t sja1105_vl_policing_entry_packing(void *buf, void *entry_ptr,
+						enum packing_op op)
+{
+	const size_t size = SIZE_VL_POLICING_ENTRY;
+	struct sja1105_vl_policing_entry *entry;
+
+	entry = (struct sja1105_vl_policing_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->type,      63, 63, size, op);
+	sja1105_packing(buf, &entry->maxlen,    62, 52, size, op);
+	sja1105_packing(buf, &entry->sharindx,  51, 42, size, op);
+	if (entry->type == 0) {
+		sja1105_packing(buf, &entry->bag,    41, 28, size, op);
+		sja1105_packing(buf, &entry->jitter, 27, 18, size, op);
+	}
+	return size;
+}
+
+size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
+					 enum packing_op op)
+{
+	const size_t size = SIZE_VLAN_LOOKUP_ENTRY;
+	struct sja1105_vlan_lookup_entry *entry;
+
+	entry = (struct sja1105_vlan_lookup_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->ving_mirr,  63, 59, size, op);
+	sja1105_packing(buf, &entry->vegr_mirr,  58, 54, size, op);
+	sja1105_packing(buf, &entry->vmemb_port, 53, 49, size, op);
+	sja1105_packing(buf, &entry->vlan_bc,    48, 44, size, op);
+	sja1105_packing(buf, &entry->tag_port,   43, 39, size, op);
+	sja1105_packing(buf, &entry->vlanid,     38, 27, size, op);
+	return size;
+}
+
+size_t sja1105_retagging_entry_packing(void *buf, void *entry_ptr,
+				       enum packing_op op)
+{
+	const size_t size = SIZE_RETAGGING_ENTRY;
+	struct sja1105_retagging_entry *entry;
+
+	entry = (struct sja1105_retagging_entry *)entry_ptr;
+
+	sja1105_packing(buf, &entry->egr_port,     63, 59, size, op);
+	sja1105_packing(buf, &entry->ing_port,     58, 54, size, op);
+	sja1105_packing(buf, &entry->vlan_ing,     53, 42, size, op);
+	sja1105_packing(buf, &entry->vlan_egr,     41, 30, size, op);
+	sja1105_packing(buf, &entry->do_not_learn, 29, 29, size, op);
+	sja1105_packing(buf, &entry->destports,    27, 23, size, op);
+	return size;
+}
+
+static size_t sja1105_xmii_params_entry_packing(void *buf, void *entry_ptr,
+						enum packing_op op)
+{
+	const size_t size = SIZE_XMII_PARAMS_ENTRY;
+	struct sja1105_xmii_params_entry *entry;
+	int offset, i;
+
+	entry = (struct sja1105_xmii_params_entry *)entry_ptr;
+
+	for (i = 0, offset = 17; i < 5; i++, offset += 3) {
+		sja1105_packing(buf, &entry->xmii_mode[i],
+				offset + 1, offset + 0, size, op);
+		sja1105_packing(buf, &entry->phy_mac[i],
+				offset + 2, offset + 2, size, op);
+	}
+	return size;
+}
+
+size_t sja1105_table_header_packing(void *buf, void *entry_ptr,
+				    enum packing_op op)
+{
+	const size_t size = SIZE_TABLE_HEADER;
+	struct sja1105_table_header *entry;
+
+	entry = (struct sja1105_table_header *)entry_ptr;
+
+	sja1105_packing(buf, &entry->block_id, 31, 24, size, op);
+	sja1105_packing(buf, &entry->len,      55, 32, size, op);
+	sja1105_packing(buf, &entry->crc,      95, 64, size, op);
+	return size;
+}
+
+/* Note: the *hdr pointer is deliberately non-const: this function
+ * updates hdr->crc as part of the 2-stage packing operation
+ */
+void
+sja1105_table_header_pack_with_crc(void *buf, struct sja1105_table_header *hdr)
+{
+	/* First pack the table as-is, then calculate the CRC, and
+	 * finally put the proper CRC into the packed buffer
+	 */
+	memset(buf, 0, SIZE_TABLE_HEADER);
+	sja1105_table_header_packing(buf, hdr, PACK);
+	hdr->crc = sja1105_crc32(buf, SIZE_TABLE_HEADER - 4);
+	sja1105_pack(buf + SIZE_TABLE_HEADER - 4, &hdr->crc, 31, 0, 4);
+}
+
+static void sja1105_table_write_crc(u8 *table_start, u8 *crc_ptr)
+{
+	u64 computed_crc;
+	int len_bytes;
+
+	len_bytes = (uintptr_t)(crc_ptr - table_start);
+	computed_crc = sja1105_crc32(table_start, len_bytes);
+	sja1105_pack(crc_ptr, &computed_crc, 31, 0, 4);
+}
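Both CRC helpers above delegate to sja1105_crc32(), computed over everything that precedes the 4-byte CRC word itself. For reference, the textbook reflected CRC-32 (polynomial 0xEDB88320, the family that the kernel's lib/crc32.c exposes as crc32_le()) looks as follows; the driver's sja1105_crc32() wrapper may apply its own seed and bit ordering on top, so treat this as an illustration of the per-byte loop, not as the exact switch CRC:

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-by-bit reflected CRC-32; a table-driven version would be used
 * in practice.
 */
static uint32_t crc32_reflected(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;
	size_t i;
	int j;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (j = 0; j < 8; j++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
	}
	return ~crc;
}
```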
+
+/* The block IDs that the switches support are unfortunately sparse, so keep a
+ * mapping table of "block indices" and translate back and forth so that we
+ * don't waste memory in struct sja1105_static_config.
+ * Also, since the block ID comes from essentially untrusted input (unpacking
+ * the static config from userspace), it has to be sanitized (range-checked)
+ * before blk_idx is used to index kernel memory.
+ */
+static u64 blk_id_map[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = BLKID_SCHEDULE,
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = BLKID_SCHEDULE_ENTRY_POINTS,
+	[BLK_IDX_VL_LOOKUP] = BLKID_VL_LOOKUP,
+	[BLK_IDX_VL_POLICING] = BLKID_VL_POLICING,
+	[BLK_IDX_VL_FORWARDING] = BLKID_VL_FORWARDING,
+	[BLK_IDX_L2_LOOKUP] = BLKID_L2_LOOKUP,
+	[BLK_IDX_L2_POLICING] = BLKID_L2_POLICING,
+	[BLK_IDX_VLAN_LOOKUP] = BLKID_VLAN_LOOKUP,
+	[BLK_IDX_L2_FORWARDING] = BLKID_L2_FORWARDING,
+	[BLK_IDX_MAC_CONFIG] = BLKID_MAC_CONFIG,
+	[BLK_IDX_SCHEDULE_PARAMS] = BLKID_SCHEDULE_PARAMS,
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = BLKID_SCHEDULE_ENTRY_POINTS_PARAMS,
+	[BLK_IDX_VL_FORWARDING_PARAMS] = BLKID_VL_FORWARDING_PARAMS,
+	[BLK_IDX_L2_LOOKUP_PARAMS] = BLKID_L2_LOOKUP_PARAMS,
+	[BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS,
+	[BLK_IDX_CLK_SYNC_PARAMS] = BLKID_CLK_SYNC_PARAMS,
+	[BLK_IDX_AVB_PARAMS] = BLKID_AVB_PARAMS,
+	[BLK_IDX_GENERAL_PARAMS] = BLKID_GENERAL_PARAMS,
+	[BLK_IDX_RETAGGING] = BLKID_RETAGGING,
+	[BLK_IDX_XMII_PARAMS] = BLKID_XMII_PARAMS,
+	[BLK_IDX_SGMII] = BLKID_SGMII,
+};
+
+static enum sja1105_blk_idx blk_idx_from_blk_id(u64 block_id)
+{
+	enum sja1105_blk_idx blk_idx;
+
+	if (block_id > BLKID_MAX)
+		return BLK_IDX_INVAL;
+
+	for (blk_idx = 0; blk_idx < BLK_IDX_MAX; blk_idx++)
+		if (blk_id_map[blk_idx] == block_id)
+			return blk_idx;
+
+	return BLK_IDX_INVAL;
+}
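The dense-index translation above can be exercised in isolation. The IDs below are made up for illustration; the point is that an unknown ID coming from untrusted input maps to an "invalid" sentinel instead of ever being used as an array index:

```c
#include <stdint.h>

enum blk_idx { IDX_FOO, IDX_BAR, IDX_MAX, IDX_INVAL };

/* Hypothetical sparse block IDs, keyed by dense index */
static const uint64_t id_map[IDX_MAX] = {
	[IDX_FOO] = 0x06,
	[IDX_BAR] = 0x21,
};

static enum blk_idx idx_from_id(uint64_t id)
{
	enum blk_idx i;

	for (i = 0; i < IDX_MAX; i++)
		if (id_map[i] == id)
			return i;
	/* Untrusted input: never index kernel memory with a raw ID */
	return IDX_INVAL;
}
```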
+
+static ssize_t
+sja1105_table_add_entry(struct sja1105_table *table, const void *buf)
+{
+	void *entry_ptr;
+
+	if (table->entry_count >= table->ops->max_entry_count)
+		return -ERANGE;
+
+	entry_ptr = table->entries;
+	entry_ptr += (uintptr_t)table->ops->unpacked_entry_size *
+				table->entry_count;
+
+	table->entry_count++;
+
+	memset(entry_ptr, 0, table->ops->unpacked_entry_size);
+
+	/* Discard const pointer due to common implementation
+	 * of PACK and UNPACK.
+	 */
+	return table->ops->packing((void *)buf, entry_ptr, UNPACK);
+}
+
+/* This is done so that the information required by
+ * sja1105_vl_lookup_entry_packing is self-contained within
+ * the entry structure and does not depend on the general-parameters-table.
+ */
+static void
+sja1105_static_config_patch_vllupformat(struct sja1105_static_config *config)
+{
+	struct sja1105_vl_lookup_entry *vl_lookup_entries;
+	struct sja1105_general_params_entry *general_params_entries;
+	struct sja1105_table *tables = config->tables;
+	u64 vllupformat;
+	int i;
+
+	vl_lookup_entries = tables[BLK_IDX_VL_LOOKUP].entries;
+	general_params_entries = tables[BLK_IDX_GENERAL_PARAMS].entries;
+
+	vllupformat = general_params_entries[0].vllupformat;
+
+	for (i = 0; i < tables[BLK_IDX_VL_LOOKUP].entry_count; i++)
+		vl_lookup_entries[i].format = vllupformat;
+}
+
+const char *sja1105_static_config_error_msg[] = {
+	[SJA1105_CONFIG_OK] = "",
+	[SJA1105_DEVICE_ID_INVALID] =
+		"Device ID present in the static config is invalid",
+	[SJA1105_TTETHERNET_NOT_SUPPORTED] =
+		"schedule-table present, but TTEthernet is "
+		"only supported on T and Q/S",
+	[SJA1105_INCORRECT_TTETHERNET_CONFIGURATION] =
+		"schedule-table present, but one of "
+		"schedule-entry-points-table, schedule-parameters-table or "
+		"schedule-entry-points-parameters table is empty",
+	[SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION] =
+		"vl-lookup-table present, but one of vl-policing-table, "
+		"vl-forwarding-table or vl-forwarding-parameters-table is empty",
+	[SJA1105_MISSING_L2_POLICING_TABLE] =
+		"l2-policing-table needs to have at least one entry",
+	[SJA1105_MISSING_L2_FORWARDING_TABLE] =
+		"l2-forwarding-table is either missing or incomplete",
+	[SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE] =
+		"l2-forwarding-parameters-table is missing",
+	[SJA1105_MISSING_GENERAL_PARAMS_TABLE] =
+		"general-parameters-table is missing",
+	[SJA1105_MISSING_VLAN_TABLE] =
+		"vlan-lookup-table needs to have at least the default untagged VLAN",
+	[SJA1105_MISSING_XMII_TABLE] =
+		"xmii-table is missing",
+	[SJA1105_MISSING_MAC_TABLE] =
+		"mac-configuration-table needs to contain an entry for each port",
+	[SJA1105_OVERCOMMITTED_FRAME_MEMORY] =
+		"Not allowed to overcommit frame memory. L2 memory partitions "
+		"and VL memory partitions share the same space. The sum of all "
+		"16 memory partitions is not allowed to be larger than 929 "
+		"128-byte blocks (or 910 with retagging). Please adjust "
+		"l2-forwarding-parameters-table.part_spc and/or "
+		"vl-forwarding-parameters-table.partspc.",
+	[SJA1105_UNEXPECTED_END_OF_BUFFER] =
+		"Unexpected end of buffer",
+	[SJA1105_INVALID_DEVICE_ID] =
+		"Invalid device ID present in static config",
+	[SJA1105_INVALID_TABLE_HEADER_CRC] =
+		"One of the table headers has an incorrect CRC",
+	[SJA1105_INVALID_TABLE_HEADER] =
+		"One of the table headers contains an invalid block id",
+	[SJA1105_INCORRECT_TABLE_LENGTH] =
+		"The data length specified in one of the table headers is "
+		"longer than the actual size of the entries that were parsed",
+	[SJA1105_DATA_CRC_INVALID] =
+		"One of the tables has an incorrect CRC over the data area",
+	[SJA1105_EXTRA_BYTES_AT_END_OF_BUFFER] =
+		"Extra bytes found at the end of buffer after parsing it",
+};
+
+static enum sja1105_static_config_validity
+static_config_check_memory_size(const struct sja1105_table *tables)
+{
+	const struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
+	const struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
+	int i, max_mem, mem = 0;
+
+	l2_fwd_params = tables[BLK_IDX_L2_FORWARDING_PARAMS].entries;
+
+	for (i = 0; i < 8; i++)
+		mem += l2_fwd_params->part_spc[i];
+
+	if (tables[BLK_IDX_VL_FORWARDING_PARAMS].entry_count) {
+		vl_fwd_params = tables[BLK_IDX_VL_FORWARDING_PARAMS].entries;
+		for (i = 0; i < 8; i++)
+			mem += vl_fwd_params->partspc[i];
+	}
+
+	if (tables[BLK_IDX_RETAGGING].entry_count)
+		max_mem = MAX_FRAME_MEMORY_RETAGGING;
+	else
+		max_mem = MAX_FRAME_MEMORY;
+
+	if (mem > max_mem)
+		return SJA1105_OVERCOMMITTED_FRAME_MEMORY;
+
+	return SJA1105_CONFIG_OK;
+}
+
+enum sja1105_static_config_validity
+sja1105_static_config_check_valid(const struct sja1105_static_config *config)
+{
+	const struct sja1105_table *tables = config->tables;
+#define IS_FULL(blk_idx) \
+	(tables[blk_idx].entry_count == tables[blk_idx].ops->max_entry_count)
+
+	if (!DEVICE_ID_VALID(config->device_id))
+		return SJA1105_DEVICE_ID_INVALID;
+
+	if (tables[BLK_IDX_SCHEDULE].entry_count) {
+		if (!SUPPORTS_TTETHERNET(config->device_id))
+			return SJA1105_TTETHERNET_NOT_SUPPORTED;
+
+		if (!IS_FULL(BLK_IDX_SCHEDULE_ENTRY_POINTS))
+			return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION;
+
+		if (!IS_FULL(BLK_IDX_SCHEDULE_PARAMS))
+			return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION;
+
+		if (!IS_FULL(BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS))
+			return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION;
+	}
+	if (tables[BLK_IDX_VL_LOOKUP].entry_count) {
+		if (tables[BLK_IDX_VL_POLICING].entry_count == 0)
+			return SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION;
+
+		if (tables[BLK_IDX_VL_FORWARDING].entry_count == 0)
+			return SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION;
+
+		if (!IS_FULL(BLK_IDX_VL_FORWARDING_PARAMS))
+			return SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION;
+	}
+	if (tables[BLK_IDX_L2_POLICING].entry_count == 0)
+		return SJA1105_MISSING_L2_POLICING_TABLE;
+
+	if (tables[BLK_IDX_VLAN_LOOKUP].entry_count == 0)
+		return SJA1105_MISSING_VLAN_TABLE;
+
+	if (!IS_FULL(BLK_IDX_L2_FORWARDING))
+		return SJA1105_MISSING_L2_FORWARDING_TABLE;
+
+	if (!IS_FULL(BLK_IDX_MAC_CONFIG))
+		return SJA1105_MISSING_MAC_TABLE;
+
+	if (!IS_FULL(BLK_IDX_L2_FORWARDING_PARAMS))
+		return SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE;
+
+	if (!IS_FULL(BLK_IDX_GENERAL_PARAMS))
+		return SJA1105_MISSING_GENERAL_PARAMS_TABLE;
+
+	if (!IS_FULL(BLK_IDX_XMII_PARAMS))
+		return SJA1105_MISSING_XMII_TABLE;
+
+	return static_config_check_memory_size(tables);
+#undef IS_FULL
+}
+
+enum sja1105_static_config_validity
+sja1105_static_config_unpack(const void *buf, ssize_t buf_len,
+			     struct sja1105_static_config *config)
+{
+	struct sja1105_table_header hdr;
+	enum sja1105_blk_idx blk_idx;
+	struct sja1105_table *table;
+	u64 computed_crc, read_crc;
+	int expected_entry_count;
+	const u8 *table_end;
+	const u8 *p = buf;
+	int bytes;
+
+	/* Guard memory access to buffer */
+	if (buf_len >= 4)
+		buf_len -= 4;
+	else
+		return SJA1105_UNEXPECTED_END_OF_BUFFER;
+
+	/* Retrieve device_id from first 4 bytes of packed buffer */
+	sja1105_unpack(p, &config->device_id, 31, 0, 4);
+	if (!DEVICE_ID_VALID(config->device_id))
+		return SJA1105_INVALID_DEVICE_ID;
+
+	p += SIZE_SJA1105_DEVICE_ID;
+
+	while (1) {
+		/* Guard memory access to buffer */
+		if (buf_len >= SIZE_TABLE_HEADER)
+			buf_len -= SIZE_TABLE_HEADER;
+		else
+			return SJA1105_UNEXPECTED_END_OF_BUFFER;
+
+		/* Discard const pointer due to common implementation
+		 * of PACK and UNPACK.
+		 */
+		memset(&hdr, 0, sizeof(hdr));
+		sja1105_table_header_packing((void *)p, &hdr, UNPACK);
+
+		/* This should match on last table header */
+		if (hdr.len == 0)
+			break;
+
+		computed_crc = sja1105_crc32(p, SIZE_TABLE_HEADER - 4);
+		computed_crc &= 0xFFFFFFFF;
+		read_crc = hdr.crc & 0xFFFFFFFF;
+		if (read_crc != computed_crc)
+			return SJA1105_INVALID_TABLE_HEADER_CRC;
+
+		p += SIZE_TABLE_HEADER;
+
+		/* Guard memory access to buffer */
+		if (buf_len >= (ssize_t)hdr.len * 4)
+			buf_len -= (ssize_t)hdr.len * 4;
+		else
+			return SJA1105_UNEXPECTED_END_OF_BUFFER;
+
+		table_end = p + hdr.len * 4;
+		computed_crc = sja1105_crc32(p, hdr.len * 4);
+
+		blk_idx = blk_idx_from_blk_id(hdr.block_id);
+		if (blk_idx == BLK_IDX_INVAL)
+			return SJA1105_INVALID_TABLE_HEADER;
+		table = &config->tables[blk_idx];
+		/* Detected duplicate table headers with the same block id */
+		if (table->entry_count)
+			return SJA1105_INVALID_TABLE_HEADER;
+
+		expected_entry_count = hdr.len * 4;
+		expected_entry_count /= table->ops->packed_entry_size;
+		table->entries = kcalloc(expected_entry_count,
+					 table->ops->unpacked_entry_size,
+					 GFP_KERNEL);
+		if (!table->entries)
+			return -ENOMEM;
+
+		while (p < table_end) {
+			bytes = sja1105_table_add_entry(table, p);
+			if (bytes < 0)
+				return SJA1105_INVALID_TABLE_HEADER;
+			p += bytes;
+		}
+		if (p != table_end)
+			/* Incorrect table length for this block id:
+			 * the entries overran the advertised length
+			 * by (p - table_end) bytes.
+			 */
+			return SJA1105_INCORRECT_TABLE_LENGTH;
+		/* Guard memory access to buffer */
+		if (buf_len >= 4)
+			buf_len -= 4;
+		else
+			return SJA1105_UNEXPECTED_END_OF_BUFFER;
+
+		sja1105_unpack(p, &read_crc, 31, 0, 4);
+		p += 4;
+		if (computed_crc != read_crc)
+			return SJA1105_DATA_CRC_INVALID;
+	}
+	if (buf_len)
+		return SJA1105_EXTRA_BYTES_AT_END_OF_BUFFER;
+
+	sja1105_static_config_patch_vllupformat(config);
+	return SJA1105_CONFIG_OK;
+}
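The repeated "Guard memory access to buffer" stanzas above all implement the same check-then-advance pattern. Factored out (hypothetically; the patch keeps them inline), it amounts to:

```c
#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>	/* ssize_t */

/* Advance *p by n bytes only if at least n bytes remain; returns 0 on
 * success, -1 when the buffer would be overrun (the inline equivalent
 * of returning SJA1105_UNEXPECTED_END_OF_BUFFER above).
 */
static int consume(const uint8_t **p, ssize_t *left, size_t n)
{
	if (*left < (ssize_t)n)
		return -1;
	*p += n;
	*left -= n;
	return 0;
}
```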
+
+void
+sja1105_static_config_pack(void *buf, struct sja1105_static_config *config)
+{
+	struct sja1105_table_header header = {0};
+	enum sja1105_blk_idx i;
+	char *p = buf;
+	int j;
+
+	sja1105_pack(p, &config->device_id, 31, 0, 4);
+	p += SIZE_SJA1105_DEVICE_ID;
+
+	for (i = 0; i < BLK_IDX_MAX; i++) {
+		const struct sja1105_table *table;
+		char *table_start;
+
+		table = &config->tables[i];
+		if (!table->entry_count)
+			continue;
+
+		header.block_id = blk_id_map[i];
+		header.len = table->entry_count *
+			     table->ops->packed_entry_size / 4;
+		sja1105_table_header_pack_with_crc(p, &header);
+		p += SIZE_TABLE_HEADER;
+		table_start = p;
+		for (j = 0; j < table->entry_count; j++) {
+			u8 *entry_ptr = table->entries;
+
+			entry_ptr += j * table->ops->unpacked_entry_size;
+			memset(p, 0, table->ops->packed_entry_size);
+			table->ops->packing(p, entry_ptr, PACK);
+			p += table->ops->packed_entry_size;
+		}
+		sja1105_table_write_crc(table_start, p);
+		p += 4;
+	}
+	/* Final header:
+	 * Block ID does not matter
+	 * Length of 0 marks that header is final
+	 * CRC will be replaced on-the-fly on "config upload"
+	 */
+	header.block_id = 0;
+	header.len = 0;
+	header.crc = 0xDEADBEEF;
+	memset(p, 0, SIZE_TABLE_HEADER);
+	sja1105_table_header_packing(p, &header, PACK);
+}
+
+size_t
+sja1105_static_config_get_length(const struct sja1105_static_config *config)
+{
+	unsigned int sum;
+	unsigned int header_count;
+	enum sja1105_blk_idx i;
+
+	/* Ending table header */
+	header_count = 1;
+	/* Device ID */
+	sum = SIZE_SJA1105_DEVICE_ID;
+
+	/* Tables (headers and entries) */
+	for (i = 0; i < BLK_IDX_MAX; i++) {
+		const struct sja1105_table *table;
+
+		table = &config->tables[i];
+		if (table->entry_count)
+			header_count++;
+
+		sum += table->ops->packed_entry_size * table->entry_count;
+	}
+	/* Each table's data area is followed by an extra 4-byte CRC */
+	sum += header_count * (SIZE_TABLE_HEADER + 4);
+	/* The ending header has no data area, hence no trailing CRC */
+	sum -= 4;
+
+	return sum;
+}
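As a concrete check of the computation above, assume (hypothetically) a 12-byte table header, the 4-byte device ID and a single table with three 8-byte packed entries: the image is 4 (device ID) + 24 (entries) + 2 × 16 (two headers, each accounted together with a 4-byte CRC slot) − 4 (the ending header carries no data CRC) = 56 bytes. The same arithmetic in code (an illustrative helper, not driver code):

```c
#include <stddef.h>

#define HDR_BYTES	12	/* assumed SIZE_TABLE_HEADER */
#define DEV_ID_BYTES	4

/* Mirror of the length computation for n tables with the given packed
 * entry sizes and counts.
 */
static size_t config_buf_len(size_t n, const size_t entry_bytes[],
			     const size_t entry_count[])
{
	size_t sum = DEV_ID_BYTES;
	size_t headers = 1;	/* the ending (len == 0) header */
	size_t i;

	for (i = 0; i < n; i++) {
		headers++;
		sum += entry_bytes[i] * entry_count[i];
	}
	/* Each header plus a 4-byte data CRC slot... */
	sum += headers * (HDR_BYTES + 4);
	/* ...except the ending header, which has no data area */
	sum -= 4;
	return sum;
}
```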
+
+/* Compatibility matrices */
+
+/* SJA1105E: First generation, no TTEthernet */
+static struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = { 0 },
+	[BLK_IDX_VL_LOOKUP] = { 0 },
+	[BLK_IDX_VL_POLICING] = { 0 },
+	[BLK_IDX_VL_FORWARDING] = { 0 },
+	[BLK_IDX_L2_LOOKUP] = {
+		.packing = sja1105et_l2_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_ENTRY_ET,
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_POLICING] = {
+		.packing = sja1105_l2_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
+		.packed_entry_size = SIZE_L2_POLICING_ENTRY,
+		.max_entry_count = MAX_L2_POLICING_COUNT,
+	},
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.packing = sja1105_vlan_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
+		.packed_entry_size = SIZE_VLAN_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.packing = sja1105_l2_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.packing = sja1105et_mac_config_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
+		.packed_entry_size = SIZE_MAC_CONFIG_ENTRY_ET,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { 0 },
+	[BLK_IDX_VL_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.packing = sja1105et_l2_lookup_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_PARAMS_ENTRY_ET,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = {
+		.packing = sja1105_l2_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = {
+		.packing = sja1105et_avb_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+		.packed_entry_size = SIZE_AVB_PARAMS_ENTRY_ET,
+		.max_entry_count = MAX_AVB_PARAMS_COUNT,
+	},
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.packing = sja1105et_general_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
+		.packed_entry_size = SIZE_GENERAL_PARAMS_ENTRY_ET,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.packing = sja1105_retagging_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
+		.packed_entry_size = SIZE_RETAGGING_ENTRY,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+	},
+	[BLK_IDX_XMII_PARAMS] = {
+		.packing = sja1105_xmii_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
+		.packed_entry_size = SIZE_XMII_PARAMS_ENTRY,
+		.max_entry_count = MAX_XMII_PARAMS_COUNT,
+	},
+	[BLK_IDX_SGMII] = { 0 },
+};
+
+/* SJA1105T: First generation, TTEthernet */
+static struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = {
+		.packing = sja1105_schedule_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {
+		.packing = sja1105_schedule_entry_points_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY_POINTS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_ENTRY_POINTS_COUNT,
+	},
+	[BLK_IDX_VL_LOOKUP] = {
+		.packing = sja1105_vl_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_lookup_entry),
+		.packed_entry_size = SIZE_VL_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VL_LOOKUP_COUNT,
+	},
+	[BLK_IDX_VL_POLICING] = {
+		.packing = sja1105_vl_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_policing_entry),
+		.packed_entry_size = SIZE_VL_POLICING_ENTRY,
+		.max_entry_count = MAX_VL_POLICING_COUNT,
+	},
+	[BLK_IDX_VL_FORWARDING] = {
+		.packing = sja1105_vl_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_entry),
+		.packed_entry_size = SIZE_VL_FORWARDING_ENTRY,
+		.max_entry_count = MAX_VL_FORWARDING_COUNT,
+	},
+	[BLK_IDX_L2_LOOKUP] = {
+		.packing = sja1105et_l2_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_ENTRY_ET,
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_POLICING] = {
+		.packing = sja1105_l2_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
+		.packed_entry_size = SIZE_L2_POLICING_ENTRY,
+		.max_entry_count = MAX_L2_POLICING_COUNT,
+	},
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.packing = sja1105_vlan_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
+		.packed_entry_size = SIZE_VLAN_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.packing = sja1105_l2_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.packing = sja1105et_mac_config_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
+		.packed_entry_size = SIZE_MAC_CONFIG_ENTRY_ET,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = {
+		.packing = sja1105_schedule_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_params_entry),
+		.packed_entry_size = SIZE_SCHEDULE_PARAMS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_PARAMS_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {
+		.packing = sja1105_schedule_entry_points_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_params_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT,
+	},
+	[BLK_IDX_VL_FORWARDING_PARAMS] = {
+		.packing = sja1105_vl_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_params_entry),
+		.packed_entry_size = SIZE_VL_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_VL_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.packing = sja1105et_l2_lookup_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_PARAMS_ENTRY_ET,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = {
+		.packing = sja1105_l2_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = {
+		.packing = sja1105et_avb_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+		.packed_entry_size = SIZE_AVB_PARAMS_ENTRY_ET,
+		.max_entry_count = MAX_AVB_PARAMS_COUNT,
+	},
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.packing = sja1105et_general_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
+		.packed_entry_size = SIZE_GENERAL_PARAMS_ENTRY_ET,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.packing = sja1105_retagging_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
+		.packed_entry_size = SIZE_RETAGGING_ENTRY,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+	},
+	[BLK_IDX_XMII_PARAMS] = {
+		.packing = sja1105_xmii_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
+		.packed_entry_size = SIZE_XMII_PARAMS_ENTRY,
+		.max_entry_count = MAX_XMII_PARAMS_COUNT,
+	},
+	[BLK_IDX_SGMII] = { 0 },
+};
+
+/* SJA1105P: Second generation, no TTEthernet, no SGMII */
+static struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = { 0 },
+	[BLK_IDX_VL_LOOKUP] = { 0 },
+	[BLK_IDX_VL_POLICING] = { 0 },
+	[BLK_IDX_VL_FORWARDING] = { 0 },
+	[BLK_IDX_L2_LOOKUP] = {
+		.packing = sja1105pqrs_l2_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_POLICING] = {
+		.packing = sja1105_l2_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
+		.packed_entry_size = SIZE_L2_POLICING_ENTRY,
+		.max_entry_count = MAX_L2_POLICING_COUNT,
+	},
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.packing = sja1105_vlan_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
+		.packed_entry_size = SIZE_VLAN_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.packing = sja1105_l2_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.packing = sja1105pqrs_mac_config_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
+		.packed_entry_size = SIZE_MAC_CONFIG_ENTRY_PQRS,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { 0 },
+	[BLK_IDX_VL_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.packing = sja1105pqrs_l2_lookup_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = {
+		.packing = sja1105_l2_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = {
+		.packing = sja1105pqrs_avb_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+		.packed_entry_size = SIZE_AVB_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_AVB_PARAMS_COUNT,
+	},
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.packing = sja1105pqrs_general_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
+		.packed_entry_size = SIZE_GENERAL_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.packing = sja1105_retagging_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
+		.packed_entry_size = SIZE_RETAGGING_ENTRY,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+	},
+	[BLK_IDX_XMII_PARAMS] = {
+		.packing = sja1105_xmii_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
+		.packed_entry_size = SIZE_XMII_PARAMS_ENTRY,
+		.max_entry_count = MAX_XMII_PARAMS_COUNT,
+	},
+	[BLK_IDX_SGMII] = { 0 },
+};
+
+/* SJA1105Q: Second generation, TTEthernet, no SGMII */
+static struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = {
+		.packing = sja1105_schedule_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {
+		.packing = sja1105_schedule_entry_points_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY_POINTS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_ENTRY_POINTS_COUNT,
+	},
+	[BLK_IDX_VL_LOOKUP] = {
+		.packing = sja1105_vl_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_lookup_entry),
+		.packed_entry_size = SIZE_VL_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VL_LOOKUP_COUNT,
+	},
+	[BLK_IDX_VL_POLICING] = {
+		.packing = sja1105_vl_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_policing_entry),
+		.packed_entry_size = SIZE_VL_POLICING_ENTRY,
+		.max_entry_count = MAX_VL_POLICING_COUNT,
+	},
+	[BLK_IDX_VL_FORWARDING] = {
+		.packing = sja1105_vl_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_entry),
+		.packed_entry_size = SIZE_VL_FORWARDING_ENTRY,
+		.max_entry_count = MAX_VL_FORWARDING_COUNT,
+	},
+	[BLK_IDX_L2_LOOKUP] = {
+		.packing = sja1105pqrs_l2_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_POLICING] = {
+		.packing = sja1105_l2_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
+		.packed_entry_size = SIZE_L2_POLICING_ENTRY,
+		.max_entry_count = MAX_L2_POLICING_COUNT,
+	},
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.packing = sja1105_vlan_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
+		.packed_entry_size = SIZE_VLAN_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.packing = sja1105_l2_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.packing = sja1105pqrs_mac_config_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
+		.packed_entry_size = SIZE_MAC_CONFIG_ENTRY_PQRS,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = {
+		.packing = sja1105_schedule_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_params_entry),
+		.packed_entry_size = SIZE_SCHEDULE_PARAMS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_PARAMS_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {
+		.packing = sja1105_schedule_entry_points_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_params_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT,
+	},
+	[BLK_IDX_VL_FORWARDING_PARAMS] = {
+		.packing = sja1105_vl_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_params_entry),
+		.packed_entry_size = SIZE_VL_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_VL_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.packing = sja1105pqrs_l2_lookup_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = {
+		.packing = sja1105_l2_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = {
+		.packing = sja1105pqrs_avb_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+		.packed_entry_size = SIZE_AVB_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_AVB_PARAMS_COUNT,
+	},
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.packing = sja1105pqrs_general_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
+		.packed_entry_size = SIZE_GENERAL_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.packing = sja1105_retagging_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
+		.packed_entry_size = SIZE_RETAGGING_ENTRY,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+	},
+	[BLK_IDX_XMII_PARAMS] = {
+		.packing = sja1105_xmii_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
+		.packed_entry_size = SIZE_XMII_PARAMS_ENTRY,
+		.max_entry_count = MAX_XMII_PARAMS_COUNT,
+	},
+	[BLK_IDX_SGMII] = { 0 },
+};
+
+/* SJA1105R: Second generation, no TTEthernet, SGMII */
+static struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = { 0 },
+	[BLK_IDX_VL_LOOKUP] = { 0 },
+	[BLK_IDX_VL_POLICING] = { 0 },
+	[BLK_IDX_VL_FORWARDING] = { 0 },
+	[BLK_IDX_L2_LOOKUP] = {
+		.packing = sja1105pqrs_l2_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_POLICING] = {
+		.packing = sja1105_l2_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
+		.packed_entry_size = SIZE_L2_POLICING_ENTRY,
+		.max_entry_count = MAX_L2_POLICING_COUNT,
+	},
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.packing = sja1105_vlan_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
+		.packed_entry_size = SIZE_VLAN_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.packing = sja1105_l2_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.packing = sja1105pqrs_mac_config_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
+		.packed_entry_size = SIZE_MAC_CONFIG_ENTRY_PQRS,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = { 0 },
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = { 0 },
+	[BLK_IDX_VL_FORWARDING_PARAMS] = { 0 },
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.packing = sja1105pqrs_l2_lookup_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = {
+		.packing = sja1105_l2_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = {
+		.packing = sja1105pqrs_avb_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+		.packed_entry_size = SIZE_AVB_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_AVB_PARAMS_COUNT,
+	},
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.packing = sja1105pqrs_general_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
+		.packed_entry_size = SIZE_GENERAL_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.packing = sja1105_retagging_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
+		.packed_entry_size = SIZE_RETAGGING_ENTRY,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+	},
+	[BLK_IDX_XMII_PARAMS] = {
+		.packing = sja1105_xmii_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
+		.packed_entry_size = SIZE_XMII_PARAMS_ENTRY,
+		.max_entry_count = MAX_XMII_PARAMS_COUNT,
+	},
+	[BLK_IDX_SGMII] = {
+		.packing = sja1105_sgmii_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_sgmii_entry),
+		.packed_entry_size = SIZE_SGMII_ENTRY,
+		.max_entry_count = MAX_SGMII_COUNT,
+	},
+};
+
+/* SJA1105S: Second generation, TTEthernet, SGMII */
+static struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
+	[BLK_IDX_SCHEDULE] = {
+		.packing = sja1105_schedule_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {
+		.packing = sja1105_schedule_entry_points_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY_POINTS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_ENTRY_POINTS_COUNT,
+	},
+	[BLK_IDX_VL_LOOKUP] = {
+		.packing = sja1105_vl_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_lookup_entry),
+		.packed_entry_size = SIZE_VL_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VL_LOOKUP_COUNT,
+	},
+	[BLK_IDX_VL_POLICING] = {
+		.packing = sja1105_vl_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_policing_entry),
+		.packed_entry_size = SIZE_VL_POLICING_ENTRY,
+		.max_entry_count = MAX_VL_POLICING_COUNT,
+	},
+	[BLK_IDX_VL_FORWARDING] = {
+		.packing = sja1105_vl_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_entry),
+		.packed_entry_size = SIZE_VL_FORWARDING_ENTRY,
+		.max_entry_count = MAX_VL_FORWARDING_COUNT,
+	},
+	[BLK_IDX_L2_LOOKUP] = {
+		.packing = sja1105pqrs_l2_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_POLICING] = {
+		.packing = sja1105_l2_policing_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_policing_entry),
+		.packed_entry_size = SIZE_L2_POLICING_ENTRY,
+		.max_entry_count = MAX_L2_POLICING_COUNT,
+	},
+	[BLK_IDX_VLAN_LOOKUP] = {
+		.packing = sja1105_vlan_lookup_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vlan_lookup_entry),
+		.packed_entry_size = SIZE_VLAN_LOOKUP_ENTRY,
+		.max_entry_count = MAX_VLAN_LOOKUP_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING] = {
+		.packing = sja1105_l2_forwarding_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_COUNT,
+	},
+	[BLK_IDX_MAC_CONFIG] = {
+		.packing = sja1105pqrs_mac_config_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_mac_config_entry),
+		.packed_entry_size = SIZE_MAC_CONFIG_ENTRY_PQRS,
+		.max_entry_count = MAX_MAC_CONFIG_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_PARAMS] = {
+		.packing = sja1105_schedule_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_params_entry),
+		.packed_entry_size = SIZE_SCHEDULE_PARAMS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_PARAMS_COUNT,
+	},
+	[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {
+		.packing = sja1105_schedule_entry_points_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_schedule_entry_points_params_entry),
+		.packed_entry_size = SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY,
+		.max_entry_count = MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT,
+	},
+	[BLK_IDX_VL_FORWARDING_PARAMS] = {
+		.packing = sja1105_vl_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_params_entry),
+		.packed_entry_size = SIZE_VL_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_VL_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_LOOKUP_PARAMS] = {
+		.packing = sja1105pqrs_l2_lookup_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
+		.packed_entry_size = SIZE_L2_LOOKUP_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_L2_LOOKUP_PARAMS_COUNT,
+	},
+	[BLK_IDX_L2_FORWARDING_PARAMS] = {
+		.packing = sja1105_l2_forwarding_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_l2_forwarding_params_entry),
+		.packed_entry_size = SIZE_L2_FORWARDING_PARAMS_ENTRY,
+		.max_entry_count = MAX_L2_FORWARDING_PARAMS_COUNT,
+	},
+	[BLK_IDX_CLK_SYNC_PARAMS] = { 0 },
+	[BLK_IDX_AVB_PARAMS] = {
+		.packing = sja1105pqrs_avb_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+		.packed_entry_size = SIZE_AVB_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_AVB_PARAMS_COUNT,
+	},
+	[BLK_IDX_GENERAL_PARAMS] = {
+		.packing = sja1105pqrs_general_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
+		.packed_entry_size = SIZE_GENERAL_PARAMS_ENTRY_PQRS,
+		.max_entry_count = MAX_GENERAL_PARAMS_COUNT,
+	},
+	[BLK_IDX_RETAGGING] = {
+		.packing = sja1105_retagging_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
+		.packed_entry_size = SIZE_RETAGGING_ENTRY,
+		.max_entry_count = MAX_RETAGGING_COUNT,
+	},
+	[BLK_IDX_XMII_PARAMS] = {
+		.packing = sja1105_xmii_params_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
+		.packed_entry_size = SIZE_XMII_PARAMS_ENTRY,
+		.max_entry_count = MAX_XMII_PARAMS_COUNT,
+	},
+	[BLK_IDX_SGMII] = {
+		.packing = sja1105_sgmii_entry_packing,
+		.unpacked_entry_size = sizeof(struct sja1105_sgmii_entry),
+		.packed_entry_size = SIZE_SGMII_ENTRY,
+		.max_entry_count = MAX_SGMII_COUNT,
+	},
+};
+
+int sja1105_static_config_init(struct sja1105_static_config *config,
+			       u64 device_id, u64 part_nr)
+{
+	const struct sja1105_table_ops *ops;
+	enum sja1105_blk_idx i;
+
+	memset(config, 0, sizeof(*config));
+
+	if (device_id == SJA1105E_DEVICE_ID)
+		ops = sja1105e_table_ops;
+	else if (device_id == SJA1105T_DEVICE_ID)
+		ops = sja1105t_table_ops;
+	else if (IS_P(device_id, part_nr))
+		ops = sja1105p_table_ops;
+	else if (IS_Q(device_id, part_nr))
+		ops = sja1105q_table_ops;
+	else if (IS_R(device_id, part_nr))
+		ops = sja1105r_table_ops;
+	else if (IS_S(device_id, part_nr))
+		ops = sja1105s_table_ops;
+	else
+		return -EINVAL;
+
+	for (i = 0; i < BLK_IDX_MAX; i++)
+		config->tables[i].ops = &ops[i];
+
+	config->device_id = device_id;
+	return 0;
+}
+
+void sja1105_static_config_free(struct sja1105_static_config *config)
+{
+	enum sja1105_blk_idx i;
+
+	for (i = 0; i < BLK_IDX_MAX; i++) {
+		if (config->tables[i].entry_count) {
+			kfree(config->tables[i].entries);
+			config->tables[i].entry_count = 0;
+		}
+	}
+}
+
+int sja1105_table_delete_entry(struct sja1105_table *table, int i)
+{
+	size_t entry_size = table->ops->unpacked_entry_size;
+	u8 *entries = table->entries;
+
+	if (i >= table->entry_count)
+		return -ERANGE;
+
+	memmove(entries + i * entry_size, entries + (i + 1) * entry_size,
+		(table->entry_count - i - 1) * entry_size);
+
+	table->entry_count--;
+
+	return 0;
+}
+
+/* No pointers to table->entries should be kept when this is called. */
+int sja1105_table_resize(struct sja1105_table *table, size_t new_count)
+{
+	size_t entry_size = table->ops->unpacked_entry_size;
+	void *new_entries, *old_entries = table->entries;
+
+	if (new_count > table->ops->max_entry_count)
+		return -ERANGE;
+
+	new_entries = kcalloc(new_count, entry_size, GFP_KERNEL);
+	if (!new_entries)
+		return -ENOMEM;
+
+	memcpy(new_entries, old_entries, min(new_count, table->entry_count) *
+		entry_size);
+
+	table->entries = new_entries;
+	table->entry_count = new_count;
+	kfree(old_entries);
+	return 0;
+}
+
diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.h b/drivers/net/dsa/sja1105/sja1105_static_config.h
new file mode 100644
index 000000000000..c25e6efe1c77
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_static_config.h
@@ -0,0 +1,500 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2016-2018, NXP Semiconductors
+ * Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#ifndef _SJA1105_STATIC_CONFIG_H
+#define _SJA1105_STATIC_CONFIG_H
+
+#include <linux/packing.h>
+
+#define SIZE_SJA1105_DEVICE_ID                  4
+#define SIZE_TABLE_HEADER                       12
+#define SIZE_SCHEDULE_ENTRY                     8
+#define SIZE_SCHEDULE_ENTRY_POINTS_ENTRY        4
+#define SIZE_VL_LOOKUP_ENTRY                    12
+#define SIZE_VL_POLICING_ENTRY                  8
+#define SIZE_VL_FORWARDING_ENTRY                4
+#define SIZE_L2_LOOKUP_ENTRY_ET                 12
+#define SIZE_L2_LOOKUP_ENTRY_PQRS               20
+#define SIZE_L2_POLICING_ENTRY                  8
+#define SIZE_VLAN_LOOKUP_ENTRY                  8
+#define SIZE_L2_FORWARDING_ENTRY                8
+#define SIZE_MAC_CONFIG_ENTRY_ET                28
+#define SIZE_MAC_CONFIG_ENTRY_PQRS              32
+#define SIZE_SCHEDULE_PARAMS_ENTRY              12
+#define SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY 4
+#define SIZE_VL_FORWARDING_PARAMS_ENTRY         12
+#define SIZE_L2_LOOKUP_PARAMS_ENTRY_ET          4
+#define SIZE_L2_LOOKUP_PARAMS_ENTRY_PQRS        16
+#define SIZE_L2_FORWARDING_PARAMS_ENTRY         12
+#define SIZE_CLK_SYNC_PARAMS_ENTRY              52
+#define SIZE_AVB_PARAMS_ENTRY_ET                12
+#define SIZE_AVB_PARAMS_ENTRY_PQRS              16
+#define SIZE_GENERAL_PARAMS_ENTRY_ET            40
+#define SIZE_GENERAL_PARAMS_ENTRY_PQRS          44
+#define SIZE_RETAGGING_ENTRY                    8
+#define SIZE_XMII_PARAMS_ENTRY                  4
+#define SIZE_SGMII_ENTRY                        144
+
+/* UM10944.pdf Page 11, Table 2. Configuration Blocks */
+#define BLKID_SCHEDULE                     0x00
+#define BLKID_SCHEDULE_ENTRY_POINTS        0x01
+#define BLKID_VL_LOOKUP                    0x02
+#define BLKID_VL_POLICING                  0x03
+#define BLKID_VL_FORWARDING                0x04
+#define BLKID_L2_LOOKUP                    0x05
+#define BLKID_L2_POLICING                  0x06
+#define BLKID_VLAN_LOOKUP                  0x07
+#define BLKID_L2_FORWARDING                0x08
+#define BLKID_MAC_CONFIG                   0x09
+#define BLKID_SCHEDULE_PARAMS              0x0A
+#define BLKID_SCHEDULE_ENTRY_POINTS_PARAMS 0x0B
+#define BLKID_VL_FORWARDING_PARAMS         0x0C
+#define BLKID_L2_LOOKUP_PARAMS             0x0D
+#define BLKID_L2_FORWARDING_PARAMS         0x0E
+#define BLKID_CLK_SYNC_PARAMS              0x0F
+#define BLKID_AVB_PARAMS                   0x10
+#define BLKID_GENERAL_PARAMS               0x11
+#define BLKID_RETAGGING                    0x12
+#define BLKID_XMII_PARAMS                  0x4E
+#define BLKID_SGMII                        0xC8
+#define BLKID_MAX                          BLKID_SGMII
+
+enum sja1105_blk_idx {
+	BLK_IDX_SCHEDULE = 0,
+	BLK_IDX_SCHEDULE_ENTRY_POINTS,
+	BLK_IDX_VL_LOOKUP,
+	BLK_IDX_VL_POLICING,
+	BLK_IDX_VL_FORWARDING,
+	BLK_IDX_L2_LOOKUP,
+	BLK_IDX_L2_POLICING,
+	BLK_IDX_VLAN_LOOKUP,
+	BLK_IDX_L2_FORWARDING,
+	BLK_IDX_MAC_CONFIG,
+	BLK_IDX_SCHEDULE_PARAMS,
+	BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS,
+	BLK_IDX_VL_FORWARDING_PARAMS,
+	BLK_IDX_L2_LOOKUP_PARAMS,
+	BLK_IDX_L2_FORWARDING_PARAMS,
+	BLK_IDX_CLK_SYNC_PARAMS,
+	BLK_IDX_AVB_PARAMS,
+	BLK_IDX_GENERAL_PARAMS,
+	BLK_IDX_RETAGGING,
+	BLK_IDX_XMII_PARAMS,
+	BLK_IDX_SGMII,
+	BLK_IDX_MAX,
+	/* Fake block indices that are only valid for dynamic access */
+	BLK_IDX_MGMT_ROUTE,
+	BLK_IDX_MAX_DYN,
+	BLK_IDX_INVAL = -1,
+};
+
+#define MAX_SCHEDULE_COUNT                       1024
+#define MAX_SCHEDULE_ENTRY_POINTS_COUNT          2048
+#define MAX_VL_LOOKUP_COUNT                      1024
+#define MAX_VL_POLICING_COUNT                    1024
+#define MAX_VL_FORWARDING_COUNT                  1024
+#define MAX_L2_LOOKUP_COUNT                      1024
+#define MAX_L2_POLICING_COUNT                    45
+#define MAX_VLAN_LOOKUP_COUNT                    4096
+#define MAX_L2_FORWARDING_COUNT                  13
+#define MAX_MAC_CONFIG_COUNT                     5
+#define MAX_SCHEDULE_PARAMS_COUNT                1
+#define MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT   1
+#define MAX_VL_FORWARDING_PARAMS_COUNT           1
+#define MAX_L2_LOOKUP_PARAMS_COUNT               1
+#define MAX_L2_FORWARDING_PARAMS_COUNT           1
+#define MAX_GENERAL_PARAMS_COUNT                 1
+#define MAX_RETAGGING_COUNT                      32
+#define MAX_XMII_PARAMS_COUNT                    1
+#define MAX_SGMII_COUNT                          1
+#define MAX_AVB_PARAMS_COUNT                     1
+#define MAX_CLK_SYNC_COUNT                       1
+
+#define MAX_FRAME_MEMORY                         929
+#define MAX_FRAME_MEMORY_RETAGGING               910
+
+#define SJA1105E_DEVICE_ID         0x9C00000Cull
+#define SJA1105T_DEVICE_ID         0x9E00030Eull
+#define SJA1105PR_DEVICE_ID        0xAF00030Eull
+#define SJA1105QS_DEVICE_ID        0xAE00030Eull
+#define SJA1105_NO_DEVICE_ID       0x00000000ull
+
+#define SJA1105P_PART_NR           0x9A84
+#define SJA1105Q_PART_NR           0x9A85
+#define SJA1105R_PART_NR           0x9A86
+#define SJA1105S_PART_NR           0x9A87
+#define SJA1105_PART_NR_DONT_CARE  0xFFFF
+
+#define IS_PQRS(device_id) \
+	(((device_id) == SJA1105PR_DEVICE_ID) || \
+	 ((device_id) == SJA1105QS_DEVICE_ID))
+#define IS_ET(device_id) \
+	(((device_id) == SJA1105E_DEVICE_ID) || \
+	 ((device_id) == SJA1105T_DEVICE_ID))
+/* P and R have the same Device ID and differ by Part Number */
+#define IS_P(device_id, part_nr) \
+	(((device_id) == SJA1105PR_DEVICE_ID) && \
+	 ((part_nr) == SJA1105P_PART_NR))
+#define IS_R(device_id, part_nr) \
+	(((device_id) == SJA1105PR_DEVICE_ID) && \
+	 ((part_nr) == SJA1105R_PART_NR))
+/* Same for Q and S */
+#define IS_Q(device_id, part_nr) \
+	(((device_id) == SJA1105QS_DEVICE_ID) && \
+	 ((part_nr) == SJA1105Q_PART_NR))
+#define IS_S(device_id, part_nr) \
+	(((device_id) == SJA1105QS_DEVICE_ID) && \
+	 ((part_nr) == SJA1105S_PART_NR))
+#define DEVICE_ID_VALID(device_id) \
+	(IS_ET(device_id) || IS_PQRS(device_id))
+#define SUPPORTS_TTETHERNET(device_id) \
+	(((device_id) == SJA1105T_DEVICE_ID) || \
+	 ((device_id) == SJA1105QS_DEVICE_ID))
+
+#ifdef __KERNEL__
+#include <asm/types.h>
+#include <linux/types.h>
+#endif
+
+struct sja1105_schedule_entry {
+	u64 winstindex;
+	u64 winend;
+	u64 winst;
+	u64 destports;
+	u64 setvalid;
+	u64 txen;
+	u64 resmedia_en;
+	u64 resmedia;
+	u64 vlindex;
+	u64 delta;
+};
+
+struct sja1105_schedule_params_entry {
+	u64 subscheind[8];
+};
+
+struct sja1105_general_params_entry {
+	u64 vllupformat;
+	u64 mirr_ptacu;
+	u64 switchid;
+	u64 hostprio;
+	u64 mac_fltres1;
+	u64 mac_fltres0;
+	u64 mac_flt1;
+	u64 mac_flt0;
+	u64 incl_srcpt1;
+	u64 incl_srcpt0;
+	u64 send_meta1;
+	u64 send_meta0;
+	u64 casc_port;
+	u64 host_port;
+	u64 mirr_port;
+	u64 vlmarker;
+	u64 vlmask;
+	u64 tpid;
+	u64 ignore2stf;
+	u64 tpid2;
+	/* P/Q/R/S only */
+	u64 queue_ts;
+	u64 egrmirrvid;
+	u64 egrmirrpcp;
+	u64 egrmirrdei;
+	u64 replay_port;
+};
+
+struct sja1105_schedule_entry_points_entry {
+	u64 subschindx;
+	u64 delta;
+	u64 address;
+};
+
+struct sja1105_schedule_entry_points_params_entry {
+	u64 clksrc;
+	u64 actsubsch;
+};
+
+struct sja1105_vlan_lookup_entry {
+	u64 ving_mirr;
+	u64 vegr_mirr;
+	u64 vmemb_port;
+	u64 vlan_bc;
+	u64 tag_port;
+	u64 vlanid;
+};
+
+struct sja1105_l2_lookup_entry {
+	u64 mirrvlan;      /* P/Q/R/S only - LOCKEDS=1 */
+	u64 mirr;          /* P/Q/R/S only - LOCKEDS=1 */
+	u64 retag;         /* P/Q/R/S only - LOCKEDS=1 */
+	u64 mask_iotag;    /* P/Q/R/S only */
+	u64 mask_vlanid;   /* P/Q/R/S only */
+	u64 mask_macaddr;  /* P/Q/R/S only */
+	u64 iotag;         /* P/Q/R/S only */
+	u64 vlanid;
+	u64 macaddr;
+	u64 destports;
+	u64 enfport;
+	u64 index;
+};
+
+struct sja1105_l2_lookup_params_entry {
+	u64 drpbc;           /* P/Q/R/S only */
+	u64 drpmc;           /* P/Q/R/S only */
+	u64 drpuni;          /* P/Q/R/S only */
+	u64 maxaddrp[5];     /* P/Q/R/S only */
+	u64 start_dynspc;    /* P/Q/R/S only */
+	u64 drpnolearn;      /* P/Q/R/S only */
+	u64 use_static;      /* P/Q/R/S only */
+	u64 owr_dyn;         /* P/Q/R/S only */
+	u64 learn_once;      /* P/Q/R/S only */
+	u64 maxage;          /* Shared */
+	u64 dyn_tbsz;        /* E/T only */
+	u64 poly;            /* E/T only */
+	u64 shared_learn;    /* Shared */
+	u64 no_enf_hostprt;  /* Shared */
+	u64 no_mgmt_learn;   /* Shared */
+};
+
+struct sja1105_l2_forwarding_entry {
+	u64 bc_domain;
+	u64 reach_port;
+	u64 fl_domain;
+	u64 vlan_pmap[8];
+};
+
+struct sja1105_l2_forwarding_params_entry {
+	u64 max_dynp;
+	u64 part_spc[8];
+};
+
+struct sja1105_l2_policing_entry {
+	u64 sharindx;
+	u64 smax;
+	u64 rate;
+	u64 maxlen;
+	u64 partition;
+};
+
+struct sja1105_mac_config_entry {
+	u64 top[8];
+	u64 base[8];
+	u64 enabled[8];
+	u64 ifg;
+	u64 speed;
+	u64 tp_delin;
+	u64 tp_delout;
+	u64 maxage;
+	u64 vlanprio;
+	u64 vlanid;
+	u64 ing_mirr;
+	u64 egr_mirr;
+	u64 drpnona664;
+	u64 drpdtag;
+	u64 drpsotag;   /* only on P/Q/R/S */
+	u64 drpsitag;   /* only on P/Q/R/S */
+	u64 drpuntag;
+	u64 retag;
+	u64 dyn_learn;
+	u64 egress;
+	u64 ingress;
+	u64 mirrcie;    /* only on P/Q/R/S */
+	u64 mirrcetag;  /* only on P/Q/R/S */
+	u64 ingmirrvid; /* only on P/Q/R/S */
+	u64 ingmirrpcp; /* only on P/Q/R/S */
+	u64 ingmirrdei; /* only on P/Q/R/S */
+};
+
+struct sja1105_xmii_params_entry {
+	u64 phy_mac[5];
+	u64 xmii_mode[5];
+};
+
+struct sja1105_avb_params_entry {
+	u64 l2cbs; /* only on P/Q/R/S */
+	u64 cas_master; /* only on P/Q/R/S */
+	u64 destmeta;
+	u64 srcmeta;
+};
+
+struct sja1105_sgmii_entry {
+	u64 digital_error_cnt;
+	u64 digital_control_2;
+	u64 debug_control;
+	u64 test_control;
+	u64 autoneg_control;
+	u64 digital_control_1;
+	u64 autoneg_adv;
+	u64 basic_control;
+};
+
+struct sja1105_vl_lookup_entry {
+	u64 format;
+	u64 port;
+	union {
+		/* format == 0 */
+		struct {
+			u64 destports;
+			u64 iscritical;
+			u64 macaddr;
+			u64 vlanid;
+			u64 vlanprior;
+		};
+		/* format == 1 */
+		struct {
+			u64 egrmirr;
+			u64 ingrmirr;
+			u64 vlid;
+		};
+	};
+};
+
+struct sja1105_vl_policing_entry {
+	u64 type;
+	u64 maxlen;
+	u64 sharindx;
+	u64 bag;
+	u64 jitter;
+};
+
+struct sja1105_vl_forwarding_entry {
+	u64 type;
+	u64 priority;
+	u64 partition;
+	u64 destports;
+};
+
+struct sja1105_vl_forwarding_params_entry {
+	u64 partspc[8];
+	u64 debugen;
+};
+
+struct sja1105_clk_sync_params_entry {
+	u64 etssrcpcf;
+	u64 waitthsync;
+	u64 wfintmout;
+	u64 unsytotsyth;
+	u64 unsytosyth;
+	u64 tsytosyth;
+	u64 tsyth;
+	u64 tsytousyth;
+	u64 syth;
+	u64 sytousyth;
+	u64 sypriority;
+	u64 sydomain;
+	u64 stth;
+	u64 sttointth;
+	u64 pcfsze;
+	u64 pcfpriority;
+	u64 obvwinsz;
+	u64 numunstbcy;
+	u64 numstbcy;
+	u64 maxtranspclk;
+	u64 maxintegcy;
+	u64 listentmout;
+	u64 intcydur;
+	u64 inttotentth;
+	u64 vlidout;
+	u64 vlidimnmin;
+	u64 vlidinmax;
+	u64 caentmout;
+	u64 accdevwin;
+	u64 vlidselect;
+	u64 tentsyrelen;
+	u64 asytensyen;
+	u64 sytostben;
+	u64 syrelen;
+	u64 sysyen;
+	u64 syasyen;
+	u64 ipcframesy;
+	u64 stabasyen;
+	u64 swmaster;
+	u64 fullcbg;
+	u64 srcport[8];
+};
+
+struct sja1105_retagging_entry {
+	u64 egr_port;
+	u64 ing_port;
+	u64 vlan_ing;
+	u64 vlan_egr;
+	u64 do_not_learn;
+	u64 use_dest_ports;
+	u64 destports;
+};
+
+struct sja1105_table_header {
+	u64 block_id;
+	u64 len;
+	u64 crc;
+};
+
+struct sja1105_table_ops {
+	size_t (*packing)(void *buf, void *entry_ptr, enum packing_op op);
+	size_t unpacked_entry_size;
+	size_t packed_entry_size;
+	size_t max_entry_count;
+};
+
+struct sja1105_table {
+	const struct sja1105_table_ops *ops;
+	size_t entry_count;
+	void *entries;
+};
+
+struct sja1105_static_config {
+	u64 device_id;
+	struct sja1105_table tables[BLK_IDX_MAX];
+};
+
+size_t sja1105_table_header_packing(void *buf, void *hdr, enum packing_op op);
+void
+sja1105_table_header_pack_with_crc(void *buf, struct sja1105_table_header *hdr);
+size_t
+sja1105_static_config_get_length(const struct sja1105_static_config *config);
+
+enum sja1105_static_config_validity {
+	SJA1105_CONFIG_OK = 0,
+	SJA1105_DEVICE_ID_INVALID,
+	SJA1105_TTETHERNET_NOT_SUPPORTED,
+	SJA1105_INCORRECT_TTETHERNET_CONFIGURATION,
+	SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION,
+	SJA1105_MISSING_L2_POLICING_TABLE,
+	SJA1105_MISSING_L2_FORWARDING_TABLE,
+	SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE,
+	SJA1105_MISSING_GENERAL_PARAMS_TABLE,
+	SJA1105_MISSING_VLAN_TABLE,
+	SJA1105_MISSING_XMII_TABLE,
+	SJA1105_MISSING_MAC_TABLE,
+	SJA1105_OVERCOMMITTED_FRAME_MEMORY,
+	SJA1105_UNEXPECTED_END_OF_BUFFER,
+	SJA1105_INVALID_DEVICE_ID,
+	SJA1105_INVALID_TABLE_HEADER_CRC,
+	SJA1105_INVALID_TABLE_HEADER,
+	SJA1105_INCORRECT_TABLE_LENGTH,
+	SJA1105_DATA_CRC_INVALID,
+	SJA1105_EXTRA_BYTES_AT_END_OF_BUFFER,
+};
+
+extern const char *sja1105_static_config_error_msg[];
+
+enum sja1105_static_config_validity
+sja1105_static_config_check_valid(const struct sja1105_static_config *config);
+void
+sja1105_static_config_pack(void *buf, struct sja1105_static_config *config);
+int sja1105_static_config_init(struct sja1105_static_config *config,
+			       u64 device_id, u64 part_nr);
+void sja1105_static_config_free(struct sja1105_static_config *config);
+
+int sja1105_table_delete_entry(struct sja1105_table *table, int i);
+int sja1105_table_resize(struct sja1105_table *table, size_t new_count);
+
+u32 sja1105_crc32(const void *buf, size_t len);
+
+void sja1105_pack(void *buf, const u64 *val, int start, int end, size_t len);
+void sja1105_unpack(const void *buf, u64 *val, int start, int end, size_t len);
+void sja1105_packing(void *buf, u64 *val, int start, int end,
+		     size_t len, enum packing_op op);
+
+#endif
+
-- 
2.17.1
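As an aside, the sja1105_pack()/sja1105_unpack() helpers declared in the header above address fields by a (start, end) pair of most-significant and least-significant bit offsets within the packed buffer. A rough userspace model for the 4-byte case (illustrative only: big-endian word layout is assumed here, and the real kernel helper additionally handles arbitrary buffer lengths and device quirks):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of a (start, end) bit-field extraction over a
 * 4-byte word, where bit 31 is the MSB of a big-endian 32-bit word.
 * This is NOT the kernel implementation, just a sketch of the bit
 * addressing convention used by sja1105_unpack().
 */
static uint64_t demo_unpack32(const uint8_t *buf, int start, int end)
{
	uint32_t word = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
			((uint32_t)buf[2] << 8) | buf[3];
	int width = start - end + 1;
	uint32_t mask = (width == 32) ? 0xFFFFFFFFu : ((1u << width) - 1);

	/* Shift the field down to bit 0, then mask off higher bits */
	return (word >> end) & mask;
}
```

For example, `demo_unpack32(buf, 31, 24)` extracts the top byte of the word and `demo_unpack32(buf, 7, 0)` the bottom byte.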



* [RFC PATCH net-next 07/13] net: dsa: sja1105: Add support for FDB and MDB management
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (5 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:37   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 08/13] net: dsa: sja1105: Add support for VLAN operations Vladimir Oltean
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

Currently only the (more difficult) first-generation E/T series is
supported. Here the TCAM is only 4-way associative, so to know where
the hardware will search for an FDB entry, we need to apply the same
hash function it does and install the entry into the correct bin.
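The bin/way addressing described above can be sketched with a small userspace model. This is illustrative only: the real driver derives the bin with a configurable-polynomial CRC (sja1105_fdb_hash()), while the toy hash, table dimensions and helper names here are assumptions made for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a 4-way set-associative FDB like the SJA1105 E/T one.
 * The hash below is a stand-in for the driver's CRC-based
 * sja1105_fdb_hash(); bin count and table size are illustrative.
 */
#define FDB_BIN_SIZE	4	/* ways (entries) per bin */
#define FDB_NUM_BINS	256	/* 1024 entries total */

struct fdb_entry {
	uint64_t macaddr;
	uint16_t vid;
	int valid;
};

static struct fdb_entry fdb[FDB_NUM_BINS * FDB_BIN_SIZE];

static int fdb_hash(uint64_t macaddr, uint16_t vid)
{
	return (int)((macaddr ^ vid) % FDB_NUM_BINS);
}

/* Install an entry in the bin selected by the hash; returns the global
 * table index ("bin * FDB_BIN_SIZE + way"), or -1 if the bin is full
 * and the caller must evict a victim from that same bin.
 */
static int fdb_add(uint64_t macaddr, uint16_t vid)
{
	int bin = fdb_hash(macaddr, vid);
	int way;

	for (way = 0; way < FDB_BIN_SIZE; way++) {
		struct fdb_entry *e = &fdb[bin * FDB_BIN_SIZE + way];

		if (!e->valid) {
			e->macaddr = macaddr;
			e->vid = vid;
			e->valid = 1;
			return bin * FDB_BIN_SIZE + way;
		}
	}
	return -1;
}
```

Five addresses hashing to the same bin demonstrate the limitation the patch works around: the fifth insertion fails even though the rest of the table is empty, which is why the driver must evict a victim from the colliding bin.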

On P/Q/R/S, the TCAM should be fully associative. However, the SPI
command interface is different, and because I don't have access to a
new-generation device at the moment, support for it remains a TODO.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 drivers/net/dsa/sja1105/sja1105_main.c | 193 +++++++++++++++++++++++++
 1 file changed, 193 insertions(+)

diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 78bdb577c16b..afcca9926497 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -190,6 +190,9 @@ static int sja1105_init_static_fdb(struct sja1105_private *priv)
 
 	table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP];
 
+	/* We only populate the FDB table through dynamic
+	 * L2 Address Lookup entries
+	 */
 	if (table->entry_count) {
 		kfree(table->entries);
 		table->entry_count = 0;
@@ -703,6 +706,190 @@ static void sja1105_adjust_link(struct dsa_switch *ds, int port,
 		sja1105_adjust_port_config(priv, port, phydev->speed, true);
 }
 
+#define fdb(bin, index) \
+	((bin) * SJA1105ET_FDB_BIN_SIZE + (index))
+#define is_bin_index_valid(i) \
+	((i) >= 0 && (i) < SJA1105ET_FDB_BIN_SIZE)
+
+static int
+sja1105_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
+			    const u8 *addr, u16 vid,
+			    struct sja1105_l2_lookup_entry *fdb_match,
+			    int *last_unused)
+{
+	int index_in_bin;
+
+	for (index_in_bin = 0; index_in_bin < SJA1105ET_FDB_BIN_SIZE;
+	     index_in_bin++) {
+		struct sja1105_l2_lookup_entry l2_lookup = { 0 };
+
+		/* Skip unused entries, optionally marking them
+		 * into the return value
+		 */
+		if (sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+						fdb(bin, index_in_bin),
+						&l2_lookup)) {
+			if (last_unused)
+				*last_unused = index_in_bin;
+			continue;
+		}
+
+		if (l2_lookup.macaddr == ether_addr_to_u64(addr) &&
+		    l2_lookup.vlanid == vid) {
+			if (fdb_match)
+				*fdb_match = l2_lookup;
+			return index_in_bin;
+		}
+	}
+	/* Return an invalid entry index if not found */
+	return SJA1105ET_FDB_BIN_SIZE;
+}
+
+static int sja1105_fdb_add(struct dsa_switch *ds, int port,
+			   const unsigned char *addr, u16 vid)
+{
+	struct sja1105_l2_lookup_entry l2_lookup = { 0 };
+	struct sja1105_private *priv = ds->priv;
+	struct device *dev = ds->dev;
+	int bin, index_in_bin;
+	int last_unused;
+
+	bin = sja1105_fdb_hash(priv, addr, vid);
+
+	index_in_bin = sja1105_is_fdb_entry_in_bin(priv, bin, addr, vid,
+						   &l2_lookup, &last_unused);
+	if (is_bin_index_valid(index_in_bin)) {
+		/* We have an FDB entry. Is our port in the destination
+		 * mask? If yes, we need to do nothing. If not, we need
+		 * to rewrite the entry by adding this port to it.
+		 */
+		if (l2_lookup.destports & BIT(port))
+			return 0;
+		l2_lookup.destports |= BIT(port);
+	} else {
+		/* We don't have an FDB entry. We construct a new one and
+		 * try to find a place for it within the FDB table.
+		 */
+		l2_lookup.macaddr = ether_addr_to_u64(addr);
+		l2_lookup.destports = BIT(port);
+		l2_lookup.vlanid = vid;
+
+		if (is_bin_index_valid(last_unused)) {
+			index_in_bin = last_unused;
+		} else {
+			/* Bin is full, need to evict somebody.
+			 * Choose victim at random. If you get these messages
+			 * often, you may need to consider changing the
+			 * distribution function:
+			 * static_config[BLK_IDX_L2_LOOKUP_PARAMS].entries->poly
+			 */
+			index_in_bin = prandom_u32() %
+				       SJA1105ET_FDB_BIN_SIZE;
+			dev_warn(dev, "FDB bin %d full while adding entry for %pM. Evicting entry %d.\n",
+				 bin, addr, index_in_bin);
+			/* Evict entry */
+			sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+						     fdb(bin, index_in_bin),
+						     NULL, false);
+		}
+	}
+	l2_lookup.index = fdb(bin, index_in_bin);
+
+	return sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+				l2_lookup.index, &l2_lookup, true);
+}
+
+static int sja1105_fdb_del(struct dsa_switch *ds, int port,
+			   const unsigned char *addr, u16 vid)
+{
+	struct sja1105_l2_lookup_entry l2_lookup = { 0 };
+	struct sja1105_private *priv = ds->priv;
+	int bin, index_in_bin;
+	bool keep;
+
+	bin = sja1105_fdb_hash(priv, addr, vid);
+
+	index_in_bin = sja1105_is_fdb_entry_in_bin(priv, bin, addr, vid,
+						   &l2_lookup, NULL);
+	if (!is_bin_index_valid(index_in_bin))
+		return 0;
+
+	/* We have an FDB entry. Is our port in the destination mask? If yes,
+	 * we need to remove it. If the resulting port mask becomes empty, we
+	 * need to completely evict the FDB entry.
+	 * Otherwise we just write it back.
+	 */
+	if (l2_lookup.destports & BIT(port))
+		l2_lookup.destports &= ~BIT(port);
+	keep = !!l2_lookup.destports;
+
+	return sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+					    fdb(bin, index_in_bin),
+					    &l2_lookup, keep);
+}
+
+static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
+			    dsa_fdb_dump_cb_t *cb, void *data)
+{
+	struct sja1105_private *priv = ds->priv;
+	struct device *dev = ds->dev;
+	int i;
+
+	for (i = 0; i < MAX_L2_LOOKUP_COUNT; i++) {
+		struct sja1105_l2_lookup_entry l2_lookup;
+		u8 macaddr[ETH_ALEN];
+		int rc;
+
+		memset(&l2_lookup, 0, sizeof(l2_lookup));
+		rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+						 i, &l2_lookup);
+		/* No fdb entry at i, not an issue */
+		if (rc == -EINVAL)
+			continue;
+		if (rc) {
+			dev_err(dev, "Failed to dump FDB: %d\n", rc);
+			return rc;
+		}
+
+		/* FDB dump callback is per port. This means we have to
+		 * disregard a valid entry if it's not for this port, even if
+		 * only to revisit it later. This is inefficient because the
+		 * 1024-sized FDB table needs to be traversed 4 times through
+		 * SPI during a 'bridge fdb show' command.
+		 */
+		if (!(l2_lookup.destports & BIT(port)))
+			continue;
+		u64_to_ether_addr(l2_lookup.macaddr, macaddr);
+		cb(macaddr, l2_lookup.vlanid, false, data);
+	}
+	return 0;
+}
+
+#undef fdb
+#undef is_bin_index_valid
+
+/* This callback needs to be present */
+static int sja1105_mdb_prepare(struct dsa_switch *ds, int port,
+			       const struct switchdev_obj_port_mdb *mdb)
+{
+	return 0;
+}
+
+static void sja1105_mdb_add(struct dsa_switch *ds, int port,
+			    const struct switchdev_obj_port_mdb *mdb)
+{
+	sja1105_fdb_add(ds, port, mdb->addr, mdb->vid);
+}
+
+static int sja1105_mdb_del(struct dsa_switch *ds, int port,
+			   const struct switchdev_obj_port_mdb *mdb)
+{
+	return sja1105_fdb_del(ds, port, mdb->addr, mdb->vid);
+}
+
 static int sja1105_bridge_member(struct dsa_switch *ds, int port,
 				 struct net_device *br, bool member)
 {
@@ -796,8 +983,14 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
 	.get_tag_protocol	= sja1105_get_tag_protocol,
 	.setup			= sja1105_setup,
 	.adjust_link		= sja1105_adjust_link,
+	.port_fdb_dump		= sja1105_fdb_dump,
+	.port_fdb_add		= sja1105_fdb_add,
+	.port_fdb_del		= sja1105_fdb_del,
 	.port_bridge_join	= sja1105_bridge_join,
 	.port_bridge_leave	= sja1105_bridge_leave,
+	.port_mdb_prepare	= sja1105_mdb_prepare,
+	.port_mdb_add		= sja1105_mdb_add,
+	.port_mdb_del		= sja1105_mdb_del,
 };
 
 static int sja1105_probe(struct spi_device *spi)
-- 
2.17.1



* [RFC PATCH net-next 08/13] net: dsa: sja1105: Add support for VLAN operations
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (6 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 07/13] net: dsa: sja1105: Add support for FDB and MDB management Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:41   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 09/13] net: dsa: sja1105: Add support for ethtool port counters Vladimir Oltean
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

VLAN filtering cannot be properly disabled in SJA1105. So in order to
emulate the "no VLAN awareness" behavior (not dropping traffic that is
tagged with a VID that isn't configured on the port), we need to hack
another switch feature: programmable TPID (which is 0x8100 for 802.1Q).
We reprogram the TPID to a bogus value (ETH_P_EDSA), which makes the
switch treat all traffic as untagged and therefore accept it.
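In other words, the switch decides whether a frame is VLAN-tagged purely by comparing its EtherType against the programmed TPID/TPID2 pair. A minimal sketch of that decision (the classification helper itself is hypothetical; only the TPID values come from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define ETH_P_8021Q	0x8100	/* real 802.1Q C-tag TPID */
#define ETH_P_8021AD	0x88A8	/* 802.1ad S-tag TPID */
#define ETH_P_EDSA	0xDADA	/* bogus TPID used to defeat tagging */

/* Hypothetical model of the ingress tag check: a frame is parsed as
 * VLAN-tagged only if its EtherType matches one of the two TPIDs
 * programmed into the General Parameters table. With a bogus TPID,
 * real 802.1Q frames never match, so everything looks untagged.
 */
static int frame_is_tagged(uint16_t ethertype, uint16_t tpid, uint16_t tpid2)
{
	return ethertype == tpid || ethertype == tpid2;
}
```

With ETH_P_8021Q programmed, 0x8100 frames are recognized and filtered; with ETH_P_EDSA programmed, the same frames look untagged and pass through.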

Under a vlan_filtering bridge, the proper TPID of ETH_P_8021Q is
installed again, and the switch starts identifying 802.1Q-tagged
traffic.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 drivers/net/dsa/sja1105/sja1105_main.c | 275 ++++++++++++++++++++++++-
 1 file changed, 273 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index afcca9926497..a1d7f3b03099 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -263,6 +263,13 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
 	table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
 
 	/* The static VLAN table will only contain the initial pvid of 0.
+	 * All other VLANs are to be configured through dynamic entries,
+	 * and kept in the static configuration table as backing memory.
+	 * The pvid of 0 is sufficient to pass traffic while the ports are
+	 * standalone and when vlan_filtering is disabled. When filtering
+	 * gets enabled, the switchdev core sets up the VLAN ID 1 and sets
+	 * it as the new pvid. Actually 'pvid 1' still comes up in 'bridge
+	 * vlan' even when vlan_filtering is off, but it has no effect.
 	 */
 	if (table->entry_count) {
 		kfree(table->entries);
@@ -403,8 +410,11 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
 		.vlmask = 0,
 		/* Only update correctionField for 1-step PTP (L2 transport) */
 		.ignore2stf = 0,
-		.tpid = ETH_P_8021Q,
-		.tpid2 = ETH_P_8021Q,
+		/* Forcefully disable VLAN filtering by telling
+		 * the switch that VLAN has a different EtherType.
+		 */
+		.tpid = ETH_P_EDSA,
+		.tpid2 = ETH_P_EDSA,
 		/* P/Q/R/S only */
 		.queue_ts = 0,
 		.egrmirrvid = 0,
@@ -934,12 +944,269 @@ static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
 	sja1105_bridge_member(ds, port, br, false);
 }
 
+/* For situations where we need to change a setting at runtime that is only
+ * available through the static configuration, resetting the switch in order
+ * to upload the new static config is unavoidable. Back up the settings we
+ * modify at runtime (currently only MAC) and restore them after uploading,
+ * such that this operation is relatively seamless.
+ */
+static int sja1105_static_config_reload(struct sja1105_private *priv)
+{
+	struct sja1105_mac_config_entry *mac;
+	int speed_mbps[SJA1105_NUM_PORTS];
+	int rc, i;
+
+	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+
+	/* Back up the settings changed by sja1105_adjust_port_config
+	 * and restore their defaults.
+	 */
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		speed_mbps[i] = sja1105_speed[mac[i].speed];
+		mac[i].speed = SJA1105_SPEED_AUTO;
+	}
+
+	/* Reset switch and send updated static configuration */
+	rc = sja1105_static_config_upload(priv);
+	if (rc < 0)
+		goto out;
+
+	/* Configure the CGU (PLLs) for MII and RMII PHYs.
+	 * No dynamic configuration is needed for these interfaces,
+	 * since the PLLs have the same settings at all speeds.
+	 */
+	rc = sja1105_clocking_setup(priv);
+	if (rc < 0)
+		goto out;
+
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		bool enabled = (speed_mbps[i] != SJA1105_SPEED_AUTO);
+
+		rc = sja1105_adjust_port_config(priv, i, speed_mbps[i],
+						enabled);
+		if (rc < 0)
+			goto out;
+	}
+out:
+	return rc;
+}
+
+#define sja1105_vlan_filtering_enabled(priv) \
+	(((struct sja1105_general_params_entry *) \
+	((struct sja1105_private *)priv)->static_config. \
+	tables[BLK_IDX_GENERAL_PARAMS].entries)->tpid == ETH_P_8021Q)
+
+/* The TPID setting belongs to the General Parameters table,
+ * which can only be partially reconfigured at runtime (and not the TPID).
+ * So a switch reset is required.
+ */
+static int sja1105_change_tpid(struct sja1105_private *priv,
+			       u16 tpid, u16 tpid2)
+{
+	struct sja1105_general_params_entry *general_params;
+	struct sja1105_table *table;
+
+	table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
+	general_params = table->entries;
+	general_params->tpid = tpid;
+	general_params->tpid2 = tpid2;
+	return sja1105_static_config_reload(priv);
+}
+
+static int sja1105_pvid_apply(struct sja1105_private *priv, int port, u16 pvid)
+{
+	struct sja1105_mac_config_entry *mac;
+
+	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+
+	mac[port].vlanid = pvid;
+
+	return sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG, port,
+					   &mac[port], true);
+}
+
+static int sja1105_is_vlan_configured(struct sja1105_private *priv, u16 vid)
+{
+	struct sja1105_vlan_lookup_entry *vlan;
+	int count, i;
+
+	vlan = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entries;
+	count = priv->static_config.tables[BLK_IDX_VLAN_LOOKUP].entry_count;
+
+	for (i = 0; i < count; i++)
+		if (vlan[i].vlanid == vid)
+			return i;
+
+	/* Return an invalid entry index if not found */
+	return -1;
+}
+
+static int sja1105_vlan_apply(struct sja1105_private *priv, int port, u16 vid,
+			      bool enabled, bool untagged)
+{
+	struct sja1105_vlan_lookup_entry *vlan;
+	struct sja1105_table *table;
+	bool keep = true;
+	int match, rc;
+
+	table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
+
+	match = sja1105_is_vlan_configured(priv, vid);
+	if (match < 0) {
+		/* Can't delete a missing entry. */
+		if (!enabled)
+			return 0;
+		rc = sja1105_table_resize(table, table->entry_count + 1);
+		if (rc)
+			return rc;
+		match = table->entry_count - 1;
+	}
+	/* Assign pointer after the resize (it's new memory) */
+	vlan = table->entries;
+	vlan[match].vlanid = vid;
+	if (enabled) {
+		vlan[match].vlan_bc |= BIT(port);
+		vlan[match].vmemb_port |= BIT(port);
+	} else {
+		vlan[match].vlan_bc &= ~BIT(port);
+		vlan[match].vmemb_port &= ~BIT(port);
+	}
+	/* Also unset tag_port if removing this VLAN was requested,
+	 * just so we don't have a confusing bitmap (no practical purpose).
+	 */
+	if (untagged || !enabled)
+		vlan[match].tag_port &= ~BIT(port);
+	else
+		vlan[match].tag_port |= BIT(port);
+	/* If there's no port left as member of this VLAN,
+	 * it's time for it to go.
+	 */
+	if (!vlan[match].vmemb_port)
+		keep = false;
+
+	dev_dbg(priv->ds->dev,
+		"%s: port %d, vid %llu, broadcast domain 0x%llx, "
+		"port members 0x%llx, tagged ports 0x%llx, keep %d\n",
+		__func__, port, vlan[match].vlanid, vlan[match].vlan_bc,
+		vlan[match].vmemb_port, vlan[match].tag_port, keep);
+
+	rc = sja1105_dynamic_config_write(priv, BLK_IDX_VLAN_LOOKUP, vid,
+					  &vlan[match], keep);
+	if (rc < 0)
+		return rc;
+
+	if (!keep)
+		return sja1105_table_delete_entry(table, match);
+
+	return 0;
+}
+
 static enum dsa_tag_protocol
 sja1105_get_tag_protocol(struct dsa_switch *ds, int port)
 {
 	return DSA_TAG_PROTO_NONE;
 }
 
+/* This callback needs to be present */
+static int sja1105_vlan_prepare(struct dsa_switch *ds, int port,
+				const struct switchdev_obj_port_vlan *vlan)
+{
+	return 0;
+}
+
+static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
+{
+	struct sja1105_private *priv = ds->priv;
+	int rc = 0, i;
+
+	/* On SJA1105, VLAN filtering per se is always enabled in hardware.
+	 * The only thing we can do to disable it is lie about what the 802.1Q
+	 * EtherType is. So it will still try to apply VLAN filtering, but all
+	 * ingress traffic (except frames received with EtherType of
+	 * ETH_P_EDSA, which are invalid) will be internally tagged with a
+	 * distorted VLAN header where the TPID is ETH_P_EDSA, and the VLAN ID
+	 * is the port pvid. Since this operation is global to the switch,
+	 * we need to handle the case where multiple bridges span the same
+	 * switch device and one of them has a different setting than what is
+	 * being requested.
+	 */
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		struct net_device *bridge_dev;
+
+		bridge_dev = dsa_to_port(ds, i)->bridge_dev;
+		if (bridge_dev &&
+		    bridge_dev != dsa_to_port(ds, port)->bridge_dev &&
+		    br_vlan_enabled(bridge_dev) != enabled) {
+			netdev_err(bridge_dev,
+				   "VLAN filtering is global to the switch!\n");
+			return -EINVAL;
+		}
+	}
+
+	if (enabled && !sja1105_vlan_filtering_enabled(priv))
+		/* Enable VLAN filtering.
+		 * TODO read these from bridge_dev->protocol.
+		 */
+		rc = sja1105_change_tpid(priv, ETH_P_8021Q, ETH_P_8021AD);
+	else if (!enabled && sja1105_vlan_filtering_enabled(priv))
+		/* Disable VLAN filtering. TODO: Install a TPID
+		 * that also encodes the switch ID (aka ds->index)
+		 * so that stacking switch tags will be supported.
+		 */
+		rc = sja1105_change_tpid(priv, ETH_P_EDSA, ETH_P_EDSA);
+	else
+		return 0;
+	if (rc)
+		dev_err(ds->dev, "Failed to change VLAN Ethertype\n");
+
+	return rc;
+}
+
+static void sja1105_vlan_add(struct dsa_switch *ds, int port,
+			     const struct switchdev_obj_port_vlan *vlan)
+{
+	struct sja1105_private *priv = ds->priv;
+	u16 vid;
+	int rc;
+
+	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+		rc = sja1105_vlan_apply(priv, port, vid, true, vlan->flags &
+					BRIDGE_VLAN_INFO_UNTAGGED);
+		if (rc < 0) {
+			dev_err(ds->dev, "Failed to add VLAN %d to port %d: %d\n",
+				vid, port, rc);
+			return;
+		}
+		if (vlan->flags & BRIDGE_VLAN_INFO_PVID) {
+			rc = sja1105_pvid_apply(ds->priv, port, vid);
+			if (rc < 0) {
+				dev_err(ds->dev, "Failed to set pvid %d on port %d: %d\n",
+					vid, port, rc);
+				return;
+			}
+		}
+	}
+}
+
+static int sja1105_vlan_del(struct dsa_switch *ds, int port,
+			    const struct switchdev_obj_port_vlan *vlan)
+{
+	struct sja1105_private *priv = ds->priv;
+	u16 vid;
+	int rc;
+
+	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+		rc = sja1105_vlan_apply(priv, port, vid, false, vlan->flags &
+					BRIDGE_VLAN_INFO_UNTAGGED);
+		if (rc < 0) {
+			dev_err(ds->dev, "Failed to remove VLAN %d from port %d: %d\n",
+				vid, port, rc);
+			return rc;
+		}
+	}
+	return 0;
+}
+
 /* The programming model for the SJA1105 switch is "all-at-once" via static
  * configuration tables. Some of these can be dynamically modified at runtime,
  * but not the xMII mode parameters table.
@@ -988,6 +1255,10 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
 	.port_fdb_del		= sja1105_fdb_del,
 	.port_bridge_join	= sja1105_bridge_join,
 	.port_bridge_leave	= sja1105_bridge_leave,
+	.port_vlan_prepare	= sja1105_vlan_prepare,
+	.port_vlan_filtering	= sja1105_vlan_filtering,
+	.port_vlan_add		= sja1105_vlan_add,
+	.port_vlan_del		= sja1105_vlan_del,
 	.port_mdb_prepare	= sja1105_mdb_prepare,
 	.port_mdb_add		= sja1105_mdb_add,
 	.port_mdb_del		= sja1105_mdb_del,
-- 
2.17.1



* [RFC PATCH net-next 09/13] net: dsa: sja1105: Add support for ethtool port counters
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (7 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 08/13] net: dsa: sja1105: Add support for VLAN operations Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:44   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports Vladimir Oltean
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 drivers/net/dsa/sja1105/Makefile          |   1 +
 drivers/net/dsa/sja1105/sja1105.h         |   6 +
 drivers/net/dsa/sja1105/sja1105_ethtool.c | 420 ++++++++++++++++++++++
 drivers/net/dsa/sja1105/sja1105_main.c    |   3 +
 4 files changed, 430 insertions(+)
 create mode 100644 drivers/net/dsa/sja1105/sja1105_ethtool.c

diff --git a/drivers/net/dsa/sja1105/Makefile b/drivers/net/dsa/sja1105/Makefile
index ed00840802f4..bb4404c79eb2 100644
--- a/drivers/net/dsa/sja1105/Makefile
+++ b/drivers/net/dsa/sja1105/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_NET_DSA_SJA1105) += sja1105.o
 sja1105-objs := \
     sja1105_spi.o \
     sja1105_main.o \
+    sja1105_ethtool.o \
     sja1105_clocking.o \
     sja1105_static_config.o \
     sja1105_dynamic_config.o \
diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h
index f8cac518a30a..7c2e4d660cd0 100644
--- a/drivers/net/dsa/sja1105/sja1105.h
+++ b/drivers/net/dsa/sja1105/sja1105.h
@@ -102,6 +102,12 @@ const char *sja1105_device_id_string_get(u64 device_id, u64 part_nr);
 int sja1105_clocking_setup_port(struct sja1105_private *priv, int port);
 int sja1105_clocking_setup(struct sja1105_private *priv);
 
+/* From sja1105-ethtool.c */
+void sja1105_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data);
+void sja1105_get_strings(struct dsa_switch *ds, int port,
+			 u32 stringset, u8 *data);
+int sja1105_get_sset_count(struct dsa_switch *ds, int port, int sset);
+
 /* From sja1105-dynamic-config.c */
 
 int sja1105_dynamic_config_read(struct sja1105_private *priv,
diff --git a/drivers/net/dsa/sja1105/sja1105_ethtool.c b/drivers/net/dsa/sja1105/sja1105_ethtool.c
new file mode 100644
index 000000000000..0d5961674968
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_ethtool.c
@@ -0,0 +1,420 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018-2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include "sja1105.h"
+
+struct sja1105_port_status_mac {
+	u64 n_runt;
+	u64 n_soferr;
+	u64 n_alignerr;
+	u64 n_miierr;
+	u64 typeerr;
+	u64 sizeerr;
+	u64 tctimeout;
+	u64 priorerr;
+	u64 nomaster;
+	u64 memov;
+	u64 memerr;
+	u64 invtyp;
+	u64 intcyov;
+	u64 domerr;
+	u64 pcfbagdrop;
+	u64 spcprior;
+	u64 ageprior;
+	u64 portdrop;
+	u64 lendrop;
+	u64 bagdrop;
+	u64 policeerr;
+	u64 drpnona664err;
+	u64 spcerr;
+	u64 agedrp;
+};
+
+struct sja1105_port_status_hl1 {
+	u64 n_n664err;
+	u64 n_vlanerr;
+	u64 n_unreleased;
+	u64 n_sizeerr;
+	u64 n_crcerr;
+	u64 n_vlnotfound;
+	u64 n_ctpolerr;
+	u64 n_polerr;
+	u64 n_rxfrmsh;
+	u64 n_rxfrm;
+	u64 n_rxbytesh;
+	u64 n_rxbyte;
+	u64 n_txfrmsh;
+	u64 n_txfrm;
+	u64 n_txbytesh;
+	u64 n_txbyte;
+};
+
+struct sja1105_port_status_hl2 {
+	u64 n_qfull;
+	u64 n_part_drop;
+	u64 n_egr_disabled;
+	u64 n_not_reach;
+	u64 qlevel_hwm[8]; /* Only for P/Q/R/S */
+	u64 qlevel[8];     /* Only for P/Q/R/S */
+};
+
+struct sja1105_port_status {
+	struct sja1105_port_status_mac mac;
+	struct sja1105_port_status_hl1 hl1;
+	struct sja1105_port_status_hl2 hl2;
+};
+
+static void
+sja1105_port_status_mac_unpack(void *buf,
+			       struct sja1105_port_status_mac *status)
+{
+	/* Make pointer arithmetic work on 4 bytes */
+	u32 *p = (u32 *)buf;
+
+	sja1105_unpack(p + 0x0, &status->n_runt,       31, 24, 4);
+	sja1105_unpack(p + 0x0, &status->n_soferr,     23, 16, 4);
+	sja1105_unpack(p + 0x0, &status->n_alignerr,   15,  8, 4);
+	sja1105_unpack(p + 0x0, &status->n_miierr,      7,  0, 4);
+	sja1105_unpack(p + 0x1, &status->typeerr,      27, 27, 4);
+	sja1105_unpack(p + 0x1, &status->sizeerr,      26, 26, 4);
+	sja1105_unpack(p + 0x1, &status->tctimeout,    25, 25, 4);
+	sja1105_unpack(p + 0x1, &status->priorerr,     24, 24, 4);
+	sja1105_unpack(p + 0x1, &status->nomaster,     23, 23, 4);
+	sja1105_unpack(p + 0x1, &status->memov,        22, 22, 4);
+	sja1105_unpack(p + 0x1, &status->memerr,       21, 21, 4);
+	sja1105_unpack(p + 0x1, &status->invtyp,       19, 19, 4);
+	sja1105_unpack(p + 0x1, &status->intcyov,      18, 18, 4);
+	sja1105_unpack(p + 0x1, &status->domerr,       17, 17, 4);
+	sja1105_unpack(p + 0x1, &status->pcfbagdrop,   16, 16, 4);
+	sja1105_unpack(p + 0x1, &status->spcprior,     15, 12, 4);
+	sja1105_unpack(p + 0x1, &status->ageprior,     11,  8, 4);
+	sja1105_unpack(p + 0x1, &status->portdrop,      6,  6, 4);
+	sja1105_unpack(p + 0x1, &status->lendrop,       5,  5, 4);
+	sja1105_unpack(p + 0x1, &status->bagdrop,       4,  4, 4);
+	sja1105_unpack(p + 0x1, &status->policeerr,     3,  3, 4);
+	sja1105_unpack(p + 0x1, &status->drpnona664err, 2,  2, 4);
+	sja1105_unpack(p + 0x1, &status->spcerr,        1,  1, 4);
+	sja1105_unpack(p + 0x1, &status->agedrp,        0,  0, 4);
+}
+
+static void
+sja1105_port_status_hl1_unpack(void *buf,
+			       struct sja1105_port_status_hl1 *status)
+{
+	/* Make pointer arithmetic work on 4 bytes */
+	u32 *p = (u32 *)buf;
+
+	sja1105_unpack(p + 0xF, &status->n_n664err,    31,  0, 4);
+	sja1105_unpack(p + 0xE, &status->n_vlanerr,    31,  0, 4);
+	sja1105_unpack(p + 0xD, &status->n_unreleased, 31,  0, 4);
+	sja1105_unpack(p + 0xC, &status->n_sizeerr,    31,  0, 4);
+	sja1105_unpack(p + 0xB, &status->n_crcerr,     31,  0, 4);
+	sja1105_unpack(p + 0xA, &status->n_vlnotfound, 31,  0, 4);
+	sja1105_unpack(p + 0x9, &status->n_ctpolerr,   31,  0, 4);
+	sja1105_unpack(p + 0x8, &status->n_polerr,     31,  0, 4);
+	sja1105_unpack(p + 0x7, &status->n_rxfrmsh,    31,  0, 4);
+	sja1105_unpack(p + 0x6, &status->n_rxfrm,      31,  0, 4);
+	sja1105_unpack(p + 0x5, &status->n_rxbytesh,   31,  0, 4);
+	sja1105_unpack(p + 0x4, &status->n_rxbyte,     31,  0, 4);
+	sja1105_unpack(p + 0x3, &status->n_txfrmsh,    31,  0, 4);
+	sja1105_unpack(p + 0x2, &status->n_txfrm,      31,  0, 4);
+	sja1105_unpack(p + 0x1, &status->n_txbytesh,   31,  0, 4);
+	sja1105_unpack(p + 0x0, &status->n_txbyte,     31,  0, 4);
+	status->n_rxfrm  += status->n_rxfrmsh  << 32;
+	status->n_rxbyte += status->n_rxbytesh << 32;
+	status->n_txfrm  += status->n_txfrmsh  << 32;
+	status->n_txbyte += status->n_txbytesh << 32;
+}
+
+static void
+sja1105_port_status_hl2_unpack(void *buf,
+			       struct sja1105_port_status_hl2 *status)
+{
+	/* Make pointer arithmetic work on 4 bytes */
+	u32 *p = (u32 *)buf;
+
+	sja1105_unpack(p + 0x3, &status->n_qfull,        31,  0, 4);
+	sja1105_unpack(p + 0x2, &status->n_part_drop,    31,  0, 4);
+	sja1105_unpack(p + 0x1, &status->n_egr_disabled, 31,  0, 4);
+	sja1105_unpack(p + 0x0, &status->n_not_reach,    31,  0, 4);
+}
+
+static void
+sja1105pqrs_port_status_qlevel_unpack(void *buf,
+				      struct sja1105_port_status_hl2 *status)
+{
+	/* Make pointer arithmetic work on 4 bytes */
+	u32 *p = (u32 *)buf;
+	int i;
+
+	for (i = 0; i < 8; i++) {
+		sja1105_unpack(p + i, &status->qlevel_hwm[i], 24, 16, 4);
+		sja1105_unpack(p + i, &status->qlevel[i],      8,  0, 4);
+	}
+}
+
+static int sja1105_port_status_get_mac(struct sja1105_private *priv,
+				       struct sja1105_port_status_mac *status,
+				       int port)
+{
+#define SIZE_MAC_AREA (0x02 * 4)
+	u8 packed_buf[SIZE_MAC_AREA];
+	int rc = 0;
+
+	memset(status, 0, sizeof(*status));
+
+	/* MAC area */
+	rc = sja1105_spi_send_packed_buf(priv, SPI_READ, priv->regs->mac[port],
+					 packed_buf, SIZE_MAC_AREA);
+	if (rc < 0)
+		return rc;
+
+	sja1105_port_status_mac_unpack(packed_buf, status);
+
+	return 0;
+#undef SIZE_MAC_AREA
+}
+
+static int sja1105_port_status_get_hl1(struct sja1105_private *priv,
+				       struct sja1105_port_status_hl1 *status,
+				       int port)
+{
+#define SIZE_HL1_AREA (0x10 * 4)
+	u8 packed_buf[SIZE_HL1_AREA];
+	int rc = 0;
+
+	memset(status, 0, sizeof(*status));
+
+	rc = sja1105_spi_send_packed_buf(priv, SPI_READ,
+					 priv->regs->mac_hl1[port],
+					 packed_buf, SIZE_HL1_AREA);
+	if (rc < 0)
+		return rc;
+
+	sja1105_port_status_hl1_unpack(packed_buf, status);
+
+	return 0;
+#undef SIZE_HL1_AREA
+}
+
+static int sja1105_port_status_get_hl2(struct sja1105_private *priv,
+				       struct sja1105_port_status_hl2 *status,
+				       int port)
+{
+#define SIZE_HL2_AREA (0x4 * 4)
+#define SIZE_QLEVEL_AREA (0x8 * 4) /* 0x4 to 0xB */
+	u8 packed_buf[SIZE_QLEVEL_AREA];
+	int rc = 0;
+
+	memset(status, 0, sizeof(*status));
+
+	rc = sja1105_spi_send_packed_buf(priv, SPI_READ,
+					 priv->regs->mac_hl2[port],
+					 packed_buf, SIZE_HL2_AREA);
+	if (rc < 0)
+		return rc;
+
+	sja1105_port_status_hl2_unpack(packed_buf, status);
+
+	if (IS_ET(priv->device_id))
+		/* Code below is strictly P/Q/R/S specific. */
+		return 0;
+
+	rc = sja1105_spi_send_packed_buf(priv, SPI_READ,
+					 priv->regs->qlevel[port],
+					 packed_buf, SIZE_QLEVEL_AREA);
+	if (rc < 0)
+		return rc;
+
+	sja1105pqrs_port_status_qlevel_unpack(packed_buf, status);
+
+	return 0;
+#undef SIZE_QLEVEL_AREA
+#undef SIZE_HL2_AREA
+}
+
+static int sja1105_port_status_get(struct sja1105_private *priv,
+				   struct sja1105_port_status *status,
+				   int port)
+{
+	int rc;
+
+	rc = sja1105_port_status_get_mac(priv, &status->mac, port);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_port_status_get_hl1(priv, &status->hl1, port);
+	if (rc < 0)
+		return rc;
+	rc = sja1105_port_status_get_hl2(priv, &status->hl2, port);
+	if (rc < 0)
+		return rc;
+
+	return 0;
+}
+
+static char sja1105_port_stats[][ETH_GSTRING_LEN] = {
+	/* MAC-Level Diagnostic Counters */
+	"n_runt",
+	"n_soferr",
+	"n_alignerr",
+	"n_miierr",
+	/* MAC-Level Diagnostic Flags */
+	"typeerr",
+	"sizeerr",
+	"tctimeout",
+	"priorerr",
+	"nomaster",
+	"memov",
+	"memerr",
+	"invtyp",
+	"intcyov",
+	"domerr",
+	"pcfbagdrop",
+	"spcprior",
+	"ageprior",
+	"portdrop",
+	"lendrop",
+	"bagdrop",
+	"policeerr",
+	"drpnona664err",
+	"spcerr",
+	"agedrp",
+	/* High-Level Diagnostic Counters */
+	"n_n664err",
+	"n_vlanerr",
+	"n_unreleased",
+	"n_sizeerr",
+	"n_crcerr",
+	"n_vlnotfound",
+	"n_ctpolerr",
+	"n_polerr",
+	"n_rxfrm",
+	"n_rxbyte",
+	"n_txfrm",
+	"n_txbyte",
+	"n_qfull",
+	"n_part_drop",
+	"n_egr_disabled",
+	"n_not_reach",
+};
+
+static char sja1105pqrs_extra_port_stats[][ETH_GSTRING_LEN] = {
+	/* Queue Levels */
+	"qlevel_hwm_0",
+	"qlevel_hwm_1",
+	"qlevel_hwm_2",
+	"qlevel_hwm_3",
+	"qlevel_hwm_4",
+	"qlevel_hwm_5",
+	"qlevel_hwm_6",
+	"qlevel_hwm_7",
+	"qlevel_0",
+	"qlevel_1",
+	"qlevel_2",
+	"qlevel_3",
+	"qlevel_4",
+	"qlevel_5",
+	"qlevel_6",
+	"qlevel_7",
+};
+
+void sja1105_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
+{
+	struct sja1105_private *priv = ds->priv;
+	struct sja1105_port_status status;
+	int rc, i, k = 0;
+
+	rc = sja1105_port_status_get(priv, &status, port);
+	if (rc < 0) {
+		dev_err(ds->dev, "Failed to read port %d counters: %d\n",
+			port, rc);
+		return;
+	}
+	memset(data, 0, ARRAY_SIZE(sja1105_port_stats) * sizeof(u64));
+	data[k++] = status.mac.n_runt;
+	data[k++] = status.mac.n_soferr;
+	data[k++] = status.mac.n_alignerr;
+	data[k++] = status.mac.n_miierr;
+	data[k++] = status.mac.typeerr;
+	data[k++] = status.mac.sizeerr;
+	data[k++] = status.mac.tctimeout;
+	data[k++] = status.mac.priorerr;
+	data[k++] = status.mac.nomaster;
+	data[k++] = status.mac.memov;
+	data[k++] = status.mac.memerr;
+	data[k++] = status.mac.invtyp;
+	data[k++] = status.mac.intcyov;
+	data[k++] = status.mac.domerr;
+	data[k++] = status.mac.pcfbagdrop;
+	data[k++] = status.mac.spcprior;
+	data[k++] = status.mac.ageprior;
+	data[k++] = status.mac.portdrop;
+	data[k++] = status.mac.lendrop;
+	data[k++] = status.mac.bagdrop;
+	data[k++] = status.mac.policeerr;
+	data[k++] = status.mac.drpnona664err;
+	data[k++] = status.mac.spcerr;
+	data[k++] = status.mac.agedrp;
+	data[k++] = status.hl1.n_n664err;
+	data[k++] = status.hl1.n_vlanerr;
+	data[k++] = status.hl1.n_unreleased;
+	data[k++] = status.hl1.n_sizeerr;
+	data[k++] = status.hl1.n_crcerr;
+	data[k++] = status.hl1.n_vlnotfound;
+	data[k++] = status.hl1.n_ctpolerr;
+	data[k++] = status.hl1.n_polerr;
+	data[k++] = status.hl1.n_rxfrm;
+	data[k++] = status.hl1.n_rxbyte;
+	data[k++] = status.hl1.n_txfrm;
+	data[k++] = status.hl1.n_txbyte;
+	data[k++] = status.hl2.n_qfull;
+	data[k++] = status.hl2.n_part_drop;
+	data[k++] = status.hl2.n_egr_disabled;
+	data[k++] = status.hl2.n_not_reach;
+
+	if (!IS_PQRS(priv->device_id))
+		return;
+
+	memset(data + k, 0, ARRAY_SIZE(sja1105pqrs_extra_port_stats) *
+			sizeof(u64));
+	for (i = 0; i < 8; i++) {
+		data[k++] = status.hl2.qlevel_hwm[i];
+		data[k++] = status.hl2.qlevel[i];
+	}
+}
+
+void sja1105_get_strings(struct dsa_switch *ds, int port,
+			 u32 stringset, u8 *data)
+{
+	struct sja1105_private *priv = ds->priv;
+	u8 *p = data;
+	int i;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < ARRAY_SIZE(sja1105_port_stats); i++) {
+			strlcpy(p, sja1105_port_stats[i], ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		if (!IS_PQRS(priv->device_id))
+			return;
+		for (i = 0; i < ARRAY_SIZE(sja1105pqrs_extra_port_stats); i++) {
+			strlcpy(p, sja1105pqrs_extra_port_stats[i],
+				ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		break;
+	}
+}
+
+int sja1105_get_sset_count(struct dsa_switch *ds, int port, int sset)
+{
+	int count = ARRAY_SIZE(sja1105_port_stats);
+	struct sja1105_private *priv = ds->priv;
+
+	if (IS_PQRS(priv->device_id))
+		count += ARRAY_SIZE(sja1105pqrs_extra_port_stats);
+
+	return count;
+}
+
diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index a1d7f3b03099..008bcebc4738 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -1250,6 +1250,9 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
 	.get_tag_protocol	= sja1105_get_tag_protocol,
 	.setup			= sja1105_setup,
 	.adjust_link		= sja1105_adjust_link,
+	.get_strings		= sja1105_get_strings,
+	.get_ethtool_stats	= sja1105_get_ethtool_stats,
+	.get_sset_count		= sja1105_get_sset_count,
 	.port_fdb_dump		= sja1105_fdb_dump,
 	.port_fdb_add		= sja1105_fdb_add,
 	.port_fdb_del		= sja1105_fdb_del,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (8 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 09/13] net: dsa: sja1105: Add support for ethtool port counters Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:31   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 11/13] net: dsa: sja1105: Add support for Spanning Tree Protocol Vladimir Oltean
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

In order to support this, we are creating a makeshift switch tag out of
a VLAN trunk configured on the CPU port. Termination on switch ports
only works when not under a vlan_filtering bridge. We are making use of
the generic CONFIG_NET_DSA_TAG_8021Q code and leveraging it from our own
CONFIG_NET_DSA_TAG_SJA1105.

There are two types of traffic: regular and link-local.
The link-local traffic received on the CPU port is trapped from the
switch's regular forwarding decisions because it matched one of the two
DMAC filters for management traffic.
On transmission, the switch requires special massaging for these
link-local frames. Due to a weird implementation of the switching IP, by
default it drops link-local frames that originate on the CPU port. It
needs to be told where to forward them to, through an SPI command
("management route") that is valid for only a single frame.
So when we're sending link-local traffic, we need to clone skb's from
DSA and send them in our custom xmit worker that also performs SPI access.

For that purpose, the DSA xmit handler and the xmit worker communicate
through a per-port "skb ring" software structure, with a producer and a
consumer index. At the moment this structure is rather fragile
(ping-flooding to a link-local DMAC would cause most of the frames to
get dropped). I would like to move the management traffic onto a
separate netdev queue that I can stop when the skb ring is full and the
hardware is busy processing, so that we are not forced to drop traffic.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 drivers/net/dsa/sja1105/sja1105.h      |   8 ++
 drivers/net/dsa/sja1105/sja1105_main.c | 119 +++++++++++++++++++++
 include/linux/dsa/sja1105.h            |  52 +++++++++
 include/net/dsa.h                      |   1 +
 net/dsa/Kconfig                        |   3 +
 net/dsa/Makefile                       |   1 +
 net/dsa/dsa.c                          |   6 ++
 net/dsa/dsa_priv.h                     |   3 +
 net/dsa/tag_sja1105.c                  | 142 +++++++++++++++++++++++++
 9 files changed, 335 insertions(+)
 create mode 100644 include/linux/dsa/sja1105.h
 create mode 100644 net/dsa/tag_sja1105.c

diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h
index 7c2e4d660cd0..63e94c4dab2d 100644
--- a/drivers/net/dsa/sja1105/sja1105.h
+++ b/drivers/net/dsa/sja1105/sja1105.h
@@ -5,6 +5,7 @@
 #ifndef _SJA1105_H
 #define _SJA1105_H
 
+#include <linux/dsa/sja1105.h>
 #include <net/dsa.h>
 #include "sja1105_static_config.h"
 
@@ -19,6 +20,12 @@
 #define SJA1105_NUM_TC    8
 #define SJA1105ET_FDB_BIN_SIZE 4
 
+struct sja1105_port {
+	struct dsa_port *dp;
+	struct work_struct xmit_work;
+	struct sja1105_skb_ring xmit_ring;
+};
+
 /* Keeps the different addresses between E/T and P/Q/R/S */
 struct sja1105_regs {
 	u64 general_status;
@@ -50,6 +57,7 @@ struct sja1105_private {
 	struct dsa_switch *ds;
 	u64 device_id;
 	u64 part_nr; /* Needed for P/R distinction (same switch core) */
+	struct sja1105_port ports[SJA1105_NUM_PORTS];
 };
 
 #include "sja1105_dynamic_config.h"
diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 008bcebc4738..92dc58afd74e 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -1101,6 +1101,21 @@ static int sja1105_vlan_apply(struct sja1105_private *priv, int port, u16 vid,
 	return 0;
 }
 
+static int sja1105_setup_8021q_tagging(struct dsa_switch *ds, bool enabled)
+{
+	int rc, i;
+
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+		rc = dsa_port_setup_8021q_tagging(ds, i, enabled);
+		if (rc < 0) {
+			dev_err(ds->dev, "Failed to setup VLAN tagging for port %d: %d\n",
+				i, rc);
+			return rc;
+		}
+	}
+	return 0;
+}
+
 static enum dsa_tag_protocol
 sja1105_get_tag_protocol(struct dsa_switch *ds, int port)
 {
@@ -1159,6 +1174,14 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
 	if (rc)
 		dev_err(ds->dev, "Failed to change VLAN Ethertype\n");
 
+	/* Switch port identification based on 802.1Q is only passable
+	 * if we are not under a vlan_filtering bridge. So make sure
+	 * the two configurations are mutually exclusive.
+	 */
+	rc = sja1105_setup_8021q_tagging(ds, !enabled);
+	if (rc < 0)
+		return rc;
+
 	return rc;
 }
 
@@ -1246,6 +1269,100 @@ static int sja1105_setup(struct dsa_switch *ds)
 	return 0;
 }
 
+#include "../../../net/dsa/dsa_priv.h"
+/* Deferred work is unfortunately necessary because setting up the management
+ * route cannot be done from atomic context (SPI transfer takes a sleepable
+ * lock on the bus)
+ */
+void sja1105_xmit_work_handler(struct work_struct *work)
+{
+	struct sja1105_port *sp = container_of(work, struct sja1105_port,
+						xmit_work);
+	struct sja1105_private *priv = sp->dp->ds->priv;
+	struct net_device *slave = sp->dp->slave;
+	struct net_device *master = dsa_slave_to_master(slave);
+	int port = (uintptr_t)(sp - priv->ports);
+	struct sk_buff *skb;
+	int i, rc;
+
+	while ((i = sja1105_skb_ring_get(&sp->xmit_ring, &skb)) >= 0) {
+		struct sja1105_mgmt_entry mgmt_route = { 0 };
+		struct ethhdr *hdr;
+		int timeout = 10;
+		int skb_len;
+
+		skb_len = skb->len;
+		hdr = eth_hdr(skb);
+
+		mgmt_route.macaddr = ether_addr_to_u64(hdr->h_dest);
+		mgmt_route.destports = BIT(port);
+		mgmt_route.enfport = 1;
+		mgmt_route.tsreg = 0;
+		mgmt_route.takets = true;
+
+		rc = sja1105_dynamic_config_write(priv, BLK_IDX_MGMT_ROUTE,
+						  port, &mgmt_route, true);
+		if (rc < 0) {
+			kfree_skb(skb);
+			slave->stats.tx_dropped++;
+			continue;
+		}
+
+		/* Transfer skb to the host port. */
+		skb->dev = master;
+		dev_queue_xmit(skb);
+
+		/* Wait until the switch has processed the frame */
+		do {
+			rc = sja1105_dynamic_config_read(priv, BLK_IDX_MGMT_ROUTE,
+							 port, &mgmt_route);
+			if (rc < 0) {
+				slave->stats.tx_errors++;
+				dev_err(priv->ds->dev,
+					"xmit: failed to poll for mgmt route\n");
+				continue;
+			}
+
+			/* UM10944: The ENFPORT flag of the respective entry is
+			 * cleared when a match is found. The host can use this
+			 * flag as an acknowledgment.
+			 */
+			usleep_range(1000, 2000);
+		} while (mgmt_route.enfport && --timeout);
+
+		if (!timeout) {
+			dev_err(priv->ds->dev, "xmit timed out\n");
+			slave->stats.tx_errors++;
+			continue;
+		}
+
+		slave->stats.tx_packets++;
+		slave->stats.tx_bytes += skb_len;
+	}
+}
+
+static int sja1105_port_enable(struct dsa_switch *ds, int port,
+			       struct phy_device *phy)
+{
+	struct sja1105_private *priv = ds->priv;
+	struct sja1105_port *sp = &priv->ports[port];
+
+	sp->dp = &ds->ports[port];
+	INIT_WORK(&sp->xmit_work, sja1105_xmit_work_handler);
+	return 0;
+}
+
+static void sja1105_port_disable(struct dsa_switch *ds, int port)
+{
+	struct sja1105_private *priv = ds->priv;
+	struct sja1105_port *sp = &priv->ports[port];
+	struct sk_buff *skb;
+
+	cancel_work_sync(&sp->xmit_work);
+	while (sja1105_skb_ring_get(&sp->xmit_ring, &skb) >= 0)
+		kfree_skb(skb);
+}
+
 static const struct dsa_switch_ops sja1105_switch_ops = {
 	.get_tag_protocol	= sja1105_get_tag_protocol,
 	.setup			= sja1105_setup,
@@ -1265,6 +1382,8 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
 	.port_mdb_prepare	= sja1105_mdb_prepare,
 	.port_mdb_add		= sja1105_mdb_add,
 	.port_mdb_del		= sja1105_mdb_del,
+	.port_enable		= sja1105_port_enable,
+	.port_disable		= sja1105_port_disable,
 };
 
 static int sja1105_probe(struct spi_device *spi)
diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h
new file mode 100644
index 000000000000..d2419951b0c7
--- /dev/null
+++ b/include/linux/dsa/sja1105.h
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+
+/* Included by drivers/net/dsa/sja1105/sja1105.h and net/dsa/tag_sja1105.c */
+
+#ifndef _NET_DSA_SJA1105_H
+#define _NET_DSA_SJA1105_H
+
+#include <linux/skbuff.h>
+#include <net/dsa.h>
+
+#define SJA1105_SKB_RING_SIZE    20
+
+struct sja1105_skb_ring {
+	struct sk_buff *skb[SJA1105_SKB_RING_SIZE];
+	int count;
+	int pi; /* Producer index */
+	int ci; /* Consumer index */
+};
+
+static inline int sja1105_skb_ring_add(struct sja1105_skb_ring *ring,
+				       struct sk_buff *skb)
+{
+	int index;
+
+	if (ring->count == SJA1105_SKB_RING_SIZE)
+		return -1;
+
+	index = ring->pi;
+	ring->skb[index] = skb;
+	ring->pi = (index + 1) % SJA1105_SKB_RING_SIZE;
+	ring->count++;
+	return index;
+}
+
+static inline int sja1105_skb_ring_get(struct sja1105_skb_ring *ring,
+				       struct sk_buff **skb)
+{
+	int index;
+
+	if (ring->count == 0)
+		return -1;
+
+	index = ring->ci;
+	*skb = ring->skb[index];
+	ring->ci = (index + 1) % SJA1105_SKB_RING_SIZE;
+	ring->count--;
+	return index;
+}
+
+#endif /* _NET_DSA_SJA1105_H */
diff --git a/include/net/dsa.h b/include/net/dsa.h
index b22c350c40f0..51f7967b2931 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -41,6 +41,7 @@ enum dsa_tag_protocol {
 	DSA_TAG_PROTO_KSZ9893,
 	DSA_TAG_PROTO_LAN9303,
 	DSA_TAG_PROTO_MTK,
+	DSA_TAG_PROTO_SJA1105,
 	DSA_TAG_PROTO_QCA,
 	DSA_TAG_PROTO_TRAILER,
 	DSA_TAG_LAST,		/* MUST BE LAST */
diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig
index 2f3a103d7d1a..feaa40c30425 100644
--- a/net/dsa/Kconfig
+++ b/net/dsa/Kconfig
@@ -63,6 +63,9 @@ config NET_DSA_TAG_LAN9303
 config NET_DSA_TAG_MTK
 	bool
 
+config NET_DSA_TAG_SJA1105
+	bool
+
 config NET_DSA_TAG_TRAILER
 	bool
 
diff --git a/net/dsa/Makefile b/net/dsa/Makefile
index d7fc3253d497..8c294cdb895a 100644
--- a/net/dsa/Makefile
+++ b/net/dsa/Makefile
@@ -15,4 +15,5 @@ dsa_core-$(CONFIG_NET_DSA_TAG_KSZ) += tag_ksz.o
 dsa_core-$(CONFIG_NET_DSA_TAG_LAN9303) += tag_lan9303.o
 dsa_core-$(CONFIG_NET_DSA_TAG_MTK) += tag_mtk.o
 dsa_core-$(CONFIG_NET_DSA_TAG_QCA) += tag_qca.o
+dsa_core-$(CONFIG_NET_DSA_TAG_SJA1105) += tag_sja1105.o
 dsa_core-$(CONFIG_NET_DSA_TAG_TRAILER) += tag_trailer.o
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 36de4f2a3366..4b5d7cd0294a 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -65,6 +65,9 @@ const struct dsa_device_ops *dsa_device_ops[DSA_TAG_LAST] = {
 #ifdef CONFIG_NET_DSA_TAG_MTK
 	[DSA_TAG_PROTO_MTK] = &mtk_netdev_ops,
 #endif
+#ifdef CONFIG_NET_DSA_TAG_SJA1105
+	[DSA_TAG_PROTO_SJA1105] = &sja1105_netdev_ops,
+#endif
 #ifdef CONFIG_NET_DSA_TAG_QCA
 	[DSA_TAG_PROTO_QCA] = &qca_netdev_ops,
 #endif
@@ -102,6 +105,9 @@ const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops)
 #ifdef CONFIG_NET_DSA_TAG_MTK
 		[DSA_TAG_PROTO_MTK] = "mtk",
 #endif
+#ifdef CONFIG_NET_DSA_TAG_SJA1105
+		[DSA_TAG_PROTO_SJA1105] = "sja1105",
+#endif
 #ifdef CONFIG_NET_DSA_TAG_QCA
 		[DSA_TAG_PROTO_QCA] = "qca",
 #endif
diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 105058450621..67dc32164147 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -236,6 +236,9 @@ extern const struct dsa_device_ops lan9303_netdev_ops;
 /* tag_mtk.c */
 extern const struct dsa_device_ops mtk_netdev_ops;
 
+/* tag_sja1105.c */
+extern const struct dsa_device_ops sja1105_netdev_ops;
+
 /* tag_qca.c */
 extern const struct dsa_device_ops qca_netdev_ops;
 
diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
new file mode 100644
index 000000000000..fc5d37ec4fd7
--- /dev/null
+++ b/net/dsa/tag_sja1105.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/dsa/sja1105.h>
+#include "../../drivers/net/dsa/sja1105/sja1105.h"
+
+#include "dsa_priv.h"
+
+/* Similar to is_link_local_ether_addr(hdr->h_dest) but also covers PTP */
+static inline bool sja1105_is_link_local(struct sk_buff *skb)
+{
+	struct ethhdr *hdr = eth_hdr(skb);
+	u64 dmac = ether_addr_to_u64(hdr->h_dest);
+
+	if ((dmac & SJA1105_LINKLOCAL_FILTER_A_MASK) ==
+		    SJA1105_LINKLOCAL_FILTER_A)
+		return true;
+	if ((dmac & SJA1105_LINKLOCAL_FILTER_B_MASK) ==
+		    SJA1105_LINKLOCAL_FILTER_B)
+		return true;
+	return false;
+}
+
+static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
+				    struct net_device *netdev)
+{
+	struct dsa_port *dp = dsa_slave_to_port(netdev);
+	struct dsa_switch *ds = dp->ds;
+	struct sja1105_private *priv = ds->priv;
+	struct sja1105_port *sp = &priv->ports[dp->index];
+	struct sk_buff *clone;
+
+	if (likely(!sja1105_is_link_local(skb))) {
+		/* Normal traffic path. */
+		u16 tx_vid = dsa_tagging_tx_vid(ds, dp->index);
+		u8 pcp = skb->priority;
+
+		/* If we are under a vlan_filtering bridge, IP termination on
+		 * switch ports based on 802.1Q tags is simply too brittle to
+		 * be passable. So just defer to the dsa_slave_notag_xmit
+		 * implementation.
+		 */
+		if (dp->vlan_filtering)
+			return skb;
+
+		return dsa_8021q_xmit(skb, netdev, ETH_P_EDSA,
+				     ((pcp << VLAN_PRIO_SHIFT) | tx_vid));
+	}
+
+	/* Code path for transmitting management traffic. This does not rely
+	 * upon switch tagging, but instead SPI-installed management routes.
+	 */
+	clone = skb_clone(skb, GFP_ATOMIC);
+	if (!clone) {
+		dev_err(ds->dev, "xmit: failed to clone skb\n");
+		return NULL;
+	}
+
+	if (sja1105_skb_ring_add(&sp->xmit_ring, clone) < 0) {
+		dev_err(ds->dev, "xmit: skb ring full\n");
+		kfree_skb(clone);
+		return NULL;
+	}
+
+	if (sp->xmit_ring.count == SJA1105_SKB_RING_SIZE)
+		/* TODO setup a dedicated netdev queue for management traffic
+		 * so that we can selectively apply backpressure and not be
+		 * required to stop the entire traffic when the software skb
+		 * ring is full. This requires hooking the ndo_select_queue
+		 * from DSA and matching on mac_fltres.
+		 */
+		dev_err(ds->dev, "xmit: reached maximum skb ring size\n");
+
+	schedule_work(&sp->xmit_work);
+	/* Let DSA free its reference to the skb and we will free
+	 * the clone in the deferred worker
+	 */
+	return NULL;
+}
+
+static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
+				   struct net_device *netdev,
+				   struct packet_type *pt)
+{
+	unsigned int source_port, switch_id;
+	struct ethhdr *hdr = eth_hdr(skb);
+	u16 tpid, vid, tci;
+
+	skb = dsa_8021q_rcv(skb, netdev, pt, &tpid, &tci);
+	if (!skb)
+		return NULL;
+
+	if (tpid != ETH_P_EDSA) {
+		netdev_warn(netdev, "TPID 0x%04x not for tagging\n", tpid);
+		return NULL;
+	}
+
+	skb->priority = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
+	vid = tci & VLAN_VID_MASK;
+
+	skb->offload_fwd_mark = 1;
+
+	if (likely(!sja1105_is_link_local(skb))) {
+		/* Normal traffic path. */
+		source_port = dsa_tagging_rx_source_port(vid);
+		switch_id = dsa_tagging_rx_switch_id(vid);
+	} else {
+		/* Management traffic path. Switch embeds the switch ID and
+		 * port ID into bytes of the destination MAC, courtesy of
+		 * the incl_srcpt option.
+		 */
+		source_port = hdr->h_dest[3];
+		switch_id = hdr->h_dest[4];
+		/* Clear the DMAC bytes that were mangled by the switch */
+		hdr->h_dest[3] = 0;
+		hdr->h_dest[4] = 0;
+	}
+
+	skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
+	if (!skb->dev) {
+		netdev_warn(netdev, "Packet with invalid switch id %u and source port %u\n",
+			    switch_id, source_port);
+		return NULL;
+	}
+
+	/* Delete/overwrite fake VLAN header, DSA expects to not find
+	 * it there, see dsa_switch_rcv: skb_push(skb, ETH_HLEN).
+	 */
+	memmove(skb->data - ETH_HLEN, skb->data - ETH_HLEN - VLAN_HLEN,
+		ETH_HLEN - VLAN_HLEN);
+
+	return skb;
+}
+
+const struct dsa_device_ops sja1105_netdev_ops = {
+	.xmit = sja1105_xmit,
+	.rcv = sja1105_rcv,
+	.overhead = VLAN_HLEN,
+};
+
-- 
2.17.1



* [RFC PATCH net-next 11/13] net: dsa: sja1105: Add support for Spanning Tree Protocol
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (9 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-24  3:23 ` [RFC PATCH net-next 12/13] Documentation: networking: dsa: Add details about NXP SJA1105 driver Vladimir Oltean
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 drivers/net/dsa/sja1105/sja1105_main.c | 108 ++++++++++++++++++++++---
 1 file changed, 99 insertions(+), 9 deletions(-)

diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 92dc58afd74e..448ab0e71827 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -93,8 +93,10 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
 		.drpuntag = false,
 		/* Don't retag 802.1p (VID 0) traffic with the pvid */
 		.retag = false,
-		/* Enable learning and I/O on user ports by default. */
-		.dyn_learn = true,
+		/* Disable learning and I/O on user ports by default -
+		 * STP will enable it.
+		 */
+		.dyn_learn = false,
 		.egress = false,
 		.ingress = false,
 		.mirrcie = 0,
@@ -125,8 +127,17 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
 
 	mac = table->entries;
 
-	for (i = 0; i < SJA1105_NUM_PORTS; i++)
+	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
 		mac[i] = default_mac;
+		if (i == dsa_upstream_port(priv->ds, i)) {
+			/* STP doesn't get called for CPU port, so we need to
+			 * set the I/O parameters statically.
+			 */
+			mac[i].dyn_learn = true;
+			mac[i].ingress = true;
+			mac[i].egress = true;
+		}
+	}
 
 	return 0;
 }
@@ -639,12 +650,14 @@ static int sja1105_get_speed_cfg(unsigned int speed_mbps)
  * for a specific port.
  *
  * @speed_mbps: If 0, leave the speed unchanged, else adapt MAC to PHY speed.
- * @enabled: Manage Rx and Tx settings for this port. Overrides the static
- *	     configuration settings.
+ * @enabled: Manage Rx and Tx settings for this port. If false, overrides the
+ *	     settings from the STP state, but not persistently (does not
+ *	     overwrite the static MAC info for this port).
  */
 static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
 				      int speed_mbps, bool enabled)
 {
+	struct sja1105_mac_config_entry dyn_mac;
 	struct sja1105_xmii_params_entry *mii;
 	struct sja1105_mac_config_entry *mac;
 	struct device *dev = priv->ds->dev;
@@ -677,12 +690,13 @@ static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
 	 * the code common, we'll use the static configuration tables as a
 	 * reasonable approximation for both E/T and P/Q/R/S.
 	 */
-	mac[port].ingress = enabled;
-	mac[port].egress  = enabled;
+	dyn_mac = mac[port];
+	dyn_mac.ingress = enabled && mac[port].ingress;
+	dyn_mac.egress  = enabled && mac[port].egress;
 
 	/* Write to the dynamic reconfiguration tables */
 	rc = sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG,
-					  port, &mac[port], true);
+					  port, &dyn_mac, true);
 	if (rc < 0) {
 		dev_err(dev, "Failed to write MAC config: %d\n", rc);
 		return rc;
@@ -932,6 +946,50 @@ static int sja1105_bridge_member(struct dsa_switch *ds, int port,
 					    port, &l2_fwd[port], true);
 }
 
+static void sja1105_bridge_stp_state_set(struct dsa_switch *ds, int port,
+					 u8 state)
+{
+	struct sja1105_private *priv = ds->priv;
+	struct sja1105_mac_config_entry *mac;
+
+	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+
+	switch (state) {
+	case BR_STATE_DISABLED:
+	case BR_STATE_BLOCKING:
+		/* From UM10944 description of DRPDTAG (why put this there?):
+		 * "Management traffic flows to the port regardless of the state
+		 * of the INGRESS flag". So BPDUs are still allowed to pass.
+		 * At the moment there is no difference between DISABLED and
+		 * BLOCKING.
+		 */
+		mac[port].ingress   = false;
+		mac[port].egress    = false;
+		mac[port].dyn_learn = false;
+		break;
+	case BR_STATE_LISTENING:
+		mac[port].ingress   = true;
+		mac[port].egress    = false;
+		mac[port].dyn_learn = false;
+		break;
+	case BR_STATE_LEARNING:
+		mac[port].ingress   = true;
+		mac[port].egress    = false;
+		mac[port].dyn_learn = true;
+		break;
+	case BR_STATE_FORWARDING:
+		mac[port].ingress   = true;
+		mac[port].egress    = true;
+		mac[port].dyn_learn = true;
+		break;
+	default:
+		dev_err(ds->dev, "invalid STP state: %d\n", state);
+		return;
+	}
+
+	sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG, port,
+				     &mac[port], true);
+}
+
 static int sja1105_bridge_join(struct dsa_switch *ds, int port,
 			       struct net_device *br)
 {
@@ -944,6 +1002,23 @@ static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
 	sja1105_bridge_member(ds, port, br, false);
 }
 
+static u8 sja1105_stp_state_get(struct sja1105_private *priv, int port)
+{
+	struct sja1105_mac_config_entry *mac;
+
+	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+
+	if (!mac[port].ingress && !mac[port].egress && !mac[port].dyn_learn)
+		return BR_STATE_BLOCKING;
+	if (mac[port].ingress && !mac[port].egress && !mac[port].dyn_learn)
+		return BR_STATE_LISTENING;
+	if (mac[port].ingress && !mac[port].egress && mac[port].dyn_learn)
+		return BR_STATE_LEARNING;
+	if (mac[port].ingress && mac[port].egress && mac[port].dyn_learn)
+		return BR_STATE_FORWARDING;
+	return -EINVAL;
+}
+
 /* For situations where we need to change a setting at runtime that is only
  * available through the static configuration, resetting the switch in order
  * to upload the new static config is unavoidable. Back up the settings we
@@ -954,16 +1029,27 @@ static int sja1105_static_config_reload(struct sja1105_private *priv)
 {
 	struct sja1105_mac_config_entry *mac;
 	int speed_mbps[SJA1105_NUM_PORTS];
+	u8 stp_state[SJA1105_NUM_PORTS];
 	int rc, i;
 
 	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
 
 	/* Back up settings changed by sja1105_adjust_port_config and
-	 * and restore their defaults.
+	 * sja1105_bridge_stp_state_set and restore their defaults.
 	 */
 	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
 		speed_mbps[i] = sja1105_speed[mac[i].speed];
 		mac[i].speed = SJA1105_SPEED_AUTO;
+		if (i == dsa_upstream_port(priv->ds, i)) {
+			mac[i].ingress = true;
+			mac[i].egress = true;
+			mac[i].dyn_learn = true;
+		} else {
+			stp_state[i] = sja1105_stp_state_get(priv, i);
+			mac[i].ingress = false;
+			mac[i].egress = false;
+			mac[i].dyn_learn = false;
+		}
 	}
 
 	/* Reset switch and send updated static configuration */
@@ -982,6 +1068,9 @@ static int sja1105_static_config_reload(struct sja1105_private *priv)
 	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
 		bool enabled = (speed_mbps[i] != SJA1105_SPEED_AUTO);
 
+		if (i != dsa_upstream_port(priv->ds, i))
+			sja1105_bridge_stp_state_set(priv->ds, i, stp_state[i]);
+
 		rc = sja1105_adjust_port_config(priv, i, speed_mbps[i],
 						enabled);
 		if (rc < 0)
@@ -1375,6 +1464,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
 	.port_fdb_del		= sja1105_fdb_del,
 	.port_bridge_join	= sja1105_bridge_join,
 	.port_bridge_leave	= sja1105_bridge_leave,
+	.port_stp_state_set	= sja1105_bridge_stp_state_set,
 	.port_vlan_prepare	= sja1105_vlan_prepare,
 	.port_vlan_filtering	= sja1105_vlan_filtering,
 	.port_vlan_add		= sja1105_vlan_add,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RFC PATCH net-next 12/13] Documentation: networking: dsa: Add details about NXP SJA1105 driver
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (10 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 11/13] net: dsa: sja1105: Add support for Spanning Tree Protocol Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:34   ` Florian Fainelli
  2019-03-24  3:23 ` [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for " Vladimir Oltean
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 Documentation/networking/dsa/sja1105.txt | 83 ++++++++++++++++++++++++
 1 file changed, 83 insertions(+)
 create mode 100644 Documentation/networking/dsa/sja1105.txt

diff --git a/Documentation/networking/dsa/sja1105.txt b/Documentation/networking/dsa/sja1105.txt
new file mode 100644
index 000000000000..b6f2c1bedd02
--- /dev/null
+++ b/Documentation/networking/dsa/sja1105.txt
@@ -0,0 +1,83 @@
+NXP SJA1105 switch driver
+=========================
+
+The NXP SJA1105 is a family of 6 devices:
+* SJA1105E: First generation, no TTEthernet
+* SJA1105T: First generation, TTEthernet
+* SJA1105P: Second generation, no TTEthernet, no SGMII
+* SJA1105Q: Second generation, TTEthernet, no SGMII
+* SJA1105R: Second generation, no TTEthernet, SGMII
+* SJA1105S: Second generation, TTEthernet, SGMII
+
+These are SPI-managed automotive switches, with all ports being gigabit
+capable, and supporting MII/RMII/RGMII and optionally SGMII on one port.
+
+The switches do not have an MDIO bus of their own and do not support
+in-band autonegotiation, so for proper PHY management, the host's MDIO
+bus controller needs to be used.
+
+As automotive parts, the switches have a configuration interface geared
+towards set-and-forget use, with minimal dynamic interaction at runtime.
+They require a static configuration to be composed by software, packed
+with CRC and table headers, and sent over SPI.
+
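(The conversion between CPU-native values and the packed on-wire layout is
what the generic packing API from patch 01/13 provides. As a rough,
self-contained illustration of the concept -- not the kernel's actual
lib/packing.c code -- a big-endian bitfield pack/unpack pair could look like
this:)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: pack/unpack the field delimited by bit positions
 * startbit..endbit (inclusive, startbit >= endbit, bit 0 being the LSB
 * of the last buffer byte) between a uint64_t and a packed buffer.
 */
static void pack_field(uint8_t *buf, size_t buflen, uint64_t uval,
		       int startbit, int endbit)
{
	for (int bit = endbit; bit <= startbit; bit++) {
		size_t byte = buflen - 1 - bit / 8;
		uint8_t mask = 1u << (bit % 8);

		if (uval & (1ull << (bit - endbit)))
			buf[byte] |= mask;
		else
			buf[byte] &= ~mask;
	}
}

static uint64_t unpack_field(const uint8_t *buf, size_t buflen,
			     int startbit, int endbit)
{
	uint64_t uval = 0;

	for (int bit = endbit; bit <= startbit; bit++)
		if (buf[buflen - 1 - bit / 8] & (1u << (bit % 8)))
			uval |= 1ull << (bit - endbit);
	return uval;
}
```

The shadow copy of the static config stays in unpacked form; packing happens
only at the boundary where buffers are sent over SPI.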
+The static configuration is composed of several configuration tables. Each
+table consists of a number of entries. Some tables can be (partially)
+reconfigured at runtime, others cannot; some are mandatory, others optional.
+
+Table                        | Mandatory        | Reconfigurable
+-----------------------------+------------------+-----------------------------
+Schedule                     | no               | no
+Schedule entry points        | if Scheduling    | no
+VL Lookup                    | no               | no
+VL Policing                  | if VL Lookup     | no
+VL Forwarding                | if VL Lookup     | no
+L2 Lookup                    | no               | no
+L2 Policing                  | yes              | no
+VLAN Lookup                  | yes              | yes
+L2 Forwarding                | yes              | partially (fully on P/Q/R/S)
+MAC Config                   | yes              | partially (fully on P/Q/R/S)
+Schedule Params              | if Scheduling    | no
+Schedule Entry Points Params | if Scheduling    | no
+VL Forwarding Params         | if VL Forwarding | no
+L2 Lookup Params             | no               | partially (fully on P/Q/R/S)
+L2 Forwarding Params         | yes              | no
+Clock Sync Params            | no               | no
+AVB Params                   | no               | no
+General Params               | yes              | partially
+Retagging                    | no               | yes
+xMII Params                  | yes              | no
+SGMII                        | no               | yes
+
+Also, the configuration is mostly write-only: with very few exceptions,
+software cannot read it back from the switch.
+
+So the driver creates the static configuration at probe time, and keeps it at
+all times in memory, as a shadow for the hardware state. When required to
+change a hardware setting, the static configuration is also updated.
+If that changed setting can be transmitted to the switch through the dynamic
+reconfiguration interface, it is; otherwise the switch is reset and
+reprogrammed with the updated static configuration.
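(The "Reconfigurable" column in the table above is what drives that decision.
A toy model of the dispatch -- enum and function names invented here, not
taken from the driver:)

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical names. Runtime-reconfigurable tables can be updated via
 * the dynamic reconfiguration interface; all others force a switch
 * reset followed by an upload of the updated static configuration.
 */
enum cfg_table { VLAN_LOOKUP, RETAGGING, L2_POLICING, XMII_PARAMS };

static bool can_reconfigure_at_runtime(enum cfg_table table)
{
	switch (table) {
	case VLAN_LOOKUP:	/* "yes" in the table above */
	case RETAGGING:		/* "yes" in the table above */
		return true;
	default:		/* e.g. L2 Policing, xMII Params: "no" */
		return false;
	}
}
```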
+
+The switches do not support switch tagging in hardware, but they do support
+customizing the TPID by which VLAN traffic is identified as such. The switch
+driver leverages CONFIG_NET_DSA_TAG_8021Q by requesting that special VLANs
+(with a custom TPID of ETH_P_EDSA instead of ETH_P_8021Q) be installed on its
+ports when not in vlan_filtering mode. This does not interfere with the
+reception and transmission of real 802.1Q-tagged traffic, because the switch
+no longer parses those packets as VLAN-tagged after the TPID change.
+The standard TPID is restored when vlan_filtering is requested; in that mode,
+IP termination through the switch netdevices is no longer possible.
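(The TPID selection boils down to one line. ETH_P_8021Q (0x8100) and
ETH_P_EDSA (0xDADA) are the standard kernel ethertype values; the helper
name below is made up for illustration:)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ETH_P_8021Q	0x8100	/* standard 802.1Q C-TAG TPID */
#define ETH_P_EDSA	0xDADA	/* from include/uapi/linux/if_ether.h */

/* Hypothetical helper: with vlan_filtering off, the switch matches
 * VLANs on the custom TPID, so real 802.1Q frames pass through
 * unparsed; with it on, the standard TPID is restored.
 */
static uint16_t sja1105_tpid(bool vlan_filtering)
{
	return vlan_filtering ? ETH_P_8021Q : ETH_P_EDSA;
}
```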
+
+The switches have two programmable filters for link-local destination MACs.
+These are used to trap BPDUs and PTP traffic to the master netdevice, and are
+further used to support STP and 1588 ordinary clock/boundary clock
+functionality.
+
+Among other notable features, the switches have a PTP Hardware Clock that can
+be steered through SPI and used for timestamping on ingress and egress.
+Also, the T, Q and S devices support TTEthernet (an implementation of
+SAE AS6802 from TTTech), which is a set of Ethernet QoS enhancements similar in
+behavior to IEEE TSN. Configuring these features is currently not supported in
+the driver.
+
-- 
2.17.1



* [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for NXP SJA1105 driver
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (11 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 12/13] Documentation: networking: dsa: Add details about NXP SJA1105 driver Vladimir Oltean
@ 2019-03-24  3:23 ` Vladimir Oltean
  2019-03-26  2:24   ` Florian Fainelli
  2019-03-25 16:31 ` [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Florian Fainelli
  2019-03-26 17:30 ` Vinicius Costa Gomes
  14 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24  3:23 UTC (permalink / raw)
  To: davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
---
 .../devicetree/bindings/net/dsa/sja1105.txt   | 123 ++++++++++++++++++
 1 file changed, 123 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/dsa/sja1105.txt

diff --git a/Documentation/devicetree/bindings/net/dsa/sja1105.txt b/Documentation/devicetree/bindings/net/dsa/sja1105.txt
new file mode 100644
index 000000000000..2c82b6fc37e3
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/dsa/sja1105.txt
@@ -0,0 +1,123 @@
+NXP SJA1105 switch driver
+=========================
+
+Required properties:
+
+- compatible: Must be "nxp,sja1105". Device ID identification (one of
+  E/T/P/Q/R/S) is performed by the driver at probe time. Swapping
+  pin-compatible parts is thus possible without a DTS change.
+
+Optional properties:
+
+- sja1105,mac-mode, sja1105,phy-mode: Boolean properties that can be assigned
+  under any port node configured for MII or RMII (they have no effect for
+  RGMII). By default (unless otherwise specified) a port is configured as MAC
+  if it is driving a PHY (phy-handle is present) or as PHY if it is PHY-less
+  (fixed-link specified, presumably because it is connected to a MAC). These
+  properties are required when SJA1105 ports are at both ends of an MII/RMII
+  PHY-less setup: one end needs sja1105,mac-mode, the other sja1105,phy-mode.
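(The default role selection described above amounts to a single predicate;
the enum and helper below are invented for illustration, not driver code:)

```c
#include <assert.h>
#include <stdbool.h>

enum xmii_role { ROLE_MAC, ROLE_PHY };

/* Hypothetical helper mirroring the rule above: an MII/RMII port that
 * drives a PHY (phy-handle present) defaults to the MAC role; a
 * PHY-less port (fixed-link) defaults to the PHY role. The explicit
 * sja1105,mac-mode / sja1105,phy-mode properties override this.
 */
static enum xmii_role port_default_role(bool has_phy_handle)
{
	return has_phy_handle ? ROLE_MAC : ROLE_PHY;
}
```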
+
+See Documentation/devicetree/bindings/net/dsa/dsa.txt for the list of standard
+DSA required and optional properties.
+
+Other observations:
+
+The SJA1105 SPI interface requires a CS-to-CLK time (t2 in UM10944) of at least
+one half of t_CLK. At an SPI frequency of 1MHz, this means a minimum
+cs_sck_delay of 500ns. Ensuring that this SPI timing requirement is observed
+depends on the SPI bus master driver.
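(Numerically: min cs_sck_delay = (1 / f_SPI) / 2. A small helper making the
arithmetic explicit, as a sketch rather than driver code:)

```c
#include <assert.h>
#include <stdint.h>

/* Minimum CS-to-CLK delay (t2 in UM10944) in nanoseconds: half of one
 * SPI clock period at the given bus frequency.
 */
static uint64_t min_cs_sck_delay_ns(uint64_t spi_hz)
{
	uint64_t period_ns = 1000000000ull / spi_hz;

	return period_ns / 2;
}
```

At the 4 MHz spi-max-frequency used in the example DTS below, the minimum
works out to 125 ns, so the configured 1000 ns fsl,spi-cs-sck-delay has
comfortable margin.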
+
+Example:
+
+Ethernet switch connected via SPI to the host, CPU port wired to eth0:
+
+arch/arm/boot/dts/ls1021a-tsn.dts:
+
+/* SPI controller of the LS1021 */
+&dspi0 {
+	sja1105@1 {
+		reg = <0x1>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+		compatible = "nxp,sja1105";
+		spi-max-frequency = <4000000>;
+		fsl,spi-cs-sck-delay = <1000>;
+		fsl,spi-sck-cs-delay = <1000>;
+		ports {
+			#address-cells = <1>;
+			#size-cells = <0>;
+			port@0 {
+				/* ETH5 written on chassis */
+				label = "swp5";
+				phy-handle = <&rgmii_phy6>;
+				phy-mode = "rgmii";
+				reg = <0>;
+				/* Implicit "sja1105,mac-mode;" */
+			};
+			port@1 {
+				/* ETH2 written on chassis */
+				label = "swp2";
+				phy-handle = <&rgmii_phy3>;
+				phy-mode = "rgmii";
+				reg = <1>;
+				/* Implicit "sja1105,mac-mode;" */
+			};
+			port@2 {
+				/* ETH3 written on chassis */
+				label = "swp3";
+				phy-handle = <&rgmii_phy4>;
+				phy-mode = "rgmii";
+				reg = <2>;
+				/* Implicit "sja1105,mac-mode;" */
+			};
+			port@3 {
+				/* ETH4 written on chassis */
+				phy-handle = <&rgmii_phy5>;
+				label = "swp4";
+				phy-mode = "rgmii";
+				reg = <3>;
+				/* Implicit "sja1105,mac-mode;" */
+			};
+			port@4 {
+				/* Internal port connected to eth2 */
+				ethernet = <&enet2>;
+				phy-mode = "rgmii";
+				reg = <4>;
+				/* Implicit "sja1105,phy-mode;" */
+				fixed-link {
+					speed = <1000>;
+					full-duplex;
+				};
+			};
+		};
+	};
+};
+
+/* MDIO controller of the LS1021 */
+&mdio0 {
+	/* BCM5464 */
+	rgmii_phy3: ethernet-phy@3 {
+		reg = <0x3>;
+	};
+	rgmii_phy4: ethernet-phy@4 {
+		reg = <0x4>;
+	};
+	rgmii_phy5: ethernet-phy@5 {
+		reg = <0x5>;
+	};
+	rgmii_phy6: ethernet-phy@6 {
+		reg = <0x6>;
+	};
+};
+
+/* Ethernet master port of the LS1021 */
+&enet2 {
+	phy-connection-type = "rgmii";
+	status = "ok";
+	fixed-link {
+		speed = <1000>;
+		full-duplex;
+	};
+};
+
-- 
2.17.1



* Re: [RFC PATCH net-next 01/13] lib: Add support for generic packing operations
  2019-03-24  3:23 ` [RFC PATCH net-next 01/13] lib: Add support for generic packing operations Vladimir Oltean
@ 2019-03-24 19:02   ` Richard Cochran
  2019-03-24 20:32     ` Vladimir Oltean
  0 siblings, 1 reply; 39+ messages in thread
From: Richard Cochran @ 2019-03-24 19:02 UTC (permalink / raw)
  To: Vladimir Oltean
  Cc: davem, netdev, f.fainelli, andrew, vivien.didelot, linus.walleij

On Sun, Mar 24, 2019 at 05:23:34AM +0200, Vladimir Oltean wrote:
> This provides a unified API for accessing register bit fields
> regardless of memory layout. The basic unit of data for these API
> functions is the u64. The process of transforming a u64 from native CPU
> encoding into the peripheral's encoding is called 'pack', and
> transforming it from peripheral to native CPU encoding is 'unpack'.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
> ---
>  Documentation/packing.txt | 150 +++++++++++++++++++++++++++
>  MAINTAINERS               |   8 ++
>  include/linux/packing.h   |  49 +++++++++
>  lib/Makefile              |   2 +-
>  lib/packing.c             | 211 ++++++++++++++++++++++++++++++++++++++

For this kind of generic infrastructure, you really should CC the lkml
to get proper review.

Thanks,
Richard


* Re: [RFC PATCH net-next 01/13] lib: Add support for generic packing operations
  2019-03-24 19:02   ` Richard Cochran
@ 2019-03-24 20:32     ` Vladimir Oltean
  2019-03-26  4:13       ` Richard Cochran
  0 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-24 20:32 UTC (permalink / raw)
  To: Richard Cochran
  Cc: davem, netdev, f.fainelli, andrew, vivien.didelot, linus.walleij

On 3/24/19 9:02 PM, Richard Cochran wrote:
> On Sun, Mar 24, 2019 at 05:23:34AM +0200, Vladimir Oltean wrote:
>> This provides a unified API for accessing register bit fields
>> regardless of memory layout. The basic unit of data for these API
>> functions is the u64. The process of transforming a u64 from native CPU
>> encoding into the peripheral's encoding is called 'pack', and
>> transforming it from peripheral to native CPU encoding is 'unpack'.
>>
>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
>> ---
>>   Documentation/packing.txt | 150 +++++++++++++++++++++++++++
>>   MAINTAINERS               |   8 ++
>>   include/linux/packing.h   |  49 +++++++++
>>   lib/Makefile              |   2 +-
>>   lib/packing.c             | 211 ++++++++++++++++++++++++++++++++++++++
> 
> For this kind of generic infrastructure, you really should CC the lkml
> to get proper review.
> 
> Thanks,
> Richard
> 

Hi Richard,

I didn't want to pollute LKML with the entire driver patchset from the 
get-go, just receive some initial feedback from netdev first (hence the 
RFC).
How should I proceed? Should I resend just this patch to LKML, or a v2 
patchset with LKML copied on the lib patch?

Thanks,
-Vladimir


* Re: [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port
  2019-03-24  3:23 ` [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port Vladimir Oltean
@ 2019-03-24 20:34   ` Andrew Lunn
  2019-03-25 16:46   ` Florian Fainelli
  1 sibling, 0 replies; 39+ messages in thread
From: Andrew Lunn @ 2019-03-24 20:34 UTC (permalink / raw)
  To: Vladimir Oltean; +Cc: davem, netdev, f.fainelli, vivien.didelot, linus.walleij

On Sun, Mar 24, 2019 at 05:23:35AM +0200, Vladimir Oltean wrote:
> This allows drivers to query the VLAN setting imposed by the bridge
> driver directly from DSA, instead of keeping their own state based on
> the .port_vlan_filtering callback.

Hi Vladimir

It would be good to modify the mt7530 driver to make use of this new
member.

	Andrew


* Re: [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (12 preceding siblings ...)
  2019-03-24  3:23 ` [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for " Vladimir Oltean
@ 2019-03-25 16:31 ` Florian Fainelli
  2019-03-26 17:30 ` Vinicius Costa Gomes
  14 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-25 16:31 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/23/19 8:23 PM, Vladimir Oltean wrote:
> This patchset adds a DSA driver for the SPI-managed NXP SJA1105 switch.
> Due to the hardware's unfriendliness, most of its state needs to be
> shadowed in kernel memory by the driver. To support this and keep a
> decent amount of cleanliness in the code, a new generic API for
> converting between CPU-accessible ("unpacked") structures and
> hardware-accessible ("packed") structures is proposed and used.
> 
> Then several small modifications are done to the DSA core, like changing
> the order of two calls during initialization, or permitting driver
> access to the dp->vlan_filtering property.
> 
> These small modifications are done for the greater goal of adding
> support for 802.1Q pseudo-switch tagging. The limitations of this type
> of tagging are discussed in the commit that adds it, and in the code
> comments.
> 
> The SJA1105 driver then proceeds to extend this 8021q switch tagging
> protocol while adding its own (tag_sja1105). This is done because
> SJA1105 needs SPI intervention during transmission of link-local
> traffic, which cannot be done from the xmit handler but requires a
> deferred worker thread.
> 
> The driver is GPL-2.0 licensed. The source code files which are licensed
> as BSD-3-Clause are hardware support files and derivative of the
> userspace NXP sja1105-tool program, which is BSD-3-Clause licensed.
> 
> TODO items:
> * Add full support for the P/Q/R/S series. The patches were mostly
>   tested on a first-generation T device.
> * Add timestamping support and PTP clock manipulation.
> * Figure out what the current state of tc-taprio hw offload is, and
>   attempt to configure the switch's time-aware scheduler using that.

Overall this is a very clean and impressive piece of work, especially
given the constraints you have to work with, I will follow-up with
comments in individual patches thanks Vladimir!

> 
> Vladimir Oltean (13):
>   lib: Add support for generic packing operations
>   net: dsa: Store vlan_filtering as a property of dsa_port
>   net: dsa: Create a more convenient function for installing port VLANs
>   net: dsa: Call driver's setup callback after setting up its switchdev
>     notifier
>   net: dsa: Optional VLAN-based port separation for switches without
>     tagging
>   net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch
>   net: dsa: sja1105: Add support for FDB and MDB management
>   net: dsa: sja1105: Add support for VLAN operations
>   net: dsa: sja1105: Add support for ethtool port counters
>   net: dsa: sja1105: Add support for traffic through standalone ports
>   net: dsa: sja1105: Add support for Spanning Tree Protocol
>   Documentation: networking: dsa: Add details about NXP SJA1105 driver
>   dt-bindings: net: dsa: Add documentation for NXP SJA1105 driver
> 
>  .../devicetree/bindings/net/dsa/sja1105.txt   |  123 ++
>  Documentation/networking/dsa/sja1105.txt      |   83 +
>  Documentation/packing.txt                     |  150 ++
>  MAINTAINERS                                   |   14 +
>  drivers/net/dsa/Kconfig                       |    2 +
>  drivers/net/dsa/Makefile                      |    1 +
>  drivers/net/dsa/sja1105/Kconfig               |   17 +
>  drivers/net/dsa/sja1105/Makefile              |   10 +
>  drivers/net/dsa/sja1105/sja1105.h             |  148 ++
>  drivers/net/dsa/sja1105/sja1105_clocking.c    |  677 ++++++
>  .../net/dsa/sja1105/sja1105_dynamic_config.c  |  607 ++++++
>  .../net/dsa/sja1105/sja1105_dynamic_config.h  |   40 +
>  drivers/net/dsa/sja1105/sja1105_ethtool.c     |  420 ++++
>  drivers/net/dsa/sja1105/sja1105_main.c        | 1580 ++++++++++++++
>  drivers/net/dsa/sja1105/sja1105_spi.c         |  667 ++++++
>  .../net/dsa/sja1105/sja1105_static_config.c   | 1810 +++++++++++++++++
>  .../net/dsa/sja1105/sja1105_static_config.h   |  500 +++++
>  include/linux/dsa/sja1105.h                   |   52 +
>  include/linux/packing.h                       |   49 +
>  include/net/dsa.h                             |    6 +
>  lib/Makefile                                  |    2 +-
>  lib/packing.c                                 |  211 ++
>  net/dsa/Kconfig                               |   12 +
>  net/dsa/Makefile                              |    2 +
>  net/dsa/dsa.c                                 |    6 +
>  net/dsa/dsa2.c                                |    8 +-
>  net/dsa/dsa_priv.h                            |   15 +
>  net/dsa/port.c                                |   36 +-
>  net/dsa/slave.c                               |   16 +-
>  net/dsa/tag_8021q.c                           |  185 ++
>  net/dsa/tag_sja1105.c                         |  142 ++
>  31 files changed, 7568 insertions(+), 23 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/net/dsa/sja1105.txt
>  create mode 100644 Documentation/networking/dsa/sja1105.txt
>  create mode 100644 Documentation/packing.txt
>  create mode 100644 drivers/net/dsa/sja1105/Kconfig
>  create mode 100644 drivers/net/dsa/sja1105/Makefile
>  create mode 100644 drivers/net/dsa/sja1105/sja1105.h
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_clocking.c
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_dynamic_config.c
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_dynamic_config.h
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_ethtool.c
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_main.c
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_spi.c
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_static_config.c
>  create mode 100644 drivers/net/dsa/sja1105/sja1105_static_config.h
>  create mode 100644 include/linux/dsa/sja1105.h
>  create mode 100644 include/linux/packing.h
>  create mode 100644 lib/packing.c
>  create mode 100644 net/dsa/tag_8021q.c
>  create mode 100644 net/dsa/tag_sja1105.c
> 


-- 
Florian


* Re: [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port
  2019-03-24  3:23 ` [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port Vladimir Oltean
  2019-03-24 20:34   ` Andrew Lunn
@ 2019-03-25 16:46   ` Florian Fainelli
  1 sibling, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-25 16:46 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/23/19 8:23 PM, Vladimir Oltean wrote:
> This allows drivers to query the VLAN setting imposed by the bridge
> driver directly from DSA, instead of keeping their own state based on
> the .port_vlan_filtering callback.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>

After you address Andrew's comment:

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 04/13] net: dsa: Call driver's setup callback after setting up its switchdev notifier
  2019-03-24  3:23 ` [RFC PATCH net-next 04/13] net: dsa: Call driver's setup callback after setting up its switchdev notifier Vladimir Oltean
@ 2019-03-25 16:47   ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-25 16:47 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/23/19 8:23 PM, Vladimir Oltean wrote:
> This allows the driver to perform some manipulations of its own during
> setup, using generic code.
> One current usage scenario is for the driver to request DSA to set up
> 802.1Q based switch tagging for its ports.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs
  2019-03-24  3:23 ` [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs Vladimir Oltean
@ 2019-03-25 17:06   ` Florian Fainelli
  2019-03-27  0:31     ` Vladimir Oltean
  0 siblings, 1 reply; 39+ messages in thread
From: Florian Fainelli @ 2019-03-25 17:06 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/23/19 8:23 PM, Vladimir Oltean wrote:
> This refactors the two-phase transaction from dsa_slave_vlan_rx_add_vid
> and also makes that code available for other functions from within DSA.
> The newly exposed function either adds or deletes the specified VLAN
> entry based on a boolean argument.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>

The name of the function does not make it particularly clear that
passing false results in deleting the VLAN. Can you just wrap this under
a different function name that is only doing the two-step adding of
VLANs and keep using dsa_port_vlan_del() explicitly when you want to
remove a VLAN?

> ---
>  net/dsa/dsa_priv.h |  2 ++
>  net/dsa/port.c     | 24 ++++++++++++++++++++++++
>  net/dsa/slave.c    | 16 ++--------------
>  3 files changed, 28 insertions(+), 14 deletions(-)
> 
> diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
> index 093b7d145eb1..8048ced3708f 100644
> --- a/net/dsa/dsa_priv.h
> +++ b/net/dsa/dsa_priv.h
> @@ -164,6 +164,8 @@ int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags,
>  			      struct switchdev_trans *trans);
>  int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags,
>  			  struct switchdev_trans *trans);
> +int dsa_port_trans_vlan_apply(struct dsa_port *dp, u16 vid, u16 flags,
> +			      bool enabled);
>  int dsa_port_vlan_add(struct dsa_port *dp,
>  		      const struct switchdev_obj_port_vlan *vlan,
>  		      struct switchdev_trans *trans);
> diff --git a/net/dsa/port.c b/net/dsa/port.c
> index a86fe3be1261..9c7358f98004 100644
> --- a/net/dsa/port.c
> +++ b/net/dsa/port.c
> @@ -326,6 +326,30 @@ int dsa_port_vlan_del(struct dsa_port *dp,
>  	return 0;
>  }
>  
> +int dsa_port_trans_vlan_apply(struct dsa_port *dp, u16 vid, u16 flags,
> +			      bool enabled)
> +{
> +	struct switchdev_obj_port_vlan vlan = {
> +		.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
> +		.flags = flags,
> +		.vid_begin = vid,
> +		.vid_end = vid,
> +	};
> +	struct switchdev_trans trans;
> +	int err;
> +
> +	if (!enabled)
> +		return dsa_port_vlan_del(dp, &vlan);
> +
> +	trans.ph_prepare = true;
> +	err = dsa_port_vlan_add(dp, &vlan, &trans);
> +	if (err == -EOPNOTSUPP)
> +		return 0;
> +
> +	trans.ph_prepare = false;
> +	return dsa_port_vlan_add(dp, &vlan, &trans);
> +}
> +
>  static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
>  {
>  	struct device_node *phy_dn;
> diff --git a/net/dsa/slave.c b/net/dsa/slave.c
> index 093eef6f2599..3191ef74f6a1 100644
> --- a/net/dsa/slave.c
> +++ b/net/dsa/slave.c
> @@ -987,13 +987,6 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
>  				     u16 vid)
>  {
>  	struct dsa_port *dp = dsa_slave_to_port(dev);
> -	struct switchdev_obj_port_vlan vlan = {
> -		.vid_begin = vid,
> -		.vid_end = vid,
> -		/* This API only allows programming tagged, non-PVID VIDs */
> -		.flags = 0,
> -	};
> -	struct switchdev_trans trans;
>  	struct bridge_vlan_info info;
>  	int ret;
>  
> @@ -1010,13 +1003,8 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
>  			return -EBUSY;
>  	}
>  
> -	trans.ph_prepare = true;
> -	ret = dsa_port_vlan_add(dp, &vlan, &trans);
> -	if (ret == -EOPNOTSUPP)
> -		return 0;
> -
> -	trans.ph_prepare = false;
> -	return dsa_port_vlan_add(dp, &vlan, &trans);
> +	/* This API only allows programming tagged, non-PVID VIDs */
> +	return dsa_port_trans_vlan_apply(dp, vid, 0, true);
>  }
>  
>  static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
> 


-- 
Florian


* Re: [RFC PATCH net-next 05/13] net: dsa: Optional VLAN-based port separation for switches without tagging
  2019-03-24  3:23 ` [RFC PATCH net-next 05/13] net: dsa: Optional VLAN-based port separation for switches without tagging Vladimir Oltean
@ 2019-03-26  2:21   ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:21 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> This patch provides generic DSA code for using VLAN (802.1Q) tags for
> the same purpose as a dedicated switch tag for injection/extraction.
> It is based on the discussions and interest that has been so far
> expressed in https://www.spinics.net/lists/netdev/msg556125.html.
> 
> Unlike all other DSA-supported tagging protocols, CONFIG_NET_DSA_TAG_8021Q
> does not offer a complete solution for drivers (nor can it). Instead, it
> provides generic code that drivers can opt into calling:
> - dsa_8021q_xmit: Inserts a VLAN header with the specified contents.
>   Currently a few drivers are inserting headers that are simply 802.1Q
>   with custom fields. Can be called from another tagging protocol's xmit
>   function.
> - dsa_8021q_rcv: Retrieves the TPID and TCI from a VLAN-tagged skb.
>   Removing the VLAN header is left as a decision for the caller to make.
> - dsa_port_setup_8021q_tagging: For each user port, installs an Rx VID
>   and a Tx VID, for proper untagged traffic identification on ingress
>   and steering on egress. Also sets up the VLAN trunk on the upstream
>   (CPU or DSA) port. Drivers are intentionally left to call this
>   function explicitly, depending on the context and hardware support.
>   The expected switch behavior and VLAN semantics should not be violated
>   under any conditions. That is, after calling
>   dsa_port_setup_8021q_tagging, the hardware should still pass all
>   ingress traffic, be it tagged or untagged.
> 
> This only works when switch ports are standalone, or when they are added
> to a VLAN-unaware bridge. It will probably remain this way for the
> reasons below.
> 
> When added to a bridge that has vlan_filtering 1, the bridge core will
> install its own VLANs and reset the pvids through switchdev. For the
> bridge core, switchdev is a write-only pipe. All VLAN-related state is
> kept in the bridge core and nothing is read from DSA/switchdev or from
> the driver. So the bridge core will break this port separation because
> it will install the vlan_default_pvid into all switchdev ports.
> 
> Even if we could teach the bridge driver about switchdev preference of a
> certain vlan_default_pvid, there would still exist many other challenges.
> 
> Firstly, in the DSA rcv callback, a driver would have to perform an
> iterative reverse lookup to find the correct switch port. That is
> because the port is a bridge slave, so its Rx VID (port PVID) is subject
> to user configuration. How would we ensure that the user doesn't reset
> the pvid to a different value, or to a non-unique value within this DSA
> switch tree?
> 
> Finally, not all switch ports are equal in DSA, and that makes it
> difficult for the bridge to be completely aware of this anyway.
> The CPU port needs to transmit tagged packets (VLAN trunk) in order for
> the DSA rcv code to be able to decode source information.
> But the bridge code has absolutely no idea which switch port is the CPU
> port, if nothing else then just because there is no netdevice registered
> by DSA for the CPU port.

That is true, although we can use the bridge master device as a
substitute for targeting the CPU port (we don't have any for the DSA
ports though, so they will have to remain in a mode where they forward
all VIDs), see .

We don't support that just yet in DSA though.

> Also DSA does not currently allow the user to specify that they want the
> CPU port to do VLAN trunking anyway. VLANs are added to the CPU port
> using the same flags as they were added on the user port.
> 
> So the VLANs installed by dsa_port_setup_8021q_tagging per driver
> request should remain private from the bridge's and user's perspective,
> and should not alter the hardware's behavior with VLAN-tagged traffic.
> If the hardware cannot handle VLAN tag stacking, it should also disable
> this port separation when added as slave to a vlan_filtering bridge.
> If the hardware does support VLAN tag stacking, it should somehow back
> up its private VLAN settings when the bridge tries to override them.

This is an excellent commit message and it captures really well the
challenges involved in trying to coerce 802.1Q only switches into
offering separate DSA slave network devices. Here are a few ideas on how
this can be solved now or later, possibly with a reduction in functionality:

- if the switch internally performs double VLAN tag normalization, then
we could dedicate an outer tag per bridge device, which would allow
identical inner tag VID numbers to co-exist, yet preserve broadcast
domain isolation

- when only 802.1Q is supported (single tagging), we could somehow
enforce that all ports must be part of a VLAN aware bridge, which would
eliminate the need to have standalone DSA network devices alongside
bridged DSA network devices

Your solution clearly works and is a clever way to solve that problem.

[snip]

> +config NET_DSA_TAG_8021Q
> +	bool
> +	help

This probably needs a depends on/select VLAN_8021Q to be functional.
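For concreteness, one hypothetical way to express that (whether "depends
on" or "select" fits better depends on how the symbol ends up being used)
would be:

```kconfig
config NET_DSA_TAG_8021Q
	bool
	select VLAN_8021Q
	help
	  Helpers for switches which have no native DSA tag, but whose
	  ports can be segregated using custom 802.1Q VLANs. Selecting
	  VLAN_8021Q pulls in the VLAN code these helpers rely on.
```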

> @@ -0,0 +1,185 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
> + */
> +#include <linux/if_bridge.h>
> +#include <linux/if_vlan.h>
> +
> +#include "dsa_priv.h"
> +
> +#define DSA_TAGGING_VID_RANGE    (DSA_MAX_SWITCHES * DSA_MAX_PORTS)
> +#define DSA_TAGGING_VID_BASE     (VLAN_N_VID - 2 * DSA_TAGGING_VID_RANGE - 1)

The full VLAN_N_VID range may not be supported on all switches (e.g. the
ones that were once popular 15 years ago, like BCM5325/5365), but that
can be changed later on to incorporate per-switch VLAN range
limitations.

I would add a comment about why you are reserving twice the space, for
which you provide an explanation further down.


With the Kconfig changed:

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for NXP SJA1105 driver
  2019-03-24  3:23 ` [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for " Vladimir Oltean
@ 2019-03-26  2:24   ` Florian Fainelli
  2019-03-26 23:44     ` Vladimir Oltean
  0 siblings, 1 reply; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:24 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
> ---
>  .../devicetree/bindings/net/dsa/sja1105.txt   | 123 ++++++++++++++++++
>  1 file changed, 123 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/net/dsa/sja1105.txt
> 
> diff --git a/Documentation/devicetree/bindings/net/dsa/sja1105.txt b/Documentation/devicetree/bindings/net/dsa/sja1105.txt
> new file mode 100644
> index 000000000000..2c82b6fc37e3
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/net/dsa/sja1105.txt
> @@ -0,0 +1,123 @@
> +NXP SJA1105 switch driver
> +=========================
> +
> +Required properties:
> +
> +- compatible: Must be "nxp,sja1105". Device ID identification (one of
> +  E/T/P/Q/R/S) is performed by driver at probe time. Swapping pin-compatible
> +  parts is possible with no DTS change.
> +
> +Optional properties:
> +
> +- sja1105,mac-mode, sja1105,phy-mode: Boolean properties that can be assigned
> +  under each port node that is MII or RMII (has no effect for RGMII).  By
> +  default (unless otherwise specified) a port is configured as MAC if it is
> +  driving a PHY (phy-handle is present) or as PHY if it is PHY-less (fixed-link
> +  specified, presumably because it is connected to a MAC).  These properties
> +  are required in the case where SJA1105 ports are at both ends of an MII/RMII
> +  PHY-less setup. One end would need to have sja1105,mac-mode, while the other
> +  sja1105,phy-mode.

Typically we would be using a fixed-link with an appropriate 'phy-mode'
property to describe a MAC to MAC connection. This may be seen as
re-purposing PHY-oriented properties, though, so I am fine with that binding:
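For illustration, a hypothetical back-to-back MII setup under this
binding might look like the following (a sketch only, not taken from the
patch; port numbers and node layout are made up):

```dts
/* Switch A: drives the link as the MAC */
port@2 {
	reg = <2>;
	phy-mode = "mii";
	sja1105,mac-mode;
	fixed-link {
		speed = <100>;
		full-duplex;
	};
};

/* Switch B: same wires, acts as the PHY side */
port@3 {
	reg = <3>;
	phy-mode = "mii";
	sja1105,phy-mode;
	fixed-link {
		speed = <100>;
		full-duplex;
	};
};
```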

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports
  2019-03-24  3:23 ` [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports Vladimir Oltean
@ 2019-03-26  2:31   ` Florian Fainelli
  2019-03-26 22:03     ` Vladimir Oltean
  0 siblings, 1 reply; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:31 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> In order to support this, we are creating a makeshift switch tag out of
> a VLAN trunk configured on the CPU port. Termination on switch ports
> only works when not under a vlan_filtering bridge. We are making use of
> the generic CONFIG_NET_DSA_TAG_8021Q code and leveraging it from our own
> CONFIG_NET_DSA_TAG_SJA1105.
> 
> There are two types of traffic: regular and link-local.
> The link-local traffic received on the CPU port is trapped from the
> switch's regular forwarding decisions because it matched one of the two
> DMAC filters for management traffic.
> On transmission, the switch requires special massaging for these
> link-local frames. Due to a weird implementation of the switching IP, by
> default it drops link-local frames that originate on the CPU port. It
> needs to be told where to forward them to, through an SPI command
> ("management route") that is valid for only a single frame.
> So when we're sending link-local traffic, we need to clone skb's from
> DSA and send them in our custom xmit worker that also performs SPI access.
> 
> For that purpose, the DSA xmit handler and the xmit worker communicate
> through a per-port "skb ring" software structure, with a producer and a
> consumer index. At the moment this structure is rather fragile
> (ping-flooding to a link-local DMAC would cause most of the frames to
> get dropped). I would like to move the management traffic on a separate
> netdev queue that I can stop when the skb ring got full and hardware is
> busy processing, so that we are not forced to drop traffic.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>

I do like the idea of setting up a specific management queue later on,
although it is not clear to me how you would go about integrating it as
a network device, given the DSA slave and master devices, do you know
roughly how you would proceed?
-- 
Florian


* Re: [RFC PATCH net-next 12/13] Documentation: networking: dsa: Add details about NXP SJA1105 driver
  2019-03-24  3:23 ` [RFC PATCH net-next 12/13] Documentation: networking: dsa: Add details about NXP SJA1105 driver Vladimir Oltean
@ 2019-03-26  2:34   ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:34 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 07/13] net: dsa: sja1105: Add support for FDB and MDB management
  2019-03-24  3:23 ` [RFC PATCH net-next 07/13] net: dsa: sja1105: Add support for FDB and MDB management Vladimir Oltean
@ 2019-03-26  2:37   ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:37 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> Currently only the (more difficult) first generation E/T series is
> supported. Here the TCAM is only 4-way associative, and to know where
> the hardware will search for a FDB entry, we need to perform the same
> hash algorithm in order to install the entry in the correct bin.
> 
> On P/Q/R/S, the TCAM should be fully associative. However the SPI
> command interface is different, and because I don't have access to a
> new-generation device at the moment, support for it is TODO.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 08/13] net: dsa: sja1105: Add support for VLAN operations
  2019-03-24  3:23 ` [RFC PATCH net-next 08/13] net: dsa: sja1105: Add support for VLAN operations Vladimir Oltean
@ 2019-03-26  2:41   ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:41 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> VLAN filtering cannot be properly disabled in SJA1105. So in order to
> emulate the "no VLAN awareness" behavior (not dropping traffic that is
> tagged with a VID that isn't configured on the port), we need to hack
> another switch feature: programmable TPID (which is 0x8100 for 802.1Q).
> We are reprogramming the TPID to a bogus value (ETH_P_EDSA) which leaves
> the switch thinking that all traffic is untagged, and therefore accepts
> it.
> 
> Under a vlan_filtering bridge, the proper TPID of ETH_P_8021Q is
> installed again, and the switch starts identifying 802.1Q-tagged
> traffic.
> 
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
> ---

[snip]

> +	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
> +		struct net_device *bridge_dev;
> +
> +		bridge_dev = dsa_to_port(ds, i)->bridge_dev;
> +		if (bridge_dev &&
> +		    bridge_dev != dsa_to_port(ds, port)->bridge_dev &&
> +		    br_vlan_enabled(bridge_dev) != enabled) {
> +			netdev_err(bridge_dev,
> +				   "VLAN filtering is global to the switch!\n");
> +			return -EINVAL;
> +		}

We might want to move this to the DSA core at some point, I had some
patches lying around for doing that but got side tracked with adding
management support for b53/bcm_sf2. Not a big problem for now.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 09/13] net: dsa: sja1105: Add support for ethtool port counters
  2019-03-24  3:23 ` [RFC PATCH net-next 09/13] net: dsa: sja1105: Add support for ethtool port counters Vladimir Oltean
@ 2019-03-26  2:44   ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26  2:44 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
> ---

[snip]

> +
> +int sja1105_get_sset_count(struct dsa_switch *ds, int port, int sset)
> +{
> +	int count = ARRAY_SIZE(sja1105_port_stats);
> +	struct sja1105_private *priv = ds->priv;

There is potentially a missing 'if (sset != ETH_SS_STATS)' early return here.
With that:

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 01/13] lib: Add support for generic packing operations
  2019-03-24 20:32     ` Vladimir Oltean
@ 2019-03-26  4:13       ` Richard Cochran
  0 siblings, 0 replies; 39+ messages in thread
From: Richard Cochran @ 2019-03-26  4:13 UTC (permalink / raw)
  To: Vladimir Oltean
  Cc: davem, netdev, f.fainelli, andrew, vivien.didelot, linus.walleij

On Sun, Mar 24, 2019 at 10:32:03PM +0200, Vladimir Oltean wrote:
> I didn't want to pollute LKML with the entire driver patchset from the
> get-go, just receive some initial feedback from netdev first (hence the
> RFC).
> How should I proceed? Should I resend just this patch to LKML, or a v2
> patchset with LKML copied on the lib patch?

I would send the entire series, so that the context is clear.

(There is no danger of polluting lkml ;^)

Thanks,
Richard


* Re: [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch
  2019-03-24  3:23 ` [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch Vladimir Oltean
@ 2019-03-26 13:02   ` Florian Fainelli
  2019-03-26 17:52     ` Vladimir Oltean
  0 siblings, 1 reply; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26 13:02 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev
  Cc: andrew, vivien.didelot, linus.walleij, Georg Waibel



On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> At this moment the following is supported:
> * Link state management through phylib

Not a show stopper for now, and your implementation looks sane, though I
would recommend implementing phylink to be future proof, and especially
since you support SGMII.

I was not able to review everything deeply so:

Acked-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


* Re: [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver
  2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
                   ` (13 preceding siblings ...)
  2019-03-25 16:31 ` [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Florian Fainelli
@ 2019-03-26 17:30 ` Vinicius Costa Gomes
  2019-03-26 18:07   ` Vladimir Oltean
  14 siblings, 1 reply; 39+ messages in thread
From: Vinicius Costa Gomes @ 2019-03-26 17:30 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev
  Cc: f.fainelli, andrew, vivien.didelot, linus.walleij, Vladimir Oltean

Hi Vladimir,

Vladimir Oltean <olteanv@gmail.com> writes:

> This patchset adds a DSA driver for the SPI-managed NXP SJA1105 switch.
> Due to the hardware's unfriendliness, most of its state needs to be
> shadowed in kernel memory by the driver. To support this and keep a
> decent amount of cleanliness in the code, a new generic API for
> converting between CPU-accessible ("unpacked") structures and
> hardware-accessible ("packed") structures is proposed and used.
>
> Then several small modifications are done to the DSA core, like changing
> the order of two calls during initialization, or permitting driver
> access to the dp->vlan_filtering property.
>
> These small modifications are done for the greater goal of adding
> support for 802.1Q pseudo-switch tagging. The limitations of this type
> of tagging are discussed in the commit that adds it, and in the code
> comments.
>
> The SJA1105 driver then proceeds to extend this 8021q switch tagging
> protocol while adding its own (tag_sja1105). This is done because
> SJA1105 needs SPI intervention during transmission of link-local
> traffic, which cannot be done from the xmit handler but requires a
> deferred worker thread.
>
> The driver is GPL-2.0 licensed. The source code files which are licensed
> as BSD-3-Clause are hardware support files and derivative of the
> userspace NXP sja1105-tool program, which is BSD-3-Clause licensed.
>
> TODO items:
> * Add full support for the P/Q/R/S series. The patches were mostly
>   tested on a first-generation T device.
> * Add timestamping support and PTP clock manipulation.
> * Figure out what the current state of tc-taprio hw offload is, and
>   attempt to configure the switch's time-aware scheduler using that.

At this point, there's no support for hw offloading in taprio. I am
planning on sending an RFC suggesting an interface soon (this week, I
hope). That RFC should at least be useful to get this conversation
started.

By the way, is there a publicly available datasheet I can take a look at?


Cheers,
--
Vinicius


* Re: [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch
  2019-03-26 13:02   ` Florian Fainelli
@ 2019-03-26 17:52     ` Vladimir Oltean
  0 siblings, 0 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-26 17:52 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: davem, netdev, Andrew Lunn, vivien.didelot, Linus Walleij, Georg Waibel

Hi Florian,

I am grateful for the thorough review you made to the entire patchset.
It's nice to meet passionate people whom I can share ideas with.


On Tue, 26 Mar 2019 at 15:02, Florian Fainelli <f.fainelli@gmail.com> wrote:
> On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
> > At this moment the following is supported:
> > * Link state management through phylib
>
> Not a show stopper for now, and your implementation looks sane, though I
> would recommend implementing phylink to be future proof, and especially
> since you support SGMII.
>
> I was not able to review everything deeply so:
>
> Acked-by: Florian Fainelli <f.fainelli@gmail.com>
> --
> Florian

I don't think SGMII works at this point, with the current enablement.
When I send the first non-RFC patchset I think I'm going to completely
remove it (along with some other momentarily unused code). I'll add it
back when I can put my hands on a board where the SGMII interface is
actually routed - hopefully in a few weeks at most. Then I can also
rework the MAC adaptation to PHY portion to use phylink instead of
phylib, for "future-proofing".

Thank you!
-Vladimir


* Re: [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver
  2019-03-26 17:30 ` Vinicius Costa Gomes
@ 2019-03-26 18:07   ` Vladimir Oltean
  0 siblings, 0 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-26 18:07 UTC (permalink / raw)
  To: Vinicius Costa Gomes
  Cc: davem, netdev, Florian Fainelli, Andrew Lunn, vivien.didelot,
	Linus Walleij

On Tue, 26 Mar 2019 at 19:30, Vinicius Costa Gomes
<vinicius.gomes@intel.com> wrote:
>
> Hi Vladimir,
>
> Vladimir Oltean <olteanv@gmail.com> writes:
>
> > This patchset adds a DSA driver for the SPI-managed NXP SJA1105 switch.
> > Due to the hardware's unfriendliness, most of its state needs to be
> > shadowed in kernel memory by the driver. To support this and keep a
> > decent amount of cleanliness in the code, a new generic API for
> > converting between CPU-accessible ("unpacked") structures and
> > hardware-accessible ("packed") structures is proposed and used.
> >
> > Then several small modifications are done to the DSA core, like changing
> > the order of two calls during initialization, or permitting driver
> > access to the dp->vlan_filtering property.
> >
> > These small modifications are done for the greater goal of adding
> > support for 802.1Q pseudo-switch tagging. The limitations of this type
> > of tagging are discussed in the commit that adds it, and in the code
> > comments.
> >
> > The SJA1105 driver then proceeds to extend this 8021q switch tagging
> > protocol while adding its own (tag_sja1105). This is done because
> > SJA1105 needs SPI intervention during transmission of link-local
> > traffic, which cannot be done from the xmit handler but requires a
> > deferred worker thread.
> >
> > The driver is GPL-2.0 licensed. The source code files which are licensed
> > as BSD-3-Clause are hardware support files and derivative of the
> > userspace NXP sja1105-tool program, which is BSD-3-Clause licensed.
> >
> > TODO items:
> > * Add full support for the P/Q/R/S series. The patches were mostly
> >   tested on a first-generation T device.
> > * Add timestamping support and PTP clock manipulation.
> > * Figure out what the current state of tc-taprio hw offload is, and
> >   attempt to configure the switch's time-aware scheduler using that.
>
> At this point, there's no support for hw offloading in taprio. I am
> planning on sending an RFC suggesting an interface soon (this week, I
> hope). That RFC should at least be useful to get this conversation
> started.
>
> By the way, is there a publicly available datasheet I can take a look at?
>
>
> Cheers,
> --
> Vinicius


Hi Vinicius,

I knew you'd appear at some point since I mentioned tc-taprio offload :)
The documentation for the 1st generation SJA1105 switches is at
https://www.nxp.com/docs/en/user-guide/UM10944.pdf (for the 2nd
generation it is not publicly available, but for the most part it's
the same IP).
It's not a perfect match with 802.1Qbv and there are some (perhaps
workable) limitations, but it does offer the concept of scheduled
transmission (8 gated traffic classes per port) based on a PTP clock.
Do send your RFC and feel free to ignore the SJA1105 implementation
for now, since it would probably only cause endless confusion anyway.
:)
I myself am a bit conflicted about how traffic would be scheduled
in-band with the hardware Qbv window (PHC time domain) but that is
more a question about software architecture rather than hardware
details, so I'm really eager to see your proposal.

Thanks!
-Vladimir


* Re: [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports
  2019-03-26  2:31   ` Florian Fainelli
@ 2019-03-26 22:03     ` Vladimir Oltean
  2019-03-26 22:13       ` Florian Fainelli
  0 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-26 22:03 UTC (permalink / raw)
  To: Florian Fainelli, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/26/19 4:31 AM, Florian Fainelli wrote:
> 
> 
> On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
>> In order to support this, we are creating a makeshift switch tag out of
>> a VLAN trunk configured on the CPU port. Termination on switch ports
>> only works when not under a vlan_filtering bridge. We are making use of
>> the generic CONFIG_NET_DSA_TAG_8021Q code and leveraging it from our own
>> CONFIG_NET_DSA_TAG_SJA1105.
>>
>> There are two types of traffic: regular and link-local.
>> The link-local traffic received on the CPU port is trapped from the
>> switch's regular forwarding decisions because it matched one of the two
>> DMAC filters for management traffic.
>> On transmission, the switch requires special massaging for these
>> link-local frames. Due to a weird implementation of the switching IP, by
>> default it drops link-local frames that originate on the CPU port. It
>> needs to be told where to forward them to, through an SPI command
>> ("management route") that is valid for only a single frame.
>> So when we're sending link-local traffic, we need to clone skb's from
>> DSA and send them in our custom xmit worker that also performs SPI access.
>>
>> For that purpose, the DSA xmit handler and the xmit worker communicate
>> through a per-port "skb ring" software structure, with a producer and a
>> consumer index. At the moment this structure is rather fragile
>> (ping-flooding to a link-local DMAC would cause most of the frames to
>> get dropped). I would like to move the management traffic on a separate
>> netdev queue that I can stop when the skb ring got full and hardware is
>> busy processing, so that we are not forced to drop traffic.
>>
>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
> 
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> 
> I do like the idea of setting up a specific management queue later on,
> although it is not clear to me how you would go about integrating it as
> a network device, given the DSA slave and master devices, do you know
> roughly how you would proceed?
> 

Actually I was thinking about leveraging the multiqueue support that you 
added in 55199df6d2af ("net: dsa: Allow switch drivers to indicate 
number of TX queues") and expose the slave netdev .ndo_select_queue 
callback towards DSA ports. There I would return queue #0 if 
sja1105_is_link_local(skb), and queue #1 otherwise.
Are there any complications that I'm missing?

Thanks,
-Vladimir


* Re: [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports
  2019-03-26 22:03     ` Vladimir Oltean
@ 2019-03-26 22:13       ` Florian Fainelli
  2019-03-26 22:38         ` Vladimir Oltean
  0 siblings, 1 reply; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26 22:13 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/26/19 3:03 PM, Vladimir Oltean wrote:
> On 3/26/19 4:31 AM, Florian Fainelli wrote:
>>
>>
>> On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
>>> In order to support this, we are creating a makeshift switch tag out of
>>> a VLAN trunk configured on the CPU port. Termination on switch ports
>>> only works when not under a vlan_filtering bridge. We are making use of
>>> the generic CONFIG_NET_DSA_TAG_8021Q code and leveraging it from our own
>>> CONFIG_NET_DSA_TAG_SJA1105.
>>>
>>> There are two types of traffic: regular and link-local.
>>> The link-local traffic received on the CPU port is trapped from the
>>> switch's regular forwarding decisions because it matched one of the two
>>> DMAC filters for management traffic.
>>> On transmission, the switch requires special massaging for these
>>> link-local frames. Due to a weird implementation of the switching IP, by
>>> default it drops link-local frames that originate on the CPU port. It
>>> needs to be told where to forward them to, through an SPI command
>>> ("management route") that is valid for only a single frame.
>>> So when we're sending link-local traffic, we need to clone skb's from
>>> DSA and send them in our custom xmit worker that also performs SPI
>>> access.
>>>
>>> For that purpose, the DSA xmit handler and the xmit worker communicate
>>> through a per-port "skb ring" software structure, with a producer and a
>>> consumer index. At the moment this structure is rather fragile
>>> (ping-flooding to a link-local DMAC would cause most of the frames to
>>> get dropped). I would like to move the management traffic on a separate
>>> netdev queue that I can stop when the skb ring got full and hardware is
>>> busy processing, so that we are not forced to drop traffic.
>>>
>>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
>>
>> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
>>
>> I do like the idea of setting up a specific management queue later on,
>> although it is not clear to me how you would go about integrating it as
>> a network device, given the DSA slave and master devices, do you know
>> roughly how you would proceed?
>>
> 
> Actually I was thinking about leveraging the multiqueue support that you
> added in 55199df6d2af ("net: dsa: Allow switch drivers to indicate
> number of TX queues") and expose the slave netdev .ndo_select_queue
> callback towards DSA ports. There I would return queue #0 if
> sja1105_is_link_local(skb), and queue #1 otherwise.
> Are there any complications that I'm missing?

So that queue could be used to steer management traffic, but it would
still attempt to perform a dev_queue_xmit() using the master DSA network
device unless you somehow change that and/or parent that queue to a
different network device that the sja1105 switch driver creates (which
is doable).
-- 
Florian


* Re: [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports
  2019-03-26 22:13       ` Florian Fainelli
@ 2019-03-26 22:38         ` Vladimir Oltean
  2019-03-26 22:45           ` Florian Fainelli
  0 siblings, 1 reply; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-26 22:38 UTC (permalink / raw)
  To: Florian Fainelli, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/27/19 12:13 AM, Florian Fainelli wrote:
> On 3/26/19 3:03 PM, Vladimir Oltean wrote:
>> On 3/26/19 4:31 AM, Florian Fainelli wrote:
>>>
>>>
>>> On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
>>>> In order to support this, we are creating a makeshift switch tag out of
>>>> a VLAN trunk configured on the CPU port. Termination on switch ports
>>>> only works when not under a vlan_filtering bridge. We are making use of
>>>> the generic CONFIG_NET_DSA_TAG_8021Q code and leveraging it from our own
>>>> CONFIG_NET_DSA_TAG_SJA1105.
>>>>
>>>> There are two types of traffic: regular and link-local.
>>>> The link-local traffic received on the CPU port is trapped from the
>>>> switch's regular forwarding decisions because it matched one of the two
>>>> DMAC filters for management traffic.
>>>> On transmission, the switch requires special massaging for these
>>>> link-local frames. Due to a weird implementation of the switching IP, by
>>>> default it drops link-local frames that originate on the CPU port. It
>>>> needs to be told where to forward them to, through an SPI command
>>>> ("management route") that is valid for only a single frame.
>>>> So when we're sending link-local traffic, we need to clone skb's from
>>>> DSA and send them in our custom xmit worker that also performs SPI
>>>> access.
>>>>
>>>> For that purpose, the DSA xmit handler and the xmit worker communicate
>>>> through a per-port "skb ring" software structure, with a producer and a
>>>> consumer index. At the moment this structure is rather fragile
>>>> (ping-flooding to a link-local DMAC would cause most of the frames to
>>>> get dropped). I would like to move the management traffic on a separate
>>>> netdev queue that I can stop when the skb ring got full and hardware is
>>>> busy processing, so that we are not forced to drop traffic.
>>>>
>>>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
>>>
>>> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
>>>
>>> I do like the idea of setting up a specific management queue later on,
>>> although it is not clear to me how you would go about integrating it as
>>> a network device, given the DSA slave and master devices, do you know
>>> roughly how you would proceed?
>>>
>>
>> Actually I was thinking about leveraging the multiqueue support that you
>> added in 55199df6d2af ("net: dsa: Allow switch drivers to indicate
>> number of TX queues") and expose the slave netdev .ndo_select_queue
>> callback towards DSA ports. There I would return queue #0 if
>> sja1105_is_link_local(skb), and queue #1 otherwise.
>> Are there any complications that I'm missing?
> 
> So that queue could be used to steer management traffic, but it would
> still attempt to perform a dev_queue_xmit() using the master DSA network
> device unless you somehow change that and/or parent that queue to a
> different network device that the sja1105 switch driver creates (which
> is doable).
> 

But the problem I'm trying to solve with the management queue is not 
congestion on the master port or inside the switch, but a problem that I 
myself have created by putting some skb's in a ring that is finite (and 
small) in size: the DSA xmit racing with my xmit worker.
Congestion management on the switch is a much broader issue that I don't 
yet know how to handle. The MACs don't appear to generate pause frames, 
and the pause frames that they receive are trapped to the CPU as 
link-local traffic (DMAC 01-80-C2-00-00-01) where they are simply 
consumed by the master's MAC.

-Vladimir


* Re: [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports
  2019-03-26 22:38         ` Vladimir Oltean
@ 2019-03-26 22:45           ` Florian Fainelli
  0 siblings, 0 replies; 39+ messages in thread
From: Florian Fainelli @ 2019-03-26 22:45 UTC (permalink / raw)
  To: Vladimir Oltean, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/26/19 3:38 PM, Vladimir Oltean wrote:
> On 3/27/19 12:13 AM, Florian Fainelli wrote:
>> On 3/26/19 3:03 PM, Vladimir Oltean wrote:
>>> On 3/26/19 4:31 AM, Florian Fainelli wrote:
>>>>
>>>>
>>>> On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
>>>>> In order to support this, we are creating a makeshift switch tag
>>>>> out of
>>>>> a VLAN trunk configured on the CPU port. Termination on switch ports
>>>>> only works when not under a vlan_filtering bridge. We are making
>>>>> use of
>>>>> the generic CONFIG_NET_DSA_TAG_8021Q code and leveraging it from
>>>>> our own
>>>>> CONFIG_NET_DSA_TAG_SJA1105.
>>>>>
>>>>> There are two types of traffic: regular and link-local.
>>>>> The link-local traffic received on the CPU port is trapped from the
>>>>> switch's regular forwarding decisions because it matched one of the
>>>>> two
>>>>> DMAC filters for management traffic.
>>>>> On transmission, the switch requires special massaging for these
>>>>> link-local frames. Due to a weird implementation of the switching
>>>>> IP, by
>>>>> default it drops link-local frames that originate on the CPU port. It
>>>>> needs to be told where to forward them to, through an SPI command
>>>>> ("management route") that is valid for only a single frame.
>>>>> So when we're sending link-local traffic, we need to clone skb's from
>>>>> DSA and send them in our custom xmit worker that also performs SPI
>>>>> access.
>>>>>
>>>>> For that purpose, the DSA xmit handler and the xmit worker communicate
>>>>> through a per-port "skb ring" software structure, with a producer
>>>>> and a
>>>>> consumer index. At the moment this structure is rather fragile
>>>>> (ping-flooding to a link-local DMAC would cause most of the frames to
>>>>> get dropped). I would like to move the management traffic on a
>>>>> separate
>>>>> netdev queue that I can stop when the skb ring got full and
>>>>> hardware is
>>>>> busy processing, so that we are not forced to drop traffic.
>>>>>
>>>>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
>>>>
>>>> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
>>>>
>>>> I do like the idea of setting up specific management queue later on,
>>>> although it is not clear to me how you would go about integrating it as
>>>> a network device, given the DSA slave and master devices, do you know
>>>> roughly how you would proceed?
>>>>
>>>
>>> Actually I was thinking about leveraging the multiqueue support that you
>>> added in 55199df6d2af ("net: dsa: Allow switch drivers to indicate
>>> number of TX queues") and expose the slave netdev .ndo_select_queue
>>> callback towards DSA ports. There I would return queue #0 if
>>> sja1105_is_link_local(skb), and queue #1 otherwise.
>>> Are there any complications that I'm missing?
>>
>> So that queue could be used to steer management traffic, but it would
>> still attempt to perform a dev_queue_xmit() using the master DSA network
>> device unless you somehow change that and/or parent that queue to a
>> different network device that the sja1105 switch driver creates (which
>> is doable).
>>
> 
> But the problem I'm trying to solve with the management queue is not
> congestion on the master port or inside the switch, but a problem that I
> myself have created by putting some skb's in a ring that is finite (and
> small) in size: the DSA xmit racing with my xmit worker.

Oh, I understood that part, which is why I was wondering whether it even
makes sense to use a dedicated queue and the flow control that comes
with it, given that this is already quite an ad-hoc solution and what
you proposed seems to do the job all right.
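
The steering rule discussed in the quoted message (queue #0 for 
link-local frames, queue #1 otherwise) boils down to a DMAC prefix 
match. Here is a simplified userspace sketch; the real code would be an 
.ndo_select_queue callback operating on an skb, and the switch's actual 
two DMAC filters may cover different ranges than assumed here:

```c
#include <stdint.h>
#include <string.h>

#define QUEUE_MGMT    0  /* link-local/management traffic */
#define QUEUE_REGULAR 1  /* everything else */

/* Match the IEEE 802.1D reserved range 01-80-C2-00-00-00 through
 * 01-80-C2-00-00-FF mentioned in the thread (pause frames use
 * 01-80-C2-00-00-01). The switch traps these to the CPU, and on xmit
 * they need a one-shot SPI "management route". */
static int dmac_is_link_local(const uint8_t dmac[6])
{
	static const uint8_t base[6] = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x00 };

	/* Compare only the first 5 bytes: a 40-bit prefix match */
	return memcmp(dmac, base, 5) == 0;
}

static uint16_t select_queue(const uint8_t *frame)
{
	/* The DMAC is the first 6 bytes of an Ethernet frame */
	return dmac_is_link_local(frame) ? QUEUE_MGMT : QUEUE_REGULAR;
}
```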

> Congestion management on the switch is a much broader issue that I don't
> yet know how to handle. The MACs don't appear to generate pause frames,
> and the pause frames that they receive are trapped to the CPU as
> link-local traffic (DMAC 01-80-C2-00-00-01) where they are simply
> consumed by the master's MAC.

Woah, okay :) I suppose this can be made to work if you accept loading
your host CPU a little bit and have it perform flow control instead of
the switch itself. There is no way to have the switch's internal
buffering automatically deal with pause frames?
-- 
Florian


* Re: [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for NXP SJA1105 driver
  2019-03-26  2:24   ` Florian Fainelli
@ 2019-03-26 23:44     ` Vladimir Oltean
  0 siblings, 0 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-26 23:44 UTC (permalink / raw)
  To: Florian Fainelli, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/26/19 4:24 AM, Florian Fainelli wrote:
> 
> 
> On 3/23/2019 8:23 PM, Vladimir Oltean wrote:
>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
>> ---
>>   .../devicetree/bindings/net/dsa/sja1105.txt   | 123 ++++++++++++++++++
>>   1 file changed, 123 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/net/dsa/sja1105.txt
>>
>> diff --git a/Documentation/devicetree/bindings/net/dsa/sja1105.txt b/Documentation/devicetree/bindings/net/dsa/sja1105.txt
>> new file mode 100644
>> index 000000000000..2c82b6fc37e3
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/net/dsa/sja1105.txt
>> @@ -0,0 +1,123 @@
>> +NXP SJA1105 switch driver
>> +=========================
>> +
>> +Required properties:
>> +
>> +- compatible: Must be "nxp,sja1105". Device ID identification (one of
>> +  E/T/P/Q/R/S) is performed by the driver at probe time. Swapping
>> +  parts is possible with no DTS change.
>> +
>> +Optional properties:
>> +
>> +- sja1105,mac-mode, sja1105,phy-mode: Boolean properties that can be assigned
>> +  under each port node that is MII or RMII (has no effect for RGMII).  By
>> +  default (unless otherwise specified) a port is configured as MAC if it is
>> +  driving a PHY (phy-handle is present) or as PHY if it is PHY-less (fixed-link
>> +  specified, presumably because it is connected to a MAC).  These properties
>> +  are required in the case where SJA1105 ports are at both ends of an MII/RMII
>> +  PHY-less setup. One end would need to have sja1105,mac-mode, while the other
>> +  sja1105,phy-mode.
> 
> Typically we would be using a fixed-link with an appropriate 'phy-mode'
> property to describe a MAC to MAC connection, this may be seen as a
> re-purposing PHY-oriented properties though, so I am fine with that binding:
> 
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> 

Hi Florian,

I don't feel amazing about my solution either, but I'm not sure I 
understand what you're proposing. Something like phy-mode = "rmii-mac" or 
"rmii-phy" or "mii-mac" or "mii-phy"?
Would that require an update of the phy_modes() function and strings?
I think the last time an interface type was split into further 
subdivisions (RGMII with all its internal delay flavors) it didn't go 
too well - with lots of bugs introduced simply because drivers failed to 
grok the newly introduced subtypes as still being RGMII.

Thank you,
-Vladimir


* Re: [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs
  2019-03-25 17:06   ` Florian Fainelli
@ 2019-03-27  0:31     ` Vladimir Oltean
  0 siblings, 0 replies; 39+ messages in thread
From: Vladimir Oltean @ 2019-03-27  0:31 UTC (permalink / raw)
  To: Florian Fainelli, davem, netdev; +Cc: andrew, vivien.didelot, linus.walleij

On 3/25/19 7:06 PM, Florian Fainelli wrote:
> On 3/23/19 8:23 PM, Vladimir Oltean wrote:
>> This refactors the two-phase transaction from dsa_slave_vlan_rx_add_vid
>> and also makes that code available for other functions from within DSA.
>> The newly exposed function either adds or deletes the specified VLAN
>> entry based on a boolean argument.
>>
>> Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
> 
> The name of the function does not make it particularly clear that
> passing false results in deleting the VLAN. Can you just wrap this under
> a different function name that is only doing the two-step adding of
> VLANs and keep using dsa_port_vlan_del() explicitly when you want to
> remove a VLAN?
> 

The reason I made it this way was mainly to turn the 
switchdev_obj_port_vlan struct into an implementation detail. That way I 
wouldn't need to keep and continuously modify this struct across the 3 
calls to the function from within the 05/13 patch ("net: dsa: 
Optional VLAN-based port separation for switches without tagging"). I 
did try to make use of the .vid_begin/.vid_end feature and batch some of 
the function calls, but doing so would restrict the possible values that 
the rx_vid and tx_vid functions may return - dsa_port_setup_8021q_tagging() 
would need to ensure that they are contiguous prior to batching them 
into a single vlan object. By the way, I don't think anybody is making 
good use of this feature; as of now it just creates useless boilerplate.
The other thing is that if I were to wrap around dsa_port_vlan_add() and 
get rid of the switchdev_obj_port_vlan and just pass the vid as u16, I'd 
have to do the same wrapping for the dsa_port_vlan_del() function too. 
Plus I'd have to keep 3 'if' conditions just to decide whether to call 
*_add or *_del.

> diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
> index 221299b264f5..1b11b245e2d6 100644
> --- a/net/dsa/tag_8021q.c
> +++ b/net/dsa/tag_8021q.c
> @@ -120,8 +120,10 @@ int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int port, bool enabled)
>  			/* The Rx VID is a regular VLAN on all others */
>  			flags = BRIDGE_VLAN_INFO_UNTAGGED;
>  
> -		err = dsa_port_trans_vlan_apply(other_dp, rx_vid, flags,
> -						enabled);
> +		if (enabled)
> +			err = __dsa_port_vlan_add(other_dp, rx_vid, flags);
> +		else
> +			err = __dsa_port_vlan_del(other_dp, rx_vid);
>  		if (err) {
>  			dev_err(ds->dev, "Failed to apply Rx VID %d to port %d: %d\n",
>  				rx_vid, port, err);
> @@ -129,14 +131,20 @@ int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int port, bool enabled)
>  		}
>  	}
>  	/* Finally apply the Tx VID on this port and on the CPU port */
> -	err = dsa_port_trans_vlan_apply(dp, tx_vid, BRIDGE_VLAN_INFO_UNTAGGED,
> -					enabled);
> +	if (enabled)
> +		err = __dsa_port_vlan_add(dp, tx_vid,
> +					  BRIDGE_VLAN_INFO_UNTAGGED);
> +	else
> +		err = __dsa_port_vlan_del(dp, tx_vid);
>  	if (err) {
>  		dev_err(ds->dev, "Failed to apply Tx VID %d on port %d: %d\n",
>  			tx_vid, port, err);
>  		return err;
>  	}
> -	err = dsa_port_trans_vlan_apply(upstream_dp, tx_vid, 0, enabled);
> +	if (enabled)
> +		err = __dsa_port_vlan_add(upstream_dp, tx_vid, 0);
> +	else
> +		err = __dsa_port_vlan_del(upstream_dp, tx_vid);
>  	if (err) {
>  		dev_err(ds->dev, "Failed to apply Tx VID %d on port %d: %d\n",
>  			tx_vid, upstream, err);

How does something like the above look?

Thanks,
-Vladimir

>> ---
>>   net/dsa/dsa_priv.h |  2 ++
>>   net/dsa/port.c     | 24 ++++++++++++++++++++++++
>>   net/dsa/slave.c    | 16 ++--------------
>>   3 files changed, 28 insertions(+), 14 deletions(-)
>>
>> diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
>> index 093b7d145eb1..8048ced3708f 100644
>> --- a/net/dsa/dsa_priv.h
>> +++ b/net/dsa/dsa_priv.h
>> @@ -164,6 +164,8 @@ int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags,
>>   			      struct switchdev_trans *trans);
>>   int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags,
>>   			  struct switchdev_trans *trans);
>> +int dsa_port_trans_vlan_apply(struct dsa_port *dp, u16 vid, u16 flags,
>> +			      bool enabled);
>>   int dsa_port_vlan_add(struct dsa_port *dp,
>>   		      const struct switchdev_obj_port_vlan *vlan,
>>   		      struct switchdev_trans *trans);
>> diff --git a/net/dsa/port.c b/net/dsa/port.c
>> index a86fe3be1261..9c7358f98004 100644
>> --- a/net/dsa/port.c
>> +++ b/net/dsa/port.c
>> @@ -326,6 +326,30 @@ int dsa_port_vlan_del(struct dsa_port *dp,
>>   	return 0;
>>   }
>>   
>> +int dsa_port_trans_vlan_apply(struct dsa_port *dp, u16 vid, u16 flags,
>> +			      bool enabled)
>> +{
>> +	struct switchdev_obj_port_vlan vlan = {
>> +		.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
>> +		.flags = flags,
>> +		.vid_begin = vid,
>> +		.vid_end = vid,
>> +	};
>> +	struct switchdev_trans trans;
>> +	int err;
>> +
>> +	if (!enabled)
>> +		return dsa_port_vlan_del(dp, &vlan);
>> +
>> +	trans.ph_prepare = true;
>> +	err = dsa_port_vlan_add(dp, &vlan, &trans);
>> +	if (err == -EOPNOTSUPP)
>> +		return 0;
>> +
>> +	trans.ph_prepare = false;
>> +	return dsa_port_vlan_add(dp, &vlan, &trans);
>> +}
>> +
>>   static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
>>   {
>>   	struct device_node *phy_dn;
>> diff --git a/net/dsa/slave.c b/net/dsa/slave.c
>> index 093eef6f2599..3191ef74f6a1 100644
>> --- a/net/dsa/slave.c
>> +++ b/net/dsa/slave.c
>> @@ -987,13 +987,6 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
>>   				     u16 vid)
>>   {
>>   	struct dsa_port *dp = dsa_slave_to_port(dev);
>> -	struct switchdev_obj_port_vlan vlan = {
>> -		.vid_begin = vid,
>> -		.vid_end = vid,
>> -		/* This API only allows programming tagged, non-PVID VIDs */
>> -		.flags = 0,
>> -	};
>> -	struct switchdev_trans trans;
>>   	struct bridge_vlan_info info;
>>   	int ret;
>>   
>> @@ -1010,13 +1003,8 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
>>   			return -EBUSY;
>>   	}
>>   
>> -	trans.ph_prepare = true;
>> -	ret = dsa_port_vlan_add(dp, &vlan, &trans);
>> -	if (ret == -EOPNOTSUPP)
>> -		return 0;
>> -
>> -	trans.ph_prepare = false;
>> -	return dsa_port_vlan_add(dp, &vlan, &trans);
>> +	/* This API only allows programming tagged, non-PVID VIDs */
>> +	return dsa_port_trans_vlan_apply(dp, vid, 0, true);
>>   }
>>   
>>   static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
>>
> 
> 



end of thread, other threads:[~2019-03-27  0:31 UTC | newest]

Thread overview: 39+ messages
2019-03-24  3:23 [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Vladimir Oltean
2019-03-24  3:23 ` [RFC PATCH net-next 01/13] lib: Add support for generic packing operations Vladimir Oltean
2019-03-24 19:02   ` Richard Cochran
2019-03-24 20:32     ` Vladimir Oltean
2019-03-26  4:13       ` Richard Cochran
2019-03-24  3:23 ` [RFC PATCH net-next 02/13] net: dsa: Store vlan_filtering as a property of dsa_port Vladimir Oltean
2019-03-24 20:34   ` Andrew Lunn
2019-03-25 16:46   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 03/13] net: dsa: Create a more convenient function for installing port VLANs Vladimir Oltean
2019-03-25 17:06   ` Florian Fainelli
2019-03-27  0:31     ` Vladimir Oltean
2019-03-24  3:23 ` [RFC PATCH net-next 04/13] net: dsa: Call driver's setup callback after setting up its switchdev notifier Vladimir Oltean
2019-03-25 16:47   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 05/13] net: dsa: Optional VLAN-based port separation for switches without tagging Vladimir Oltean
2019-03-26  2:21   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 06/13] net: dsa: Introduce driver for NXP SJA1105 5-port L2 switch Vladimir Oltean
2019-03-26 13:02   ` Florian Fainelli
2019-03-26 17:52     ` Vladimir Oltean
2019-03-24  3:23 ` [RFC PATCH net-next 07/13] net: dsa: sja1105: Add support for FDB and MDB management Vladimir Oltean
2019-03-26  2:37   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 08/13] net: dsa: sja1105: Add support for VLAN operations Vladimir Oltean
2019-03-26  2:41   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 09/13] net: dsa: sja1105: Add support for ethtool port counters Vladimir Oltean
2019-03-26  2:44   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 10/13] net: dsa: sja1105: Add support for traffic through standalone ports Vladimir Oltean
2019-03-26  2:31   ` Florian Fainelli
2019-03-26 22:03     ` Vladimir Oltean
2019-03-26 22:13       ` Florian Fainelli
2019-03-26 22:38         ` Vladimir Oltean
2019-03-26 22:45           ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 11/13] net: dsa: sja1105: Add support for Spanning Tree Protocol Vladimir Oltean
2019-03-24  3:23 ` [RFC PATCH net-next 12/13] Documentation: networking: dsa: Add details about NXP SJA1105 driver Vladimir Oltean
2019-03-26  2:34   ` Florian Fainelli
2019-03-24  3:23 ` [RFC PATCH net-next 13/13] dt-bindings: net: dsa: Add documentation for " Vladimir Oltean
2019-03-26  2:24   ` Florian Fainelli
2019-03-26 23:44     ` Vladimir Oltean
2019-03-25 16:31 ` [RFC PATCH net-next 00/13] NXP SJA1105 DSA driver Florian Fainelli
2019-03-26 17:30 ` Vinicius Costa Gomes
2019-03-26 18:07   ` Vladimir Oltean
