* [PATCH 0/3] Intel IXP4xx network drivers
@ 2007-05-06 23:46 Krzysztof Halasa
From: Krzysztof Halasa @ 2007-05-06 23:46 UTC (permalink / raw)
To: Jeff Garzik, Russell King; +Cc: lkml, netdev, linux-arm-kernel
Hi,
The next 3 patches:
[1/3] changes "depends on HDLC" to "select HDLC" for WAN/generic HDLC
network drivers
[2/3] adds "fuse" functions to help determine installed IXP4xx CPU
components and to reset/disable/enable them.
[3/3] adds IXP4xx drivers for: hardware queue manager, NPE (on-chip
network coprocessors), built-in Ethernet ports, built-in HSS
(sync serial) ports (currently only non-channelized HDLC).
Patch [3/3] requires patches [1/3] and [2/3].
The code is based on publicly available information:
- Intel IXP4xx Developer's Manual and other e-papers
- Intel IXP400 Access Library Software (BSD license)
- previous work by Christian Hohnstaedt <chohnstaedt@innominate.com>
While I have decided to rewrite most things from scratch, his patch
was a great help in understanding what's going on within the IXP400
code (I took some fragments of his code as well).
Thanks, Christian.
The code has been tested with an IXP425 CPU.
[1/3]
drivers/net/wan/Kconfig | 31 +-
[2/3]
include/asm-arm/arch-ixp4xx/ixp4xx-regs.h | 47 ++
[3/3]
arch/arm/mach-ixp4xx/ixdp425-setup.c | 27 +
drivers/net/Kconfig | 34 +
drivers/net/Makefile | 1
drivers/net/ixp4xx/Makefile | 4
drivers/net/ixp4xx/ixp4xx_eth.c | 1002 +++++++++++++++++++++++++++
drivers/net/ixp4xx/ixp4xx_hss.c | 1048 ++++++++++++++++++++++++++++
drivers/net/ixp4xx/ixp4xx_npe.c | 731 +++++++++++++++++++++
drivers/net/ixp4xx/ixp4xx_qmgr.c | 273 +++++++
drivers/net/ixp4xx/npe.h | 41 +
drivers/net/ixp4xx/qmgr.h | 124 +++
drivers/net/wan/Kconfig | 10
include/asm-arm/arch-ixp4xx/platform.h | 19
--
Krzysztof Halasa
* [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
From: Krzysztof Halasa @ 2007-05-07 0:06 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Russell King, lkml, netdev, linux-arm-kernel
Allow enabling WAN drivers without selecting generic HDLC support first;
HDLC will be selected automatically.
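The same transformation is applied to every driver entry in the diff
below; for a hypothetical driver FOO it looks like this (sketch only,
FOO is not a real config symbol):

```kconfig
# Before: the entry is invisible until the user has already enabled HDLC.
config FOO
	tristate "FOO support"
	depends on HDLC && PCI

# After: the entry is visible whenever PCI is, and pulls generic HDLC in.
config FOO
	tristate "FOO support"
	depends on PCI
	select HDLC
```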
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
index 8897f53..3a2fe82 100644
--- a/drivers/net/wan/Kconfig
+++ b/drivers/net/wan/Kconfig
@@ -171,7 +171,8 @@ comment "X.25/LAPB support is disabled"
config PCI200SYN
tristate "Goramo PCI200SYN support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
help
Driver for PCI200SYN cards by Goramo sp. j.
@@ -185,7 +186,8 @@ config PCI200SYN
config WANXL
tristate "SBE Inc. wanXL support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
help
Driver for wanXL PCI cards by SBE Inc.
@@ -208,7 +210,8 @@ config WANXL_BUILD_FIRMWARE
config PC300
tristate "Cyclades-PC300 support (RS-232/V.35, X.21, T1/E1 boards)"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
---help---
Driver for the Cyclades-PC300 synchronous communication boards.
@@ -225,19 +228,21 @@ config PC300
config PC300_MLPPP
bool "Cyclades-PC300 MLPPP support"
- depends on PC300 && PPP_MULTILINK && PPP_SYNC_TTY && HDLC_PPP
+ depends on PC300 && PPP_MULTILINK && PPP_SYNC_TTY
+ select HDLC_PPP
help
Multilink PPP over the PC300 synchronous communication boards.
comment "Cyclades-PC300 MLPPP support is disabled."
- depends on WAN && HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
+ depends on WAN && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n)
comment "Refer to the file README.mlppp, provided by PC300 package."
- depends on WAN && HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
+ depends on WAN && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n)
config PC300TOO
tristate "Cyclades PC300 RSV/X21 alternative support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
help
Alternative driver for PC300 RSV/X21 PCI cards made by
Cyclades, Inc. If you have such a card, say Y here and see
@@ -250,7 +255,8 @@ config PC300TOO
config N2
tristate "SDL RISCom/N2 support"
- depends on HDLC && ISA
+ depends on ISA
+ select HDLC
help
Driver for RISCom/N2 single or dual channel ISA cards by
SDL Communications Inc.
@@ -267,7 +273,8 @@ config N2
config C101
tristate "Moxa C101 support"
- depends on HDLC && ISA
+ depends on ISA
+ select HDLC
help
Driver for C101 SuperSync ISA cards by Moxa Technologies Co., Ltd.
@@ -281,7 +288,8 @@ config C101
config FARSYNC
tristate "FarSync T-Series support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
---help---
Support for the FarSync T-Series X.21 (and V.35/V.24) cards by
FarSite Communications Ltd.
@@ -300,7 +308,8 @@ config FARSYNC
config DSCC4
tristate "Etinc PCISYNC serial board support"
- depends on HDLC && PCI && m
+ depends on PCI && m
+ select HDLC
help
Driver for Etinc PCISYNC boards based on the Infineon (ex. Siemens)
DSCC4 chipset.
* [PATCH 2/3] ARM: include IXP4xx "fuses" support
From: Krzysztof Halasa @ 2007-05-07 0:07 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Russell King, lkml, netdev, linux-arm-kernel
Adds "fuse" functions to help determine the installed IXP4xx CPU
components and to reset/disable/enable them (only the NPEs, i.e. the
network coprocessors, can be reset and re-enabled).
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h b/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
index 5d949d7..780c851 100644
--- a/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
+++ b/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
@@ -607,4 +607,51 @@
#define DCMD_LENGTH 0x01fff /* length mask (max = 8K - 1) */
+/* Fuse Bits of IXP_EXP_CFG2 */
+#define IXP4XX_FUSE_RCOMP (1 << 0)
+#define IXP4XX_FUSE_USB_DEVICE (1 << 1)
+#define IXP4XX_FUSE_HASH (1 << 2)
+#define IXP4XX_FUSE_AES (1 << 3)
+#define IXP4XX_FUSE_DES (1 << 4)
+#define IXP4XX_FUSE_HDLC (1 << 5)
+#define IXP4XX_FUSE_AAL (1 << 6)
+#define IXP4XX_FUSE_HSS (1 << 7)
+#define IXP4XX_FUSE_UTOPIA (1 << 8)
+#define IXP4XX_FUSE_NPEB_ETH0 (1 << 9)
+#define IXP4XX_FUSE_NPEC_ETH (1 << 10)
+#define IXP4XX_FUSE_RESET_NPEA (1 << 11)
+#define IXP4XX_FUSE_RESET_NPEB (1 << 12)
+#define IXP4XX_FUSE_RESET_NPEC (1 << 13)
+#define IXP4XX_FUSE_PCI (1 << 14)
+#define IXP4XX_FUSE_ECC_TIMESYNC (1 << 15)
+#define IXP4XX_FUSE_UTOPIA_PHY_LIMIT (3 << 16)
+#define IXP4XX_FUSE_USB_HOST (1 << 18)
+#define IXP4XX_FUSE_NPEA_ETH (1 << 19)
+#define IXP4XX_FUSE_NPEB_ETH_1_TO_3 (1 << 20)
+#define IXP4XX_FUSE_RSA (1 << 21)
+#define IXP4XX_FUSE_XSCALE_MAX_FREQ (3 << 22)
+#define IXP4XX_FUSE_RESERVED (0xFF << 24)
+
+#define IXP4XX_FUSE_IXP46X_ONLY (IXP4XX_FUSE_ECC_TIMESYNC | \
+ IXP4XX_FUSE_USB_HOST | \
+ IXP4XX_FUSE_NPEA_ETH | \
+ IXP4XX_FUSE_NPEB_ETH_1_TO_3 | \
+ IXP4XX_FUSE_RSA | \
+ IXP4XX_FUSE_XSCALE_MAX_FREQ)
+
+static inline u32 ixp4xx_read_fuses(void)
+{
+ unsigned int fuses = ~*IXP4XX_EXP_CFG2;
+ fuses &= ~IXP4XX_FUSE_RESERVED;
+ if (!cpu_is_ixp46x())
+ fuses &= ~IXP4XX_FUSE_IXP46X_ONLY;
+
+ return fuses;
+}
+
+static inline void ixp4xx_write_fuses(u32 value)
+{
+ *IXP4XX_EXP_CFG2 = ~value;
+}
+
#endif
* [PATCH 3/3] Intel IXP4xx network drivers
From: Krzysztof Halasa @ 2007-05-07 0:07 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Russell King, lkml, netdev, linux-arm-kernel
Adds IXP4xx drivers for built-in CPU components:
- hardware queue manager,
- NPEs (network coprocessors),
- Ethernet ports,
- HSS (sync serial) ports (currently only non-channelized HDLC).
Both the Ethernet and HSS drivers use the queue manager and NPE drivers
and require external firmware file(s) available from www.intel.com.
"Platform device" definitions for the Ethernet ports on the IXDP425
development platform are provided (though they have only been tested on
not-yet-available IXP425-based hardware).
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/ixdp425-setup.c b/arch/arm/mach-ixp4xx/ixdp425-setup.c
index 04b1d56..0dc497f 100644
--- a/arch/arm/mach-ixp4xx/ixdp425-setup.c
+++ b/arch/arm/mach-ixp4xx/ixdp425-setup.c
@@ -101,10 +101,35 @@ static struct platform_device ixdp425_uart = {
.resource = ixdp425_uart_resources
};
+/* Built-in 10/100 Ethernet MAC interfaces */
+static struct mac_plat_info ixdp425_plat_mac[] = {
+ {
+ .phy = 0,
+ .rxq = 3,
+ }, {
+ .phy = 1,
+ .rxq = 4,
+ }
+};
+
+static struct platform_device ixdp425_mac[] = {
+ {
+ .name = "ixp4xx_eth",
+ .id = IXP4XX_ETH_NPEB,
+ .dev.platform_data = ixdp425_plat_mac,
+ }, {
+ .name = "ixp4xx_eth",
+ .id = IXP4XX_ETH_NPEC,
+ .dev.platform_data = ixdp425_plat_mac + 1,
+ }
+};
+
static struct platform_device *ixdp425_devices[] __initdata = {
&ixdp425_i2c_controller,
&ixdp425_flash,
- &ixdp425_uart
+ &ixdp425_uart,
+ &ixdp425_mac[0],
+ &ixdp425_mac[1],
};
static void __init ixdp425_init(void)
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index a3d46ea..94dbfec 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -1891,6 +1891,16 @@ config NE_H8300
source "drivers/net/fec_8xx/Kconfig"
source "drivers/net/fs_enet/Kconfig"
+config IXP4XX_ETH
+ tristate "IXP4xx Ethernet support"
+ depends on ARCH_IXP4XX
+ select IXP4XX_NPE
+ select IXP4XX_QMGR
+ select MII
+ help
+ Say Y here if you want to use built-in Ethernet ports
+ on IXP4xx processor.
+
endmenu
#
@@ -2924,6 +2934,30 @@ config NETCONSOLE
If you want to log kernel messages over the network, enable this.
See <file:Documentation/networking/netconsole.txt> for details.
+config IXP4XX_NETDEVICES
+ tristate
+ depends on ARCH_IXP4XX
+ help
+ Builds IXP4xx network devices
+
+config IXP4XX_NPE
+ tristate "IXP4xx Network Processor Engine support"
+ depends on ARCH_IXP4XX
+ select HOTPLUG
+ select FW_LOADER
+ select IXP4XX_NETDEVICES
+ help
+ This driver supports IXP4xx built-in network coprocessors
+ and is automatically selected by Ethernet and HSS drivers.
+
+config IXP4XX_QMGR
+ tristate "IXP4xx Queue Manager support"
+ depends on ARCH_IXP4XX
+ select IXP4XX_NETDEVICES
+ help
+ This driver supports IXP4xx built-in hardware queue manager
+ and is automatically selected by Ethernet and HSS drivers.
+
endif #NETDEVICES
config NETPOLL
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 33af833..a9bc474 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -212,6 +212,7 @@ obj-$(CONFIG_HAMRADIO) += hamradio/
obj-$(CONFIG_IRDA) += irda/
obj-$(CONFIG_ETRAX_ETHERNET) += cris/
obj-$(CONFIG_ENP2611_MSF_NET) += ixp2000/
+obj-$(CONFIG_IXP4XX_NETDEVICES) += ixp4xx/
obj-$(CONFIG_NETCONSOLE) += netconsole.o
diff --git a/drivers/net/ixp4xx/Makefile b/drivers/net/ixp4xx/Makefile
new file mode 100644
index 0000000..12e8351
--- /dev/null
+++ b/drivers/net/ixp4xx/Makefile
@@ -0,0 +1,4 @@
+obj-$(CONFIG_IXP4XX_QMGR) += ixp4xx_qmgr.o
+obj-$(CONFIG_IXP4XX_NPE) += ixp4xx_npe.o
+obj-$(CONFIG_IXP4XX_ETH) += ixp4xx_eth.o
+obj-$(CONFIG_IXP4XX_HSS) += ixp4xx_hss.o
diff --git a/drivers/net/ixp4xx/ixp4xx_eth.c b/drivers/net/ixp4xx/ixp4xx_eth.c
new file mode 100644
index 0000000..92a654e
--- /dev/null
+++ b/drivers/net/ixp4xx/ixp4xx_eth.c
@@ -0,0 +1,1002 @@
+/*
+ * Intel IXP4xx Ethernet driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * Ethernet port config (0x00 is not present on IXP42X):
+ *
+ * logical port 0x00 0x10 0x20
+ * NPE 0 (NPE-A) 1 (NPE-B) 2 (NPE-C)
+ * physical PortId 2 0 1
+ * TX queue 23 24 25
+ * RX-free queue 26 27 28
+ * TX-done queue is always 31, RX queue is configurable
+ */
+
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/mii.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include "npe.h"
+#include "qmgr.h"
+
+#ifndef __ARMEB__
+#warning Little endian mode not supported
+#endif
+
+#define DEBUG_QUEUES 0
+#define DEBUG_RX 0
+#define DEBUG_TX 0
+#define DEBUG_PKT_BYTES 0
+#define DEBUG_MDIO 0
+
+#define DRV_NAME "ixp4xx_eth"
+#define DRV_VERSION "0.04"
+
+#define TX_QUEUE_LEN 16 /* dwords */
+#define PKT_DESCS 64 /* also length of queues: TX-done, RX-ready, RX */
+
+#define POOL_ALLOC_SIZE (sizeof(struct desc) * (PKT_DESCS))
+#define REGS_SIZE 0x1000
+#define MAX_MRU 1536
+
+#define MDIO_INTERVAL (3 * HZ)
+#define MAX_MDIO_RETRIES 100 /* microseconds, typically 30 cycles */
+
+#define NPE_ID(port) ((port)->id >> 4)
+#define PHYSICAL_ID(port) ((NPE_ID(port) + 2) % 3)
+#define TX_QUEUE(plat) (NPE_ID(port) + 23)
+#define RXFREE_QUEUE(plat) (NPE_ID(port) + 26)
+#define TXDONE_QUEUE 31
+
+/* TX Control Registers */
+#define TX_CNTRL0_TX_EN BIT(0)
+#define TX_CNTRL0_HALFDUPLEX BIT(1)
+#define TX_CNTRL0_RETRY BIT(2)
+#define TX_CNTRL0_PAD_EN BIT(3)
+#define TX_CNTRL0_APPEND_FCS BIT(4)
+#define TX_CNTRL0_2DEFER BIT(5)
+#define TX_CNTRL0_RMII BIT(6) /* reduced MII */
+#define TX_CNTRL1_RETRIES 0x0F /* 4 bits */
+
+/* RX Control Registers */
+#define RX_CNTRL0_RX_EN BIT(0)
+#define RX_CNTRL0_PADSTRIP_EN BIT(1)
+#define RX_CNTRL0_SEND_FCS BIT(2)
+#define RX_CNTRL0_PAUSE_EN BIT(3)
+#define RX_CNTRL0_LOOP_EN BIT(4)
+#define RX_CNTRL0_ADDR_FLTR_EN BIT(5)
+#define RX_CNTRL0_RX_RUNT_EN BIT(6)
+#define RX_CNTRL0_BCAST_DIS BIT(7)
+#define RX_CNTRL1_DEFER_EN BIT(0)
+
+/* Core Control Register */
+#define CORE_RESET BIT(0)
+#define CORE_RX_FIFO_FLUSH BIT(1)
+#define CORE_TX_FIFO_FLUSH BIT(2)
+#define CORE_SEND_JAM BIT(3)
+#define CORE_MDC_EN BIT(4) /* NPE-B ETH-0 only */
+
+/* Definitions for MII access routines */
+#define MII_CMD_GO BIT(31)
+#define MII_CMD_WRITE BIT(26)
+#define MII_STAT_READ_FAILED BIT(31)
+
+/* NPE message codes */
+#define NPE_GETSTATUS 0x00
+#define NPE_EDB_SETPORTADDRESS 0x01
+#define NPE_EDB_GETMACADDRESSDATABASE 0x02
+#define NPE_EDB_SETMACADDRESSSDATABASE 0x03
+#define NPE_GETSTATS 0x04
+#define NPE_RESETSTATS 0x05
+#define NPE_SETMAXFRAMELENGTHS 0x06
+#define NPE_VLAN_SETRXTAGMODE 0x07
+#define NPE_VLAN_SETDEFAULTRXVID 0x08
+#define NPE_VLAN_SETPORTVLANTABLEENTRY 0x09
+#define NPE_VLAN_SETPORTVLANTABLERANGE 0x0A
+#define NPE_VLAN_SETRXQOSENTRY 0x0B
+#define NPE_VLAN_SETPORTIDEXTRACTIONMODE 0x0C
+#define NPE_STP_SETBLOCKINGSTATE 0x0D
+#define NPE_FW_SETFIREWALLMODE 0x0E
+#define NPE_PC_SETFRAMECONTROLDURATIONID 0x0F
+#define NPE_PC_SETAPMACTABLE 0x11
+#define NPE_SETLOOPBACK_MODE 0x12
+#define NPE_PC_SETBSSIDTABLE 0x13
+#define NPE_ADDRESS_FILTER_CONFIG 0x14
+#define NPE_APPENDFCSCONFIG 0x15
+#define NPE_NOTIFY_MAC_RECOVERY_DONE 0x16
+#define NPE_MAC_RECOVERY_START 0x17
+
+
+struct eth_regs {
+ u32 tx_control[2], __res1[2]; /* 000 */
+ u32 rx_control[2], __res2[2]; /* 010 */
+ u32 random_seed, __res3[3]; /* 020 */
+ u32 partial_empty_threshold, __res4; /* 030 */
+ u32 partial_full_threshold, __res5; /* 038 */
+ u32 tx_start_bytes, __res6[3]; /* 040 */
+ u32 tx_deferral, rx_deferral,__res7[2]; /* 050 */
+ u32 tx_2part_deferral[2], __res8[2]; /* 060 */
+ u32 slot_time, __res9[3]; /* 070 */
+ u32 mdio_command[4]; /* 080 */
+ u32 mdio_status[4]; /* 090 */
+ u32 mcast_mask[6], __res10[2]; /* 0A0 */
+ u32 mcast_addr[6], __res11[2]; /* 0C0 */
+ u32 int_clock_threshold, __res12[3]; /* 0E0 */
+ u32 hw_addr[6], __res13[61]; /* 0F0 */
+ u32 core_control; /* 1FC */
+};
+
+struct port {
+ struct resource *mem_res;
+ struct eth_regs __iomem *regs;
+ struct npe *npe;
+ struct net_device *netdev;
+ struct net_device_stats stat;
+ struct mii_if_info mii;
+ struct delayed_work mdio_thread;
+ struct mac_plat_info *plat;
+ struct sk_buff *rx_skb_tab[PKT_DESCS];
+ struct desc *rx_desc_tab; /* coherent */
+ int id; /* logical port ID */
+ u32 rx_desc_tab_phys;
+ u32 msg_enable;
+};
+
+/* NPE message structure */
+struct msg {
+ union {
+ struct {
+ u8 cmd, eth_id, mac[ETH_ALEN];
+ };
+ struct {
+ u8 cmd, eth_id, __byte2, byte3;
+ u8 __byte4, byte5, __byte6, byte7;
+ };
+ struct {
+ u8 cmd, eth_id, __b2, byte3;
+ u32 data32;
+ };
+ };
+};
+
+/* Ethernet packet descriptor */
+struct desc {
+ u32 next; /* pointer to next buffer, unused */
+ u16 buf_len; /* buffer length */
+ u16 pkt_len; /* packet length */
+ u32 data; /* pointer to data buffer in RAM */
+ u8 dest_id;
+ u8 src_id;
+ u16 flags;
+ u8 qos;
+ u8 padlen;
+ u16 vlan_tci;
+ u8 dest_mac[ETH_ALEN];
+ u8 src_mac[ETH_ALEN];
+};
+
+
+#define rx_desc_phys(port, n) ((port)->rx_desc_tab_phys + \
+ (n) * sizeof(struct desc))
+#define tx_desc_phys(n) (tx_desc_tab_phys + (n) * sizeof(struct desc))
+
+static spinlock_t mdio_lock;
+static struct eth_regs __iomem *mdio_regs; /* mdio command and status only */
+static struct npe *mdio_npe;
+static int ports_open;
+static struct dma_pool *dma_pool;
+static struct sk_buff *tx_skb_tab[PKT_DESCS];
+static struct desc *tx_desc_tab; /* coherent */
+static u32 tx_desc_tab_phys;
+
+
+static inline void set_regbits(u32 bits, u32 __iomem *reg)
+{
+ __raw_writel(__raw_readl(reg) | bits, reg);
+}
+static inline void clr_regbits(u32 bits, u32 __iomem *reg)
+{
+ __raw_writel(__raw_readl(reg) & ~bits, reg);
+}
+
+
+static u16 mdio_cmd(struct net_device *dev, int phy_id, int location,
+ int write, u16 cmd)
+{
+ int cycles = 0;
+
+ if (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80) {
+ printk("%s: MII not ready to transmit\n", dev->name);
+ return 0; /* not ready to transmit */
+ }
+
+ if (write) {
+ __raw_writel(cmd & 0xFF, &mdio_regs->mdio_command[0]);
+ __raw_writel(cmd >> 8, &mdio_regs->mdio_command[1]);
+ }
+ __raw_writel(((phy_id << 5) | location) & 0xFF,
+ &mdio_regs->mdio_command[2]);
+ __raw_writel((phy_id >> 3) | (write << 2) | 0x80 /* GO */,
+ &mdio_regs->mdio_command[3]);
+
+ while ((cycles < MAX_MDIO_RETRIES) &&
+ (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80)) {
+ udelay(1);
+ cycles++;
+ }
+
+ if (cycles == MAX_MDIO_RETRIES) {
+ printk("%s: MII write failed\n", dev->name);
+ return 0;
+ }
+
+#if DEBUG_MDIO
+ printk(KERN_DEBUG "mdio_cmd() took %i cycles\n", cycles);
+#endif
+
+ if (write)
+ return 0;
+
+ if (__raw_readl(&mdio_regs->mdio_status[3]) & 0x80) {
+ printk("%s: MII read failed\n", dev->name);
+ return 0;
+ }
+
+ return (__raw_readl(&mdio_regs->mdio_status[0]) & 0xFF) |
+ (__raw_readl(&mdio_regs->mdio_status[1]) << 8);
+}
+
+static int mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ unsigned long flags;
+ u16 val;
+
+ spin_lock_irqsave(&mdio_lock, flags);
+ val = mdio_cmd(dev, phy_id, location, 0, 0);
+ spin_unlock_irqrestore(&mdio_lock, flags);
+ return val;
+}
+
+static void mdio_write(struct net_device *dev, int phy_id, int location,
+ int val)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdio_lock, flags);
+ mdio_cmd(dev, phy_id, location, 1, val);
+ spin_unlock_irqrestore(&mdio_lock, flags);
+}
+
+static void eth_set_duplex(struct port *port)
+{
+ if (port->mii.full_duplex)
+ clr_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
+ else
+ set_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
+}
+
+
+static void mdio_thread(struct work_struct *work)
+{
+ struct port *port = container_of(work, struct port, mdio_thread.work);
+
+ if (mii_check_media(&port->mii, 1, 0))
+ eth_set_duplex(port);
+ schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
+}
+
+
+static inline void debug_skb(const char *func, struct sk_buff *skb)
+{
+#if DEBUG_PKT_BYTES
+ int i;
+
+ printk(KERN_DEBUG "%s(%i): ", func, skb->len);
+ for (i = 0; i < skb->len; i++) {
+ if (i >= DEBUG_PKT_BYTES)
+ break;
+ printk("%s%02X",
+ ((i == 6) || (i == 12) || (i >= 14)) ? " " : "",
+ skb->data[i]);
+ }
+ printk("\n");
+#endif
+}
+
+
+static inline void debug_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc, int is_get)
+{
+#if DEBUG_QUEUES
+ const char *op = is_get ? "->" : "<-";
+
+ if (!desc_phys) {
+ printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
+ return;
+ }
+ printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %2X < %2X %4X %X"
+ " %X %X %02X%02X%02X%02X%02X%02X < %02X%02X%02X%02X%02X%02X\n",
+ queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
+ desc->data, desc->dest_id, desc->src_id,
+ desc->flags, desc->qos,
+ desc->padlen, desc->vlan_tci,
+ desc->dest_mac[0], desc->dest_mac[1],
+ desc->dest_mac[2], desc->dest_mac[3],
+ desc->dest_mac[4], desc->dest_mac[5],
+ desc->src_mac[0], desc->src_mac[1],
+ desc->src_mac[2], desc->src_mac[3],
+ desc->src_mac[4], desc->src_mac[5]);
+#endif
+}
+
+static inline int queue_get_desc(unsigned int queue, struct port *port,
+ int is_tx)
+{
+ u32 phys, tab_phys, n_desc;
+ struct desc *tab;
+
+ if (!(phys = qmgr_get_entry(queue))) {
+ debug_desc(queue, phys, NULL, 1);
+ return -1;
+ }
+
+ phys &= ~0x1F; /* mask out non-address bits */
+ tab_phys = is_tx ? tx_desc_phys(0) : rx_desc_phys(port, 0);
+ tab = is_tx ? tx_desc_tab : port->rx_desc_tab;
+ n_desc = (phys - tab_phys) / sizeof(struct desc);
+ BUG_ON(n_desc >= PKT_DESCS);
+
+ debug_desc(queue, phys, &tab[n_desc], 1);
+ BUG_ON(tab[n_desc].next);
+ return n_desc;
+}
+
+static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc)
+{
+ debug_desc(queue, desc_phys, desc, 0);
+ BUG_ON(desc_phys & 0x1F);
+ qmgr_put_entry(queue, desc_phys);
+}
+
+
+static void eth_rx_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+ struct port *port = netdev_priv(dev);
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "eth_rx_irq() start\n");
+#endif
+ qmgr_disable_irq(port->plat->rxq);
+ netif_rx_schedule(dev);
+}
+
+static int eth_poll(struct net_device *dev, int *budget)
+{
+ struct port *port = netdev_priv(dev);
+ unsigned int queue = port->plat->rxq;
+ int quota = dev->quota, received = 0;
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "eth_poll() start\n");
+#endif
+ while (quota) {
+ struct sk_buff *old_skb, *new_skb;
+ struct desc *desc;
+ u32 data;
+ int n = queue_get_desc(queue, port, 0);
+ if (n < 0) { /* No packets received */
+ dev->quota -= received;
+ *budget -= received;
+ received = 0;
+ netif_rx_complete(dev);
+ qmgr_enable_irq(queue);
+ if (!qmgr_stat_empty(queue) &&
+ netif_rx_reschedule(dev, 0)) {
+ qmgr_disable_irq(queue);
+ continue;
+ }
+ return 0; /* all work done */
+ }
+
+ desc = &port->rx_desc_tab[n];
+
+ if ((new_skb = netdev_alloc_skb(dev, MAX_MRU)) != NULL) {
+#if 0
+ skb_reserve(new_skb, 2); /* FIXME */
+#endif
+ data = dma_map_single(&dev->dev, new_skb->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ }
+
+ if (!new_skb || dma_mapping_error(data)) {
+ if (new_skb)
+ dev_kfree_skb(new_skb);
+ port->stat.rx_dropped++;
+ /* put the desc back on RX-ready queue */
+ desc->buf_len = MAX_MRU;
+ desc->pkt_len = 0;
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ continue;
+ }
+
+ /* process received skb */
+ old_skb = port->rx_skb_tab[n];
+ dma_unmap_single(&dev->dev, desc->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ skb_put(old_skb, desc->pkt_len);
+
+ debug_skb("eth_poll", old_skb);
+
+ old_skb->protocol = eth_type_trans(old_skb, dev);
+ dev->last_rx = jiffies;
+ port->stat.rx_packets++;
+ port->stat.rx_bytes += old_skb->len;
+ netif_receive_skb(old_skb);
+
+ /* put the new skb on RX-free queue */
+ port->rx_skb_tab[n] = new_skb;
+ desc->buf_len = MAX_MRU;
+ desc->pkt_len = 0;
+ desc->data = data;
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ quota--;
+ received++;
+ }
+ dev->quota -= received;
+ *budget -= received;
+ return 1; /* not all work done */
+}
+
+static void eth_xmit_ready_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit_empty() start\n");
+#endif
+ netif_start_queue(dev);
+}
+
+static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct desc *desc;
+ u32 phys;
+ struct sk_buff *old_skb;
+ int n;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit() start\n");
+#endif
+ if (unlikely(skb->len > MAX_MRU)) {
+ dev_kfree_skb(skb);
+ port->stat.tx_errors++;
+ return NETDEV_TX_OK;
+ }
+
+ n = queue_get_desc(TXDONE_QUEUE, port, 1);
+ BUG_ON(n < 0);
+ desc = &tx_desc_tab[n];
+ phys = tx_desc_phys(n);
+
+ if ((old_skb = tx_skb_tab[n]) != NULL) {
+ dma_unmap_single(&dev->dev, desc->data,
+ desc->buf_len, DMA_TO_DEVICE);
+ port->stat.tx_packets++;
+ port->stat.tx_bytes += old_skb->len;
+ dev_kfree_skb(old_skb);
+ }
+
+ /* disable VLAN functions in NPE image for now */
+ memset(desc, 0, sizeof(*desc));
+ desc->buf_len = desc->pkt_len = skb->len;
+ desc->data = dma_map_single(&dev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ dev_kfree_skb(skb);
+ tx_skb_tab[n] = NULL;
+ port->stat.tx_dropped++;
+ /* put the desc back on TX-done queue */
+ queue_put_desc(TXDONE_QUEUE, phys, desc);
+ return 0;
+ }
+
+ tx_skb_tab[n] = skb;
+ debug_skb("eth_xmit", skb);
+
+ /* NPE firmware pads short frames with zeros internally */
+ wmb();
+ queue_put_desc(TX_QUEUE(port->plat), phys, desc);
+ BUG_ON(qmgr_stat_overflow(TX_QUEUE(port->plat)));
+ dev->trans_start = jiffies;
+
+ if (qmgr_stat_full(TX_QUEUE(port->plat))) {
+ netif_stop_queue(dev);
+ /* we could miss TX ready interrupt */
+ if (!qmgr_stat_full(TX_QUEUE(port->plat))) {
+ netif_start_queue(dev);
+ }
+ }
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit() end\n");
+#endif
+ return NETDEV_TX_OK;
+}
+
+
+static struct net_device_stats *eth_stats(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ return &port->stat;
+}
+
+static void eth_set_mcast_list(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct dev_mc_list *mclist = dev->mc_list;
+ u8 diffs[ETH_ALEN], *addr;
+ int cnt = dev->mc_count, i;
+
+ if ((dev->flags & IFF_PROMISC) || !mclist || !cnt) {
+ clr_regbits(RX_CNTRL0_ADDR_FLTR_EN,
+ &port->regs->rx_control[0]);
+ return;
+ }
+
+ memset(diffs, 0, ETH_ALEN);
+ addr = mclist->dmi_addr; /* first MAC address */
+
+ while (--cnt && (mclist = mclist->next))
+ for (i = 0; i < ETH_ALEN; i++)
+ diffs[i] |= addr[i] ^ mclist->dmi_addr[i];
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ __raw_writel(addr[i], &port->regs->mcast_addr[i]);
+ __raw_writel(~diffs[i], &port->regs->mcast_mask[i]);
+ }
+
+ set_regbits(RX_CNTRL0_ADDR_FLTR_EN, &port->regs->rx_control[0]);
+}
+
+
+static int eth_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
+{
+ struct port *port = netdev_priv(dev);
+ unsigned int duplex_chg;
+ int err;
+
+ if (!netif_running(dev))
+ return -EINVAL;
+ err = generic_mii_ioctl(&port->mii, if_mii(req), cmd, &duplex_chg);
+ if (duplex_chg)
+ eth_set_duplex(port);
+ return err;
+}
+
+
+static int request_queues(struct port *port)
+{
+ int err;
+
+ err = qmgr_request_queue(RXFREE_QUEUE(port->plat), PKT_DESCS, 0, 0);
+ if (err)
+ return err;
+
+ err = qmgr_request_queue(port->plat->rxq, PKT_DESCS, 0, 0);
+ if (err)
+ goto rel_rxfree;
+
+ err = qmgr_request_queue(TX_QUEUE(port->plat), TX_QUEUE_LEN, 0, 0);
+ if (err)
+ goto rel_rx;
+
+ /* TX-done queue handles skbs sent out by the NPEs */
+ if (!ports_open) {
+ err = qmgr_request_queue(TXDONE_QUEUE, PKT_DESCS, 0, 0);
+ if (err)
+ goto rel_tx;
+ }
+ return 0;
+
+rel_tx:
+ qmgr_release_queue(TX_QUEUE(port->plat));
+rel_rx:
+ qmgr_release_queue(port->plat->rxq);
+rel_rxfree:
+ qmgr_release_queue(RXFREE_QUEUE(port->plat));
+ return err;
+}
+
+static void release_queues(struct port *port)
+{
+ qmgr_release_queue(RXFREE_QUEUE(port->plat));
+ qmgr_release_queue(port->plat->rxq);
+ qmgr_release_queue(TX_QUEUE(port->plat));
+
+ if (!ports_open)
+ qmgr_release_queue(TXDONE_QUEUE);
+}
+
+static int init_queues(struct port *port)
+{
+ int i;
+
+ if (!dma_pool) {
+ /* Setup TX descriptors - common to all ports */
+ dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
+ 32, 0);
+ if (!dma_pool)
+ return -ENOMEM;
+
+ if (!(tx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &tx_desc_tab_phys)))
+ return -ENOMEM;
+ memset(tx_desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(tx_skb_tab, 0, sizeof(tx_skb_tab)); /* static table */
+
+ for (i = 0; i < PKT_DESCS; i++) {
+ queue_put_desc(TXDONE_QUEUE, tx_desc_phys(i),
+ &tx_desc_tab[i]);
+ BUG_ON(qmgr_stat_overflow(TXDONE_QUEUE));
+ }
+ }
+
+ /* Setup RX buffers */
+ if (!(port->rx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &port->rx_desc_tab_phys)))
+ return -ENOMEM;
+ memset(port->rx_desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* table */
+
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &port->rx_desc_tab[i];
+ struct sk_buff *skb;
+
+ if (!(skb = netdev_alloc_skb(port->netdev, MAX_MRU)))
+ return -ENOMEM;
+ port->rx_skb_tab[i] = skb;
+ desc->buf_len = MAX_MRU;
+#if 0
+ skb_reserve(skb, 2); /* FIXME */
+#endif
+ desc->data = dma_map_single(&port->netdev->dev, skb->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ return -EIO;
+ }
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, i), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ }
+ return 0;
+}
+
+static void destroy_queues(struct port *port)
+{
+ int i;
+
+ while (queue_get_desc(RXFREE_QUEUE(port->plat), port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(port->plat->rxq, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(TX_QUEUE(port->plat), port, 1) >= 0) {
+ /* nothing to do here */;
+ }
+ if (!ports_open)
+ while (queue_get_desc(TXDONE_QUEUE, port, 1) >= 0)
+ /* nothing to do here */;
+
+ if (port->rx_desc_tab) {
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &port->rx_desc_tab[i];
+ struct sk_buff *skb = port->rx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data, MAX_MRU,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, port->rx_desc_tab,
+ port->rx_desc_tab_phys);
+ port->rx_desc_tab = NULL;
+ }
+
+ if (!ports_open && tx_desc_tab) {
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &tx_desc_tab[i];
+ struct sk_buff *skb = tx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data,
+ desc->buf_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, tx_desc_tab, tx_desc_tab_phys);
+ tx_desc_tab = NULL;
+ }
+ if (!ports_open && dma_pool) {
+ dma_pool_destroy(dma_pool);
+ dma_pool = NULL;
+ }
+}
+
+static int eth_load_firmware(struct net_device *dev, struct npe *npe)
+{
+ struct msg msg;
+ int err;
+
+ if ((err = npe_load_firmware(npe, npe_name(npe), &dev->dev)) != 0)
+ return err;
+
+ if ((err = npe_recv_message(npe, &msg, "ETH_GET_STATUS")) != 0) {
+ printk(KERN_ERR "%s: %s not responding\n", dev->name,
+ npe_name(npe));
+ return err;
+ }
+ return 0;
+}
+
+static int eth_open(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+ int i, err;
+
+ if (!npe_running(npe))
+ if (eth_load_firmware(dev, npe))
+ return -EIO;
+
+ if (!npe_running(mdio_npe))
+ if (eth_load_firmware(dev, mdio_npe))
+ return -EIO;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = NPE_VLAN_SETRXQOSENTRY;
+ msg.eth_id = port->id;
+ msg.byte5 = port->plat->rxq | 0x80;
+ msg.byte7 = port->plat->rxq << 4;
+ for (i = 0; i < 8; i++) {
+ msg.byte3 = i;
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_RXQ"))
+ return -EIO;
+ }
+
+ msg.cmd = NPE_EDB_SETPORTADDRESS;
+ msg.eth_id = PHYSICAL_ID(port);
+ memcpy(msg.mac, dev->dev_addr, ETH_ALEN);
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_MAC"))
+ return -EIO;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = NPE_FW_SETFIREWALLMODE;
+ msg.eth_id = port->id;
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_FIREWALL_MODE"))
+ return -EIO;
+
+ if ((err = request_queues(port)) != 0)
+ return err;
+
+ if ((err = init_queues(port)) != 0) {
+ destroy_queues(port);
+ release_queues(port);
+ return err;
+ }
+
+ for (i = 0; i < ETH_ALEN; i++)
+ __raw_writel(dev->dev_addr[i], &port->regs->hw_addr[i]);
+ __raw_writel(0x08, &port->regs->random_seed);
+ __raw_writel(0x12, &port->regs->partial_empty_threshold);
+ __raw_writel(0x30, &port->regs->partial_full_threshold);
+ __raw_writel(0x08, &port->regs->tx_start_bytes);
+ __raw_writel(0x15, &port->regs->tx_deferral);
+ __raw_writel(0x08, &port->regs->tx_2part_deferral[0]);
+ __raw_writel(0x07, &port->regs->tx_2part_deferral[1]);
+ __raw_writel(0x80, &port->regs->slot_time);
+ __raw_writel(0x01, &port->regs->int_clock_threshold);
+ __raw_writel(TX_CNTRL1_RETRIES, &port->regs->tx_control[1]);
+ __raw_writel(TX_CNTRL0_TX_EN | TX_CNTRL0_RETRY | TX_CNTRL0_PAD_EN |
+ TX_CNTRL0_APPEND_FCS | TX_CNTRL0_2DEFER,
+ &port->regs->tx_control[0]);
+ __raw_writel(0, &port->regs->rx_control[1]);
+ __raw_writel(RX_CNTRL0_RX_EN | RX_CNTRL0_PADSTRIP_EN,
+ &port->regs->rx_control[0]);
+
+ if (mii_check_media(&port->mii, 1, 1))
+ eth_set_duplex(port);
+ eth_set_mcast_list(dev);
+ netif_start_queue(dev);
+ schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
+
+ qmgr_set_irq(port->plat->rxq, QUEUE_IRQ_SRC_NOT_EMPTY,
+ eth_rx_irq, dev);
+ qmgr_set_irq(TX_QUEUE(port->plat), QUEUE_IRQ_SRC_NOT_FULL,
+ eth_xmit_ready_irq, dev);
+ qmgr_enable_irq(port->plat->rxq);
+ qmgr_enable_irq(TX_QUEUE(port->plat));
+ ports_open++;
+ return 0;
+}
+
+static int eth_close(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+
+ ports_open--;
+ qmgr_disable_irq(port->plat->rxq);
+ qmgr_disable_irq(TX_QUEUE(port->plat));
+ netif_stop_queue(dev);
+
+ clr_regbits(RX_CNTRL0_RX_EN, &port->regs->rx_control[0]);
+ clr_regbits(TX_CNTRL0_TX_EN, &port->regs->tx_control[0]);
+ set_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
+ &port->regs->core_control);
+ udelay(10);
+ clr_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
+ &port->regs->core_control);
+
+ cancel_rearming_delayed_work(&port->mdio_thread);
+ destroy_queues(port);
+ release_queues(port);
+ return 0;
+}
+
+static int __devinit eth_init_one(struct platform_device *pdev)
+{
+ struct port *port;
+ struct net_device *dev;
+ struct mac_plat_info *plat = pdev->dev.platform_data;
+ u32 regs_phys;
+ int err;
+
+ if (!(dev = alloc_etherdev(sizeof(struct port))))
+ return -ENOMEM;
+
+ SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ port = netdev_priv(dev);
+ port->netdev = dev;
+ port->id = pdev->id;
+
+ switch (port->id) {
+ case IXP4XX_ETH_NPEA:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthA_BASE_VIRT;
+ regs_phys = IXP4XX_EthA_BASE_PHYS;
+ break;
+ case IXP4XX_ETH_NPEB:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
+ regs_phys = IXP4XX_EthB_BASE_PHYS;
+ break;
+ case IXP4XX_ETH_NPEC:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthC_BASE_VIRT;
+ regs_phys = IXP4XX_EthC_BASE_PHYS;
+ break;
+ default:
+ err = -ENOSYS;
+ goto err_free;
+ }
+
+ dev->open = eth_open;
+ dev->hard_start_xmit = eth_xmit;
+ dev->poll = eth_poll;
+ dev->stop = eth_close;
+ dev->get_stats = eth_stats;
+ dev->do_ioctl = eth_ioctl;
+ dev->set_multicast_list = eth_set_mcast_list;
+ dev->weight = 16;
+ dev->tx_queue_len = 100;
+
+ if (!(port->npe = npe_request(NPE_ID(port)))) {
+ err = -EIO;
+ goto err_free;
+ }
+
+ if (register_netdev(dev)) {
+ err = -EIO;
+ goto err_npe_rel;
+ }
+
+ port->mem_res = request_mem_region(regs_phys, REGS_SIZE, dev->name);
+ if (!port->mem_res) {
+ err = -EBUSY;
+ goto err_unreg;
+ }
+
+ port->plat = plat;
+ memcpy(dev->dev_addr, plat->hwaddr, ETH_ALEN);
+
+ platform_set_drvdata(pdev, dev);
+
+ __raw_writel(CORE_RESET, &port->regs->core_control);
+ udelay(50);
+ __raw_writel(CORE_MDC_EN, &port->regs->core_control);
+ udelay(50);
+
+ port->mii.dev = dev;
+ port->mii.mdio_read = mdio_read;
+ port->mii.mdio_write = mdio_write;
+ port->mii.phy_id = plat->phy;
+ port->mii.phy_id_mask = 0x1F;
+ port->mii.reg_num_mask = 0x1F;
+
+ INIT_DELAYED_WORK(&port->mdio_thread, mdio_thread);
+
+ printk(KERN_INFO "%s: MII PHY %i on %s\n", dev->name, plat->phy,
+ npe_name(port->npe));
+ return 0;
+
+err_unreg:
+ unregister_netdev(dev);
+err_npe_rel:
+ npe_release(port->npe);
+err_free:
+ free_netdev(dev);
+ return err;
+}
+
+static int __devexit eth_remove_one(struct platform_device *pdev)
+{
+ struct net_device *dev = platform_get_drvdata(pdev);
+ struct port *port = netdev_priv(dev);
+
+ unregister_netdev(dev);
+ platform_set_drvdata(pdev, NULL);
+ npe_release(port->npe);
+ release_resource(port->mem_res);
+ free_netdev(dev);
+ return 0;
+}
+
+static struct platform_driver drv = {
+ .driver.name = DRV_NAME,
+ .probe = eth_init_one,
+ .remove = eth_remove_one,
+};
+
+static int __init eth_init_module(void)
+{
+ if (!(ixp4xx_read_fuses() & IXP4XX_FUSE_NPEB_ETH0))
+ return -ENOSYS;
+
+ /* All MII PHY accesses use NPE-B Ethernet registers */
+ if (!(mdio_npe = npe_request(1)))
+ return -EIO;
+ spin_lock_init(&mdio_lock);
+ mdio_regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
+
+ return platform_driver_register(&drv);
+}
+
+static void __exit eth_cleanup_module(void)
+{
+ platform_driver_unregister(&drv);
+ npe_release(mdio_npe);
+}
+
+MODULE_AUTHOR("Krzysztof Halasa");
+MODULE_DESCRIPTION("Intel IXP4xx Ethernet driver");
+MODULE_LICENSE("GPL v2");
+module_init(eth_init_module);
+module_exit(eth_cleanup_module);
diff --git a/drivers/net/ixp4xx/ixp4xx_hss.c b/drivers/net/ixp4xx/ixp4xx_hss.c
new file mode 100644
index 0000000..cbd96d5
--- /dev/null
+++ b/drivers/net/ixp4xx/ixp4xx_hss.c
@@ -0,0 +1,1048 @@
+/*
+ * Intel IXP4xx HSS (synchronous serial port) driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/hdlc.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include "npe.h"
+#include "qmgr.h"
+
+#ifndef __ARMEB__
+#warning Little endian mode not supported
+#endif
+
+#define DEBUG_QUEUES 0
+#define DEBUG_RX 0
+#define DEBUG_TX 0
+
+#define DRV_NAME "ixp4xx_hss"
+#define DRV_VERSION "0.03"
+
+#define PKT_EXTRA_FLAGS 0 /* orig 1 */
+#define FRAME_SYNC_OFFSET 0 /* unused, channelized only */
+#define FRAME_SYNC_SIZE 1024
+#define PKT_NUM_PIPES 1 /* 1, 2 or 4 */
+#define PKT_PIPE_FIFO_SIZEW 4 /* total 4 dwords per HSS */
+
+#define RX_DESCS 16 /* also length of queues: RX-ready, RX */
+#define TX_DESCS 16 /* also length of queues: TX-done, TX */
+
+#define POOL_ALLOC_SIZE (sizeof(struct desc) * (RX_DESCS + TX_DESCS))
+#define RX_SIZE (HDLC_MAX_MRU + 4) /* NPE needs more space */
+
+/* Queue IDs */
+#define HSS0_CHL_RXTRIG_QUEUE 12 /* orig size = 32 dwords */
+#define HSS0_PKT_RX_QUEUE 13 /* orig size = 32 dwords */
+#define HSS0_PKT_TX0_QUEUE 14 /* orig size = 16 dwords */
+#define HSS0_PKT_TX1_QUEUE 15
+#define HSS0_PKT_TX2_QUEUE 16
+#define HSS0_PKT_TX3_QUEUE 17
+#define HSS0_PKT_RXFREE0_QUEUE 18 /* orig size = 16 dwords */
+#define HSS0_PKT_RXFREE1_QUEUE 19
+#define HSS0_PKT_RXFREE2_QUEUE 20
+#define HSS0_PKT_RXFREE3_QUEUE 21
+#define HSS0_PKT_TXDONE_QUEUE 22 /* orig size = 64 dwords */
+
+#define HSS1_CHL_RXTRIG_QUEUE 10
+#define HSS1_PKT_RX_QUEUE 0
+#define HSS1_PKT_TX0_QUEUE 5
+#define HSS1_PKT_TX1_QUEUE 6
+#define HSS1_PKT_TX2_QUEUE 7
+#define HSS1_PKT_TX3_QUEUE 8
+#define HSS1_PKT_RXFREE0_QUEUE 1
+#define HSS1_PKT_RXFREE1_QUEUE 2
+#define HSS1_PKT_RXFREE2_QUEUE 3
+#define HSS1_PKT_RXFREE3_QUEUE 4
+#define HSS1_PKT_TXDONE_QUEUE 9
+
+#define NPE_PKT_MODE_HDLC 0
+#define NPE_PKT_MODE_RAW 1
+#define NPE_PKT_MODE_56KMODE 2
+#define NPE_PKT_MODE_56KENDIAN_MSB 4
+
+/* PKT_PIPE_HDLC_CFG_WRITE flags */
+#define PKT_HDLC_IDLE_ONES 0x1 /* default = flags */
+#define PKT_HDLC_CRC_32 0x2 /* default = CRC-16 */
+#define PKT_HDLC_MSB_ENDIAN 0x4 /* default = LE */
+
+
+/* hss_config, PCRs */
+/* Frame sync sampling, default = active low */
+#define PCR_FRM_SYNC_ACTIVE_HIGH 0x40000000
+#define PCR_FRM_SYNC_FALLINGEDGE 0x80000000
+#define PCR_FRM_SYNC_RISINGEDGE 0xC0000000
+
+/* Frame sync pin: input (default) or output generated off a given clk edge */
+#define PCR_FRM_SYNC_OUTPUT_FALLING 0x20000000
+#define PCR_FRM_SYNC_OUTPUT_RISING 0x30000000
+
+/* Frame and data clock sampling on edge, default = falling */
+#define PCR_FCLK_EDGE_RISING 0x08000000
+#define PCR_DCLK_EDGE_RISING 0x04000000
+
+/* Clock direction, default = input */
+#define PCR_SYNC_CLK_DIR_OUTPUT 0x02000000
+
+/* Generate/Receive frame pulses, default = enabled */
+#define PCR_FRM_PULSE_DISABLED 0x01000000
+
+/* Data rate is full (default) or half the configured clk speed */
+#define PCR_HALF_CLK_RATE 0x00200000
+
+/* Invert data between NPE and HSS FIFOs? (default = no) */
+#define PCR_DATA_POLARITY_INVERT 0x00100000
+
+/* TX/RX endianness, default = LSB */
+#define PCR_MSB_ENDIAN 0x00080000
+
+/* Normal (default) / open drain mode (TX only) */
+#define PCR_TX_PINS_OPEN_DRAIN 0x00040000
+
+/* No framing bit transmitted and expected on RX? (default = framing bit) */
+#define PCR_SOF_NO_FBIT 0x00020000
+
+/* Drive data pins? */
+#define PCR_TX_DATA_ENABLE 0x00010000
+
+/* Voice 56k type: drive the data pins low (default), high, high Z */
+#define PCR_TX_V56K_HIGH 0x00002000
+#define PCR_TX_V56K_HIGH_IMP 0x00004000
+
+/* Unassigned type: drive the data pins low (default), high, high Z */
+#define PCR_TX_UNASS_HIGH 0x00000800
+#define PCR_TX_UNASS_HIGH_IMP 0x00001000
+
+/* T1 @ 1.544MHz only: Fbit dictated in FIFO (default) or high Z */
+#define PCR_TX_FB_HIGH_IMP 0x00000400
+
+/* 56k data endianness - which bit unused: high (default) or low */
+#define PCR_TX_56KE_BIT_0_UNUSED 0x00000200
+
+/* 56k data transmission type: 32/8 bit data (default) or 56K data */
+#define PCR_TX_56KS_56K_DATA 0x00000100
+
+/* hss_config, cCR */
+/* Number of packetized clients, default = 1 */
+#define CCR_NPE_HFIFO_2_HDLC 0x04000000
+#define CCR_NPE_HFIFO_3_OR_4HDLC 0x08000000
+
+/* default = no loopback */
+#define CCR_LOOPBACK 0x02000000
+
+/* HSS number, default = 0 (first) */
+#define CCR_SECOND_HSS 0x01000000
+
+
+/* hss_config, clkCR: main:10, num:10, denom:12 */
+#define CLK42X_SPEED_EXP ((0x3FF << 22) | ( 2 << 12) | 15) /* 65 kHz */
+
+#define CLK42X_SPEED_512KHZ (( 130 << 22) | ( 2 << 12) | 15)
+#define CLK42X_SPEED_1536KHZ (( 43 << 22) | ( 18 << 12) | 47)
+#define CLK42X_SPEED_1544KHZ (( 43 << 22) | ( 33 << 12) | 192)
+#define CLK42X_SPEED_2048KHZ (( 32 << 22) | ( 34 << 12) | 63)
+#define CLK42X_SPEED_4096KHZ (( 16 << 22) | ( 34 << 12) | 127)
+#define CLK42X_SPEED_8192KHZ (( 8 << 22) | ( 34 << 12) | 255)
+
+#define CLK46X_SPEED_512KHZ (( 130 << 22) | ( 24 << 12) | 127)
+#define CLK46X_SPEED_1536KHZ (( 43 << 22) | (152 << 12) | 383)
+#define CLK46X_SPEED_1544KHZ (( 43 << 22) | ( 66 << 12) | 385)
+#define CLK46X_SPEED_2048KHZ (( 32 << 22) | (280 << 12) | 511)
+#define CLK46X_SPEED_4096KHZ (( 16 << 22) | (280 << 12) | 1023)
+#define CLK46X_SPEED_8192KHZ (( 8 << 22) | (280 << 12) | 2047)
+
+
+/* hss_config, LUTs: default = unassigned */
+#define TDMMAP_HDLC 1 /* HDLC - packetized */
+#define TDMMAP_VOICE56K 2 /* Voice56K - channelized */
+#define TDMMAP_VOICE64K 3 /* Voice64K - channelized */
+
+
+/* NPE command codes */
+/* writes the ConfigWord value to the location specified by offset */
+#define PORT_CONFIG_WRITE 0x40
+
+/* triggers the NPE to load the contents of the configuration table */
+#define PORT_CONFIG_LOAD 0x41
+
+/* triggers the NPE to return an HssErrorReadResponse message */
+#define PORT_ERROR_READ 0x42
+
+/* reset NPE internal status and enable the HssChannelized operation */
+#define CHAN_FLOW_ENABLE 0x43
+#define CHAN_FLOW_DISABLE 0x44
+#define CHAN_IDLE_PATTERN_WRITE 0x45
+#define CHAN_NUM_CHANS_WRITE 0x46
+#define CHAN_RX_BUF_ADDR_WRITE 0x47
+#define CHAN_RX_BUF_CFG_WRITE 0x48
+#define CHAN_TX_BLK_CFG_WRITE 0x49
+#define CHAN_TX_BUF_ADDR_WRITE 0x4A
+#define CHAN_TX_BUF_SIZE_WRITE 0x4B
+#define CHAN_TSLOTSWITCH_ENABLE 0x4C
+#define CHAN_TSLOTSWITCH_DISABLE 0x4D
+
+/* downloads the gainWord value for a timeslot switching channel associated
+ with bypassNum */
+#define CHAN_TSLOTSWITCH_GCT_DOWNLOAD 0x4E
+
+/* triggers the NPE to reset internal status and enable the HssPacketized
+ operation for the flow specified by pPipe */
+#define PKT_PIPE_FLOW_ENABLE 0x50
+#define PKT_PIPE_FLOW_DISABLE 0x51
+#define PKT_NUM_PIPES_WRITE 0x52
+#define PKT_PIPE_FIFO_SIZEW_WRITE 0x53
+#define PKT_PIPE_HDLC_CFG_WRITE 0x54
+#define PKT_PIPE_IDLE_PATTERN_WRITE 0x55
+#define PKT_PIPE_RX_SIZE_WRITE 0x56
+#define PKT_PIPE_MODE_WRITE 0x57
+
+
+#define HSS_TIMESLOTS 128
+#define HSS_LUT_BITS 2
+
+/* HDLC packet status values - desc->status */
+#define ERR_SHUTDOWN 1 /* stop or shutdown occurrence */
+#define ERR_HDLC_ALIGN 2 /* HDLC alignment error */
+#define ERR_HDLC_FCS 3 /* HDLC Frame Check Sum error */
+#define ERR_RXFREE_Q_EMPTY 4 /* RX-free queue became empty while receiving
+ this packet (if buf_len < pkt_len) */
+#define ERR_HDLC_TOO_LONG 5 /* HDLC frame size too long */
+#define ERR_HDLC_ABORT 6 /* abort sequence received */
+#define ERR_DISCONNECTING 7 /* disconnect is in progress */
+
+
+struct port {
+ struct npe *npe;
+ struct net_device *netdev;
+ struct hss_plat_info *plat;
+ struct sk_buff *rx_skb_tab[RX_DESCS], *tx_skb_tab[TX_DESCS];
+ struct desc *desc_tab; /* coherent */
+ u32 desc_tab_phys;
+ sync_serial_settings settings;
+ int id;
+ u8 hdlc_cfg;
+};
+
+/* NPE message structure */
+struct msg {
+ u8 cmd, unused, hss_port, index;
+ union {
+ u8 data8[4];
+ u16 data16[2];
+ u32 data32;
+ };
+};
+
+
+/* HDLC packet descriptor */
+struct desc {
+ u32 next; /* pointer to next buffer, unused */
+ u16 buf_len; /* buffer length */
+ u16 pkt_len; /* packet length */
+ u32 data; /* pointer to data buffer in RAM */
+ u8 status;
+ u8 error_count;
+ u16 __reserved;
+ u32 __reserved1[4];
+};
+
+#define rx_desc_ptr(port, n) (&(port)->desc_tab[n])
+#define rx_desc_phys(port, n) ((port)->desc_tab_phys + \
+ (n) * sizeof(struct desc))
+#define tx_desc_ptr(port, n) (&(port)->desc_tab[(n) + RX_DESCS])
+#define tx_desc_phys(port, n) ((port)->desc_tab_phys + \
+ ((n) + RX_DESCS) * sizeof(struct desc))
+
+static int ports_open;
+static struct dma_pool *dma_pool;
+
+static struct {
+ int tx, txdone, rx, rxfree;
+} queue_ids[2] = {{ HSS0_PKT_TX0_QUEUE, HSS0_PKT_TXDONE_QUEUE,
+ HSS0_PKT_RX_QUEUE, HSS0_PKT_RXFREE0_QUEUE },
+ { HSS1_PKT_TX0_QUEUE, HSS1_PKT_TXDONE_QUEUE,
+ HSS1_PKT_RX_QUEUE, HSS1_PKT_RXFREE0_QUEUE },
+};
+
+
+static inline struct port* dev_to_port(struct net_device *dev)
+{
+ return dev_to_hdlc(dev)->priv;
+}
+
+
+static inline void debug_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc, int is_get)
+{
+#if DEBUG_QUEUES
+ const char *op = is_get ? "->" : "<-";
+
+ if (!desc_phys) {
+ printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
+ return;
+ }
+ printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %X %X\n",
+ queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
+ desc->data, desc->status, desc->error_count);
+#endif
+}
+
+static inline int queue_get_desc(unsigned int queue, struct port *port,
+ int is_tx)
+{
+ u32 phys, tab_phys, n_desc;
+ struct desc *tab;
+
+ if (!(phys = qmgr_get_entry(queue))) {
+ debug_desc(queue, phys, NULL, 1);
+ return -1;
+ }
+
+ BUG_ON(phys & 0x1F);
+ tab_phys = is_tx ? tx_desc_phys(port, 0) : rx_desc_phys(port, 0);
+ tab = is_tx ? tx_desc_ptr(port, 0) : rx_desc_ptr(port, 0);
+ n_desc = (phys - tab_phys) / sizeof(struct desc);
+ BUG_ON(n_desc >= (is_tx ? TX_DESCS : RX_DESCS));
+
+ debug_desc(queue, phys, &tab[n_desc], 1);
+ BUG_ON(tab[n_desc].next);
+ return n_desc;
+}
+
+static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc)
+{
+ debug_desc(queue, desc_phys, desc, 0);
+ BUG_ON(desc_phys & 0x1F);
+ qmgr_put_entry(queue, desc_phys);
+}
+
+
+static void hss_set_carrier(void *pdev, int carrier)
+{
+ struct net_device *dev = pdev;
+ if (carrier)
+ netif_carrier_on(dev);
+ else
+ netif_carrier_off(dev);
+}
+
+static void hss_rx_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+ struct port *port = dev_to_port(dev);
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_rx_irq() start\n");
+#endif
+ qmgr_disable_irq(queue_ids[port->id].rx);
+ netif_rx_schedule(dev);
+}
+
+static int hss_poll(struct net_device *dev, int *budget)
+{
+ struct port *port = dev_to_port(dev);
+ unsigned int queue = queue_ids[port->id].rx;
+ struct net_device_stats *stats = hdlc_stats(dev);
+ int quota = dev->quota, received = 0;
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() start\n");
+#endif
+ while (quota) {
+ struct sk_buff *old_skb, *new_skb = NULL;
+ struct desc *desc;
+ u32 data;
+ int n = queue_get_desc(queue, port, 0);
+ if (n < 0) { /* No packets received */
+ dev->quota -= received;
+ *budget -= received;
+ received = 0;
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() netif_rx_complete()\n");
+#endif
+ netif_rx_complete(dev);
+ qmgr_enable_irq(queue);
+ if (!qmgr_stat_empty(queue) &&
+ netif_rx_reschedule(dev, 0)) {
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll()"
+ " netif_rx_reschedule() succeeded\n");
+#endif
+ qmgr_disable_irq(queue);
+ continue;
+ }
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() all done\n");
+#endif
+ return 0; /* all work done */
+ }
+
+ desc = rx_desc_ptr(port, n);
+
+ if (!desc->status) /* check for RX errors */
+ new_skb = netdev_alloc_skb(dev, RX_SIZE);
+ if (new_skb)
+ data = dma_map_single(&dev->dev, new_skb->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+
+ if (!new_skb || dma_mapping_error(data)) {
+ if (new_skb)
+ dev_kfree_skb(new_skb);
+ switch (desc->status) {
+ case 0:
+ stats->rx_dropped++;
+ break;
+ case ERR_HDLC_ALIGN:
+ case ERR_HDLC_ABORT:
+ stats->rx_frame_errors++;
+ stats->rx_errors++;
+ break;
+ case ERR_HDLC_FCS:
+ stats->rx_crc_errors++;
+ stats->rx_errors++;
+ break;
+ case ERR_HDLC_TOO_LONG:
+ stats->rx_length_errors++;
+ stats->rx_errors++;
+ break;
+ default: /* FIXME - remove printk */
+ printk(KERN_ERR "hss_poll(): status 0x%02X"
+ " errors %u\n", desc->status,
+ desc->error_count);
+ stats->rx_errors++;
+ }
+ /* put the desc back on RX-ready queue */
+ desc->buf_len = RX_SIZE;
+ desc->pkt_len = desc->status = 0;
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ continue;
+ }
+
+ if (desc->error_count) /* FIXME - remove printk */
+ printk(KERN_ERR "hss_poll(): status 0x%02X"
+ " errors %u\n", desc->status,
+ desc->error_count);
+
+ /* process received skb */
+ old_skb = port->rx_skb_tab[n];
+ dma_unmap_single(&dev->dev, desc->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+
+ skb_put(old_skb, desc->pkt_len);
+ old_skb->protocol = hdlc_type_trans(old_skb, dev);
+ dev->last_rx = jiffies;
+ stats->rx_packets++;
+ stats->rx_bytes += old_skb->len;
+ netif_receive_skb(old_skb);
+
+ /* put the new skb on RX-free queue */
+ port->rx_skb_tab[n] = new_skb;
+ desc->buf_len = RX_SIZE;
+ desc->pkt_len = 0;
+ desc->data = data;
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ quota--;
+ received++;
+ }
+ dev->quota -= received;
+ *budget -= received;
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() end, not all work done\n");
+#endif
+ return 1; /* not all work done */
+}
+
+static void hss_xmit_ready_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit_empty() start\n");
+#endif
+ netif_start_queue(dev);
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit_empty() end\n");
+#endif
+}
+
+static int hss_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct net_device_stats *stats = hdlc_stats(dev);
+ struct desc *desc;
+ u32 phys;
+ struct sk_buff *old_skb;
+ int n;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit() start\n");
+#endif
+ if (unlikely(skb->len > HDLC_MAX_MRU)) {
+ dev_kfree_skb(skb);
+ stats->tx_errors++;
+ return NETDEV_TX_OK;
+ }
+
+ n = queue_get_desc(queue_ids[port->id].txdone, port, 1);
+ BUG_ON(n < 0);
+ desc = tx_desc_ptr(port, n);
+ phys = tx_desc_phys(port, n);
+
+ if ((old_skb = port->tx_skb_tab[n]) != NULL) {
+ dma_unmap_single(&dev->dev, desc->data,
+ desc->buf_len, DMA_TO_DEVICE);
+ stats->tx_packets++;
+ stats->tx_bytes += old_skb->len;
+ dev_kfree_skb(old_skb);
+ }
+
+ desc->buf_len = desc->pkt_len = skb->len;
+ desc->data = dma_map_single(&dev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ dev_kfree_skb(skb);
+ port->tx_skb_tab[n] = NULL;
+ stats->tx_dropped++;
+ /* put the desc back on TX-done queue */
+ queue_put_desc(queue_ids[port->id].txdone, phys, desc);
+ return NETDEV_TX_OK;
+ }
+
+ port->tx_skb_tab[n] = skb;
+ wmb();
+ queue_put_desc(queue_ids[port->id].tx, phys, desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].tx));
+ dev->trans_start = jiffies;
+
+ if (qmgr_stat_empty(queue_ids[port->id].txdone)) {
+ netif_stop_queue(dev);
+ /* we could miss TX ready interrupt */
+ if (!qmgr_stat_empty(queue_ids[port->id].txdone)) {
+ netif_start_queue(dev);
+ }
+ }
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit() end\n");
+#endif
+ return NETDEV_TX_OK;
+}
+
+
+static int request_queues(struct port *port)
+{
+ int err;
+
+ err = qmgr_request_queue(queue_ids[port->id].rxfree, RX_DESCS, 0, 0);
+ if (err)
+ return err;
+
+ err = qmgr_request_queue(queue_ids[port->id].rx, RX_DESCS, 0, 0);
+ if (err)
+ goto rel_rxfree;
+
+ err = qmgr_request_queue(queue_ids[port->id].tx, TX_DESCS, 0, 0);
+ if (err)
+ goto rel_rx;
+
+ err = qmgr_request_queue(queue_ids[port->id].txdone, TX_DESCS, 0, 0);
+ if (err)
+ goto rel_tx;
+ return 0;
+
+rel_tx:
+ qmgr_release_queue(queue_ids[port->id].tx);
+rel_rx:
+ qmgr_release_queue(queue_ids[port->id].rx);
+rel_rxfree:
+ qmgr_release_queue(queue_ids[port->id].rxfree);
+ return err;
+}
+
+static void release_queues(struct port *port)
+{
+ qmgr_release_queue(queue_ids[port->id].rxfree);
+ qmgr_release_queue(queue_ids[port->id].rx);
+ qmgr_release_queue(queue_ids[port->id].txdone);
+ qmgr_release_queue(queue_ids[port->id].tx);
+}
+
+static int init_queues(struct port *port)
+{
+ int i;
+
+ if (!dma_pool) {
+ dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
+ 32, 0);
+ if (!dma_pool)
+ return -ENOMEM;
+ }
+
+ if (!(port->desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &port->desc_tab_phys)))
+ return -ENOMEM;
+ memset(port->desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* tables */
+ memset(port->tx_skb_tab, 0, sizeof(port->tx_skb_tab));
+
+ /* Setup RX buffers */
+ for (i = 0; i < RX_DESCS; i++) {
+ struct desc *desc = rx_desc_ptr(port, i);
+ struct sk_buff *skb;
+
+ if (!(skb = netdev_alloc_skb(port->netdev, RX_SIZE)))
+ return -ENOMEM;
+ port->rx_skb_tab[i] = skb;
+ desc->buf_len = RX_SIZE;
+ desc->data = dma_map_single(&port->netdev->dev, skb->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ return -EIO;
+ }
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, i), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ }
+
+ /* Setup TX-done queue */
+ for (i = 0; i < TX_DESCS; i++) {
+ queue_put_desc(queue_ids[port->id].txdone,
+ tx_desc_phys(port, i), tx_desc_ptr(port, i));
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].txdone));
+ }
+ return 0;
+}
+
+static void destroy_queues(struct port *port)
+{
+ int i;
+
+ while (queue_get_desc(queue_ids[port->id].rxfree, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].rx, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].tx, port, 1) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].txdone, port, 1) >= 0)
+ /* nothing to do here */;
+
+ if (port->desc_tab) {
+ for (i = 0; i < RX_DESCS; i++) {
+ struct desc *desc = rx_desc_ptr(port, i);
+ struct sk_buff *skb = port->rx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data, RX_SIZE,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ for (i = 0; i < TX_DESCS; i++) {
+ struct desc *desc = tx_desc_ptr(port, i);
+ struct sk_buff *skb = port->tx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data,
+ desc->buf_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, port->desc_tab, port->desc_tab_phys);
+ port->desc_tab = NULL;
+ }
+
+ if (!ports_open && dma_pool) {
+ dma_pool_destroy(dma_pool);
+ dma_pool = NULL;
+ }
+}
+
+
+static int hss_open(struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+ int i, err;
+
+ if (!npe_running(npe))
+ if ((err = npe_load_firmware(npe, npe_name(npe),
+ &dev->dev)) != 0)
+ return err;
+
+ if ((err = hdlc_open(dev)) != 0)
+ return err;
+
+ if (port->plat->open)
+ if ((err = port->plat->open(port->id, port->netdev,
+ hss_set_carrier)) != 0)
+ goto err_hdlc_close;
+
+ /* HSS main configuration */
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PORT_CONFIG_WRITE;
+ msg.hss_port = port->id;
+ msg.index = 0; /* offset in HSS config */
+
+ msg.data32 = PCR_FRM_PULSE_DISABLED |
+ PCR_SOF_NO_FBIT |
+ PCR_MSB_ENDIAN |
+ PCR_TX_DATA_ENABLE;
+
+ if (port->settings.clock_type == CLOCK_INT)
+ msg.data32 |= PCR_SYNC_CLK_DIR_OUTPUT;
+
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_TX_PCR")) != 0)
+ goto err_plat_close; /* 0: TX PCR */
+
+ msg.index = 4;
+ msg.data32 ^= PCR_TX_DATA_ENABLE | PCR_DCLK_EDGE_RISING;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_RX_PCR")) != 0)
+ goto err_plat_close; /* 4: RX PCR */
+
+ msg.index = 8;
+ msg.data32 = (port->settings.loopback ? CCR_LOOPBACK : 0) |
+ (port->id ? CCR_SECOND_HSS : 0);
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_CORE_CR")) != 0)
+ goto err_plat_close; /* 8: Core CR */
+
+ msg.index = 12;
+ msg.data32 = CLK42X_SPEED_2048KHZ /* FIXME */;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_CLK_CR")) != 0)
+ goto err_plat_close; /* 12: CLK CR */
+
+ msg.data32 = (FRAME_SYNC_OFFSET << 16) | (FRAME_SYNC_SIZE - 1);
+ msg.index = 16;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_TX_FCR")) != 0)
+ goto err_plat_close; /* 16: TX FCR */
+
+ msg.index = 20;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_RX_FCR")) != 0)
+ goto err_plat_close; /* 20: RX FCR */
+
+ msg.data32 = 0; /* Fill LUT with HDLC timeslots */
+ for (i = 0; i < 32 / HSS_LUT_BITS; i++)
+ msg.data32 |= TDMMAP_HDLC << (HSS_LUT_BITS * i);
+
+ for (i = 0; i < 2 /* TX and RX */ * HSS_TIMESLOTS * HSS_LUT_BITS / 8;
+ i += 4) {
+ msg.index = 24 + i; /* 24 - 55: TX LUT, 56 - 87: RX LUT */
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_LUT")) != 0)
+ goto err_plat_close;
+ }
+
+ /* HDLC mode configuration */
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_NUM_PIPES_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = PKT_NUM_PIPES;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_PIPES")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FIFO_SIZEW_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = PKT_PIPE_FIFO_SIZEW;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_FIFO")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_IDLE_PATTERN_WRITE;
+ msg.hss_port = port->id;
+ msg.data32 = 0x7F7F7F7F;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_IDLE")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PORT_CONFIG_LOAD;
+ msg.hss_port = port->id;
+ if ((err = npe_send_message(npe, &msg, "HSS_LOAD_CONFIG")) != 0)
+ goto err_plat_close;
+ if ((err = npe_recv_message(npe, &msg, "HSS_LOAD_CONFIG")) != 0)
+ goto err_plat_close;
+
+ /* HSS_LOAD_CONFIG for port #1 returns port_id = #4 */
+ if (msg.cmd != PORT_CONFIG_LOAD || msg.data32) {
+ printk(KERN_DEBUG "%s: unexpected message received in"
+ " response to HSS_LOAD_CONFIG\n", npe_name(npe));
+ err = -EIO;
+ goto err_plat_close;
+ }
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_HDLC_CFG_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = port->hdlc_cfg; /* rx_cfg */
+ msg.data8[1] = port->hdlc_cfg | (PKT_EXTRA_FLAGS << 3); /* tx_cfg */
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_HDLC_CFG")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_MODE_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = NPE_PKT_MODE_HDLC;
+ /* msg.data8[1] = inv_mask */
+ /* msg.data8[2] = or_mask */
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_MODE")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_RX_SIZE_WRITE;
+ msg.hss_port = port->id;
+ msg.data16[0] = HDLC_MAX_MRU;
+ if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_RX_SIZE")) != 0)
+ goto err_plat_close;
+
+ if ((err = request_queues(port)) != 0)
+ goto err_plat_close;
+
+ if ((err = init_queues(port)) != 0)
+ goto err_destroy_queues;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FLOW_ENABLE;
+ msg.hss_port = port->id;
+ if ((err = npe_send_message(npe, &msg, "HSS_ENABLE_PKT_PIPE")) != 0)
+ goto err_destroy_queues;
+
+ netif_start_queue(dev);
+
+ qmgr_set_irq(queue_ids[port->id].rx, QUEUE_IRQ_SRC_NOT_EMPTY,
+ hss_rx_irq, dev);
+ qmgr_enable_irq(queue_ids[port->id].rx);
+
+ qmgr_set_irq(queue_ids[port->id].txdone, QUEUE_IRQ_SRC_NOT_EMPTY,
+ hss_xmit_ready_irq, dev);
+ qmgr_enable_irq(queue_ids[port->id].txdone);
+
+ ports_open++;
+ return 0;
+
+err_destroy_queues:
+ destroy_queues(port);
+ release_queues(port);
+err_plat_close:
+ if (port->plat->close)
+ port->plat->close(port->id, port->netdev);
+err_hdlc_close:
+ hdlc_close(dev);
+ return err;
+}
+
+static int hss_close(struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+
+ ports_open--;
+ qmgr_disable_irq(queue_ids[port->id].rx);
+ qmgr_disable_irq(queue_ids[port->id].txdone);
+ netif_stop_queue(dev);
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FLOW_DISABLE;
+ msg.hss_port = port->id;
+ if (npe_send_message(npe, &msg, "HSS_DISABLE_PKT_PIPE")) {
+ printk(KERN_CRIT "HSS-%i: unable to stop HDLC flow\n",
+ port->id);
+ /* The upper level would ignore the error anyway */
+ }
+
+ destroy_queues(port);
+ release_queues(port);
+
+ if (port->plat->close)
+ port->plat->close(port->id, port->netdev);
+ hdlc_close(dev);
+ return 0;
+}
+
+
+static int hss_attach(struct net_device *dev, unsigned short encoding,
+ unsigned short parity)
+{
+ struct port *port = dev_to_port(dev);
+
+ if (encoding != ENCODING_NRZ)
+ return -EINVAL;
+
+ switch(parity) {
+ case PARITY_CRC16_PR1_CCITT:
+ port->hdlc_cfg = 0;
+ return 0;
+
+ case PARITY_CRC32_PR1_CCITT:
+ port->hdlc_cfg = PKT_HDLC_CRC_32;
+ return 0;
+
+ default:
+ return -EINVAL;
+ }
+}
+
+
+static int hss_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ const size_t size = sizeof(sync_serial_settings);
+ sync_serial_settings new_line;
+ int clk;
+ sync_serial_settings __user *line = ifr->ifr_settings.ifs_ifsu.sync;
+ struct port *port = dev_to_port(dev);
+
+ if (cmd != SIOCWANDEV)
+ return hdlc_ioctl(dev, ifr, cmd);
+
+ switch(ifr->ifr_settings.type) {
+ case IF_GET_IFACE:
+ ifr->ifr_settings.type = IF_IFACE_V35;
+ if (ifr->ifr_settings.size < size) {
+ ifr->ifr_settings.size = size; /* data size wanted */
+ return -ENOBUFS;
+ }
+ if (copy_to_user(line, &port->settings, size))
+ return -EFAULT;
+ return 0;
+
+ case IF_IFACE_SYNC_SERIAL:
+ case IF_IFACE_V35:
+		if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (dev->flags & IFF_UP)
+ return -EBUSY; /* Cannot change parameters when open */
+
+ if (copy_from_user(&new_line, line, size))
+ return -EFAULT;
+
+ clk = new_line.clock_type;
+ if (port->plat->set_clock)
+ clk = port->plat->set_clock(port->id, clk);
+
+ if (clk != CLOCK_EXT && clk != CLOCK_INT)
+ return -EINVAL; /* No such clock setting */
+
+ if (new_line.loopback != 0 && new_line.loopback != 1)
+ return -EINVAL;
+
+ memcpy(&port->settings, &new_line, size); /* Update settings */
+ return 0;
+
+ default:
+ return hdlc_ioctl(dev, ifr, cmd);
+ }
+}
+
+
+static int __devinit hss_init_one(struct platform_device *pdev)
+{
+ struct port *port;
+ struct net_device *dev;
+ hdlc_device *hdlc;
+ int err;
+
+ if ((port = kzalloc(sizeof(*port), GFP_KERNEL)) == NULL)
+ return -ENOMEM;
+ platform_set_drvdata(pdev, port);
+ port->id = pdev->id;
+
+ if ((port->npe = npe_request(0)) == NULL) {
+ err = -ENOSYS;
+ goto err_free;
+ }
+
+ port->plat = pdev->dev.platform_data;
+ if ((port->netdev = dev = alloc_hdlcdev(port)) == NULL) {
+ err = -ENOMEM;
+ goto err_plat;
+ }
+
+	SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ hdlc = dev_to_hdlc(dev);
+ hdlc->attach = hss_attach;
+ hdlc->xmit = hss_xmit;
+ dev->open = hss_open;
+ dev->poll = hss_poll;
+ dev->stop = hss_close;
+ dev->do_ioctl = hss_ioctl;
+ dev->weight = 16;
+ dev->tx_queue_len = 100;
+ port->settings.clock_type = CLOCK_EXT;
+ port->settings.clock_rate = 2048000;
+
+ if (register_hdlc_device(dev)) {
+ printk(KERN_ERR "HSS-%i: unable to register HDLC device\n",
+ port->id);
+ err = -ENOBUFS;
+ goto err_free_netdev;
+ }
+ printk(KERN_INFO "%s: HSS-%i\n", dev->name, port->id);
+ return 0;
+
+err_free_netdev:
+ free_netdev(dev);
+err_plat:
+ npe_release(port->npe);
+ platform_set_drvdata(pdev, NULL);
+err_free:
+ kfree(port);
+ return err;
+}
+
+static int __devexit hss_remove_one(struct platform_device *pdev)
+{
+ struct port *port = platform_get_drvdata(pdev);
+
+ unregister_hdlc_device(port->netdev);
+ free_netdev(port->netdev);
+ npe_release(port->npe);
+ platform_set_drvdata(pdev, NULL);
+ kfree(port);
+ return 0;
+}
+
+static struct platform_driver drv = {
+ .driver.name = DRV_NAME,
+ .probe = hss_init_one,
+ .remove = hss_remove_one,
+};
+
+static int __init hss_init_module(void)
+{
+ if ((ixp4xx_read_fuses() & (IXP4XX_FUSE_HDLC | IXP4XX_FUSE_HSS)) !=
+ (IXP4XX_FUSE_HDLC | IXP4XX_FUSE_HSS))
+ return -ENOSYS;
+ return platform_driver_register(&drv);
+}
+
+static void __exit hss_cleanup_module(void)
+{
+ platform_driver_unregister(&drv);
+}
+
+MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>");
+MODULE_DESCRIPTION("Intel IXP4xx HSS driver");
+MODULE_LICENSE("GPL v2");
+
+module_init(hss_init_module);
+module_exit(hss_cleanup_module);
diff --git a/drivers/net/ixp4xx/ixp4xx_npe.c b/drivers/net/ixp4xx/ixp4xx_npe.c
new file mode 100644
index 0000000..fb1d91b
--- /dev/null
+++ b/drivers/net/ixp4xx/ixp4xx_npe.c
@@ -0,0 +1,731 @@
+/*
+ * Intel IXP4xx Network Processor Engine driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include "npe.h"
+
+#define DEBUG_MSG 0
+#define DEBUG_FW 0
+
+#define NPE_COUNT 3
+#define MAX_RETRIES 1000 /* microseconds */
+#define NPE_42X_DATA_SIZE 0x800 /* in dwords */
+#define NPE_46X_DATA_SIZE 0x1000
+#define NPE_A_42X_INSTR_SIZE 0x1000
+#define NPE_B_AND_C_42X_INSTR_SIZE 0x800
+#define NPE_46X_INSTR_SIZE 0x1000
+#define REGS_SIZE 0x1000
+
+#define NPE_PHYS_REG 32
+
+#define FW_MAGIC 0xFEEDF00D
+#define FW_BLOCK_TYPE_INSTR 0x0
+#define FW_BLOCK_TYPE_DATA 0x1
+#define FW_BLOCK_TYPE_EOF 0xF
+
+/* NPE exec status (read) and command (write) */
+#define CMD_NPE_STEP 0x01
+#define CMD_NPE_START 0x02
+#define CMD_NPE_STOP 0x03
+#define CMD_NPE_CLR_PIPE 0x04
+#define CMD_CLR_PROFILE_CNT 0x0C
+#define CMD_RD_INS_MEM 0x10 /* instruction memory */
+#define CMD_WR_INS_MEM 0x11
+#define CMD_RD_DATA_MEM 0x12 /* data memory */
+#define CMD_WR_DATA_MEM 0x13
+#define CMD_RD_ECS_REG 0x14 /* exec access register */
+#define CMD_WR_ECS_REG 0x15
+
+#define STAT_RUN 0x80000000
+#define STAT_STOP 0x40000000
+#define STAT_CLEAR 0x20000000
+#define STAT_ECS_K 0x00800000 /* pipeline clean */
+
+#define NPE_STEVT 0x1B
+#define NPE_STARTPC 0x1C
+#define NPE_REGMAP 0x1E
+#define NPE_CINDEX 0x1F
+
+#define INSTR_WR_REG_SHORT 0x0000C000
+#define INSTR_WR_REG_BYTE 0x00004000
+#define INSTR_RD_FIFO 0x0F888220
+#define INSTR_RESET_MBOX 0x0FAC8210
+
+#define ECS_BG_CTXT_REG_0 0x00 /* Background Executing Context */
+#define ECS_BG_CTXT_REG_1 0x01 /* Stack level */
+#define ECS_BG_CTXT_REG_2 0x02
+#define ECS_PRI_1_CTXT_REG_0 0x04 /* Priority 1 Executing Context */
+#define ECS_PRI_1_CTXT_REG_1 0x05 /* Stack level */
+#define ECS_PRI_1_CTXT_REG_2 0x06
+#define ECS_PRI_2_CTXT_REG_0 0x08 /* Priority 2 Executing Context */
+#define ECS_PRI_2_CTXT_REG_1 0x09 /* Stack level */
+#define ECS_PRI_2_CTXT_REG_2 0x0A
+#define ECS_DBG_CTXT_REG_0 0x0C /* Debug Executing Context */
+#define ECS_DBG_CTXT_REG_1 0x0D /* Stack level */
+#define ECS_DBG_CTXT_REG_2 0x0E
+#define ECS_INSTRUCT_REG 0x11 /* NPE Instruction Register */
+
+#define ECS_REG_0_ACTIVE 0x80000000 /* all levels */
+#define ECS_REG_0_NEXTPC_MASK 0x1FFF0000 /* BG/PRI1/PRI2 levels */
+#define ECS_REG_0_LDUR_BITS 8
+#define ECS_REG_0_LDUR_MASK 0x00000700 /* all levels */
+#define ECS_REG_1_CCTXT_BITS 16
+#define ECS_REG_1_CCTXT_MASK 0x000F0000 /* all levels */
+#define ECS_REG_1_SELCTXT_BITS 0
+#define ECS_REG_1_SELCTXT_MASK 0x0000000F /* all levels */
+#define ECS_DBG_REG_2_IF 0x00100000 /* debug level */
+#define ECS_DBG_REG_2_IE 0x00080000 /* debug level */
+
+/* NPE watchpoint_fifo register bit */
+#define WFIFO_VALID 0x80000000
+
+/* NPE messaging_status register bit definitions */
+#define MSGSTAT_OFNE 0x00010000 /* OutFifoNotEmpty */
+#define MSGSTAT_IFNF 0x00020000 /* InFifoNotFull */
+#define MSGSTAT_OFNF 0x00040000 /* OutFifoNotFull */
+#define MSGSTAT_IFNE 0x00080000 /* InFifoNotEmpty */
+#define MSGSTAT_MBINT 0x00100000 /* Mailbox interrupt */
+#define MSGSTAT_IFINT 0x00200000 /* InFifo interrupt */
+#define MSGSTAT_OFINT 0x00400000 /* OutFifo interrupt */
+#define MSGSTAT_WFINT 0x00800000 /* WatchFifo interrupt */
+
+/* NPE messaging_control register bit definitions */
+#define MSGCTL_OUT_FIFO 0x00010000 /* enable output FIFO */
+#define MSGCTL_IN_FIFO 0x00020000 /* enable input FIFO */
+#define MSGCTL_OUT_FIFO_WRITE 0x01000000 /* enable FIFO + WRITE */
+#define MSGCTL_IN_FIFO_WRITE 0x02000000
+
+/* NPE mailbox_status value for reset */
+#define RESET_MBOX_STAT 0x0000F0F0
+
+const char *npe_names[] = { "NPE-A", "NPE-B", "NPE-C" };
+
+#define print_npe(pri, npe, fmt, ...) \
+ printk(pri "%s: " fmt, npe_name(npe), ## __VA_ARGS__)
+
+#if DEBUG_MSG
+#define debug_msg(npe, fmt, ...) \
+ print_npe(KERN_DEBUG, npe, fmt, ## __VA_ARGS__)
+#else
+#define debug_msg(npe, fmt, ...)
+#endif
+
+static struct {
+ u32 reg, val;
+} ecs_reset[] = {
+ { ECS_BG_CTXT_REG_0, 0xA0000000 },
+ { ECS_BG_CTXT_REG_1, 0x01000000 },
+ { ECS_BG_CTXT_REG_2, 0x00008000 },
+ { ECS_PRI_1_CTXT_REG_0, 0x20000080 },
+ { ECS_PRI_1_CTXT_REG_1, 0x01000000 },
+ { ECS_PRI_1_CTXT_REG_2, 0x00008000 },
+ { ECS_PRI_2_CTXT_REG_0, 0x20000080 },
+ { ECS_PRI_2_CTXT_REG_1, 0x01000000 },
+ { ECS_PRI_2_CTXT_REG_2, 0x00008000 },
+ { ECS_DBG_CTXT_REG_0, 0x20000000 },
+ { ECS_DBG_CTXT_REG_1, 0x00000000 },
+ { ECS_DBG_CTXT_REG_2, 0x001E0000 },
+ { ECS_INSTRUCT_REG, 0x1003C00F },
+};
+
+static struct npe npe_tab[NPE_COUNT] = {
+ {
+ .id = 0,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEA_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEA_BASE_PHYS,
+ }, {
+ .id = 1,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEB_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEB_BASE_PHYS,
+ }, {
+ .id = 2,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEC_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEC_BASE_PHYS,
+ }
+};
+
+int npe_running(struct npe *npe)
+{
+ return (__raw_readl(&npe->regs->exec_status_cmd) & STAT_RUN) != 0;
+}
+
+static void npe_cmd_write(struct npe *npe, u32 addr, int cmd, u32 data)
+{
+ __raw_writel(data, &npe->regs->exec_data);
+ __raw_writel(addr, &npe->regs->exec_addr);
+ __raw_writel(cmd, &npe->regs->exec_status_cmd);
+}
+
+static u32 npe_cmd_read(struct npe *npe, u32 addr, int cmd)
+{
+ __raw_writel(addr, &npe->regs->exec_addr);
+ __raw_writel(cmd, &npe->regs->exec_status_cmd);
+	/* Introduce extra read cycles after issuing read command to NPE
+ so that we read the register after the NPE has updated it.
+ This is to overcome race condition between XScale and NPE */
+ __raw_readl(&npe->regs->exec_data);
+ __raw_readl(&npe->regs->exec_data);
+ return __raw_readl(&npe->regs->exec_data);
+}
+
+static void npe_clear_active(struct npe *npe, u32 reg)
+{
+ u32 val = npe_cmd_read(npe, reg, CMD_RD_ECS_REG);
+ npe_cmd_write(npe, reg, CMD_WR_ECS_REG, val & ~ECS_REG_0_ACTIVE);
+}
+
+static void npe_start(struct npe *npe)
+{
+ /* ensure only Background Context Stack Level is active */
+ npe_clear_active(npe, ECS_PRI_1_CTXT_REG_0);
+ npe_clear_active(npe, ECS_PRI_2_CTXT_REG_0);
+ npe_clear_active(npe, ECS_DBG_CTXT_REG_0);
+
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+ __raw_writel(CMD_NPE_START, &npe->regs->exec_status_cmd);
+}
+
+static void npe_stop(struct npe *npe)
+{
+ __raw_writel(CMD_NPE_STOP, &npe->regs->exec_status_cmd);
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd); /*FIXME?*/
+}
+
+static int __must_check npe_debug_instr(struct npe *npe, u32 instr, u32 ctx,
+ u32 ldur)
+{
+ u32 wc;
+ int i;
+
+ /* set the Active bit, and the LDUR, in the debug level */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_0, CMD_WR_ECS_REG,
+ ECS_REG_0_ACTIVE | (ldur << ECS_REG_0_LDUR_BITS));
+
+ /* set CCTXT at ECS DEBUG L3 to specify in which context to execute
+ the instruction, and set SELCTXT at ECS DEBUG Level to specify
+ which context store to access.
+ Debug ECS Level Reg 1 has form 0x000n000n, where n = context number
+ */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_1, CMD_WR_ECS_REG,
+ (ctx << ECS_REG_1_CCTXT_BITS) |
+ (ctx << ECS_REG_1_SELCTXT_BITS));
+
+ /* clear the pipeline */
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+
+ /* load NPE instruction into the instruction register */
+ npe_cmd_write(npe, ECS_INSTRUCT_REG, CMD_WR_ECS_REG, instr);
+
+ /* we need this value later to wait for completion of NPE execution
+ step */
+ wc = __raw_readl(&npe->regs->watch_count);
+
+ /* issue a Step One command via the Execution Control register */
+ __raw_writel(CMD_NPE_STEP, &npe->regs->exec_status_cmd);
+
+ /* Watch Count register increments when NPE completes an instruction */
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (wc != __raw_readl(&npe->regs->watch_count))
+ return 0;
+ udelay(1);
+ }
+
+ print_npe(KERN_ERR, npe, "reset: npe_debug_instr(): timeout\n");
+ return -ETIMEDOUT;
+}
+
+static int __must_check npe_logical_reg_write8(struct npe *npe, u32 addr,
+ u8 val, u32 ctx)
+{
+ /* here we build the NPE assembler instruction: mov8 d0, #0 */
+ u32 instr = INSTR_WR_REG_BYTE | /* OpCode */
+ addr << 9 | /* base Operand */
+ (val & 0x1F) << 4 | /* lower 5 bits to immediate data */
+ (val & ~0x1F) << (18 - 5);/* higher 3 bits to CoProc instr. */
+ return npe_debug_instr(npe, instr, ctx, 1); /* execute it */
+}
+
+static int __must_check npe_logical_reg_write16(struct npe *npe, u32 addr,
+ u16 val, u32 ctx)
+{
+ /* here we build the NPE assembler instruction: mov16 d0, #0 */
+ u32 instr = INSTR_WR_REG_SHORT | /* OpCode */
+ addr << 9 | /* base Operand */
+ (val & 0x1F) << 4 | /* lower 5 bits to immediate data */
+ (val & ~0x1F) << (18 - 5);/* higher 11 bits to CoProc instr. */
+ return npe_debug_instr(npe, instr, ctx, 1); /* execute it */
+}
+
+static int __must_check npe_logical_reg_write32(struct npe *npe, u32 addr,
+ u32 val, u32 ctx)
+{
+ /* write in 16 bit steps first the high and then the low value */
+ if (npe_logical_reg_write16(npe, addr, val >> 16, ctx))
+ return -ETIMEDOUT;
+ return npe_logical_reg_write16(npe, addr + 2, val & 0xFFFF, ctx);
+}
+
+static int npe_reset(struct npe *npe)
+{
+ u32 val, ctl, exec_count, ctx_reg2;
+ int i;
+
+ ctl = (__raw_readl(&npe->regs->messaging_control) | 0x3F000000) &
+ 0x3F3FFFFF;
+
+ /* disable parity interrupt */
+ __raw_writel(ctl & 0x3F00FFFF, &npe->regs->messaging_control);
+
+ /* pre exec - debug instruction */
+ /* turn off the halt bit by clearing Execution Count register. */
+ exec_count = __raw_readl(&npe->regs->exec_count);
+ __raw_writel(0, &npe->regs->exec_count);
+ /* ensure that IF and IE are on (temporarily), so that we don't end up
+ stepping forever */
+ ctx_reg2 = npe_cmd_read(npe, ECS_DBG_CTXT_REG_2, CMD_RD_ECS_REG);
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_2, CMD_WR_ECS_REG, ctx_reg2 |
+ ECS_DBG_REG_2_IF | ECS_DBG_REG_2_IE);
+
+ /* clear the FIFOs */
+ while (__raw_readl(&npe->regs->watchpoint_fifo) & WFIFO_VALID)
+ ;
+ while (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_OFNE)
+ /* read from the outFIFO until empty */
+ print_npe(KERN_DEBUG, npe, "npe_reset: read FIFO = 0x%X\n",
+ __raw_readl(&npe->regs->in_out_fifo));
+
+ while (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE)
+		/* step execution of the NPE instruction to read inFIFO using
+ the Debug Executing Context stack */
+ if (npe_debug_instr(npe, INSTR_RD_FIFO, 0, 0))
+ return -ETIMEDOUT;
+
+ /* reset the mailbox reg from the XScale side */
+ __raw_writel(RESET_MBOX_STAT, &npe->regs->mailbox_status);
+ /* from NPE side */
+ if (npe_debug_instr(npe, INSTR_RESET_MBOX, 0, 0))
+ return -ETIMEDOUT;
+
+ /* Reset the physical registers in the NPE register file */
+ for (val = 0; val < NPE_PHYS_REG; val++) {
+ if (npe_logical_reg_write16(npe, NPE_REGMAP, val >> 1, 0))
+ return -ETIMEDOUT;
+ /* address is either 0 or 4 */
+ if (npe_logical_reg_write32(npe, (val & 1) * 4, 0, 0))
+ return -ETIMEDOUT;
+ }
+
+ /* Reset the context store = each context's Context Store registers */
+
+ /* Context 0 has no STARTPC. Instead, this value is used to set NextPC
+ for Background ECS, to set where NPE starts executing code */
+ val = npe_cmd_read(npe, ECS_BG_CTXT_REG_0, CMD_RD_ECS_REG);
+ val &= ~ECS_REG_0_NEXTPC_MASK;
+ val |= (0 /* NextPC */ << 16) & ECS_REG_0_NEXTPC_MASK;
+ npe_cmd_write(npe, ECS_BG_CTXT_REG_0, CMD_WR_ECS_REG, val);
+
+ for (i = 0; i < 16; i++) {
+ if (i) { /* Context 0 has no STEVT nor STARTPC */
+ /* STEVT = off, 0x80 */
+ if (npe_logical_reg_write8(npe, NPE_STEVT, 0x80, i))
+ return -ETIMEDOUT;
+ if (npe_logical_reg_write16(npe, NPE_STARTPC, 0, i))
+ return -ETIMEDOUT;
+ }
+ /* REGMAP = d0->p0, d8->p2, d16->p4 */
+ if (npe_logical_reg_write16(npe, NPE_REGMAP, 0x820, i))
+ return -ETIMEDOUT;
+ if (npe_logical_reg_write8(npe, NPE_CINDEX, 0, i))
+ return -ETIMEDOUT;
+ }
+
+ /* post exec */
+ /* clear active bit in debug level */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_0, CMD_WR_ECS_REG, 0);
+ /* clear the pipeline */
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+ /* restore previous values */
+ __raw_writel(exec_count, &npe->regs->exec_count);
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_2, CMD_WR_ECS_REG, ctx_reg2);
+
+ /* write reset values to Execution Context Stack registers */
+ for (val = 0; val < ARRAY_SIZE(ecs_reset); val++)
+ npe_cmd_write(npe, ecs_reset[val].reg, CMD_WR_ECS_REG,
+ ecs_reset[val].val);
+
+ /* clear the profile counter */
+ __raw_writel(CMD_CLR_PROFILE_CNT, &npe->regs->exec_status_cmd);
+
+ __raw_writel(0, &npe->regs->exec_count);
+ __raw_writel(0, &npe->regs->action_points[0]);
+ __raw_writel(0, &npe->regs->action_points[1]);
+ __raw_writel(0, &npe->regs->action_points[2]);
+ __raw_writel(0, &npe->regs->action_points[3]);
+ __raw_writel(0, &npe->regs->watch_count);
+
+ val = ixp4xx_read_fuses();
+ /* reset the NPE */
+ ixp4xx_write_fuses(val & ~(IXP4XX_FUSE_RESET_NPEA << npe->id));
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (!(ixp4xx_read_fuses() &
+ (IXP4XX_FUSE_RESET_NPEA << npe->id)))
+ break; /* reset completed */
+ udelay(1);
+ }
+ if (i == MAX_RETRIES)
+ return -ETIMEDOUT;
+
+ /* deassert reset */
+ ixp4xx_write_fuses(val | (IXP4XX_FUSE_RESET_NPEA << npe->id));
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (ixp4xx_read_fuses() & (IXP4XX_FUSE_RESET_NPEA << npe->id))
+ break; /* NPE is back alive */
+ udelay(1);
+ }
+ if (i == MAX_RETRIES)
+ return -ETIMEDOUT;
+
+ npe_stop(npe);
+
+ /* restore NPE configuration bus Control Register - parity settings */
+ __raw_writel(ctl, &npe->regs->messaging_control);
+ return 0;
+}
+
+
+int npe_send_message(struct npe *npe, const void *msg, const char *what)
+{
+ const u32 *send = msg;
+ int cycles = 0;
+
+ debug_msg(npe, "Trying to send message %s [%08X:%08X]\n",
+ what, send[0], send[1]);
+
+ if (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE) {
+ debug_msg(npe, "NPE input FIFO not empty\n");
+ return -EIO;
+ }
+
+ __raw_writel(send[0], &npe->regs->in_out_fifo);
+
+ if (!(__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNF)) {
+ debug_msg(npe, "NPE input FIFO full\n");
+ return -EIO;
+ }
+
+ __raw_writel(send[1], &npe->regs->in_out_fifo);
+
+ while ((cycles < MAX_RETRIES) &&
+ (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE)) {
+ udelay(1);
+ cycles++;
+ }
+
+ if (cycles == MAX_RETRIES) {
+ debug_msg(npe, "Timeout sending message\n");
+ return -ETIMEDOUT;
+ }
+
+ debug_msg(npe, "Sending a message took %i cycles\n", cycles);
+ return 0;
+}
+
+int npe_recv_message(struct npe *npe, void *msg, const char *what)
+{
+ u32 *recv = msg;
+ int cycles = 0, cnt = 0;
+
+ debug_msg(npe, "Trying to receive message %s\n", what);
+
+ while (cycles < MAX_RETRIES) {
+ if (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_OFNE) {
+ recv[cnt++] = __raw_readl(&npe->regs->in_out_fifo);
+ if (cnt == 2)
+ break;
+ } else {
+ udelay(1);
+ cycles++;
+ }
+ }
+
+ switch(cnt) {
+ case 1:
+ debug_msg(npe, "Received [%08X]\n", recv[0]);
+ break;
+ case 2:
+ debug_msg(npe, "Received [%08X:%08X]\n", recv[0], recv[1]);
+ break;
+ }
+
+ if (cycles == MAX_RETRIES) {
+ debug_msg(npe, "Timeout waiting for message\n");
+ return -ETIMEDOUT;
+ }
+
+ debug_msg(npe, "Receiving a message took %i cycles\n", cycles);
+ return 0;
+}
+
+int npe_send_recv_message(struct npe *npe, void *msg, const char *what)
+{
+ int result;
+ u32 *send = msg, recv[2];
+
+ if ((result = npe_send_message(npe, msg, what)) != 0)
+ return result;
+ if ((result = npe_recv_message(npe, recv, what)) != 0)
+ return result;
+
+ if ((recv[0] != send[0]) || (recv[1] != send[1])) {
+ debug_msg(npe, "Message %s: unexpected message received\n",
+ what);
+ return -EIO;
+ }
+ return 0;
+}
+
+
+int npe_load_firmware(struct npe *npe, const char *name, struct device *dev)
+{
+ const struct firmware *fw_entry;
+
+ struct dl_block {
+ u32 type;
+ u32 offset;
+ } *blk;
+
+ struct dl_image {
+ u32 magic;
+ u32 id;
+ u32 size;
+ union {
+ u32 data[0];
+ struct dl_block blocks[0];
+ };
+ } *image;
+
+ struct dl_codeblock {
+ u32 npe_addr;
+ u32 size;
+ u32 data[0];
+ } *cb;
+
+ int i, j, err, data_size, instr_size, blocks, table_end;
+ u32 cmd;
+
+ if ((err = request_firmware(&fw_entry, name, dev)) != 0)
+ return err;
+
+ err = -EINVAL;
+ if (fw_entry->size < sizeof(struct dl_image)) {
+ print_npe(KERN_ERR, npe, "incomplete firmware file\n");
+ goto err;
+ }
+	image = (struct dl_image *)fw_entry->data;
+
+#if DEBUG_FW
+ print_npe(KERN_DEBUG, npe, "firmware: %08X %08X %08X (0x%X bytes)\n",
+ image->magic, image->id, image->size, image->size * 4);
+#endif
+
+ if (image->magic == swab32(FW_MAGIC)) { /* swapped file */
+ image->id = swab32(image->id);
+ image->size = swab32(image->size);
+ } else if (image->magic != FW_MAGIC) {
+ print_npe(KERN_ERR, npe, "bad firmware file magic: 0x%X\n",
+ image->magic);
+ goto err;
+ }
+ if ((image->size * 4 + sizeof(struct dl_image)) != fw_entry->size) {
+ print_npe(KERN_ERR, npe,
+ "inconsistent size of firmware file\n");
+ goto err;
+ }
+ if (((image->id >> 24) & 0xF /* NPE ID */) != npe->id) {
+ print_npe(KERN_ERR, npe, "firmware file NPE ID mismatch\n");
+ goto err;
+ }
+ if (image->magic == swab32(FW_MAGIC))
+ for (i = 0; i < image->size; i++)
+ image->data[i] = swab32(image->data[i]);
+
+ if (!cpu_is_ixp46x() && ((image->id >> 28) & 0xF /* device ID */)) {
+ print_npe(KERN_INFO, npe, "IXP46x firmware ignored on "
+ "IXP42x\n");
+ goto err;
+ }
+
+ if (npe_running(npe)) {
+ print_npe(KERN_INFO, npe, "unable to load firmware, NPE is "
+ "already running\n");
+ err = -EBUSY;
+ goto err;
+ }
+#if 0
+ npe_stop(npe);
+ npe_reset(npe);
+#endif
+
+ print_npe(KERN_INFO, npe, "firmware functionality 0x%X, "
+ "revision 0x%X:%X\n", (image->id >> 16) & 0xFF,
+ (image->id >> 8) & 0xFF, image->id & 0xFF);
+
+ if (!cpu_is_ixp46x()) {
+ if (!npe->id)
+ instr_size = NPE_A_42X_INSTR_SIZE;
+ else
+ instr_size = NPE_B_AND_C_42X_INSTR_SIZE;
+ data_size = NPE_42X_DATA_SIZE;
+ } else {
+ instr_size = NPE_46X_INSTR_SIZE;
+ data_size = NPE_46X_DATA_SIZE;
+ }
+
+ for (blocks = 0; blocks * sizeof(struct dl_block) / 4 < image->size;
+ blocks++)
+ if (image->blocks[blocks].type == FW_BLOCK_TYPE_EOF)
+ break;
+ if (blocks * sizeof(struct dl_block) / 4 >= image->size) {
+ print_npe(KERN_INFO, npe, "firmware EOF block marker not "
+ "found\n");
+ goto err;
+ }
+
+#if DEBUG_FW
+ print_npe(KERN_DEBUG, npe, "%i firmware blocks found\n", blocks);
+#endif
+
+ table_end = blocks * sizeof(struct dl_block) / 4 + 1 /* EOF marker */;
+ for (i = 0, blk = image->blocks; i < blocks; i++, blk++) {
+ if (blk->offset > image->size - sizeof(struct dl_codeblock) / 4
+ || blk->offset < table_end) {
+ print_npe(KERN_INFO, npe, "invalid offset 0x%X of "
+ "firmware block #%i\n", blk->offset, i);
+ goto err;
+ }
+
+		cb = (struct dl_codeblock *)&image->data[blk->offset];
+ if (blk->type == FW_BLOCK_TYPE_INSTR) {
+ if (cb->npe_addr + cb->size > instr_size)
+ goto too_big;
+ cmd = CMD_WR_INS_MEM;
+ } else if (blk->type == FW_BLOCK_TYPE_DATA) {
+ if (cb->npe_addr + cb->size > data_size)
+ goto too_big;
+ cmd = CMD_WR_DATA_MEM;
+ } else {
+ print_npe(KERN_INFO, npe, "invalid firmware block #%i "
+ "type 0x%X\n", i, blk->type);
+ goto err;
+ }
+ if (blk->offset + sizeof(*cb) / 4 + cb->size > image->size) {
+ print_npe(KERN_INFO, npe, "firmware block #%i doesn't "
+ "fit in firmware image: type %c, start 0x%X,"
+ " length 0x%X\n", i,
+ blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
+ cb->npe_addr, cb->size);
+ goto err;
+ }
+
+ for (j = 0; j < cb->size; j++)
+ npe_cmd_write(npe, cb->npe_addr + j, cmd, cb->data[j]);
+ }
+
+ npe_start(npe);
+ if (!npe_running(npe))
+ print_npe(KERN_ERR, npe, "unable to start\n");
+ release_firmware(fw_entry);
+ return 0;
+
+too_big:
+ print_npe(KERN_INFO, npe, "firmware block #%i doesn't fit in NPE "
+ "memory: type %c, start 0x%X, length 0x%X\n", i,
+ blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
+ cb->npe_addr, cb->size);
+err:
+ release_firmware(fw_entry);
+ return err;
+}
+
+
+struct npe *npe_request(int id)
+{
+ if (id < NPE_COUNT)
+ if (npe_tab[id].valid)
+ if (try_module_get(THIS_MODULE))
+ return &npe_tab[id];
+ return NULL;
+}
+
+void npe_release(struct npe *npe)
+{
+ module_put(THIS_MODULE);
+}
+
+
+static int __init npe_init_module(void)
+{
+
+ int i, found = 0;
+
+ for (i = 0; i < NPE_COUNT; i++) {
+ struct npe *npe = &npe_tab[i];
+ if (!(ixp4xx_read_fuses() & (IXP4XX_FUSE_RESET_NPEA << i)))
+ continue; /* NPE already disabled or not present */
+ if (!(npe->mem_res = request_mem_region(npe->regs_phys,
+ REGS_SIZE,
+ npe_name(npe)))) {
+ print_npe(KERN_ERR, npe,
+ "failed to request memory region\n");
+ continue;
+ }
+
+ if (npe_reset(npe))
+ continue;
+ npe->valid = 1;
+ found++;
+ }
+
+ if (!found)
+ return -ENOSYS;
+ return 0;
+}
+
+static void __exit npe_cleanup_module(void)
+{
+ int i;
+
+ for (i = 0; i < NPE_COUNT; i++)
+ if (npe_tab[i].mem_res) {
+ npe_reset(&npe_tab[i]);
+ release_resource(npe_tab[i].mem_res);
+ }
+}
+
+module_init(npe_init_module);
+module_exit(npe_cleanup_module);
+
+MODULE_AUTHOR("Krzysztof Halasa");
+MODULE_LICENSE("GPL v2");
+
+EXPORT_SYMBOL(npe_names);
+EXPORT_SYMBOL(npe_running);
+EXPORT_SYMBOL(npe_request);
+EXPORT_SYMBOL(npe_release);
+EXPORT_SYMBOL(npe_load_firmware);
+EXPORT_SYMBOL(npe_send_message);
+EXPORT_SYMBOL(npe_recv_message);
+EXPORT_SYMBOL(npe_send_recv_message);
diff --git a/drivers/net/ixp4xx/ixp4xx_qmgr.c b/drivers/net/ixp4xx/ixp4xx_qmgr.c
new file mode 100644
index 0000000..7dcb2b6
--- /dev/null
+++ b/drivers/net/ixp4xx/ixp4xx_qmgr.c
@@ -0,0 +1,273 @@
+/*
+ * Intel IXP4xx Queue Manager driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include "qmgr.h"
+
+#define DEBUG 0
+
+struct qmgr_regs __iomem *qmgr_regs;
+static struct resource *mem_res;
+static spinlock_t qmgr_lock;
+static u32 used_sram_bitmap[4]; /* 128 16-dword pages */
+static void (*irq_handlers[HALF_QUEUES])(void *pdev);
+static void *irq_pdevs[HALF_QUEUES];
+
+void qmgr_set_irq(unsigned int queue, int src,
+ void (*handler)(void *pdev), void *pdev)
+{
+ u32 __iomem *reg = &qmgr_regs->irqsrc[queue / 8]; /* 8 queues / u32 */
+ int bit = (queue % 8) * 4; /* 3 bits + 1 reserved bit per queue */
+ unsigned long flags;
+
+ src &= 7;
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel((__raw_readl(reg) & ~(7 << bit)) | (src << bit), reg);
+ irq_handlers[queue] = handler;
+ irq_pdevs[queue] = pdev;
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+
+static irqreturn_t qmgr_irq1(int irq, void *pdev)
+{
+ int i;
+ u32 val = __raw_readl(&qmgr_regs->irqstat[0]);
+ __raw_writel(val, &qmgr_regs->irqstat[0]); /* ACK */
+
+ for (i = 0; i < HALF_QUEUES; i++)
+ if (val & (1 << i))
+ irq_handlers[i](irq_pdevs[i]);
+
+	return val ? IRQ_HANDLED : IRQ_NONE;
+}
+
+
+void qmgr_enable_irq(unsigned int queue)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) | (1 << queue),
+ &qmgr_regs->irqen[0]);
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+void qmgr_disable_irq(unsigned int queue)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) & ~(1 << queue),
+ &qmgr_regs->irqen[0]);
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+static inline void shift_mask(u32 *mask)
+{
+ mask[3] = mask[3] << 1 | mask[2] >> 31;
+ mask[2] = mask[2] << 1 | mask[1] >> 31;
+ mask[1] = mask[1] << 1 | mask[0] >> 31;
+ mask[0] <<= 1;
+}
+
+int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
+ unsigned int nearly_empty_watermark,
+ unsigned int nearly_full_watermark)
+{
+ u32 cfg, addr = 0, mask[4]; /* in 16-dwords */
+ int err;
+
+ if (queue >= HALF_QUEUES)
+ return -ERANGE;
+
+ if ((nearly_empty_watermark | nearly_full_watermark) & ~7)
+ return -EINVAL;
+
+ switch (len) {
+ case 16:
+ cfg = 0 << 24;
+ mask[0] = 0x1;
+ break;
+ case 32:
+ cfg = 1 << 24;
+ mask[0] = 0x3;
+ break;
+ case 64:
+ cfg = 2 << 24;
+ mask[0] = 0xF;
+ break;
+ case 128:
+ cfg = 3 << 24;
+ mask[0] = 0xFF;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ cfg |= nearly_empty_watermark << 26;
+ cfg |= nearly_full_watermark << 29;
+ len /= 16; /* in 16-dwords: 1, 2, 4 or 8 */
+ mask[1] = mask[2] = mask[3] = 0;
+
+ if (!try_module_get(THIS_MODULE))
+ return -ENODEV;
+
+ spin_lock_irq(&qmgr_lock);
+ if (__raw_readl(&qmgr_regs->sram[queue])) {
+ err = -EBUSY;
+ goto err;
+ }
+
+ while (1) {
+ if (!(used_sram_bitmap[0] & mask[0]) &&
+ !(used_sram_bitmap[1] & mask[1]) &&
+ !(used_sram_bitmap[2] & mask[2]) &&
+ !(used_sram_bitmap[3] & mask[3]))
+ break; /* found free space */
+
+ addr++;
+ shift_mask(mask);
+ if (addr + len > ARRAY_SIZE(qmgr_regs->sram)) {
+ printk(KERN_ERR "qmgr: no free SRAM space for"
+ " queue %i\n", queue);
+ err = -ENOMEM;
+ goto err;
+ }
+ }
+
+ used_sram_bitmap[0] |= mask[0];
+ used_sram_bitmap[1] |= mask[1];
+ used_sram_bitmap[2] |= mask[2];
+ used_sram_bitmap[3] |= mask[3];
+ __raw_writel(cfg | (addr << 14), &qmgr_regs->sram[queue]);
+ spin_unlock_irq(&qmgr_lock);
+
+#if DEBUG
+ printk(KERN_DEBUG "qmgr: requested queue %i, addr = 0x%02X\n",
+ queue, addr);
+#endif
+ return 0;
+
+err:
+ spin_unlock_irq(&qmgr_lock);
+ module_put(THIS_MODULE);
+ return err;
+}
+
+void qmgr_release_queue(unsigned int queue)
+{
+ u32 cfg, addr, mask[4];
+
+ BUG_ON(queue >= HALF_QUEUES); /* not in valid range */
+
+ spin_lock_irq(&qmgr_lock);
+ cfg = __raw_readl(&qmgr_regs->sram[queue]);
+ addr = (cfg >> 14) & 0xFF;
+
+ BUG_ON(!addr); /* not requested */
+
+ switch ((cfg >> 24) & 3) {
+ case 0: mask[0] = 0x1; break;
+ case 1: mask[0] = 0x3; break;
+ case 2: mask[0] = 0xF; break;
+ case 3: mask[0] = 0xFF; break;
+ }
+
+ while (addr--)
+ shift_mask(mask);
+
+ __raw_writel(0, &qmgr_regs->sram[queue]);
+
+ used_sram_bitmap[0] &= ~mask[0];
+ used_sram_bitmap[1] &= ~mask[1];
+ used_sram_bitmap[2] &= ~mask[2];
+ used_sram_bitmap[3] &= ~mask[3];
+ irq_handlers[queue] = NULL; /* catch IRQ bugs */
+ spin_unlock_irq(&qmgr_lock);
+
+ module_put(THIS_MODULE);
+#if DEBUG
+ printk(KERN_DEBUG "qmgr: released queue %i\n", queue);
+#endif
+}
+
+static int qmgr_init(void)
+{
+ int i, err;
+ mem_res = request_mem_region(IXP4XX_QMGR_BASE_PHYS,
+ IXP4XX_QMGR_REGION_SIZE,
+ "IXP4xx Queue Manager");
+ if (mem_res == NULL)
+ return -EBUSY;
+
+ qmgr_regs = ioremap(IXP4XX_QMGR_BASE_PHYS, IXP4XX_QMGR_REGION_SIZE);
+ if (qmgr_regs == NULL) {
+ err = -ENOMEM;
+ goto error_map;
+ }
+
+ /* reset qmgr registers */
+ for (i = 0; i < 4; i++) {
+ __raw_writel(0x33333333, &qmgr_regs->stat1[i]);
+ __raw_writel(0, &qmgr_regs->irqsrc[i]);
+ }
+ for (i = 0; i < 2; i++) {
+ __raw_writel(0, &qmgr_regs->stat2[i]);
+ __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[i]); /* clear */
+ __raw_writel(0, &qmgr_regs->irqen[i]);
+ }
+
+ for (i = 0; i < QUEUES; i++)
+ __raw_writel(0, &qmgr_regs->sram[i]);
+
+ err = request_irq(IRQ_IXP4XX_QM1, qmgr_irq1, 0,
+ "IXP4xx Queue Manager", NULL);
+ if (err) {
+ printk(KERN_ERR "qmgr: failed to request IRQ%i\n",
+ IRQ_IXP4XX_QM1);
+ goto error_irq;
+ }
+
+	used_sram_bitmap[0] = 0xF; /* first 4 pages reserved for config */
+ spin_lock_init(&qmgr_lock);
+
+ printk(KERN_INFO "IXP4xx Queue Manager initialized.\n");
+ return 0;
+
+error_irq:
+ iounmap(qmgr_regs);
+error_map:
+ release_resource(mem_res);
+ return err;
+}
+
+static void qmgr_remove(void)
+{
+ free_irq(IRQ_IXP4XX_QM1, NULL);
+ synchronize_irq(IRQ_IXP4XX_QM1);
+ iounmap(qmgr_regs);
+ release_resource(mem_res);
+}
+
+module_init(qmgr_init);
+module_exit(qmgr_remove);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Krzysztof Halasa");
+
+EXPORT_SYMBOL(qmgr_regs);
+EXPORT_SYMBOL(qmgr_set_irq);
+EXPORT_SYMBOL(qmgr_enable_irq);
+EXPORT_SYMBOL(qmgr_disable_irq);
+EXPORT_SYMBOL(qmgr_request_queue);
+EXPORT_SYMBOL(qmgr_release_queue);
diff --git a/drivers/net/ixp4xx/npe.h b/drivers/net/ixp4xx/npe.h
new file mode 100644
index 0000000..fd20bf5
--- /dev/null
+++ b/drivers/net/ixp4xx/npe.h
@@ -0,0 +1,41 @@
+#ifndef __IXP4XX_NPE_H
+#define __IXP4XX_NPE_H
+
+#include <linux/etherdevice.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+
+extern const char *npe_names[];
+
+struct npe_regs {
+ u32 exec_addr, exec_data, exec_status_cmd, exec_count;
+ u32 action_points[4];
+ u32 watchpoint_fifo, watch_count;
+ u32 profile_count;
+ u32 messaging_status, messaging_control;
+ u32 mailbox_status, /*messaging_*/ in_out_fifo;
+};
+
+struct npe {
+ struct resource *mem_res;
+ struct npe_regs __iomem *regs;
+ u32 regs_phys;
+ int id;
+ int valid;
+};
+
+
+static inline const char *npe_name(struct npe *npe)
+{
+ return npe_names[npe->id];
+}
+
+int npe_running(struct npe *npe);
+int npe_send_message(struct npe *npe, const void *msg, const char *what);
+int npe_recv_message(struct npe *npe, void *msg, const char *what);
+int npe_send_recv_message(struct npe *npe, void *msg, const char *what);
+int npe_load_firmware(struct npe *npe, const char *name, struct device *dev);
+struct npe *npe_request(int id);
+void npe_release(struct npe *npe);
+
+#endif /* __IXP4XX_NPE_H */
diff --git a/drivers/net/ixp4xx/qmgr.h b/drivers/net/ixp4xx/qmgr.h
new file mode 100644
index 0000000..d03464a
--- /dev/null
+++ b/drivers/net/ixp4xx/qmgr.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#ifndef IXP4XX_QMGR_H
+#define IXP4XX_QMGR_H
+
+#include <linux/kernel.h>
+#include <asm/io.h>
+
+#define HALF_QUEUES 32
+#define QUEUES 64 /* only 32 lower queues currently supported */
+#define MAX_QUEUE_LENGTH 4 /* in dwords */
+
+#define QUEUE_STAT1_EMPTY 1 /* queue status bits */
+#define QUEUE_STAT1_NEARLY_EMPTY 2
+#define QUEUE_STAT1_NEARLY_FULL 4
+#define QUEUE_STAT1_FULL 8
+#define QUEUE_STAT2_UNDERFLOW 1
+#define QUEUE_STAT2_OVERFLOW 2
+
+#define QUEUE_WATERMARK_0_ENTRIES 0
+#define QUEUE_WATERMARK_1_ENTRY 1
+#define QUEUE_WATERMARK_2_ENTRIES 2
+#define QUEUE_WATERMARK_4_ENTRIES 3
+#define QUEUE_WATERMARK_8_ENTRIES 4
+#define QUEUE_WATERMARK_16_ENTRIES 5
+#define QUEUE_WATERMARK_32_ENTRIES 6
+#define QUEUE_WATERMARK_64_ENTRIES 7
+
+/* queue interrupt request conditions */
+#define QUEUE_IRQ_SRC_EMPTY 0
+#define QUEUE_IRQ_SRC_NEARLY_EMPTY 1
+#define QUEUE_IRQ_SRC_NEARLY_FULL 2
+#define QUEUE_IRQ_SRC_FULL 3
+#define QUEUE_IRQ_SRC_NOT_EMPTY 4
+#define QUEUE_IRQ_SRC_NOT_NEARLY_EMPTY 5
+#define QUEUE_IRQ_SRC_NOT_NEARLY_FULL 6
+#define QUEUE_IRQ_SRC_NOT_FULL 7
+
+struct qmgr_regs {
+ u32 acc[QUEUES][MAX_QUEUE_LENGTH]; /* 0x000 - 0x3FF */
+ u32 stat1[4]; /* 0x400 - 0x40F */
+ u32 stat2[2]; /* 0x410 - 0x417 */
+ u32 statne_h; /* 0x418 - queue nearly empty */
+ u32 statf_h; /* 0x41C - queue full */
+ u32 irqsrc[4]; /* 0x420 - 0x42F IRQ source */
+ u32 irqen[2]; /* 0x430 - 0x437 IRQ enabled */
+ u32 irqstat[2]; /* 0x438 - 0x43F - IRQ access only */
+ u32 reserved[1776];
+ u32 sram[2048]; /* 0x2000 - 0x3FFF - config and buffer */
+};
+
+extern struct qmgr_regs __iomem *qmgr_regs;
+
+void qmgr_set_irq(unsigned int queue, int src,
+ void (*handler)(void *pdev), void *pdev);
+void qmgr_enable_irq(unsigned int queue);
+void qmgr_disable_irq(unsigned int queue);
+
+/* request_ and release_queue() must be called from non-IRQ context */
+int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
+ unsigned int nearly_empty_watermark,
+ unsigned int nearly_full_watermark);
+void qmgr_release_queue(unsigned int queue);
+
+
+static inline void qmgr_put_entry(unsigned int queue, u32 val)
+{
+ __raw_writel(val, &qmgr_regs->acc[queue][0]);
+}
+
+static inline u32 qmgr_get_entry(unsigned int queue)
+{
+ return __raw_readl(&qmgr_regs->acc[queue][0]);
+}
+
+static inline int qmgr_get_stat1(unsigned int queue)
+{
+ return (__raw_readl(&qmgr_regs->stat1[queue >> 3])
+ >> ((queue & 7) << 2)) & 0xF;
+}
+
+static inline int qmgr_get_stat2(unsigned int queue)
+{
+ return (__raw_readl(&qmgr_regs->stat2[queue >> 4])
+ >> ((queue & 0xF) << 1)) & 0x3;
+}
+
+static inline int qmgr_stat_empty(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_EMPTY);
+}
+
+static inline int qmgr_stat_nearly_empty(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_NEARLY_EMPTY);
+}
+
+static inline int qmgr_stat_nearly_full(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_NEARLY_FULL);
+}
+
+static inline int qmgr_stat_full(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_FULL);
+}
+
+static inline int qmgr_stat_underflow(unsigned int queue)
+{
+ return !!(qmgr_get_stat2(queue) & QUEUE_STAT2_UNDERFLOW);
+}
+
+static inline int qmgr_stat_overflow(unsigned int queue)
+{
+ return !!(qmgr_get_stat2(queue) & QUEUE_STAT2_OVERFLOW);
+}
+
+#endif
diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
index 8897f53..373307f 100644
--- a/drivers/net/wan/Kconfig
+++ b/drivers/net/wan/Kconfig
@@ -336,6 +345,16 @@ config DSCC4_PCI_RST
Say Y if your card supports this feature.
+config IXP4XX_HSS
+ tristate "IXP4xx HSS (synchronous serial port) support"
+ depends on ARCH_IXP4XX
+ select IXP4XX_NPE
+ select IXP4XX_QMGR
+ select HDLC
+ help
+ Say Y here if you want to use the built-in HSS ports
+ on the IXP4xx processor.
+
config DLCI
tristate "Frame Relay DLCI support"
depends on WAN
diff --git a/include/asm-arm/arch-ixp4xx/platform.h b/include/asm-arm/arch-ixp4xx/platform.h
index ab194e5..8fc9f7c 100644
--- a/include/asm-arm/arch-ixp4xx/platform.h
+++ b/include/asm-arm/arch-ixp4xx/platform.h
@@ -86,6 +85,25 @@ struct ixp4xx_i2c_pins {
unsigned long scl_pin;
};
+#define IXP4XX_ETH_NPEA 0x00
+#define IXP4XX_ETH_NPEB 0x10
+#define IXP4XX_ETH_NPEC 0x20
+
+/* Information about built-in Ethernet MAC interfaces */
+struct mac_plat_info {
+ u8 phy; /* MII PHY ID, 0 - 31 */
+ u8 rxq; /* configurable, currently 0 - 31 only */
+ u8 hwaddr[6];
+};
+
+/* Information about built-in HSS (synchronous serial) interfaces */
+struct hss_plat_info {
+ int (*set_clock)(int port, unsigned int clock_type);
+ int (*open)(int port, void *pdev,
+ void (*set_carrier_cb)(void *pdev, int carrier));
+ void (*close)(int port, void *pdev);
+};
+
/*
* This structure provides a means for the board setup code
* to give information to the pata_ixp4xx driver. It is
^ permalink raw reply related [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 0:06 ` [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select" Krzysztof Halasa
@ 2007-05-07 1:44 ` Roman Zippel
2007-05-07 9:35 ` Krzysztof Halasa
0 siblings, 1 reply; 88+ messages in thread
From: Roman Zippel @ 2007-05-07 1:44 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Hi,
On Mon, 7 May 2007, Krzysztof Halasa wrote:
> Allow enabling WAN drivers without selecting generic HDLC first,
> HDLC will be selected automatically.
>
> Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
>
> diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
> index 8897f53..3a2fe82 100644
> --- a/drivers/net/wan/Kconfig
> +++ b/drivers/net/wan/Kconfig
> @@ -171,7 +171,8 @@ comment "X.25/LAPB support is disabled"
>
> config PCI200SYN
> tristate "Goramo PCI200SYN support"
> - depends on HDLC && PCI
> + depends on PCI
> + select HDLC
> help
> Driver for PCI200SYN cards by Goramo sp. j.
>
What's the advantage? The HDLC option is directly before this?
bye, Roman
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 2/3] ARM: include IXP4xx "fuses" support
2007-05-07 0:07 ` [PATCH 2/3] ARM: include IXP4xx "fuses" support Krzysztof Halasa
@ 2007-05-07 5:24 ` Alexey Zaytsev
2007-05-07 10:24 ` Krzysztof Halasa
0 siblings, 1 reply; 88+ messages in thread
From: Alexey Zaytsev @ 2007-05-07 5:24 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Hello, Krzysztof.
On 5/7/07, Krzysztof Halasa <khc@pm.waw.pl> wrote:
...
> + IXP4XX_FUSE_RSA | \
> + IXP4XX_FUSE_XSCALE_MAX_FREQ)
> +
#ifndef __ASSEMBLY__
> +static inline u32 ixp4xx_read_fuses(void)
> +{
> + unsigned int fuses = ~*IXP4XX_EXP_CFG2;
...
> + fuses &= ~IXP4XX_FUSE_RESERVED;
> +}
#endif /* __ASSEMBLY__ */
> +
> #endif
Are you sure this is the version you wanted to send? I don't see how this could
compile without this #ifndef. Also, there is a problem with undefined
processor_id; that is not your fault but a flaw in the pre-rc1 kernel,
which I hope has now been noticed. So maybe you should just send the patches made for
the 2.6.21 kernel?
Otherwise, when applied on top of my 2.6.20 kernel, the driver just
works (at least
I see the pings, haven't considered any other tests yet), thank you a lot!
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 1:44 ` Roman Zippel
@ 2007-05-07 9:35 ` Krzysztof Halasa
2007-05-07 11:22 ` Roman Zippel
0 siblings, 1 reply; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 9:35 UTC (permalink / raw)
To: Roman Zippel; +Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Roman Zippel <zippel@linux-m68k.org> writes:
> What's the advantage? The HDLC option is directly before this?
You don't have to know it's required, you can just select a driver
for your hardware, without enabling HDLC first.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 2/3] ARM: include IXP4xx "fuses" support
2007-05-07 5:24 ` Alexey Zaytsev
@ 2007-05-07 10:24 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 10:24 UTC (permalink / raw)
To: Alexey Zaytsev; +Cc: netdev, linux-arm-kernel, Russell King, Jeff Garzik, lkml
Hello,
"Alexey Zaytsev" <alexey.zaytsev@gmail.com> writes:
> #ifndef __ASSEMBLY__
>
>> +static inline u32 ixp4xx_read_fuses(void)
Oops. You're right, of course.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH 2a/3] Intel IXP4xx network drivers
2007-05-06 23:46 [PATCH 0/3] Intel IXP4xx network drivers Krzysztof Halasa
` (2 preceding siblings ...)
2007-05-07 0:07 ` [PATCH 3/3] Intel IXP4xx network drivers Krzysztof Halasa
@ 2007-05-07 10:27 ` Krzysztof Halasa
2007-05-07 20:39 ` [PATCH 0/3] " Leon Woestenberg
2007-05-08 1:40 ` Krzysztof Halasa
5 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 10:27 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Russell King, lkml, netdev, linux-arm-kernel
Adds "fuse" functions to help determine installed IXP4xx CPU
components and to reset/disable/enable them (only NPE - network
coprocessors - can be reset and reenabled).
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h b/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
index 5d949d7..5652c41 100644
--- a/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
+++ b/include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
@@ -607,4 +607,55 @@
#define DCMD_LENGTH 0x01fff /* length mask (max = 8K - 1) */
+/* Fuse Bits of IXP_EXP_CFG2 */
+#define IXP4XX_FUSE_RCOMP (1 << 0)
+#define IXP4XX_FUSE_USB_DEVICE (1 << 1)
+#define IXP4XX_FUSE_HASH (1 << 2)
+#define IXP4XX_FUSE_AES (1 << 3)
+#define IXP4XX_FUSE_DES (1 << 4)
+#define IXP4XX_FUSE_HDLC (1 << 5)
+#define IXP4XX_FUSE_AAL (1 << 6)
+#define IXP4XX_FUSE_HSS (1 << 7)
+#define IXP4XX_FUSE_UTOPIA (1 << 8)
+#define IXP4XX_FUSE_NPEB_ETH0 (1 << 9)
+#define IXP4XX_FUSE_NPEC_ETH (1 << 10)
+#define IXP4XX_FUSE_RESET_NPEA (1 << 11)
+#define IXP4XX_FUSE_RESET_NPEB (1 << 12)
+#define IXP4XX_FUSE_RESET_NPEC (1 << 13)
+#define IXP4XX_FUSE_PCI (1 << 14)
+#define IXP4XX_FUSE_ECC_TIMESYNC (1 << 15)
+#define IXP4XX_FUSE_UTOPIA_PHY_LIMIT (3 << 16)
+#define IXP4XX_FUSE_USB_HOST (1 << 18)
+#define IXP4XX_FUSE_NPEA_ETH (1 << 19)
+#define IXP4XX_FUSE_NPEB_ETH_1_TO_3 (1 << 20)
+#define IXP4XX_FUSE_RSA (1 << 21)
+#define IXP4XX_FUSE_XSCALE_MAX_FREQ (3 << 22)
+#define IXP4XX_FUSE_RESERVED (0xFF << 24)
+
+#define IXP4XX_FUSE_IXP46X_ONLY (IXP4XX_FUSE_ECC_TIMESYNC | \
+ IXP4XX_FUSE_USB_HOST | \
+ IXP4XX_FUSE_NPEA_ETH | \
+ IXP4XX_FUSE_NPEB_ETH_1_TO_3 | \
+ IXP4XX_FUSE_RSA | \
+ IXP4XX_FUSE_XSCALE_MAX_FREQ)
+
+#ifndef __ASSEMBLY__
+
+static inline u32 ixp4xx_read_fuses(void)
+{
+ unsigned int fuses = ~*IXP4XX_EXP_CFG2;
+ fuses &= ~IXP4XX_FUSE_RESERVED;
+ if (!cpu_is_ixp46x())
+ fuses &= ~IXP4XX_FUSE_IXP46X_ONLY;
+
+ return fuses;
+}
+
+static inline void ixp4xx_write_fuses(u32 value)
+{
+ *IXP4XX_EXP_CFG2 = ~value;
+}
+
+#endif /* __ASSEMBLY__ */
+
#endif
^ permalink raw reply related [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 9:35 ` Krzysztof Halasa
@ 2007-05-07 11:22 ` Roman Zippel
2007-05-07 11:56 ` Krzysztof Halasa
0 siblings, 1 reply; 88+ messages in thread
From: Roman Zippel @ 2007-05-07 11:22 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Hi,
On Mon, 7 May 2007, Krzysztof Halasa wrote:
> Roman Zippel <zippel@linux-m68k.org> writes:
>
> > What's the advantage? The HDLC option is directly before this?
>
> You don't have to know it's required, you can just select a driver
> for your hardware, without enabling HDLC first.
Is this a real problem?
Using select you should also consider removing HDLC as a visible option and
use only select. Mixing depends and selects is generally a bad idea.
bye, Roman
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 11:22 ` Roman Zippel
@ 2007-05-07 11:56 ` Krzysztof Halasa
2007-05-07 13:17 ` Roman Zippel
0 siblings, 1 reply; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 11:56 UTC (permalink / raw)
To: Roman Zippel; +Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Roman Zippel <zippel@linux-m68k.org> writes:
>> You don't have to know it's required, you can just select a driver
>> for your hardware, without enabling HDLC first.
>
> Is this a real problem?
I think the "select" is better.
> Using select you should also consider removing HDLC as visible option and
> use only select. Mixing depends and selects is generally a bad idea.
It has to stay there for external modules.
It's similar to MII - drivers select MII automatically but you can
turn it on (Y or M) by hand as well.
And you can have HDLC=y and driver=m (and it makes perfect sense).
Actually I can't see any bad idea here.
The original dependency was certainly, uhm, not the best one.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 0:07 ` [PATCH 3/3] Intel IXP4xx network drivers Krzysztof Halasa
@ 2007-05-07 12:59 ` Michael-Luke Jones
2007-05-08 11:40 ` [PATCH 3/3] Intel IXP4xx network drivers Lennert Buytenhek
1 sibling, 0 replies; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-07 12:59 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
On 7 May 2007, at 01:07, Krzysztof Halasa wrote:
> Adds IXP4xx drivers for built-in CPU components:
> - hardware queue manager
> - NPE (network coprocessors),
> - Ethernet ports,
> - HSS (sync serial) ports (currently only non-channelized HDLC).
>
> Both Ethernet and HSS drivers use queue manager and NPE driver and
> require external firmware file(s) available from www.intel.com.
>
> "Platform device" definitions for Ethernet ports on IXDP425
> development
> platform are provided (though it has been tested on not yet available
> IXP425-based hardware only)
>
> Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
Immediate comments as follows:
(Krzysztof has already seen them in a private email but I'm putting
them out so people can publicly disagree with me if I have got this
wrong.)
Code placement:
Queue Manager & NPE code => arch/arm/mach-ixp4xx
WAN driver code => drivers/net/wan
Eth code => drivers/net/arm
Kconfig:
I'm not convinced about 'config IXP4XX_NETDEVICES'. I'd lose it
together with the drivers/net/ixp4xx directory
Ethernet & HSS code should probably select NPE and QMGR (rather than
depend) but these options should still be exposed in arch/arm/mach-ixp4xx/Kconfig
Michael-Luke Jones
PS: Please cc me on replies as I only subscribe to l-a-k.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 11:56 ` Krzysztof Halasa
@ 2007-05-07 13:17 ` Roman Zippel
2007-05-07 13:21 ` Jeff Garzik
0 siblings, 1 reply; 88+ messages in thread
From: Roman Zippel @ 2007-05-07 13:17 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Hi,
On Mon, 7 May 2007, Krzysztof Halasa wrote:
> Actually I can't see any bad idea here.
> The original dependency was certainly, uhm, not the best one.
select seriously screws with the dependencies, it's especially problematic
if the selected symbol has other dependencies as HDLC in this case, it
makes it only more complicated to get the dependencies correct again.
Please use it only if it solves a real problem.
bye, Roman
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 13:17 ` Roman Zippel
@ 2007-05-07 13:21 ` Jeff Garzik
2007-05-07 13:46 ` Roman Zippel
0 siblings, 1 reply; 88+ messages in thread
From: Jeff Garzik @ 2007-05-07 13:21 UTC (permalink / raw)
To: Roman Zippel
Cc: Krzysztof Halasa, Russell King, lkml, netdev, linux-arm-kernel
Roman Zippel wrote:
> Hi,
>
> On Mon, 7 May 2007, Krzysztof Halasa wrote:
>
>> Actually I can't see any bad idea here.
>> The original dependency was certainly, uhm, not the best one.
>
> select seriously screws with the dependencies, it's especially problematic
> if the selected symbol has other dependencies as HDLC in this case, it
> makes it only more complicated to get the dependencies correct again.
> Please use it only if it solves a real problem.
What he's doing is the standard way to deal with library-style code.
Nothing wrong with the patch, it's continuing established methods.
Jeff
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 13:21 ` Jeff Garzik
@ 2007-05-07 13:46 ` Roman Zippel
2007-05-07 16:50 ` Krzysztof Halasa
0 siblings, 1 reply; 88+ messages in thread
From: Roman Zippel @ 2007-05-07 13:46 UTC (permalink / raw)
To: Jeff Garzik
Cc: Krzysztof Halasa, Russell King, lkml, netdev, linux-arm-kernel
Hi,
On Mon, 7 May 2007, Jeff Garzik wrote:
> > select seriously screws with the dependencies, it's especially problematic
> > if the selected symbol has other dependencies as HDLC in this case, it makes
> > it only more complicated to get the dependencies correct again.
> > Please use it only if it solves a real problem.
>
> What he's doing is the standard way to deal with library-style code. Nothing
> wrong with the patch, it's continuing established methods.
select was never meant as autoconfiguration tool. It can't be said often
enough: select seriously screws with the dependencies, _please_ don't use
it as a simple depends replacement.
HDLC doesn't really look like simple library code, what's up with all the
HDLC_* options?
bye, Roman
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 13:46 ` Roman Zippel
@ 2007-05-07 16:50 ` Krzysztof Halasa
2007-05-07 17:07 ` Roman Zippel
0 siblings, 1 reply; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 16:50 UTC (permalink / raw)
To: Roman Zippel; +Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Roman Zippel <zippel@linux-m68k.org> writes:
> HDLC doesn't really look like simple library code, what's up with all the
> HDLC_* options?
Sub-modules. Anyway, what does the patch "screw" exactly?
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 16:50 ` Krzysztof Halasa
@ 2007-05-07 17:07 ` Roman Zippel
2007-05-07 18:15 ` Satyam Sharma
` (3 more replies)
0 siblings, 4 replies; 88+ messages in thread
From: Roman Zippel @ 2007-05-07 17:07 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Hi,
On Mon, 7 May 2007, Krzysztof Halasa wrote:
> Roman Zippel <zippel@linux-m68k.org> writes:
>
> > HDLC doesn't really look like simple library code, what's up with all the
> > HDLC_* options?
>
> Sub-modules.
So it's not simple library code, or is it?
> Anyway, what does the patch "screw" exactly?
Normal dependencies, you basically have to manually make sure they are
correct (and it seems with your patch they aren't). Again, _please_ (with
sugar on top) don't use select unless you have a good reason for it.
bye, Roman
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 12:59 ` Michael-Luke Jones
(?)
@ 2007-05-07 17:12 ` Krzysztof Halasa
2007-05-07 17:52 ` Christian Hohnstaedt
2007-05-07 18:14 ` Michael-Luke Jones
-1 siblings, 2 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 17:12 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Michael-Luke Jones <mlj28@cam.ac.uk> writes:
> Code placement:
> Queue Manager & NPE code => arch/arm/mach-ixp4xx
> WAN driver code => drivers/net/wan
> Eth code => drivers/net/arm
Why would you want such placement?
Potential problems: header files would have to be moved to
include/asm-arm = headers pollution.
All 4 drivers are, in fact, network (related) drivers.
drivers/net/arm would probably make (some) sense if it was
a single (or not so single) Ethernet driver.
> Kconfig:
> I'm not convinced about 'config IXP4XX_NETDEVICES'. I'd lose it
> together with the drivers/net/ixp4xx directory
It wouldn't make sense without the directory, no doubt.
> Ethernet & HSS code should probably select NPE and QMGR (rather than
> depend)
Actually, that's exactly what this patch does.
> but these options should still be exposed in arch/arm/mach-
> ixp4xx/Kconfig
Why exactly? They are network devices, who would expect them there?
How about the dependency mess (NET_ETHERNET etc.) that would be
created?
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 17:12 ` Krzysztof Halasa
@ 2007-05-07 17:52 ` Christian Hohnstaedt
2007-05-07 18:14 ` Michael-Luke Jones
1 sibling, 0 replies; 88+ messages in thread
From: Christian Hohnstaedt @ 2007-05-07 17:52 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Michael-Luke Jones, netdev, linux-arm-kernel, Russell King,
Jeff Garzik, lkml
On Mon, May 07, 2007 at 07:12:49PM +0200, Krzysztof Halasa wrote:
> Michael-Luke Jones <mlj28@cam.ac.uk> writes:
>
> > Code placement:
> > Queue Manager & NPE code => arch/arm/mach-ixp4xx
> > WAN driver code => drivers/net/wan
> > Eth code => drivers/net/arm
>
> Why would you want such placement?
> Potential problems: header files would have to be moved to
> include/asm-arm = headers pollution.
> All 4 drivers are, in fact, network (related) drivers.
No.
- qmgr is a versatile hardware fifo stack, that is currently
used to exchange data with the NPE.
- the NPE can also be used as DMA engine and for crypto operations.
Both are not network related.
Additionally, the NPE is not only ixp4xx related, but is
also used in IXP23xx CPUs, so it could be placed in
arch/arm/common or arch/arm/xscale ?
- The MAC is used on IXP23xx, too. So the drivers for
both CPU families only differ in the way they exchange
network packets between the NPE and the kernel.
>
> drivers/net/arm would probably make (some) sense if it was
> a single (or not so single) Ethernet driver.
If Queue Manager & NPE move to arch/.... , it can be a single file.
Christian Hohnstaedt
--
Christian Hohnstaedt
Software Engineer
Innominate Security Technologies AG /protecting industrial networks/
tel: +49.30.6392-3285 fax: +49.30.6392-3307
Albert-Einstein-Strasse 14, D-12489 Berlin, Germany
http://www.innominate.com
Register Court: AG Charlottenburg, HR B 81603
Management Board: Joachim Fietz, Dirk Seewald
Chairman of the Supervisory Board: Edward M. Stadum
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 17:12 ` Krzysztof Halasa
@ 2007-05-07 18:14 ` Michael-Luke Jones
2007-05-07 18:14 ` Michael-Luke Jones
1 sibling, 0 replies; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-07 18:14 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, ARM Linux Mailing List,
Lennert Buytenhek
[Added Lennert Buytenhek to CC list]
Hey again,
>> Code placement:
>> Queue Manager & NPE code => arch/arm/mach-ixp4xx
>> WAN driver code => drivers/net/wan
>> Eth code => drivers/net/arm
>
> Why would you want such placement?
> Potential problems: header files would have to be moved to
> include/asm-arm = headers pollution.
Headers for ixp4xx-specific hardware can surely live in the
include/asm-arm/arch-ixp4xx/ quite happily.
> All 4 drivers are, in fact, network (related) drivers.
Despite their name, Network Processing Engines are independent
coprocessors which are only coincidentally attached to MACs for
ethernet / WAN purposes. If Intel would allow us to compile code for
these coprocessors, we could get them to do lots of things other than
networking.
In fact, we already kind of can. Crypto is not networking, and if the
kernel gains ixp4xx crypto support, that should be possible to enable
independently of networking. They can also function as DMA engines,
which should also be independent of networking functionality.
So, the NPE driver (which is basically ixp4xx specific) should be,
for practical purposes, networking-code agnostic. As it is a lump of
code talking to an architecture specific piece of hardware, it should
live in arch/arm/ rather than arch-independent drivers/
(NB: the publically reviewed version of Christian's ixp4xx_net driver
had exactly this file layout, see below)
>> Ethernet & HSS code should probably select NPE and QMGR (rather than
>> depend)
>
> Actually, that's exactly what this patch do.
>
>> but these options should still be exposed in arch/arm/mach-
>> ixp4xx/Kconfig
Sorry, unclear. That sentence was meant as a coherent whole -
agreeing with you that the NPE dependency should use select but then
pointing out that you should still be able to turn NPE support on in
arch/arm/mach-ixp4xx/Kconfig even without selecting any of the
network drivers.
> Why exactly? They are network devices, who would expect them there?
> How about the dependency mess (NET_ETHERNET etc.) that would be
> created?
For networking devices point, see above.
I don't fully understand the specifics, but Christian appeared to
avoid any dependency mess in the publically reviewed version of his
driver (as below).
As I understand it, functions to talk to the NPE should appear in the
NPE driver. The NPE driver should then be called by ethernet/wan/
crypto/dma(?) drivers to carry out the specific firmware-dependent
tasks. I haven't reviewed your code in detail, so I can't comment on
whether this is what you actually do or not.
==Links to the review of Christian's driver==
[1/7] Register & NPE definitions:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038082.html
[2/7] Platform devices (thought unnecessary by Lennert in his review):
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038086.html
[3/7] Stub for Data/Address-Coherent mode setup:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038083.html
[4/7] QMGR driver:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038278.html
[5/7] NPE driver:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038085.html
[6/7] Ethernet driver:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038087.html
[7/7] Documentation:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2007-January/038088.html
Sorry if I'm stating the obvious, but this is a public discussion and
I want to make sure everyone who reads this can see what I mean. If
they disagree with me despite this, so be it :)
Mike-Luke
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 17:07 ` Roman Zippel
@ 2007-05-07 18:15 ` Satyam Sharma
2007-05-07 20:31 ` Jeff Garzik
2007-05-07 20:54 ` Krzysztof Halasa
` (2 subsequent siblings)
3 siblings, 1 reply; 88+ messages in thread
From: Satyam Sharma @ 2007-05-07 18:15 UTC (permalink / raw)
To: Roman Zippel
Cc: Krzysztof Halasa, Jeff Garzik, Russell King, lkml, netdev,
linux-arm-kernel
On 5/7/07, Roman Zippel <zippel@linux-m68k.org> wrote:
> Hi,
>
> On Mon, 7 May 2007, Krzysztof Halasa wrote:
>
> > Roman Zippel <zippel@linux-m68k.org> writes:
> >
> > > HDLC doesn't really look like simple library code, what's up with all the
> > > HDLC_* options?
> >
> > Sub-modules.
>
> So it's not simple library code, or is it?
>
> > Anyway, what does the patch "screw" exactly?
>
> Normal dependencies, you basically have to manually make sure they are
> correct (and it seems with your patch they aren't). Again, _please_ (with
> sugar on top) don't use select unless you have a good reason for it.
Yes, mixing select and depends is a recipe for build disasters. Call
me a rabid fanatic, but I would in fact go as far as to say that this
whole "select" thing in the Kconfig process is one big BUG, and not a
feature. People are lazy by nature and would rather just "select" a
dependency for their config option than burden users with several
"depends".
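As an illustration of the failure mode being alluded to (symbol names invented for the example): select forces a symbol on without checking that symbol's own dependencies, so mixing the two in one chain can yield an unbuildable configuration:

```kconfig
config WAN
	bool "Wan interfaces support"

config HDLC
	tristate "Generic HDLC layer"
	depends on WAN          # HDLC only makes sense with WAN enabled

config MYCARD
	tristate "Some hypothetical WAN card"
	select HDLC             # forces HDLC=y even when WAN=n, so HDLC
	                        # is built despite its unmet dependency
```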
The following rant doesn't apply only to the select above, but
unfortunately, that's precisely what happens when such stuff is
introduced ... they seem like a good idea to the introducer for his
special / rarest-of-rare case, but then others tend to {ab-,mis-}use
it and the use of such primitives soon proliferates even to cases
where they are clearly inapplicable / avoidable.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 18:14 ` Michael-Luke Jones
@ 2007-05-07 19:57 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 19:57 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Having thought about it a bit more, a layout similar to the one
proposed by you may make sense.
Michael-Luke Jones <mlj28@cam.ac.uk> writes:
> Despite their name, Network Processing Engines are independent
> coprocessors which are only coincidentally attached to MACs for
> ethernet / WAN purposes. If Intel would allow us to compile code for
> these coprocessors, we could get them to do lots of things other than
> networking.
Not sure about that. Intel doesn't say much about it, but I think
one can safely assume that while NPEs can probably be programmed
to do other things, their performance comes not from NPE firmware
but from specialized network coprocessors (not NPEs) which can only
do (in hardware) things like Ethernet, HDLC, bit sync, CRC16/32,
and MD5/SHA-1/DES/AES.
I think you can even use MD5 and SHA-1 without any firmware (but
I would have to verify that).
Note that while certain CPUs have the same set of NPEs, they are
missing some network coprocessors and can't do, for example, crypto.
OTOH, yes, they are not, strictly speaking, only network processors.
> Crypto is not networking, and if the
> kernel gains ixp4xx crypto support, that should be possible to enable
> independently of networking.
Yep. Unfortunately I don't know in-kernel crypto code.
> They can also function as DMA engines,
> which should also be independent of networking functionality.
That's what the docs say. Not sure about real-life purpose of
such DMA engine, though.
> So, the NPE driver (which is basically ixp4xx specific) should be,
> for practical purposes, networking-code agnostic. As it is a lump of
> code talking to an architecture specific piece of hardware, it should
> live in arch/arm/ rather than arch-independent drivers/
Well, I'm told that (compatible) NPEs are present on other IXP CPUs.
Not sure about details.
> As I understand it, functions to talk to the NPE should appear in the
> NPE driver. The NPE driver should then be called by ethernet/wan/
> crypto/dma(?) drivers to carry out the specific firmware-dependent
> tasks.
Actually, the NPE code does two things:
a) initializes NPEs and downloads the firmware
b) exchanges control messages with NPEs.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 17:52 ` Christian Hohnstaedt
(?)
@ 2007-05-07 20:00 ` Krzysztof Halasa
2007-05-08 11:48 ` Lennert Buytenhek
-1 siblings, 1 reply; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 20:00 UTC (permalink / raw)
To: Christian Hohnstaedt
Cc: Michael-Luke Jones, netdev, linux-arm-kernel, Russell King,
Jeff Garzik, lkml
Christian Hohnstaedt <chohnstaedt@innominate.com> writes:
> - the NPE can also be used as DMA engine and for crypto operations.
> Both are not network related.
> Additionally, the NPE is not only ixp4xx related, but is
> also used in IXP23xx CPUs, so it could be placed in
> arch/arm/common or arch/arm/xscale ?
>
> - The MAC is used on IXP23xx, too. So the drivers for
> both CPU families only differ in the way they exchange
> network packets between the NPE and the kernel.
Hmm... perhaps someone has a spare device with such an IXP23xx
and wants to donate it to science? :-)
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 19:57 ` Krzysztof Halasa
(?)
@ 2007-05-07 20:18 ` Michael-Luke Jones
2007-05-08 11:46 ` Lennert Buytenhek
-1 siblings, 1 reply; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-07 20:18 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, netdev, lkml, ARM Linux Mailing List, Russell King
On 7 May 2007, at 20:57, Krzysztof Halasa wrote:
> Well, I'm told that (compatible) NPEs are present on other IXP CPUs.
> Not sure about details.
If, by a combined effort, we ever manage to create a generic NPE
driver for the NPEs found in IXP42x/43x/46x/2000/23xx then the driver
should go in arch/arm/npe.c
It's possible, but hard due to the differences in hardware design and
the fact that boards based on anything other than 42x are few and far
between. The vast majority of 'independent' users following mainline
are likely running on 42x boards.
Thus, for now, I would drop the NPE / QMGR code in
arch/arm/mach-ixp4xx/ and concentrate on making it 42x/43x/46x
agnostic. One step at a time :)
Michael-Luke
-------------------------------------------------------------------
List admin: http://lists.arm.linux.org.uk/mailman/listinfo/linux-arm-kernel
FAQ: http://www.arm.linux.org.uk/mailinglists/faq.php
Etiquette: http://www.arm.linux.org.uk/mailinglists/etiquette.php
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 18:15 ` Satyam Sharma
@ 2007-05-07 20:31 ` Jeff Garzik
2007-05-07 20:49 ` Satyam Sharma
` (2 more replies)
0 siblings, 3 replies; 88+ messages in thread
From: Jeff Garzik @ 2007-05-07 20:31 UTC (permalink / raw)
To: Satyam Sharma
Cc: Roman Zippel, Krzysztof Halasa, Russell King, lkml, netdev,
linux-arm-kernel
Satyam Sharma wrote:
> Yes, mixing select and depends is a recipe for build disasters. Call
> me a rabid fanatic, but I would in fact go as far as to say that this
> whole "select" thing in the Kconfig process is one big BUG, and not a
> feature. People are lazy by nature and would rather just "select" a
> dependency for their config option than burden users with several
> "depends".
Tough, the kernel community has voted against you.
It makes far more sense to include a driver during kernel configuration,
and have that driver pull in its libraries via 'select'. The lame
alternative requires developers to know which libraries they need BEFORE
picking their drivers, which is backwards and requires legwork on the
part of the kernel developer.
Jeff
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 0/3] Intel IXP4xx network drivers
2007-05-06 23:46 [PATCH 0/3] Intel IXP4xx network drivers Krzysztof Halasa
` (3 preceding siblings ...)
2007-05-07 10:27 ` [PATCH 2a/3] " Krzysztof Halasa
@ 2007-05-07 20:39 ` Leon Woestenberg
2007-05-07 21:21 ` Krzysztof Halasa
2007-05-08 1:40 ` Krzysztof Halasa
5 siblings, 1 reply; 88+ messages in thread
From: Leon Woestenberg @ 2007-05-07 20:39 UTC (permalink / raw)
To: Krzysztof Halasa, linux-kernel
Cc: Jeff Garzik, Russell King, netdev, linux-arm-kernel
Hello Krzysztof,
...
drivers/net/ixp4xx/ixp4xx_eth.c
drivers/net/ixp4xx/ixp4xx_npe.c
drivers/net/ixp4xx/ixp4xx_qmgr.c
...
suggestion: credit Christian for his work by putting his copyright in
the header as well?
Thanks to both of you!
Leon.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 20:31 ` Jeff Garzik
@ 2007-05-07 20:49 ` Satyam Sharma
2007-05-07 20:50 ` Randy Dunlap
2007-05-07 20:57 ` Roman Zippel
2 siblings, 0 replies; 88+ messages in thread
From: Satyam Sharma @ 2007-05-07 20:49 UTC (permalink / raw)
To: Jeff Garzik
Cc: Roman Zippel, Krzysztof Halasa, Russell King, lkml, netdev,
linux-arm-kernel
On 5/8/07, Jeff Garzik <jeff@garzik.org> wrote:
> Satyam Sharma wrote:
> > Yes, mixing select and depends is a recipe for build disasters. Call
> > me a rabid fanatic, but I would in fact go as far as to say that this
> > whole "select" thing in the Kconfig process is one big BUG, and not a
> > feature. People are lazy by nature and would rather just "select" a
> > dependency for their config option than burden users with several
> > "depends".
>
> Tough, the kernel community has voted against you.
Heh ... I guess I appeared a trifle _too_ fanatic there, but I'm not.
That rant was actually more about controlling its {ab-,mis-}use and
rampant proliferation (even to cases where it is clearly inapplicable
/ avoidable) than about removing it altogether - especially when a
developer is lazy or careless and mixes select and depends in the
same dependency chain, leading to build errors.
> It makes far more sense to include a driver during kernel configuration,
> and have that driver pull in its libraries via 'select'. The lame
> alternative requires developers to know which libraries they need BEFORE
> picking their drivers, which is backwards and requires legwork on the
> part of the kernel developer.
(That bit about a developer _knowing_ which libraries to select
manually before selecting a driver is not really true, you can "?" on
a config item and get all its "depends" and "select" dependencies
anyway -- and it requires more leg work, yes, but then you'll never
have build failures either)
Still, I don't really disagree w.r.t. "select"ing libraries, as long
as we're careful to limit them to _only_ libraries.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 20:31 ` Jeff Garzik
2007-05-07 20:49 ` Satyam Sharma
@ 2007-05-07 20:50 ` Randy Dunlap
2007-05-07 22:39 ` Satyam Sharma
2007-05-07 20:57 ` Roman Zippel
2 siblings, 1 reply; 88+ messages in thread
From: Randy Dunlap @ 2007-05-07 20:50 UTC (permalink / raw)
To: Jeff Garzik
Cc: Satyam Sharma, Roman Zippel, Krzysztof Halasa, Russell King,
lkml, netdev, linux-arm-kernel
On Mon, 07 May 2007 16:31:48 -0400 Jeff Garzik wrote:
> Satyam Sharma wrote:
> > Yes, mixing select and depends is a recipe for build disasters. Call
> > me a rabid fanatic, but I would in fact go as far as to say that this
> > whole "select" thing in the Kconfig process is one big BUG, and not a
> > feature. People are lazy by nature and would rather just "select" a
> > dependency for their config option than burden users with several
> > "depends".
>
> Tough, the kernel community has voted against you.
Andrew (usually) implores people not to use "select" and I agree
with him.
> It makes far more sense to include a driver during kernel configuration,
> and have that driver pull in its libraries via 'select'. The lame
> alternative requires developers to know which libraries they need BEFORE
> picking their drivers, which is backwards and requires legwork on the
> part of the kernel developer.
Developers? If you had said "users," I might agree, but IMO it's
OK (or even Good) for developers to know what libraries their code
uses/requires. Yes, that's a good thing.
---
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 17:07 ` Roman Zippel
2007-05-07 18:15 ` Satyam Sharma
@ 2007-05-07 20:54 ` Krzysztof Halasa
2007-05-07 21:02 ` [PATCH] Use menuconfig objects II - netdev/wan Krzysztof Halasa
2007-05-07 21:08 ` [PATCH 1a/3] WAN Kconfig: change "depends on HDLC" to "select" Krzysztof Halasa
3 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 20:54 UTC (permalink / raw)
To: Roman Zippel; +Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Roman Zippel <zippel@linux-m68k.org> writes:
> Normal dependencies, you basically have to manually make sure they are
> correct (and it seems with your patch they aren't). Again, _please_ (with
> sugar on top) don't use select unless you have a good reason for it.
You perhaps mean the WAN dependency, don't you? I was under the
impression that the "menu" patches had already been merged, so the
WAN dependency would be automatic.
Anything other than that? Sure, I can see now. I can only say,
in my defense, that I was (and still am) very tired. I'm going to
get some coffee.
There is still a very good reason for the select.
CONFIG_HDLC _is_ a simple library, though probably not the most
simple one. I really feel it's an improvement.
Attaching two patches, hopefully the double check is enough.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 20:31 ` Jeff Garzik
2007-05-07 20:49 ` Satyam Sharma
2007-05-07 20:50 ` Randy Dunlap
@ 2007-05-07 20:57 ` Roman Zippel
2 siblings, 0 replies; 88+ messages in thread
From: Roman Zippel @ 2007-05-07 20:57 UTC (permalink / raw)
To: Jeff Garzik
Cc: Satyam Sharma, Krzysztof Halasa, Russell King, lkml, netdev,
linux-arm-kernel
Hi,
On Mon, 7 May 2007, Jeff Garzik wrote:
> Tough, the kernel community has voted against you.
>
> It makes far more sense to include a driver during kernel configuration, and
> have that driver pull in its libraries via 'select'. The lame alternative
> requires developers to know which libraries they need BEFORE picking their
> drivers, which is backwards and requires legwork on the part of the kernel
> developer.
Jeff, there was never anything to "vote" about! There is no
autoconfiguration, kernel configuration isn't ready for Aunt Tilly and
select is no substitute for it...
bye, Roman
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH] Use menuconfig objects II - netdev/wan
2007-05-07 17:07 ` Roman Zippel
2007-05-07 18:15 ` Satyam Sharma
2007-05-07 20:54 ` Krzysztof Halasa
@ 2007-05-07 21:02 ` Krzysztof Halasa
2007-05-07 21:08 ` [PATCH 1a/3] WAN Kconfig: change "depends on HDLC" to "select" Krzysztof Halasa
3 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 21:02 UTC (permalink / raw)
To: Roman Zippel; +Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
From: Jan Engelhardt <jengelh@linux01.gwdg.de>
Change Kconfig objects from "menu, config" into "menuconfig" so
that the user can disable the whole feature without having to
enter the menu first.
Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
---
drivers/net/wan/Kconfig | 34 +++++++++++++++-------------------
1 file changed, 15 insertions(+), 19 deletions(-)
--- linux-2.6.21-mm_20070428.orig/drivers/net/wan/Kconfig
+++ linux-2.6.21-mm_20070428/drivers/net/wan/Kconfig
@@ -2,10 +2,7 @@
# wan devices configuration
#
-menu "Wan interfaces"
- depends on NETDEVICES
-
-config WAN
+menuconfig WAN
bool "Wan interfaces support"
---help---
Wide Area Networks (WANs), such as X.25, Frame Relay and leased
@@ -23,10 +20,12 @@ config WAN
If unsure, say N.
+if WAN
+
# There is no way to detect a comtrol sv11 - force it modular for now.
config HOSTESS_SV11
tristate "Comtrol Hostess SV-11 support"
- depends on WAN && ISA && m && ISA_DMA_API && INET
+ depends on ISA && m && ISA_DMA_API && INET
help
Driver for Comtrol Hostess SV-11 network card which
operates on low speed synchronous serial links at up to
@@ -38,7 +37,7 @@ config HOSTESS_SV11
# The COSA/SRP driver has not been tested as non-modular yet.
config COSA
tristate "COSA/SRP sync serial boards support"
- depends on WAN && ISA && m && ISA_DMA_API
+ depends on ISA && m && ISA_DMA_API
---help---
Driver for COSA and SRP synchronous serial boards.
@@ -62,7 +61,7 @@ config COSA
#
config LANMEDIA
tristate "LanMedia Corp. SSI/V.35, T1/E1, HSSI, T3 boards"
- depends on WAN && PCI
+ depends on PCI
---help---
Driver for the following Lan Media family of serial boards:
@@ -89,7 +88,7 @@ config LANMEDIA
# There is no way to detect a Sealevel board. Force it modular
config SEALEVEL_4021
tristate "Sealevel Systems 4021 support"
- depends on WAN && ISA && m && ISA_DMA_API && INET
+ depends on ISA && m && ISA_DMA_API && INET
help
This is a driver for the Sealevel Systems ACB 56 serial I/O adapter.
@@ -99,7 +98,6 @@ config SEALEVEL_4021
# Generic HDLC
config HDLC
tristate "Generic HDLC layer"
- depends on WAN
help
Say Y to this option if your Linux box contains a WAN (Wide Area
Network) card supported by this driver and you are planning to
@@ -167,7 +165,7 @@ config HDLC_X25
If unsure, say N.
comment "X.25/LAPB support is disabled"
- depends on WAN && HDLC && (LAPB!=m || HDLC!=m) && LAPB!=y
+ depends on HDLC && (LAPB!=m || HDLC!=m) && LAPB!=y
config PCI200SYN
tristate "Goramo PCI200SYN support"
@@ -230,10 +228,10 @@ config PC300_MLPPP
Multilink PPP over the PC300 synchronous communication boards.
comment "Cyclades-PC300 MLPPP support is disabled."
- depends on WAN && HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
+ depends on HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
comment "Refer to the file README.mlppp, provided by PC300 package."
- depends on WAN && HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
+ depends on HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
config PC300TOO
tristate "Cyclades PC300 RSV/X21 alternative support"
@@ -338,7 +336,6 @@ config DSCC4_PCI_RST
config DLCI
tristate "Frame Relay DLCI support"
- depends on WAN
---help---
Support for the Frame Relay protocol.
@@ -385,7 +382,7 @@ config SDLA
# Wan router core.
config WAN_ROUTER_DRIVERS
tristate "WAN router drivers"
- depends on WAN && WAN_ROUTER
+ depends on WAN_ROUTER
---help---
Connect LAN to WAN via Linux box.
@@ -440,7 +437,7 @@ config CYCLOMX_X25
# X.25 network drivers
config LAPBETHER
tristate "LAPB over Ethernet driver (EXPERIMENTAL)"
- depends on WAN && LAPB && X25
+ depends on LAPB && X25
---help---
Driver for a pseudo device (typically called /dev/lapb0) which allows
you to open an LAPB point-to-point connection to some other computer
@@ -456,7 +453,7 @@ config LAPBETHER
config X25_ASY
tristate "X.25 async driver (EXPERIMENTAL)"
- depends on WAN && LAPB && X25
+ depends on LAPB && X25
---help---
Send and receive X.25 frames over regular asynchronous serial
lines such as telephone lines equipped with ordinary modems.
@@ -471,7 +468,7 @@ config X25_ASY
config SBNI
tristate "Granch SBNI12 Leased Line adapter support"
- depends on WAN && X86
+ depends on X86
---help---
Driver for ISA SBNI12-xx cards which are low cost alternatives to
leased line modems.
@@ -497,5 +494,4 @@ config SBNI_MULTILINE
If unsure, say N.
-endmenu
-
+endif # WAN
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH 1a/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 17:07 ` Roman Zippel
` (2 preceding siblings ...)
2007-05-07 21:02 ` [PATCH] Use menuconfig objects II - netdev/wan Krzysztof Halasa
@ 2007-05-07 21:08 ` Krzysztof Halasa
3 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 21:08 UTC (permalink / raw)
To: Roman Zippel; +Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
Allow enabling WAN drivers without selecting generic HDLC first,
HDLC will be selected automatically.
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
--- linux/drivers/net/wan/Kconfig 2007-05-07 22:46:06.000000000 +0200
+++ linux/drivers/net/wan/Kconfig 2007-05-07 22:45:13.000000000 +0200
@@ -169,7 +169,8 @@
config PCI200SYN
tristate "Goramo PCI200SYN support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
help
Driver for PCI200SYN cards by Goramo sp. j.
@@ -183,7 +184,8 @@
config WANXL
tristate "SBE Inc. wanXL support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
help
Driver for wanXL PCI cards by SBE Inc.
@@ -206,7 +208,8 @@
config PC300
tristate "Cyclades-PC300 support (RS-232/V.35, X.21, T1/E1 boards)"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
---help---
Driver for the Cyclades-PC300 synchronous communication boards.
@@ -228,14 +231,15 @@
Multilink PPP over the PC300 synchronous communication boards.
comment "Cyclades-PC300 MLPPP support is disabled."
- depends on HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
+ depends on PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
comment "Refer to the file README.mlppp, provided by PC300 package."
- depends on HDLC && PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
+ depends on PC300 && (PPP=n || !PPP_MULTILINK || PPP_SYNC_TTY=n || !HDLC_PPP)
config PC300TOO
tristate "Cyclades PC300 RSV/X21 alternative support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
help
Alternative driver for PC300 RSV/X21 PCI cards made by
Cyclades, Inc. If you have such a card, say Y here and see
@@ -248,7 +252,8 @@
config N2
tristate "SDL RISCom/N2 support"
- depends on HDLC && ISA
+ depends on ISA
+ select HDLC
help
Driver for RISCom/N2 single or dual channel ISA cards by
SDL Communications Inc.
@@ -265,7 +270,8 @@
config C101
tristate "Moxa C101 support"
- depends on HDLC && ISA
+ depends on ISA
+ select HDLC
help
Driver for C101 SuperSync ISA cards by Moxa Technologies Co., Ltd.
@@ -279,7 +285,8 @@
config FARSYNC
tristate "FarSync T-Series support"
- depends on HDLC && PCI
+ depends on PCI
+ select HDLC
---help---
Support for the FarSync T-Series X.21 (and V.35/V.24) cards by
FarSite Communications Ltd.
@@ -298,7 +305,8 @@
config DSCC4
tristate "Etinc PCISYNC serial board support"
- depends on HDLC && PCI && m
+ depends on PCI && m
+ select HDLC
help
Driver for Etinc PCISYNC boards based on the Infineon (ex. Siemens)
DSCC4 chipset.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 0/3] Intel IXP4xx network drivers
2007-05-07 20:39 ` [PATCH 0/3] " Leon Woestenberg
@ 2007-05-07 21:21 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-07 21:21 UTC (permalink / raw)
To: Leon Woestenberg
Cc: linux-kernel, Jeff Garzik, Russell King, netdev, linux-arm-kernel
Hello,
"Leon Woestenberg" <leon.woestenberg@gmail.com> writes:
> drivers/net/ixp4xx/ixp4xx_eth.c
> drivers/net/ixp4xx/ixp4xx_npe.c
> drivers/net/ixp4xx/ixp4xx_qmgr.c
> ...
>
> suggestion: credit Christian for his work by putting his copyright in
> the header as well?
Obviously, as I wrote, his work was a great help for me.
However, copyright is a different thing: this is a complete
rewrite, with maybe two small fragments adapted from his code
(including the "fuses" thing). I can put info in the sources,
though.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 20:50 ` Randy Dunlap
@ 2007-05-07 22:39 ` Satyam Sharma
2007-05-07 22:52 ` Randy Dunlap
0 siblings, 1 reply; 88+ messages in thread
From: Satyam Sharma @ 2007-05-07 22:39 UTC (permalink / raw)
To: Randy Dunlap
Cc: Jeff Garzik, Roman Zippel, Krzysztof Halasa, Russell King, lkml,
netdev, linux-arm-kernel
On 5/8/07, Randy Dunlap <randy.dunlap@oracle.com> wrote:
> On Mon, 07 May 2007 16:31:48 -0400 Jeff Garzik wrote:
>
> > Satyam Sharma wrote:
> > > Yes, mixing select and depends is a recipe for build disasters. Call
> > > me a rabid fanatic, but I would in fact go as far as to say that this
> > > whole "select" thing in the Kconfig process is one big BUG, and not a
> > > feature. People are lazy by nature and would rather just "select" a
> > > dependency for their config option than burden users with several
> > > "depends".
> >
> > Tough, the kernel community has voted against you.
>
> Andrew (usually) implores people not to use "select" and I agree
> with him.
>
> > It makes far more sense to include a driver during kernel configuration,
> > and have that driver pull in its libraries via 'select'. The lame
> > alternative requires developers to know which libraries they need BEFORE
> > picking their drivers, which is backwards and requires legwork on the
> > part of the kernel developer.
>
> Developers? If you had said "users," I might agree, but IMO it's
> OK (or even Good) for developers to know what libraries their code
> uses/requires. Yes, that's a good thing.
You're absolutely right, but to give Jeff the benefit of the doubt I'm
sure he _meant_ "users" there although he said "developers". Stating
the obvious, the developer _has_ to know what stuff his code uses
anyway, otherwise what would he "select" or "depend" his config
option on.
As for users, we _can_ avoid pitfalls by building a complete
dependency tree and just selecting _everything_ that we require for a
particular config option to be selected, but some users could
conceivably prefer only being _told_ about what else they need to
successfully pick a config option (than everything just getting in
behind their backs). Actually (correct me if I'm wrong), this is not
presently possible: an option is not visible unless dependencies are
already picked. Just a suggestion, though.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select"
2007-05-07 22:39 ` Satyam Sharma
@ 2007-05-07 22:52 ` Randy Dunlap
0 siblings, 0 replies; 88+ messages in thread
From: Randy Dunlap @ 2007-05-07 22:52 UTC (permalink / raw)
To: Satyam Sharma
Cc: Jeff Garzik, Roman Zippel, Krzysztof Halasa, Russell King, lkml,
netdev, linux-arm-kernel
Satyam Sharma wrote:
> On 5/8/07, Randy Dunlap <randy.dunlap@oracle.com> wrote:
>> On Mon, 07 May 2007 16:31:48 -0400 Jeff Garzik wrote:
>>
>> > Satyam Sharma wrote:
>> > > Yes, mixing select and depends is a recipe for build disasters. Call
>> > > me a rabid fanatic, but I would in fact go as far as to say that this
>> > > whole "select" thing in the Kconfig process is one big BUG, and not a
>> > > feature. People are lazy by nature and would rather just "select" a
>> > > dependency for their config option than burden users with several
>> > > "depends".
>> >
>> > Tough, the kernel community has voted against you.
>>
>> Andrew (usually) implores people not to use "select" and I agree
>> with him.
>>
>> > It makes far more sense to include a driver during kernel
>> configuration,
>> > and have that driver pull in its libraries via 'select'. The lame
>> > alternative requires developers to know which libraries they need
>> BEFORE
>> > picking their drivers, which is backwards and requires legwork on the
>> > part of the kernel developer.
>>
>> Developers? If you had said "users," I might agree, but IMO it's
>> OK (or even Good) for developers to know what libraries their code
>> uses/requires. Yes, that's a good thing.
>
> You're absolutely right, but to give Jeff the benefit of the doubt I'm
> sure he _meant_ "users" there although he said "developers". Stating
> the obvious, the developer _has_ to know what stuff his code uses
> anyway, otherwise what would he "select" or "depend" his config
> option on.
>
> As for users, we _can_ avoid pitfalls by building a complete
> dependency tree and just selecting _everything_ that we require for a
> particular config option to be selected, but some users could
> conceivably prefer only being _told_ about what else they need to
> successfully pick a config option (than everything just getting in
> behind their backs). Actually (correct me if I'm wrong), this is not
> presently possible: an option is not visible unless dependencies are
> already picked. Just a suggestion, though.
That's correct for menuconfig. For xconfig, there are GUI options to
Show Name
Show Range
Show Data
Show All Options
Show Debug Info
I often have all of them enabled.
--
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***
^ permalink raw reply [flat|nested] 88+ messages in thread
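The trade-off debated in this sub-thread can be made concrete with a minimal Kconfig sketch. FOO_DRIVER is a hypothetical symbol standing in for any driver that needs the HDLC library; the actual DSCC4 hunk in patch 1/3 follows the second style:

```kconfig
# Style 1, "depends on": FOO_DRIVER stays invisible in menuconfig
# until the user has already enabled HDLC elsewhere, so they must
# know about the library before they can even see the driver.
config FOO_DRIVER
	tristate "Foo WAN board support"
	depends on HDLC && PCI

# Style 2, "select": picking FOO_DRIVER force-enables HDLC, so the
# user never has to hunt down the library first - at the cost of
# HDLC being switched on "behind their back".
config FOO_DRIVER
	tristate "Foo WAN board support"
	depends on PCI
	select HDLC
```

Note that `select` ignores the selected symbol's own dependencies, which is the root of the "mixing select and depends" build breakage mentioned above.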
* [PATCH] Intel IXP4xx network drivers v.2
2007-05-07 19:57 ` Krzysztof Halasa
@ 2007-05-08 0:11 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 0:11 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Adds a driver for IXP4xx built-in hardware queue manager.
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig
index 9715ef5..71ef55f 100644
--- a/arch/arm/mach-ixp4xx/Kconfig
+++ b/arch/arm/mach-ixp4xx/Kconfig
@@ -176,6 +176,12 @@ config IXP4XX_INDIRECT_PCI
need to use the indirect method instead. If you don't know
what you need, leave this option unselected.
+config IXP4XX_QMGR
+ tristate "IXP4xx Queue Manager support"
+ help
+ This driver supports IXP4xx built-in hardware queue manager
+ and is automatically selected by Ethernet and HSS drivers.
+
endmenu
endif
diff --git a/arch/arm/mach-ixp4xx/Makefile b/arch/arm/mach-ixp4xx/Makefile
index 3b87c47..f8e1afc 100644
--- a/arch/arm/mach-ixp4xx/Makefile
+++ b/arch/arm/mach-ixp4xx/Makefile
@@ -26,3 +26,4 @@ obj-$(CONFIG_MACH_NAS100D) += nas100d-setup.o nas100d-power.o
obj-$(CONFIG_MACH_DSMG600) += dsmg600-setup.o dsmg600-power.o
obj-$(CONFIG_PCI) += $(obj-pci-$(CONFIG_PCI)) common-pci.o
+obj-$(CONFIG_IXP4XX_QMGR) += ixp4xx_qmgr.o
diff --git a/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c b/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c
new file mode 100644
index 0000000..b9e9bd6
--- /dev/null
+++ b/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c
@@ -0,0 +1,273 @@
+/*
+ * Intel IXP4xx Queue Manager driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <asm/arch/qmgr.h>
+
+#define DEBUG 0
+
+struct qmgr_regs __iomem *qmgr_regs;
+static struct resource *mem_res;
+static spinlock_t qmgr_lock;
+static u32 used_sram_bitmap[4]; /* 128 16-dword pages */
+static void (*irq_handlers[HALF_QUEUES])(void *pdev);
+static void *irq_pdevs[HALF_QUEUES];
+
+void qmgr_set_irq(unsigned int queue, int src,
+ void (*handler)(void *pdev), void *pdev)
+{
+ u32 __iomem *reg = &qmgr_regs->irqsrc[queue / 8]; /* 8 queues / u32 */
+ int bit = (queue % 8) * 4; /* 3 bits + 1 reserved bit per queue */
+ unsigned long flags;
+
+ src &= 7;
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel((__raw_readl(reg) & ~(7 << bit)) | (src << bit), reg);
+ irq_handlers[queue] = handler;
+ irq_pdevs[queue] = pdev;
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+
+static irqreturn_t qmgr_irq1(int irq, void *pdev)
+{
+ int i;
+ u32 val = __raw_readl(&qmgr_regs->irqstat[0]);
+ __raw_writel(val, &qmgr_regs->irqstat[0]); /* ACK */
+
+ for (i = 0; i < HALF_QUEUES; i++)
+ if (val & (1 << i))
+ irq_handlers[i](irq_pdevs[i]);
+
+ return val ? IRQ_HANDLED : IRQ_NONE;
+}
+
+
+void qmgr_enable_irq(unsigned int queue)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) | (1 << queue),
+ &qmgr_regs->irqen[0]);
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+void qmgr_disable_irq(unsigned int queue)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) & ~(1 << queue),
+ &qmgr_regs->irqen[0]);
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+static inline void shift_mask(u32 *mask)
+{
+ mask[3] = mask[3] << 1 | mask[2] >> 31;
+ mask[2] = mask[2] << 1 | mask[1] >> 31;
+ mask[1] = mask[1] << 1 | mask[0] >> 31;
+ mask[0] <<= 1;
+}
+
+int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
+ unsigned int nearly_empty_watermark,
+ unsigned int nearly_full_watermark)
+{
+ u32 cfg, addr = 0, mask[4]; /* in 16-dwords */
+ int err;
+
+ if (queue >= HALF_QUEUES)
+ return -ERANGE;
+
+ if ((nearly_empty_watermark | nearly_full_watermark) & ~7)
+ return -EINVAL;
+
+ switch (len) {
+ case 16:
+ cfg = 0 << 24;
+ mask[0] = 0x1;
+ break;
+ case 32:
+ cfg = 1 << 24;
+ mask[0] = 0x3;
+ break;
+ case 64:
+ cfg = 2 << 24;
+ mask[0] = 0xF;
+ break;
+ case 128:
+ cfg = 3 << 24;
+ mask[0] = 0xFF;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ cfg |= nearly_empty_watermark << 26;
+ cfg |= nearly_full_watermark << 29;
+ len /= 16; /* in 16-dwords: 1, 2, 4 or 8 */
+ mask[1] = mask[2] = mask[3] = 0;
+
+ if (!try_module_get(THIS_MODULE))
+ return -ENODEV;
+
+ spin_lock_irq(&qmgr_lock);
+ if (__raw_readl(&qmgr_regs->sram[queue])) {
+ err = -EBUSY;
+ goto err;
+ }
+
+ while (1) {
+ if (!(used_sram_bitmap[0] & mask[0]) &&
+ !(used_sram_bitmap[1] & mask[1]) &&
+ !(used_sram_bitmap[2] & mask[2]) &&
+ !(used_sram_bitmap[3] & mask[3]))
+ break; /* found free space */
+
+ addr++;
+ shift_mask(mask);
+ if (addr + len > ARRAY_SIZE(qmgr_regs->sram)) {
+ printk(KERN_ERR "qmgr: no free SRAM space for"
+ " queue %i\n", queue);
+ err = -ENOMEM;
+ goto err;
+ }
+ }
+
+ used_sram_bitmap[0] |= mask[0];
+ used_sram_bitmap[1] |= mask[1];
+ used_sram_bitmap[2] |= mask[2];
+ used_sram_bitmap[3] |= mask[3];
+ __raw_writel(cfg | (addr << 14), &qmgr_regs->sram[queue]);
+ spin_unlock_irq(&qmgr_lock);
+
+#if DEBUG
+ printk(KERN_DEBUG "qmgr: requested queue %i, addr = 0x%02X\n",
+ queue, addr);
+#endif
+ return 0;
+
+err:
+ spin_unlock_irq(&qmgr_lock);
+ module_put(THIS_MODULE);
+ return err;
+}
+
+void qmgr_release_queue(unsigned int queue)
+{
+ u32 cfg, addr, mask[4];
+
+ BUG_ON(queue >= HALF_QUEUES); /* not in valid range */
+
+ spin_lock_irq(&qmgr_lock);
+ cfg = __raw_readl(&qmgr_regs->sram[queue]);
+ addr = (cfg >> 14) & 0xFF;
+
+ BUG_ON(!addr); /* not requested */
+
+ switch ((cfg >> 24) & 3) {
+ case 0: mask[0] = 0x1; break;
+ case 1: mask[0] = 0x3; break;
+ case 2: mask[0] = 0xF; break;
+ case 3: mask[0] = 0xFF; break;
+ }
+
+ while (addr--)
+ shift_mask(mask);
+
+ __raw_writel(0, &qmgr_regs->sram[queue]);
+
+ used_sram_bitmap[0] &= ~mask[0];
+ used_sram_bitmap[1] &= ~mask[1];
+ used_sram_bitmap[2] &= ~mask[2];
+ used_sram_bitmap[3] &= ~mask[3];
+ irq_handlers[queue] = NULL; /* catch IRQ bugs */
+ spin_unlock_irq(&qmgr_lock);
+
+ module_put(THIS_MODULE);
+#if DEBUG
+ printk(KERN_DEBUG "qmgr: released queue %i\n", queue);
+#endif
+}
+
+static int qmgr_init(void)
+{
+ int i, err;
+ mem_res = request_mem_region(IXP4XX_QMGR_BASE_PHYS,
+ IXP4XX_QMGR_REGION_SIZE,
+ "IXP4xx Queue Manager");
+ if (mem_res == NULL)
+ return -EBUSY;
+
+ qmgr_regs = ioremap(IXP4XX_QMGR_BASE_PHYS, IXP4XX_QMGR_REGION_SIZE);
+ if (qmgr_regs == NULL) {
+ err = -ENOMEM;
+ goto error_map;
+ }
+
+ /* reset qmgr registers */
+ for (i = 0; i < 4; i++) {
+ __raw_writel(0x33333333, &qmgr_regs->stat1[i]);
+ __raw_writel(0, &qmgr_regs->irqsrc[i]);
+ }
+ for (i = 0; i < 2; i++) {
+ __raw_writel(0, &qmgr_regs->stat2[i]);
+ __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[i]); /* clear */
+ __raw_writel(0, &qmgr_regs->irqen[i]);
+ }
+
+ for (i = 0; i < QUEUES; i++)
+ __raw_writel(0, &qmgr_regs->sram[i]);
+
+ err = request_irq(IRQ_IXP4XX_QM1, qmgr_irq1, 0,
+ "IXP4xx Queue Manager", NULL);
+ if (err) {
+ printk(KERN_ERR "qmgr: failed to request IRQ%i\n",
+ IRQ_IXP4XX_QM1);
+ goto error_irq;
+ }
+
+ used_sram_bitmap[0] = 0xF; /* 4 first pages reserved for config */
+ spin_lock_init(&qmgr_lock);
+
+ printk(KERN_INFO "IXP4xx Queue Manager initialized.\n");
+ return 0;
+
+error_irq:
+ iounmap(qmgr_regs);
+error_map:
+ release_resource(mem_res);
+ return err;
+}
+
+static void qmgr_remove(void)
+{
+ free_irq(IRQ_IXP4XX_QM1, NULL);
+ synchronize_irq(IRQ_IXP4XX_QM1);
+ iounmap(qmgr_regs);
+ release_resource(mem_res);
+}
+
+module_init(qmgr_init);
+module_exit(qmgr_remove);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Krzysztof Halasa");
+
+EXPORT_SYMBOL(qmgr_regs);
+EXPORT_SYMBOL(qmgr_set_irq);
+EXPORT_SYMBOL(qmgr_enable_irq);
+EXPORT_SYMBOL(qmgr_disable_irq);
+EXPORT_SYMBOL(qmgr_request_queue);
+EXPORT_SYMBOL(qmgr_release_queue);
^ permalink raw reply related [flat|nested] 88+ messages in thread
* [PATCH] Intel IXP4xx network drivers v.2 - NPE
2007-05-07 19:57 ` Krzysztof Halasa
@ 2007-05-08 0:36 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 0:36 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Adds a driver for built-in IXP4xx Network Processor Engines.
This patch requires IXP4xx Queue Manager driver and the "fuses" patch.
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig
index 71ef55f..25f8994 100644
--- a/arch/arm/mach-ixp4xx/Kconfig
+++ b/arch/arm/mach-ixp4xx/Kconfig
@@ -182,6 +182,14 @@ config IXP4XX_QMGR
This driver supports IXP4xx built-in hardware queue manager
and is automatically selected by Ethernet and HSS drivers.
+config IXP4XX_NPE
+ tristate "IXP4xx Network Processor Engine support"
+ select HOTPLUG
+ select FW_LOADER
+ help
+ This driver supports IXP4xx built-in network coprocessors
+ and is automatically selected by Ethernet and HSS drivers.
+
endmenu
endif
diff --git a/arch/arm/mach-ixp4xx/Makefile b/arch/arm/mach-ixp4xx/Makefile
index f8e1afc..33d4b88 100644
--- a/arch/arm/mach-ixp4xx/Makefile
+++ b/arch/arm/mach-ixp4xx/Makefile
@@ -27,3 +27,4 @@ obj-$(CONFIG_MACH_DSMG600) += dsmg600-setup.o dsmg600-power.o
obj-$(CONFIG_PCI) += $(obj-pci-$(CONFIG_PCI)) common-pci.o
obj-$(CONFIG_IXP4XX_QMGR) += ixp4xx_qmgr.o
+obj-$(CONFIG_IXP4XX_NPE) += ixp4xx_npe.o
diff --git a/arch/arm/mach-ixp4xx/ixp4xx_npe.c b/arch/arm/mach-ixp4xx/ixp4xx_npe.c
new file mode 100644
index 0000000..4c77d8a
--- /dev/null
+++ b/arch/arm/mach-ixp4xx/ixp4xx_npe.c
@@ -0,0 +1,737 @@
+/*
+ * Intel IXP4xx Network Processor Engine driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * The code is based on publicly available information:
+ * - Intel IXP4xx Developer's Manual and other e-papers
+ * - Intel IXP400 Access Library Software (BSD license)
+ * - previous works by Christian Hohnstaedt <chohnstaedt@innominate.com>
+ * Thanks, Christian.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/arch/npe.h>
+
+#define DEBUG_MSG 0
+#define DEBUG_FW 0
+
+#define NPE_COUNT 3
+#define MAX_RETRIES 1000 /* microseconds */
+#define NPE_42X_DATA_SIZE 0x800 /* in dwords */
+#define NPE_46X_DATA_SIZE 0x1000
+#define NPE_A_42X_INSTR_SIZE 0x1000
+#define NPE_B_AND_C_42X_INSTR_SIZE 0x800
+#define NPE_46X_INSTR_SIZE 0x1000
+#define REGS_SIZE 0x1000
+
+#define NPE_PHYS_REG 32
+
+#define FW_MAGIC 0xFEEDF00D
+#define FW_BLOCK_TYPE_INSTR 0x0
+#define FW_BLOCK_TYPE_DATA 0x1
+#define FW_BLOCK_TYPE_EOF 0xF
+
+/* NPE exec status (read) and command (write) */
+#define CMD_NPE_STEP 0x01
+#define CMD_NPE_START 0x02
+#define CMD_NPE_STOP 0x03
+#define CMD_NPE_CLR_PIPE 0x04
+#define CMD_CLR_PROFILE_CNT 0x0C
+#define CMD_RD_INS_MEM 0x10 /* instruction memory */
+#define CMD_WR_INS_MEM 0x11
+#define CMD_RD_DATA_MEM 0x12 /* data memory */
+#define CMD_WR_DATA_MEM 0x13
+#define CMD_RD_ECS_REG 0x14 /* exec access register */
+#define CMD_WR_ECS_REG 0x15
+
+#define STAT_RUN 0x80000000
+#define STAT_STOP 0x40000000
+#define STAT_CLEAR 0x20000000
+#define STAT_ECS_K 0x00800000 /* pipeline clean */
+
+#define NPE_STEVT 0x1B
+#define NPE_STARTPC 0x1C
+#define NPE_REGMAP 0x1E
+#define NPE_CINDEX 0x1F
+
+#define INSTR_WR_REG_SHORT 0x0000C000
+#define INSTR_WR_REG_BYTE 0x00004000
+#define INSTR_RD_FIFO 0x0F888220
+#define INSTR_RESET_MBOX 0x0FAC8210
+
+#define ECS_BG_CTXT_REG_0 0x00 /* Background Executing Context */
+#define ECS_BG_CTXT_REG_1 0x01 /* Stack level */
+#define ECS_BG_CTXT_REG_2 0x02
+#define ECS_PRI_1_CTXT_REG_0 0x04 /* Priority 1 Executing Context */
+#define ECS_PRI_1_CTXT_REG_1 0x05 /* Stack level */
+#define ECS_PRI_1_CTXT_REG_2 0x06
+#define ECS_PRI_2_CTXT_REG_0 0x08 /* Priority 2 Executing Context */
+#define ECS_PRI_2_CTXT_REG_1 0x09 /* Stack level */
+#define ECS_PRI_2_CTXT_REG_2 0x0A
+#define ECS_DBG_CTXT_REG_0 0x0C /* Debug Executing Context */
+#define ECS_DBG_CTXT_REG_1 0x0D /* Stack level */
+#define ECS_DBG_CTXT_REG_2 0x0E
+#define ECS_INSTRUCT_REG 0x11 /* NPE Instruction Register */
+
+#define ECS_REG_0_ACTIVE 0x80000000 /* all levels */
+#define ECS_REG_0_NEXTPC_MASK 0x1FFF0000 /* BG/PRI1/PRI2 levels */
+#define ECS_REG_0_LDUR_BITS 8
+#define ECS_REG_0_LDUR_MASK 0x00000700 /* all levels */
+#define ECS_REG_1_CCTXT_BITS 16
+#define ECS_REG_1_CCTXT_MASK 0x000F0000 /* all levels */
+#define ECS_REG_1_SELCTXT_BITS 0
+#define ECS_REG_1_SELCTXT_MASK 0x0000000F /* all levels */
+#define ECS_DBG_REG_2_IF 0x00100000 /* debug level */
+#define ECS_DBG_REG_2_IE 0x00080000 /* debug level */
+
+/* NPE watchpoint_fifo register bit */
+#define WFIFO_VALID 0x80000000
+
+/* NPE messaging_status register bit definitions */
+#define MSGSTAT_OFNE 0x00010000 /* OutFifoNotEmpty */
+#define MSGSTAT_IFNF 0x00020000 /* InFifoNotFull */
+#define MSGSTAT_OFNF 0x00040000 /* OutFifoNotFull */
+#define MSGSTAT_IFNE 0x00080000 /* InFifoNotEmpty */
+#define MSGSTAT_MBINT 0x00100000 /* Mailbox interrupt */
+#define MSGSTAT_IFINT 0x00200000 /* InFifo interrupt */
+#define MSGSTAT_OFINT 0x00400000 /* OutFifo interrupt */
+#define MSGSTAT_WFINT 0x00800000 /* WatchFifo interrupt */
+
+/* NPE messaging_control register bit definitions */
+#define MSGCTL_OUT_FIFO 0x00010000 /* enable output FIFO */
+#define MSGCTL_IN_FIFO 0x00020000 /* enable input FIFO */
+#define MSGCTL_OUT_FIFO_WRITE 0x01000000 /* enable FIFO + WRITE */
+#define MSGCTL_IN_FIFO_WRITE 0x02000000
+
+/* NPE mailbox_status value for reset */
+#define RESET_MBOX_STAT 0x0000F0F0
+
+const char *npe_names[] = { "NPE-A", "NPE-B", "NPE-C" };
+
+#define print_npe(pri, npe, fmt, ...) \
+ printk(pri "%s: " fmt, npe_name(npe), ## __VA_ARGS__)
+
+#if DEBUG_MSG
+#define debug_msg(npe, fmt, ...) \
+ print_npe(KERN_DEBUG, npe, fmt, ## __VA_ARGS__)
+#else
+#define debug_msg(npe, fmt, ...)
+#endif
+
+static struct {
+ u32 reg, val;
+} ecs_reset[] = {
+ { ECS_BG_CTXT_REG_0, 0xA0000000 },
+ { ECS_BG_CTXT_REG_1, 0x01000000 },
+ { ECS_BG_CTXT_REG_2, 0x00008000 },
+ { ECS_PRI_1_CTXT_REG_0, 0x20000080 },
+ { ECS_PRI_1_CTXT_REG_1, 0x01000000 },
+ { ECS_PRI_1_CTXT_REG_2, 0x00008000 },
+ { ECS_PRI_2_CTXT_REG_0, 0x20000080 },
+ { ECS_PRI_2_CTXT_REG_1, 0x01000000 },
+ { ECS_PRI_2_CTXT_REG_2, 0x00008000 },
+ { ECS_DBG_CTXT_REG_0, 0x20000000 },
+ { ECS_DBG_CTXT_REG_1, 0x00000000 },
+ { ECS_DBG_CTXT_REG_2, 0x001E0000 },
+ { ECS_INSTRUCT_REG, 0x1003C00F },
+};
+
+static struct npe npe_tab[NPE_COUNT] = {
+ {
+ .id = 0,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEA_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEA_BASE_PHYS,
+ }, {
+ .id = 1,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEB_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEB_BASE_PHYS,
+ }, {
+ .id = 2,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEC_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEC_BASE_PHYS,
+ }
+};
+
+int npe_running(struct npe *npe)
+{
+ return (__raw_readl(&npe->regs->exec_status_cmd) & STAT_RUN) != 0;
+}
+
+static void npe_cmd_write(struct npe *npe, u32 addr, int cmd, u32 data)
+{
+ __raw_writel(data, &npe->regs->exec_data);
+ __raw_writel(addr, &npe->regs->exec_addr);
+ __raw_writel(cmd, &npe->regs->exec_status_cmd);
+}
+
+static u32 npe_cmd_read(struct npe *npe, u32 addr, int cmd)
+{
+ __raw_writel(addr, &npe->regs->exec_addr);
+ __raw_writel(cmd, &npe->regs->exec_status_cmd);
+ /* Introduce extra read cycles after issuing read command to NPE
+ so that we read the register after the NPE has updated it.
+ This is to overcome race condition between XScale and NPE */
+ __raw_readl(&npe->regs->exec_data);
+ __raw_readl(&npe->regs->exec_data);
+ return __raw_readl(&npe->regs->exec_data);
+}
+
+static void npe_clear_active(struct npe *npe, u32 reg)
+{
+ u32 val = npe_cmd_read(npe, reg, CMD_RD_ECS_REG);
+ npe_cmd_write(npe, reg, CMD_WR_ECS_REG, val & ~ECS_REG_0_ACTIVE);
+}
+
+static void npe_start(struct npe *npe)
+{
+ /* ensure only Background Context Stack Level is active */
+ npe_clear_active(npe, ECS_PRI_1_CTXT_REG_0);
+ npe_clear_active(npe, ECS_PRI_2_CTXT_REG_0);
+ npe_clear_active(npe, ECS_DBG_CTXT_REG_0);
+
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+ __raw_writel(CMD_NPE_START, &npe->regs->exec_status_cmd);
+}
+
+static void npe_stop(struct npe *npe)
+{
+ __raw_writel(CMD_NPE_STOP, &npe->regs->exec_status_cmd);
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd); /*FIXME?*/
+}
+
+static int __must_check npe_debug_instr(struct npe *npe, u32 instr, u32 ctx,
+ u32 ldur)
+{
+ u32 wc;
+ int i;
+
+ /* set the Active bit, and the LDUR, in the debug level */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_0, CMD_WR_ECS_REG,
+ ECS_REG_0_ACTIVE | (ldur << ECS_REG_0_LDUR_BITS));
+
+ /* set CCTXT at ECS DEBUG L3 to specify in which context to execute
+ the instruction, and set SELCTXT at ECS DEBUG Level to specify
+ which context store to access.
+ Debug ECS Level Reg 1 has form 0x000n000n, where n = context number
+ */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_1, CMD_WR_ECS_REG,
+ (ctx << ECS_REG_1_CCTXT_BITS) |
+ (ctx << ECS_REG_1_SELCTXT_BITS));
+
+ /* clear the pipeline */
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+
+ /* load NPE instruction into the instruction register */
+ npe_cmd_write(npe, ECS_INSTRUCT_REG, CMD_WR_ECS_REG, instr);
+
+ /* we need this value later to wait for completion of NPE execution
+ step */
+ wc = __raw_readl(&npe->regs->watch_count);
+
+ /* issue a Step One command via the Execution Control register */
+ __raw_writel(CMD_NPE_STEP, &npe->regs->exec_status_cmd);
+
+ /* Watch Count register increments when NPE completes an instruction */
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (wc != __raw_readl(&npe->regs->watch_count))
+ return 0;
+ udelay(1);
+ }
+
+ print_npe(KERN_ERR, npe, "reset: npe_debug_instr(): timeout\n");
+ return -ETIMEDOUT;
+}
+
+static int __must_check npe_logical_reg_write8(struct npe *npe, u32 addr,
+ u8 val, u32 ctx)
+{
+ /* here we build the NPE assembler instruction: mov8 d0, #0 */
+ u32 instr = INSTR_WR_REG_BYTE | /* OpCode */
+ addr << 9 | /* base Operand */
+ (val & 0x1F) << 4 | /* lower 5 bits to immediate data */
+ (val & ~0x1F) << (18 - 5);/* higher 3 bits to CoProc instr. */
+ return npe_debug_instr(npe, instr, ctx, 1); /* execute it */
+}
+
+static int __must_check npe_logical_reg_write16(struct npe *npe, u32 addr,
+ u16 val, u32 ctx)
+{
+ /* here we build the NPE assembler instruction: mov16 d0, #0 */
+ u32 instr = INSTR_WR_REG_SHORT | /* OpCode */
+ addr << 9 | /* base Operand */
+ (val & 0x1F) << 4 | /* lower 5 bits to immediate data */
+ (val & ~0x1F) << (18 - 5);/* higher 11 bits to CoProc instr. */
+ return npe_debug_instr(npe, instr, ctx, 1); /* execute it */
+}
+
+static int __must_check npe_logical_reg_write32(struct npe *npe, u32 addr,
+ u32 val, u32 ctx)
+{
+ /* write in 16 bit steps first the high and then the low value */
+ if (npe_logical_reg_write16(npe, addr, val >> 16, ctx))
+ return -ETIMEDOUT;
+ return npe_logical_reg_write16(npe, addr + 2, val & 0xFFFF, ctx);
+}
+
+static int npe_reset(struct npe *npe)
+{
+ u32 val, ctl, exec_count, ctx_reg2;
+ int i;
+
+ ctl = (__raw_readl(&npe->regs->messaging_control) | 0x3F000000) &
+ 0x3F3FFFFF;
+
+ /* disable parity interrupt */
+ __raw_writel(ctl & 0x3F00FFFF, &npe->regs->messaging_control);
+
+ /* pre exec - debug instruction */
+ /* turn off the halt bit by clearing Execution Count register. */
+ exec_count = __raw_readl(&npe->regs->exec_count);
+ __raw_writel(0, &npe->regs->exec_count);
+ /* ensure that IF and IE are on (temporarily), so that we don't end up
+ stepping forever */
+ ctx_reg2 = npe_cmd_read(npe, ECS_DBG_CTXT_REG_2, CMD_RD_ECS_REG);
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_2, CMD_WR_ECS_REG, ctx_reg2 |
+ ECS_DBG_REG_2_IF | ECS_DBG_REG_2_IE);
+
+ /* clear the FIFOs */
+ while (__raw_readl(&npe->regs->watchpoint_fifo) & WFIFO_VALID)
+ ;
+ while (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_OFNE)
+ /* read from the outFIFO until empty */
+ print_npe(KERN_DEBUG, npe, "npe_reset: read FIFO = 0x%X\n",
+ __raw_readl(&npe->regs->in_out_fifo));
+
+ while (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE)
+		/* step execution of the NPE instruction to read inFIFO using
+ the Debug Executing Context stack */
+ if (npe_debug_instr(npe, INSTR_RD_FIFO, 0, 0))
+ return -ETIMEDOUT;
+
+ /* reset the mailbox reg from the XScale side */
+ __raw_writel(RESET_MBOX_STAT, &npe->regs->mailbox_status);
+ /* from NPE side */
+ if (npe_debug_instr(npe, INSTR_RESET_MBOX, 0, 0))
+ return -ETIMEDOUT;
+
+ /* Reset the physical registers in the NPE register file */
+ for (val = 0; val < NPE_PHYS_REG; val++) {
+ if (npe_logical_reg_write16(npe, NPE_REGMAP, val >> 1, 0))
+ return -ETIMEDOUT;
+ /* address is either 0 or 4 */
+ if (npe_logical_reg_write32(npe, (val & 1) * 4, 0, 0))
+ return -ETIMEDOUT;
+ }
+
+ /* Reset the context store = each context's Context Store registers */
+
+ /* Context 0 has no STARTPC. Instead, this value is used to set NextPC
+ for Background ECS, to set where NPE starts executing code */
+ val = npe_cmd_read(npe, ECS_BG_CTXT_REG_0, CMD_RD_ECS_REG);
+ val &= ~ECS_REG_0_NEXTPC_MASK;
+ val |= (0 /* NextPC */ << 16) & ECS_REG_0_NEXTPC_MASK;
+ npe_cmd_write(npe, ECS_BG_CTXT_REG_0, CMD_WR_ECS_REG, val);
+
+ for (i = 0; i < 16; i++) {
+ if (i) { /* Context 0 has no STEVT nor STARTPC */
+ /* STEVT = off, 0x80 */
+ if (npe_logical_reg_write8(npe, NPE_STEVT, 0x80, i))
+ return -ETIMEDOUT;
+ if (npe_logical_reg_write16(npe, NPE_STARTPC, 0, i))
+ return -ETIMEDOUT;
+ }
+ /* REGMAP = d0->p0, d8->p2, d16->p4 */
+ if (npe_logical_reg_write16(npe, NPE_REGMAP, 0x820, i))
+ return -ETIMEDOUT;
+ if (npe_logical_reg_write8(npe, NPE_CINDEX, 0, i))
+ return -ETIMEDOUT;
+ }
+
+ /* post exec */
+ /* clear active bit in debug level */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_0, CMD_WR_ECS_REG, 0);
+ /* clear the pipeline */
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+ /* restore previous values */
+ __raw_writel(exec_count, &npe->regs->exec_count);
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_2, CMD_WR_ECS_REG, ctx_reg2);
+
+ /* write reset values to Execution Context Stack registers */
+ for (val = 0; val < ARRAY_SIZE(ecs_reset); val++)
+ npe_cmd_write(npe, ecs_reset[val].reg, CMD_WR_ECS_REG,
+ ecs_reset[val].val);
+
+ /* clear the profile counter */
+ __raw_writel(CMD_CLR_PROFILE_CNT, &npe->regs->exec_status_cmd);
+
+ __raw_writel(0, &npe->regs->exec_count);
+ __raw_writel(0, &npe->regs->action_points[0]);
+ __raw_writel(0, &npe->regs->action_points[1]);
+ __raw_writel(0, &npe->regs->action_points[2]);
+ __raw_writel(0, &npe->regs->action_points[3]);
+ __raw_writel(0, &npe->regs->watch_count);
+
+ val = ixp4xx_read_fuses();
+ /* reset the NPE */
+ ixp4xx_write_fuses(val & ~(IXP4XX_FUSE_RESET_NPEA << npe->id));
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (!(ixp4xx_read_fuses() &
+ (IXP4XX_FUSE_RESET_NPEA << npe->id)))
+ break; /* reset completed */
+ udelay(1);
+ }
+ if (i == MAX_RETRIES)
+ return -ETIMEDOUT;
+
+ /* deassert reset */
+ ixp4xx_write_fuses(val | (IXP4XX_FUSE_RESET_NPEA << npe->id));
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (ixp4xx_read_fuses() & (IXP4XX_FUSE_RESET_NPEA << npe->id))
+ break; /* NPE is back alive */
+ udelay(1);
+ }
+ if (i == MAX_RETRIES)
+ return -ETIMEDOUT;
+
+ npe_stop(npe);
+
+ /* restore NPE configuration bus Control Register - parity settings */
+ __raw_writel(ctl, &npe->regs->messaging_control);
+ return 0;
+}
+
+
+int npe_send_message(struct npe *npe, const void *msg, const char *what)
+{
+ const u32 *send = msg;
+ int cycles = 0;
+
+ debug_msg(npe, "Trying to send message %s [%08X:%08X]\n",
+ what, send[0], send[1]);
+
+ if (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE) {
+ debug_msg(npe, "NPE input FIFO not empty\n");
+ return -EIO;
+ }
+
+ __raw_writel(send[0], &npe->regs->in_out_fifo);
+
+ if (!(__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNF)) {
+ debug_msg(npe, "NPE input FIFO full\n");
+ return -EIO;
+ }
+
+ __raw_writel(send[1], &npe->regs->in_out_fifo);
+
+ while ((cycles < MAX_RETRIES) &&
+ (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE)) {
+ udelay(1);
+ cycles++;
+ }
+
+ if (cycles == MAX_RETRIES) {
+ debug_msg(npe, "Timeout sending message\n");
+ return -ETIMEDOUT;
+ }
+
+ debug_msg(npe, "Sending a message took %i cycles\n", cycles);
+ return 0;
+}
+
+int npe_recv_message(struct npe *npe, void *msg, const char *what)
+{
+ u32 *recv = msg;
+ int cycles = 0, cnt = 0;
+
+ debug_msg(npe, "Trying to receive message %s\n", what);
+
+ while (cycles < MAX_RETRIES) {
+ if (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_OFNE) {
+ recv[cnt++] = __raw_readl(&npe->regs->in_out_fifo);
+ if (cnt == 2)
+ break;
+ } else {
+ udelay(1);
+ cycles++;
+ }
+ }
+
+ switch(cnt) {
+ case 1:
+ debug_msg(npe, "Received [%08X]\n", recv[0]);
+ break;
+ case 2:
+ debug_msg(npe, "Received [%08X:%08X]\n", recv[0], recv[1]);
+ break;
+ }
+
+ if (cycles == MAX_RETRIES) {
+ debug_msg(npe, "Timeout waiting for message\n");
+ return -ETIMEDOUT;
+ }
+
+ debug_msg(npe, "Receiving a message took %i cycles\n", cycles);
+ return 0;
+}
+
+int npe_send_recv_message(struct npe *npe, void *msg, const char *what)
+{
+ int result;
+ u32 *send = msg, recv[2];
+
+ if ((result = npe_send_message(npe, msg, what)) != 0)
+ return result;
+ if ((result = npe_recv_message(npe, recv, what)) != 0)
+ return result;
+
+ if ((recv[0] != send[0]) || (recv[1] != send[1])) {
+ debug_msg(npe, "Message %s: unexpected message received\n",
+ what);
+ return -EIO;
+ }
+ return 0;
+}
+
+
+int npe_load_firmware(struct npe *npe, const char *name, struct device *dev)
+{
+ const struct firmware *fw_entry;
+
+ struct dl_block {
+ u32 type;
+ u32 offset;
+ } *blk;
+
+ struct dl_image {
+ u32 magic;
+ u32 id;
+ u32 size;
+ union {
+ u32 data[0];
+ struct dl_block blocks[0];
+ };
+ } *image;
+
+ struct dl_codeblock {
+ u32 npe_addr;
+ u32 size;
+ u32 data[0];
+ } *cb;
+
+ int i, j, err, data_size, instr_size, blocks, table_end;
+ u32 cmd;
+
+ if ((err = request_firmware(&fw_entry, name, dev)) != 0)
+ return err;
+
+ err = -EINVAL;
+ if (fw_entry->size < sizeof(struct dl_image)) {
+ print_npe(KERN_ERR, npe, "incomplete firmware file\n");
+ goto err;
+ }
+ image = (struct dl_image*)fw_entry->data;
+
+#if DEBUG_FW
+ print_npe(KERN_DEBUG, npe, "firmware: %08X %08X %08X (0x%X bytes)\n",
+ image->magic, image->id, image->size, image->size * 4);
+#endif
+
+ if (image->magic == swab32(FW_MAGIC)) { /* swapped file */
+ image->id = swab32(image->id);
+ image->size = swab32(image->size);
+ } else if (image->magic != FW_MAGIC) {
+ print_npe(KERN_ERR, npe, "bad firmware file magic: 0x%X\n",
+ image->magic);
+ goto err;
+ }
+ if ((image->size * 4 + sizeof(struct dl_image)) != fw_entry->size) {
+ print_npe(KERN_ERR, npe,
+ "inconsistent size of firmware file\n");
+ goto err;
+ }
+ if (((image->id >> 24) & 0xF /* NPE ID */) != npe->id) {
+ print_npe(KERN_ERR, npe, "firmware file NPE ID mismatch\n");
+ goto err;
+ }
+ if (image->magic == swab32(FW_MAGIC))
+ for (i = 0; i < image->size; i++)
+ image->data[i] = swab32(image->data[i]);
+
+ if (!cpu_is_ixp46x() && ((image->id >> 28) & 0xF /* device ID */)) {
+ print_npe(KERN_INFO, npe, "IXP46x firmware ignored on "
+ "IXP42x\n");
+ goto err;
+ }
+
+ if (npe_running(npe)) {
+ print_npe(KERN_INFO, npe, "unable to load firmware, NPE is "
+ "already running\n");
+ err = -EBUSY;
+ goto err;
+ }
+#if 0
+ npe_stop(npe);
+ npe_reset(npe);
+#endif
+
+ print_npe(KERN_INFO, npe, "firmware functionality 0x%X, "
+ "revision 0x%X:%X\n", (image->id >> 16) & 0xFF,
+ (image->id >> 8) & 0xFF, image->id & 0xFF);
+
+ if (!cpu_is_ixp46x()) {
+ if (!npe->id)
+ instr_size = NPE_A_42X_INSTR_SIZE;
+ else
+ instr_size = NPE_B_AND_C_42X_INSTR_SIZE;
+ data_size = NPE_42X_DATA_SIZE;
+ } else {
+ instr_size = NPE_46X_INSTR_SIZE;
+ data_size = NPE_46X_DATA_SIZE;
+ }
+
+ for (blocks = 0; blocks * sizeof(struct dl_block) / 4 < image->size;
+ blocks++)
+ if (image->blocks[blocks].type == FW_BLOCK_TYPE_EOF)
+ break;
+ if (blocks * sizeof(struct dl_block) / 4 >= image->size) {
+ print_npe(KERN_INFO, npe, "firmware EOF block marker not "
+ "found\n");
+ goto err;
+ }
+
+#if DEBUG_FW
+ print_npe(KERN_DEBUG, npe, "%i firmware blocks found\n", blocks);
+#endif
+
+ table_end = blocks * sizeof(struct dl_block) / 4 + 1 /* EOF marker */;
+ for (i = 0, blk = image->blocks; i < blocks; i++, blk++) {
+ if (blk->offset > image->size - sizeof(struct dl_codeblock) / 4
+ || blk->offset < table_end) {
+ print_npe(KERN_INFO, npe, "invalid offset 0x%X of "
+ "firmware block #%i\n", blk->offset, i);
+ goto err;
+ }
+
+ cb = (struct dl_codeblock*)&image->data[blk->offset];
+ if (blk->type == FW_BLOCK_TYPE_INSTR) {
+ if (cb->npe_addr + cb->size > instr_size)
+ goto too_big;
+ cmd = CMD_WR_INS_MEM;
+ } else if (blk->type == FW_BLOCK_TYPE_DATA) {
+ if (cb->npe_addr + cb->size > data_size)
+ goto too_big;
+ cmd = CMD_WR_DATA_MEM;
+ } else {
+ print_npe(KERN_INFO, npe, "invalid firmware block #%i "
+ "type 0x%X\n", i, blk->type);
+ goto err;
+ }
+ if (blk->offset + sizeof(*cb) / 4 + cb->size > image->size) {
+ print_npe(KERN_INFO, npe, "firmware block #%i doesn't "
+ "fit in firmware image: type %c, start 0x%X,"
+ " length 0x%X\n", i,
+ blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
+ cb->npe_addr, cb->size);
+ goto err;
+ }
+
+ for (j = 0; j < cb->size; j++)
+ npe_cmd_write(npe, cb->npe_addr + j, cmd, cb->data[j]);
+ }
+
+ npe_start(npe);
+ if (!npe_running(npe))
+ print_npe(KERN_ERR, npe, "unable to start\n");
+ release_firmware(fw_entry);
+ return 0;
+
+too_big:
+ print_npe(KERN_INFO, npe, "firmware block #%i doesn't fit in NPE "
+ "memory: type %c, start 0x%X, length 0x%X\n", i,
+ blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
+ cb->npe_addr, cb->size);
+err:
+ release_firmware(fw_entry);
+ return err;
+}
+
+
+struct npe *npe_request(int id)
+{
+ if (id < NPE_COUNT)
+ if (npe_tab[id].valid)
+ if (try_module_get(THIS_MODULE))
+ return &npe_tab[id];
+ return NULL;
+}
+
+void npe_release(struct npe *npe)
+{
+ module_put(THIS_MODULE);
+}
+
+
+static int __init npe_init_module(void)
+{
+
+ int i, found = 0;
+
+ for (i = 0; i < NPE_COUNT; i++) {
+ struct npe *npe = &npe_tab[i];
+ if (!(ixp4xx_read_fuses() & (IXP4XX_FUSE_RESET_NPEA << i)))
+ continue; /* NPE already disabled or not present */
+ if (!(npe->mem_res = request_mem_region(npe->regs_phys,
+ REGS_SIZE,
+ npe_name(npe)))) {
+ print_npe(KERN_ERR, npe,
+ "failed to request memory region\n");
+ continue;
+ }
+
+ if (npe_reset(npe))
+ continue;
+ npe->valid = 1;
+ found++;
+ }
+
+ if (!found)
+ return -ENOSYS;
+ return 0;
+}
+
+static void __exit npe_cleanup_module(void)
+{
+ int i;
+
+ for (i = 0; i < NPE_COUNT; i++)
+ if (npe_tab[i].mem_res) {
+ npe_reset(&npe_tab[i]);
+ release_resource(npe_tab[i].mem_res);
+ }
+}
+
+module_init(npe_init_module);
+module_exit(npe_cleanup_module);
+
+MODULE_AUTHOR("Krzysztof Halasa");
+MODULE_LICENSE("GPL v2");
+
+EXPORT_SYMBOL(npe_names);
+EXPORT_SYMBOL(npe_running);
+EXPORT_SYMBOL(npe_request);
+EXPORT_SYMBOL(npe_release);
+EXPORT_SYMBOL(npe_load_firmware);
+EXPORT_SYMBOL(npe_send_message);
+EXPORT_SYMBOL(npe_recv_message);
+EXPORT_SYMBOL(npe_send_recv_message);
diff --git a/include/asm-arm/arch-ixp4xx/npe.h b/include/asm-arm/arch-ixp4xx/npe.h
new file mode 100644
index 0000000..fd20bf5
--- /dev/null
+++ b/include/asm-arm/arch-ixp4xx/npe.h
@@ -0,0 +1,41 @@
+#ifndef __IXP4XX_NPE_H
+#define __IXP4XX_NPE_H
+
+#include <linux/etherdevice.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+
+extern const char *npe_names[];
+
+struct npe_regs {
+ u32 exec_addr, exec_data, exec_status_cmd, exec_count;
+ u32 action_points[4];
+ u32 watchpoint_fifo, watch_count;
+ u32 profile_count;
+ u32 messaging_status, messaging_control;
+ u32 mailbox_status, /*messaging_*/ in_out_fifo;
+};
+
+struct npe {
+ struct resource *mem_res;
+ struct npe_regs __iomem *regs;
+ u32 regs_phys;
+ int id;
+ int valid;
+};
+
+
+static inline const char *npe_name(struct npe *npe)
+{
+ return npe_names[npe->id];
+}
+
+int npe_running(struct npe *npe);
+int npe_send_message(struct npe *npe, const void *msg, const char *what);
+int npe_recv_message(struct npe *npe, void *msg, const char *what);
+int npe_send_recv_message(struct npe *npe, void *msg, const char *what);
+int npe_load_firmware(struct npe *npe, const char *name, struct device *dev);
+struct npe *npe_request(int id);
+void npe_release(struct npe *npe);
+
+#endif /* __IXP4XX_NPE_H */
* [PATCH] Intel IXP4xx network drivers v.2 - NPE
@ 2007-05-08 0:36 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 0:36 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Adds a driver for built-in IXP4xx Network Processor Engines.
This patch requires the IXP4xx Queue Manager driver and the "fuses" patch.
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig
index 71ef55f..25f8994 100644
--- a/arch/arm/mach-ixp4xx/Kconfig
+++ b/arch/arm/mach-ixp4xx/Kconfig
@@ -182,6 +182,14 @@ config IXP4XX_QMGR
This driver supports IXP4xx built-in hardware queue manager
and is automatically selected by Ethernet and HSS drivers.
+config IXP4XX_NPE
+ tristate "IXP4xx Network Processor Engine support"
+ select HOTPLUG
+ select FW_LOADER
+ help
+ This driver supports IXP4xx built-in network coprocessors
+ and is automatically selected by Ethernet and HSS drivers.
+
endmenu
endif
diff --git a/arch/arm/mach-ixp4xx/Makefile b/arch/arm/mach-ixp4xx/Makefile
index f8e1afc..33d4b88 100644
--- a/arch/arm/mach-ixp4xx/Makefile
+++ b/arch/arm/mach-ixp4xx/Makefile
@@ -27,3 +27,4 @@ obj-$(CONFIG_MACH_DSMG600) += dsmg600-setup.o dsmg600-power.o
obj-$(CONFIG_PCI) += $(obj-pci-$(CONFIG_PCI)) common-pci.o
obj-$(CONFIG_IXP4XX_QMGR) += ixp4xx_qmgr.o
+obj-$(CONFIG_IXP4XX_NPE) += ixp4xx_npe.o
diff --git a/arch/arm/mach-ixp4xx/ixp4xx_npe.c b/arch/arm/mach-ixp4xx/ixp4xx_npe.c
new file mode 100644
index 0000000..4c77d8a
--- /dev/null
+++ b/arch/arm/mach-ixp4xx/ixp4xx_npe.c
@@ -0,0 +1,737 @@
+/*
+ * Intel IXP4xx Network Processor Engine driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * The code is based on publicly available information:
+ * - Intel IXP4xx Developer's Manual and other e-papers
+ * - Intel IXP400 Access Library Software (BSD license)
+ * - previous works by Christian Hohnstaedt <chohnstaedt@innominate.com>
+ * Thanks, Christian.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/arch/npe.h>
+
+#define DEBUG_MSG 0
+#define DEBUG_FW 0
+
+#define NPE_COUNT 3
+#define MAX_RETRIES 1000 /* microseconds */
+#define NPE_42X_DATA_SIZE 0x800 /* in dwords */
+#define NPE_46X_DATA_SIZE 0x1000
+#define NPE_A_42X_INSTR_SIZE 0x1000
+#define NPE_B_AND_C_42X_INSTR_SIZE 0x800
+#define NPE_46X_INSTR_SIZE 0x1000
+#define REGS_SIZE 0x1000
+
+#define NPE_PHYS_REG 32
+
+#define FW_MAGIC 0xFEEDF00D
+#define FW_BLOCK_TYPE_INSTR 0x0
+#define FW_BLOCK_TYPE_DATA 0x1
+#define FW_BLOCK_TYPE_EOF 0xF
+
+/* NPE exec status (read) and command (write) */
+#define CMD_NPE_STEP 0x01
+#define CMD_NPE_START 0x02
+#define CMD_NPE_STOP 0x03
+#define CMD_NPE_CLR_PIPE 0x04
+#define CMD_CLR_PROFILE_CNT 0x0C
+#define CMD_RD_INS_MEM 0x10 /* instruction memory */
+#define CMD_WR_INS_MEM 0x11
+#define CMD_RD_DATA_MEM 0x12 /* data memory */
+#define CMD_WR_DATA_MEM 0x13
+#define CMD_RD_ECS_REG 0x14 /* exec access register */
+#define CMD_WR_ECS_REG 0x15
+
+#define STAT_RUN 0x80000000
+#define STAT_STOP 0x40000000
+#define STAT_CLEAR 0x20000000
+#define STAT_ECS_K 0x00800000 /* pipeline clean */
+
+#define NPE_STEVT 0x1B
+#define NPE_STARTPC 0x1C
+#define NPE_REGMAP 0x1E
+#define NPE_CINDEX 0x1F
+
+#define INSTR_WR_REG_SHORT 0x0000C000
+#define INSTR_WR_REG_BYTE 0x00004000
+#define INSTR_RD_FIFO 0x0F888220
+#define INSTR_RESET_MBOX 0x0FAC8210
+
+#define ECS_BG_CTXT_REG_0 0x00 /* Background Executing Context */
+#define ECS_BG_CTXT_REG_1 0x01 /* Stack level */
+#define ECS_BG_CTXT_REG_2 0x02
+#define ECS_PRI_1_CTXT_REG_0 0x04 /* Priority 1 Executing Context */
+#define ECS_PRI_1_CTXT_REG_1 0x05 /* Stack level */
+#define ECS_PRI_1_CTXT_REG_2 0x06
+#define ECS_PRI_2_CTXT_REG_0 0x08 /* Priority 2 Executing Context */
+#define ECS_PRI_2_CTXT_REG_1 0x09 /* Stack level */
+#define ECS_PRI_2_CTXT_REG_2 0x0A
+#define ECS_DBG_CTXT_REG_0 0x0C /* Debug Executing Context */
+#define ECS_DBG_CTXT_REG_1 0x0D /* Stack level */
+#define ECS_DBG_CTXT_REG_2 0x0E
+#define ECS_INSTRUCT_REG 0x11 /* NPE Instruction Register */
+
+#define ECS_REG_0_ACTIVE 0x80000000 /* all levels */
+#define ECS_REG_0_NEXTPC_MASK 0x1FFF0000 /* BG/PRI1/PRI2 levels */
+#define ECS_REG_0_LDUR_BITS 8
+#define ECS_REG_0_LDUR_MASK 0x00000700 /* all levels */
+#define ECS_REG_1_CCTXT_BITS 16
+#define ECS_REG_1_CCTXT_MASK 0x000F0000 /* all levels */
+#define ECS_REG_1_SELCTXT_BITS 0
+#define ECS_REG_1_SELCTXT_MASK 0x0000000F /* all levels */
+#define ECS_DBG_REG_2_IF 0x00100000 /* debug level */
+#define ECS_DBG_REG_2_IE 0x00080000 /* debug level */
+
+/* NPE watchpoint_fifo register bit */
+#define WFIFO_VALID 0x80000000
+
+/* NPE messaging_status register bit definitions */
+#define MSGSTAT_OFNE 0x00010000 /* OutFifoNotEmpty */
+#define MSGSTAT_IFNF 0x00020000 /* InFifoNotFull */
+#define MSGSTAT_OFNF 0x00040000 /* OutFifoNotFull */
+#define MSGSTAT_IFNE 0x00080000 /* InFifoNotEmpty */
+#define MSGSTAT_MBINT 0x00100000 /* Mailbox interrupt */
+#define MSGSTAT_IFINT 0x00200000 /* InFifo interrupt */
+#define MSGSTAT_OFINT 0x00400000 /* OutFifo interrupt */
+#define MSGSTAT_WFINT 0x00800000 /* WatchFifo interrupt */
+
+/* NPE messaging_control register bit definitions */
+#define MSGCTL_OUT_FIFO 0x00010000 /* enable output FIFO */
+#define MSGCTL_IN_FIFO 0x00020000 /* enable input FIFO */
+#define MSGCTL_OUT_FIFO_WRITE 0x01000000 /* enable FIFO + WRITE */
+#define MSGCTL_IN_FIFO_WRITE 0x02000000
+
+/* NPE mailbox_status value for reset */
+#define RESET_MBOX_STAT 0x0000F0F0
+
+const char *npe_names[] = { "NPE-A", "NPE-B", "NPE-C" };
+
+#define print_npe(pri, npe, fmt, ...) \
+ printk(pri "%s: " fmt, npe_name(npe), ## __VA_ARGS__)
+
+#if DEBUG_MSG
+#define debug_msg(npe, fmt, ...) \
+ print_npe(KERN_DEBUG, npe, fmt, ## __VA_ARGS__)
+#else
+#define debug_msg(npe, fmt, ...)
+#endif
+
+static struct {
+ u32 reg, val;
+} ecs_reset[] = {
+ { ECS_BG_CTXT_REG_0, 0xA0000000 },
+ { ECS_BG_CTXT_REG_1, 0x01000000 },
+ { ECS_BG_CTXT_REG_2, 0x00008000 },
+ { ECS_PRI_1_CTXT_REG_0, 0x20000080 },
+ { ECS_PRI_1_CTXT_REG_1, 0x01000000 },
+ { ECS_PRI_1_CTXT_REG_2, 0x00008000 },
+ { ECS_PRI_2_CTXT_REG_0, 0x20000080 },
+ { ECS_PRI_2_CTXT_REG_1, 0x01000000 },
+ { ECS_PRI_2_CTXT_REG_2, 0x00008000 },
+ { ECS_DBG_CTXT_REG_0, 0x20000000 },
+ { ECS_DBG_CTXT_REG_1, 0x00000000 },
+ { ECS_DBG_CTXT_REG_2, 0x001E0000 },
+ { ECS_INSTRUCT_REG, 0x1003C00F },
+};
+
+static struct npe npe_tab[NPE_COUNT] = {
+ {
+ .id = 0,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEA_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEA_BASE_PHYS,
+ }, {
+ .id = 1,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEB_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEB_BASE_PHYS,
+ }, {
+ .id = 2,
+ .regs = (struct npe_regs __iomem *)IXP4XX_NPEC_BASE_VIRT,
+ .regs_phys = IXP4XX_NPEC_BASE_PHYS,
+ }
+};
+
+int npe_running(struct npe *npe)
+{
+ return (__raw_readl(&npe->regs->exec_status_cmd) & STAT_RUN) != 0;
+}
+
+static void npe_cmd_write(struct npe *npe, u32 addr, int cmd, u32 data)
+{
+ __raw_writel(data, &npe->regs->exec_data);
+ __raw_writel(addr, &npe->regs->exec_addr);
+ __raw_writel(cmd, &npe->regs->exec_status_cmd);
+}
+
+static u32 npe_cmd_read(struct npe *npe, u32 addr, int cmd)
+{
+ __raw_writel(addr, &npe->regs->exec_addr);
+ __raw_writel(cmd, &npe->regs->exec_status_cmd);
+	/* Introduce extra read cycles after issuing read command to NPE
+ so that we read the register after the NPE has updated it.
+ This is to overcome race condition between XScale and NPE */
+ __raw_readl(&npe->regs->exec_data);
+ __raw_readl(&npe->regs->exec_data);
+ return __raw_readl(&npe->regs->exec_data);
+}
+
+static void npe_clear_active(struct npe *npe, u32 reg)
+{
+ u32 val = npe_cmd_read(npe, reg, CMD_RD_ECS_REG);
+ npe_cmd_write(npe, reg, CMD_WR_ECS_REG, val & ~ECS_REG_0_ACTIVE);
+}
+
+static void npe_start(struct npe *npe)
+{
+ /* ensure only Background Context Stack Level is active */
+ npe_clear_active(npe, ECS_PRI_1_CTXT_REG_0);
+ npe_clear_active(npe, ECS_PRI_2_CTXT_REG_0);
+ npe_clear_active(npe, ECS_DBG_CTXT_REG_0);
+
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+ __raw_writel(CMD_NPE_START, &npe->regs->exec_status_cmd);
+}
+
+static void npe_stop(struct npe *npe)
+{
+ __raw_writel(CMD_NPE_STOP, &npe->regs->exec_status_cmd);
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd); /*FIXME?*/
+}
+
+static int __must_check npe_debug_instr(struct npe *npe, u32 instr, u32 ctx,
+ u32 ldur)
+{
+ u32 wc;
+ int i;
+
+ /* set the Active bit, and the LDUR, in the debug level */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_0, CMD_WR_ECS_REG,
+ ECS_REG_0_ACTIVE | (ldur << ECS_REG_0_LDUR_BITS));
+
+ /* set CCTXT at ECS DEBUG L3 to specify in which context to execute
+ the instruction, and set SELCTXT at ECS DEBUG Level to specify
+ which context store to access.
+ Debug ECS Level Reg 1 has form 0x000n000n, where n = context number
+ */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_1, CMD_WR_ECS_REG,
+ (ctx << ECS_REG_1_CCTXT_BITS) |
+ (ctx << ECS_REG_1_SELCTXT_BITS));
+
+ /* clear the pipeline */
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+
+ /* load NPE instruction into the instruction register */
+ npe_cmd_write(npe, ECS_INSTRUCT_REG, CMD_WR_ECS_REG, instr);
+
+ /* we need this value later to wait for completion of NPE execution
+ step */
+ wc = __raw_readl(&npe->regs->watch_count);
+
+ /* issue a Step One command via the Execution Control register */
+ __raw_writel(CMD_NPE_STEP, &npe->regs->exec_status_cmd);
+
+ /* Watch Count register increments when NPE completes an instruction */
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (wc != __raw_readl(&npe->regs->watch_count))
+ return 0;
+ udelay(1);
+ }
+
+ print_npe(KERN_ERR, npe, "reset: npe_debug_instr(): timeout\n");
+ return -ETIMEDOUT;
+}
+
+static int __must_check npe_logical_reg_write8(struct npe *npe, u32 addr,
+ u8 val, u32 ctx)
+{
+ /* here we build the NPE assembler instruction: mov8 d0, #0 */
+ u32 instr = INSTR_WR_REG_BYTE | /* OpCode */
+ addr << 9 | /* base Operand */
+ (val & 0x1F) << 4 | /* lower 5 bits to immediate data */
+ (val & ~0x1F) << (18 - 5);/* higher 3 bits to CoProc instr. */
+ return npe_debug_instr(npe, instr, ctx, 1); /* execute it */
+}
+
+static int __must_check npe_logical_reg_write16(struct npe *npe, u32 addr,
+ u16 val, u32 ctx)
+{
+ /* here we build the NPE assembler instruction: mov16 d0, #0 */
+ u32 instr = INSTR_WR_REG_SHORT | /* OpCode */
+ addr << 9 | /* base Operand */
+ (val & 0x1F) << 4 | /* lower 5 bits to immediate data */
+ (val & ~0x1F) << (18 - 5);/* higher 11 bits to CoProc instr. */
+ return npe_debug_instr(npe, instr, ctx, 1); /* execute it */
+}
+
+static int __must_check npe_logical_reg_write32(struct npe *npe, u32 addr,
+ u32 val, u32 ctx)
+{
+	/* write in 16-bit steps: first the high, then the low half */
+ if (npe_logical_reg_write16(npe, addr, val >> 16, ctx))
+ return -ETIMEDOUT;
+ return npe_logical_reg_write16(npe, addr + 2, val & 0xFFFF, ctx);
+}
+
+static int npe_reset(struct npe *npe)
+{
+ u32 val, ctl, exec_count, ctx_reg2;
+ int i;
+
+ ctl = (__raw_readl(&npe->regs->messaging_control) | 0x3F000000) &
+ 0x3F3FFFFF;
+
+ /* disable parity interrupt */
+ __raw_writel(ctl & 0x3F00FFFF, &npe->regs->messaging_control);
+
+ /* pre exec - debug instruction */
+ /* turn off the halt bit by clearing Execution Count register. */
+ exec_count = __raw_readl(&npe->regs->exec_count);
+ __raw_writel(0, &npe->regs->exec_count);
+ /* ensure that IF and IE are on (temporarily), so that we don't end up
+ stepping forever */
+ ctx_reg2 = npe_cmd_read(npe, ECS_DBG_CTXT_REG_2, CMD_RD_ECS_REG);
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_2, CMD_WR_ECS_REG, ctx_reg2 |
+ ECS_DBG_REG_2_IF | ECS_DBG_REG_2_IE);
+
+ /* clear the FIFOs */
+ while (__raw_readl(&npe->regs->watchpoint_fifo) & WFIFO_VALID)
+ ;
+ while (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_OFNE)
+ /* read from the outFIFO until empty */
+ print_npe(KERN_DEBUG, npe, "npe_reset: read FIFO = 0x%X\n",
+ __raw_readl(&npe->regs->in_out_fifo));
+
+ while (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE)
+		/* step execution of the NPE instruction to read inFIFO using
+ the Debug Executing Context stack */
+ if (npe_debug_instr(npe, INSTR_RD_FIFO, 0, 0))
+ return -ETIMEDOUT;
+
+ /* reset the mailbox reg from the XScale side */
+ __raw_writel(RESET_MBOX_STAT, &npe->regs->mailbox_status);
+ /* from NPE side */
+ if (npe_debug_instr(npe, INSTR_RESET_MBOX, 0, 0))
+ return -ETIMEDOUT;
+
+ /* Reset the physical registers in the NPE register file */
+ for (val = 0; val < NPE_PHYS_REG; val++) {
+ if (npe_logical_reg_write16(npe, NPE_REGMAP, val >> 1, 0))
+ return -ETIMEDOUT;
+ /* address is either 0 or 4 */
+ if (npe_logical_reg_write32(npe, (val & 1) * 4, 0, 0))
+ return -ETIMEDOUT;
+ }
+
+ /* Reset the context store = each context's Context Store registers */
+
+ /* Context 0 has no STARTPC. Instead, this value is used to set NextPC
+ for Background ECS, to set where NPE starts executing code */
+ val = npe_cmd_read(npe, ECS_BG_CTXT_REG_0, CMD_RD_ECS_REG);
+ val &= ~ECS_REG_0_NEXTPC_MASK;
+ val |= (0 /* NextPC */ << 16) & ECS_REG_0_NEXTPC_MASK;
+ npe_cmd_write(npe, ECS_BG_CTXT_REG_0, CMD_WR_ECS_REG, val);
+
+ for (i = 0; i < 16; i++) {
+ if (i) { /* Context 0 has no STEVT nor STARTPC */
+ /* STEVT = off, 0x80 */
+ if (npe_logical_reg_write8(npe, NPE_STEVT, 0x80, i))
+ return -ETIMEDOUT;
+ if (npe_logical_reg_write16(npe, NPE_STARTPC, 0, i))
+ return -ETIMEDOUT;
+ }
+ /* REGMAP = d0->p0, d8->p2, d16->p4 */
+ if (npe_logical_reg_write16(npe, NPE_REGMAP, 0x820, i))
+ return -ETIMEDOUT;
+ if (npe_logical_reg_write8(npe, NPE_CINDEX, 0, i))
+ return -ETIMEDOUT;
+ }
+
+ /* post exec */
+ /* clear active bit in debug level */
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_0, CMD_WR_ECS_REG, 0);
+ /* clear the pipeline */
+ __raw_writel(CMD_NPE_CLR_PIPE, &npe->regs->exec_status_cmd);
+ /* restore previous values */
+ __raw_writel(exec_count, &npe->regs->exec_count);
+ npe_cmd_write(npe, ECS_DBG_CTXT_REG_2, CMD_WR_ECS_REG, ctx_reg2);
+
+ /* write reset values to Execution Context Stack registers */
+ for (val = 0; val < ARRAY_SIZE(ecs_reset); val++)
+ npe_cmd_write(npe, ecs_reset[val].reg, CMD_WR_ECS_REG,
+ ecs_reset[val].val);
+
+ /* clear the profile counter */
+ __raw_writel(CMD_CLR_PROFILE_CNT, &npe->regs->exec_status_cmd);
+
+ __raw_writel(0, &npe->regs->exec_count);
+ __raw_writel(0, &npe->regs->action_points[0]);
+ __raw_writel(0, &npe->regs->action_points[1]);
+ __raw_writel(0, &npe->regs->action_points[2]);
+ __raw_writel(0, &npe->regs->action_points[3]);
+ __raw_writel(0, &npe->regs->watch_count);
+
+ val = ixp4xx_read_fuses();
+ /* reset the NPE */
+ ixp4xx_write_fuses(val & ~(IXP4XX_FUSE_RESET_NPEA << npe->id));
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (!(ixp4xx_read_fuses() &
+ (IXP4XX_FUSE_RESET_NPEA << npe->id)))
+ break; /* reset completed */
+ udelay(1);
+ }
+ if (i == MAX_RETRIES)
+ return -ETIMEDOUT;
+
+ /* deassert reset */
+ ixp4xx_write_fuses(val | (IXP4XX_FUSE_RESET_NPEA << npe->id));
+ for (i = 0; i < MAX_RETRIES; i++) {
+ if (ixp4xx_read_fuses() & (IXP4XX_FUSE_RESET_NPEA << npe->id))
+ break; /* NPE is back alive */
+ udelay(1);
+ }
+ if (i == MAX_RETRIES)
+ return -ETIMEDOUT;
+
+ npe_stop(npe);
+
+ /* restore NPE configuration bus Control Register - parity settings */
+ __raw_writel(ctl, &npe->regs->messaging_control);
+ return 0;
+}
+
+
+int npe_send_message(struct npe *npe, const void *msg, const char *what)
+{
+ const u32 *send = msg;
+ int cycles = 0;
+
+ debug_msg(npe, "Trying to send message %s [%08X:%08X]\n",
+ what, send[0], send[1]);
+
+ if (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE) {
+ debug_msg(npe, "NPE input FIFO not empty\n");
+ return -EIO;
+ }
+
+ __raw_writel(send[0], &npe->regs->in_out_fifo);
+
+ if (!(__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNF)) {
+ debug_msg(npe, "NPE input FIFO full\n");
+ return -EIO;
+ }
+
+ __raw_writel(send[1], &npe->regs->in_out_fifo);
+
+ while ((cycles < MAX_RETRIES) &&
+ (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_IFNE)) {
+ udelay(1);
+ cycles++;
+ }
+
+ if (cycles == MAX_RETRIES) {
+ debug_msg(npe, "Timeout sending message\n");
+ return -ETIMEDOUT;
+ }
+
+ debug_msg(npe, "Sending a message took %i cycles\n", cycles);
+ return 0;
+}
+
+int npe_recv_message(struct npe *npe, void *msg, const char *what)
+{
+ u32 *recv = msg;
+ int cycles = 0, cnt = 0;
+
+ debug_msg(npe, "Trying to receive message %s\n", what);
+
+ while (cycles < MAX_RETRIES) {
+ if (__raw_readl(&npe->regs->messaging_status) & MSGSTAT_OFNE) {
+ recv[cnt++] = __raw_readl(&npe->regs->in_out_fifo);
+ if (cnt == 2)
+ break;
+ } else {
+ udelay(1);
+ cycles++;
+ }
+ }
+
+ switch(cnt) {
+ case 1:
+ debug_msg(npe, "Received [%08X]\n", recv[0]);
+ break;
+ case 2:
+ debug_msg(npe, "Received [%08X:%08X]\n", recv[0], recv[1]);
+ break;
+ }
+
+ if (cycles == MAX_RETRIES) {
+ debug_msg(npe, "Timeout waiting for message\n");
+ return -ETIMEDOUT;
+ }
+
+ debug_msg(npe, "Receiving a message took %i cycles\n", cycles);
+ return 0;
+}
+
+int npe_send_recv_message(struct npe *npe, void *msg, const char *what)
+{
+ int result;
+ u32 *send = msg, recv[2];
+
+ if ((result = npe_send_message(npe, msg, what)) != 0)
+ return result;
+ if ((result = npe_recv_message(npe, recv, what)) != 0)
+ return result;
+
+ if ((recv[0] != send[0]) || (recv[1] != send[1])) {
+ debug_msg(npe, "Message %s: unexpected message received\n",
+ what);
+ return -EIO;
+ }
+ return 0;
+}
+
+
+int npe_load_firmware(struct npe *npe, const char *name, struct device *dev)
+{
+ const struct firmware *fw_entry;
+
+ struct dl_block {
+ u32 type;
+ u32 offset;
+ } *blk;
+
+ struct dl_image {
+ u32 magic;
+ u32 id;
+ u32 size;
+ union {
+ u32 data[0];
+ struct dl_block blocks[0];
+ };
+ } *image;
+
+ struct dl_codeblock {
+ u32 npe_addr;
+ u32 size;
+ u32 data[0];
+ } *cb;
+
+ int i, j, err, data_size, instr_size, blocks, table_end;
+ u32 cmd;
+
+ if ((err = request_firmware(&fw_entry, name, dev)) != 0)
+ return err;
+
+ err = -EINVAL;
+ if (fw_entry->size < sizeof(struct dl_image)) {
+ print_npe(KERN_ERR, npe, "incomplete firmware file\n");
+ goto err;
+ }
+ image = (struct dl_image*)fw_entry->data;
+
+#if DEBUG_FW
+ print_npe(KERN_DEBUG, npe, "firmware: %08X %08X %08X (0x%X bytes)\n",
+ image->magic, image->id, image->size, image->size * 4);
+#endif
+
+ if (image->magic == swab32(FW_MAGIC)) { /* swapped file */
+ image->id = swab32(image->id);
+ image->size = swab32(image->size);
+ } else if (image->magic != FW_MAGIC) {
+ print_npe(KERN_ERR, npe, "bad firmware file magic: 0x%X\n",
+ image->magic);
+ goto err;
+ }
+ if ((image->size * 4 + sizeof(struct dl_image)) != fw_entry->size) {
+ print_npe(KERN_ERR, npe,
+ "inconsistent size of firmware file\n");
+ goto err;
+ }
+ if (((image->id >> 24) & 0xF /* NPE ID */) != npe->id) {
+ print_npe(KERN_ERR, npe, "firmware file NPE ID mismatch\n");
+ goto err;
+ }
+ if (image->magic == swab32(FW_MAGIC))
+ for (i = 0; i < image->size; i++)
+ image->data[i] = swab32(image->data[i]);
+
+ if (!cpu_is_ixp46x() && ((image->id >> 28) & 0xF /* device ID */)) {
+ print_npe(KERN_INFO, npe, "IXP46x firmware ignored on "
+ "IXP42x\n");
+ goto err;
+ }
+
+ if (npe_running(npe)) {
+ print_npe(KERN_INFO, npe, "unable to load firmware, NPE is "
+ "already running\n");
+ err = -EBUSY;
+ goto err;
+ }
+#if 0
+ npe_stop(npe);
+ npe_reset(npe);
+#endif
+
+ print_npe(KERN_INFO, npe, "firmware functionality 0x%X, "
+ "revision 0x%X:%X\n", (image->id >> 16) & 0xFF,
+ (image->id >> 8) & 0xFF, image->id & 0xFF);
+
+ if (!cpu_is_ixp46x()) {
+ if (!npe->id)
+ instr_size = NPE_A_42X_INSTR_SIZE;
+ else
+ instr_size = NPE_B_AND_C_42X_INSTR_SIZE;
+ data_size = NPE_42X_DATA_SIZE;
+ } else {
+ instr_size = NPE_46X_INSTR_SIZE;
+ data_size = NPE_46X_DATA_SIZE;
+ }
+
+ for (blocks = 0; blocks * sizeof(struct dl_block) / 4 < image->size;
+ blocks++)
+ if (image->blocks[blocks].type == FW_BLOCK_TYPE_EOF)
+ break;
+ if (blocks * sizeof(struct dl_block) / 4 >= image->size) {
+ print_npe(KERN_INFO, npe, "firmware EOF block marker not "
+ "found\n");
+ goto err;
+ }
+
+#if DEBUG_FW
+ print_npe(KERN_DEBUG, npe, "%i firmware blocks found\n", blocks);
+#endif
+
+ table_end = blocks * sizeof(struct dl_block) / 4 + 1 /* EOF marker */;
+ for (i = 0, blk = image->blocks; i < blocks; i++, blk++) {
+ if (blk->offset > image->size - sizeof(struct dl_codeblock) / 4
+ || blk->offset < table_end) {
+ print_npe(KERN_INFO, npe, "invalid offset 0x%X of "
+ "firmware block #%i\n", blk->offset, i);
+ goto err;
+ }
+
+ cb = (struct dl_codeblock*)&image->data[blk->offset];
+ if (blk->type == FW_BLOCK_TYPE_INSTR) {
+ if (cb->npe_addr + cb->size > instr_size)
+ goto too_big;
+ cmd = CMD_WR_INS_MEM;
+ } else if (blk->type == FW_BLOCK_TYPE_DATA) {
+ if (cb->npe_addr + cb->size > data_size)
+ goto too_big;
+ cmd = CMD_WR_DATA_MEM;
+ } else {
+ print_npe(KERN_INFO, npe, "invalid firmware block #%i "
+ "type 0x%X\n", i, blk->type);
+ goto err;
+ }
+ if (blk->offset + sizeof(*cb) / 4 + cb->size > image->size) {
+ print_npe(KERN_INFO, npe, "firmware block #%i doesn't "
+ "fit in firmware image: type %c, start 0x%X,"
+ " length 0x%X\n", i,
+ blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
+ cb->npe_addr, cb->size);
+ goto err;
+ }
+
+ for (j = 0; j < cb->size; j++)
+ npe_cmd_write(npe, cb->npe_addr + j, cmd, cb->data[j]);
+ }
+
+ npe_start(npe);
+ if (!npe_running(npe))
+ print_npe(KERN_ERR, npe, "unable to start\n");
+ release_firmware(fw_entry);
+ return 0;
+
+too_big:
+ print_npe(KERN_INFO, npe, "firmware block #%i doesn't fit in NPE "
+ "memory: type %c, start 0x%X, length 0x%X\n", i,
+ blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
+ cb->npe_addr, cb->size);
+err:
+ release_firmware(fw_entry);
+ return err;
+}
+
+
+struct npe *npe_request(int id)
+{
+ if (id < NPE_COUNT)
+ if (npe_tab[id].valid)
+ if (try_module_get(THIS_MODULE))
+ return &npe_tab[id];
+ return NULL;
+}
+
+void npe_release(struct npe *npe)
+{
+ module_put(THIS_MODULE);
+}
+
+
+static int __init npe_init_module(void)
+{
+
+ int i, found = 0;
+
+ for (i = 0; i < NPE_COUNT; i++) {
+ struct npe *npe = &npe_tab[i];
+ if (!(ixp4xx_read_fuses() & (IXP4XX_FUSE_RESET_NPEA << i)))
+ continue; /* NPE already disabled or not present */
+ if (!(npe->mem_res = request_mem_region(npe->regs_phys,
+ REGS_SIZE,
+ npe_name(npe)))) {
+ print_npe(KERN_ERR, npe,
+ "failed to request memory region\n");
+ continue;
+ }
+
+ if (npe_reset(npe))
+ continue;
+ npe->valid = 1;
+ found++;
+ }
+
+ if (!found)
+ return -ENOSYS;
+ return 0;
+}
+
+static void __exit npe_cleanup_module(void)
+{
+ int i;
+
+ for (i = 0; i < NPE_COUNT; i++)
+ if (npe_tab[i].mem_res) {
+ npe_reset(&npe_tab[i]);
+ release_resource(npe_tab[i].mem_res);
+ }
+}
+
+module_init(npe_init_module);
+module_exit(npe_cleanup_module);
+
+MODULE_AUTHOR("Krzysztof Halasa");
+MODULE_LICENSE("GPL v2");
+
+EXPORT_SYMBOL(npe_names);
+EXPORT_SYMBOL(npe_running);
+EXPORT_SYMBOL(npe_request);
+EXPORT_SYMBOL(npe_release);
+EXPORT_SYMBOL(npe_load_firmware);
+EXPORT_SYMBOL(npe_send_message);
+EXPORT_SYMBOL(npe_recv_message);
+EXPORT_SYMBOL(npe_send_recv_message);
diff --git a/include/asm-arm/arch-ixp4xx/npe.h b/include/asm-arm/arch-ixp4xx/npe.h
new file mode 100644
index 0000000..fd20bf5
--- /dev/null
+++ b/include/asm-arm/arch-ixp4xx/npe.h
@@ -0,0 +1,41 @@
+#ifndef __IXP4XX_NPE_H
+#define __IXP4XX_NPE_H
+
+#include <linux/etherdevice.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+
+extern const char *npe_names[];
+
+struct npe_regs {
+ u32 exec_addr, exec_data, exec_status_cmd, exec_count;
+ u32 action_points[4];
+ u32 watchpoint_fifo, watch_count;
+ u32 profile_count;
+ u32 messaging_status, messaging_control;
+ u32 mailbox_status, /*messaging_*/ in_out_fifo;
+};
+
+struct npe {
+ struct resource *mem_res;
+ struct npe_regs __iomem *regs;
+ u32 regs_phys;
+ int id;
+ int valid;
+};
+
+
+static inline const char *npe_name(struct npe *npe)
+{
+ return npe_names[npe->id];
+}
+
+int npe_running(struct npe *npe);
+int npe_send_message(struct npe *npe, const void *msg, const char *what);
+int npe_recv_message(struct npe *npe, void *msg, const char *what);
+int npe_send_recv_message(struct npe *npe, void *msg, const char *what);
+int npe_load_firmware(struct npe *npe, const char *name, struct device *dev);
+struct npe *npe_request(int id);
+void npe_release(struct npe *npe);
+
+#endif /* __IXP4XX_NPE_H */
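The endianness fixup in npe_load_firmware() above (a byte-swapped firmware image is detected by its reversed magic word, then converted in place) can be sketched as self-contained user-space C. The FW_MAGIC value and the fixup_image() helper name here are illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's FW_MAGIC constant */
#define FW_MAGIC 0xFEEDF00Du

/* Same operation as the kernel's swab32(): reverse the four bytes */
static uint32_t swab32(uint32_t x)
{
	return ((x & 0x000000FFu) << 24) |
	       ((x & 0x0000FF00u) << 8)  |
	       ((x & 0x00FF0000u) >> 8)  |
	       ((x & 0xFF000000u) >> 24);
}

struct dl_image_hdr {
	uint32_t magic, id, size;	/* size in 32-bit words, as in the driver */
};

/* Returns 1 if the header is valid (fixing byte order in place), 0 on bad magic */
static int fixup_image(struct dl_image_hdr *hdr)
{
	if (hdr->magic == swab32(FW_MAGIC)) {	/* swapped file */
		hdr->id = swab32(hdr->id);
		hdr->size = swab32(hdr->size);
		hdr->magic = FW_MAGIC;
	} else if (hdr->magic != FW_MAGIC) {
		return 0;			/* bad magic */
	}
	return 1;
}
```

The driver keeps image->magic unswapped until after the size checks so it can detect the swapped case again before converting the payload words; the sketch above folds that into one pass for clarity.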
* [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-07 19:57 ` Krzysztof Halasa
@ 2007-05-08 0:46 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 0:46 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Adds a driver for built-in IXP4xx hardware Queue Manager.
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig
index 9715ef5..71ef55f 100644
--- a/arch/arm/mach-ixp4xx/Kconfig
+++ b/arch/arm/mach-ixp4xx/Kconfig
@@ -176,6 +176,12 @@ config IXP4XX_INDIRECT_PCI
need to use the indirect method instead. If you don't know
what you need, leave this option unselected.
+config IXP4XX_QMGR
+ tristate "IXP4xx Queue Manager support"
+ help
+	  This driver supports the IXP4xx built-in hardware queue manager
+	  and is automatically selected by the Ethernet and HSS drivers.
+
endmenu
endif
diff --git a/arch/arm/mach-ixp4xx/Makefile b/arch/arm/mach-ixp4xx/Makefile
index 3b87c47..f8e1afc 100644
--- a/arch/arm/mach-ixp4xx/Makefile
+++ b/arch/arm/mach-ixp4xx/Makefile
@@ -26,3 +26,4 @@ obj-$(CONFIG_MACH_NAS100D) += nas100d-setup.o nas100d-power.o
obj-$(CONFIG_MACH_DSMG600) += dsmg600-setup.o dsmg600-power.o
obj-$(CONFIG_PCI) += $(obj-pci-$(CONFIG_PCI)) common-pci.o
+obj-$(CONFIG_IXP4XX_QMGR) += ixp4xx_qmgr.o
diff --git a/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c b/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c
new file mode 100644
index 0000000..b9e9bd6
--- /dev/null
+++ b/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c
@@ -0,0 +1,273 @@
+/*
+ * Intel IXP4xx Queue Manager driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <asm/arch/qmgr.h>
+
+#define DEBUG 0
+
+struct qmgr_regs __iomem *qmgr_regs;
+static struct resource *mem_res;
+static spinlock_t qmgr_lock;
+static u32 used_sram_bitmap[4]; /* 128 16-dword pages */
+static void (*irq_handlers[HALF_QUEUES])(void *pdev);
+static void *irq_pdevs[HALF_QUEUES];
+
+void qmgr_set_irq(unsigned int queue, int src,
+ void (*handler)(void *pdev), void *pdev)
+{
+ u32 __iomem *reg = &qmgr_regs->irqsrc[queue / 8]; /* 8 queues / u32 */
+ int bit = (queue % 8) * 4; /* 3 bits + 1 reserved bit per queue */
+ unsigned long flags;
+
+ src &= 7;
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel((__raw_readl(reg) & ~(7 << bit)) | (src << bit), reg);
+ irq_handlers[queue] = handler;
+ irq_pdevs[queue] = pdev;
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+
+static irqreturn_t qmgr_irq1(int irq, void *pdev)
+{
+ int i;
+ u32 val = __raw_readl(&qmgr_regs->irqstat[0]);
+ __raw_writel(val, &qmgr_regs->irqstat[0]); /* ACK */
+
+ for (i = 0; i < HALF_QUEUES; i++)
+ if (val & (1 << i))
+ irq_handlers[i](irq_pdevs[i]);
+
+	return val ? IRQ_HANDLED : IRQ_NONE;
+}
+
+
+void qmgr_enable_irq(unsigned int queue)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) | (1 << queue),
+ &qmgr_regs->irqen[0]);
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+void qmgr_disable_irq(unsigned int queue)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qmgr_lock, flags);
+ __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) & ~(1 << queue),
+ &qmgr_regs->irqen[0]);
+ spin_unlock_irqrestore(&qmgr_lock, flags);
+}
+
+static inline void shift_mask(u32 *mask)
+{
+ mask[3] = mask[3] << 1 | mask[2] >> 31;
+ mask[2] = mask[2] << 1 | mask[1] >> 31;
+ mask[1] = mask[1] << 1 | mask[0] >> 31;
+ mask[0] <<= 1;
+}
+
+int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
+ unsigned int nearly_empty_watermark,
+ unsigned int nearly_full_watermark)
+{
+ u32 cfg, addr = 0, mask[4]; /* in 16-dwords */
+ int err;
+
+ if (queue >= HALF_QUEUES)
+ return -ERANGE;
+
+ if ((nearly_empty_watermark | nearly_full_watermark) & ~7)
+ return -EINVAL;
+
+ switch (len) {
+ case 16:
+ cfg = 0 << 24;
+ mask[0] = 0x1;
+ break;
+ case 32:
+ cfg = 1 << 24;
+ mask[0] = 0x3;
+ break;
+ case 64:
+ cfg = 2 << 24;
+ mask[0] = 0xF;
+ break;
+ case 128:
+ cfg = 3 << 24;
+ mask[0] = 0xFF;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ cfg |= nearly_empty_watermark << 26;
+ cfg |= nearly_full_watermark << 29;
+ len /= 16; /* in 16-dwords: 1, 2, 4 or 8 */
+ mask[1] = mask[2] = mask[3] = 0;
+
+ if (!try_module_get(THIS_MODULE))
+ return -ENODEV;
+
+ spin_lock_irq(&qmgr_lock);
+ if (__raw_readl(&qmgr_regs->sram[queue])) {
+ err = -EBUSY;
+ goto err;
+ }
+
+ while (1) {
+ if (!(used_sram_bitmap[0] & mask[0]) &&
+ !(used_sram_bitmap[1] & mask[1]) &&
+ !(used_sram_bitmap[2] & mask[2]) &&
+ !(used_sram_bitmap[3] & mask[3]))
+ break; /* found free space */
+
+ addr++;
+ shift_mask(mask);
+ if (addr + len > ARRAY_SIZE(qmgr_regs->sram)) {
+ printk(KERN_ERR "qmgr: no free SRAM space for"
+ " queue %i\n", queue);
+ err = -ENOMEM;
+ goto err;
+ }
+ }
+
+ used_sram_bitmap[0] |= mask[0];
+ used_sram_bitmap[1] |= mask[1];
+ used_sram_bitmap[2] |= mask[2];
+ used_sram_bitmap[3] |= mask[3];
+ __raw_writel(cfg | (addr << 14), &qmgr_regs->sram[queue]);
+ spin_unlock_irq(&qmgr_lock);
+
+#if DEBUG
+ printk(KERN_DEBUG "qmgr: requested queue %i, addr = 0x%02X\n",
+ queue, addr);
+#endif
+ return 0;
+
+err:
+ spin_unlock_irq(&qmgr_lock);
+ module_put(THIS_MODULE);
+ return err;
+}
+
+void qmgr_release_queue(unsigned int queue)
+{
+ u32 cfg, addr, mask[4];
+
+ BUG_ON(queue >= HALF_QUEUES); /* not in valid range */
+
+ spin_lock_irq(&qmgr_lock);
+ cfg = __raw_readl(&qmgr_regs->sram[queue]);
+ addr = (cfg >> 14) & 0xFF;
+
+ BUG_ON(!addr); /* not requested */
+
+ switch ((cfg >> 24) & 3) {
+ case 0: mask[0] = 0x1; break;
+ case 1: mask[0] = 0x3; break;
+ case 2: mask[0] = 0xF; break;
+ case 3: mask[0] = 0xFF; break;
+	}
+	mask[1] = mask[2] = mask[3] = 0;
+
+ while (addr--)
+ shift_mask(mask);
+
+ __raw_writel(0, &qmgr_regs->sram[queue]);
+
+ used_sram_bitmap[0] &= ~mask[0];
+ used_sram_bitmap[1] &= ~mask[1];
+ used_sram_bitmap[2] &= ~mask[2];
+ used_sram_bitmap[3] &= ~mask[3];
+ irq_handlers[queue] = NULL; /* catch IRQ bugs */
+ spin_unlock_irq(&qmgr_lock);
+
+ module_put(THIS_MODULE);
+#if DEBUG
+ printk(KERN_DEBUG "qmgr: released queue %i\n", queue);
+#endif
+}
+
+static int qmgr_init(void)
+{
+ int i, err;
+ mem_res = request_mem_region(IXP4XX_QMGR_BASE_PHYS,
+ IXP4XX_QMGR_REGION_SIZE,
+ "IXP4xx Queue Manager");
+ if (mem_res == NULL)
+ return -EBUSY;
+
+ qmgr_regs = ioremap(IXP4XX_QMGR_BASE_PHYS, IXP4XX_QMGR_REGION_SIZE);
+ if (qmgr_regs == NULL) {
+ err = -ENOMEM;
+ goto error_map;
+ }
+
+ /* reset qmgr registers */
+ for (i = 0; i < 4; i++) {
+ __raw_writel(0x33333333, &qmgr_regs->stat1[i]);
+ __raw_writel(0, &qmgr_regs->irqsrc[i]);
+ }
+ for (i = 0; i < 2; i++) {
+ __raw_writel(0, &qmgr_regs->stat2[i]);
+ __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[i]); /* clear */
+ __raw_writel(0, &qmgr_regs->irqen[i]);
+ }
+
+ for (i = 0; i < QUEUES; i++)
+ __raw_writel(0, &qmgr_regs->sram[i]);
+
+ err = request_irq(IRQ_IXP4XX_QM1, qmgr_irq1, 0,
+ "IXP4xx Queue Manager", NULL);
+ if (err) {
+ printk(KERN_ERR "qmgr: failed to request IRQ%i\n",
+ IRQ_IXP4XX_QM1);
+ goto error_irq;
+ }
+
+	used_sram_bitmap[0] = 0xF; /* first 4 pages reserved for config */
+ spin_lock_init(&qmgr_lock);
+
+ printk(KERN_INFO "IXP4xx Queue Manager initialized.\n");
+ return 0;
+
+error_irq:
+ iounmap(qmgr_regs);
+error_map:
+ release_resource(mem_res);
+ return err;
+}
+
+static void qmgr_remove(void)
+{
+ free_irq(IRQ_IXP4XX_QM1, NULL);
+ synchronize_irq(IRQ_IXP4XX_QM1);
+ iounmap(qmgr_regs);
+ release_resource(mem_res);
+}
+
+module_init(qmgr_init);
+module_exit(qmgr_remove);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Krzysztof Halasa");
+
+EXPORT_SYMBOL(qmgr_regs);
+EXPORT_SYMBOL(qmgr_set_irq);
+EXPORT_SYMBOL(qmgr_enable_irq);
+EXPORT_SYMBOL(qmgr_disable_irq);
+EXPORT_SYMBOL(qmgr_request_queue);
+EXPORT_SYMBOL(qmgr_release_queue);
diff --git a/include/asm-arm/arch-ixp4xx/qmgr.h b/include/asm-arm/arch-ixp4xx/qmgr.h
new file mode 100644
index 0000000..d03464a
--- /dev/null
+++ b/include/asm-arm/arch-ixp4xx/qmgr.h
@@ -0,0 +1,124 @@
+/*
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#ifndef IXP4XX_QMGR_H
+#define IXP4XX_QMGR_H
+
+#include <linux/kernel.h>
+#include <asm/io.h>
+
+#define HALF_QUEUES 32
+#define QUEUES 64 /* only 32 lower queues currently supported */
+#define MAX_QUEUE_LENGTH 4 /* in dwords */
+
+#define QUEUE_STAT1_EMPTY 1 /* queue status bits */
+#define QUEUE_STAT1_NEARLY_EMPTY 2
+#define QUEUE_STAT1_NEARLY_FULL 4
+#define QUEUE_STAT1_FULL 8
+#define QUEUE_STAT2_UNDERFLOW 1
+#define QUEUE_STAT2_OVERFLOW 2
+
+#define QUEUE_WATERMARK_0_ENTRIES 0
+#define QUEUE_WATERMARK_1_ENTRY 1
+#define QUEUE_WATERMARK_2_ENTRIES 2
+#define QUEUE_WATERMARK_4_ENTRIES 3
+#define QUEUE_WATERMARK_8_ENTRIES 4
+#define QUEUE_WATERMARK_16_ENTRIES 5
+#define QUEUE_WATERMARK_32_ENTRIES 6
+#define QUEUE_WATERMARK_64_ENTRIES 7
+
+/* queue interrupt request conditions */
+#define QUEUE_IRQ_SRC_EMPTY 0
+#define QUEUE_IRQ_SRC_NEARLY_EMPTY 1
+#define QUEUE_IRQ_SRC_NEARLY_FULL 2
+#define QUEUE_IRQ_SRC_FULL 3
+#define QUEUE_IRQ_SRC_NOT_EMPTY 4
+#define QUEUE_IRQ_SRC_NOT_NEARLY_EMPTY 5
+#define QUEUE_IRQ_SRC_NOT_NEARLY_FULL 6
+#define QUEUE_IRQ_SRC_NOT_FULL 7
+
+struct qmgr_regs {
+ u32 acc[QUEUES][MAX_QUEUE_LENGTH]; /* 0x000 - 0x3FF */
+ u32 stat1[4]; /* 0x400 - 0x40F */
+ u32 stat2[2]; /* 0x410 - 0x417 */
+ u32 statne_h; /* 0x418 - queue nearly empty */
+ u32 statf_h; /* 0x41C - queue full */
+	u32 irqsrc[4];		/* 0x420 - 0x42F IRQ source */
+ u32 irqen[2]; /* 0x430 - 0x437 IRQ enabled */
+ u32 irqstat[2]; /* 0x438 - 0x43F - IRQ access only */
+ u32 reserved[1776];
+ u32 sram[2048]; /* 0x2000 - 0x3FFF - config and buffer */
+};
+
+extern struct qmgr_regs __iomem *qmgr_regs;
+
+void qmgr_set_irq(unsigned int queue, int src,
+ void (*handler)(void *pdev), void *pdev);
+void qmgr_enable_irq(unsigned int queue);
+void qmgr_disable_irq(unsigned int queue);
+
+/* request_ and release_queue() must be called from non-IRQ context */
+int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
+ unsigned int nearly_empty_watermark,
+ unsigned int nearly_full_watermark);
+void qmgr_release_queue(unsigned int queue);
+
+
+static inline void qmgr_put_entry(unsigned int queue, u32 val)
+{
+ __raw_writel(val, &qmgr_regs->acc[queue][0]);
+}
+
+static inline u32 qmgr_get_entry(unsigned int queue)
+{
+ return __raw_readl(&qmgr_regs->acc[queue][0]);
+}
+
+static inline int qmgr_get_stat1(unsigned int queue)
+{
+ return (__raw_readl(&qmgr_regs->stat1[queue >> 3])
+ >> ((queue & 7) << 2)) & 0xF;
+}
+
+static inline int qmgr_get_stat2(unsigned int queue)
+{
+ return (__raw_readl(&qmgr_regs->stat2[queue >> 4])
+ >> ((queue & 0xF) << 1)) & 0x3;
+}
+
+static inline int qmgr_stat_empty(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_EMPTY);
+}
+
+static inline int qmgr_stat_nearly_empty(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_NEARLY_EMPTY);
+}
+
+static inline int qmgr_stat_nearly_full(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_NEARLY_FULL);
+}
+
+static inline int qmgr_stat_full(unsigned int queue)
+{
+ return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_FULL);
+}
+
+static inline int qmgr_stat_underflow(unsigned int queue)
+{
+ return !!(qmgr_get_stat2(queue) & QUEUE_STAT2_UNDERFLOW);
+}
+
+static inline int qmgr_stat_overflow(unsigned int queue)
+{
+ return !!(qmgr_get_stat2(queue) & QUEUE_STAT2_OVERFLOW);
+}
+
+#endif
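The SRAM allocation in qmgr_request_queue() above is a first-fit search over a 128-bit bitmap, one bit per 16-dword page, with shift_mask() sliding the candidate window one page at a time. A user-space sketch of that search logic follows; alloc_pages() is a hypothetical name, and the 128-page bound is derived from the four-word bitmap (the driver itself compares against the SRAM size in dwords):

```c
#include <assert.h>
#include <stdint.h>

#define SRAM_PAGES 128	/* 4 x 32 bits: one bit per 16-dword page */

/* Shift a 128-bit mask (stored as four 32-bit words) left by one bit */
static void shift_mask(uint32_t *mask)
{
	mask[3] = mask[3] << 1 | mask[2] >> 31;
	mask[2] = mask[2] << 1 | mask[1] >> 31;
	mask[1] = mask[1] << 1 | mask[0] >> 31;
	mask[0] <<= 1;
}

/* First-fit: find the lowest free run of len pages (len <= 8) and mark
   it used.  Returns the page address, or -1 if there is no free space. */
static int alloc_pages(uint32_t *used, unsigned int len)
{
	uint32_t mask[4] = { (1u << len) - 1, 0, 0, 0 };
	int addr = 0;

	while (1) {
		if (!(used[0] & mask[0]) && !(used[1] & mask[1]) &&
		    !(used[2] & mask[2]) && !(used[3] & mask[3]))
			break;	/* found free space */
		addr++;
		shift_mask(mask);
		if (addr + (int)len > SRAM_PAGES)
			return -1;	/* no free SRAM space */
	}
	used[0] |= mask[0];
	used[1] |= mask[1];
	used[2] |= mask[2];
	used[3] |= mask[3];
	return addr;
}
```

qmgr_release_queue() runs the inverse: it rebuilds the mask from the stored queue config, shifts it to the recorded address, and clears those bitmap bits.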
* [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-07 19:57 ` Krzysztof Halasa
@ 2007-05-08 1:19 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 1:19 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Adds a driver for built-in IXP4xx Ethernet MAC and HSS ports
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/ixdp425-setup.c b/arch/arm/mach-ixp4xx/ixdp425-setup.c
index ec4f079..f20d39d 100644
--- a/arch/arm/mach-ixp4xx/ixdp425-setup.c
+++ b/arch/arm/mach-ixp4xx/ixdp425-setup.c
@@ -101,10 +101,35 @@ static struct platform_device ixdp425_uart = {
.resource = ixdp425_uart_resources
};
+/* Built-in 10/100 Ethernet MAC interfaces */
+static struct mac_plat_info ixdp425_plat_mac[] = {
+ {
+ .phy = 0,
+ .rxq = 3,
+ }, {
+ .phy = 1,
+ .rxq = 4,
+ }
+};
+
+static struct platform_device ixdp425_mac[] = {
+ {
+ .name = "ixp4xx_eth",
+ .id = IXP4XX_ETH_NPEB,
+ .dev.platform_data = ixdp425_plat_mac,
+ }, {
+ .name = "ixp4xx_eth",
+ .id = IXP4XX_ETH_NPEC,
+ .dev.platform_data = ixdp425_plat_mac + 1,
+ }
+};
+
static struct platform_device *ixdp425_devices[] __initdata = {
&ixdp425_i2c_controller,
&ixdp425_flash,
- &ixdp425_uart
+ &ixdp425_uart,
+ &ixdp425_mac[0],
+ &ixdp425_mac[1],
};
static void __init ixdp425_init(void)
diff --git a/drivers/net/arm/Kconfig b/drivers/net/arm/Kconfig
index 678e4f4..5e2acb6 100644
--- a/drivers/net/arm/Kconfig
+++ b/drivers/net/arm/Kconfig
@@ -46,3 +46,13 @@ config EP93XX_ETH
help
This is a driver for the ethernet hardware included in EP93xx CPUs.
Say Y if you are building a kernel for EP93xx based devices.
+
+config IXP4XX_ETH
+ tristate "IXP4xx Ethernet support"
+ depends on NET_ETHERNET && ARM && ARCH_IXP4XX
+ select IXP4XX_NPE
+ select IXP4XX_QMGR
+ select MII
+ help
+ Say Y here if you want to use the built-in Ethernet ports
+ on the IXP4xx processor.
diff --git a/drivers/net/arm/Makefile b/drivers/net/arm/Makefile
index a4c8682..7c812ac 100644
--- a/drivers/net/arm/Makefile
+++ b/drivers/net/arm/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_ARM_ETHER3) += ether3.o
obj-$(CONFIG_ARM_ETHER1) += ether1.o
obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
obj-$(CONFIG_EP93XX_ETH) += ep93xx_eth.o
+obj-$(CONFIG_IXP4XX_ETH) += ixp4xx_eth.o
diff --git a/drivers/net/arm/ixp4xx_eth.c b/drivers/net/arm/ixp4xx_eth.c
new file mode 100644
index 0000000..dcea6e5
--- /dev/null
+++ b/drivers/net/arm/ixp4xx_eth.c
@@ -0,0 +1,1002 @@
+/*
+ * Intel IXP4xx Ethernet driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * Ethernet port config (0x00 is not present on IXP42X):
+ *
+ * logical port 0x00 0x10 0x20
+ * NPE 0 (NPE-A) 1 (NPE-B) 2 (NPE-C)
+ * physical PortId 2 0 1
+ * TX queue 23 24 25
+ * RX-free queue 26 27 28
+ * TX-done queue is always 31, RX queue is configurable
+ */
+
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/mii.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include <asm/arch/npe.h>
+#include <asm/arch/qmgr.h>
+
+#ifndef __ARMEB__
+#warning Little endian mode not supported
+#endif
+
+#define DEBUG_QUEUES 0
+#define DEBUG_RX 0
+#define DEBUG_TX 0
+#define DEBUG_PKT_BYTES 0
+#define DEBUG_MDIO 0
+
+#define DRV_NAME "ixp4xx_eth"
+#define DRV_VERSION "0.04"
+
+#define TX_QUEUE_LEN 16 /* dwords */
+#define PKT_DESCS 64 /* also length of queues: TX-done, RX-ready, RX */
+
+#define POOL_ALLOC_SIZE (sizeof(struct desc) * (PKT_DESCS))
+#define REGS_SIZE 0x1000
+#define MAX_MRU 1536
+
+#define MDIO_INTERVAL (3 * HZ)
+#define MAX_MDIO_RETRIES 100 /* microseconds, typically 30 cycles */
+
+#define NPE_ID(port) ((port)->id >> 4)
+#define PHYSICAL_ID(port) ((NPE_ID(port) + 2) % 3)
+#define TX_QUEUE(plat) (NPE_ID(port) + 23)
+#define RXFREE_QUEUE(plat) (NPE_ID(port) + 26)
+#define TXDONE_QUEUE 31
+
+/* TX Control Registers */
+#define TX_CNTRL0_TX_EN BIT(0)
+#define TX_CNTRL0_HALFDUPLEX BIT(1)
+#define TX_CNTRL0_RETRY BIT(2)
+#define TX_CNTRL0_PAD_EN BIT(3)
+#define TX_CNTRL0_APPEND_FCS BIT(4)
+#define TX_CNTRL0_2DEFER BIT(5)
+#define TX_CNTRL0_RMII BIT(6) /* reduced MII */
+#define TX_CNTRL1_RETRIES 0x0F /* 4 bits */
+
+/* RX Control Registers */
+#define RX_CNTRL0_RX_EN BIT(0)
+#define RX_CNTRL0_PADSTRIP_EN BIT(1)
+#define RX_CNTRL0_SEND_FCS BIT(2)
+#define RX_CNTRL0_PAUSE_EN BIT(3)
+#define RX_CNTRL0_LOOP_EN BIT(4)
+#define RX_CNTRL0_ADDR_FLTR_EN BIT(5)
+#define RX_CNTRL0_RX_RUNT_EN BIT(6)
+#define RX_CNTRL0_BCAST_DIS BIT(7)
+#define RX_CNTRL1_DEFER_EN BIT(0)
+
+/* Core Control Register */
+#define CORE_RESET BIT(0)
+#define CORE_RX_FIFO_FLUSH BIT(1)
+#define CORE_TX_FIFO_FLUSH BIT(2)
+#define CORE_SEND_JAM BIT(3)
+#define CORE_MDC_EN BIT(4) /* NPE-B ETH-0 only */
+
+/* Definitions for MII access routines */
+#define MII_CMD_GO BIT(31)
+#define MII_CMD_WRITE BIT(26)
+#define MII_STAT_READ_FAILED BIT(31)
+
+/* NPE message codes */
+#define NPE_GETSTATUS 0x00
+#define NPE_EDB_SETPORTADDRESS 0x01
+#define NPE_EDB_GETMACADDRESSDATABASE 0x02
+#define NPE_EDB_SETMACADDRESSSDATABASE 0x03
+#define NPE_GETSTATS 0x04
+#define NPE_RESETSTATS 0x05
+#define NPE_SETMAXFRAMELENGTHS 0x06
+#define NPE_VLAN_SETRXTAGMODE 0x07
+#define NPE_VLAN_SETDEFAULTRXVID 0x08
+#define NPE_VLAN_SETPORTVLANTABLEENTRY 0x09
+#define NPE_VLAN_SETPORTVLANTABLERANGE 0x0A
+#define NPE_VLAN_SETRXQOSENTRY 0x0B
+#define NPE_VLAN_SETPORTIDEXTRACTIONMODE 0x0C
+#define NPE_STP_SETBLOCKINGSTATE 0x0D
+#define NPE_FW_SETFIREWALLMODE 0x0E
+#define NPE_PC_SETFRAMECONTROLDURATIONID 0x0F
+#define NPE_PC_SETAPMACTABLE 0x11
+#define NPE_SETLOOPBACK_MODE 0x12
+#define NPE_PC_SETBSSIDTABLE 0x13
+#define NPE_ADDRESS_FILTER_CONFIG 0x14
+#define NPE_APPENDFCSCONFIG 0x15
+#define NPE_NOTIFY_MAC_RECOVERY_DONE 0x16
+#define NPE_MAC_RECOVERY_START 0x17
+
+
+struct eth_regs {
+ u32 tx_control[2], __res1[2]; /* 000 */
+ u32 rx_control[2], __res2[2]; /* 010 */
+ u32 random_seed, __res3[3]; /* 020 */
+ u32 partial_empty_threshold, __res4; /* 030 */
+ u32 partial_full_threshold, __res5; /* 038 */
+ u32 tx_start_bytes, __res6[3]; /* 040 */
+ u32 tx_deferral, rx_deferral, __res7[2]; /* 050 */
+ u32 tx_2part_deferral[2], __res8[2]; /* 060 */
+ u32 slot_time, __res9[3]; /* 070 */
+ u32 mdio_command[4]; /* 080 */
+ u32 mdio_status[4]; /* 090 */
+ u32 mcast_mask[6], __res10[2]; /* 0A0 */
+ u32 mcast_addr[6], __res11[2]; /* 0C0 */
+ u32 int_clock_threshold, __res12[3]; /* 0E0 */
+ u32 hw_addr[6], __res13[61]; /* 0F0 */
+ u32 core_control; /* 1FC */
+};
+
+struct port {
+ struct resource *mem_res;
+ struct eth_regs __iomem *regs;
+ struct npe *npe;
+ struct net_device *netdev;
+ struct net_device_stats stat;
+ struct mii_if_info mii;
+ struct delayed_work mdio_thread;
+ struct mac_plat_info *plat;
+ struct sk_buff *rx_skb_tab[PKT_DESCS];
+ struct desc *rx_desc_tab; /* coherent */
+ int id; /* logical port ID */
+ u32 rx_desc_tab_phys;
+ u32 msg_enable;
+};
+
+/* NPE message structure */
+struct msg {
+ union {
+ struct {
+ u8 cmd, eth_id, mac[ETH_ALEN];
+ };
+ struct {
+ u8 cmd, eth_id, __byte2, byte3;
+ u8 __byte4, byte5, __byte6, byte7;
+ };
+ struct {
+ u8 cmd, eth_id, __b2, byte3;
+ u32 data32;
+ };
+ };
+};
+
+/* Ethernet packet descriptor */
+struct desc {
+ u32 next; /* pointer to next buffer, unused */
+ u16 buf_len; /* buffer length */
+ u16 pkt_len; /* packet length */
+ u32 data; /* pointer to data buffer in RAM */
+ u8 dest_id;
+ u8 src_id;
+ u16 flags;
+ u8 qos;
+ u8 padlen;
+ u16 vlan_tci;
+ u8 dest_mac[ETH_ALEN];
+ u8 src_mac[ETH_ALEN];
+};
+
+
+#define rx_desc_phys(port, n) ((port)->rx_desc_tab_phys + \
+ (n) * sizeof(struct desc))
+#define tx_desc_phys(n) (tx_desc_tab_phys + (n) * sizeof(struct desc))
+
+static spinlock_t mdio_lock;
+static struct eth_regs __iomem *mdio_regs; /* mdio command and status only */
+static struct npe *mdio_npe;
+static int ports_open;
+static struct dma_pool *dma_pool;
+static struct sk_buff *tx_skb_tab[PKT_DESCS];
+static struct desc *tx_desc_tab; /* coherent */
+static u32 tx_desc_tab_phys;
+
+
+static inline void set_regbits(u32 bits, u32 __iomem *reg)
+{
+ __raw_writel(__raw_readl(reg) | bits, reg);
+}
+static inline void clr_regbits(u32 bits, u32 __iomem *reg)
+{
+ __raw_writel(__raw_readl(reg) & ~bits, reg);
+}
+
+
+static u16 mdio_cmd(struct net_device *dev, int phy_id, int location,
+ int write, u16 cmd)
+{
+ int cycles = 0;
+
+ if (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80) {
+ printk(KERN_ERR "%s: MII not ready to transmit\n", dev->name);
+ return 0; /* not ready to transmit */
+ }
+
+ if (write) {
+ __raw_writel(cmd & 0xFF, &mdio_regs->mdio_command[0]);
+ __raw_writel(cmd >> 8, &mdio_regs->mdio_command[1]);
+ }
+ __raw_writel(((phy_id << 5) | location) & 0xFF,
+ &mdio_regs->mdio_command[2]);
+ __raw_writel((phy_id >> 3) | (write << 2) | 0x80 /* GO */,
+ &mdio_regs->mdio_command[3]);
+
+ while ((cycles < MAX_MDIO_RETRIES) &&
+ (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80)) {
+ udelay(1);
+ cycles++;
+ }
+
+ if (cycles == MAX_MDIO_RETRIES) {
+ printk(KERN_ERR "%s: MII command timed out\n", dev->name);
+ return 0;
+ }
+
+#if DEBUG_MDIO
+ printk(KERN_DEBUG "mdio_cmd() took %i cycles\n", cycles);
+#endif
+
+ if (write)
+ return 0;
+
+ if (__raw_readl(&mdio_regs->mdio_status[3]) & 0x80) {
+ printk(KERN_ERR "%s: MII read failed\n", dev->name);
+ return 0;
+ }
+
+ return (__raw_readl(&mdio_regs->mdio_status[0]) & 0xFF) |
+ (__raw_readl(&mdio_regs->mdio_status[1]) << 8);
+}
+
+static int mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ unsigned long flags;
+ u16 val;
+
+ spin_lock_irqsave(&mdio_lock, flags);
+ val = mdio_cmd(dev, phy_id, location, 0, 0);
+ spin_unlock_irqrestore(&mdio_lock, flags);
+ return val;
+}
+
+static void mdio_write(struct net_device *dev, int phy_id, int location,
+ int val)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdio_lock, flags);
+ mdio_cmd(dev, phy_id, location, 1, val);
+ spin_unlock_irqrestore(&mdio_lock, flags);
+}
+
+static void eth_set_duplex(struct port *port)
+{
+ if (port->mii.full_duplex)
+ clr_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
+ else
+ set_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
+}
+
+
+static void mdio_thread(struct work_struct *work)
+{
+ struct port *port = container_of(work, struct port, mdio_thread.work);
+
+ if (mii_check_media(&port->mii, 1, 0))
+ eth_set_duplex(port);
+ schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
+}
+
+
+static inline void debug_skb(const char *func, struct sk_buff *skb)
+{
+#if DEBUG_PKT_BYTES
+ int i;
+
+ printk(KERN_DEBUG "%s(%i): ", func, skb->len);
+ for (i = 0; i < skb->len; i++) {
+ if (i >= DEBUG_PKT_BYTES)
+ break;
+ printk("%s%02X",
+ ((i == 6) || (i == 12) || (i >= 14)) ? " " : "",
+ skb->data[i]);
+ }
+ printk("\n");
+#endif
+}
+
+
+static inline void debug_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc, int is_get)
+{
+#if DEBUG_QUEUES
+ const char *op = is_get ? "->" : "<-";
+
+ if (!desc_phys) {
+ printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
+ return;
+ }
+ printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %2X < %2X %4X %X"
+ " %X %X %02X%02X%02X%02X%02X%02X < %02X%02X%02X%02X%02X%02X\n",
+ queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
+ desc->data, desc->dest_id, desc->src_id,
+ desc->flags, desc->qos,
+ desc->padlen, desc->vlan_tci,
+ desc->dest_mac[0], desc->dest_mac[1],
+ desc->dest_mac[2], desc->dest_mac[3],
+ desc->dest_mac[4], desc->dest_mac[5],
+ desc->src_mac[0], desc->src_mac[1],
+ desc->src_mac[2], desc->src_mac[3],
+ desc->src_mac[4], desc->src_mac[5]);
+#endif
+}
+
+static inline int queue_get_desc(unsigned int queue, struct port *port,
+ int is_tx)
+{
+ u32 phys, tab_phys, n_desc;
+ struct desc *tab;
+
+ if (!(phys = qmgr_get_entry(queue))) {
+ debug_desc(queue, phys, NULL, 1);
+ return -1;
+ }
+
+ phys &= ~0x1F; /* mask out non-address bits */
+ tab_phys = is_tx ? tx_desc_phys(0) : rx_desc_phys(port, 0);
+ tab = is_tx ? tx_desc_tab : port->rx_desc_tab;
+ n_desc = (phys - tab_phys) / sizeof(struct desc);
+ BUG_ON(n_desc >= PKT_DESCS);
+
+ debug_desc(queue, phys, &tab[n_desc], 1);
+ BUG_ON(tab[n_desc].next);
+ return n_desc;
+}
+
+static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc)
+{
+ debug_desc(queue, desc_phys, desc, 0);
+ BUG_ON(desc_phys & 0x1F);
+ qmgr_put_entry(queue, desc_phys);
+}
+
+
+static void eth_rx_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+ struct port *port = netdev_priv(dev);
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "eth_rx_irq() start\n");
+#endif
+ qmgr_disable_irq(port->plat->rxq);
+ netif_rx_schedule(dev);
+}
+
+static int eth_poll(struct net_device *dev, int *budget)
+{
+ struct port *port = netdev_priv(dev);
+ unsigned int queue = port->plat->rxq;
+ int quota = dev->quota, received = 0;
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "eth_poll() start\n");
+#endif
+ while (quota) {
+ struct sk_buff *old_skb, *new_skb;
+ struct desc *desc;
+ u32 data;
+ int n = queue_get_desc(queue, port, 0);
+ if (n < 0) { /* No packets received */
+ dev->quota -= received;
+ *budget -= received;
+ received = 0;
+ netif_rx_complete(dev);
+ qmgr_enable_irq(queue);
+ if (!qmgr_stat_empty(queue) &&
+ netif_rx_reschedule(dev, 0)) {
+ qmgr_disable_irq(queue);
+ continue;
+ }
+ return 0; /* all work done */
+ }
+
+ desc = &port->rx_desc_tab[n];
+
+ if ((new_skb = netdev_alloc_skb(dev, MAX_MRU)) != NULL) {
+#if 0
+ skb_reserve(new_skb, 2); /* FIXME */
+#endif
+ data = dma_map_single(&dev->dev, new_skb->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ }
+
+ if (!new_skb || dma_mapping_error(data)) {
+ if (new_skb)
+ dev_kfree_skb(new_skb);
+ port->stat.rx_dropped++;
+ /* put the desc back on RX-ready queue */
+ desc->buf_len = MAX_MRU;
+ desc->pkt_len = 0;
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ continue;
+ }
+
+ /* process received skb */
+ old_skb = port->rx_skb_tab[n];
+ dma_unmap_single(&dev->dev, desc->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ skb_put(old_skb, desc->pkt_len);
+
+ debug_skb("eth_poll", old_skb);
+
+ old_skb->protocol = eth_type_trans(old_skb, dev);
+ dev->last_rx = jiffies;
+ port->stat.rx_packets++;
+ port->stat.rx_bytes += old_skb->len;
+ netif_receive_skb(old_skb);
+
+ /* put the new skb on RX-free queue */
+ port->rx_skb_tab[n] = new_skb;
+ desc->buf_len = MAX_MRU;
+ desc->pkt_len = 0;
+ desc->data = data;
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ quota--;
+ received++;
+ }
+ dev->quota -= received;
+ *budget -= received;
+ return 1; /* not all work done */
+}
+
+static void eth_xmit_ready_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit_ready_irq() start\n");
+#endif
+ netif_start_queue(dev);
+}
+
+static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct desc *desc;
+ u32 phys;
+ struct sk_buff *old_skb;
+ int n;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit() start\n");
+#endif
+ if (unlikely(skb->len > MAX_MRU)) {
+ dev_kfree_skb(skb);
+ port->stat.tx_errors++;
+ return NETDEV_TX_OK;
+ }
+
+ n = queue_get_desc(TXDONE_QUEUE, port, 1);
+ BUG_ON(n < 0);
+ desc = &tx_desc_tab[n];
+ phys = tx_desc_phys(n);
+
+ if ((old_skb = tx_skb_tab[n]) != NULL) {
+ dma_unmap_single(&dev->dev, desc->data,
+ desc->buf_len, DMA_TO_DEVICE);
+ port->stat.tx_packets++;
+ port->stat.tx_bytes += old_skb->len;
+ dev_kfree_skb(old_skb);
+ }
+
+ /* disable VLAN functions in NPE image for now */
+ memset(desc, 0, sizeof(*desc));
+ desc->buf_len = desc->pkt_len = skb->len;
+ desc->data = dma_map_single(&dev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ dev_kfree_skb(skb);
+ tx_skb_tab[n] = NULL;
+ port->stat.tx_dropped++;
+ /* put the desc back on TX-done queue */
+ queue_put_desc(TXDONE_QUEUE, phys, desc);
+ return NETDEV_TX_OK;
+ }
+
+ tx_skb_tab[n] = skb;
+ debug_skb("eth_xmit", skb);
+
+ /* NPE firmware pads short frames with zeros internally */
+ wmb();
+ queue_put_desc(TX_QUEUE(port->plat), phys, desc);
+ BUG_ON(qmgr_stat_overflow(TX_QUEUE(port->plat)));
+ dev->trans_start = jiffies;
+
+ if (qmgr_stat_full(TX_QUEUE(port->plat))) {
+ netif_stop_queue(dev);
+ /* we could miss TX ready interrupt */
+ if (!qmgr_stat_full(TX_QUEUE(port->plat))) {
+ netif_start_queue(dev);
+ }
+ }
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit() end\n");
+#endif
+ return NETDEV_TX_OK;
+}
+
+
+static struct net_device_stats *eth_stats(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ return &port->stat;
+}
+
+static void eth_set_mcast_list(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct dev_mc_list *mclist = dev->mc_list;
+ u8 diffs[ETH_ALEN], *addr;
+ int cnt = dev->mc_count, i;
+
+ if ((dev->flags & IFF_PROMISC) || !mclist || !cnt) {
+ clr_regbits(RX_CNTRL0_ADDR_FLTR_EN,
+ &port->regs->rx_control[0]);
+ return;
+ }
+
+ memset(diffs, 0, ETH_ALEN);
+ addr = mclist->dmi_addr; /* first MAC address */
+
+ while (--cnt && (mclist = mclist->next))
+ for (i = 0; i < ETH_ALEN; i++)
+ diffs[i] |= addr[i] ^ mclist->dmi_addr[i];
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ __raw_writel(addr[i], &port->regs->mcast_addr[i]);
+ __raw_writel(~diffs[i], &port->regs->mcast_mask[i]);
+ }
+
+ set_regbits(RX_CNTRL0_ADDR_FLTR_EN, &port->regs->rx_control[0]);
+}
+
+
+static int eth_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
+{
+ struct port *port = netdev_priv(dev);
+ unsigned int duplex_chg;
+ int err;
+
+ if (!netif_running(dev))
+ return -EINVAL;
+ err = generic_mii_ioctl(&port->mii, if_mii(req), cmd, &duplex_chg);
+ if (duplex_chg)
+ eth_set_duplex(port);
+ return err;
+}
+
+
+static int request_queues(struct port *port)
+{
+ int err;
+
+ err = qmgr_request_queue(RXFREE_QUEUE(port->plat), PKT_DESCS, 0, 0);
+ if (err)
+ return err;
+
+ err = qmgr_request_queue(port->plat->rxq, PKT_DESCS, 0, 0);
+ if (err)
+ goto rel_rxfree;
+
+ err = qmgr_request_queue(TX_QUEUE(port->plat), TX_QUEUE_LEN, 0, 0);
+ if (err)
+ goto rel_rx;
+
+ /* TX-done queue handles skbs sent out by the NPEs */
+ if (!ports_open) {
+ err = qmgr_request_queue(TXDONE_QUEUE, PKT_DESCS, 0, 0);
+ if (err)
+ goto rel_tx;
+ }
+ return 0;
+
+rel_tx:
+ qmgr_release_queue(TX_QUEUE(port->plat));
+rel_rx:
+ qmgr_release_queue(port->plat->rxq);
+rel_rxfree:
+ qmgr_release_queue(RXFREE_QUEUE(port->plat));
+ return err;
+}
+
+static void release_queues(struct port *port)
+{
+ qmgr_release_queue(RXFREE_QUEUE(port->plat));
+ qmgr_release_queue(port->plat->rxq);
+ qmgr_release_queue(TX_QUEUE(port->plat));
+
+ if (!ports_open)
+ qmgr_release_queue(TXDONE_QUEUE);
+}
+
+static int init_queues(struct port *port)
+{
+ int i;
+
+ if (!dma_pool) {
+ /* Setup TX descriptors - common to all ports */
+ dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
+ 32, 0);
+ if (!dma_pool)
+ return -ENOMEM;
+
+ if (!(tx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &tx_desc_tab_phys)))
+ return -ENOMEM;
+ memset(tx_desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(tx_skb_tab, 0, sizeof(tx_skb_tab)); /* static table */
+
+ for (i = 0; i < PKT_DESCS; i++) {
+ queue_put_desc(TXDONE_QUEUE, tx_desc_phys(i),
+ &tx_desc_tab[i]);
+ BUG_ON(qmgr_stat_overflow(TXDONE_QUEUE));
+ }
+ }
+
+ /* Setup RX buffers */
+ if (!(port->rx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &port->rx_desc_tab_phys)))
+ return -ENOMEM;
+ memset(port->rx_desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* table */
+
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &port->rx_desc_tab[i];
+ struct sk_buff *skb;
+
+ if (!(skb = netdev_alloc_skb(port->netdev, MAX_MRU)))
+ return -ENOMEM;
+ port->rx_skb_tab[i] = skb;
+ desc->buf_len = MAX_MRU;
+#if 0
+ skb_reserve(skb, 2); /* FIXME */
+#endif
+ desc->data = dma_map_single(&port->netdev->dev, skb->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ return -EIO;
+ }
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, i), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ }
+ return 0;
+}
+
+static void destroy_queues(struct port *port)
+{
+ int i;
+
+ while (queue_get_desc(RXFREE_QUEUE(port->plat), port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(port->plat->rxq, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(TX_QUEUE(port->plat), port, 1) >= 0)
+ /* nothing to do here */;
+ if (!ports_open)
+ while (queue_get_desc(TXDONE_QUEUE, port, 1) >= 0)
+ /* nothing to do here */;
+
+ if (port->rx_desc_tab) {
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &port->rx_desc_tab[i];
+ struct sk_buff *skb = port->rx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data, MAX_MRU,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, port->rx_desc_tab,
+ port->rx_desc_tab_phys);
+ port->rx_desc_tab = NULL;
+ }
+
+ if (!ports_open && tx_desc_tab) {
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &tx_desc_tab[i];
+ struct sk_buff *skb = tx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data,
+ desc->buf_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, tx_desc_tab, tx_desc_tab_phys);
+ tx_desc_tab = NULL;
+ }
+ if (!ports_open && dma_pool) {
+ dma_pool_destroy(dma_pool);
+ dma_pool = NULL;
+ }
+}
+
+static int eth_load_firmware(struct net_device *dev, struct npe *npe)
+{
+ struct msg msg;
+ int err;
+
+ if ((err = npe_load_firmware(npe, npe_name(npe), &dev->dev)) != 0)
+ return err;
+
+ if ((err = npe_recv_message(npe, &msg, "ETH_GET_STATUS")) != 0) {
+ printk(KERN_ERR "%s: %s not responding\n", dev->name,
+ npe_name(npe));
+ return err;
+ }
+ return 0;
+}
+
+static int eth_open(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+ int i, err;
+
+ if (!npe_running(npe))
+ if (eth_load_firmware(dev, npe))
+ return -EIO;
+
+ if (!npe_running(mdio_npe))
+ if (eth_load_firmware(dev, mdio_npe))
+ return -EIO;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = NPE_VLAN_SETRXQOSENTRY;
+ msg.eth_id = port->id;
+ msg.byte5 = port->plat->rxq | 0x80;
+ msg.byte7 = port->plat->rxq << 4;
+ for (i = 0; i < 8; i++) {
+ msg.byte3 = i;
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_RXQ"))
+ return -EIO;
+ }
+
+ msg.cmd = NPE_EDB_SETPORTADDRESS;
+ msg.eth_id = PHYSICAL_ID(port);
+ memcpy(msg.mac, dev->dev_addr, ETH_ALEN);
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_MAC"))
+ return -EIO;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = NPE_FW_SETFIREWALLMODE;
+ msg.eth_id = port->id;
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_FIREWALL_MODE"))
+ return -EIO;
+
+ if ((err = request_queues(port)) != 0)
+ return err;
+
+ if ((err = init_queues(port)) != 0) {
+ destroy_queues(port);
+ release_queues(port);
+ return err;
+ }
+
+ for (i = 0; i < ETH_ALEN; i++)
+ __raw_writel(dev->dev_addr[i], &port->regs->hw_addr[i]);
+ __raw_writel(0x08, &port->regs->random_seed);
+ __raw_writel(0x12, &port->regs->partial_empty_threshold);
+ __raw_writel(0x30, &port->regs->partial_full_threshold);
+ __raw_writel(0x08, &port->regs->tx_start_bytes);
+ __raw_writel(0x15, &port->regs->tx_deferral);
+ __raw_writel(0x08, &port->regs->tx_2part_deferral[0]);
+ __raw_writel(0x07, &port->regs->tx_2part_deferral[1]);
+ __raw_writel(0x80, &port->regs->slot_time);
+ __raw_writel(0x01, &port->regs->int_clock_threshold);
+ __raw_writel(TX_CNTRL1_RETRIES, &port->regs->tx_control[1]);
+ __raw_writel(TX_CNTRL0_TX_EN | TX_CNTRL0_RETRY | TX_CNTRL0_PAD_EN |
+ TX_CNTRL0_APPEND_FCS | TX_CNTRL0_2DEFER,
+ &port->regs->tx_control[0]);
+ __raw_writel(0, &port->regs->rx_control[1]);
+ __raw_writel(RX_CNTRL0_RX_EN | RX_CNTRL0_PADSTRIP_EN,
+ &port->regs->rx_control[0]);
+
+ if (mii_check_media(&port->mii, 1, 1))
+ eth_set_duplex(port);
+ eth_set_mcast_list(dev);
+ netif_start_queue(dev);
+ schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
+
+ qmgr_set_irq(port->plat->rxq, QUEUE_IRQ_SRC_NOT_EMPTY,
+ eth_rx_irq, dev);
+ qmgr_set_irq(TX_QUEUE(port->plat), QUEUE_IRQ_SRC_NOT_FULL,
+ eth_xmit_ready_irq, dev);
+ qmgr_enable_irq(port->plat->rxq);
+ qmgr_enable_irq(TX_QUEUE(port->plat));
+ ports_open++;
+ return 0;
+}
+
+static int eth_close(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+
+ ports_open--;
+ qmgr_disable_irq(port->plat->rxq);
+ qmgr_disable_irq(TX_QUEUE(port->plat));
+ netif_stop_queue(dev);
+
+ clr_regbits(RX_CNTRL0_RX_EN, &port->regs->rx_control[0]);
+ clr_regbits(TX_CNTRL0_TX_EN, &port->regs->tx_control[0]);
+ set_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
+ &port->regs->core_control);
+ udelay(10);
+ clr_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
+ &port->regs->core_control);
+
+ cancel_rearming_delayed_work(&port->mdio_thread);
+ destroy_queues(port);
+ release_queues(port);
+ return 0;
+}
+
+static int __devinit eth_init_one(struct platform_device *pdev)
+{
+ struct port *port;
+ struct net_device *dev;
+ struct mac_plat_info *plat = pdev->dev.platform_data;
+ u32 regs_phys;
+ int err;
+
+ if (!(dev = alloc_etherdev(sizeof(struct port))))
+ return -ENOMEM;
+
+ SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ port = netdev_priv(dev);
+ port->netdev = dev;
+ port->id = pdev->id;
+
+ switch (port->id) {
+ case IXP4XX_ETH_NPEA:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthA_BASE_VIRT;
+ regs_phys = IXP4XX_EthA_BASE_PHYS;
+ break;
+ case IXP4XX_ETH_NPEB:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
+ regs_phys = IXP4XX_EthB_BASE_PHYS;
+ break;
+ case IXP4XX_ETH_NPEC:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthC_BASE_VIRT;
+ regs_phys = IXP4XX_EthC_BASE_PHYS;
+ break;
+ default:
+ err = -ENOSYS;
+ goto err_free;
+ }
+
+ dev->open = eth_open;
+ dev->hard_start_xmit = eth_xmit;
+ dev->poll = eth_poll;
+ dev->stop = eth_close;
+ dev->get_stats = eth_stats;
+ dev->do_ioctl = eth_ioctl;
+ dev->set_multicast_list = eth_set_mcast_list;
+ dev->weight = 16;
+ dev->tx_queue_len = 100;
+
+ if (!(port->npe = npe_request(NPE_ID(port)))) {
+ err = -EIO;
+ goto err_free;
+ }
+
+ if (register_netdev(dev)) {
+ err = -EIO;
+ goto err_npe_rel;
+ }
+
+ port->mem_res = request_mem_region(regs_phys, REGS_SIZE, dev->name);
+ if (!port->mem_res) {
+ err = -EBUSY;
+ goto err_unreg;
+ }
+
+ port->plat = plat;
+ memcpy(dev->dev_addr, plat->hwaddr, ETH_ALEN);
+
+ platform_set_drvdata(pdev, dev);
+
+ __raw_writel(CORE_RESET, &port->regs->core_control);
+ udelay(50);
+ __raw_writel(CORE_MDC_EN, &port->regs->core_control);
+ udelay(50);
+
+ port->mii.dev = dev;
+ port->mii.mdio_read = mdio_read;
+ port->mii.mdio_write = mdio_write;
+ port->mii.phy_id = plat->phy;
+ port->mii.phy_id_mask = 0x1F;
+ port->mii.reg_num_mask = 0x1F;
+
+ INIT_DELAYED_WORK(&port->mdio_thread, mdio_thread);
+
+ printk(KERN_INFO "%s: MII PHY %i on %s\n", dev->name, plat->phy,
+ npe_name(port->npe));
+ return 0;
+
+err_unreg:
+ unregister_netdev(dev);
+err_npe_rel:
+ npe_release(port->npe);
+err_free:
+ free_netdev(dev);
+ return err;
+}
+
+static int __devexit eth_remove_one(struct platform_device *pdev)
+{
+ struct net_device *dev = platform_get_drvdata(pdev);
+ struct port *port = netdev_priv(dev);
+
+ unregister_netdev(dev);
+ platform_set_drvdata(pdev, NULL);
+ npe_release(port->npe);
+ release_resource(port->mem_res);
+ free_netdev(dev);
+ return 0;
+}
+
+static struct platform_driver drv = {
+ .driver.name = DRV_NAME,
+ .probe = eth_init_one,
+ .remove = eth_remove_one,
+};
+
+static int __init eth_init_module(void)
+{
+ if (!(ixp4xx_read_fuses() & IXP4XX_FUSE_NPEB_ETH0))
+ return -ENOSYS;
+
+ /* All MII PHY accesses use NPE-B Ethernet registers */
+ if (!(mdio_npe = npe_request(1)))
+ return -EIO;
+ spin_lock_init(&mdio_lock);
+ mdio_regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
+
+ return platform_driver_register(&drv);
+}
+
+static void __exit eth_cleanup_module(void)
+{
+ platform_driver_unregister(&drv);
+ npe_release(mdio_npe);
+}
+
+MODULE_AUTHOR("Krzysztof Halasa");
+MODULE_DESCRIPTION("Intel IXP4xx Ethernet driver");
+MODULE_LICENSE("GPL v2");
+module_init(eth_init_module);
+module_exit(eth_cleanup_module);
diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
index 5f79622..b891e10 100644
--- a/drivers/net/wan/Kconfig
+++ b/drivers/net/wan/Kconfig
@@ -342,6 +342,16 @@ config DSCC4_PCI_RST
Say Y if your card supports this feature.
+config IXP4XX_HSS
+ tristate "IXP4xx HSS (synchronous serial port) support"
+ depends on ARM && ARCH_IXP4XX
+ select IXP4XX_NPE
+ select IXP4XX_QMGR
+ select HDLC
+ help
+ Say Y here if you want to use the built-in HSS ports
+ on the IXP4xx processor.
+
config DLCI
tristate "Frame Relay DLCI support"
---help---
diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile
index d61fef3..1b1d116 100644
--- a/drivers/net/wan/Makefile
+++ b/drivers/net/wan/Makefile
@@ -42,6 +42,7 @@ obj-$(CONFIG_C101) += c101.o
obj-$(CONFIG_WANXL) += wanxl.o
obj-$(CONFIG_PCI200SYN) += pci200syn.o
obj-$(CONFIG_PC300TOO) += pc300too.o
+obj-$(CONFIG_IXP4XX_HSS) += ixp4xx_hss.o
clean-files := wanxlfw.inc
$(obj)/wanxl.o: $(obj)/wanxlfw.inc
diff --git a/drivers/net/wan/ixp4xx_hss.c b/drivers/net/wan/ixp4xx_hss.c
new file mode 100644
index 0000000..ed56ed8
--- /dev/null
+++ b/drivers/net/wan/ixp4xx_hss.c
@@ -0,0 +1,1048 @@
+/*
+ * Intel IXP4xx HSS (synchronous serial port) driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/hdlc.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include <asm/arch/npe.h>
+#include <asm/arch/qmgr.h>
+
+#ifndef __ARMEB__
+#warning Little endian mode not supported
+#endif
+
+#define DEBUG_QUEUES 0
+#define DEBUG_RX 0
+#define DEBUG_TX 0
+
+#define DRV_NAME "ixp4xx_hss"
+#define DRV_VERSION "0.03"
+
+#define PKT_EXTRA_FLAGS 0 /* orig 1 */
+#define FRAME_SYNC_OFFSET 0 /* unused, channelized only */
+#define FRAME_SYNC_SIZE 1024
+#define PKT_NUM_PIPES 1 /* 1, 2 or 4 */
+#define PKT_PIPE_FIFO_SIZEW 4 /* total 4 dwords per HSS */
+
+#define RX_DESCS 16 /* also length of queues: RX-ready, RX */
+#define TX_DESCS 16 /* also length of queues: TX-done, TX */
+
+#define POOL_ALLOC_SIZE (sizeof(struct desc) * (RX_DESCS + TX_DESCS))
+#define RX_SIZE (HDLC_MAX_MRU + 4) /* NPE needs more space */
+
+/* Queue IDs */
+#define HSS0_CHL_RXTRIG_QUEUE 12 /* orig size = 32 dwords */
+#define HSS0_PKT_RX_QUEUE 13 /* orig size = 32 dwords */
+#define HSS0_PKT_TX0_QUEUE 14 /* orig size = 16 dwords */
+#define HSS0_PKT_TX1_QUEUE 15
+#define HSS0_PKT_TX2_QUEUE 16
+#define HSS0_PKT_TX3_QUEUE 17
+#define HSS0_PKT_RXFREE0_QUEUE 18 /* orig size = 16 dwords */
+#define HSS0_PKT_RXFREE1_QUEUE 19
+#define HSS0_PKT_RXFREE2_QUEUE 20
+#define HSS0_PKT_RXFREE3_QUEUE 21
+#define HSS0_PKT_TXDONE_QUEUE 22 /* orig size = 64 dwords */
+
+#define HSS1_CHL_RXTRIG_QUEUE 10
+#define HSS1_PKT_RX_QUEUE 0
+#define HSS1_PKT_TX0_QUEUE 5
+#define HSS1_PKT_TX1_QUEUE 6
+#define HSS1_PKT_TX2_QUEUE 7
+#define HSS1_PKT_TX3_QUEUE 8
+#define HSS1_PKT_RXFREE0_QUEUE 1
+#define HSS1_PKT_RXFREE1_QUEUE 2
+#define HSS1_PKT_RXFREE2_QUEUE 3
+#define HSS1_PKT_RXFREE3_QUEUE 4
+#define HSS1_PKT_TXDONE_QUEUE 9
+
+#define NPE_PKT_MODE_HDLC 0
+#define NPE_PKT_MODE_RAW 1
+#define NPE_PKT_MODE_56KMODE 2
+#define NPE_PKT_MODE_56KENDIAN_MSB 4
+
+/* PKT_PIPE_HDLC_CFG_WRITE flags */
+#define PKT_HDLC_IDLE_ONES 0x1 /* default = flags */
+#define PKT_HDLC_CRC_32 0x2 /* default = CRC-16 */
+#define PKT_HDLC_MSB_ENDIAN 0x4 /* default = LE */
+
+
+/* hss_config, PCRs */
+/* Frame sync sampling, default = active low */
+#define PCR_FRM_SYNC_ACTIVE_HIGH 0x40000000
+#define PCR_FRM_SYNC_FALLINGEDGE 0x80000000
+#define PCR_FRM_SYNC_RISINGEDGE 0xC0000000
+
+/* Frame sync pin: input (default) or output generated off a given clk edge */
+#define PCR_FRM_SYNC_OUTPUT_FALLING 0x20000000
+#define PCR_FRM_SYNC_OUTPUT_RISING 0x30000000
+
+/* Frame and data clock sampling on edge, default = falling */
+#define PCR_FCLK_EDGE_RISING 0x08000000
+#define PCR_DCLK_EDGE_RISING 0x04000000
+
+/* Clock direction, default = input */
+#define PCR_SYNC_CLK_DIR_OUTPUT 0x02000000
+
+/* Generate/Receive frame pulses, default = enabled */
+#define PCR_FRM_PULSE_DISABLED 0x01000000
+
+ /* Data rate is full (default) or half the configured clk speed */
+#define PCR_HALF_CLK_RATE 0x00200000
+
+/* Invert data between NPE and HSS FIFOs? (default = no) */
+#define PCR_DATA_POLARITY_INVERT 0x00100000
+
+/* TX/RX endianness, default = LSB */
+#define PCR_MSB_ENDIAN 0x00080000
+
+/* Normal (default) / open drain mode (TX only) */
+#define PCR_TX_PINS_OPEN_DRAIN 0x00040000
+
+/* No framing bit transmitted and expected on RX? (default = framing bit) */
+#define PCR_SOF_NO_FBIT 0x00020000
+
+/* Drive data pins? */
+#define PCR_TX_DATA_ENABLE 0x00010000
+
+/* Voice 56k type: drive the data pins low (default), high, high Z */
+#define PCR_TX_V56K_HIGH 0x00002000
+#define PCR_TX_V56K_HIGH_IMP 0x00004000
+
+/* Unassigned type: drive the data pins low (default), high, high Z */
+#define PCR_TX_UNASS_HIGH 0x00000800
+#define PCR_TX_UNASS_HIGH_IMP 0x00001000
+
+/* T1 @ 1.544MHz only: Fbit dictated in FIFO (default) or high Z */
+#define PCR_TX_FB_HIGH_IMP 0x00000400
+
+/* 56k data endianness - which bit unused: high (default) or low */
+#define PCR_TX_56KE_BIT_0_UNUSED 0x00000200
+
+/* 56k data transmission type: 32/8 bit data (default) or 56K data */
+#define PCR_TX_56KS_56K_DATA 0x00000100
+
+/* hss_config, cCR */
+/* Number of packetized clients, default = 1 */
+#define CCR_NPE_HFIFO_2_HDLC 0x04000000
+#define CCR_NPE_HFIFO_3_OR_4HDLC 0x08000000
+
+/* default = no loopback */
+#define CCR_LOOPBACK 0x02000000
+
+/* HSS number, default = 0 (first) */
+#define CCR_SECOND_HSS 0x01000000
+
+
+/* hss_config, clkCR: main:10, num:10, denom:12 */
+#define CLK42X_SPEED_EXP	((0x3FF << 22) | ( 2 << 12) | 15) /* 65 kHz */
+
+#define CLK42X_SPEED_512KHZ (( 130 << 22) | ( 2 << 12) | 15)
+#define CLK42X_SPEED_1536KHZ (( 43 << 22) | ( 18 << 12) | 47)
+#define CLK42X_SPEED_1544KHZ (( 43 << 22) | ( 33 << 12) | 192)
+#define CLK42X_SPEED_2048KHZ (( 32 << 22) | ( 34 << 12) | 63)
+#define CLK42X_SPEED_4096KHZ (( 16 << 22) | ( 34 << 12) | 127)
+#define CLK42X_SPEED_8192KHZ (( 8 << 22) | ( 34 << 12) | 255)
+
+#define CLK46X_SPEED_512KHZ (( 130 << 22) | ( 24 << 12) | 127)
+#define CLK46X_SPEED_1536KHZ (( 43 << 22) | (152 << 12) | 383)
+#define CLK46X_SPEED_1544KHZ (( 43 << 22) | ( 66 << 12) | 385)
+#define CLK46X_SPEED_2048KHZ (( 32 << 22) | (280 << 12) | 511)
+#define CLK46X_SPEED_4096KHZ (( 16 << 22) | (280 << 12) | 1023)
+#define CLK46X_SPEED_8192KHZ (( 8 << 22) | (280 << 12) | 2047)
+
+
+/* hss_config, LUTs: default = unassigned */
+#define TDMMAP_HDLC 1 /* HDLC - packetised */
+#define TDMMAP_VOICE56K 2 /* Voice56K - channelised */
+#define TDMMAP_VOICE64K 3 /* Voice64K - channelised */
+
+
+/* NPE command codes */
+/* writes the ConfigWord value to the location specified by offset */
+#define PORT_CONFIG_WRITE 0x40
+
+/* triggers the NPE to load the contents of the configuration table */
+#define PORT_CONFIG_LOAD 0x41
+
+/* triggers the NPE to return an HssErrorReadResponse message */
+#define PORT_ERROR_READ 0x42
+
+/* reset NPE internal status and enable the HssChannelized operation */
+#define CHAN_FLOW_ENABLE 0x43
+#define CHAN_FLOW_DISABLE 0x44
+#define CHAN_IDLE_PATTERN_WRITE 0x45
+#define CHAN_NUM_CHANS_WRITE 0x46
+#define CHAN_RX_BUF_ADDR_WRITE 0x47
+#define CHAN_RX_BUF_CFG_WRITE 0x48
+#define CHAN_TX_BLK_CFG_WRITE 0x49
+#define CHAN_TX_BUF_ADDR_WRITE 0x4A
+#define CHAN_TX_BUF_SIZE_WRITE 0x4B
+#define CHAN_TSLOTSWITCH_ENABLE 0x4C
+#define CHAN_TSLOTSWITCH_DISABLE 0x4D
+
+/* downloads the gainWord value for a timeslot switching channel associated
+ with bypassNum */
+#define CHAN_TSLOTSWITCH_GCT_DOWNLOAD 0x4E
+
+/* triggers the NPE to reset internal status and enable the HssPacketized
+ operation for the flow specified by pPipe */
+#define PKT_PIPE_FLOW_ENABLE 0x50
+#define PKT_PIPE_FLOW_DISABLE 0x51
+#define PKT_NUM_PIPES_WRITE 0x52
+#define PKT_PIPE_FIFO_SIZEW_WRITE 0x53
+#define PKT_PIPE_HDLC_CFG_WRITE 0x54
+#define PKT_PIPE_IDLE_PATTERN_WRITE 0x55
+#define PKT_PIPE_RX_SIZE_WRITE 0x56
+#define PKT_PIPE_MODE_WRITE 0x57
+
+
+#define HSS_TIMESLOTS 128
+#define HSS_LUT_BITS 2
+
+/* HDLC packet status values - desc->status */
+#define ERR_SHUTDOWN		1 /* stop or shutdown occurrence */
+#define ERR_HDLC_ALIGN 2 /* HDLC alignment error */
+#define ERR_HDLC_FCS 3 /* HDLC Frame Check Sum error */
+#define ERR_RXFREE_Q_EMPTY 4 /* RX-free queue became empty while receiving
+ this packet (if buf_len < pkt_len) */
+#define ERR_HDLC_TOO_LONG 5 /* HDLC frame size too long */
+#define ERR_HDLC_ABORT 6 /* abort sequence received */
+#define ERR_DISCONNECTING 7 /* disconnect is in progress */
+
+
+struct port {
+ struct npe *npe;
+ struct net_device *netdev;
+ struct hss_plat_info *plat;
+ struct sk_buff *rx_skb_tab[RX_DESCS], *tx_skb_tab[TX_DESCS];
+ struct desc *desc_tab; /* coherent */
+ u32 desc_tab_phys;
+ sync_serial_settings settings;
+ int id;
+ u8 hdlc_cfg;
+};
+
+/* NPE message structure */
+struct msg {
+ u8 cmd, unused, hss_port, index;
+ union {
+ u8 data8[4];
+ u16 data16[2];
+ u32 data32;
+ };
+};
+
+
+/* HDLC packet descriptor */
+struct desc {
+ u32 next; /* pointer to next buffer, unused */
+ u16 buf_len; /* buffer length */
+ u16 pkt_len; /* packet length */
+ u32 data; /* pointer to data buffer in RAM */
+ u8 status;
+ u8 error_count;
+ u16 __reserved;
+ u32 __reserved1[4];
+};
+
+#define rx_desc_ptr(port, n) (&(port)->desc_tab[n])
+#define rx_desc_phys(port, n) ((port)->desc_tab_phys + \
+ (n) * sizeof(struct desc))
+#define tx_desc_ptr(port, n) (&(port)->desc_tab[(n) + RX_DESCS])
+#define tx_desc_phys(port, n) ((port)->desc_tab_phys + \
+ ((n) + RX_DESCS) * sizeof(struct desc))
+
+static int ports_open;
+static struct dma_pool *dma_pool;
+
+static struct {
+ int tx, txdone, rx, rxfree;
+} queue_ids[2] = {{ HSS0_PKT_TX0_QUEUE, HSS0_PKT_TXDONE_QUEUE,
+ HSS0_PKT_RX_QUEUE, HSS0_PKT_RXFREE0_QUEUE },
+ { HSS1_PKT_TX0_QUEUE, HSS1_PKT_TXDONE_QUEUE,
+ HSS1_PKT_RX_QUEUE, HSS1_PKT_RXFREE0_QUEUE },
+};
+
+
+static inline struct port* dev_to_port(struct net_device *dev)
+{
+ return dev_to_hdlc(dev)->priv;
+}
+
+
+static inline void debug_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc, int is_get)
+{
+#if DEBUG_QUEUES
+ const char *op = is_get ? "->" : "<-";
+
+ if (!desc_phys) {
+ printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
+ return;
+ }
+ printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %X %X\n",
+ queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
+ desc->data, desc->status, desc->error_count);
+#endif
+}
+
+static inline int queue_get_desc(unsigned int queue, struct port *port,
+ int is_tx)
+{
+ u32 phys, tab_phys, n_desc;
+ struct desc *tab;
+
+ if (!(phys = qmgr_get_entry(queue))) {
+ debug_desc(queue, phys, NULL, 1);
+ return -1;
+ }
+
+ BUG_ON(phys & 0x1F);
+ tab_phys = is_tx ? tx_desc_phys(port, 0) : rx_desc_phys(port, 0);
+ tab = is_tx ? tx_desc_ptr(port, 0) : rx_desc_ptr(port, 0);
+ n_desc = (phys - tab_phys) / sizeof(struct desc);
+ BUG_ON(n_desc >= (is_tx ? TX_DESCS : RX_DESCS));
+
+ debug_desc(queue, phys, &tab[n_desc], 1);
+ BUG_ON(tab[n_desc].next);
+ return n_desc;
+}
+
+static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc)
+{
+ debug_desc(queue, desc_phys, desc, 0);
+ BUG_ON(desc_phys & 0x1F);
+ qmgr_put_entry(queue, desc_phys);
+}
+
+
+static void hss_set_carrier(void *pdev, int carrier)
+{
+ struct net_device *dev = pdev;
+ if (carrier)
+ netif_carrier_on(dev);
+ else
+ netif_carrier_off(dev);
+}
+
+static void hss_rx_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+ struct port *port = dev_to_port(dev);
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_rx_irq() start\n");
+#endif
+ qmgr_disable_irq(queue_ids[port->id].rx);
+ netif_rx_schedule(dev);
+}
+
+static int hss_poll(struct net_device *dev, int *budget)
+{
+ struct port *port = dev_to_port(dev);
+ unsigned int queue = queue_ids[port->id].rx;
+ struct net_device_stats *stats = hdlc_stats(dev);
+ int quota = dev->quota, received = 0;
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() start\n");
+#endif
+ while (quota) {
+ struct sk_buff *old_skb, *new_skb = NULL;
+ struct desc *desc;
+ u32 data;
+ int n = queue_get_desc(queue, port, 0);
+ if (n < 0) { /* No packets received */
+ dev->quota -= received;
+ *budget -= received;
+ received = 0;
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() netif_rx_complete()\n");
+#endif
+ netif_rx_complete(dev);
+ qmgr_enable_irq(queue);
+ if (!qmgr_stat_empty(queue) &&
+ netif_rx_reschedule(dev, 0)) {
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll()"
+				       " netif_rx_reschedule() succeeded\n");
+#endif
+ qmgr_disable_irq(queue);
+ continue;
+ }
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() all done\n");
+#endif
+ return 0; /* all work done */
+ }
+
+ desc = rx_desc_ptr(port, n);
+
+ if (!desc->status) /* check for RX errors */
+ new_skb = netdev_alloc_skb(dev, RX_SIZE);
+ if (new_skb)
+ data = dma_map_single(&dev->dev, new_skb->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+
+ if (!new_skb || dma_mapping_error(data)) {
+ if (new_skb)
+ dev_kfree_skb(new_skb);
+ switch (desc->status) {
+ case 0:
+ stats->rx_dropped++;
+ break;
+ case ERR_HDLC_ALIGN:
+ case ERR_HDLC_ABORT:
+ stats->rx_frame_errors++;
+ stats->rx_errors++;
+ break;
+ case ERR_HDLC_FCS:
+ stats->rx_crc_errors++;
+ stats->rx_errors++;
+ break;
+ case ERR_HDLC_TOO_LONG:
+ stats->rx_length_errors++;
+ stats->rx_errors++;
+ break;
+ default: /* FIXME - remove printk */
+ printk(KERN_ERR "hss_poll(): status 0x%02X"
+ " errors %u\n", desc->status,
+ desc->error_count);
+ stats->rx_errors++;
+ }
+ /* put the desc back on RX-ready queue */
+ desc->buf_len = RX_SIZE;
+ desc->pkt_len = desc->status = 0;
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ continue;
+ }
+
+ if (desc->error_count) /* FIXME - remove printk */
+ printk(KERN_ERR "hss_poll(): status 0x%02X"
+ " errors %u\n", desc->status,
+ desc->error_count);
+
+ /* process received skb */
+ old_skb = port->rx_skb_tab[n];
+ dma_unmap_single(&dev->dev, desc->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+
+ skb_put(old_skb, desc->pkt_len);
+ old_skb->protocol = hdlc_type_trans(old_skb, dev);
+ dev->last_rx = jiffies;
+ stats->rx_packets++;
+ stats->rx_bytes += old_skb->len;
+ netif_receive_skb(old_skb);
+
+ /* put the new skb on RX-free queue */
+ port->rx_skb_tab[n] = new_skb;
+ desc->buf_len = RX_SIZE;
+ desc->pkt_len = 0;
+ desc->data = data;
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ quota--;
+ received++;
+ }
+ dev->quota -= received;
+ *budget -= received;
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() end, not all work done\n");
+#endif
+ return 1; /* not all work done */
+}
+
+static void hss_xmit_ready_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+
+#if DEBUG_TX
+	printk(KERN_DEBUG "hss_xmit_ready_irq() start\n");
+#endif
+ netif_start_queue(dev);
+
+#if DEBUG_TX
+	printk(KERN_DEBUG "hss_xmit_ready_irq() end\n");
+#endif
+}
+
+static int hss_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct net_device_stats *stats = hdlc_stats(dev);
+ struct desc *desc;
+ u32 phys;
+ struct sk_buff *old_skb;
+ int n;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit() start\n");
+#endif
+ if (unlikely(skb->len > HDLC_MAX_MRU)) {
+ dev_kfree_skb(skb);
+ stats->tx_errors++;
+ return NETDEV_TX_OK;
+ }
+
+ n = queue_get_desc(queue_ids[port->id].txdone, port, 1);
+ BUG_ON(n < 0);
+ desc = tx_desc_ptr(port, n);
+ phys = tx_desc_phys(port, n);
+
+ if ((old_skb = port->tx_skb_tab[n]) != NULL) {
+ dma_unmap_single(&dev->dev, desc->data,
+ desc->buf_len, DMA_TO_DEVICE);
+ stats->tx_packets++;
+ stats->tx_bytes += old_skb->len;
+ dev_kfree_skb(old_skb);
+ }
+
+ desc->buf_len = desc->pkt_len = skb->len;
+ desc->data = dma_map_single(&dev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ dev_kfree_skb(skb);
+ port->tx_skb_tab[n] = NULL;
+ stats->tx_dropped++;
+ /* put the desc back on TX-done queue */
+ queue_put_desc(queue_ids[port->id].txdone, phys, desc);
+		return NETDEV_TX_OK;
+ }
+
+ port->tx_skb_tab[n] = skb;
+ wmb();
+ queue_put_desc(queue_ids[port->id].tx, phys, desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].tx));
+ dev->trans_start = jiffies;
+
+ if (qmgr_stat_empty(queue_ids[port->id].txdone)) {
+ netif_stop_queue(dev);
+ /* we could miss TX ready interrupt */
+ if (!qmgr_stat_empty(queue_ids[port->id].txdone)) {
+ netif_start_queue(dev);
+ }
+ }
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit() end\n");
+#endif
+ return NETDEV_TX_OK;
+}
+
+
+static int request_queues(struct port *port)
+{
+ int err;
+
+ err = qmgr_request_queue(queue_ids[port->id].rxfree, RX_DESCS, 0, 0);
+ if (err)
+ return err;
+
+ err = qmgr_request_queue(queue_ids[port->id].rx, RX_DESCS, 0, 0);
+ if (err)
+ goto rel_rxfree;
+
+ err = qmgr_request_queue(queue_ids[port->id].tx, TX_DESCS, 0, 0);
+ if (err)
+ goto rel_rx;
+
+ err = qmgr_request_queue(queue_ids[port->id].txdone, TX_DESCS, 0, 0);
+ if (err)
+ goto rel_tx;
+ return 0;
+
+rel_tx:
+ qmgr_release_queue(queue_ids[port->id].tx);
+rel_rx:
+ qmgr_release_queue(queue_ids[port->id].rx);
+rel_rxfree:
+ qmgr_release_queue(queue_ids[port->id].rxfree);
+ return err;
+}
+
+static void release_queues(struct port *port)
+{
+ qmgr_release_queue(queue_ids[port->id].rxfree);
+ qmgr_release_queue(queue_ids[port->id].rx);
+ qmgr_release_queue(queue_ids[port->id].txdone);
+ qmgr_release_queue(queue_ids[port->id].tx);
+}
+
+static int init_queues(struct port *port)
+{
+ int i;
+
+ if (!dma_pool) {
+ dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
+ 32, 0);
+ if (!dma_pool)
+ return -ENOMEM;
+ }
+
+ if (!(port->desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &port->desc_tab_phys)))
+ return -ENOMEM;
+ memset(port->desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* tables */
+ memset(port->tx_skb_tab, 0, sizeof(port->tx_skb_tab));
+
+ /* Setup RX buffers */
+ for (i = 0; i < RX_DESCS; i++) {
+ struct desc *desc = rx_desc_ptr(port, i);
+ struct sk_buff *skb;
+
+ if (!(skb = netdev_alloc_skb(port->netdev, RX_SIZE)))
+ return -ENOMEM;
+ port->rx_skb_tab[i] = skb;
+ desc->buf_len = RX_SIZE;
+ desc->data = dma_map_single(&port->netdev->dev, skb->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ return -EIO;
+ }
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, i), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ }
+
+ /* Setup TX-done queue */
+ for (i = 0; i < TX_DESCS; i++) {
+ queue_put_desc(queue_ids[port->id].txdone,
+ tx_desc_phys(port, i), tx_desc_ptr(port, i));
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].txdone));
+ }
+ return 0;
+}
+
+static void destroy_queues(struct port *port)
+{
+ int i;
+
+ while (queue_get_desc(queue_ids[port->id].rxfree, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].rx, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].tx, port, 1) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].txdone, port, 1) >= 0)
+ /* nothing to do here */;
+
+ if (port->desc_tab) {
+ for (i = 0; i < RX_DESCS; i++) {
+ struct desc *desc = rx_desc_ptr(port, i);
+ struct sk_buff *skb = port->rx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data, RX_SIZE,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ for (i = 0; i < TX_DESCS; i++) {
+ struct desc *desc = tx_desc_ptr(port, i);
+ struct sk_buff *skb = port->tx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data,
+ desc->buf_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, port->desc_tab, port->desc_tab_phys);
+ port->desc_tab = NULL;
+ }
+
+ if (!ports_open && dma_pool) {
+ dma_pool_destroy(dma_pool);
+ dma_pool = NULL;
+ }
+}
+
+
+static int hss_open(struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+ int i, err;
+
+ if (!npe_running(npe))
+ if ((err = npe_load_firmware(npe, npe_name(npe),
+ &dev->dev)) != 0)
+ return err;
+
+ if ((err = hdlc_open(dev)) != 0)
+ return err;
+
+ if (port->plat->open)
+ if ((err = port->plat->open(port->id, port->netdev,
+ hss_set_carrier)) != 0)
+ goto err_hdlc_close;
+
+ /* HSS main configuration */
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PORT_CONFIG_WRITE;
+ msg.hss_port = port->id;
+ msg.index = 0; /* offset in HSS config */
+
+ msg.data32 = PCR_FRM_PULSE_DISABLED |
+ PCR_SOF_NO_FBIT |
+ PCR_MSB_ENDIAN |
+ PCR_TX_DATA_ENABLE;
+
+ if (port->settings.clock_type == CLOCK_INT)
+ msg.data32 |= PCR_SYNC_CLK_DIR_OUTPUT;
+
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_TX_PCR")) != 0)
+ goto err_plat_close; /* 0: TX PCR */
+
+ msg.index = 4;
+ msg.data32 ^= PCR_TX_DATA_ENABLE | PCR_DCLK_EDGE_RISING;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_RX_PCR")) != 0)
+ goto err_plat_close; /* 4: RX PCR */
+
+ msg.index = 8;
+ msg.data32 = (port->settings.loopback ? CCR_LOOPBACK : 0) |
+ (port->id ? CCR_SECOND_HSS : 0);
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_CORE_CR")) != 0)
+ goto err_plat_close; /* 8: Core CR */
+
+ msg.index = 12;
+ msg.data32 = CLK42X_SPEED_2048KHZ /* FIXME */;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_CLK_CR")) != 0)
+ goto err_plat_close; /* 12: CLK CR */
+
+ msg.data32 = (FRAME_SYNC_OFFSET << 16) | (FRAME_SYNC_SIZE - 1);
+ msg.index = 16;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_TX_FCR")) != 0)
+ goto err_plat_close; /* 16: TX FCR */
+
+ msg.index = 20;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_RX_FCR")) != 0)
+ goto err_plat_close; /* 20: RX FCR */
+
+ msg.data32 = 0; /* Fill LUT with HDLC timeslots */
+ for (i = 0; i < 32 / HSS_LUT_BITS; i++)
+ msg.data32 |= TDMMAP_HDLC << (HSS_LUT_BITS * i);
+
+ for (i = 0; i < 2 /* TX and RX */ * HSS_TIMESLOTS * HSS_LUT_BITS / 8;
+ i += 4) {
+ msg.index = 24 + i; /* 24 - 55: TX LUT, 56 - 87: RX LUT */
+		if ((err = npe_send_message(npe, &msg, "HSS_SET_LUT")) != 0)
+ goto err_plat_close;
+ }
+
+ /* HDLC mode configuration */
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_NUM_PIPES_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = PKT_NUM_PIPES;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_PIPES")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FIFO_SIZEW_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = PKT_PIPE_FIFO_SIZEW;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_FIFO")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_IDLE_PATTERN_WRITE;
+ msg.hss_port = port->id;
+ msg.data32 = 0x7F7F7F7F;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_IDLE")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PORT_CONFIG_LOAD;
+ msg.hss_port = port->id;
+	if ((err = npe_send_message(npe, &msg, "HSS_LOAD_CONFIG")) != 0)
+ goto err_plat_close;
+	if ((err = npe_recv_message(npe, &msg, "HSS_LOAD_CONFIG")) != 0)
+ goto err_plat_close;
+
+ /* HSS_LOAD_CONFIG for port #1 returns port_id = #4 */
+ if (msg.cmd != PORT_CONFIG_LOAD || msg.data32) {
+		printk(KERN_DEBUG "%s: unexpected message received in"
+		       " response to HSS_LOAD_CONFIG\n", npe_name(npe));
+		err = -EIO;
+ goto err_plat_close;
+ }
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_HDLC_CFG_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = port->hdlc_cfg; /* rx_cfg */
+ msg.data8[1] = port->hdlc_cfg | (PKT_EXTRA_FLAGS << 3); /* tx_cfg */
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_HDLC_CFG")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_MODE_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = NPE_PKT_MODE_HDLC;
+ /* msg.data8[1] = inv_mask */
+ /* msg.data8[2] = or_mask */
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_MODE")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_RX_SIZE_WRITE;
+ msg.hss_port = port->id;
+ msg.data16[0] = HDLC_MAX_MRU;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_RX_SIZE")) != 0)
+ goto err_plat_close;
+
+ if ((err = request_queues(port)) != 0)
+ goto err_plat_close;
+
+ if ((err = init_queues(port)) != 0)
+ goto err_destroy_queues;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FLOW_ENABLE;
+ msg.hss_port = port->id;
+	if ((err = npe_send_message(npe, &msg, "HSS_ENABLE_PKT_PIPE")) != 0)
+ goto err_destroy_queues;
+
+ netif_start_queue(dev);
+
+ qmgr_set_irq(queue_ids[port->id].rx, QUEUE_IRQ_SRC_NOT_EMPTY,
+ hss_rx_irq, dev);
+ qmgr_enable_irq(queue_ids[port->id].rx);
+
+ qmgr_set_irq(queue_ids[port->id].txdone, QUEUE_IRQ_SRC_NOT_EMPTY,
+ hss_xmit_ready_irq, dev);
+ qmgr_enable_irq(queue_ids[port->id].txdone);
+
+ ports_open++;
+ return 0;
+
+err_destroy_queues:
+ destroy_queues(port);
+ release_queues(port);
+err_plat_close:
+ if (port->plat->close)
+ port->plat->close(port->id, port->netdev);
+err_hdlc_close:
+ hdlc_close(dev);
+ return err;
+}
+
+static int hss_close(struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+
+ ports_open--;
+ qmgr_disable_irq(queue_ids[port->id].rx);
+ qmgr_disable_irq(queue_ids[port->id].txdone);
+ netif_stop_queue(dev);
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FLOW_DISABLE;
+ msg.hss_port = port->id;
+ if (npe_send_message(npe, &msg, "HSS_DISABLE_PKT_PIPE")) {
+ printk(KERN_CRIT "HSS-%i: unable to stop HDLC flow\n",
+ port->id);
+ /* The upper level would ignore the error anyway */
+ }
+
+ destroy_queues(port);
+ release_queues(port);
+
+ if (port->plat->close)
+ port->plat->close(port->id, port->netdev);
+ hdlc_close(dev);
+ return 0;
+}
+
+
+static int hss_attach(struct net_device *dev, unsigned short encoding,
+ unsigned short parity)
+{
+ struct port *port = dev_to_port(dev);
+
+ if (encoding != ENCODING_NRZ)
+ return -EINVAL;
+
+ switch(parity) {
+ case PARITY_CRC16_PR1_CCITT:
+ port->hdlc_cfg = 0;
+ return 0;
+
+ case PARITY_CRC32_PR1_CCITT:
+ port->hdlc_cfg = PKT_HDLC_CRC_32;
+ return 0;
+
+ default:
+ return -EINVAL;
+ }
+}
+
+
+static int hss_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ const size_t size = sizeof(sync_serial_settings);
+ sync_serial_settings new_line;
+ int clk;
+ sync_serial_settings __user *line = ifr->ifr_settings.ifs_ifsu.sync;
+ struct port *port = dev_to_port(dev);
+
+ if (cmd != SIOCWANDEV)
+ return hdlc_ioctl(dev, ifr, cmd);
+
+ switch(ifr->ifr_settings.type) {
+ case IF_GET_IFACE:
+ ifr->ifr_settings.type = IF_IFACE_V35;
+ if (ifr->ifr_settings.size < size) {
+ ifr->ifr_settings.size = size; /* data size wanted */
+ return -ENOBUFS;
+ }
+ if (copy_to_user(line, &port->settings, size))
+ return -EFAULT;
+ return 0;
+
+ case IF_IFACE_SYNC_SERIAL:
+ case IF_IFACE_V35:
+ if(!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (dev->flags & IFF_UP)
+ return -EBUSY; /* Cannot change parameters when open */
+
+ if (copy_from_user(&new_line, line, size))
+ return -EFAULT;
+
+ clk = new_line.clock_type;
+ if (port->plat->set_clock)
+ clk = port->plat->set_clock(port->id, clk);
+
+ if (clk != CLOCK_EXT && clk != CLOCK_INT)
+ return -EINVAL; /* No such clock setting */
+
+ if (new_line.loopback != 0 && new_line.loopback != 1)
+ return -EINVAL;
+
+ memcpy(&port->settings, &new_line, size); /* Update settings */
+ return 0;
+
+ default:
+ return hdlc_ioctl(dev, ifr, cmd);
+ }
+}
+
+
+static int __devinit hss_init_one(struct platform_device *pdev)
+{
+ struct port *port;
+ struct net_device *dev;
+ hdlc_device *hdlc;
+ int err;
+
+ if ((port = kzalloc(sizeof(*port), GFP_KERNEL)) == NULL)
+ return -ENOMEM;
+ platform_set_drvdata(pdev, port);
+ port->id = pdev->id;
+
+ if ((port->npe = npe_request(0)) == NULL) {
+ err = -ENOSYS;
+ goto err_free;
+ }
+
+ port->plat = pdev->dev.platform_data;
+ if ((port->netdev = dev = alloc_hdlcdev(port)) == NULL) {
+ err = -ENOMEM;
+ goto err_plat;
+ }
+
+	SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ hdlc = dev_to_hdlc(dev);
+ hdlc->attach = hss_attach;
+ hdlc->xmit = hss_xmit;
+ dev->open = hss_open;
+ dev->poll = hss_poll;
+ dev->stop = hss_close;
+ dev->do_ioctl = hss_ioctl;
+ dev->weight = 16;
+ dev->tx_queue_len = 100;
+ port->settings.clock_type = CLOCK_EXT;
+ port->settings.clock_rate = 2048000;
+
+ if (register_hdlc_device(dev)) {
+ printk(KERN_ERR "HSS-%i: unable to register HDLC device\n",
+ port->id);
+ err = -ENOBUFS;
+ goto err_free_netdev;
+ }
+ printk(KERN_INFO "%s: HSS-%i\n", dev->name, port->id);
+ return 0;
+
+err_free_netdev:
+ free_netdev(dev);
+err_plat:
+ npe_release(port->npe);
+ platform_set_drvdata(pdev, NULL);
+err_free:
+ kfree(port);
+ return err;
+}
+
+static int __devexit hss_remove_one(struct platform_device *pdev)
+{
+ struct port *port = platform_get_drvdata(pdev);
+
+ unregister_hdlc_device(port->netdev);
+ free_netdev(port->netdev);
+ npe_release(port->npe);
+ platform_set_drvdata(pdev, NULL);
+ kfree(port);
+ return 0;
+}
+
+static struct platform_driver drv = {
+ .driver.name = DRV_NAME,
+ .probe = hss_init_one,
+ .remove = hss_remove_one,
+};
+
+static int __init hss_init_module(void)
+{
+ if ((ixp4xx_read_fuses() & (IXP4XX_FUSE_HDLC | IXP4XX_FUSE_HSS)) !=
+ (IXP4XX_FUSE_HDLC | IXP4XX_FUSE_HSS))
+ return -ENOSYS;
+ return platform_driver_register(&drv);
+}
+
+static void __exit hss_cleanup_module(void)
+{
+ platform_driver_unregister(&drv);
+}
+
+MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>");
+MODULE_DESCRIPTION("Intel IXP4xx HSS driver");
+MODULE_LICENSE("GPL v2");
+
+module_init(hss_init_module);
+module_exit(hss_cleanup_module);
diff --git a/include/asm-arm/arch-ixp4xx/platform.h b/include/asm-arm/arch-ixp4xx/platform.h
index ab194e5..3a07ee2 100644
--- a/include/asm-arm/arch-ixp4xx/platform.h
+++ b/include/asm-arm/arch-ixp4xx/platform.h
@@ -86,6 +86,25 @@ struct ixp4xx_i2c_pins {
unsigned long scl_pin;
};
+#define IXP4XX_ETH_NPEA 0x00
+#define IXP4XX_ETH_NPEB 0x10
+#define IXP4XX_ETH_NPEC 0x20
+
+/* Information about built-in Ethernet MAC interfaces */
+struct mac_plat_info {
+ u8 phy; /* MII PHY ID, 0 - 31 */
+ u8 rxq; /* configurable, currently 0 - 31 only */
+ u8 hwaddr[6];
+};
+
+/* Information about built-in HSS (synchronous serial) interfaces */
+struct hss_plat_info {
+ int (*set_clock)(int port, unsigned int clock_type);
+ int (*open)(int port, void *pdev,
+ void (*set_carrier_cb)(void *pdev, int carrier));
+ void (*close)(int port, void *pdev);
+};
+
/*
* This structure provide a means for the board setup code
* to give information to th pata_ixp4xx driver. It is
* [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
@ 2007-05-08 1:19 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 1:19 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Adds a driver for built-in IXP4xx Ethernet MAC and HSS ports
Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
diff --git a/arch/arm/mach-ixp4xx/ixdp425-setup.c b/arch/arm/mach-ixp4xx/ixdp425-setup.c
index ec4f079..f20d39d 100644
--- a/arch/arm/mach-ixp4xx/ixdp425-setup.c
+++ b/arch/arm/mach-ixp4xx/ixdp425-setup.c
@@ -101,10 +101,35 @@ static struct platform_device ixdp425_uart = {
.resource = ixdp425_uart_resources
};
+/* Built-in 10/100 Ethernet MAC interfaces */
+static struct mac_plat_info ixdp425_plat_mac[] = {
+ {
+ .phy = 0,
+ .rxq = 3,
+ }, {
+ .phy = 1,
+ .rxq = 4,
+ }
+};
+
+static struct platform_device ixdp425_mac[] = {
+ {
+ .name = "ixp4xx_eth",
+ .id = IXP4XX_ETH_NPEB,
+ .dev.platform_data = ixdp425_plat_mac,
+ }, {
+ .name = "ixp4xx_eth",
+ .id = IXP4XX_ETH_NPEC,
+ .dev.platform_data = ixdp425_plat_mac + 1,
+ }
+};
+
static struct platform_device *ixdp425_devices[] __initdata = {
&ixdp425_i2c_controller,
&ixdp425_flash,
- &ixdp425_uart
+ &ixdp425_uart,
+ &ixdp425_mac[0],
+ &ixdp425_mac[1],
};
static void __init ixdp425_init(void)
diff --git a/drivers/net/arm/Kconfig b/drivers/net/arm/Kconfig
index 678e4f4..5e2acb6 100644
--- a/drivers/net/arm/Kconfig
+++ b/drivers/net/arm/Kconfig
@@ -46,3 +46,13 @@ config EP93XX_ETH
help
This is a driver for the ethernet hardware included in EP93xx CPUs.
Say Y if you are building a kernel for EP93xx based devices.
+
+config IXP4XX_ETH
+ tristate "IXP4xx Ethernet support"
+ depends on NET_ETHERNET && ARM && ARCH_IXP4XX
+ select IXP4XX_NPE
+ select IXP4XX_QMGR
+ select MII
+ help
+	  Say Y here if you want to use the built-in Ethernet ports
+	  on the IXP4xx processor.
diff --git a/drivers/net/arm/Makefile b/drivers/net/arm/Makefile
index a4c8682..7c812ac 100644
--- a/drivers/net/arm/Makefile
+++ b/drivers/net/arm/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_ARM_ETHER3) += ether3.o
obj-$(CONFIG_ARM_ETHER1) += ether1.o
obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
obj-$(CONFIG_EP93XX_ETH) += ep93xx_eth.o
+obj-$(CONFIG_IXP4XX_ETH) += ixp4xx_eth.o
diff --git a/drivers/net/arm/ixp4xx_eth.c b/drivers/net/arm/ixp4xx_eth.c
new file mode 100644
index 0000000..dcea6e5
--- /dev/null
+++ b/drivers/net/arm/ixp4xx_eth.c
@@ -0,0 +1,1002 @@
+/*
+ * Intel IXP4xx Ethernet driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * Ethernet port config (0x00 is not present on IXP42X):
+ *
+ * logical port 0x00 0x10 0x20
+ * NPE 0 (NPE-A) 1 (NPE-B) 2 (NPE-C)
+ * physical PortId 2 0 1
+ * TX queue 23 24 25
+ * RX-free queue 26 27 28
+ * TX-done queue is always 31, RX queue is configurable
+ */
+
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/mii.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include <asm/arch/npe.h>
+#include <asm/arch/qmgr.h>
+
+#ifndef __ARMEB__
+#warning Little endian mode not supported
+#endif
+
+#define DEBUG_QUEUES 0
+#define DEBUG_RX 0
+#define DEBUG_TX 0
+#define DEBUG_PKT_BYTES 0
+#define DEBUG_MDIO 0
+
+#define DRV_NAME "ixp4xx_eth"
+#define DRV_VERSION "0.04"
+
+#define TX_QUEUE_LEN 16 /* dwords */
+#define PKT_DESCS 64 /* also length of queues: TX-done, RX-ready, RX */
+
+#define POOL_ALLOC_SIZE (sizeof(struct desc) * (PKT_DESCS))
+#define REGS_SIZE 0x1000
+#define MAX_MRU 1536
+
+#define MDIO_INTERVAL (3 * HZ)
+#define MAX_MDIO_RETRIES 100 /* microseconds, typically 30 cycles */
+
+#define NPE_ID(port) ((port)->id >> 4)
+#define PHYSICAL_ID(port) ((NPE_ID(port) + 2) % 3)
+#define TX_QUEUE(plat) (NPE_ID(port) + 23)
+#define RXFREE_QUEUE(plat) (NPE_ID(port) + 26)
+#define TXDONE_QUEUE 31
+
+/* TX Control Registers */
+#define TX_CNTRL0_TX_EN BIT(0)
+#define TX_CNTRL0_HALFDUPLEX BIT(1)
+#define TX_CNTRL0_RETRY BIT(2)
+#define TX_CNTRL0_PAD_EN BIT(3)
+#define TX_CNTRL0_APPEND_FCS BIT(4)
+#define TX_CNTRL0_2DEFER BIT(5)
+#define TX_CNTRL0_RMII BIT(6) /* reduced MII */
+#define TX_CNTRL1_RETRIES 0x0F /* 4 bits */
+
+/* RX Control Registers */
+#define RX_CNTRL0_RX_EN BIT(0)
+#define RX_CNTRL0_PADSTRIP_EN BIT(1)
+#define RX_CNTRL0_SEND_FCS BIT(2)
+#define RX_CNTRL0_PAUSE_EN BIT(3)
+#define RX_CNTRL0_LOOP_EN BIT(4)
+#define RX_CNTRL0_ADDR_FLTR_EN BIT(5)
+#define RX_CNTRL0_RX_RUNT_EN BIT(6)
+#define RX_CNTRL0_BCAST_DIS BIT(7)
+#define RX_CNTRL1_DEFER_EN BIT(0)
+
+/* Core Control Register */
+#define CORE_RESET BIT(0)
+#define CORE_RX_FIFO_FLUSH BIT(1)
+#define CORE_TX_FIFO_FLUSH BIT(2)
+#define CORE_SEND_JAM BIT(3)
+#define CORE_MDC_EN BIT(4) /* NPE-B ETH-0 only */
+
+/* Definitions for MII access routines */
+#define MII_CMD_GO BIT(31)
+#define MII_CMD_WRITE BIT(26)
+#define MII_STAT_READ_FAILED BIT(31)
+
+/* NPE message codes */
+#define NPE_GETSTATUS 0x00
+#define NPE_EDB_SETPORTADDRESS 0x01
+#define NPE_EDB_GETMACADDRESSDATABASE 0x02
+#define NPE_EDB_SETMACADDRESSSDATABASE 0x03
+#define NPE_GETSTATS 0x04
+#define NPE_RESETSTATS 0x05
+#define NPE_SETMAXFRAMELENGTHS 0x06
+#define NPE_VLAN_SETRXTAGMODE 0x07
+#define NPE_VLAN_SETDEFAULTRXVID 0x08
+#define NPE_VLAN_SETPORTVLANTABLEENTRY 0x09
+#define NPE_VLAN_SETPORTVLANTABLERANGE 0x0A
+#define NPE_VLAN_SETRXQOSENTRY 0x0B
+#define NPE_VLAN_SETPORTIDEXTRACTIONMODE 0x0C
+#define NPE_STP_SETBLOCKINGSTATE 0x0D
+#define NPE_FW_SETFIREWALLMODE 0x0E
+#define NPE_PC_SETFRAMECONTROLDURATIONID 0x0F
+#define NPE_PC_SETAPMACTABLE 0x11
+#define NPE_SETLOOPBACK_MODE 0x12
+#define NPE_PC_SETBSSIDTABLE 0x13
+#define NPE_ADDRESS_FILTER_CONFIG 0x14
+#define NPE_APPENDFCSCONFIG 0x15
+#define NPE_NOTIFY_MAC_RECOVERY_DONE 0x16
+#define NPE_MAC_RECOVERY_START 0x17
+
+
+struct eth_regs {
+ u32 tx_control[2], __res1[2]; /* 000 */
+ u32 rx_control[2], __res2[2]; /* 010 */
+ u32 random_seed, __res3[3]; /* 020 */
+ u32 partial_empty_threshold, __res4; /* 030 */
+ u32 partial_full_threshold, __res5; /* 038 */
+ u32 tx_start_bytes, __res6[3]; /* 040 */
+	u32 tx_deferral, rx_deferral, __res7[2];/* 050 */
+ u32 tx_2part_deferral[2], __res8[2]; /* 060 */
+ u32 slot_time, __res9[3]; /* 070 */
+ u32 mdio_command[4]; /* 080 */
+ u32 mdio_status[4]; /* 090 */
+ u32 mcast_mask[6], __res10[2]; /* 0A0 */
+ u32 mcast_addr[6], __res11[2]; /* 0C0 */
+ u32 int_clock_threshold, __res12[3]; /* 0E0 */
+ u32 hw_addr[6], __res13[61]; /* 0F0 */
+ u32 core_control; /* 1FC */
+};
+
+struct port {
+ struct resource *mem_res;
+ struct eth_regs __iomem *regs;
+ struct npe *npe;
+ struct net_device *netdev;
+ struct net_device_stats stat;
+ struct mii_if_info mii;
+ struct delayed_work mdio_thread;
+ struct mac_plat_info *plat;
+ struct sk_buff *rx_skb_tab[PKT_DESCS];
+ struct desc *rx_desc_tab; /* coherent */
+ int id; /* logical port ID */
+ u32 rx_desc_tab_phys;
+ u32 msg_enable;
+};
+
+/* NPE message structure */
+struct msg {
+ union {
+ struct {
+ u8 cmd, eth_id, mac[ETH_ALEN];
+ };
+ struct {
+ u8 cmd, eth_id, __byte2, byte3;
+ u8 __byte4, byte5, __byte6, byte7;
+ };
+ struct {
+ u8 cmd, eth_id, __b2, byte3;
+ u32 data32;
+ };
+ };
+};
+
+/* Ethernet packet descriptor */
+struct desc {
+ u32 next; /* pointer to next buffer, unused */
+ u16 buf_len; /* buffer length */
+ u16 pkt_len; /* packet length */
+ u32 data; /* pointer to data buffer in RAM */
+ u8 dest_id;
+ u8 src_id;
+ u16 flags;
+ u8 qos;
+ u8 padlen;
+ u16 vlan_tci;
+ u8 dest_mac[ETH_ALEN];
+ u8 src_mac[ETH_ALEN];
+};
+
+
+#define rx_desc_phys(port, n) ((port)->rx_desc_tab_phys + \
+ (n) * sizeof(struct desc))
+#define tx_desc_phys(n) (tx_desc_tab_phys + (n) * sizeof(struct desc))
+
+static spinlock_t mdio_lock;
+static struct eth_regs __iomem *mdio_regs; /* mdio command and status only */
+static struct npe *mdio_npe;
+static int ports_open;
+static struct dma_pool *dma_pool;
+static struct sk_buff *tx_skb_tab[PKT_DESCS];
+static struct desc *tx_desc_tab; /* coherent */
+static u32 tx_desc_tab_phys;
+
+
+static inline void set_regbits(u32 bits, u32 __iomem *reg)
+{
+ __raw_writel(__raw_readl(reg) | bits, reg);
+}
+static inline void clr_regbits(u32 bits, u32 __iomem *reg)
+{
+ __raw_writel(__raw_readl(reg) & ~bits, reg);
+}
+
+
+static u16 mdio_cmd(struct net_device *dev, int phy_id, int location,
+ int write, u16 cmd)
+{
+ int cycles = 0;
+
+ if (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80) {
+		printk(KERN_ERR "%s: MII not ready to transmit\n", dev->name);
+ return 0; /* not ready to transmit */
+ }
+
+ if (write) {
+ __raw_writel(cmd & 0xFF, &mdio_regs->mdio_command[0]);
+ __raw_writel(cmd >> 8, &mdio_regs->mdio_command[1]);
+ }
+ __raw_writel(((phy_id << 5) | location) & 0xFF,
+ &mdio_regs->mdio_command[2]);
+ __raw_writel((phy_id >> 3) | (write << 2) | 0x80 /* GO */,
+ &mdio_regs->mdio_command[3]);
+
+ while ((cycles < MAX_MDIO_RETRIES) &&
+ (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80)) {
+ udelay(1);
+ cycles++;
+ }
+
+ if (cycles == MAX_MDIO_RETRIES) {
+		printk(KERN_ERR "%s: MII command failed\n", dev->name);
+ return 0;
+ }
+
+#if DEBUG_MDIO
+ printk(KERN_DEBUG "mdio_cmd() took %i cycles\n", cycles);
+#endif
+
+ if (write)
+ return 0;
+
+ if (__raw_readl(&mdio_regs->mdio_status[3]) & 0x80) {
+		printk(KERN_ERR "%s: MII read failed\n", dev->name);
+ return 0;
+ }
+
+ return (__raw_readl(&mdio_regs->mdio_status[0]) & 0xFF) |
+ (__raw_readl(&mdio_regs->mdio_status[1]) << 8);
+}
+
+static int mdio_read(struct net_device *dev, int phy_id, int location)
+{
+ unsigned long flags;
+ u16 val;
+
+ spin_lock_irqsave(&mdio_lock, flags);
+ val = mdio_cmd(dev, phy_id, location, 0, 0);
+ spin_unlock_irqrestore(&mdio_lock, flags);
+ return val;
+}
+
+static void mdio_write(struct net_device *dev, int phy_id, int location,
+ int val)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdio_lock, flags);
+ mdio_cmd(dev, phy_id, location, 1, val);
+ spin_unlock_irqrestore(&mdio_lock, flags);
+}
+
+static void eth_set_duplex(struct port *port)
+{
+ if (port->mii.full_duplex)
+ clr_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
+ else
+ set_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
+}
+
+
+static void mdio_thread(struct work_struct *work)
+{
+ struct port *port = container_of(work, struct port, mdio_thread.work);
+
+ if (mii_check_media(&port->mii, 1, 0))
+ eth_set_duplex(port);
+ schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
+}
+
+
+static inline void debug_skb(const char *func, struct sk_buff *skb)
+{
+#if DEBUG_PKT_BYTES
+ int i;
+
+ printk(KERN_DEBUG "%s(%i): ", func, skb->len);
+ for (i = 0; i < skb->len; i++) {
+ if (i >= DEBUG_PKT_BYTES)
+ break;
+ printk("%s%02X",
+ ((i == 6) || (i == 12) || (i >= 14)) ? " " : "",
+ skb->data[i]);
+ }
+ printk("\n");
+#endif
+}
+
+
+static inline void debug_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc, int is_get)
+{
+#if DEBUG_QUEUES
+ const char *op = is_get ? "->" : "<-";
+
+ if (!desc_phys) {
+ printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
+ return;
+ }
+ printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %2X < %2X %4X %X"
+ " %X %X %02X%02X%02X%02X%02X%02X < %02X%02X%02X%02X%02X%02X\n",
+ queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
+ desc->data, desc->dest_id, desc->src_id,
+ desc->flags, desc->qos,
+ desc->padlen, desc->vlan_tci,
+ desc->dest_mac[0], desc->dest_mac[1],
+ desc->dest_mac[2], desc->dest_mac[3],
+ desc->dest_mac[4], desc->dest_mac[5],
+ desc->src_mac[0], desc->src_mac[1],
+ desc->src_mac[2], desc->src_mac[3],
+ desc->src_mac[4], desc->src_mac[5]);
+#endif
+}
+
+static inline int queue_get_desc(unsigned int queue, struct port *port,
+ int is_tx)
+{
+ u32 phys, tab_phys, n_desc;
+ struct desc *tab;
+
+ if (!(phys = qmgr_get_entry(queue))) {
+ debug_desc(queue, phys, NULL, 1);
+ return -1;
+ }
+
+ phys &= ~0x1F; /* mask out non-address bits */
+ tab_phys = is_tx ? tx_desc_phys(0) : rx_desc_phys(port, 0);
+ tab = is_tx ? tx_desc_tab : port->rx_desc_tab;
+ n_desc = (phys - tab_phys) / sizeof(struct desc);
+ BUG_ON(n_desc >= PKT_DESCS);
+
+ debug_desc(queue, phys, &tab[n_desc], 1);
+ BUG_ON(tab[n_desc].next);
+ return n_desc;
+}
+
+static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc)
+{
+ debug_desc(queue, desc_phys, desc, 0);
+ BUG_ON(desc_phys & 0x1F);
+ qmgr_put_entry(queue, desc_phys);
+}
+
+
+static void eth_rx_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+ struct port *port = netdev_priv(dev);
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "eth_rx_irq() start\n");
+#endif
+ qmgr_disable_irq(port->plat->rxq);
+ netif_rx_schedule(dev);
+}
+
+static int eth_poll(struct net_device *dev, int *budget)
+{
+ struct port *port = netdev_priv(dev);
+ unsigned int queue = port->plat->rxq;
+ int quota = dev->quota, received = 0;
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "eth_poll() start\n");
+#endif
+ while (quota) {
+ struct sk_buff *old_skb, *new_skb;
+ struct desc *desc;
+ u32 data;
+ int n = queue_get_desc(queue, port, 0);
+ if (n < 0) { /* No packets received */
+ dev->quota -= received;
+ *budget -= received;
+ received = 0;
+ netif_rx_complete(dev);
+ qmgr_enable_irq(queue);
+ if (!qmgr_stat_empty(queue) &&
+ netif_rx_reschedule(dev, 0)) {
+ qmgr_disable_irq(queue);
+ continue;
+ }
+ return 0; /* all work done */
+ }
+
+ desc = &port->rx_desc_tab[n];
+
+ if ((new_skb = netdev_alloc_skb(dev, MAX_MRU)) != NULL) {
+#if 0
+ skb_reserve(new_skb, 2); /* FIXME */
+#endif
+ data = dma_map_single(&dev->dev, new_skb->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ }
+
+ if (!new_skb || dma_mapping_error(data)) {
+ if (new_skb)
+ dev_kfree_skb(new_skb);
+ port->stat.rx_dropped++;
+ /* put the desc back on RX-ready queue */
+ desc->buf_len = MAX_MRU;
+ desc->pkt_len = 0;
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ continue;
+ }
+
+ /* process received skb */
+ old_skb = port->rx_skb_tab[n];
+ dma_unmap_single(&dev->dev, desc->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ skb_put(old_skb, desc->pkt_len);
+
+ debug_skb("eth_poll", old_skb);
+
+ old_skb->protocol = eth_type_trans(old_skb, dev);
+ dev->last_rx = jiffies;
+ port->stat.rx_packets++;
+ port->stat.rx_bytes += old_skb->len;
+ netif_receive_skb(old_skb);
+
+ /* put the new skb on RX-free queue */
+ port->rx_skb_tab[n] = new_skb;
+ desc->buf_len = MAX_MRU;
+ desc->pkt_len = 0;
+ desc->data = data;
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ quota--;
+ received++;
+ }
+ dev->quota -= received;
+ *budget -= received;
+ return 1; /* not all work done */
+}
+
+static void eth_xmit_ready_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+
+#if DEBUG_TX
+	printk(KERN_DEBUG "eth_xmit_ready_irq() start\n");
+#endif
+ netif_start_queue(dev);
+}
+
+static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct desc *desc;
+ u32 phys;
+ struct sk_buff *old_skb;
+ int n;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit() start\n");
+#endif
+ if (unlikely(skb->len > MAX_MRU)) {
+ dev_kfree_skb(skb);
+ port->stat.tx_errors++;
+ return NETDEV_TX_OK;
+ }
+
+ n = queue_get_desc(TXDONE_QUEUE, port, 1);
+ BUG_ON(n < 0);
+ desc = &tx_desc_tab[n];
+ phys = tx_desc_phys(n);
+
+ if ((old_skb = tx_skb_tab[n]) != NULL) {
+ dma_unmap_single(&dev->dev, desc->data,
+ desc->buf_len, DMA_TO_DEVICE);
+ port->stat.tx_packets++;
+ port->stat.tx_bytes += old_skb->len;
+ dev_kfree_skb(old_skb);
+ }
+
+ /* disable VLAN functions in NPE image for now */
+ memset(desc, 0, sizeof(*desc));
+ desc->buf_len = desc->pkt_len = skb->len;
+ desc->data = dma_map_single(&dev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ dev_kfree_skb(skb);
+ tx_skb_tab[n] = NULL;
+ port->stat.tx_dropped++;
+ /* put the desc back on TX-done queue */
+ queue_put_desc(TXDONE_QUEUE, phys, desc);
+		return NETDEV_TX_OK;
+ }
+
+ tx_skb_tab[n] = skb;
+ debug_skb("eth_xmit", skb);
+
+ /* NPE firmware pads short frames with zeros internally */
+ wmb();
+ queue_put_desc(TX_QUEUE(port->plat), phys, desc);
+ BUG_ON(qmgr_stat_overflow(TX_QUEUE(port->plat)));
+ dev->trans_start = jiffies;
+
+ if (qmgr_stat_full(TX_QUEUE(port->plat))) {
+ netif_stop_queue(dev);
+ /* we could miss TX ready interrupt */
+ if (!qmgr_stat_full(TX_QUEUE(port->plat))) {
+ netif_start_queue(dev);
+ }
+ }
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "eth_xmit() end\n");
+#endif
+ return NETDEV_TX_OK;
+}
+
+
+static struct net_device_stats *eth_stats(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ return &port->stat;
+}
+
+static void eth_set_mcast_list(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct dev_mc_list *mclist = dev->mc_list;
+ u8 diffs[ETH_ALEN], *addr;
+ int cnt = dev->mc_count, i;
+
+ if ((dev->flags & IFF_PROMISC) || !mclist || !cnt) {
+ clr_regbits(RX_CNTRL0_ADDR_FLTR_EN,
+ &port->regs->rx_control[0]);
+ return;
+ }
+
+ memset(diffs, 0, ETH_ALEN);
+ addr = mclist->dmi_addr; /* first MAC address */
+
+ while (--cnt && (mclist = mclist->next))
+ for (i = 0; i < ETH_ALEN; i++)
+ diffs[i] |= addr[i] ^ mclist->dmi_addr[i];
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ __raw_writel(addr[i], &port->regs->mcast_addr[i]);
+ __raw_writel(~diffs[i], &port->regs->mcast_mask[i]);
+ }
+
+ set_regbits(RX_CNTRL0_ADDR_FLTR_EN, &port->regs->rx_control[0]);
+}
+
+
+static int eth_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
+{
+ struct port *port = netdev_priv(dev);
+ unsigned int duplex_chg;
+ int err;
+
+ if (!netif_running(dev))
+ return -EINVAL;
+ err = generic_mii_ioctl(&port->mii, if_mii(req), cmd, &duplex_chg);
+ if (duplex_chg)
+ eth_set_duplex(port);
+ return err;
+}
+
+
+static int request_queues(struct port *port)
+{
+ int err;
+
+ err = qmgr_request_queue(RXFREE_QUEUE(port->plat), PKT_DESCS, 0, 0);
+ if (err)
+ return err;
+
+ err = qmgr_request_queue(port->plat->rxq, PKT_DESCS, 0, 0);
+ if (err)
+ goto rel_rxfree;
+
+ err = qmgr_request_queue(TX_QUEUE(port->plat), TX_QUEUE_LEN, 0, 0);
+ if (err)
+ goto rel_rx;
+
+ /* TX-done queue handles skbs sent out by the NPEs */
+ if (!ports_open) {
+ err = qmgr_request_queue(TXDONE_QUEUE, PKT_DESCS, 0, 0);
+ if (err)
+ goto rel_tx;
+ }
+ return 0;
+
+rel_tx:
+ qmgr_release_queue(TX_QUEUE(port->plat));
+rel_rx:
+ qmgr_release_queue(port->plat->rxq);
+rel_rxfree:
+ qmgr_release_queue(RXFREE_QUEUE(port->plat));
+ return err;
+}
+
+static void release_queues(struct port *port)
+{
+ qmgr_release_queue(RXFREE_QUEUE(port->plat));
+ qmgr_release_queue(port->plat->rxq);
+ qmgr_release_queue(TX_QUEUE(port->plat));
+
+ if (!ports_open)
+ qmgr_release_queue(TXDONE_QUEUE);
+}
+
+static int init_queues(struct port *port)
+{
+ int i;
+
+ if (!dma_pool) {
+ /* Setup TX descriptors - common to all ports */
+ dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
+ 32, 0);
+ if (!dma_pool)
+ return -ENOMEM;
+
+ if (!(tx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &tx_desc_tab_phys)))
+ return -ENOMEM;
+ memset(tx_desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(tx_skb_tab, 0, sizeof(tx_skb_tab)); /* static table */
+
+ for (i = 0; i < PKT_DESCS; i++) {
+ queue_put_desc(TXDONE_QUEUE, tx_desc_phys(i),
+ &tx_desc_tab[i]);
+ BUG_ON(qmgr_stat_overflow(TXDONE_QUEUE));
+ }
+ }
+
+ /* Setup RX buffers */
+ if (!(port->rx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &port->rx_desc_tab_phys)))
+ return -ENOMEM;
+ memset(port->rx_desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* table */
+
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &port->rx_desc_tab[i];
+ struct sk_buff *skb;
+
+ if (!(skb = netdev_alloc_skb(port->netdev, MAX_MRU)))
+ return -ENOMEM;
+ port->rx_skb_tab[i] = skb;
+ desc->buf_len = MAX_MRU;
+#if 0
+ skb_reserve(skb, 2); /* FIXME */
+#endif
+ desc->data = dma_map_single(&port->netdev->dev, skb->data,
+ MAX_MRU, DMA_FROM_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ return -EIO;
+ }
+ queue_put_desc(RXFREE_QUEUE(port->plat),
+ rx_desc_phys(port, i), desc);
+ BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
+ }
+ return 0;
+}
+
+static void destroy_queues(struct port *port)
+{
+ int i;
+
+ while (queue_get_desc(RXFREE_QUEUE(port->plat), port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(port->plat->rxq, port, 0) >= 0)
+ /* nothing to do here */;
+	while (queue_get_desc(TX_QUEUE(port->plat), port, 1) >= 0)
+		/* nothing to do here */;
+ if (!ports_open)
+ while (queue_get_desc(TXDONE_QUEUE, port, 1) >= 0)
+ /* nothing to do here */;
+
+ if (port->rx_desc_tab) {
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &port->rx_desc_tab[i];
+ struct sk_buff *skb = port->rx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data, MAX_MRU,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, port->rx_desc_tab,
+ port->rx_desc_tab_phys);
+ port->rx_desc_tab = NULL;
+ }
+
+ if (!ports_open && tx_desc_tab) {
+ for (i = 0; i < PKT_DESCS; i++) {
+ struct desc *desc = &tx_desc_tab[i];
+ struct sk_buff *skb = tx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data,
+ desc->buf_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, tx_desc_tab, tx_desc_tab_phys);
+ tx_desc_tab = NULL;
+ }
+ if (!ports_open && dma_pool) {
+ dma_pool_destroy(dma_pool);
+ dma_pool = NULL;
+ }
+}
+
+static int eth_load_firmware(struct net_device *dev, struct npe *npe)
+{
+ struct msg msg;
+ int err;
+
+ if ((err = npe_load_firmware(npe, npe_name(npe), &dev->dev)) != 0)
+ return err;
+
+ if ((err = npe_recv_message(npe, &msg, "ETH_GET_STATUS")) != 0) {
+ printk(KERN_ERR "%s: %s not responding\n", dev->name,
+ npe_name(npe));
+ return err;
+ }
+ return 0;
+}
+
+static int eth_open(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+ int i, err;
+
+ if (!npe_running(npe))
+ if (eth_load_firmware(dev, npe))
+ return -EIO;
+
+ if (!npe_running(mdio_npe))
+ if (eth_load_firmware(dev, mdio_npe))
+ return -EIO;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = NPE_VLAN_SETRXQOSENTRY;
+ msg.eth_id = port->id;
+ msg.byte5 = port->plat->rxq | 0x80;
+ msg.byte7 = port->plat->rxq << 4;
+ for (i = 0; i < 8; i++) {
+ msg.byte3 = i;
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_RXQ"))
+ return -EIO;
+ }
+
+ msg.cmd = NPE_EDB_SETPORTADDRESS;
+ msg.eth_id = PHYSICAL_ID(port);
+ memcpy(msg.mac, dev->dev_addr, ETH_ALEN);
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_MAC"))
+ return -EIO;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = NPE_FW_SETFIREWALLMODE;
+ msg.eth_id = port->id;
+ if (npe_send_recv_message(port->npe, &msg, "ETH_SET_FIREWALL_MODE"))
+ return -EIO;
+
+ if ((err = request_queues(port)) != 0)
+ return err;
+
+ if ((err = init_queues(port)) != 0) {
+ destroy_queues(port);
+ release_queues(port);
+ return err;
+ }
+
+ for (i = 0; i < ETH_ALEN; i++)
+ __raw_writel(dev->dev_addr[i], &port->regs->hw_addr[i]);
+ __raw_writel(0x08, &port->regs->random_seed);
+ __raw_writel(0x12, &port->regs->partial_empty_threshold);
+ __raw_writel(0x30, &port->regs->partial_full_threshold);
+ __raw_writel(0x08, &port->regs->tx_start_bytes);
+ __raw_writel(0x15, &port->regs->tx_deferral);
+ __raw_writel(0x08, &port->regs->tx_2part_deferral[0]);
+ __raw_writel(0x07, &port->regs->tx_2part_deferral[1]);
+ __raw_writel(0x80, &port->regs->slot_time);
+ __raw_writel(0x01, &port->regs->int_clock_threshold);
+ __raw_writel(TX_CNTRL1_RETRIES, &port->regs->tx_control[1]);
+ __raw_writel(TX_CNTRL0_TX_EN | TX_CNTRL0_RETRY | TX_CNTRL0_PAD_EN |
+ TX_CNTRL0_APPEND_FCS | TX_CNTRL0_2DEFER,
+ &port->regs->tx_control[0]);
+ __raw_writel(0, &port->regs->rx_control[1]);
+ __raw_writel(RX_CNTRL0_RX_EN | RX_CNTRL0_PADSTRIP_EN,
+ &port->regs->rx_control[0]);
+
+ if (mii_check_media(&port->mii, 1, 1))
+ eth_set_duplex(port);
+ eth_set_mcast_list(dev);
+ netif_start_queue(dev);
+ schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
+
+ qmgr_set_irq(port->plat->rxq, QUEUE_IRQ_SRC_NOT_EMPTY,
+ eth_rx_irq, dev);
+ qmgr_set_irq(TX_QUEUE(port->plat), QUEUE_IRQ_SRC_NOT_FULL,
+ eth_xmit_ready_irq, dev);
+ qmgr_enable_irq(port->plat->rxq);
+ qmgr_enable_irq(TX_QUEUE(port->plat));
+ ports_open++;
+ return 0;
+}
+
+static int eth_close(struct net_device *dev)
+{
+ struct port *port = netdev_priv(dev);
+
+ ports_open--;
+ qmgr_disable_irq(port->plat->rxq);
+ qmgr_disable_irq(TX_QUEUE(port->plat));
+ netif_stop_queue(dev);
+
+ clr_regbits(RX_CNTRL0_RX_EN, &port->regs->rx_control[0]);
+ clr_regbits(TX_CNTRL0_TX_EN, &port->regs->tx_control[0]);
+ set_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
+ &port->regs->core_control);
+ udelay(10);
+ clr_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
+ &port->regs->core_control);
+
+ cancel_rearming_delayed_work(&port->mdio_thread);
+ destroy_queues(port);
+ release_queues(port);
+ return 0;
+}
+
+static int __devinit eth_init_one(struct platform_device *pdev)
+{
+ struct port *port;
+ struct net_device *dev;
+ struct mac_plat_info *plat = pdev->dev.platform_data;
+ u32 regs_phys;
+ int err;
+
+ if (!(dev = alloc_etherdev(sizeof(struct port))))
+ return -ENOMEM;
+
+ SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ port = netdev_priv(dev);
+ port->netdev = dev;
+ port->id = pdev->id;
+
+ switch (port->id) {
+ case IXP4XX_ETH_NPEA:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthA_BASE_VIRT;
+ regs_phys = IXP4XX_EthA_BASE_PHYS;
+ break;
+ case IXP4XX_ETH_NPEB:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
+ regs_phys = IXP4XX_EthB_BASE_PHYS;
+ break;
+ case IXP4XX_ETH_NPEC:
+ port->regs = (struct eth_regs __iomem *)IXP4XX_EthC_BASE_VIRT;
+ regs_phys = IXP4XX_EthC_BASE_PHYS;
+ break;
+ default:
+ err = -ENOSYS;
+ goto err_free;
+ }
+
+ dev->open = eth_open;
+ dev->hard_start_xmit = eth_xmit;
+ dev->poll = eth_poll;
+ dev->stop = eth_close;
+ dev->get_stats = eth_stats;
+ dev->do_ioctl = eth_ioctl;
+ dev->set_multicast_list = eth_set_mcast_list;
+ dev->weight = 16;
+ dev->tx_queue_len = 100;
+
+ if (!(port->npe = npe_request(NPE_ID(port)))) {
+ err = -EIO;
+ goto err_free;
+ }
+
+ if (register_netdev(dev)) {
+ err = -EIO;
+ goto err_npe_rel;
+ }
+
+ port->mem_res = request_mem_region(regs_phys, REGS_SIZE, dev->name);
+ if (!port->mem_res) {
+ err = -EBUSY;
+ goto err_unreg;
+ }
+
+ port->plat = plat;
+ memcpy(dev->dev_addr, plat->hwaddr, ETH_ALEN);
+
+ platform_set_drvdata(pdev, dev);
+
+ __raw_writel(CORE_RESET, &port->regs->core_control);
+ udelay(50);
+ __raw_writel(CORE_MDC_EN, &port->regs->core_control);
+ udelay(50);
+
+ port->mii.dev = dev;
+ port->mii.mdio_read = mdio_read;
+ port->mii.mdio_write = mdio_write;
+ port->mii.phy_id = plat->phy;
+ port->mii.phy_id_mask = 0x1F;
+ port->mii.reg_num_mask = 0x1F;
+
+ INIT_DELAYED_WORK(&port->mdio_thread, mdio_thread);
+
+ printk(KERN_INFO "%s: MII PHY %i on %s\n", dev->name, plat->phy,
+ npe_name(port->npe));
+ return 0;
+
+err_unreg:
+ unregister_netdev(dev);
+err_npe_rel:
+ npe_release(port->npe);
+err_free:
+ free_netdev(dev);
+ return err;
+}
+
+static int __devexit eth_remove_one(struct platform_device *pdev)
+{
+ struct net_device *dev = platform_get_drvdata(pdev);
+ struct port *port = netdev_priv(dev);
+
+ unregister_netdev(dev);
+ platform_set_drvdata(pdev, NULL);
+ npe_release(port->npe);
+ release_resource(port->mem_res);
+ free_netdev(dev);
+ return 0;
+}
+
+static struct platform_driver drv = {
+ .driver.name = DRV_NAME,
+ .probe = eth_init_one,
+ .remove = eth_remove_one,
+};
+
+static int __init eth_init_module(void)
+{
+ if (!(ixp4xx_read_fuses() & IXP4XX_FUSE_NPEB_ETH0))
+ return -ENOSYS;
+
+ /* All MII PHY accesses use NPE-B Ethernet registers */
+ if (!(mdio_npe = npe_request(1)))
+ return -EIO;
+ spin_lock_init(&mdio_lock);
+ mdio_regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
+
+ return platform_driver_register(&drv);
+}
+
+static void __exit eth_cleanup_module(void)
+{
+ platform_driver_unregister(&drv);
+ npe_release(mdio_npe);
+}
+
+MODULE_AUTHOR("Krzysztof Halasa");
+MODULE_DESCRIPTION("Intel IXP4xx Ethernet driver");
+MODULE_LICENSE("GPL v2");
+module_init(eth_init_module);
+module_exit(eth_cleanup_module);
diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
index 5f79622..b891e10 100644
--- a/drivers/net/wan/Kconfig
+++ b/drivers/net/wan/Kconfig
@@ -342,6 +342,16 @@ config DSCC4_PCI_RST
Say Y if your card supports this feature.
+config IXP4XX_HSS
+ tristate "IXP4xx HSS (synchronous serial port) support"
+ depends on ARM && ARCH_IXP4XX
+ select IXP4XX_NPE
+ select IXP4XX_QMGR
+ select HDLC
+ help
+	  Say Y here if you want to use the built-in HSS ports
+	  on the IXP4xx processor.
+
config DLCI
tristate "Frame Relay DLCI support"
---help---
diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile
index d61fef3..1b1d116 100644
--- a/drivers/net/wan/Makefile
+++ b/drivers/net/wan/Makefile
@@ -42,6 +42,7 @@ obj-$(CONFIG_C101) += c101.o
obj-$(CONFIG_WANXL) += wanxl.o
obj-$(CONFIG_PCI200SYN) += pci200syn.o
obj-$(CONFIG_PC300TOO) += pc300too.o
+obj-$(CONFIG_IXP4XX_HSS) += ixp4xx_hss.o
clean-files := wanxlfw.inc
$(obj)/wanxl.o: $(obj)/wanxlfw.inc
diff --git a/drivers/net/wan/ixp4xx_hss.c b/drivers/net/wan/ixp4xx_hss.c
new file mode 100644
index 0000000..ed56ed8
--- /dev/null
+++ b/drivers/net/wan/ixp4xx_hss.c
@@ -0,0 +1,1048 @@
+/*
+ * Intel IXP4xx HSS (synchronous serial port) driver for Linux
+ *
+ * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/hdlc.h>
+#include <linux/platform_device.h>
+#include <asm/io.h>
+#include <asm/arch/npe.h>
+#include <asm/arch/qmgr.h>
+
+#ifndef __ARMEB__
+#warning Little endian mode not supported
+#endif
+
+#define DEBUG_QUEUES 0
+#define DEBUG_RX 0
+#define DEBUG_TX 0
+
+#define DRV_NAME "ixp4xx_hss"
+#define DRV_VERSION "0.03"
+
+#define PKT_EXTRA_FLAGS 0 /* orig 1 */
+#define FRAME_SYNC_OFFSET 0 /* unused, channelized only */
+#define FRAME_SYNC_SIZE 1024
+#define PKT_NUM_PIPES 1 /* 1, 2 or 4 */
+#define PKT_PIPE_FIFO_SIZEW 4 /* total 4 dwords per HSS */
+
+#define RX_DESCS 16 /* also length of queues: RX-ready, RX */
+#define TX_DESCS 16 /* also length of queues: TX-done, TX */
+
+#define POOL_ALLOC_SIZE (sizeof(struct desc) * (RX_DESCS + TX_DESCS))
+#define RX_SIZE (HDLC_MAX_MRU + 4) /* NPE needs more space */
+
+/* Queue IDs */
+#define HSS0_CHL_RXTRIG_QUEUE 12 /* orig size = 32 dwords */
+#define HSS0_PKT_RX_QUEUE 13 /* orig size = 32 dwords */
+#define HSS0_PKT_TX0_QUEUE 14 /* orig size = 16 dwords */
+#define HSS0_PKT_TX1_QUEUE 15
+#define HSS0_PKT_TX2_QUEUE 16
+#define HSS0_PKT_TX3_QUEUE 17
+#define HSS0_PKT_RXFREE0_QUEUE 18 /* orig size = 16 dwords */
+#define HSS0_PKT_RXFREE1_QUEUE 19
+#define HSS0_PKT_RXFREE2_QUEUE 20
+#define HSS0_PKT_RXFREE3_QUEUE 21
+#define HSS0_PKT_TXDONE_QUEUE 22 /* orig size = 64 dwords */
+
+#define HSS1_CHL_RXTRIG_QUEUE 10
+#define HSS1_PKT_RX_QUEUE 0
+#define HSS1_PKT_TX0_QUEUE 5
+#define HSS1_PKT_TX1_QUEUE 6
+#define HSS1_PKT_TX2_QUEUE 7
+#define HSS1_PKT_TX3_QUEUE 8
+#define HSS1_PKT_RXFREE0_QUEUE 1
+#define HSS1_PKT_RXFREE1_QUEUE 2
+#define HSS1_PKT_RXFREE2_QUEUE 3
+#define HSS1_PKT_RXFREE3_QUEUE 4
+#define HSS1_PKT_TXDONE_QUEUE 9
+
+#define NPE_PKT_MODE_HDLC 0
+#define NPE_PKT_MODE_RAW 1
+#define NPE_PKT_MODE_56KMODE 2
+#define NPE_PKT_MODE_56KENDIAN_MSB 4
+
+/* PKT_PIPE_HDLC_CFG_WRITE flags */
+#define PKT_HDLC_IDLE_ONES 0x1 /* default = flags */
+#define PKT_HDLC_CRC_32 0x2 /* default = CRC-16 */
+#define PKT_HDLC_MSB_ENDIAN 0x4 /* default = LE */
+
+
+/* hss_config, PCRs */
+/* Frame sync sampling, default = active low */
+#define PCR_FRM_SYNC_ACTIVE_HIGH 0x40000000
+#define PCR_FRM_SYNC_FALLINGEDGE 0x80000000
+#define PCR_FRM_SYNC_RISINGEDGE 0xC0000000
+
+/* Frame sync pin: input (default) or output generated off a given clk edge */
+#define PCR_FRM_SYNC_OUTPUT_FALLING 0x20000000
+#define PCR_FRM_SYNC_OUTPUT_RISING 0x30000000
+
+/* Frame and data clock sampling on edge, default = falling */
+#define PCR_FCLK_EDGE_RISING 0x08000000
+#define PCR_DCLK_EDGE_RISING 0x04000000
+
+/* Clock direction, default = input */
+#define PCR_SYNC_CLK_DIR_OUTPUT 0x02000000
+
+/* Generate/Receive frame pulses, default = enabled */
+#define PCR_FRM_PULSE_DISABLED 0x01000000
+
+/* Data rate is full (default) or half the configured clk speed */
+#define PCR_HALF_CLK_RATE 0x00200000
+
+/* Invert data between NPE and HSS FIFOs? (default = no) */
+#define PCR_DATA_POLARITY_INVERT 0x00100000
+
+/* TX/RX endianness, default = LSB */
+#define PCR_MSB_ENDIAN 0x00080000
+
+/* Normal (default) / open drain mode (TX only) */
+#define PCR_TX_PINS_OPEN_DRAIN 0x00040000
+
+/* No framing bit transmitted and expected on RX? (default = framing bit) */
+#define PCR_SOF_NO_FBIT 0x00020000
+
+/* Drive data pins? */
+#define PCR_TX_DATA_ENABLE 0x00010000
+
+/* Voice 56k type: drive the data pins low (default), high, high Z */
+#define PCR_TX_V56K_HIGH 0x00002000
+#define PCR_TX_V56K_HIGH_IMP 0x00004000
+
+/* Unassigned type: drive the data pins low (default), high, high Z */
+#define PCR_TX_UNASS_HIGH 0x00000800
+#define PCR_TX_UNASS_HIGH_IMP 0x00001000
+
+/* T1 @ 1.544MHz only: Fbit dictated in FIFO (default) or high Z */
+#define PCR_TX_FB_HIGH_IMP 0x00000400
+
+/* 56k data endianness - which bit is unused: high (default) or low */
+#define PCR_TX_56KE_BIT_0_UNUSED 0x00000200
+
+/* 56k data transmission type: 32/8 bit data (default) or 56K data */
+#define PCR_TX_56KS_56K_DATA 0x00000100
+
+/* hss_config, cCR */
+/* Number of packetized clients, default = 1 */
+#define CCR_NPE_HFIFO_2_HDLC 0x04000000
+#define CCR_NPE_HFIFO_3_OR_4HDLC 0x08000000
+
+/* default = no loopback */
+#define CCR_LOOPBACK 0x02000000
+
+/* HSS number, default = 0 (first) */
+#define CCR_SECOND_HSS 0x01000000
+
+
+/* hss_config, clkCR: main:10, num:10, denom:12 */
+#define CLK42X_SPEED_EXP	((0x3FF << 22) | (  2 << 12) |   15) /* 65 kHz */
+
+#define CLK42X_SPEED_512KHZ (( 130 << 22) | ( 2 << 12) | 15)
+#define CLK42X_SPEED_1536KHZ (( 43 << 22) | ( 18 << 12) | 47)
+#define CLK42X_SPEED_1544KHZ (( 43 << 22) | ( 33 << 12) | 192)
+#define CLK42X_SPEED_2048KHZ (( 32 << 22) | ( 34 << 12) | 63)
+#define CLK42X_SPEED_4096KHZ (( 16 << 22) | ( 34 << 12) | 127)
+#define CLK42X_SPEED_8192KHZ (( 8 << 22) | ( 34 << 12) | 255)
+
+#define CLK46X_SPEED_512KHZ (( 130 << 22) | ( 24 << 12) | 127)
+#define CLK46X_SPEED_1536KHZ (( 43 << 22) | (152 << 12) | 383)
+#define CLK46X_SPEED_1544KHZ (( 43 << 22) | ( 66 << 12) | 385)
+#define CLK46X_SPEED_2048KHZ (( 32 << 22) | (280 << 12) | 511)
+#define CLK46X_SPEED_4096KHZ (( 16 << 22) | (280 << 12) | 1023)
+#define CLK46X_SPEED_8192KHZ (( 8 << 22) | (280 << 12) | 2047)
+
+
+/* hss_config, LUTs: default = unassigned */
+#define TDMMAP_HDLC		1	/* HDLC - packetized */
+#define TDMMAP_VOICE56K		2	/* Voice56K - channelized */
+#define TDMMAP_VOICE64K		3	/* Voice64K - channelized */
+
+
+/* NPE command codes */
+/* writes the ConfigWord value to the location specified by offset */
+#define PORT_CONFIG_WRITE 0x40
+
+/* triggers the NPE to load the contents of the configuration table */
+#define PORT_CONFIG_LOAD 0x41
+
+/* triggers the NPE to return an HssErrorReadResponse message */
+#define PORT_ERROR_READ 0x42
+
+/* reset NPE internal status and enable the HssChannelized operation */
+#define CHAN_FLOW_ENABLE 0x43
+#define CHAN_FLOW_DISABLE 0x44
+#define CHAN_IDLE_PATTERN_WRITE 0x45
+#define CHAN_NUM_CHANS_WRITE 0x46
+#define CHAN_RX_BUF_ADDR_WRITE 0x47
+#define CHAN_RX_BUF_CFG_WRITE 0x48
+#define CHAN_TX_BLK_CFG_WRITE 0x49
+#define CHAN_TX_BUF_ADDR_WRITE 0x4A
+#define CHAN_TX_BUF_SIZE_WRITE 0x4B
+#define CHAN_TSLOTSWITCH_ENABLE 0x4C
+#define CHAN_TSLOTSWITCH_DISABLE 0x4D
+
+/* downloads the gainWord value for a timeslot switching channel associated
+ with bypassNum */
+#define CHAN_TSLOTSWITCH_GCT_DOWNLOAD 0x4E
+
+/* triggers the NPE to reset internal status and enable the HssPacketized
+ operation for the flow specified by pPipe */
+#define PKT_PIPE_FLOW_ENABLE 0x50
+#define PKT_PIPE_FLOW_DISABLE 0x51
+#define PKT_NUM_PIPES_WRITE 0x52
+#define PKT_PIPE_FIFO_SIZEW_WRITE 0x53
+#define PKT_PIPE_HDLC_CFG_WRITE 0x54
+#define PKT_PIPE_IDLE_PATTERN_WRITE 0x55
+#define PKT_PIPE_RX_SIZE_WRITE 0x56
+#define PKT_PIPE_MODE_WRITE 0x57
+
+
+#define HSS_TIMESLOTS 128
+#define HSS_LUT_BITS 2
+
+/* HDLC packet status values - desc->status */
+#define ERR_SHUTDOWN		1 /* stop or shutdown occurrence */
+#define ERR_HDLC_ALIGN 2 /* HDLC alignment error */
+#define ERR_HDLC_FCS 3 /* HDLC Frame Check Sum error */
+#define ERR_RXFREE_Q_EMPTY 4 /* RX-free queue became empty while receiving
+ this packet (if buf_len < pkt_len) */
+#define ERR_HDLC_TOO_LONG 5 /* HDLC frame size too long */
+#define ERR_HDLC_ABORT 6 /* abort sequence received */
+#define ERR_DISCONNECTING 7 /* disconnect is in progress */
+
+
+struct port {
+ struct npe *npe;
+ struct net_device *netdev;
+ struct hss_plat_info *plat;
+ struct sk_buff *rx_skb_tab[RX_DESCS], *tx_skb_tab[TX_DESCS];
+ struct desc *desc_tab; /* coherent */
+ u32 desc_tab_phys;
+ sync_serial_settings settings;
+ int id;
+ u8 hdlc_cfg;
+};
+
+/* NPE message structure */
+struct msg {
+ u8 cmd, unused, hss_port, index;
+ union {
+ u8 data8[4];
+ u16 data16[2];
+ u32 data32;
+ };
+};
+
+
+/* HDLC packet descriptor */
+struct desc {
+ u32 next; /* pointer to next buffer, unused */
+ u16 buf_len; /* buffer length */
+ u16 pkt_len; /* packet length */
+ u32 data; /* pointer to data buffer in RAM */
+ u8 status;
+ u8 error_count;
+ u16 __reserved;
+ u32 __reserved1[4];
+};
+
+#define rx_desc_ptr(port, n) (&(port)->desc_tab[n])
+#define rx_desc_phys(port, n) ((port)->desc_tab_phys + \
+ (n) * sizeof(struct desc))
+#define tx_desc_ptr(port, n) (&(port)->desc_tab[(n) + RX_DESCS])
+#define tx_desc_phys(port, n) ((port)->desc_tab_phys + \
+ ((n) + RX_DESCS) * sizeof(struct desc))
+
+static int ports_open;
+static struct dma_pool *dma_pool;
+
+static struct {
+ int tx, txdone, rx, rxfree;
+} queue_ids[2] = {{ HSS0_PKT_TX0_QUEUE, HSS0_PKT_TXDONE_QUEUE,
+ HSS0_PKT_RX_QUEUE, HSS0_PKT_RXFREE0_QUEUE },
+ { HSS1_PKT_TX0_QUEUE, HSS1_PKT_TXDONE_QUEUE,
+ HSS1_PKT_RX_QUEUE, HSS1_PKT_RXFREE0_QUEUE },
+};
+
+
+static inline struct port* dev_to_port(struct net_device *dev)
+{
+ return dev_to_hdlc(dev)->priv;
+}
+
+
+static inline void debug_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc, int is_get)
+{
+#if DEBUG_QUEUES
+ const char *op = is_get ? "->" : "<-";
+
+ if (!desc_phys) {
+ printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
+ return;
+ }
+ printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %X %X\n",
+ queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
+ desc->data, desc->status, desc->error_count);
+#endif
+}
+
+static inline int queue_get_desc(unsigned int queue, struct port *port,
+ int is_tx)
+{
+ u32 phys, tab_phys, n_desc;
+ struct desc *tab;
+
+ if (!(phys = qmgr_get_entry(queue))) {
+ debug_desc(queue, phys, NULL, 1);
+ return -1;
+ }
+
+ BUG_ON(phys & 0x1F);
+ tab_phys = is_tx ? tx_desc_phys(port, 0) : rx_desc_phys(port, 0);
+ tab = is_tx ? tx_desc_ptr(port, 0) : rx_desc_ptr(port, 0);
+ n_desc = (phys - tab_phys) / sizeof(struct desc);
+ BUG_ON(n_desc >= (is_tx ? TX_DESCS : RX_DESCS));
+
+ debug_desc(queue, phys, &tab[n_desc], 1);
+ BUG_ON(tab[n_desc].next);
+ return n_desc;
+}
+
+static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
+ struct desc *desc)
+{
+ debug_desc(queue, desc_phys, desc, 0);
+ BUG_ON(desc_phys & 0x1F);
+ qmgr_put_entry(queue, desc_phys);
+}
+
+
+static void hss_set_carrier(void *pdev, int carrier)
+{
+ struct net_device *dev = pdev;
+ if (carrier)
+ netif_carrier_on(dev);
+ else
+ netif_carrier_off(dev);
+}
+
+static void hss_rx_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+ struct port *port = dev_to_port(dev);
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_rx_irq() start\n");
+#endif
+ qmgr_disable_irq(queue_ids[port->id].rx);
+ netif_rx_schedule(dev);
+}
+
+static int hss_poll(struct net_device *dev, int *budget)
+{
+ struct port *port = dev_to_port(dev);
+ unsigned int queue = queue_ids[port->id].rx;
+ struct net_device_stats *stats = hdlc_stats(dev);
+ int quota = dev->quota, received = 0;
+
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() start\n");
+#endif
+ while (quota) {
+ struct sk_buff *old_skb, *new_skb = NULL;
+ struct desc *desc;
+ u32 data;
+ int n = queue_get_desc(queue, port, 0);
+ if (n < 0) { /* No packets received */
+ dev->quota -= received;
+ *budget -= received;
+ received = 0;
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() netif_rx_complete()\n");
+#endif
+ netif_rx_complete(dev);
+ qmgr_enable_irq(queue);
+ if (!qmgr_stat_empty(queue) &&
+ netif_rx_reschedule(dev, 0)) {
+#if DEBUG_RX
+				printk(KERN_DEBUG "hss_poll()"
+				       " netif_rx_reschedule() succeeded\n");
+#endif
+ qmgr_disable_irq(queue);
+ continue;
+ }
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() all done\n");
+#endif
+ return 0; /* all work done */
+ }
+
+ desc = rx_desc_ptr(port, n);
+
+ if (!desc->status) /* check for RX errors */
+ new_skb = netdev_alloc_skb(dev, RX_SIZE);
+ if (new_skb)
+ data = dma_map_single(&dev->dev, new_skb->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+
+ if (!new_skb || dma_mapping_error(data)) {
+ if (new_skb)
+ dev_kfree_skb(new_skb);
+ switch (desc->status) {
+ case 0:
+ stats->rx_dropped++;
+ break;
+ case ERR_HDLC_ALIGN:
+ case ERR_HDLC_ABORT:
+ stats->rx_frame_errors++;
+ stats->rx_errors++;
+ break;
+ case ERR_HDLC_FCS:
+ stats->rx_crc_errors++;
+ stats->rx_errors++;
+ break;
+ case ERR_HDLC_TOO_LONG:
+ stats->rx_length_errors++;
+ stats->rx_errors++;
+ break;
+ default: /* FIXME - remove printk */
+ printk(KERN_ERR "hss_poll(): status 0x%02X"
+ " errors %u\n", desc->status,
+ desc->error_count);
+ stats->rx_errors++;
+ }
+ /* put the desc back on RX-ready queue */
+ desc->buf_len = RX_SIZE;
+ desc->pkt_len = desc->status = 0;
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ continue;
+ }
+
+ if (desc->error_count) /* FIXME - remove printk */
+ printk(KERN_ERR "hss_poll(): status 0x%02X"
+ " errors %u\n", desc->status,
+ desc->error_count);
+
+ /* process received skb */
+ old_skb = port->rx_skb_tab[n];
+ dma_unmap_single(&dev->dev, desc->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+
+ skb_put(old_skb, desc->pkt_len);
+ old_skb->protocol = hdlc_type_trans(old_skb, dev);
+ dev->last_rx = jiffies;
+ stats->rx_packets++;
+ stats->rx_bytes += old_skb->len;
+ netif_receive_skb(old_skb);
+
+ /* put the new skb on RX-free queue */
+ port->rx_skb_tab[n] = new_skb;
+ desc->buf_len = RX_SIZE;
+ desc->pkt_len = 0;
+ desc->data = data;
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, n), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ quota--;
+ received++;
+ }
+ dev->quota -= received;
+ *budget -= received;
+#if DEBUG_RX
+ printk(KERN_DEBUG "hss_poll() end, not all work done\n");
+#endif
+ return 1; /* not all work done */
+}
+
+static void hss_xmit_ready_irq(void *pdev)
+{
+ struct net_device *dev = pdev;
+
+#if DEBUG_TX
+	printk(KERN_DEBUG "hss_xmit_ready_irq() start\n");
+#endif
+ netif_start_queue(dev);
+
+#if DEBUG_TX
+	printk(KERN_DEBUG "hss_xmit_ready_irq() end\n");
+#endif
+}
+
+static int hss_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct net_device_stats *stats = hdlc_stats(dev);
+ struct desc *desc;
+ u32 phys;
+ struct sk_buff *old_skb;
+ int n;
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit() start\n");
+#endif
+ if (unlikely(skb->len > HDLC_MAX_MRU)) {
+ dev_kfree_skb(skb);
+ stats->tx_errors++;
+ return NETDEV_TX_OK;
+ }
+
+ n = queue_get_desc(queue_ids[port->id].txdone, port, 1);
+ BUG_ON(n < 0);
+ desc = tx_desc_ptr(port, n);
+ phys = tx_desc_phys(port, n);
+
+ if ((old_skb = port->tx_skb_tab[n]) != NULL) {
+ dma_unmap_single(&dev->dev, desc->data,
+ desc->buf_len, DMA_TO_DEVICE);
+ stats->tx_packets++;
+ stats->tx_bytes += old_skb->len;
+ dev_kfree_skb(old_skb);
+ }
+
+ desc->buf_len = desc->pkt_len = skb->len;
+ desc->data = dma_map_single(&dev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ dev_kfree_skb(skb);
+ port->tx_skb_tab[n] = NULL;
+ stats->tx_dropped++;
+ /* put the desc back on TX-done queue */
+ queue_put_desc(queue_ids[port->id].txdone, phys, desc);
+		return NETDEV_TX_OK;
+ }
+
+ port->tx_skb_tab[n] = skb;
+ wmb();
+ queue_put_desc(queue_ids[port->id].tx, phys, desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].tx));
+ dev->trans_start = jiffies;
+
+ if (qmgr_stat_empty(queue_ids[port->id].txdone)) {
+ netif_stop_queue(dev);
+ /* we could miss TX ready interrupt */
+ if (!qmgr_stat_empty(queue_ids[port->id].txdone)) {
+ netif_start_queue(dev);
+ }
+ }
+
+#if DEBUG_TX
+ printk(KERN_DEBUG "hss_xmit() end\n");
+#endif
+ return NETDEV_TX_OK;
+}
+
+
+static int request_queues(struct port *port)
+{
+ int err;
+
+ err = qmgr_request_queue(queue_ids[port->id].rxfree, RX_DESCS, 0, 0);
+ if (err)
+ return err;
+
+ err = qmgr_request_queue(queue_ids[port->id].rx, RX_DESCS, 0, 0);
+ if (err)
+ goto rel_rxfree;
+
+ err = qmgr_request_queue(queue_ids[port->id].tx, TX_DESCS, 0, 0);
+ if (err)
+ goto rel_rx;
+
+ err = qmgr_request_queue(queue_ids[port->id].txdone, TX_DESCS, 0, 0);
+ if (err)
+ goto rel_tx;
+ return 0;
+
+rel_tx:
+ qmgr_release_queue(queue_ids[port->id].tx);
+rel_rx:
+ qmgr_release_queue(queue_ids[port->id].rx);
+rel_rxfree:
+ qmgr_release_queue(queue_ids[port->id].rxfree);
+ return err;
+}
+
+static void release_queues(struct port *port)
+{
+ qmgr_release_queue(queue_ids[port->id].rxfree);
+ qmgr_release_queue(queue_ids[port->id].rx);
+ qmgr_release_queue(queue_ids[port->id].txdone);
+ qmgr_release_queue(queue_ids[port->id].tx);
+}
+
+static int init_queues(struct port *port)
+{
+ int i;
+
+ if (!dma_pool) {
+ dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
+ 32, 0);
+ if (!dma_pool)
+ return -ENOMEM;
+ }
+
+ if (!(port->desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
+ &port->desc_tab_phys)))
+ return -ENOMEM;
+ memset(port->desc_tab, 0, POOL_ALLOC_SIZE);
+ memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* tables */
+ memset(port->tx_skb_tab, 0, sizeof(port->tx_skb_tab));
+
+ /* Setup RX buffers */
+ for (i = 0; i < RX_DESCS; i++) {
+ struct desc *desc = rx_desc_ptr(port, i);
+ struct sk_buff *skb;
+
+ if (!(skb = netdev_alloc_skb(port->netdev, RX_SIZE)))
+ return -ENOMEM;
+ port->rx_skb_tab[i] = skb;
+ desc->buf_len = RX_SIZE;
+ desc->data = dma_map_single(&port->netdev->dev, skb->data,
+ RX_SIZE, DMA_FROM_DEVICE);
+ if (dma_mapping_error(desc->data)) {
+ desc->data = 0;
+ return -EIO;
+ }
+ queue_put_desc(queue_ids[port->id].rxfree,
+ rx_desc_phys(port, i), desc);
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].rxfree));
+ }
+
+ /* Setup TX-done queue */
+ for (i = 0; i < TX_DESCS; i++) {
+ queue_put_desc(queue_ids[port->id].txdone,
+ tx_desc_phys(port, i), tx_desc_ptr(port, i));
+ BUG_ON(qmgr_stat_overflow(queue_ids[port->id].txdone));
+ }
+ return 0;
+}
+
+static void destroy_queues(struct port *port)
+{
+ int i;
+
+ while (queue_get_desc(queue_ids[port->id].rxfree, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].rx, port, 0) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].tx, port, 1) >= 0)
+ /* nothing to do here */;
+ while (queue_get_desc(queue_ids[port->id].txdone, port, 1) >= 0)
+ /* nothing to do here */;
+
+ if (port->desc_tab) {
+ for (i = 0; i < RX_DESCS; i++) {
+ struct desc *desc = rx_desc_ptr(port, i);
+ struct sk_buff *skb = port->rx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data, RX_SIZE,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ for (i = 0; i < TX_DESCS; i++) {
+ struct desc *desc = tx_desc_ptr(port, i);
+ struct sk_buff *skb = port->tx_skb_tab[i];
+ if (skb) {
+ if (desc->data)
+ dma_unmap_single(&port->netdev->dev,
+ desc->data,
+ desc->buf_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb(skb);
+ }
+ }
+ dma_pool_free(dma_pool, port->desc_tab, port->desc_tab_phys);
+ port->desc_tab = NULL;
+ }
+
+ if (!ports_open && dma_pool) {
+ dma_pool_destroy(dma_pool);
+ dma_pool = NULL;
+ }
+}
+
+
+static int hss_open(struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+ int i, err;
+
+ if (!npe_running(npe))
+ if ((err = npe_load_firmware(npe, npe_name(npe),
+ &dev->dev)) != 0)
+ return err;
+
+ if ((err = hdlc_open(dev)) != 0)
+ return err;
+
+ if (port->plat->open)
+ if ((err = port->plat->open(port->id, port->netdev,
+ hss_set_carrier)) != 0)
+ goto err_hdlc_close;
+
+ /* HSS main configuration */
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PORT_CONFIG_WRITE;
+ msg.hss_port = port->id;
+ msg.index = 0; /* offset in HSS config */
+
+ msg.data32 = PCR_FRM_PULSE_DISABLED |
+ PCR_SOF_NO_FBIT |
+ PCR_MSB_ENDIAN |
+ PCR_TX_DATA_ENABLE;
+
+ if (port->settings.clock_type == CLOCK_INT)
+ msg.data32 |= PCR_SYNC_CLK_DIR_OUTPUT;
+
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_TX_PCR")) != 0)
+ goto err_plat_close; /* 0: TX PCR */
+
+ msg.index = 4;
+ msg.data32 ^= PCR_TX_DATA_ENABLE | PCR_DCLK_EDGE_RISING;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_RX_PCR")) != 0)
+ goto err_plat_close; /* 4: RX PCR */
+
+ msg.index = 8;
+ msg.data32 = (port->settings.loopback ? CCR_LOOPBACK : 0) |
+ (port->id ? CCR_SECOND_HSS : 0);
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_CORE_CR")) != 0)
+ goto err_plat_close; /* 8: Core CR */
+
+ msg.index = 12;
+ msg.data32 = CLK42X_SPEED_2048KHZ /* FIXME */;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_CLK_CR")) != 0)
+ goto err_plat_close; /* 12: CLK CR */
+
+ msg.data32 = (FRAME_SYNC_OFFSET << 16) | (FRAME_SYNC_SIZE - 1);
+ msg.index = 16;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_TX_FCR")) != 0)
+ goto err_plat_close; /* 16: TX FCR */
+
+ msg.index = 20;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_RX_FCR")) != 0)
+ goto err_plat_close; /* 20: RX FCR */
+
+ msg.data32 = 0; /* Fill LUT with HDLC timeslots */
+ for (i = 0; i < 32 / HSS_LUT_BITS; i++)
+ msg.data32 |= TDMMAP_HDLC << (HSS_LUT_BITS * i);
+
+ for (i = 0; i < 2 /* TX and RX */ * HSS_TIMESLOTS * HSS_LUT_BITS / 8;
+ i += 4) {
+ msg.index = 24 + i; /* 24 - 55: TX LUT, 56 - 87: RX LUT */
+		if ((err = npe_send_message(npe, &msg, "HSS_SET_LUT")) != 0)
+ goto err_plat_close;
+ }
+
+ /* HDLC mode configuration */
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_NUM_PIPES_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = PKT_NUM_PIPES;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_PIPES")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FIFO_SIZEW_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = PKT_PIPE_FIFO_SIZEW;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_FIFO")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_IDLE_PATTERN_WRITE;
+ msg.hss_port = port->id;
+ msg.data32 = 0x7F7F7F7F;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_IDLE")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PORT_CONFIG_LOAD;
+ msg.hss_port = port->id;
+	if ((err = npe_send_message(npe, &msg, "HSS_LOAD_CONFIG")) != 0)
+		goto err_plat_close;
+	if ((err = npe_recv_message(npe, &msg, "HSS_LOAD_CONFIG")) != 0)
+		goto err_plat_close;
+
+ /* HSS_LOAD_CONFIG for port #1 returns port_id = #4 */
+ if (msg.cmd != PORT_CONFIG_LOAD || msg.data32) {
+		printk(KERN_DEBUG "%s: unexpected message received in"
+		       " response to HSS_LOAD_CONFIG\n", npe_name(npe));
+		err = -EIO;
+ goto err_plat_close;
+ }
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_HDLC_CFG_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = port->hdlc_cfg; /* rx_cfg */
+ msg.data8[1] = port->hdlc_cfg | (PKT_EXTRA_FLAGS << 3); /* tx_cfg */
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_HDLC_CFG")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_MODE_WRITE;
+ msg.hss_port = port->id;
+ msg.data8[0] = NPE_PKT_MODE_HDLC;
+ /* msg.data8[1] = inv_mask */
+ /* msg.data8[2] = or_mask */
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_MODE")) != 0)
+ goto err_plat_close;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_RX_SIZE_WRITE;
+ msg.hss_port = port->id;
+ msg.data16[0] = HDLC_MAX_MRU;
+	if ((err = npe_send_message(npe, &msg, "HSS_SET_PKT_RX_SIZE")) != 0)
+ goto err_plat_close;
+
+ if ((err = request_queues(port)) != 0)
+ goto err_plat_close;
+
+ if ((err = init_queues(port)) != 0)
+ goto err_destroy_queues;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FLOW_ENABLE;
+ msg.hss_port = port->id;
+	if ((err = npe_send_message(npe, &msg, "HSS_ENABLE_PKT_PIPE")) != 0)
+ goto err_destroy_queues;
+
+ netif_start_queue(dev);
+
+ qmgr_set_irq(queue_ids[port->id].rx, QUEUE_IRQ_SRC_NOT_EMPTY,
+ hss_rx_irq, dev);
+ qmgr_enable_irq(queue_ids[port->id].rx);
+
+ qmgr_set_irq(queue_ids[port->id].txdone, QUEUE_IRQ_SRC_NOT_EMPTY,
+ hss_xmit_ready_irq, dev);
+ qmgr_enable_irq(queue_ids[port->id].txdone);
+
+ ports_open++;
+ return 0;
+
+err_destroy_queues:
+ destroy_queues(port);
+ release_queues(port);
+err_plat_close:
+ if (port->plat->close)
+ port->plat->close(port->id, port->netdev);
+err_hdlc_close:
+ hdlc_close(dev);
+ return err;
+}
+
+static int hss_close(struct net_device *dev)
+{
+ struct port *port = dev_to_port(dev);
+ struct npe *npe = port->npe;
+ struct msg msg;
+
+ ports_open--;
+ qmgr_disable_irq(queue_ids[port->id].rx);
+ qmgr_disable_irq(queue_ids[port->id].txdone);
+ netif_stop_queue(dev);
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = PKT_PIPE_FLOW_DISABLE;
+ msg.hss_port = port->id;
+ if (npe_send_message(npe, &msg, "HSS_DISABLE_PKT_PIPE")) {
+ printk(KERN_CRIT "HSS-%i: unable to stop HDLC flow\n",
+ port->id);
+ /* The upper level would ignore the error anyway */
+ }
+
+ destroy_queues(port);
+ release_queues(port);
+
+ if (port->plat->close)
+ port->plat->close(port->id, port->netdev);
+ hdlc_close(dev);
+ return 0;
+}
+
+
+static int hss_attach(struct net_device *dev, unsigned short encoding,
+ unsigned short parity)
+{
+ struct port *port = dev_to_port(dev);
+
+ if (encoding != ENCODING_NRZ)
+ return -EINVAL;
+
+	switch (parity) {
+ case PARITY_CRC16_PR1_CCITT:
+ port->hdlc_cfg = 0;
+ return 0;
+
+ case PARITY_CRC32_PR1_CCITT:
+ port->hdlc_cfg = PKT_HDLC_CRC_32;
+ return 0;
+
+ default:
+ return -EINVAL;
+ }
+}
+
+
+static int hss_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ const size_t size = sizeof(sync_serial_settings);
+ sync_serial_settings new_line;
+ int clk;
+ sync_serial_settings __user *line = ifr->ifr_settings.ifs_ifsu.sync;
+ struct port *port = dev_to_port(dev);
+
+ if (cmd != SIOCWANDEV)
+ return hdlc_ioctl(dev, ifr, cmd);
+
+	switch (ifr->ifr_settings.type) {
+ case IF_GET_IFACE:
+ ifr->ifr_settings.type = IF_IFACE_V35;
+ if (ifr->ifr_settings.size < size) {
+ ifr->ifr_settings.size = size; /* data size wanted */
+ return -ENOBUFS;
+ }
+ if (copy_to_user(line, &port->settings, size))
+ return -EFAULT;
+ return 0;
+
+ case IF_IFACE_SYNC_SERIAL:
+ case IF_IFACE_V35:
+		if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (dev->flags & IFF_UP)
+ return -EBUSY; /* Cannot change parameters when open */
+
+ if (copy_from_user(&new_line, line, size))
+ return -EFAULT;
+
+ clk = new_line.clock_type;
+ if (port->plat->set_clock)
+ clk = port->plat->set_clock(port->id, clk);
+
+ if (clk != CLOCK_EXT && clk != CLOCK_INT)
+ return -EINVAL; /* No such clock setting */
+
+ if (new_line.loopback != 0 && new_line.loopback != 1)
+ return -EINVAL;
+
+ memcpy(&port->settings, &new_line, size); /* Update settings */
+ return 0;
+
+ default:
+ return hdlc_ioctl(dev, ifr, cmd);
+ }
+}
+
+
+static int __devinit hss_init_one(struct platform_device *pdev)
+{
+ struct port *port;
+ struct net_device *dev;
+ hdlc_device *hdlc;
+ int err;
+
+ if ((port = kzalloc(sizeof(*port), GFP_KERNEL)) == NULL)
+ return -ENOMEM;
+ platform_set_drvdata(pdev, port);
+ port->id = pdev->id;
+
+ if ((port->npe = npe_request(0)) == NULL) {
+ err = -ENOSYS;
+ goto err_free;
+ }
+
+ port->plat = pdev->dev.platform_data;
+ if ((port->netdev = dev = alloc_hdlcdev(port)) == NULL) {
+ err = -ENOMEM;
+ goto err_plat;
+ }
+
+	SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ hdlc = dev_to_hdlc(dev);
+ hdlc->attach = hss_attach;
+ hdlc->xmit = hss_xmit;
+ dev->open = hss_open;
+ dev->poll = hss_poll;
+ dev->stop = hss_close;
+ dev->do_ioctl = hss_ioctl;
+ dev->weight = 16;
+ dev->tx_queue_len = 100;
+ port->settings.clock_type = CLOCK_EXT;
+ port->settings.clock_rate = 2048000;
+
+ if (register_hdlc_device(dev)) {
+ printk(KERN_ERR "HSS-%i: unable to register HDLC device\n",
+ port->id);
+ err = -ENOBUFS;
+ goto err_free_netdev;
+ }
+ printk(KERN_INFO "%s: HSS-%i\n", dev->name, port->id);
+ return 0;
+
+err_free_netdev:
+ free_netdev(dev);
+err_plat:
+ npe_release(port->npe);
+ platform_set_drvdata(pdev, NULL);
+err_free:
+ kfree(port);
+ return err;
+}
+
+static int __devexit hss_remove_one(struct platform_device *pdev)
+{
+ struct port *port = platform_get_drvdata(pdev);
+
+ unregister_hdlc_device(port->netdev);
+ free_netdev(port->netdev);
+ npe_release(port->npe);
+ platform_set_drvdata(pdev, NULL);
+ kfree(port);
+ return 0;
+}
+
+static struct platform_driver drv = {
+ .driver.name = DRV_NAME,
+ .probe = hss_init_one,
+ .remove = hss_remove_one,
+};
+
+static int __init hss_init_module(void)
+{
+ if ((ixp4xx_read_fuses() & (IXP4XX_FUSE_HDLC | IXP4XX_FUSE_HSS)) !=
+ (IXP4XX_FUSE_HDLC | IXP4XX_FUSE_HSS))
+ return -ENOSYS;
+ return platform_driver_register(&drv);
+}
+
+static void __exit hss_cleanup_module(void)
+{
+ platform_driver_unregister(&drv);
+}
+
+MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>");
+MODULE_DESCRIPTION("Intel IXP4xx HSS driver");
+MODULE_LICENSE("GPL v2");
+
+module_init(hss_init_module);
+module_exit(hss_cleanup_module);
diff --git a/include/asm-arm/arch-ixp4xx/platform.h b/include/asm-arm/arch-ixp4xx/platform.h
index ab194e5..3a07ee2 100644
--- a/include/asm-arm/arch-ixp4xx/platform.h
+++ b/include/asm-arm/arch-ixp4xx/platform.h
@@ -86,6 +86,25 @@ struct ixp4xx_i2c_pins {
unsigned long scl_pin;
};
+#define IXP4XX_ETH_NPEA 0x00
+#define IXP4XX_ETH_NPEB 0x10
+#define IXP4XX_ETH_NPEC 0x20
+
+/* Information about built-in Ethernet MAC interfaces */
+struct mac_plat_info {
+ u8 phy; /* MII PHY ID, 0 - 31 */
+ u8 rxq; /* configurable, currently 0 - 31 only */
+ u8 hwaddr[6];
+};
+
+/* Information about built-in HSS (synchronous serial) interfaces */
+struct hss_plat_info {
+ int (*set_clock)(int port, unsigned int clock_type);
+ int (*open)(int port, void *pdev,
+ void (*set_carrier_cb)(void *pdev, int carrier));
+ void (*close)(int port, void *pdev);
+};
+
/*
* This structure provides a means for the board setup code
* to give information to the pata_ixp4xx driver. It is
^ permalink raw reply related [flat|nested] 88+ messages in thread
* Re: [PATCH 0/3] Intel IXP4xx network drivers
2007-05-06 23:46 [PATCH 0/3] Intel IXP4xx network drivers Krzysztof Halasa
` (4 preceding siblings ...)
2007-05-07 20:39 ` [PATCH 0/3] " Leon Woestenberg
@ 2007-05-08 1:40 ` Krzysztof Halasa
5 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 1:40 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Russell King, lkml, netdev, linux-arm-kernel
Ok, the status of my patches is as follows:
[PATCH] Use menuconfig objects II - netdev/wan
the "menuconfig" WAN patch by Jan Engelhardt
[PATCH 1a/3] WAN Kconfig: change "depends on HDLC" to "select"
the Kconfig changes for WAN (HDLC) drivers
[PATCH 2a/3] Intel IXP4xx network drivers
IXP4xx "fuses" (installed on-chip CPU components) patch
[PATCH] Intel IXP4xx network drivers v.3 - QMGR
IXP4xx hardware Queue Manager driver
[PATCH] Intel IXP4xx network drivers v.2 - NPE
IXP4xx Network Processor Engine driver, needs "fuses" patch
[PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
You guessed it, needs all of the above.
The:
[PATCH] Intel IXP4xx network drivers v.2 - QMGR
was faulty, thus v.3.
Fire again :-)
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-08 1:19 ` Krzysztof Halasa
@ 2007-05-08 5:28 ` Jeff Garzik
-1 siblings, 0 replies; 88+ messages in thread
From: Jeff Garzik @ 2007-05-08 5:28 UTC (permalink / raw)
To: Krzysztof Halasa, Russell King
Cc: Michael-Luke Jones, netdev, lkml, ARM Linux Mailing List
ACK.
I shall presume that the ARM folks will apply these patches. You may
tack on an "Acked-by: Jeff Garzik <jeff@garzik.org>" onto the ethernet
driver itself.
I'll let the ARM folks review the rest.
I do agree with the comments in the thread that -- as in your most
recent revision -- the non-eth code belongs in arch/arm.
Jeff
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - NPE
2007-05-08 0:36 ` Krzysztof Halasa
@ 2007-05-08 7:02 ` Michael-Luke Jones
-1 siblings, 0 replies; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-08 7:02 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
On 8 May 2007, at 01:36, Krzysztof Halasa wrote:
> Adds a driver for built-in IXP4xx Network Processor Engines.
> This patch requires IXP4xx Queue Manager driver and the "fuses" patch.
>
> Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
[snip]
> diff --git a/arch/arm/mach-ixp4xx/ixp4xx_npe.c b/arch/arm/mach-
> ixp4xx/ixp4xx_npe.c
> new file mode 100644
> index 0000000..4c77d8a
> --- /dev/null
> +++ b/arch/arm/mach-ixp4xx/ixp4xx_npe.c
Already in mach-ixp4xx, so can just be called npe.c
> @@ -0,0 +1,737 @@
> +/*
> + * Intel IXP4xx Network Processor Engine driver for Linux
> + *
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or
> modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + *
> + * The code is based on publicly available information:
> + * - Intel IXP4xx Developer's Manual and other e-papers
> + * - Intel IXP400 Access Library Software (BSD license)
> + * - previous works by Christian Hohnstaedt
> <chohnstaedt@innominate.com>
> + * Thanks, Christian.
> + */
[snip]
> +int npe_load_firmware(struct npe *npe, const char *name, struct
> device *dev)
> +{
> + const struct firmware *fw_entry;
> +
> + struct dl_block {
> + u32 type;
> + u32 offset;
> + } *blk;
> +
> + struct dl_image {
> + u32 magic;
> + u32 id;
> + u32 size;
> + union {
> + u32 data[0];
> + struct dl_block blocks[0];
> + };
> + } *image;
> +
> + struct dl_codeblock {
> + u32 npe_addr;
> + u32 size;
> + u32 data[0];
> + } *cb;
> +
> + int i, j, err, data_size, instr_size, blocks, table_end;
> + u32 cmd;
> +
> + if ((err = request_firmware(&fw_entry, name, dev)) != 0)
> + return err;
> +
> + err = -EINVAL;
> + if (fw_entry->size < sizeof(struct dl_image)) {
> + print_npe(KERN_ERR, npe, "incomplete firmware file\n");
> + goto err;
> + }
> + image = (struct dl_image*)fw_entry->data;
> +
> +#if DEBUG_FW
> + print_npe(KERN_DEBUG, npe, "firmware: %08X %08X %08X (0x%X bytes)
> \n",
> + image->magic, image->id, image->size, image->size * 4);
> +#endif
> +
> + if (image->magic == swab32(FW_MAGIC)) { /* swapped file */
> + image->id = swab32(image->id);
> + image->size = swab32(image->size);
> + } else if (image->magic != FW_MAGIC) {
> + print_npe(KERN_ERR, npe, "bad firmware file magic: 0x%X\n",
> + image->magic);
> + goto err;
> + }
> + if ((image->size * 4 + sizeof(struct dl_image)) != fw_entry->size) {
> + print_npe(KERN_ERR, npe,
> + "inconsistent size of firmware file\n");
> + goto err;
> + }
> + if (((image->id >> 24) & 0xF /* NPE ID */) != npe->id) {
> + print_npe(KERN_ERR, npe, "firmware file NPE ID mismatch\n");
> + goto err;
> + }
> + if (image->magic == swab32(FW_MAGIC))
> + for (i = 0; i < image->size; i++)
> + image->data[i] = swab32(image->data[i]);
> +
> + if (!cpu_is_ixp46x() && ((image->id >> 28) & 0xF /* device ID */)) {
> + print_npe(KERN_INFO, npe, "IXP46x firmware ignored on "
> + "IXP42x\n");
> + goto err;
> + }
> +
> + if (npe_running(npe)) {
> + print_npe(KERN_INFO, npe, "unable to load firmware, NPE is "
> + "already running\n");
> + err = -EBUSY;
> + goto err;
> + }
> +#if 0
> + npe_stop(npe);
> + npe_reset(npe);
> +#endif
Debugging code? Can this go?
> + print_npe(KERN_INFO, npe, "firmware functionality 0x%X, "
> + "revision 0x%X:%X\n", (image->id >> 16) & 0xFF,
> + (image->id >> 8) & 0xFF, image->id & 0xFF);
> +
> + if (!cpu_is_ixp46x()) {
> + if (!npe->id)
> + instr_size = NPE_A_42X_INSTR_SIZE;
> + else
> + instr_size = NPE_B_AND_C_42X_INSTR_SIZE;
> + data_size = NPE_42X_DATA_SIZE;
> + } else {
> + instr_size = NPE_46X_INSTR_SIZE;
> + data_size = NPE_46X_DATA_SIZE;
> + }
> +
> + for (blocks = 0; blocks * sizeof(struct dl_block) / 4 < image->size;
> + blocks++)
> + if (image->blocks[blocks].type == FW_BLOCK_TYPE_EOF)
> + break;
> + if (blocks * sizeof(struct dl_block) / 4 >= image->size) {
> + print_npe(KERN_INFO, npe, "firmware EOF block marker not "
> + "found\n");
> + goto err;
> + }
> +
> +#if DEBUG_FW
> + print_npe(KERN_DEBUG, npe, "%i firmware blocks found\n", blocks);
> +#endif
> +
> + table_end = blocks * sizeof(struct dl_block) / 4 + 1 /* EOF
> marker */;
> + for (i = 0, blk = image->blocks; i < blocks; i++, blk++) {
> + if (blk->offset > image->size - sizeof(struct dl_codeblock) / 4
> + || blk->offset < table_end) {
> + print_npe(KERN_INFO, npe, "invalid offset 0x%X of "
> + "firmware block #%i\n", blk->offset, i);
> + goto err;
> + }
> +
> + cb = (struct dl_codeblock*)&image->data[blk->offset];
> + if (blk->type == FW_BLOCK_TYPE_INSTR) {
> + if (cb->npe_addr + cb->size > instr_size)
> + goto too_big;
> + cmd = CMD_WR_INS_MEM;
> + } else if (blk->type == FW_BLOCK_TYPE_DATA) {
> + if (cb->npe_addr + cb->size > data_size)
> + goto too_big;
> + cmd = CMD_WR_DATA_MEM;
> + } else {
> + print_npe(KERN_INFO, npe, "invalid firmware block #%i "
> + "type 0x%X\n", i, blk->type);
> + goto err;
> + }
> + if (blk->offset + sizeof(*cb) / 4 + cb->size > image->size) {
> + print_npe(KERN_INFO, npe, "firmware block #%i doesn't "
> + "fit in firmware image: type %c, start 0x%X,"
> + " length 0x%X\n", i,
> + blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
> + cb->npe_addr, cb->size);
> + goto err;
> + }
> +
> + for (j = 0; j < cb->size; j++)
> + npe_cmd_write(npe, cb->npe_addr + j, cmd, cb->data[j]);
> + }
> +
> + npe_start(npe);
> + if (!npe_running(npe))
> + print_npe(KERN_ERR, npe, "unable to start\n");
> + release_firmware(fw_entry);
> + return 0;
> +
> +too_big:
> + print_npe(KERN_INFO, npe, "firmware block #%i doesn't fit in NPE "
> + "memory: type %c, start 0x%X, length 0x%X\n", i,
> + blk->type == FW_BLOCK_TYPE_INSTR ? 'I' : 'D',
> + cb->npe_addr, cb->size);
> +err:
> + release_firmware(fw_entry);
> + return err;
> +}
[snip]
> +module_init(npe_init_module);
> +module_exit(npe_cleanup_module);
> +
> +MODULE_AUTHOR("Krzysztof Halasa");
> +MODULE_LICENSE("GPL v2");
> +
> +EXPORT_SYMBOL(npe_names);
> +EXPORT_SYMBOL(npe_running);
> +EXPORT_SYMBOL(npe_request);
> +EXPORT_SYMBOL(npe_release);
> +EXPORT_SYMBOL(npe_load_firmware);
> +EXPORT_SYMBOL(npe_send_message);
> +EXPORT_SYMBOL(npe_recv_message);
> +EXPORT_SYMBOL(npe_send_recv_message);
> diff --git a/include/asm-arm/arch-ixp4xx/npe.h b/include/asm-arm/
> arch-ixp4xx/npe.h
> new file mode 100644
> index 0000000..fd20bf5
> --- /dev/null
> +++ b/include/asm-arm/arch-ixp4xx/npe.h
> @@ -0,0 +1,41 @@
> +#ifndef __IXP4XX_NPE_H
> +#define __IXP4XX_NPE_H
> +
> +#include <linux/etherdevice.h>
> +#include <linux/kernel.h>
> +#include <asm/io.h>
> +
> +extern const char *npe_names[];
> +
> +struct npe_regs {
> + u32 exec_addr, exec_data, exec_status_cmd, exec_count;
> + u32 action_points[4];
> + u32 watchpoint_fifo, watch_count;
> + u32 profile_count;
> + u32 messaging_status, messaging_control;
> + u32 mailbox_status, /*messaging_*/ in_out_fifo;
> +};
> +
> +struct npe {
> + struct resource *mem_res;
> + struct npe_regs __iomem *regs;
> + u32 regs_phys;
> + int id;
> + int valid;
> +};
> +
> +
> +static inline const char *npe_name(struct npe *npe)
> +{
> + return npe_names[npe->id];
> +}
> +
> +int npe_running(struct npe *npe);
> +int npe_send_message(struct npe *npe, const void *msg, const char
> *what);
> +int npe_recv_message(struct npe *npe, void *msg, const char *what);
> +int npe_send_recv_message(struct npe *npe, void *msg, const char
> *what);
> +int npe_load_firmware(struct npe *npe, const char *name, struct
> device *dev);
> +struct npe *npe_request(int id);
> +void npe_release(struct npe *npe);
> +
> +#endif /* __IXP4XX_NPE_H */
It may be a matter of taste, but could some of the many definitions
at the top of ixp4xx_npe.c go in the header file here?
Michael-Luke
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 0:46 ` Krzysztof Halasa
@ 2007-05-08 7:05 ` Michael-Luke Jones
0 siblings, 0 replies; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-08 7:05 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
On 8 May 2007, at 01:46, Krzysztof Halasa wrote:
> Adds a driver for built-in IXP4xx hardware Queue Manager.
>
> Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
[snip]
> diff --git a/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c b/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c
> new file mode 100644
> index 0000000..b9e9bd6
> --- /dev/null
> +++ b/arch/arm/mach-ixp4xx/ixp4xx_qmgr.c
Already in mach-ixp4xx, so can just be called qmgr.c
> @@ -0,0 +1,273 @@
> +/*
> + * Intel IXP4xx Queue Manager driver for Linux
> + *
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + */
> +
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <asm/io.h>
> +#include <asm/arch/qmgr.h>
> +
> +#define DEBUG 0
> +
> +struct qmgr_regs __iomem *qmgr_regs;
> +static struct resource *mem_res;
> +static spinlock_t qmgr_lock;
> +static u32 used_sram_bitmap[4]; /* 128 16-dword pages */
> +static void (*irq_handlers[HALF_QUEUES])(void *pdev);
> +static void *irq_pdevs[HALF_QUEUES];
> +
> +void qmgr_set_irq(unsigned int queue, int src,
> + void (*handler)(void *pdev), void *pdev)
> +{
> + u32 __iomem *reg = &qmgr_regs->irqsrc[queue / 8]; /* 8 queues / u32 */
> + int bit = (queue % 8) * 4; /* 3 bits + 1 reserved bit per queue */
> + unsigned long flags;
> +
> + src &= 7;
> + spin_lock_irqsave(&qmgr_lock, flags);
> + __raw_writel((__raw_readl(reg) & ~(7 << bit)) | (src << bit), reg);
> + irq_handlers[queue] = handler;
> + irq_pdevs[queue] = pdev;
> + spin_unlock_irqrestore(&qmgr_lock, flags);
> +}
> +
> +
> +static irqreturn_t qmgr_irq1(int irq, void *pdev)
> +{
> + int i;
> + u32 val = __raw_readl(&qmgr_regs->irqstat[0]);
> + __raw_writel(val, &qmgr_regs->irqstat[0]); /* ACK */
> +
> + for (i = 0; i < HALF_QUEUES; i++)
> + if (val & (1 << i))
> + irq_handlers[i](irq_pdevs[i]);
> +
> + return val ? IRQ_HANDLED : 0;
> +}
> +
> +
> +void qmgr_enable_irq(unsigned int queue)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&qmgr_lock, flags);
> + __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) | (1 << queue),
> + &qmgr_regs->irqen[0]);
> + spin_unlock_irqrestore(&qmgr_lock, flags);
> +}
> +
> +void qmgr_disable_irq(unsigned int queue)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&qmgr_lock, flags);
> + __raw_writel(__raw_readl(&qmgr_regs->irqen[0]) & ~(1 << queue),
> + &qmgr_regs->irqen[0]);
> + spin_unlock_irqrestore(&qmgr_lock, flags);
> +}
> +
> +static inline void shift_mask(u32 *mask)
> +{
> + mask[3] = mask[3] << 1 | mask[2] >> 31;
> + mask[2] = mask[2] << 1 | mask[1] >> 31;
> + mask[1] = mask[1] << 1 | mask[0] >> 31;
> + mask[0] <<= 1;
> +}
> +
> +int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
> + unsigned int nearly_empty_watermark,
> + unsigned int nearly_full_watermark)
> +{
> + u32 cfg, addr = 0, mask[4]; /* in 16-dwords */
> + int err;
> +
> + if (queue >= HALF_QUEUES)
> + return -ERANGE;
> +
> + if ((nearly_empty_watermark | nearly_full_watermark) & ~7)
> + return -EINVAL;
> +
> + switch (len) {
> + case 16:
> + cfg = 0 << 24;
> + mask[0] = 0x1;
> + break;
> + case 32:
> + cfg = 1 << 24;
> + mask[0] = 0x3;
> + break;
> + case 64:
> + cfg = 2 << 24;
> + mask[0] = 0xF;
> + break;
> + case 128:
> + cfg = 3 << 24;
> + mask[0] = 0xFF;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + cfg |= nearly_empty_watermark << 26;
> + cfg |= nearly_full_watermark << 29;
> + len /= 16; /* in 16-dwords: 1, 2, 4 or 8 */
> + mask[1] = mask[2] = mask[3] = 0;
> +
> + if (!try_module_get(THIS_MODULE))
> + return -ENODEV;
> +
> + spin_lock_irq(&qmgr_lock);
> + if (__raw_readl(&qmgr_regs->sram[queue])) {
> + err = -EBUSY;
> + goto err;
> + }
> +
> + while (1) {
> + if (!(used_sram_bitmap[0] & mask[0]) &&
> + !(used_sram_bitmap[1] & mask[1]) &&
> + !(used_sram_bitmap[2] & mask[2]) &&
> + !(used_sram_bitmap[3] & mask[3]))
> + break; /* found free space */
> +
> + addr++;
> + shift_mask(mask);
> + if (addr + len > ARRAY_SIZE(qmgr_regs->sram)) {
> + printk(KERN_ERR "qmgr: no free SRAM space for"
> + " queue %i\n", queue);
> + err = -ENOMEM;
> + goto err;
> + }
> + }
> +
> + used_sram_bitmap[0] |= mask[0];
> + used_sram_bitmap[1] |= mask[1];
> + used_sram_bitmap[2] |= mask[2];
> + used_sram_bitmap[3] |= mask[3];
> + __raw_writel(cfg | (addr << 14), &qmgr_regs->sram[queue]);
> + spin_unlock_irq(&qmgr_lock);
> +
> +#if DEBUG
> + printk(KERN_DEBUG "qmgr: requested queue %i, addr = 0x%02X\n",
> + queue, addr);
> +#endif
> + return 0;
> +
> +err:
> + spin_unlock_irq(&qmgr_lock);
> + module_put(THIS_MODULE);
> + return err;
> +}
> +
> +void qmgr_release_queue(unsigned int queue)
> +{
> + u32 cfg, addr, mask[4];
> +
> + BUG_ON(queue >= HALF_QUEUES); /* not in valid range */
> +
> + spin_lock_irq(&qmgr_lock);
> + cfg = __raw_readl(&qmgr_regs->sram[queue]);
> + addr = (cfg >> 14) & 0xFF;
> +
> + BUG_ON(!addr); /* not requested */
> +
> + switch ((cfg >> 24) & 3) {
> + case 0: mask[0] = 0x1; break;
> + case 1: mask[0] = 0x3; break;
> + case 2: mask[0] = 0xF; break;
> + case 3: mask[0] = 0xFF; break;
> + }
> +
> + while (addr--)
> + shift_mask(mask);
> +
> + __raw_writel(0, &qmgr_regs->sram[queue]);
> +
> + used_sram_bitmap[0] &= ~mask[0];
> + used_sram_bitmap[1] &= ~mask[1];
> + used_sram_bitmap[2] &= ~mask[2];
> + used_sram_bitmap[3] &= ~mask[3];
> + irq_handlers[queue] = NULL; /* catch IRQ bugs */
> + spin_unlock_irq(&qmgr_lock);
> +
> + module_put(THIS_MODULE);
> +#if DEBUG
> + printk(KERN_DEBUG "qmgr: released queue %i\n", queue);
> +#endif
> +}
> +
> +static int qmgr_init(void)
> +{
> + int i, err;
> + mem_res = request_mem_region(IXP4XX_QMGR_BASE_PHYS,
> + IXP4XX_QMGR_REGION_SIZE,
> + "IXP4xx Queue Manager");
> + if (mem_res == NULL)
> + return -EBUSY;
> +
> + qmgr_regs = ioremap(IXP4XX_QMGR_BASE_PHYS, IXP4XX_QMGR_REGION_SIZE);
> + if (qmgr_regs == NULL) {
> + err = -ENOMEM;
> + goto error_map;
> + }
> +
> + /* reset qmgr registers */
> + for (i = 0; i < 4; i++) {
> + __raw_writel(0x33333333, &qmgr_regs->stat1[i]);
> + __raw_writel(0, &qmgr_regs->irqsrc[i]);
> + }
> + for (i = 0; i < 2; i++) {
> + __raw_writel(0, &qmgr_regs->stat2[i]);
> + __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[i]); /* clear */
> + __raw_writel(0, &qmgr_regs->irqen[i]);
> + }
> +
> + for (i = 0; i < QUEUES; i++)
> + __raw_writel(0, &qmgr_regs->sram[i]);
> +
> + err = request_irq(IRQ_IXP4XX_QM1, qmgr_irq1, 0,
> + "IXP4xx Queue Manager", NULL);
> + if (err) {
> + printk(KERN_ERR "qmgr: failed to request IRQ%i\n",
> + IRQ_IXP4XX_QM1);
> + goto error_irq;
> + }
> +
> + used_sram_bitmap[0] = 0xF; /* 4 first pages reserved for config */
> + spin_lock_init(&qmgr_lock);
> +
> + printk(KERN_INFO "IXP4xx Queue Manager initialized.\n");
> + return 0;
> +
> +error_irq:
> + iounmap(qmgr_regs);
> +error_map:
> + release_resource(mem_res);
> + return err;
> +}
> +
> +static void qmgr_remove(void)
> +{
> + free_irq(IRQ_IXP4XX_QM1, NULL);
> + synchronize_irq(IRQ_IXP4XX_QM1);
> + iounmap(qmgr_regs);
> + release_resource(mem_res);
> +}
> +
> +module_init(qmgr_init);
> +module_exit(qmgr_remove);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Krzysztof Halasa");
> +
> +EXPORT_SYMBOL(qmgr_regs);
> +EXPORT_SYMBOL(qmgr_set_irq);
> +EXPORT_SYMBOL(qmgr_enable_irq);
> +EXPORT_SYMBOL(qmgr_disable_irq);
> +EXPORT_SYMBOL(qmgr_request_queue);
> +EXPORT_SYMBOL(qmgr_release_queue);
> diff --git a/include/asm-arm/arch-ixp4xx/qmgr.h b/include/asm-arm/arch-ixp4xx/qmgr.h
> new file mode 100644
> index 0000000..d03464a
> --- /dev/null
> +++ b/include/asm-arm/arch-ixp4xx/qmgr.h
> @@ -0,0 +1,124 @@
> +/*
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + */
> +
> +#ifndef IXP4XX_QMGR_H
> +#define IXP4XX_QMGR_H
> +
> +#include <linux/kernel.h>
> +#include <asm/io.h>
> +
> +#define HALF_QUEUES 32
> +#define QUEUES 64 /* only 32 lower queues currently supported */
> +#define MAX_QUEUE_LENGTH 4 /* in dwords */
> +
> +#define QUEUE_STAT1_EMPTY 1 /* queue status bits */
> +#define QUEUE_STAT1_NEARLY_EMPTY 2
> +#define QUEUE_STAT1_NEARLY_FULL 4
> +#define QUEUE_STAT1_FULL 8
> +#define QUEUE_STAT2_UNDERFLOW 1
> +#define QUEUE_STAT2_OVERFLOW 2
> +
> +#define QUEUE_WATERMARK_0_ENTRIES 0
> +#define QUEUE_WATERMARK_1_ENTRY 1
> +#define QUEUE_WATERMARK_2_ENTRIES 2
> +#define QUEUE_WATERMARK_4_ENTRIES 3
> +#define QUEUE_WATERMARK_8_ENTRIES 4
> +#define QUEUE_WATERMARK_16_ENTRIES 5
> +#define QUEUE_WATERMARK_32_ENTRIES 6
> +#define QUEUE_WATERMARK_64_ENTRIES 7
> +
> +/* queue interrupt request conditions */
> +#define QUEUE_IRQ_SRC_EMPTY 0
> +#define QUEUE_IRQ_SRC_NEARLY_EMPTY 1
> +#define QUEUE_IRQ_SRC_NEARLY_FULL 2
> +#define QUEUE_IRQ_SRC_FULL 3
> +#define QUEUE_IRQ_SRC_NOT_EMPTY 4
> +#define QUEUE_IRQ_SRC_NOT_NEARLY_EMPTY 5
> +#define QUEUE_IRQ_SRC_NOT_NEARLY_FULL 6
> +#define QUEUE_IRQ_SRC_NOT_FULL 7
Here, unlike in ixp4xx_npe.c, the defines are in qmgr.h - that seems a bit
more natural.
> +struct qmgr_regs {
> + u32 acc[QUEUES][MAX_QUEUE_LENGTH]; /* 0x000 - 0x3FF */
> + u32 stat1[4]; /* 0x400 - 0x40F */
> + u32 stat2[2]; /* 0x410 - 0x417 */
> + u32 statne_h; /* 0x418 - queue nearly empty */
> + u32 statf_h; /* 0x41C - queue full */
> + u32 irqsrc[4]; /* 0x420 - 0x42F IRQ source */
> + u32 irqen[2]; /* 0x430 - 0x437 IRQ enabled */
> + u32 irqstat[2]; /* 0x438 - 0x43F - IRQ access only */
> + u32 reserved[1776];
> + u32 sram[2048]; /* 0x2000 - 0x3FFF - config and buffer */
> +};
> +
> +extern struct qmgr_regs __iomem *qmgr_regs;
> +
> +void qmgr_set_irq(unsigned int queue, int src,
> + void (*handler)(void *pdev), void *pdev);
> +void qmgr_enable_irq(unsigned int queue);
> +void qmgr_disable_irq(unsigned int queue);
> +
> +/* request_ and release_queue() must be called from non-IRQ context */
> +int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
> + unsigned int nearly_empty_watermark,
> + unsigned int nearly_full_watermark);
> +void qmgr_release_queue(unsigned int queue);
> +
> +
> +static inline void qmgr_put_entry(unsigned int queue, u32 val)
> +{
> + __raw_writel(val, &qmgr_regs->acc[queue][0]);
> +}
> +
> +static inline u32 qmgr_get_entry(unsigned int queue)
> +{
> + return __raw_readl(&qmgr_regs->acc[queue][0]);
> +}
> +
> +static inline int qmgr_get_stat1(unsigned int queue)
> +{
> + return (__raw_readl(&qmgr_regs->stat1[queue >> 3])
> + >> ((queue & 7) << 2)) & 0xF;
> +}
> +
> +static inline int qmgr_get_stat2(unsigned int queue)
> +{
> + return (__raw_readl(&qmgr_regs->stat2[queue >> 4])
> + >> ((queue & 0xF) << 1)) & 0x3;
> +}
> +
> +static inline int qmgr_stat_empty(unsigned int queue)
> +{
> + return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_EMPTY);
> +}
> +
> +static inline int qmgr_stat_nearly_empty(unsigned int queue)
> +{
> + return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_NEARLY_EMPTY);
> +}
> +
> +static inline int qmgr_stat_nearly_full(unsigned int queue)
> +{
> + return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_NEARLY_FULL);
> +}
> +
> +static inline int qmgr_stat_full(unsigned int queue)
> +{
> + return !!(qmgr_get_stat1(queue) & QUEUE_STAT1_FULL);
> +}
> +
> +static inline int qmgr_stat_underflow(unsigned int queue)
> +{
> + return !!(qmgr_get_stat2(queue) & QUEUE_STAT2_UNDERFLOW);
> +}
> +
> +static inline int qmgr_stat_overflow(unsigned int queue)
> +{
> + return !!(qmgr_get_stat2(queue) & QUEUE_STAT2_OVERFLOW);
> +}
> +
> +#endif
Great work,
Michael-Luke
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-08 1:19 ` Krzysztof Halasa
@ 2007-05-08 7:22 ` Michael-Luke Jones
-1 siblings, 0 replies; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-08 7:22 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
On 8 May 2007, at 02:19, Krzysztof Halasa wrote:
> Adds a driver for built-in IXP4xx Ethernet MAC and HSS ports
>
> Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
>
> diff --git a/arch/arm/mach-ixp4xx/ixdp425-setup.c b/arch/arm/mach-ixp4xx/ixdp425-setup.c
> index ec4f079..f20d39d 100644
> --- a/arch/arm/mach-ixp4xx/ixdp425-setup.c
> +++ b/arch/arm/mach-ixp4xx/ixdp425-setup.c
> @@ -101,10 +101,35 @@ static struct platform_device ixdp425_uart = {
> .resource = ixdp425_uart_resources
> };
>
> +/* Built-in 10/100 Ethernet MAC interfaces */
> +static struct mac_plat_info ixdp425_plat_mac[] = {
> + {
> + .phy = 0,
> + .rxq = 3,
> + }, {
> + .phy = 1,
> + .rxq = 4,
> + }
> +};
> +
> +static struct platform_device ixdp425_mac[] = {
> + {
> + .name = "ixp4xx_eth",
> + .id = IXP4XX_ETH_NPEB,
> + .dev.platform_data = ixdp425_plat_mac,
> + }, {
> + .name = "ixp4xx_eth",
> + .id = IXP4XX_ETH_NPEC,
> + .dev.platform_data = ixdp425_plat_mac + 1,
> + }
> +};
> +
> static struct platform_device *ixdp425_devices[] __initdata = {
> &ixdp425_i2c_controller,
> &ixdp425_flash,
> - &ixdp425_uart
> + &ixdp425_uart,
> + &ixdp425_mac[0],
> + &ixdp425_mac[1],
> };
>
> static void __init ixdp425_init(void)
A final submission should probably have this platform data separated
from the net driver and sent upstream via Russell's patch tracking
system rather than netdev.
> diff --git a/drivers/net/arm/Kconfig b/drivers/net/arm/Kconfig
> index 678e4f4..5e2acb6 100644
> --- a/drivers/net/arm/Kconfig
> +++ b/drivers/net/arm/Kconfig
> @@ -46,3 +46,13 @@ config EP93XX_ETH
> help
> This is a driver for the ethernet hardware included in EP93xx CPUs.
> Say Y if you are building a kernel for EP93xx based devices.
> +
> +config IXP4XX_ETH
> + tristate "IXP4xx Ethernet support"
> + depends on NET_ETHERNET && ARM && ARCH_IXP4XX
> + select IXP4XX_NPE
> + select IXP4XX_QMGR
> + select MII
> + help
> + Say Y here if you want to use built-in Ethernet ports
> + on IXP4xx processor.
> diff --git a/drivers/net/arm/Makefile b/drivers/net/arm/Makefile
> index a4c8682..7c812ac 100644
> --- a/drivers/net/arm/Makefile
> +++ b/drivers/net/arm/Makefile
> @@ -9,3 +9,4 @@ obj-$(CONFIG_ARM_ETHER3) += ether3.o
> obj-$(CONFIG_ARM_ETHER1) += ether1.o
> obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
> obj-$(CONFIG_EP93XX_ETH) += ep93xx_eth.o
> +obj-$(CONFIG_IXP4XX_ETH) += ixp4xx_eth.o
> diff --git a/drivers/net/arm/ixp4xx_eth.c b/drivers/net/arm/ixp4xx_eth.c
> new file mode 100644
> index 0000000..dcea6e5
> --- /dev/null
> +++ b/drivers/net/arm/ixp4xx_eth.c
> @@ -0,0 +1,1002 @@
> +/*
> + * Intel IXP4xx Ethernet driver for Linux
> + *
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + *
> + * Ethernet port config (0x00 is not present on IXP42X):
> + *
> + * logical port 0x00 0x10 0x20
> + * NPE 0 (NPE-A) 1 (NPE-B) 2 (NPE-C)
> + * physical PortId 2 0 1
> + * TX queue 23 24 25
> + * RX-free queue 26 27 28
> + * TX-done queue is always 31, RX queue is configurable
> + */
> +
> +#include <linux/delay.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dmapool.h>
> +#include <linux/kernel.h>
> +#include <linux/mii.h>
> +#include <linux/platform_device.h>
> +#include <asm/io.h>
> +#include <asm/arch/npe.h>
> +#include <asm/arch/qmgr.h>
> +
> +#ifndef __ARMEB__
> +#warning Little endian mode not supported
> +#endif
This has gone from error to warning - fair play, but if you are planning
to put this upstream this cycle (anything's possible :) ) you'll want
to mark this driver ARMEB-only (or broken on little-endian) in Kconfig, please.
Personally I'd like LE ethernet tested and working before we push.
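One way to spell that out (a sketch only - the `BROKEN` dependency idiom as used elsewhere in the kernel, symbol names taken from the quoted Kconfig hunk):

```kconfig
config IXP4XX_ETH
	tristate "IXP4xx Ethernet support"
	# big-endian only for now; little-endian operation is untested
	depends on NET_ETHERNET && ARM && ARCH_IXP4XX
	depends on ARMEB || BROKEN
	select IXP4XX_NPE
	select IXP4XX_QMGR
	select MII
	help
	  Say Y here if you want to use the built-in Ethernet ports
	  on the IXP4xx processor.
```

The `depends on ARMEB || BROKEN` line keeps the driver out of randconfig/allmodconfig builds on LE until it is fixed, without hiding it from ARMEB users.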
> +
> +#define DEBUG_QUEUES 0
> +#define DEBUG_RX 0
> +#define DEBUG_TX 0
> +#define DEBUG_PKT_BYTES 0
> +#define DEBUG_MDIO 0
> +
> +#define DRV_NAME "ixp4xx_eth"
> +#define DRV_VERSION "0.04"
> +
> +#define TX_QUEUE_LEN 16 /* dwords */
> +#define PKT_DESCS 64 /* also length of queues: TX-done, RX-ready, RX */
> +
> +#define POOL_ALLOC_SIZE (sizeof(struct desc) * (PKT_DESCS))
> +#define REGS_SIZE 0x1000
> +#define MAX_MRU 1536
> +
> +#define MDIO_INTERVAL (3 * HZ)
> +#define MAX_MDIO_RETRIES 100 /* microseconds, typically 30 cycles */
> +
> +#define NPE_ID(port) ((port)->id >> 4)
> +#define PHYSICAL_ID(port) ((NPE_ID(port) + 2) % 3)
> +#define TX_QUEUE(plat) (NPE_ID(port) + 23)
> +#define RXFREE_QUEUE(plat) (NPE_ID(port) + 26)
> +#define TXDONE_QUEUE 31
> +
> +/* TX Control Registers */
> +#define TX_CNTRL0_TX_EN BIT(0)
> +#define TX_CNTRL0_HALFDUPLEX BIT(1)
> +#define TX_CNTRL0_RETRY BIT(2)
> +#define TX_CNTRL0_PAD_EN BIT(3)
> +#define TX_CNTRL0_APPEND_FCS BIT(4)
> +#define TX_CNTRL0_2DEFER BIT(5)
> +#define TX_CNTRL0_RMII BIT(6) /* reduced MII */
> +#define TX_CNTRL1_RETRIES 0x0F /* 4 bits */
> +
> +/* RX Control Registers */
> +#define RX_CNTRL0_RX_EN BIT(0)
> +#define RX_CNTRL0_PADSTRIP_EN BIT(1)
> +#define RX_CNTRL0_SEND_FCS BIT(2)
> +#define RX_CNTRL0_PAUSE_EN BIT(3)
> +#define RX_CNTRL0_LOOP_EN BIT(4)
> +#define RX_CNTRL0_ADDR_FLTR_EN BIT(5)
> +#define RX_CNTRL0_RX_RUNT_EN BIT(6)
> +#define RX_CNTRL0_BCAST_DIS BIT(7)
> +#define RX_CNTRL1_DEFER_EN BIT(0)
> +
> +/* Core Control Register */
> +#define CORE_RESET BIT(0)
> +#define CORE_RX_FIFO_FLUSH BIT(1)
> +#define CORE_TX_FIFO_FLUSH BIT(2)
> +#define CORE_SEND_JAM BIT(3)
> +#define CORE_MDC_EN BIT(4) /* NPE-B ETH-0 only */
> +
> +/* Definitions for MII access routines */
> +#define MII_CMD_GO BIT(31)
> +#define MII_CMD_WRITE BIT(26)
> +#define MII_STAT_READ_FAILED BIT(31)
> +
> +/* NPE message codes */
> +#define NPE_GETSTATUS 0x00
> +#define NPE_EDB_SETPORTADDRESS 0x01
> +#define NPE_EDB_GETMACADDRESSDATABASE 0x02
> +#define NPE_EDB_SETMACADDRESSSDATABASE 0x03
> +#define NPE_GETSTATS 0x04
> +#define NPE_RESETSTATS 0x05
> +#define NPE_SETMAXFRAMELENGTHS 0x06
> +#define NPE_VLAN_SETRXTAGMODE 0x07
> +#define NPE_VLAN_SETDEFAULTRXVID 0x08
> +#define NPE_VLAN_SETPORTVLANTABLEENTRY 0x09
> +#define NPE_VLAN_SETPORTVLANTABLERANGE 0x0A
> +#define NPE_VLAN_SETRXQOSENTRY 0x0B
> +#define NPE_VLAN_SETPORTIDEXTRACTIONMODE 0x0C
> +#define NPE_STP_SETBLOCKINGSTATE 0x0D
> +#define NPE_FW_SETFIREWALLMODE 0x0E
> +#define NPE_PC_SETFRAMECONTROLDURATIONID 0x0F
> +#define NPE_PC_SETAPMACTABLE 0x11
> +#define NPE_SETLOOPBACK_MODE 0x12
> +#define NPE_PC_SETBSSIDTABLE 0x13
> +#define NPE_ADDRESS_FILTER_CONFIG 0x14
> +#define NPE_APPENDFCSCONFIG 0x15
> +#define NPE_NOTIFY_MAC_RECOVERY_DONE 0x16
> +#define NPE_MAC_RECOVERY_START 0x17
> +
> +
Two returns? Defines make sense in-file here :)
> +struct eth_regs {
> + u32 tx_control[2], __res1[2]; /* 000 */
> + u32 rx_control[2], __res2[2]; /* 010 */
> + u32 random_seed, __res3[3]; /* 020 */
> + u32 partial_empty_threshold, __res4; /* 030 */
> + u32 partial_full_threshold, __res5; /* 038 */
> + u32 tx_start_bytes, __res6[3]; /* 040 */
> + u32 tx_deferral, rx_deferral,__res7[2]; /* 050 */
> + u32 tx_2part_deferral[2], __res8[2]; /* 060 */
> + u32 slot_time, __res9[3]; /* 070 */
> + u32 mdio_command[4]; /* 080 */
> + u32 mdio_status[4]; /* 090 */
> + u32 mcast_mask[6], __res10[2]; /* 0A0 */
> + u32 mcast_addr[6], __res11[2]; /* 0C0 */
> + u32 int_clock_threshold, __res12[3]; /* 0E0 */
> + u32 hw_addr[6], __res13[61]; /* 0F0 */
> + u32 core_control; /* 1FC */
> +};
> +
> +struct port {
> + struct resource *mem_res;
> + struct eth_regs __iomem *regs;
> + struct npe *npe;
> + struct net_device *netdev;
> + struct net_device_stats stat;
> + struct mii_if_info mii;
> + struct delayed_work mdio_thread;
> + struct mac_plat_info *plat;
> + struct sk_buff *rx_skb_tab[PKT_DESCS];
> + struct desc *rx_desc_tab; /* coherent */
> + int id; /* logical port ID */
> + u32 rx_desc_tab_phys;
> + u32 msg_enable;
> +};
> +
> +/* NPE message structure */
> +struct msg {
> + union {
> + struct {
> + u8 cmd, eth_id, mac[ETH_ALEN];
> + };
> + struct {
> + u8 cmd, eth_id, __byte2, byte3;
> + u8 __byte4, byte5, __byte6, byte7;
> + };
> + struct {
> + u8 cmd, eth_id, __b2, byte3;
> + u32 data32;
> + };
> + };
> +};
> +
> +/* Ethernet packet descriptor */
> +struct desc {
> + u32 next; /* pointer to next buffer, unused */
> + u16 buf_len; /* buffer length */
> + u16 pkt_len; /* packet length */
> + u32 data; /* pointer to data buffer in RAM */
> + u8 dest_id;
> + u8 src_id;
> + u16 flags;
> + u8 qos;
> + u8 padlen;
> + u16 vlan_tci;
> + u8 dest_mac[ETH_ALEN];
> + u8 src_mac[ETH_ALEN];
> +};
> +
> +
> +#define rx_desc_phys(port, n) ((port)->rx_desc_tab_phys + \
> + (n) * sizeof(struct desc))
> +#define tx_desc_phys(n) (tx_desc_tab_phys + (n) * sizeof(struct desc))
> +
> +static spinlock_t mdio_lock;
> +static struct eth_regs __iomem *mdio_regs; /* mdio command and status only */
> +static struct npe *mdio_npe;
> +static int ports_open;
> +static struct dma_pool *dma_pool;
> +static struct sk_buff *tx_skb_tab[PKT_DESCS];
> +static struct desc *tx_desc_tab; /* coherent */
> +static u32 tx_desc_tab_phys;
> +
> +
> +static inline void set_regbits(u32 bits, u32 __iomem *reg)
> +{
> + __raw_writel(__raw_readl(reg) | bits, reg);
> +}
> +static inline void clr_regbits(u32 bits, u32 __iomem *reg)
> +{
> + __raw_writel(__raw_readl(reg) & ~bits, reg);
> +}
> +
> +
> +static u16 mdio_cmd(struct net_device *dev, int phy_id, int location,
> + int write, u16 cmd)
> +{
> + int cycles = 0;
> +
> + if (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80) {
> + printk("%s: MII not ready to transmit\n", dev->name);
> + return 0; /* not ready to transmit */
> + }
> +
> + if (write) {
> + __raw_writel(cmd & 0xFF, &mdio_regs->mdio_command[0]);
> + __raw_writel(cmd >> 8, &mdio_regs->mdio_command[1]);
> + }
> + __raw_writel(((phy_id << 5) | location) & 0xFF,
> + &mdio_regs->mdio_command[2]);
> + __raw_writel((phy_id >> 3) | (write << 2) | 0x80 /* GO */,
> + &mdio_regs->mdio_command[3]);
> +
> + while ((cycles < MAX_MDIO_RETRIES) &&
> + (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80)) {
> + udelay(1);
> + cycles++;
> + }
> +
> + if (cycles == MAX_MDIO_RETRIES) {
> + printk("%s: MII write failed\n", dev->name);
> + return 0;
> + }
> +
> +#if DEBUG_MDIO
> + printk(KERN_DEBUG "mdio_cmd() took %i cycles\n", cycles);
> +#endif
> +
> + if (write)
> + return 0;
> +
> + if (__raw_readl(&mdio_regs->mdio_status[3]) & 0x80) {
> + printk("%s: MII read failed\n", dev->name);
> + return 0;
> + }
> +
> + return (__raw_readl(&mdio_regs->mdio_status[0]) & 0xFF) |
> + (__raw_readl(&mdio_regs->mdio_status[1]) << 8);
> +}
> +
> +static int mdio_read(struct net_device *dev, int phy_id, int location)
> +{
> + unsigned long flags;
> + u16 val;
> +
> + spin_lock_irqsave(&mdio_lock, flags);
> + val = mdio_cmd(dev, phy_id, location, 0, 0);
> + spin_unlock_irqrestore(&mdio_lock, flags);
> + return val;
> +}
> +
> +static void mdio_write(struct net_device *dev, int phy_id, int location,
> + int val)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&mdio_lock, flags);
> + mdio_cmd(dev, phy_id, location, 1, val);
> + spin_unlock_irqrestore(&mdio_lock, flags);
> +}
> +
> +static void eth_set_duplex(struct port *port)
> +{
> + if (port->mii.full_duplex)
> + clr_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
> + else
> + set_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
> +}
> +
> +
> +static void mdio_thread(struct work_struct *work)
> +{
> + struct port *port = container_of(work, struct port, mdio_thread.work);
> +
> + if (mii_check_media(&port->mii, 1, 0))
> + eth_set_duplex(port);
> + schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
> +}
> +
> +
> +static inline void debug_skb(const char *func, struct sk_buff *skb)
> +{
> +#if DEBUG_PKT_BYTES
> + int i;
> +
> + printk(KERN_DEBUG "%s(%i): ", func, skb->len);
> + for (i = 0; i < skb->len; i++) {
> + if (i >= DEBUG_PKT_BYTES)
> + break;
> + printk("%s%02X",
> + ((i == 6) || (i == 12) || (i >= 14)) ? " " : "",
> + skb->data[i]);
> + }
> + printk("\n");
> +#endif
> +}
> +
> +
> +static inline void debug_desc(unsigned int queue, u32 desc_phys,
> + struct desc *desc, int is_get)
> +{
> +#if DEBUG_QUEUES
> + const char *op = is_get ? "->" : "<-";
> +
> + if (!desc_phys) {
> + printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
> + return;
> + }
> + printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %2X < %2X %4X %X"
> + " %X %X %02X%02X%02X%02X%02X%02X < %02X%02X%02X%02X%02X%02X\n",
> + queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
> + desc->data, desc->dest_id, desc->src_id,
> + desc->flags, desc->qos,
> + desc->padlen, desc->vlan_tci,
> + desc->dest_mac[0], desc->dest_mac[1],
> + desc->dest_mac[2], desc->dest_mac[3],
> + desc->dest_mac[4], desc->dest_mac[5],
> + desc->src_mac[0], desc->src_mac[1],
> + desc->src_mac[2], desc->src_mac[3],
> + desc->src_mac[4], desc->src_mac[5]);
> +#endif
> +}
> +
> +static inline int queue_get_desc(unsigned int queue, struct port *port,
> + int is_tx)
> +{
> + u32 phys, tab_phys, n_desc;
> + struct desc *tab;
> +
> + if (!(phys = qmgr_get_entry(queue))) {
> + debug_desc(queue, phys, NULL, 1);
> + return -1;
> + }
> +
> + phys &= ~0x1F; /* mask out non-address bits */
> + tab_phys = is_tx ? tx_desc_phys(0) : rx_desc_phys(port, 0);
> + tab = is_tx ? tx_desc_tab : port->rx_desc_tab;
> + n_desc = (phys - tab_phys) / sizeof(struct desc);
> + BUG_ON(n_desc >= PKT_DESCS);
> +
> + debug_desc(queue, phys, &tab[n_desc], 1);
> + BUG_ON(tab[n_desc].next);
> + return n_desc;
> +}
> +
> +static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
> + struct desc *desc)
> +{
> + debug_desc(queue, desc_phys, desc, 0);
> + BUG_ON(desc_phys & 0x1F);
> + qmgr_put_entry(queue, desc_phys);
> +}
> +
> +
> +static void eth_rx_irq(void *pdev)
> +{
> + struct net_device *dev = pdev;
> + struct port *port = netdev_priv(dev);
> +
> +#if DEBUG_RX
> + printk(KERN_DEBUG "eth_rx_irq() start\n");
> +#endif
> + qmgr_disable_irq(port->plat->rxq);
> + netif_rx_schedule(dev);
> +}
> +
> +static int eth_poll(struct net_device *dev, int *budget)
> +{
> + struct port *port = netdev_priv(dev);
> + unsigned int queue = port->plat->rxq;
> + int quota = dev->quota, received = 0;
> +
> +#if DEBUG_RX
> + printk(KERN_DEBUG "eth_poll() start\n");
> +#endif
> + while (quota) {
> + struct sk_buff *old_skb, *new_skb;
> + struct desc *desc;
> + u32 data;
> + int n = queue_get_desc(queue, port, 0);
> + if (n < 0) { /* No packets received */
> + dev->quota -= received;
> + *budget -= received;
> + received = 0;
> + netif_rx_complete(dev);
> + qmgr_enable_irq(queue);
> + if (!qmgr_stat_empty(queue) &&
> + netif_rx_reschedule(dev, 0)) {
> + qmgr_disable_irq(queue);
> + continue;
> + }
> + return 0; /* all work done */
> + }
> +
> + desc = &port->rx_desc_tab[n];
> +
> + if ((new_skb = netdev_alloc_skb(dev, MAX_MRU)) != NULL) {
> +#if 0
> + skb_reserve(new_skb, 2); /* FIXME */
> +#endif
> + data = dma_map_single(&dev->dev, new_skb->data,
> + MAX_MRU, DMA_FROM_DEVICE);
> + }
> +
> + if (!new_skb || dma_mapping_error(data)) {
> + if (new_skb)
> + dev_kfree_skb(new_skb);
> + port->stat.rx_dropped++;
> + /* put the desc back on RX-ready queue */
> + desc->buf_len = MAX_MRU;
> + desc->pkt_len = 0;
> + queue_put_desc(RXFREE_QUEUE(port->plat),
> + rx_desc_phys(port, n), desc);
> + BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
> + continue;
> + }
> +
> + /* process received skb */
> + old_skb = port->rx_skb_tab[n];
> + dma_unmap_single(&dev->dev, desc->data,
> + MAX_MRU, DMA_FROM_DEVICE);
> + skb_put(old_skb, desc->pkt_len);
> +
> + debug_skb("eth_poll", old_skb);
> +
> + old_skb->protocol = eth_type_trans(old_skb, dev);
> + dev->last_rx = jiffies;
> + port->stat.rx_packets++;
> + port->stat.rx_bytes += old_skb->len;
> + netif_receive_skb(old_skb);
> +
> + /* put the new skb on RX-free queue */
> + port->rx_skb_tab[n] = new_skb;
> + desc->buf_len = MAX_MRU;
> + desc->pkt_len = 0;
> + desc->data = data;
> + queue_put_desc(RXFREE_QUEUE(port->plat),
> + rx_desc_phys(port, n), desc);
> + BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
> + quota--;
> + received++;
> + }
> + dev->quota -= received;
> + *budget -= received;
> + return 1; /* not all work done */
> +}
> +
> +static void eth_xmit_ready_irq(void *pdev)
> +{
> + struct net_device *dev = pdev;
> +
> +#if DEBUG_TX
> + printk(KERN_DEBUG "eth_xmit_ready_irq() start\n");
> +#endif
> + netif_start_queue(dev);
> +}
> +
> +static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + struct desc *desc;
> + u32 phys;
> + struct sk_buff *old_skb;
> + int n;
> +
> +#if DEBUG_TX
> + printk(KERN_DEBUG "eth_xmit() start\n");
> +#endif
> + if (unlikely(skb->len > MAX_MRU)) {
> + dev_kfree_skb(skb);
> + port->stat.tx_errors++;
> + return NETDEV_TX_OK;
> + }
> +
> + n = queue_get_desc(TXDONE_QUEUE, port, 1);
> + BUG_ON(n < 0);
> + desc = &tx_desc_tab[n];
> + phys = tx_desc_phys(n);
> +
> + if ((old_skb = tx_skb_tab[n]) != NULL) {
> + dma_unmap_single(&dev->dev, desc->data,
> + desc->buf_len, DMA_TO_DEVICE);
> + port->stat.tx_packets++;
> + port->stat.tx_bytes += old_skb->len;
> + dev_kfree_skb(old_skb);
> + }
> +
> + /* disable VLAN functions in NPE image for now */
> + memset(desc, 0, sizeof(*desc));
> + desc->buf_len = desc->pkt_len = skb->len;
> + desc->data = dma_map_single(&dev->dev, skb->data,
> + skb->len, DMA_TO_DEVICE);
> + if (dma_mapping_error(desc->data)) {
> + desc->data = 0;
> + dev_kfree_skb(skb);
> + tx_skb_tab[n] = NULL;
> + port->stat.tx_dropped++;
> + /* put the desc back on TX-done queue */
> + queue_put_desc(TXDONE_QUEUE, phys, desc);
> + return 0;
> + }
> +
> + tx_skb_tab[n] = skb;
> + debug_skb("eth_xmit", skb);
> +
> + /* NPE firmware pads short frames with zeros internally */
> + wmb();
> + queue_put_desc(TX_QUEUE(port->plat), phys, desc);
> + BUG_ON(qmgr_stat_overflow(TX_QUEUE(port->plat)));
> + dev->trans_start = jiffies;
> +
> + if (qmgr_stat_full(TX_QUEUE(port->plat))) {
> + netif_stop_queue(dev);
> + /* we could miss TX ready interrupt */
> + if (!qmgr_stat_full(TX_QUEUE(port->plat))) {
> + netif_start_queue(dev);
> + }
> + }
> +
> +#if DEBUG_TX
> + printk(KERN_DEBUG "eth_xmit() end\n");
> +#endif
> + return NETDEV_TX_OK;
> +}
> +
> +
> +static struct net_device_stats *eth_stats(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + return &port->stat;
> +}
> +
> +static void eth_set_mcast_list(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + struct dev_mc_list *mclist = dev->mc_list;
> + u8 diffs[ETH_ALEN], *addr;
> + int cnt = dev->mc_count, i;
> +
> + if ((dev->flags & IFF_PROMISC) || !mclist || !cnt) {
> + clr_regbits(RX_CNTRL0_ADDR_FLTR_EN,
> + &port->regs->rx_control[0]);
> + return;
> + }
> +
> + memset(diffs, 0, ETH_ALEN);
> + addr = mclist->dmi_addr; /* first MAC address */
> +
> + while (--cnt && (mclist = mclist->next))
> + for (i = 0; i < ETH_ALEN; i++)
> + diffs[i] |= addr[i] ^ mclist->dmi_addr[i];
> +
> + for (i = 0; i < ETH_ALEN; i++) {
> + __raw_writel(addr[i], &port->regs->mcast_addr[i]);
> + __raw_writel(~diffs[i], &port->regs->mcast_mask[i]);
> + }
> +
> + set_regbits(RX_CNTRL0_ADDR_FLTR_EN, &port->regs->rx_control[0]);
> +}
> +
> +
> +static int eth_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
> +{
> + struct port *port = netdev_priv(dev);
> + unsigned int duplex_chg;
> + int err;
> +
> + if (!netif_running(dev))
> + return -EINVAL;
> + err = generic_mii_ioctl(&port->mii, if_mii(req), cmd, &duplex_chg);
> + if (duplex_chg)
> + eth_set_duplex(port);
> + return err;
> +}
> +
> +
> +static int request_queues(struct port *port)
> +{
> + int err;
> +
> + err = qmgr_request_queue(RXFREE_QUEUE(port->plat), PKT_DESCS, 0, 0);
> + if (err)
> + return err;
> +
> + err = qmgr_request_queue(port->plat->rxq, PKT_DESCS, 0, 0);
> + if (err)
> + goto rel_rxfree;
> +
> + err = qmgr_request_queue(TX_QUEUE(port->plat), TX_QUEUE_LEN, 0, 0);
> + if (err)
> + goto rel_rx;
> +
> + /* TX-done queue handles skbs sent out by the NPEs */
> + if (!ports_open) {
> + err = qmgr_request_queue(TXDONE_QUEUE, PKT_DESCS, 0, 0);
> + if (err)
> + goto rel_tx;
> + }
> + return 0;
> +
> +rel_tx:
> + qmgr_release_queue(TX_QUEUE(port->plat));
> +rel_rx:
> + qmgr_release_queue(port->plat->rxq);
> +rel_rxfree:
> + qmgr_release_queue(RXFREE_QUEUE(port->plat));
> + return err;
> +}
> +
> +static void release_queues(struct port *port)
> +{
> + qmgr_release_queue(RXFREE_QUEUE(port->plat));
> + qmgr_release_queue(port->plat->rxq);
> + qmgr_release_queue(TX_QUEUE(port->plat));
> +
> + if (!ports_open)
> + qmgr_release_queue(TXDONE_QUEUE);
> +}
> +
> +static int init_queues(struct port *port)
> +{
> + int i;
> +
> + if (!dma_pool) {
> + /* Setup TX descriptors - common to all ports */
> + dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
> + 32, 0);
> + if (!dma_pool)
> + return -ENOMEM;
> +
> + if (!(tx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
> + &tx_desc_tab_phys)))
> + return -ENOMEM;
> + memset(tx_desc_tab, 0, POOL_ALLOC_SIZE);
> + memset(tx_skb_tab, 0, sizeof(tx_skb_tab)); /* static table */
> +
> + for (i = 0; i < PKT_DESCS; i++) {
> + queue_put_desc(TXDONE_QUEUE, tx_desc_phys(i),
> + &tx_desc_tab[i]);
> + BUG_ON(qmgr_stat_overflow(TXDONE_QUEUE));
> + }
> + }
> +
> + /* Setup RX buffers */
> + if (!(port->rx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
> + &port->rx_desc_tab_phys)))
> + return -ENOMEM;
> + memset(port->rx_desc_tab, 0, POOL_ALLOC_SIZE);
> + memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* table */
> +
> + for (i = 0; i < PKT_DESCS; i++) {
> + struct desc *desc = &port->rx_desc_tab[i];
> + struct sk_buff *skb;
> +
> + if (!(skb = netdev_alloc_skb(port->netdev, MAX_MRU)))
> + return -ENOMEM;
> + port->rx_skb_tab[i] = skb;
> + desc->buf_len = MAX_MRU;
> +#if 0
> + skb_reserve(skb, 2); /* FIXME */
> +#endif
Hallo :o)
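For the record, the FIXME both `#if 0` blocks point at is the usual IP-header alignment trick: an Ethernet header is 14 bytes, so reserving 2 bytes (NET_IP_ALIGN) up front puts the IP header on a 4-byte boundary. A trivial sketch of the arithmetic (identifiers suffixed with `_` so they don't clash with kernel headers):

```c
/* ip_header_offset(): where the IP header lands in a buffer,
 * given how many bytes were reserved before the frame. */
enum { ETH_HLEN_ = 14, NET_IP_ALIGN_ = 2 };

static int ip_header_offset(int reserved)
{
	return reserved + ETH_HLEN_;
}
```

Without the reservation the IP header starts at offset 14 (misaligned); with `skb_reserve(skb, 2)` it starts at offset 16.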
> + desc->data = dma_map_single(&port->netdev->dev, skb->data,
> + MAX_MRU, DMA_FROM_DEVICE);
> + if (dma_mapping_error(desc->data)) {
> + desc->data = 0;
> + return -EIO;
> + }
> + queue_put_desc(RXFREE_QUEUE(port->plat),
> + rx_desc_phys(port, i), desc);
> + BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
> + }
> + return 0;
> +}
> +
> +static void destroy_queues(struct port *port)
> +{
> + int i;
> +
> + while (queue_get_desc(RXFREE_QUEUE(port->plat), port, 0) >= 0)
> + /* nothing to do here */;
> + while (queue_get_desc(port->plat->rxq, port, 0) >= 0)
> + /* nothing to do here */;
> + while (queue_get_desc(TX_QUEUE(port->plat), port, 1) >= 0) {
> + /* nothing to do here */;
> + }
> + if (!ports_open)
> + while (queue_get_desc(TXDONE_QUEUE, port, 1) >= 0)
> + /* nothing to do here */;
> +
> + if (port->rx_desc_tab) {
> + for (i = 0; i < PKT_DESCS; i++) {
> + struct desc *desc = &port->rx_desc_tab[i];
> + struct sk_buff *skb = port->rx_skb_tab[i];
> + if (skb) {
> + if (desc->data)
> + dma_unmap_single(&port->netdev->dev,
> + desc->data, MAX_MRU,
> + DMA_FROM_DEVICE);
> + dev_kfree_skb(skb);
> + }
> + }
> + dma_pool_free(dma_pool, port->rx_desc_tab,
> + port->rx_desc_tab_phys);
> + port->rx_desc_tab = NULL;
> + }
> +
> + if (!ports_open && tx_desc_tab) {
> + for (i = 0; i < PKT_DESCS; i++) {
> + struct desc *desc = &tx_desc_tab[i];
> + struct sk_buff *skb = tx_skb_tab[i];
> + if (skb) {
> + if (desc->data)
> + dma_unmap_single(&port->netdev->dev,
> + desc->data,
> + desc->buf_len,
> + DMA_TO_DEVICE);
> + dev_kfree_skb(skb);
> + }
> + }
> + dma_pool_free(dma_pool, tx_desc_tab, tx_desc_tab_phys);
> + tx_desc_tab = NULL;
> + }
> + if (!ports_open && dma_pool) {
> + dma_pool_destroy(dma_pool);
> + dma_pool = NULL;
> + }
> +}
> +
> +static int eth_load_firmware(struct net_device *dev, struct npe *npe)
> +{
> + struct msg msg;
> + int err;
> +
> + if ((err = npe_load_firmware(npe, npe_name(npe), &dev->dev)) != 0)
> + return err;
> +
> + if ((err = npe_recv_message(npe, &msg, "ETH_GET_STATUS")) != 0) {
> + printk(KERN_ERR "%s: %s not responding\n", dev->name,
> + npe_name(npe));
> + return err;
> + }
> + return 0;
> +}
> +
> +static int eth_open(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + struct npe *npe = port->npe;
> + struct msg msg;
> + int i, err;
> +
> + if (!npe_running(npe))
> + if (eth_load_firmware(dev, npe))
> + return -EIO;
> +
> + if (!npe_running(mdio_npe))
> + if (eth_load_firmware(dev, mdio_npe))
> + return -EIO;
> +
> + memset(&msg, 0, sizeof(msg));
> + msg.cmd = NPE_VLAN_SETRXQOSENTRY;
> + msg.eth_id = port->id;
> + msg.byte5 = port->plat->rxq | 0x80;
> + msg.byte7 = port->plat->rxq << 4;
> + for (i = 0; i < 8; i++) {
> + msg.byte3 = i;
> + if (npe_send_recv_message(port->npe, &msg, "ETH_SET_RXQ"))
> + return -EIO;
> + }
> +
> + msg.cmd = NPE_EDB_SETPORTADDRESS;
> + msg.eth_id = PHYSICAL_ID(port);
> + memcpy(msg.mac, dev->dev_addr, ETH_ALEN);
> + if (npe_send_recv_message(port->npe, &msg, "ETH_SET_MAC"))
> + return -EIO;
> +
> + memset(&msg, 0, sizeof(msg));
> + msg.cmd = NPE_FW_SETFIREWALLMODE;
> + msg.eth_id = port->id;
> + if (npe_send_recv_message(port->npe, &msg, "ETH_SET_FIREWALL_MODE"))
> + return -EIO;
> +
> + if ((err = request_queues(port)) != 0)
> + return err;
> +
> + if ((err = init_queues(port)) != 0) {
> + destroy_queues(port);
> + release_queues(port);
> + return err;
> + }
> +
> + for (i = 0; i < ETH_ALEN; i++)
> + __raw_writel(dev->dev_addr[i], &port->regs->hw_addr[i]);
> + __raw_writel(0x08, &port->regs->random_seed);
> + __raw_writel(0x12, &port->regs->partial_empty_threshold);
> + __raw_writel(0x30, &port->regs->partial_full_threshold);
> + __raw_writel(0x08, &port->regs->tx_start_bytes);
> + __raw_writel(0x15, &port->regs->tx_deferral);
> + __raw_writel(0x08, &port->regs->tx_2part_deferral[0]);
> + __raw_writel(0x07, &port->regs->tx_2part_deferral[1]);
> + __raw_writel(0x80, &port->regs->slot_time);
> + __raw_writel(0x01, &port->regs->int_clock_threshold);
> + __raw_writel(TX_CNTRL1_RETRIES, &port->regs->tx_control[1]);
> + __raw_writel(TX_CNTRL0_TX_EN | TX_CNTRL0_RETRY | TX_CNTRL0_PAD_EN |
> + TX_CNTRL0_APPEND_FCS | TX_CNTRL0_2DEFER,
> + &port->regs->tx_control[0]);
> + __raw_writel(0, &port->regs->rx_control[1]);
> + __raw_writel(RX_CNTRL0_RX_EN | RX_CNTRL0_PADSTRIP_EN,
> + &port->regs->rx_control[0]);
> +
> + if (mii_check_media(&port->mii, 1, 1))
> + eth_set_duplex(port);
> + eth_set_mcast_list(dev);
> + netif_start_queue(dev);
> + schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
> +
> + qmgr_set_irq(port->plat->rxq, QUEUE_IRQ_SRC_NOT_EMPTY,
> + eth_rx_irq, dev);
> + qmgr_set_irq(TX_QUEUE(port->plat), QUEUE_IRQ_SRC_NOT_FULL,
> + eth_xmit_ready_irq, dev);
> + qmgr_enable_irq(port->plat->rxq);
> + qmgr_enable_irq(TX_QUEUE(port->plat));
> + ports_open++;
> + return 0;
> +}
> +
> +static int eth_close(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> +
> + ports_open--;
> + qmgr_disable_irq(port->plat->rxq);
> + qmgr_disable_irq(TX_QUEUE(port->plat));
> + netif_stop_queue(dev);
> +
> + clr_regbits(RX_CNTRL0_RX_EN, &port->regs->rx_control[0]);
> + clr_regbits(TX_CNTRL0_TX_EN, &port->regs->tx_control[0]);
> + set_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
> + &port->regs->core_control);
> + udelay(10);
> + clr_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
> + &port->regs->core_control);
> +
> + cancel_rearming_delayed_work(&port->mdio_thread);
> + destroy_queues(port);
> + release_queues(port);
> + return 0;
> +}
> +
> +static int __devinit eth_init_one(struct platform_device *pdev)
> +{
> + struct port *port;
> + struct net_device *dev;
> + struct mac_plat_info *plat = pdev->dev.platform_data;
> + u32 regs_phys;
> + int err;
> +
> + if (!(dev = alloc_etherdev(sizeof(struct port))))
> + return -ENOMEM;
> +
> + SET_MODULE_OWNER(dev);
> + SET_NETDEV_DEV(dev, &pdev->dev);
> + port = netdev_priv(dev);
> + port->netdev = dev;
> + port->id = pdev->id;
> +
> + switch (port->id) {
> + case IXP4XX_ETH_NPEA:
> + port->regs = (struct eth_regs __iomem *)IXP4XX_EthA_BASE_VIRT;
> + regs_phys = IXP4XX_EthA_BASE_PHYS;
> + break;
> + case IXP4XX_ETH_NPEB:
> + port->regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
> + regs_phys = IXP4XX_EthB_BASE_PHYS;
> + break;
> + case IXP4XX_ETH_NPEC:
> + port->regs = (struct eth_regs __iomem *)IXP4XX_EthC_BASE_VIRT;
> + regs_phys = IXP4XX_EthC_BASE_PHYS;
> + break;
> + default:
> + err = -ENOSYS;
> + goto err_free;
> + }
> +
> + dev->open = eth_open;
> + dev->hard_start_xmit = eth_xmit;
> + dev->poll = eth_poll;
> + dev->stop = eth_close;
> + dev->get_stats = eth_stats;
> + dev->do_ioctl = eth_ioctl;
> + dev->set_multicast_list = eth_set_mcast_list;
> + dev->weight = 16;
> + dev->tx_queue_len = 100;
> +
> + if (!(port->npe = npe_request(NPE_ID(port)))) {
> + err = -EIO;
> + goto err_free;
> + }
> +
> + if (register_netdev(dev)) {
> + err = -EIO;
> + goto err_npe_rel;
> + }
> +
> + port->mem_res = request_mem_region(regs_phys, REGS_SIZE, dev->name);
> + if (!port->mem_res) {
> + err = -EBUSY;
> + goto err_unreg;
> + }
> +
> + port->plat = plat;
> + memcpy(dev->dev_addr, plat->hwaddr, ETH_ALEN);
I think my comment about adding randomised MAC addresses when no hwaddr
is supplied still stands - it's really not that complex.
Christian's driver did this:
/* The place of the MAC address is very system dependent.
* Here we use a random one to be replaced by one of the
* following commands:
* "ip link set address 02:03:04:04:04:01 dev eth0"
* "ifconfig eth0 hw ether 02:03:04:04:04:07"
*/
	if (is_zero_ether_addr(plat->hwaddr)) {
		random_ether_addr(dev->dev_addr);
		dev->dev_addr[5] = plat->phy_id;
	} else
		memcpy(dev->dev_addr, plat->hwaddr, 6);
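For completeness, a user-space sketch of what that fallback relies on: random_ether_addr() fills in random bytes and then forces a valid unicast, locally-administered address (the kernel helper does the equivalent of this):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of random_ether_addr() semantics: six random bytes,
 * then fix up the first octet so the address is never a
 * multicast address and is marked locally administered. */
static void sketch_random_mac(uint8_t mac[6])
{
	int i;

	for (i = 0; i < 6; i++)
		mac[i] = (uint8_t)(rand() & 0xFF);
	mac[0] &= 0xFE;	/* clear the multicast (group) bit */
	mac[0] |= 0x02;	/* set the locally-administered bit */
}
```

Overwriting the last octet with the PHY id, as Christian's driver does, just keeps the two ports distinguishable until the user sets a real address.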
> +
> + platform_set_drvdata(pdev, dev);
> +
> + __raw_writel(CORE_RESET, &port->regs->core_control);
> + udelay(50);
> + __raw_writel(CORE_MDC_EN, &port->regs->core_control);
> + udelay(50);
> +
> + port->mii.dev = dev;
> + port->mii.mdio_read = mdio_read;
> + port->mii.mdio_write = mdio_write;
> + port->mii.phy_id = plat->phy;
> + port->mii.phy_id_mask = 0x1F;
> + port->mii.reg_num_mask = 0x1F;
> +
> + INIT_DELAYED_WORK(&port->mdio_thread, mdio_thread);
> +
> + printk(KERN_INFO "%s: MII PHY %i on %s\n", dev->name, plat->phy,
> + npe_name(port->npe));
> + return 0;
> +
> +err_unreg:
> + unregister_netdev(dev);
> +err_npe_rel:
> + npe_release(port->npe);
> +err_free:
> + free_netdev(dev);
> + return err;
> +}
> +
> +static int __devexit eth_remove_one(struct platform_device *pdev)
> +{
> + struct net_device *dev = platform_get_drvdata(pdev);
> + struct port *port = netdev_priv(dev);
> +
> + unregister_netdev(dev);
> + platform_set_drvdata(pdev, NULL);
> + npe_release(port->npe);
> + release_resource(port->mem_res);
> + free_netdev(dev);
> + return 0;
> +}
> +
> +static struct platform_driver drv = {
> + .driver.name = DRV_NAME,
> + .probe = eth_init_one,
> + .remove = eth_remove_one,
> +};
> +
> +static int __init eth_init_module(void)
> +{
> + if (!(ixp4xx_read_fuses() & IXP4XX_FUSE_NPEB_ETH0))
> + return -ENOSYS;
> +
> + /* All MII PHY accesses use NPE-B Ethernet registers */
> + if (!(mdio_npe = npe_request(1)))
> + return -EIO;
> + spin_lock_init(&mdio_lock);
> + mdio_regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
> +
> + return platform_driver_register(&drv);
> +}
> +
> +static void __exit eth_cleanup_module(void)
> +{
> + platform_driver_unregister(&drv);
> + npe_release(mdio_npe);
> +}
> +
> +MODULE_AUTHOR("Krzysztof Halasa");
> +MODULE_DESCRIPTION("Intel IXP4xx Ethernet driver");
> +MODULE_LICENSE("GPL v2");
> +module_init(eth_init_module);
For our flash and EEPROM notifiers to work, this needs to be converted
to a late_initcall:
http://trac.nslu2-linux.org/kernel/browser/trunk/patches/2.6.21/37-ixp4xx-net-driver-fix-mac-handling.patch
akpm suggested this fix, but we don't know for certain whether it's
acceptable upstream.
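For reference, the suggested conversion amounts to a one-line change at the bottom of the driver (a sketch, not part of the posted patch — whether it is acceptable upstream is exactly the open question):

```c
/* Register late so that flash/EEPROM MAC-address notifiers have run
 * before the Ethernet ports probe (akpm's suggestion; sketch only). */
late_initcall(eth_init_module);
module_exit(eth_cleanup_module);
```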
> +module_exit(eth_cleanup_module);
> diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
> index 5f79622..b891e10 100644
> --- a/drivers/net/wan/Kconfig
> +++ b/drivers/net/wan/Kconfig
> @@ -342,6 +342,16 @@ config DSCC4_PCI_RST
>
> Say Y if your card supports this feature.
>
> +config IXP4XX_HSS
> + tristate "IXP4xx HSS (synchronous serial port) support"
> + depends on ARM && ARCH_IXP4XX
> + select IXP4XX_NPE
> + select IXP4XX_QMGR
> + select HDLC
> + help
> + Say Y here if you want to use built-in HSS ports
> + on IXP4xx processor.
> +
> config DLCI
> tristate "Frame Relay DLCI support"
> ---help---
> diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile
> index d61fef3..1b1d116 100644
> --- a/drivers/net/wan/Makefile
> +++ b/drivers/net/wan/Makefile
> @@ -42,6 +42,7 @@ obj-$(CONFIG_C101) += c101.o
> obj-$(CONFIG_WANXL) += wanxl.o
> obj-$(CONFIG_PCI200SYN) += pci200syn.o
> obj-$(CONFIG_PC300TOO) += pc300too.o
> +obj-$(CONFIG_IXP4XX_HSS) += ixp4xx_hss.o
>
> clean-files := wanxlfw.inc
> $(obj)/wanxl.o: $(obj)/wanxlfw.inc
> diff --git a/drivers/net/wan/ixp4xx_hss.c b/drivers/net/wan/ixp4xx_hss.c
> new file mode 100644
> index 0000000..ed56ed8
> --- /dev/null
> +++ b/drivers/net/wan/ixp4xx_hss.c
> @@ -0,0 +1,1048 @@
> +/*
> + * Intel IXP4xx HSS (synchronous serial port) driver for Linux
> + *
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + */
> +
> +#include <linux/dma-mapping.h>
> +#include <linux/dmapool.h>
> +#include <linux/kernel.h>
> +#include <linux/hdlc.h>
> +#include <linux/platform_device.h>
> +#include <asm/io.h>
> +#include <asm/arch/npe.h>
> +#include <asm/arch/qmgr.h>
> +
> +#ifndef __ARMEB__
> +#warning Little endian mode not supported
> +#endif
Personally, I'm less fussed about WAN / LE support. Anyone with any
sense will run ixp4xx boards doing such specialised network
operations as BE. Also, NSLU2-Linux can't test this functionality with
our LE setup, as we don't have this hardware on-board. You may just
want to declare "depends on ARMEB" in Kconfig (with or without an
ORed "|| BROKEN") and have done with it - it's up to you.
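For reference, the Kconfig change being suggested would look something like this (a sketch reusing the symbols from the patch; the exact dependency line is the maintainer's call):

```
config IXP4XX_HSS
	tristate "IXP4xx HSS (synchronous serial port) support"
	depends on ARM && ARCH_IXP4XX && (ARMEB || BROKEN)
	select IXP4XX_NPE
	select IXP4XX_QMGR
	select HDLC
```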
> +
> +#define DEBUG_QUEUES 0
> +#define DEBUG_RX 0
> +#define DEBUG_TX 0
> +
> +#define DRV_NAME "ixp4xx_hss"
> +#define DRV_VERSION "0.03"
> +
> +#define PKT_EXTRA_FLAGS 0 /* orig 1 */
> +#define FRAME_SYNC_OFFSET 0 /* unused, channelized only */
> +#define FRAME_SYNC_SIZE 1024
> +#define PKT_NUM_PIPES 1 /* 1, 2 or 4 */
> +#define PKT_PIPE_FIFO_SIZEW 4 /* total 4 dwords per HSS */
> +
> +#define RX_DESCS 16 /* also length of queues: RX-ready, RX */
> +#define TX_DESCS 16 /* also length of queues: TX-done, TX */
> +
> +#define POOL_ALLOC_SIZE (sizeof(struct desc) * (RX_DESCS + TX_DESCS))
> +#define RX_SIZE (HDLC_MAX_MRU + 4) /* NPE needs more space */
> +
> +/* Queue IDs */
> +#define HSS0_CHL_RXTRIG_QUEUE 12 /* orig size = 32 dwords */
> +#define HSS0_PKT_RX_QUEUE 13 /* orig size = 32 dwords */
> +#define HSS0_PKT_TX0_QUEUE 14 /* orig size = 16 dwords */
> +#define HSS0_PKT_TX1_QUEUE 15
> +#define HSS0_PKT_TX2_QUEUE 16
> +#define HSS0_PKT_TX3_QUEUE 17
> +#define HSS0_PKT_RXFREE0_QUEUE 18 /* orig size = 16 dwords */
> +#define HSS0_PKT_RXFREE1_QUEUE 19
> +#define HSS0_PKT_RXFREE2_QUEUE 20
> +#define HSS0_PKT_RXFREE3_QUEUE 21
> +#define HSS0_PKT_TXDONE_QUEUE 22 /* orig size = 64 dwords */
> +
> +#define HSS1_CHL_RXTRIG_QUEUE 10
> +#define HSS1_PKT_RX_QUEUE 0
> +#define HSS1_PKT_TX0_QUEUE 5
> +#define HSS1_PKT_TX1_QUEUE 6
> +#define HSS1_PKT_TX2_QUEUE 7
> +#define HSS1_PKT_TX3_QUEUE 8
> +#define HSS1_PKT_RXFREE0_QUEUE 1
> +#define HSS1_PKT_RXFREE1_QUEUE 2
> +#define HSS1_PKT_RXFREE2_QUEUE 3
> +#define HSS1_PKT_RXFREE3_QUEUE 4
> +#define HSS1_PKT_TXDONE_QUEUE 9
> +
> +#define NPE_PKT_MODE_HDLC 0
> +#define NPE_PKT_MODE_RAW 1
> +#define NPE_PKT_MODE_56KMODE 2
> +#define NPE_PKT_MODE_56KENDIAN_MSB 4
> +
> +/* PKT_PIPE_HDLC_CFG_WRITE flags */
> +#define PKT_HDLC_IDLE_ONES 0x1 /* default = flags */
> +#define PKT_HDLC_CRC_32 0x2 /* default = CRC-16 */
> +#define PKT_HDLC_MSB_ENDIAN 0x4 /* default = LE */
> +
> +
> +/* hss_config, PCRs */
> +/* Frame sync sampling, default = active low */
> +#define PCR_FRM_SYNC_ACTIVE_HIGH 0x40000000
> +#define PCR_FRM_SYNC_FALLINGEDGE 0x80000000
> +#define PCR_FRM_SYNC_RISINGEDGE 0xC0000000
> +
> +/* Frame sync pin: input (default) or output generated off a given clk edge */
> +#define PCR_FRM_SYNC_OUTPUT_FALLING 0x20000000
> +#define PCR_FRM_SYNC_OUTPUT_RISING 0x30000000
> +
> +/* Frame and data clock sampling on edge, default = falling */
> +#define PCR_FCLK_EDGE_RISING 0x08000000
> +#define PCR_DCLK_EDGE_RISING 0x04000000
> +
> +/* Clock direction, default = input */
> +#define PCR_SYNC_CLK_DIR_OUTPUT 0x02000000
> +
> +/* Generate/Receive frame pulses, default = enabled */
> +#define PCR_FRM_PULSE_DISABLED 0x01000000
> +
> + /* Data rate is full (default) or half the configured clk speed */
> +#define PCR_HALF_CLK_RATE 0x00200000
> +
> +/* Invert data between NPE and HSS FIFOs? (default = no) */
> +#define PCR_DATA_POLARITY_INVERT 0x00100000
> +
> +/* TX/RX endianness, default = LSB */
> +#define PCR_MSB_ENDIAN 0x00080000
> +
> +/* Normal (default) / open drain mode (TX only) */
> +#define PCR_TX_PINS_OPEN_DRAIN 0x00040000
> +
> +/* No framing bit transmitted and expected on RX? (default = framing bit) */
> +#define PCR_SOF_NO_FBIT 0x00020000
> +
> +/* Drive data pins? */
> +#define PCR_TX_DATA_ENABLE 0x00010000
> +
> +/* Voice 56k type: drive the data pins low (default), high, high Z */
> +#define PCR_TX_V56K_HIGH 0x00002000
> +#define PCR_TX_V56K_HIGH_IMP 0x00004000
> +
> +/* Unassigned type: drive the data pins low (default), high, high Z */
> +#define PCR_TX_UNASS_HIGH 0x00000800
> +#define PCR_TX_UNASS_HIGH_IMP 0x00001000
> +
> +/* T1 @ 1.544MHz only: Fbit dictated in FIFO (default) or high Z */
> +#define PCR_TX_FB_HIGH_IMP 0x00000400
> +
> +/* 56k data endianness - which bit unused: high (default) or low */
> +#define PCR_TX_56KE_BIT_0_UNUSED 0x00000200
> +
> +/* 56k data transmission type: 32/8 bit data (default) or 56K data */
> +#define PCR_TX_56KS_56K_DATA 0x00000100
> +
> +/* hss_config, cCR */
> +/* Number of packetized clients, default = 1 */
> +#define CCR_NPE_HFIFO_2_HDLC 0x04000000
> +#define CCR_NPE_HFIFO_3_OR_4HDLC 0x08000000
> +
> +/* default = no loopback */
> +#define CCR_LOOPBACK 0x02000000
> +
> +/* HSS number, default = 0 (first) */
> +#define CCR_SECOND_HSS 0x01000000
> +
> +
> +/* hss_config, clkCR: main:10, num:10, denom:12 */
> +#define CLK42X_SPEED_EXP ((0x3FF << 22) | ( 2 << 12) | 15) /*65 KHz*/
> +
> +#define CLK42X_SPEED_512KHZ (( 130 << 22) | ( 2 << 12) | 15)
> +#define CLK42X_SPEED_1536KHZ (( 43 << 22) | ( 18 << 12) | 47)
> +#define CLK42X_SPEED_1544KHZ (( 43 << 22) | ( 33 << 12) | 192)
> +#define CLK42X_SPEED_2048KHZ (( 32 << 22) | ( 34 << 12) | 63)
> +#define CLK42X_SPEED_4096KHZ (( 16 << 22) | ( 34 << 12) | 127)
> +#define CLK42X_SPEED_8192KHZ (( 8 << 22) | ( 34 << 12) | 255)
> +
> +#define CLK46X_SPEED_512KHZ (( 130 << 22) | ( 24 << 12) | 127)
> +#define CLK46X_SPEED_1536KHZ (( 43 << 22) | (152 << 12) | 383)
> +#define CLK46X_SPEED_1544KHZ (( 43 << 22) | ( 66 << 12) | 385)
> +#define CLK46X_SPEED_2048KHZ (( 32 << 22) | (280 << 12) | 511)
> +#define CLK46X_SPEED_4096KHZ (( 16 << 22) | (280 << 12) | 1023)
> +#define CLK46X_SPEED_8192KHZ (( 8 << 22) | (280 << 12) | 2047)
> +
> +
> +/* hss_config, LUTs: default = unassigned */
> +#define TDMMAP_HDLC 1 /* HDLC - packetised */
> +#define TDMMAP_VOICE56K 2 /* Voice56K - channelised */
> +#define TDMMAP_VOICE64K 3 /* Voice64K - channelised */
> +
> +
> +/* NPE command codes */
> +/* writes the ConfigWord value to the location specified by offset */
> +#define PORT_CONFIG_WRITE 0x40
> +
> +/* triggers the NPE to load the contents of the configuration table */
> +#define PORT_CONFIG_LOAD 0x41
> +
> +/* triggers the NPE to return an HssErrorReadResponse message */
> +#define PORT_ERROR_READ 0x42
> +
> +/* reset NPE internal status and enable the HssChannelized operation */
> +#define CHAN_FLOW_ENABLE 0x43
> +#define CHAN_FLOW_DISABLE 0x44
> +#define CHAN_IDLE_PATTERN_WRITE 0x45
> +#define CHAN_NUM_CHANS_WRITE 0x46
> +#define CHAN_RX_BUF_ADDR_WRITE 0x47
> +#define CHAN_RX_BUF_CFG_WRITE 0x48
> +#define CHAN_TX_BLK_CFG_WRITE 0x49
> +#define CHAN_TX_BUF_ADDR_WRITE 0x4A
> +#define CHAN_TX_BUF_SIZE_WRITE 0x4B
> +#define CHAN_TSLOTSWITCH_ENABLE 0x4C
> +#define CHAN_TSLOTSWITCH_DISABLE 0x4D
> +
> +/* downloads the gainWord value for a timeslot switching channel associated
> +   with bypassNum */
> +#define CHAN_TSLOTSWITCH_GCT_DOWNLOAD 0x4E
> +
> +/* triggers the NPE to reset internal status and enable the HssPacketized
> +   operation for the flow specified by pPipe */
Multi-line comments not conforming to the kernel coding style -
someone much angrier than me will jump on that.
> +#define PKT_PIPE_FLOW_ENABLE 0x50
> +#define PKT_PIPE_FLOW_DISABLE 0x51
> +#define PKT_NUM_PIPES_WRITE 0x52
> +#define PKT_PIPE_FIFO_SIZEW_WRITE 0x53
> +#define PKT_PIPE_HDLC_CFG_WRITE 0x54
> +#define PKT_PIPE_IDLE_PATTERN_WRITE 0x55
> +#define PKT_PIPE_RX_SIZE_WRITE 0x56
> +#define PKT_PIPE_MODE_WRITE 0x57
> +
> +
Lots of double blank lines.
> +#define HSS_TIMESLOTS 128
> +#define HSS_LUT_BITS 2
> +
> +/* HDLC packet status values - desc->status */
> +#define ERR_SHUTDOWN 1 /* stop or shutdown occurrence */
> +#define ERR_HDLC_ALIGN 2 /* HDLC alignment error */
> +#define ERR_HDLC_FCS 3 /* HDLC Frame Check Sum error */
> +#define ERR_RXFREE_Q_EMPTY 4 /* RX-free queue became empty while receiving
> +                                this packet (if buf_len < pkt_len) */
> +#define ERR_HDLC_TOO_LONG 5 /* HDLC frame size too long */
> +#define ERR_HDLC_ABORT 6 /* abort sequence received */
> +#define ERR_DISCONNECTING 7 /* disconnect is in progress */
> +
> +
> +struct port {
> + struct npe *npe;
> + struct net_device *netdev;
> + struct hss_plat_info *plat;
> + struct sk_buff *rx_skb_tab[RX_DESCS], *tx_skb_tab[TX_DESCS];
> + struct desc *desc_tab; /* coherent */
> + u32 desc_tab_phys;
> + sync_serial_settings settings;
> + int id;
> + u8 hdlc_cfg;
> +};
> +
[snip]
Again, looking good.
Michael-Luke Jones
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
@ 2007-05-08 7:22 ` Michael-Luke Jones
0 siblings, 0 replies; 88+ messages in thread
From: Michael-Luke Jones @ 2007-05-08 7:22 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
On 8 May 2007, at 02:19, Krzysztof Halasa wrote:
> Adds a driver for built-in IXP4xx Ethernet MAC and HSS ports
>
> Signed-off-by: Krzysztof Halasa <khc@pm.waw.pl>
>
> diff --git a/arch/arm/mach-ixp4xx/ixdp425-setup.c b/arch/arm/mach-ixp4xx/ixdp425-setup.c
> index ec4f079..f20d39d 100644
> --- a/arch/arm/mach-ixp4xx/ixdp425-setup.c
> +++ b/arch/arm/mach-ixp4xx/ixdp425-setup.c
> @@ -101,10 +101,35 @@ static struct platform_device ixdp425_uart = {
> .resource = ixdp425_uart_resources
> };
>
> +/* Built-in 10/100 Ethernet MAC interfaces */
> +static struct mac_plat_info ixdp425_plat_mac[] = {
> + {
> + .phy = 0,
> + .rxq = 3,
> + }, {
> + .phy = 1,
> + .rxq = 4,
> + }
> +};
> +
> +static struct platform_device ixdp425_mac[] = {
> + {
> + .name = "ixp4xx_eth",
> + .id = IXP4XX_ETH_NPEB,
> + .dev.platform_data = ixdp425_plat_mac,
> + }, {
> + .name = "ixp4xx_eth",
> + .id = IXP4XX_ETH_NPEC,
> + .dev.platform_data = ixdp425_plat_mac + 1,
> + }
> +};
> +
> static struct platform_device *ixdp425_devices[] __initdata = {
> &ixdp425_i2c_controller,
> &ixdp425_flash,
> - &ixdp425_uart
> + &ixdp425_uart,
> + &ixdp425_mac[0],
> + &ixdp425_mac[1],
> };
>
> static void __init ixdp425_init(void)
A final submission should probably have this platform data separated
from the net driver and sent upstream via Russell's patch tracking
system rather than netdev.
> diff --git a/drivers/net/arm/Kconfig b/drivers/net/arm/Kconfig
> index 678e4f4..5e2acb6 100644
> --- a/drivers/net/arm/Kconfig
> +++ b/drivers/net/arm/Kconfig
> @@ -46,3 +46,13 @@ config EP93XX_ETH
> help
> This is a driver for the ethernet hardware included in EP93xx
> CPUs.
> Say Y if you are building a kernel for EP93xx based devices.
> +
> +config IXP4XX_ETH
> + tristate "IXP4xx Ethernet support"
> + depends on NET_ETHERNET && ARM && ARCH_IXP4XX
> + select IXP4XX_NPE
> + select IXP4XX_QMGR
> + select MII
> + help
> + Say Y here if you want to use built-in Ethernet ports
> + on IXP4xx processor.
> diff --git a/drivers/net/arm/Makefile b/drivers/net/arm/Makefile
> index a4c8682..7c812ac 100644
> --- a/drivers/net/arm/Makefile
> +++ b/drivers/net/arm/Makefile
> @@ -9,3 +9,4 @@ obj-$(CONFIG_ARM_ETHER3) += ether3.o
> obj-$(CONFIG_ARM_ETHER1) += ether1.o
> obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
> obj-$(CONFIG_EP93XX_ETH) += ep93xx_eth.o
> +obj-$(CONFIG_IXP4XX_ETH) += ixp4xx_eth.o
> diff --git a/drivers/net/arm/ixp4xx_eth.c b/drivers/net/arm/ixp4xx_eth.c
> new file mode 100644
> index 0000000..dcea6e5
> --- /dev/null
> +++ b/drivers/net/arm/ixp4xx_eth.c
> @@ -0,0 +1,1002 @@
> +/*
> + * Intel IXP4xx Ethernet driver for Linux
> + *
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + *
> + * Ethernet port config (0x00 is not present on IXP42X):
> + *
> + * logical port 0x00 0x10 0x20
> + * NPE 0 (NPE-A) 1 (NPE-B) 2 (NPE-C)
> + * physical PortId 2 0 1
> + * TX queue 23 24 25
> + * RX-free queue 26 27 28
> + * TX-done queue is always 31, RX queue is configurable
> + */
> +
> +#include <linux/delay.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dmapool.h>
> +#include <linux/kernel.h>
> +#include <linux/mii.h>
> +#include <linux/platform_device.h>
> +#include <asm/io.h>
> +#include <asm/arch/npe.h>
> +#include <asm/arch/qmgr.h>
> +
> +#ifndef __ARMEB__
> +#warning Little endian mode not supported
> +#endif
This has gone from an #error to a #warning - fair play, but if you are
planning to put this upstream this cycle (anything's possible :) )
you'll want to declare this driver broken on LE ("depends on ARMEB")
in Kconfig, please.
Personally I'd like LE ethernet tested and working before we push.
> +
> +#define DEBUG_QUEUES 0
> +#define DEBUG_RX 0
> +#define DEBUG_TX 0
> +#define DEBUG_PKT_BYTES 0
> +#define DEBUG_MDIO 0
> +
> +#define DRV_NAME "ixp4xx_eth"
> +#define DRV_VERSION "0.04"
> +
> +#define TX_QUEUE_LEN 16 /* dwords */
> +#define PKT_DESCS 64 /* also length of queues: TX-done, RX-ready, RX */
> +
> +#define POOL_ALLOC_SIZE (sizeof(struct desc) * (PKT_DESCS))
> +#define REGS_SIZE 0x1000
> +#define MAX_MRU 1536
> +
> +#define MDIO_INTERVAL (3 * HZ)
> +#define MAX_MDIO_RETRIES 100 /* microseconds, typically 30 cycles */
> +
> +#define NPE_ID(port) ((port)->id >> 4)
> +#define PHYSICAL_ID(port) ((NPE_ID(port) + 2) % 3)
> +#define TX_QUEUE(plat) (NPE_ID(port) + 23)
> +#define RXFREE_QUEUE(plat) (NPE_ID(port) + 26)
> +#define TXDONE_QUEUE 31
> +
> +/* TX Control Registers */
> +#define TX_CNTRL0_TX_EN BIT(0)
> +#define TX_CNTRL0_HALFDUPLEX BIT(1)
> +#define TX_CNTRL0_RETRY BIT(2)
> +#define TX_CNTRL0_PAD_EN BIT(3)
> +#define TX_CNTRL0_APPEND_FCS BIT(4)
> +#define TX_CNTRL0_2DEFER BIT(5)
> +#define TX_CNTRL0_RMII BIT(6) /* reduced MII */
> +#define TX_CNTRL1_RETRIES 0x0F /* 4 bits */
> +
> +/* RX Control Registers */
> +#define RX_CNTRL0_RX_EN BIT(0)
> +#define RX_CNTRL0_PADSTRIP_EN BIT(1)
> +#define RX_CNTRL0_SEND_FCS BIT(2)
> +#define RX_CNTRL0_PAUSE_EN BIT(3)
> +#define RX_CNTRL0_LOOP_EN BIT(4)
> +#define RX_CNTRL0_ADDR_FLTR_EN BIT(5)
> +#define RX_CNTRL0_RX_RUNT_EN BIT(6)
> +#define RX_CNTRL0_BCAST_DIS BIT(7)
> +#define RX_CNTRL1_DEFER_EN BIT(0)
> +
> +/* Core Control Register */
> +#define CORE_RESET BIT(0)
> +#define CORE_RX_FIFO_FLUSH BIT(1)
> +#define CORE_TX_FIFO_FLUSH BIT(2)
> +#define CORE_SEND_JAM BIT(3)
> +#define CORE_MDC_EN BIT(4) /* NPE-B ETH-0 only */
> +
> +/* Definitions for MII access routines */
> +#define MII_CMD_GO BIT(31)
> +#define MII_CMD_WRITE BIT(26)
> +#define MII_STAT_READ_FAILED BIT(31)
> +
> +/* NPE message codes */
> +#define NPE_GETSTATUS 0x00
> +#define NPE_EDB_SETPORTADDRESS 0x01
> +#define NPE_EDB_GETMACADDRESSDATABASE 0x02
> +#define NPE_EDB_SETMACADDRESSSDATABASE 0x03
> +#define NPE_GETSTATS 0x04
> +#define NPE_RESETSTATS 0x05
> +#define NPE_SETMAXFRAMELENGTHS 0x06
> +#define NPE_VLAN_SETRXTAGMODE 0x07
> +#define NPE_VLAN_SETDEFAULTRXVID 0x08
> +#define NPE_VLAN_SETPORTVLANTABLEENTRY 0x09
> +#define NPE_VLAN_SETPORTVLANTABLERANGE 0x0A
> +#define NPE_VLAN_SETRXQOSENTRY 0x0B
> +#define NPE_VLAN_SETPORTIDEXTRACTIONMODE 0x0C
> +#define NPE_STP_SETBLOCKINGSTATE 0x0D
> +#define NPE_FW_SETFIREWALLMODE 0x0E
> +#define NPE_PC_SETFRAMECONTROLDURATIONID 0x0F
> +#define NPE_PC_SETAPMACTABLE 0x11
> +#define NPE_SETLOOPBACK_MODE 0x12
> +#define NPE_PC_SETBSSIDTABLE 0x13
> +#define NPE_ADDRESS_FILTER_CONFIG 0x14
> +#define NPE_APPENDFCSCONFIG 0x15
> +#define NPE_NOTIFY_MAC_RECOVERY_DONE 0x16
> +#define NPE_MAC_RECOVERY_START 0x17
> +
> +
Two returns? Defines make sense in-file here :)
> +struct eth_regs {
> + u32 tx_control[2], __res1[2]; /* 000 */
> + u32 rx_control[2], __res2[2]; /* 010 */
> + u32 random_seed, __res3[3]; /* 020 */
> + u32 partial_empty_threshold, __res4; /* 030 */
> + u32 partial_full_threshold, __res5; /* 038 */
> + u32 tx_start_bytes, __res6[3]; /* 040 */
> + u32 tx_deferral, rx_deferral,__res7[2]; /* 050 */
> + u32 tx_2part_deferral[2], __res8[2]; /* 060 */
> + u32 slot_time, __res9[3]; /* 070 */
> + u32 mdio_command[4]; /* 080 */
> + u32 mdio_status[4]; /* 090 */
> + u32 mcast_mask[6], __res10[2]; /* 0A0 */
> + u32 mcast_addr[6], __res11[2]; /* 0C0 */
> + u32 int_clock_threshold, __res12[3]; /* 0E0 */
> + u32 hw_addr[6], __res13[61]; /* 0F0 */
> + u32 core_control; /* 1FC */
> +};
> +
> +struct port {
> + struct resource *mem_res;
> + struct eth_regs __iomem *regs;
> + struct npe *npe;
> + struct net_device *netdev;
> + struct net_device_stats stat;
> + struct mii_if_info mii;
> + struct delayed_work mdio_thread;
> + struct mac_plat_info *plat;
> + struct sk_buff *rx_skb_tab[PKT_DESCS];
> + struct desc *rx_desc_tab; /* coherent */
> + int id; /* logical port ID */
> + u32 rx_desc_tab_phys;
> + u32 msg_enable;
> +};
> +
> +/* NPE message structure */
> +struct msg {
> + union {
> + struct {
> + u8 cmd, eth_id, mac[ETH_ALEN];
> + };
> + struct {
> + u8 cmd, eth_id, __byte2, byte3;
> + u8 __byte4, byte5, __byte6, byte7;
> + };
> + struct {
> + u8 cmd, eth_id, __b2, byte3;
> + u32 data32;
> + };
> + };
> +};
> +
> +/* Ethernet packet descriptor */
> +struct desc {
> + u32 next; /* pointer to next buffer, unused */
> + u16 buf_len; /* buffer length */
> + u16 pkt_len; /* packet length */
> + u32 data; /* pointer to data buffer in RAM */
> + u8 dest_id;
> + u8 src_id;
> + u16 flags;
> + u8 qos;
> + u8 padlen;
> + u16 vlan_tci;
> + u8 dest_mac[ETH_ALEN];
> + u8 src_mac[ETH_ALEN];
> +};
> +
> +
> +#define rx_desc_phys(port, n) ((port)->rx_desc_tab_phys + \
> + (n) * sizeof(struct desc))
> +#define tx_desc_phys(n) (tx_desc_tab_phys + (n) * sizeof(struct desc))
> +
> +static spinlock_t mdio_lock;
> +static struct eth_regs __iomem *mdio_regs; /* mdio command and status only */
> +static struct npe *mdio_npe;
> +static int ports_open;
> +static struct dma_pool *dma_pool;
> +static struct sk_buff *tx_skb_tab[PKT_DESCS];
> +static struct desc *tx_desc_tab; /* coherent */
> +static u32 tx_desc_tab_phys;
> +
> +
> +static inline void set_regbits(u32 bits, u32 __iomem *reg)
> +{
> + __raw_writel(__raw_readl(reg) | bits, reg);
> +}
> +static inline void clr_regbits(u32 bits, u32 __iomem *reg)
> +{
> + __raw_writel(__raw_readl(reg) & ~bits, reg);
> +}
> +
> +
> +static u16 mdio_cmd(struct net_device *dev, int phy_id, int location,
> + int write, u16 cmd)
> +{
> + int cycles = 0;
> +
> + if (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80) {
> + printk("%s: MII not ready to transmit\n", dev->name);
> + return 0; /* not ready to transmit */
> + }
> +
> + if (write) {
> + __raw_writel(cmd & 0xFF, &mdio_regs->mdio_command[0]);
> + __raw_writel(cmd >> 8, &mdio_regs->mdio_command[1]);
> + }
> + __raw_writel(((phy_id << 5) | location) & 0xFF,
> + &mdio_regs->mdio_command[2]);
> + __raw_writel((phy_id >> 3) | (write << 2) | 0x80 /* GO */,
> + &mdio_regs->mdio_command[3]);
> +
> + while ((cycles < MAX_MDIO_RETRIES) &&
> + (__raw_readl(&mdio_regs->mdio_command[3]) & 0x80)) {
> + udelay(1);
> + cycles++;
> + }
> +
> + if (cycles == MAX_MDIO_RETRIES) {
> + printk("%s: MII write failed\n", dev->name);
> + return 0;
> + }
> +
> +#if DEBUG_MDIO
> + printk(KERN_DEBUG "mdio_cmd() took %i cycles\n", cycles);
> +#endif
> +
> + if (write)
> + return 0;
> +
> + if (__raw_readl(&mdio_regs->mdio_status[3]) & 0x80) {
> + printk("%s: MII read failed\n", dev->name);
> + return 0;
> + }
> +
> + return (__raw_readl(&mdio_regs->mdio_status[0]) & 0xFF) |
> + (__raw_readl(&mdio_regs->mdio_status[1]) << 8);
> +}
> +
> +static int mdio_read(struct net_device *dev, int phy_id, int location)
> +{
> + unsigned long flags;
> + u16 val;
> +
> + spin_lock_irqsave(&mdio_lock, flags);
> + val = mdio_cmd(dev, phy_id, location, 0, 0);
> + spin_unlock_irqrestore(&mdio_lock, flags);
> + return val;
> +}
> +
> +static void mdio_write(struct net_device *dev, int phy_id, int location,
> + int val)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&mdio_lock, flags);
> + mdio_cmd(dev, phy_id, location, 1, val);
> + spin_unlock_irqrestore(&mdio_lock, flags);
> +}
> +
> +static void eth_set_duplex(struct port *port)
> +{
> + if (port->mii.full_duplex)
> + clr_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
> + else
> + set_regbits(TX_CNTRL0_HALFDUPLEX, &port->regs->tx_control[0]);
> +}
> +
> +
> +static void mdio_thread(struct work_struct *work)
> +{
> + struct port *port = container_of(work, struct port, mdio_thread.work);
> +
> + if (mii_check_media(&port->mii, 1, 0))
> + eth_set_duplex(port);
> + schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
> +}
> +
> +
> +static inline void debug_skb(const char *func, struct sk_buff *skb)
> +{
> +#if DEBUG_PKT_BYTES
> + int i;
> +
> + printk(KERN_DEBUG "%s(%i): ", func, skb->len);
> + for (i = 0; i < skb->len; i++) {
> + if (i >= DEBUG_PKT_BYTES)
> + break;
> + printk("%s%02X",
> + ((i == 6) || (i == 12) || (i >= 14)) ? " " : "",
> + skb->data[i]);
> + }
> + printk("\n");
> +#endif
> +}
> +
> +
> +static inline void debug_desc(unsigned int queue, u32 desc_phys,
> + struct desc *desc, int is_get)
> +{
> +#if DEBUG_QUEUES
> + const char *op = is_get ? "->" : "<-";
> +
> + if (!desc_phys) {
> + printk(KERN_DEBUG "queue %2i %s NULL\n", queue, op);
> + return;
> + }
> + printk(KERN_DEBUG "queue %2i %s %X: %X %3X %3X %08X %2X < %2X %4X %X"
> +        " %X %X %02X%02X%02X%02X%02X%02X < %02X%02X%02X%02X%02X%02X\n",
> +        queue, op, desc_phys, desc->next, desc->buf_len, desc->pkt_len,
> + desc->data, desc->dest_id, desc->src_id,
> + desc->flags, desc->qos,
> + desc->padlen, desc->vlan_tci,
> + desc->dest_mac[0], desc->dest_mac[1],
> + desc->dest_mac[2], desc->dest_mac[3],
> + desc->dest_mac[4], desc->dest_mac[5],
> + desc->src_mac[0], desc->src_mac[1],
> + desc->src_mac[2], desc->src_mac[3],
> + desc->src_mac[4], desc->src_mac[5]);
> +#endif
> +}
> +
> +static inline int queue_get_desc(unsigned int queue, struct port *port,
> + int is_tx)
> +{
> + u32 phys, tab_phys, n_desc;
> + struct desc *tab;
> +
> + if (!(phys = qmgr_get_entry(queue))) {
> + debug_desc(queue, phys, NULL, 1);
> + return -1;
> + }
> +
> + phys &= ~0x1F; /* mask out non-address bits */
> + tab_phys = is_tx ? tx_desc_phys(0) : rx_desc_phys(port, 0);
> + tab = is_tx ? tx_desc_tab : port->rx_desc_tab;
> + n_desc = (phys - tab_phys) / sizeof(struct desc);
> + BUG_ON(n_desc >= PKT_DESCS);
> +
> + debug_desc(queue, phys, &tab[n_desc], 1);
> + BUG_ON(tab[n_desc].next);
> + return n_desc;
> +}
> +
> +static inline void queue_put_desc(unsigned int queue, u32 desc_phys,
> + struct desc *desc)
> +{
> + debug_desc(queue, desc_phys, desc, 0);
> + BUG_ON(desc_phys & 0x1F);
> + qmgr_put_entry(queue, desc_phys);
> +}
> +
> +
> +static void eth_rx_irq(void *pdev)
> +{
> + struct net_device *dev = pdev;
> + struct port *port = netdev_priv(dev);
> +
> +#if DEBUG_RX
> + printk(KERN_DEBUG "eth_rx_irq() start\n");
> +#endif
> + qmgr_disable_irq(port->plat->rxq);
> + netif_rx_schedule(dev);
> +}
> +
> +static int eth_poll(struct net_device *dev, int *budget)
> +{
> + struct port *port = netdev_priv(dev);
> + unsigned int queue = port->plat->rxq;
> + int quota = dev->quota, received = 0;
> +
> +#if DEBUG_RX
> + printk(KERN_DEBUG "eth_poll() start\n");
> +#endif
> + while (quota) {
> + struct sk_buff *old_skb, *new_skb;
> + struct desc *desc;
> + u32 data;
> + int n = queue_get_desc(queue, port, 0);
> + if (n < 0) { /* No packets received */
> + dev->quota -= received;
> + *budget -= received;
> + received = 0;
> + netif_rx_complete(dev);
> + qmgr_enable_irq(queue);
> + if (!qmgr_stat_empty(queue) &&
> + netif_rx_reschedule(dev, 0)) {
> + qmgr_disable_irq(queue);
> + continue;
> + }
> + return 0; /* all work done */
> + }
> +
> + desc = &port->rx_desc_tab[n];
> +
> + if ((new_skb = netdev_alloc_skb(dev, MAX_MRU)) != NULL) {
> +#if 0
> + skb_reserve(new_skb, 2); /* FIXME */
> +#endif
> + data = dma_map_single(&dev->dev, new_skb->data,
> + MAX_MRU, DMA_FROM_DEVICE);
> + }
> +
> + if (!new_skb || dma_mapping_error(data)) {
> + if (new_skb)
> + dev_kfree_skb(new_skb);
> + port->stat.rx_dropped++;
> + /* put the desc back on RX-ready queue */
> + desc->buf_len = MAX_MRU;
> + desc->pkt_len = 0;
> + queue_put_desc(RXFREE_QUEUE(port->plat),
> + rx_desc_phys(port, n), desc);
> + BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
> + continue;
> + }
> +
> + /* process received skb */
> + old_skb = port->rx_skb_tab[n];
> + dma_unmap_single(&dev->dev, desc->data,
> + MAX_MRU, DMA_FROM_DEVICE);
> + skb_put(old_skb, desc->pkt_len);
> +
> + debug_skb("eth_poll", old_skb);
> +
> + old_skb->protocol = eth_type_trans(old_skb, dev);
> + dev->last_rx = jiffies;
> + port->stat.rx_packets++;
> + port->stat.rx_bytes += old_skb->len;
> + netif_receive_skb(old_skb);
> +
> + /* put the new skb on RX-free queue */
> + port->rx_skb_tab[n] = new_skb;
> + desc->buf_len = MAX_MRU;
> + desc->pkt_len = 0;
> + desc->data = data;
> + queue_put_desc(RXFREE_QUEUE(port->plat),
> + rx_desc_phys(port, n), desc);
> + BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
> + quota--;
> + received++;
> + }
> + dev->quota -= received;
> + *budget -= received;
> + return 1; /* not all work done */
> +}
> +
> +static void eth_xmit_ready_irq(void *pdev)
> +{
> + struct net_device *dev = pdev;
> +
> +#if DEBUG_TX
> + printk(KERN_DEBUG "eth_xmit_empty() start\n");
> +#endif
> + netif_start_queue(dev);
> +}
> +
> +static int eth_xmit(struct sk_buff *skb, struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + struct desc *desc;
> + u32 phys;
> + struct sk_buff *old_skb;
> + int n;
> +
> +#if DEBUG_TX
> + printk(KERN_DEBUG "eth_xmit() start\n");
> +#endif
> + if (unlikely(skb->len > MAX_MRU)) {
> + dev_kfree_skb(skb);
> + port->stat.tx_errors++;
> + return NETDEV_TX_OK;
> + }
> +
> + n = queue_get_desc(TXDONE_QUEUE, port, 1);
> + BUG_ON(n < 0);
> + desc = &tx_desc_tab[n];
> + phys = tx_desc_phys(n);
> +
> + if ((old_skb = tx_skb_tab[n]) != NULL) {
> + dma_unmap_single(&dev->dev, desc->data,
> + desc->buf_len, DMA_TO_DEVICE);
> + port->stat.tx_packets++;
> + port->stat.tx_bytes += old_skb->len;
> + dev_kfree_skb(old_skb);
> + }
> +
> + /* disable VLAN functions in NPE image for now */
> + memset(desc, 0, sizeof(*desc));
> + desc->buf_len = desc->pkt_len = skb->len;
> + desc->data = dma_map_single(&dev->dev, skb->data,
> + skb->len, DMA_TO_DEVICE);
> + if (dma_mapping_error(desc->data)) {
> + desc->data = 0;
> + dev_kfree_skb(skb);
> + tx_skb_tab[n] = NULL;
> + port->stat.tx_dropped++;
> + /* put the desc back on TX-done queue */
> + queue_put_desc(TXDONE_QUEUE, phys, desc);
> + return 0;
> + }
> +
> + tx_skb_tab[n] = skb;
> + debug_skb("eth_xmit", skb);
> +
> + /* NPE firmware pads short frames with zeros internally */
> + wmb();
> + queue_put_desc(TX_QUEUE(port->plat), phys, desc);
> + BUG_ON(qmgr_stat_overflow(TX_QUEUE(port->plat)));
> + dev->trans_start = jiffies;
> +
> + if (qmgr_stat_full(TX_QUEUE(port->plat))) {
> + netif_stop_queue(dev);
> + /* we could miss TX ready interrupt */
> + if (!qmgr_stat_full(TX_QUEUE(port->plat))) {
> + netif_start_queue(dev);
> + }
> + }
> +
> +#if DEBUG_TX
> + printk(KERN_DEBUG "eth_xmit() end\n");
> +#endif
> + return NETDEV_TX_OK;
> +}
> +
> +
> +static struct net_device_stats *eth_stats(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + return &port->stat;
> +}
> +
> +static void eth_set_mcast_list(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + struct dev_mc_list *mclist = dev->mc_list;
> + u8 diffs[ETH_ALEN], *addr;
> + int cnt = dev->mc_count, i;
> +
> + if ((dev->flags & IFF_PROMISC) || !mclist || !cnt) {
> + clr_regbits(RX_CNTRL0_ADDR_FLTR_EN,
> + &port->regs->rx_control[0]);
> + return;
> + }
> +
> + memset(diffs, 0, ETH_ALEN);
> + addr = mclist->dmi_addr; /* first MAC address */
> +
> + while (--cnt && (mclist = mclist->next))
> + for (i = 0; i < ETH_ALEN; i++)
> + diffs[i] |= addr[i] ^ mclist->dmi_addr[i];
> +
> + for (i = 0; i < ETH_ALEN; i++) {
> + __raw_writel(addr[i], &port->regs->mcast_addr[i]);
> + __raw_writel(~diffs[i], &port->regs->mcast_mask[i]);
> + }
> +
> + set_regbits(RX_CNTRL0_ADDR_FLTR_EN, &port->regs->rx_control[0]);
> +}
> +
> +
> +static int eth_ioctl(struct net_device *dev, struct ifreq *req,
> int cmd)
> +{
> + struct port *port = netdev_priv(dev);
> + unsigned int duplex_chg;
> + int err;
> +
> + if (!netif_running(dev))
> + return -EINVAL;
> + err = generic_mii_ioctl(&port->mii, if_mii(req), cmd, &duplex_chg);
> + if (duplex_chg)
> + eth_set_duplex(port);
> + return err;
> +}
> +
> +
> +static int request_queues(struct port *port)
> +{
> + int err;
> +
> + err = qmgr_request_queue(RXFREE_QUEUE(port->plat), PKT_DESCS, 0, 0);
> + if (err)
> + return err;
> +
> + err = qmgr_request_queue(port->plat->rxq, PKT_DESCS, 0, 0);
> + if (err)
> + goto rel_rxfree;
> +
> + err = qmgr_request_queue(TX_QUEUE(port->plat), TX_QUEUE_LEN, 0, 0);
> + if (err)
> + goto rel_rx;
> +
> + /* TX-done queue handles skbs sent out by the NPEs */
> + if (!ports_open) {
> + err = qmgr_request_queue(TXDONE_QUEUE, PKT_DESCS, 0, 0);
> + if (err)
> + goto rel_tx;
> + }
> + return 0;
> +
> +rel_tx:
> + qmgr_release_queue(TX_QUEUE(port->plat));
> +rel_rx:
> + qmgr_release_queue(port->plat->rxq);
> +rel_rxfree:
> + qmgr_release_queue(RXFREE_QUEUE(port->plat));
> + return err;
> +}
> +
> +static void release_queues(struct port *port)
> +{
> + qmgr_release_queue(RXFREE_QUEUE(port->plat));
> + qmgr_release_queue(port->plat->rxq);
> + qmgr_release_queue(TX_QUEUE(port->plat));
> +
> + if (!ports_open)
> + qmgr_release_queue(TXDONE_QUEUE);
> +}
> +
> +static int init_queues(struct port *port)
> +{
> + int i;
> +
> + if (!dma_pool) {
> + /* Setup TX descriptors - common to all ports */
> + dma_pool = dma_pool_create(DRV_NAME, NULL, POOL_ALLOC_SIZE,
> + 32, 0);
> + if (!dma_pool)
> + return -ENOMEM;
> +
> + if (!(tx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
> + &tx_desc_tab_phys)))
> + return -ENOMEM;
> + memset(tx_desc_tab, 0, POOL_ALLOC_SIZE);
> + memset(tx_skb_tab, 0, sizeof(tx_skb_tab)); /* static table */
> +
> + for (i = 0; i < PKT_DESCS; i++) {
> + queue_put_desc(TXDONE_QUEUE, tx_desc_phys(i),
> + &tx_desc_tab[i]);
> + BUG_ON(qmgr_stat_overflow(TXDONE_QUEUE));
> + }
> + }
> +
> + /* Setup RX buffers */
> + if (!(port->rx_desc_tab = dma_pool_alloc(dma_pool, GFP_KERNEL,
> + &port->rx_desc_tab_phys)))
> + return -ENOMEM;
> + memset(port->rx_desc_tab, 0, POOL_ALLOC_SIZE);
> + memset(port->rx_skb_tab, 0, sizeof(port->rx_skb_tab)); /* table */
> +
> + for (i = 0; i < PKT_DESCS; i++) {
> + struct desc *desc = &port->rx_desc_tab[i];
> + struct sk_buff *skb;
> +
> + if (!(skb = netdev_alloc_skb(port->netdev, MAX_MRU)))
> + return -ENOMEM;
> + port->rx_skb_tab[i] = skb;
> + desc->buf_len = MAX_MRU;
> +#if 0
> + skb_reserve(skb, 2); /* FIXME */
> +#endif
Hallo :o)
> + desc->data = dma_map_single(&port->netdev->dev, skb->data,
> + MAX_MRU, DMA_FROM_DEVICE);
> + if (dma_mapping_error(desc->data)) {
> + desc->data = 0;
> + return -EIO;
> + }
> + queue_put_desc(RXFREE_QUEUE(port->plat),
> + rx_desc_phys(port, i), desc);
> + BUG_ON(qmgr_stat_overflow(RXFREE_QUEUE(port->plat)));
> + }
> + return 0;
> +}
> +
> +static void destroy_queues(struct port *port)
> +{
> + int i;
> +
> + while (queue_get_desc(RXFREE_QUEUE(port->plat), port, 0) >= 0)
> + /* nothing to do here */;
> + while (queue_get_desc(port->plat->rxq, port, 0) >= 0)
> + /* nothing to do here */;
> + while (queue_get_desc(TX_QUEUE(port->plat), port, 1) >= 0) {
> + /* nothing to do here */;
> + }
> + if (!ports_open)
> + while (queue_get_desc(TXDONE_QUEUE, port, 1) >= 0)
> + /* nothing to do here */;
> +
> + if (port->rx_desc_tab) {
> + for (i = 0; i < PKT_DESCS; i++) {
> + struct desc *desc = &port->rx_desc_tab[i];
> + struct sk_buff *skb = port->rx_skb_tab[i];
> + if (skb) {
> + if (desc->data)
> + dma_unmap_single(&port->netdev->dev,
> + desc->data, MAX_MRU,
> + DMA_FROM_DEVICE);
> + dev_kfree_skb(skb);
> + }
> + }
> + dma_pool_free(dma_pool, port->rx_desc_tab,
> + port->rx_desc_tab_phys);
> + port->rx_desc_tab = NULL;
> + }
> +
> + if (!ports_open && tx_desc_tab) {
> + for (i = 0; i < PKT_DESCS; i++) {
> + struct desc *desc = &tx_desc_tab[i];
> + struct sk_buff *skb = tx_skb_tab[i];
> + if (skb) {
> + if (desc->data)
> + dma_unmap_single(&port->netdev->dev,
> + desc->data,
> + desc->buf_len,
> + DMA_TO_DEVICE);
> + dev_kfree_skb(skb);
> + }
> + }
> + dma_pool_free(dma_pool, tx_desc_tab, tx_desc_tab_phys);
> + tx_desc_tab = NULL;
> + }
> + if (!ports_open && dma_pool) {
> + dma_pool_destroy(dma_pool);
> + dma_pool = NULL;
> + }
> +}
> +
> +static int eth_load_firmware(struct net_device *dev, struct npe *npe)
> +{
> + struct msg msg;
> + int err;
> +
> + if ((err = npe_load_firmware(npe, npe_name(npe), &dev->dev)) != 0)
> + return err;
> +
> + if ((err = npe_recv_message(npe, &msg, "ETH_GET_STATUS")) != 0) {
> + printk(KERN_ERR "%s: %s not responding\n", dev->name,
> + npe_name(npe));
> + return err;
> + }
> + return 0;
> +}
> +
> +static int eth_open(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> + struct npe *npe = port->npe;
> + struct msg msg;
> + int i, err;
> +
> + if (!npe_running(npe))
> + if (eth_load_firmware(dev, npe))
> + return -EIO;
> +
> + if (!npe_running(mdio_npe))
> + if (eth_load_firmware(dev, mdio_npe))
> + return -EIO;
> +
> + memset(&msg, 0, sizeof(msg));
> + msg.cmd = NPE_VLAN_SETRXQOSENTRY;
> + msg.eth_id = port->id;
> + msg.byte5 = port->plat->rxq | 0x80;
> + msg.byte7 = port->plat->rxq << 4;
> + for (i = 0; i < 8; i++) {
> + msg.byte3 = i;
> + if (npe_send_recv_message(port->npe, &msg, "ETH_SET_RXQ"))
> + return -EIO;
> + }
> +
> + msg.cmd = NPE_EDB_SETPORTADDRESS;
> + msg.eth_id = PHYSICAL_ID(port);
> + memcpy(msg.mac, dev->dev_addr, ETH_ALEN);
> + if (npe_send_recv_message(port->npe, &msg, "ETH_SET_MAC"))
> + return -EIO;
> +
> + memset(&msg, 0, sizeof(msg));
> + msg.cmd = NPE_FW_SETFIREWALLMODE;
> + msg.eth_id = port->id;
> + if (npe_send_recv_message(port->npe, &msg, "ETH_SET_FIREWALL_MODE"))
> + return -EIO;
> +
> + if ((err = request_queues(port)) != 0)
> + return err;
> +
> + if ((err = init_queues(port)) != 0) {
> + destroy_queues(port);
> + release_queues(port);
> + return err;
> + }
> +
> + for (i = 0; i < ETH_ALEN; i++)
> + __raw_writel(dev->dev_addr[i], &port->regs->hw_addr[i]);
> + __raw_writel(0x08, &port->regs->random_seed);
> + __raw_writel(0x12, &port->regs->partial_empty_threshold);
> + __raw_writel(0x30, &port->regs->partial_full_threshold);
> + __raw_writel(0x08, &port->regs->tx_start_bytes);
> + __raw_writel(0x15, &port->regs->tx_deferral);
> + __raw_writel(0x08, &port->regs->tx_2part_deferral[0]);
> + __raw_writel(0x07, &port->regs->tx_2part_deferral[1]);
> + __raw_writel(0x80, &port->regs->slot_time);
> + __raw_writel(0x01, &port->regs->int_clock_threshold);
> + __raw_writel(TX_CNTRL1_RETRIES, &port->regs->tx_control[1]);
> + __raw_writel(TX_CNTRL0_TX_EN | TX_CNTRL0_RETRY | TX_CNTRL0_PAD_EN |
> + TX_CNTRL0_APPEND_FCS | TX_CNTRL0_2DEFER,
> + &port->regs->tx_control[0]);
> + __raw_writel(0, &port->regs->rx_control[1]);
> + __raw_writel(RX_CNTRL0_RX_EN | RX_CNTRL0_PADSTRIP_EN,
> + &port->regs->rx_control[0]);
> +
> + if (mii_check_media(&port->mii, 1, 1))
> + eth_set_duplex(port);
> + eth_set_mcast_list(dev);
> + netif_start_queue(dev);
> + schedule_delayed_work(&port->mdio_thread, MDIO_INTERVAL);
> +
> + qmgr_set_irq(port->plat->rxq, QUEUE_IRQ_SRC_NOT_EMPTY,
> + eth_rx_irq, dev);
> + qmgr_set_irq(TX_QUEUE(port->plat), QUEUE_IRQ_SRC_NOT_FULL,
> + eth_xmit_ready_irq, dev);
> + qmgr_enable_irq(port->plat->rxq);
> + qmgr_enable_irq(TX_QUEUE(port->plat));
> + ports_open++;
> + return 0;
> +}
> +
> +static int eth_close(struct net_device *dev)
> +{
> + struct port *port = netdev_priv(dev);
> +
> + ports_open--;
> + qmgr_disable_irq(port->plat->rxq);
> + qmgr_disable_irq(TX_QUEUE(port->plat));
> + netif_stop_queue(dev);
> +
> + clr_regbits(RX_CNTRL0_RX_EN, &port->regs->rx_control[0]);
> + clr_regbits(TX_CNTRL0_TX_EN, &port->regs->tx_control[0]);
> + set_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
> + &port->regs->core_control);
> + udelay(10);
> + clr_regbits(CORE_RESET | CORE_RX_FIFO_FLUSH | CORE_TX_FIFO_FLUSH,
> + &port->regs->core_control);
> +
> + cancel_rearming_delayed_work(&port->mdio_thread);
> + destroy_queues(port);
> + release_queues(port);
> + return 0;
> +}
> +
> +static int __devinit eth_init_one(struct platform_device *pdev)
> +{
> + struct port *port;
> + struct net_device *dev;
> + struct mac_plat_info *plat = pdev->dev.platform_data;
> + u32 regs_phys;
> + int err;
> +
> + if (!(dev = alloc_etherdev(sizeof(struct port))))
> + return -ENOMEM;
> +
> + SET_MODULE_OWNER(dev);
> + SET_NETDEV_DEV(dev, &pdev->dev);
> + port = netdev_priv(dev);
> + port->netdev = dev;
> + port->id = pdev->id;
> +
> + switch (port->id) {
> + case IXP4XX_ETH_NPEA:
> + port->regs = (struct eth_regs __iomem *)IXP4XX_EthA_BASE_VIRT;
> + regs_phys = IXP4XX_EthA_BASE_PHYS;
> + break;
> + case IXP4XX_ETH_NPEB:
> + port->regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
> + regs_phys = IXP4XX_EthB_BASE_PHYS;
> + break;
> + case IXP4XX_ETH_NPEC:
> + port->regs = (struct eth_regs __iomem *)IXP4XX_EthC_BASE_VIRT;
> + regs_phys = IXP4XX_EthC_BASE_PHYS;
> + break;
> + default:
> + err = -ENOSYS;
> + goto err_free;
> + }
> +
> + dev->open = eth_open;
> + dev->hard_start_xmit = eth_xmit;
> + dev->poll = eth_poll;
> + dev->stop = eth_close;
> + dev->get_stats = eth_stats;
> + dev->do_ioctl = eth_ioctl;
> + dev->set_multicast_list = eth_set_mcast_list;
> + dev->weight = 16;
> + dev->tx_queue_len = 100;
> +
> + if (!(port->npe = npe_request(NPE_ID(port)))) {
> + err = -EIO;
> + goto err_free;
> + }
> +
> + if (register_netdev(dev)) {
> + err = -EIO;
> + goto err_npe_rel;
> + }
> +
> + port->mem_res = request_mem_region(regs_phys, REGS_SIZE, dev->name);
> + if (!port->mem_res) {
> + err = -EBUSY;
> + goto err_unreg;
> + }
> +
> + port->plat = plat;
> + memcpy(dev->dev_addr, plat->hwaddr, ETH_ALEN);
I think my comment about adding randomised MAC addresses when no
hwaddr is supplied still stands - it's really not that complex.
Christian's driver did this:
/* The place of the MAC address is very system dependent.
* Here we use a random one to be replaced by one of the
* following commands:
* "ip link set address 02:03:04:04:04:01 dev eth0"
* "ifconfig eth0 hw ether 02:03:04:04:04:07"
*/
if (is_zero_ether_addr(plat->hwaddr)) {
random_ether_addr(dev->dev_addr);
dev->dev_addr[5] = plat->phy_id;
}
else
memcpy(dev->dev_addr, plat->hwaddr, 6);
> +
> + platform_set_drvdata(pdev, dev);
> +
> + __raw_writel(CORE_RESET, &port->regs->core_control);
> + udelay(50);
> + __raw_writel(CORE_MDC_EN, &port->regs->core_control);
> + udelay(50);
> +
> + port->mii.dev = dev;
> + port->mii.mdio_read = mdio_read;
> + port->mii.mdio_write = mdio_write;
> + port->mii.phy_id = plat->phy;
> + port->mii.phy_id_mask = 0x1F;
> + port->mii.reg_num_mask = 0x1F;
> +
> + INIT_DELAYED_WORK(&port->mdio_thread, mdio_thread);
> +
> + printk(KERN_INFO "%s: MII PHY %i on %s\n", dev->name, plat->phy,
> + npe_name(port->npe));
> + return 0;
> +
> +err_unreg:
> + unregister_netdev(dev);
> +err_npe_rel:
> + npe_release(port->npe);
> +err_free:
> + free_netdev(dev);
> + return err;
> +}
> +
> +static int __devexit eth_remove_one(struct platform_device *pdev)
> +{
> + struct net_device *dev = platform_get_drvdata(pdev);
> + struct port *port = netdev_priv(dev);
> +
> + unregister_netdev(dev);
> + platform_set_drvdata(pdev, NULL);
> + npe_release(port->npe);
> + release_resource(port->mem_res);
> + free_netdev(dev);
> + return 0;
> +}
> +
> +static struct platform_driver drv = {
> + .driver.name = DRV_NAME,
> + .probe = eth_init_one,
> + .remove = eth_remove_one,
> +};
> +
> +static int __init eth_init_module(void)
> +{
> + if (!(ixp4xx_read_fuses() & IXP4XX_FUSE_NPEB_ETH0))
> + return -ENOSYS;
> +
> + /* All MII PHY accesses use NPE-B Ethernet registers */
> + if (!(mdio_npe = npe_request(1)))
> + return -EIO;
> + spin_lock_init(&mdio_lock);
> + mdio_regs = (struct eth_regs __iomem *)IXP4XX_EthB_BASE_VIRT;
> +
> + return platform_driver_register(&drv);
> +}
> +
> +static void __exit eth_cleanup_module(void)
> +{
> + platform_driver_unregister(&drv);
> + npe_release(mdio_npe);
> +}
> +
> +MODULE_AUTHOR("Krzysztof Halasa");
> +MODULE_DESCRIPTION("Intel IXP4xx Ethernet driver");
> +MODULE_LICENSE("GPL v2");
> +module_init(eth_init_module);
For our flash and eeprom notifiers to work, we need this converted to
a late_initcall:
http://trac.nslu2-linux.org/kernel/browser/trunk/patches/2.6.21/37-
ixp4xx-net-driver-fix-mac-handling.patch
akpm suggested this fix, but we don't know for certain whether it's
acceptable upstream.
> +module_exit(eth_cleanup_module);
> diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig
> index 5f79622..b891e10 100644
> --- a/drivers/net/wan/Kconfig
> +++ b/drivers/net/wan/Kconfig
> @@ -342,6 +342,16 @@ config DSCC4_PCI_RST
>
> Say Y if your card supports this feature.
>
> +config IXP4XX_HSS
> + tristate "IXP4xx HSS (synchronous serial port) support"
> + depends on ARM && ARCH_IXP4XX
> + select IXP4XX_NPE
> + select IXP4XX_QMGR
> + select HDLC
> + help
> + Say Y here if you want to use built-in HSS ports
> + on the IXP4xx processor.
> +
> config DLCI
> tristate "Frame Relay DLCI support"
> ---help---
> diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile
> index d61fef3..1b1d116 100644
> --- a/drivers/net/wan/Makefile
> +++ b/drivers/net/wan/Makefile
> @@ -42,6 +42,7 @@ obj-$(CONFIG_C101) += c101.o
> obj-$(CONFIG_WANXL) += wanxl.o
> obj-$(CONFIG_PCI200SYN) += pci200syn.o
> obj-$(CONFIG_PC300TOO) += pc300too.o
> +obj-$(CONFIG_IXP4XX_HSS) += ixp4xx_hss.o
>
> clean-files := wanxlfw.inc
> $(obj)/wanxl.o: $(obj)/wanxlfw.inc
> diff --git a/drivers/net/wan/ixp4xx_hss.c b/drivers/net/wan/
> ixp4xx_hss.c
> new file mode 100644
> index 0000000..ed56ed8
> --- /dev/null
> +++ b/drivers/net/wan/ixp4xx_hss.c
> @@ -0,0 +1,1048 @@
> +/*
> + * Intel IXP4xx HSS (synchronous serial port) driver for Linux
> + *
> + * Copyright (C) 2007 Krzysztof Halasa <khc@pm.waw.pl>
> + *
> + * This program is free software; you can redistribute it and/or
> modify it
> + * under the terms of version 2 of the GNU General Public License
> + * as published by the Free Software Foundation.
> + */
> +
> +#include <linux/dma-mapping.h>
> +#include <linux/dmapool.h>
> +#include <linux/kernel.h>
> +#include <linux/hdlc.h>
> +#include <linux/platform_device.h>
> +#include <asm/io.h>
> +#include <asm/arch/npe.h>
> +#include <asm/arch/qmgr.h>
> +
> +#ifndef __ARMEB__
> +#warning Little endian mode not supported
> +#endif
Personally I'm less fussed about WAN / LE support. Anyone with any
sense will run ixp4xx boards doing such specialised network operations
in BE mode. Also, NSLU2-Linux can't test this functionality with our
LE setup as we don't have this hardware on-board. You may just want to
declare a depends on ARMEB in Kconfig (with or without an || BROKEN
escape hatch) and have done with it - it's up to you.
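A sketch of the suggested Kconfig change, assuming the IXP4XX_HSS entry from this patch series (whether to add the BROKEN escape hatch is a judgment call):

```
config IXP4XX_HSS
	tristate "IXP4xx HSS (synchronous serial port) support"
	depends on ARM && ARCH_IXP4XX
	depends on ARMEB || BROKEN
	select IXP4XX_NPE
	select IXP4XX_QMGR
	select HDLC
	help
	  Say Y here if you want to use built-in HSS ports
	  on the IXP4xx processor.
```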
> +
> +#define DEBUG_QUEUES 0
> +#define DEBUG_RX 0
> +#define DEBUG_TX 0
> +
> +#define DRV_NAME "ixp4xx_hss"
> +#define DRV_VERSION "0.03"
> +
> +#define PKT_EXTRA_FLAGS 0 /* orig 1 */
> +#define FRAME_SYNC_OFFSET 0 /* unused, channelized only */
> +#define FRAME_SYNC_SIZE 1024
> +#define PKT_NUM_PIPES 1 /* 1, 2 or 4 */
> +#define PKT_PIPE_FIFO_SIZEW 4 /* total 4 dwords per HSS */
> +
> +#define RX_DESCS 16 /* also length of queues: RX-ready, RX */
> +#define TX_DESCS 16 /* also length of queues: TX-done, TX */
> +
> +#define POOL_ALLOC_SIZE (sizeof(struct desc) * (RX_DESCS +
> TX_DESCS))
> +#define RX_SIZE (HDLC_MAX_MRU + 4) /* NPE needs more space */
> +
> +/* Queue IDs */
> +#define HSS0_CHL_RXTRIG_QUEUE 12 /* orig size = 32 dwords */
> +#define HSS0_PKT_RX_QUEUE 13 /* orig size = 32 dwords */
> +#define HSS0_PKT_TX0_QUEUE 14 /* orig size = 16 dwords */
> +#define HSS0_PKT_TX1_QUEUE 15
> +#define HSS0_PKT_TX2_QUEUE 16
> +#define HSS0_PKT_TX3_QUEUE 17
> +#define HSS0_PKT_RXFREE0_QUEUE 18 /* orig size = 16 dwords */
> +#define HSS0_PKT_RXFREE1_QUEUE 19
> +#define HSS0_PKT_RXFREE2_QUEUE 20
> +#define HSS0_PKT_RXFREE3_QUEUE 21
> +#define HSS0_PKT_TXDONE_QUEUE 22 /* orig size = 64 dwords */
> +
> +#define HSS1_CHL_RXTRIG_QUEUE 10
> +#define HSS1_PKT_RX_QUEUE 0
> +#define HSS1_PKT_TX0_QUEUE 5
> +#define HSS1_PKT_TX1_QUEUE 6
> +#define HSS1_PKT_TX2_QUEUE 7
> +#define HSS1_PKT_TX3_QUEUE 8
> +#define HSS1_PKT_RXFREE0_QUEUE 1
> +#define HSS1_PKT_RXFREE1_QUEUE 2
> +#define HSS1_PKT_RXFREE2_QUEUE 3
> +#define HSS1_PKT_RXFREE3_QUEUE 4
> +#define HSS1_PKT_TXDONE_QUEUE 9
> +
> +#define NPE_PKT_MODE_HDLC 0
> +#define NPE_PKT_MODE_RAW 1
> +#define NPE_PKT_MODE_56KMODE 2
> +#define NPE_PKT_MODE_56KENDIAN_MSB 4
> +
> +/* PKT_PIPE_HDLC_CFG_WRITE flags */
> +#define PKT_HDLC_IDLE_ONES 0x1 /* default = flags */
> +#define PKT_HDLC_CRC_32 0x2 /* default = CRC-16 */
> +#define PKT_HDLC_MSB_ENDIAN 0x4 /* default = LE */
> +
> +
> +/* hss_config, PCRs */
> +/* Frame sync sampling, default = active low */
> +#define PCR_FRM_SYNC_ACTIVE_HIGH 0x40000000
> +#define PCR_FRM_SYNC_FALLINGEDGE 0x80000000
> +#define PCR_FRM_SYNC_RISINGEDGE 0xC0000000
> +
> +/* Frame sync pin: input (default) or output generated off a given
> clk edge */
> +#define PCR_FRM_SYNC_OUTPUT_FALLING 0x20000000
> +#define PCR_FRM_SYNC_OUTPUT_RISING 0x30000000
> +
> +/* Frame and data clock sampling on edge, default = falling */
> +#define PCR_FCLK_EDGE_RISING 0x08000000
> +#define PCR_DCLK_EDGE_RISING 0x04000000
> +
> +/* Clock direction, default = input */
> +#define PCR_SYNC_CLK_DIR_OUTPUT 0x02000000
> +
> +/* Generate/Receive frame pulses, default = enabled */
> +#define PCR_FRM_PULSE_DISABLED 0x01000000
> +
> + /* Data rate is full (default) or half the configured clk speed */
> +#define PCR_HALF_CLK_RATE 0x00200000
> +
> +/* Invert data between NPE and HSS FIFOs? (default = no) */
> +#define PCR_DATA_POLARITY_INVERT 0x00100000
> +
> +/* TX/RX endianness, default = LSB */
> +#define PCR_MSB_ENDIAN 0x00080000
> +
> +/* Normal (default) / open drain mode (TX only) */
> +#define PCR_TX_PINS_OPEN_DRAIN 0x00040000
> +
> +/* No framing bit transmitted and expected on RX? (default =
> framing bit) */
> +#define PCR_SOF_NO_FBIT 0x00020000
> +
> +/* Drive data pins? */
> +#define PCR_TX_DATA_ENABLE 0x00010000
> +
> +/* Voice 56k type: drive the data pins low (default), high, high Z */
> +#define PCR_TX_V56K_HIGH 0x00002000
> +#define PCR_TX_V56K_HIGH_IMP 0x00004000
> +
> +/* Unassigned type: drive the data pins low (default), high, high
> Z */
> +#define PCR_TX_UNASS_HIGH 0x00000800
> +#define PCR_TX_UNASS_HIGH_IMP 0x00001000
> +
> +/* T1 @ 1.544MHz only: Fbit dictated in FIFO (default) or high Z */
> +#define PCR_TX_FB_HIGH_IMP 0x00000400
> +
> +/* 56k data endianness - which bit is unused: high (default) or low */
> +#define PCR_TX_56KE_BIT_0_UNUSED 0x00000200
> +
> +/* 56k data transmission type: 32/8 bit data (default) or 56K data */
> +#define PCR_TX_56KS_56K_DATA 0x00000100
> +
> +/* hss_config, cCR */
> +/* Number of packetized clients, default = 1 */
> +#define CCR_NPE_HFIFO_2_HDLC 0x04000000
> +#define CCR_NPE_HFIFO_3_OR_4HDLC 0x08000000
> +
> +/* default = no loopback */
> +#define CCR_LOOPBACK 0x02000000
> +
> +/* HSS number, default = 0 (first) */
> +#define CCR_SECOND_HSS 0x01000000
> +
> +
> +/* hss_config, clkCR: main:10, num:10, denom:12 */
> +#define CLK42X_SPEED_EXP ((0x3FF << 22) | ( 2 << 12) | 15) /*65
> KHz*/
> +
> +#define CLK42X_SPEED_512KHZ (( 130 << 22) | ( 2 << 12) | 15)
> +#define CLK42X_SPEED_1536KHZ (( 43 << 22) | ( 18 << 12) | 47)
> +#define CLK42X_SPEED_1544KHZ (( 43 << 22) | ( 33 << 12) | 192)
> +#define CLK42X_SPEED_2048KHZ (( 32 << 22) | ( 34 << 12) | 63)
> +#define CLK42X_SPEED_4096KHZ (( 16 << 22) | ( 34 << 12) | 127)
> +#define CLK42X_SPEED_8192KHZ (( 8 << 22) | ( 34 << 12) | 255)
> +
> +#define CLK46X_SPEED_512KHZ (( 130 << 22) | ( 24 << 12) | 127)
> +#define CLK46X_SPEED_1536KHZ (( 43 << 22) | (152 << 12) | 383)
> +#define CLK46X_SPEED_1544KHZ (( 43 << 22) | ( 66 << 12) | 385)
> +#define CLK46X_SPEED_2048KHZ (( 32 << 22) | (280 << 12) | 511)
> +#define CLK46X_SPEED_4096KHZ (( 16 << 22) | (280 << 12) | 1023)
> +#define CLK46X_SPEED_8192KHZ (( 8 << 22) | (280 << 12) | 2047)
> +
> +
> +/* hss_config, LUTs: default = unassigned */
> +#define TDMMAP_HDLC 1 /* HDLC - packetised */
> +#define TDMMAP_VOICE56K 2 /* Voice56K - channelised */
> +#define TDMMAP_VOICE64K 3 /* Voice64K - channelised */
> +
> +
> +/* NPE command codes */
> +/* writes the ConfigWord value to the location specified by offset */
> +#define PORT_CONFIG_WRITE 0x40
> +
> +/* triggers the NPE to load the contents of the configuration
> table */
> +#define PORT_CONFIG_LOAD 0x41
> +
> +/* triggers the NPE to return an HssErrorReadResponse message */
> +#define PORT_ERROR_READ 0x42
> +
> +/* reset NPE internal status and enable the HssChannelized
> operation */
> +#define CHAN_FLOW_ENABLE 0x43
> +#define CHAN_FLOW_DISABLE 0x44
> +#define CHAN_IDLE_PATTERN_WRITE 0x45
> +#define CHAN_NUM_CHANS_WRITE 0x46
> +#define CHAN_RX_BUF_ADDR_WRITE 0x47
> +#define CHAN_RX_BUF_CFG_WRITE 0x48
> +#define CHAN_TX_BLK_CFG_WRITE 0x49
> +#define CHAN_TX_BUF_ADDR_WRITE 0x4A
> +#define CHAN_TX_BUF_SIZE_WRITE 0x4B
> +#define CHAN_TSLOTSWITCH_ENABLE 0x4C
> +#define CHAN_TSLOTSWITCH_DISABLE 0x4D
> +
> +/* downloads the gainWord value for a timeslot switching channel
> associated
> + with bypassNum */
> +#define CHAN_TSLOTSWITCH_GCT_DOWNLOAD 0x4E
> +
> +/* triggers the NPE to reset internal status and enable the
> HssPacketized
> + operation for the flow specified by pPipe */
Greater-than-one-line comments not conforming to Kernel coding style
- someone much more angry than me will jump on that.
> +#define PKT_PIPE_FLOW_ENABLE 0x50
> +#define PKT_PIPE_FLOW_DISABLE 0x51
> +#define PKT_NUM_PIPES_WRITE 0x52
> +#define PKT_PIPE_FIFO_SIZEW_WRITE 0x53
> +#define PKT_PIPE_HDLC_CFG_WRITE 0x54
> +#define PKT_PIPE_IDLE_PATTERN_WRITE 0x55
> +#define PKT_PIPE_RX_SIZE_WRITE 0x56
> +#define PKT_PIPE_MODE_WRITE 0x57
> +
> +
Lots of double returns.
> +#define HSS_TIMESLOTS 128
> +#define HSS_LUT_BITS 2
> +
> +/* HDLC packet status values - desc->status */
> +#define ERR_SHUTDOWN 1 /* stop or shutdown occurrence */
> +#define ERR_HDLC_ALIGN 2 /* HDLC alignment error */
> +#define ERR_HDLC_FCS 3 /* HDLC Frame Check Sum error */
> +#define ERR_RXFREE_Q_EMPTY 4 /* RX-free queue became empty while
> receiving
> + this packet (if buf_len < pkt_len) */
> +#define ERR_HDLC_TOO_LONG 5 /* HDLC frame size too long */
> +#define ERR_HDLC_ABORT 6 /* abort sequence received */
> +#define ERR_DISCONNECTING 7 /* disconnect is in progress */
> +
> +
> +struct port {
> + struct npe *npe;
> + struct net_device *netdev;
> + struct hss_plat_info *plat;
> + struct sk_buff *rx_skb_tab[RX_DESCS], *tx_skb_tab[TX_DESCS];
> + struct desc *desc_tab; /* coherent */
> + u32 desc_tab_phys;
> + sync_serial_settings settings;
> + int id;
> + u8 hdlc_cfg;
> +};
> +
[snip]
Again, looking good.
Michael-Luke Jones
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 0:46 ` Krzysztof Halasa
@ 2007-05-08 11:32 ` Lennert Buytenhek
-1 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 11:32 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
I'm not sure what the latest versions are, so I'm not sure which
patches to review and which patches are obsolete.
On Tue, May 08, 2007 at 02:46:28AM +0200, Krzysztof Halasa wrote:
> +struct qmgr_regs __iomem *qmgr_regs;
> +static struct resource *mem_res;
> +static spinlock_t qmgr_lock;
> +static u32 used_sram_bitmap[4]; /* 128 16-dword pages */
> +static void (*irq_handlers[HALF_QUEUES])(void *pdev);
> +static void *irq_pdevs[HALF_QUEUES];
> +
> +void qmgr_set_irq(unsigned int queue, int src,
> + void (*handler)(void *pdev), void *pdev)
> +{
> + u32 __iomem *reg = &qmgr_regs->irqsrc[queue / 8]; /* 8 queues / u32 */
> + int bit = (queue % 8) * 4; /* 3 bits + 1 reserved bit per queue */
> + unsigned long flags;
> +
> + src &= 7;
> + spin_lock_irqsave(&qmgr_lock, flags);
> + __raw_writel((__raw_readl(reg) & ~(7 << bit)) | (src << bit), reg);
> + irq_handlers[queue] = handler;
> + irq_pdevs[queue] = pdev;
> + spin_unlock_irqrestore(&qmgr_lock, flags);
> +}
The queue manager interrupts should probably be implemented as an
irqchip, in the same way that GPIO interrupts are implemented. (I.e.
allocate 'real' interrupt numbers for them, and use the interrupt
cascade mechanism.) You probably want to have separate irqchips for
the upper and lower halves, too. This way, drivers can just use
request_irq() instead of having to bother with platform-specific
qmgr_set_irq() methods. I think I also made this review comment
with Christian's driver.
> +int qmgr_request_queue(unsigned int queue, unsigned int len /* dwords */,
> + unsigned int nearly_empty_watermark,
> + unsigned int nearly_full_watermark)
> +{
> + u32 cfg, addr = 0, mask[4]; /* in 16-dwords */
> + int err;
> +
> + if (queue >= HALF_QUEUES)
> + return -ERANGE;
> +
> + if ((nearly_empty_watermark | nearly_full_watermark) & ~7)
> + return -EINVAL;
> +
> + switch (len) {
> + case 16:
> + cfg = 0 << 24;
> + mask[0] = 0x1;
> + break;
> + case 32:
> + cfg = 1 << 24;
> + mask[0] = 0x3;
> + break;
> + case 64:
> + cfg = 2 << 24;
> + mask[0] = 0xF;
> + break;
> + case 128:
> + cfg = 3 << 24;
> + mask[0] = 0xFF;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + cfg |= nearly_empty_watermark << 26;
> + cfg |= nearly_full_watermark << 29;
> + len /= 16; /* in 16-dwords: 1, 2, 4 or 8 */
> + mask[1] = mask[2] = mask[3] = 0;
> +
> + if (!try_module_get(THIS_MODULE))
> + return -ENODEV;
> +
> + spin_lock_irq(&qmgr_lock);
> + if (__raw_readl(&qmgr_regs->sram[queue])) {
> + err = -EBUSY;
> + goto err;
> + }
> +
> + while (1) {
> + if (!(used_sram_bitmap[0] & mask[0]) &&
> + !(used_sram_bitmap[1] & mask[1]) &&
> + !(used_sram_bitmap[2] & mask[2]) &&
> + !(used_sram_bitmap[3] & mask[3]))
> + break; /* found free space */
> +
> + addr++;
> + shift_mask(mask);
> + if (addr + len > ARRAY_SIZE(qmgr_regs->sram)) {
> + printk(KERN_ERR "qmgr: no free SRAM space for"
> + " queue %i\n", queue);
> + err = -ENOMEM;
> + goto err;
> + }
> + }
> +
> + used_sram_bitmap[0] |= mask[0];
> + used_sram_bitmap[1] |= mask[1];
> + used_sram_bitmap[2] |= mask[2];
> + used_sram_bitmap[3] |= mask[3];
> + __raw_writel(cfg | (addr << 14), &qmgr_regs->sram[queue]);
> + spin_unlock_irq(&qmgr_lock);
> +
> +#if DEBUG
> + printk(KERN_DEBUG "qmgr: requested queue %i, addr = 0x%02X\n",
> + queue, addr);
> +#endif
> + return 0;
> +
> +err:
> + spin_unlock_irq(&qmgr_lock);
> + module_put(THIS_MODULE);
> + return err;
> +}
As with Christian's driver, I don't know whether an SRAM allocator
makes much sense. We can just set up a static allocation map for the
in-tree drivers and leave out the allocator altogether. I.e. I don't
think it's worth the complexity (and just because the butt-ugly Intel
code has an allocator isn't a very good reason. :-)
I.e. an API a la:
ixp4xx_qmgr_config_queue(int queue_nr, int sram_base_address, int queue_size, ...);
might simply suffice.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-08 1:19 ` Krzysztof Halasa
@ 2007-05-08 11:37 ` Lennert Buytenhek
-1 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 11:37 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
On Tue, May 08, 2007 at 03:19:22AM +0200, Krzysztof Halasa wrote:
> diff --git a/arch/arm/mach-ixp4xx/ixdp425-setup.c b/arch/arm/mach-ixp4xx/ixdp425-setup.c
> index ec4f079..f20d39d 100644
> --- a/arch/arm/mach-ixp4xx/ixdp425-setup.c
> +++ b/arch/arm/mach-ixp4xx/ixdp425-setup.c
> @@ -101,10 +101,35 @@ static struct platform_device ixdp425_uart = {
> .resource = ixdp425_uart_resources
> };
>
> +/* Built-in 10/100 Ethernet MAC interfaces */
> +static struct mac_plat_info ixdp425_plat_mac[] = {
> + {
> + .phy = 0,
> + .rxq = 3,
> + }, {
> + .phy = 1,
> + .rxq = 4,
> + }
> +};
As with Christian's driver (I'm feeling like a bit of a broken record
here :-), putting knowledge of which queue to use (which is firmware-
specific) in the _board_ support file is almost certainly wrong.
I would just put the port number in there, and let the ethernet
driver map the port number to the hardware queue number. After all,
the ethernet driver knows which queues the firmware uses, while the
board support code doesn't.
> +#ifndef __ARMEB__
> +#warning Little endian mode not supported
> +#endif
Yay. :-) /me hides
> +static inline void set_regbits(u32 bits, u32 __iomem *reg)
> +{
> + __raw_writel(__raw_readl(reg) | bits, reg);
> +}
> +static inline void clr_regbits(u32 bits, u32 __iomem *reg)
> +{
> + __raw_writel(__raw_readl(reg) & ~bits, reg);
> +}
I generally discourage the use of such wrappers, as it often makes
people forget that the set and clear operations are not atomic, and
it ignores the fact that some of the other bits in the register you
are modifying might have side-effects.
Didn't review the rest -- not sure whether I'm looking at the latest
version.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 0:07 ` [PATCH 3/3] Intel IXP4xx network drivers Krzysztof Halasa
2007-05-07 12:59 ` Michael-Luke Jones
@ 2007-05-08 11:40 ` Lennert Buytenhek
1 sibling, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 11:40 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Jeff Garzik, Russell King, lkml, netdev, linux-arm-kernel
On Mon, May 07, 2007 at 02:07:16AM +0200, Krzysztof Halasa wrote:
> + * Ethernet port config (0x00 is not present on IXP42X):
> + *
> + * logical port 0x00 0x10 0x20
> + * NPE 0 (NPE-A) 1 (NPE-B) 2 (NPE-C)
> + * physical PortId 2 0 1
> + * TX queue 23 24 25
> + * RX-free queue 26 27 28
> + * TX-done queue is always 31, RX queue is configurable
(Note that this assignment depends on the firmware, and different
firmware versions use different queues -- you might want to add a
note about which firmware version this holds for.)
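The assignment quoted above could be captured as a table owned by the ethernet driver itself (illustrative sketch; the queue numbers are the ones from this comment and, as noted, hold only for one firmware version):

```c
#include <assert.h>

/* Per-port queue assignment from the driver comment quoted above.
 * Index is the logical port (0x00/0x10/0x20 >> 4); port 0 is not
 * present on IXP42X.  Firmware-version-specific values. */
struct eth_port_queues {
    int npe;        /* 0 = NPE-A, 1 = NPE-B, 2 = NPE-C */
    int port_id;    /* physical PortId */
    int txq;        /* TX queue */
    int rxfreeq;    /* RX-free queue */
};

static const struct eth_port_queues eth_queues[] = {
    [0] = { .npe = 0, .port_id = 2, .txq = 23, .rxfreeq = 26 },
    [1] = { .npe = 1, .port_id = 0, .txq = 24, .rxfreeq = 27 },
    [2] = { .npe = 2, .port_id = 1, .txq = 25, .rxfreeq = 28 },
};

#define TXDONE_QUEUE 31     /* shared by all ports; RX queue is configurable */
```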
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 20:18 ` Michael-Luke Jones
@ 2007-05-08 11:46 ` Lennert Buytenhek
0 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 11:46 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Krzysztof Halasa, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
On Mon, May 07, 2007 at 09:18:00PM +0100, Michael-Luke Jones wrote:
> >Well, I'm told that (compatible) NPEs are present on other IXP CPUs.
> >Not sure about details.
>
> If, by a combined effort, we ever manage to create a generic NPE
> driver for the NPEs found in IXP42x/43x/46x/2000/23xx then the driver
> should go in arch/arm/npe.c
(Note that the ixp2000 doesn't have NPEs.)
(Both the 2000 and the 23xx have microengines, which are both
supported by arch/arm/common/uengine.c.)
> It's possible, but hard due to the differences in hardware design
The ixp23xx NPEs seem pretty much identical to me to the ixp4xx
NPEs. There are some minor differences between the ixp2000 and
ixp23xx uengines, but those are easy enough to deal with.
> and the fact that boards based on anything other than 42x are few
> and far between. The vast majority of 'independent' users following
> mainline are likely running on 42x boards.
Sure, ixp23xx hardware is harder to get. I'm not sure what you mean
by 'independent' users, though. Are people with non-42x hardware
'dependent' users, and why?
> Thus, for now, I would drop the NPE / QMGR code in arch/arm/mach-
> ixp4xx/ and concentrate on making it 42x/43x/46x agnostic. One step
> at a time :)
I'd say that it's up to those who are interested in ixp23xx support
(probably only myself at this point) to add ixp23xx support.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-07 20:00 ` Krzysztof Halasa
@ 2007-05-08 11:48 ` Lennert Buytenhek
2007-05-08 13:47 ` Krzysztof Halasa
0 siblings, 1 reply; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 11:48 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Christian Hohnstaedt, Michael-Luke Jones, netdev,
linux-arm-kernel, Russell King, Jeff Garzik, lkml
On Mon, May 07, 2007 at 10:00:20PM +0200, Krzysztof Halasa wrote:
> > - the NPE can also be used as DMA engine and for crypto operations.
> > Both are not network related.
> > Additionally, the NPE is not only ixp4xx related, but is
> > also used in IXP23xx CPUs, so it could be placed in
> > arch/arm/common or arch/arm/xscale ?
> >
> > - The MAC is used on IXP23xx, too. So the drivers for
> > both CPU familys only differ in the way they exchange
> > network packets between the NPE and the kernel.
>
> Hmm... perhaps someone has a spare device with such an IXP23xx
> and wants to make it a donation for science? :-)
I have a couple of ixp23xx boards at home, but I'm not sure whether I
can give them away. I can give you remote access to them, though.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 11:32 ` Lennert Buytenhek
(?)
@ 2007-05-08 12:47 ` Alexey Zaytsev
2007-05-08 12:59 ` Lennert Buytenhek
-1 siblings, 1 reply; 88+ messages in thread
From: Alexey Zaytsev @ 2007-05-08 12:47 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Krzysztof Halasa, Michael-Luke Jones, Jeff Garzik, netdev, lkml,
Russell King, ARM Linux Mailing List
On 5/8/07, Lennert Buytenhek <buytenh@wantstofly.org> wrote:
...
> As with Christian's driver, I don't know whether an SRAM allocator
> makes much sense. We can just set up a static allocation map for the
> in-tree drivers and leave out the allocator altogether. I.e. I don't
> think it's worth the complexity (and just because the butt-ugly Intel
> code has an allocator isn't a very good reason. :-)
Is the qmgr used when the NPEs are utilized as DMA engines? And is the
allocator needed in this case? If yes, I beg you not to drop it,
because we use one NPE for this purpose, and if we are going to adopt
this driver instead of Intel's one, you will receive a patch
adding DMA functionality very soon. ;)
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 12:47 ` Alexey Zaytsev
@ 2007-05-08 12:59 ` Lennert Buytenhek
0 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 12:59 UTC (permalink / raw)
To: Alexey Zaytsev
Cc: Krzysztof Halasa, Michael-Luke Jones, Jeff Garzik, netdev, lkml,
Russell King, ARM Linux Mailing List
On Tue, May 08, 2007 at 04:47:31PM +0400, Alexey Zaytsev wrote:
> > As with Christian's driver, I don't know whether an SRAM allocator
> > makes much sense. We can just set up a static allocation map for the
> > in-tree drivers and leave out the allocator altogether. I.e. I don't
> > think it's worth the complexity (and just because the butt-ugly Intel
> > code has an allocator isn't a very good reason. :-)
>
> Is the qmgr used when the NPEs are utilized as DMA engines?
I'm not sure, but probably yes.
> And is the allocator needed in this case?
If you statically partition the available queue SRAM, no.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH 3/3] Intel IXP4xx network drivers
2007-05-08 11:48 ` Lennert Buytenhek
@ 2007-05-08 13:47 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 13:47 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Christian Hohnstaedt, Michael-Luke Jones, netdev,
linux-arm-kernel, Russell King, Jeff Garzik, lkml
Lennert Buytenhek <buytenh@wantstofly.org> writes:
> I have a couple of ixp23xx boards at home, but I'm not sure whether I
> can give them away. I can give you remote access to them, though.
Hmm, may be interesting some day.
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - NPE
2007-05-08 7:02 ` Michael-Luke Jones
@ 2007-05-08 13:56 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 13:56 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Michael-Luke Jones <mlj28@cam.ac.uk> writes:
> Already in mach-ixp4xx, so can just be called npe.c
I want ixp4xx_ prefix in module name, otherwise I'd call it npe.c,
sure.
> Debugging code? Can this go?
Why? Especially with code having to work with third party binary-only
firmware? Suicide. They are eliminated at build time = no performance
hit (OTOH this file isn't on any fast path).
> It may be a matter of taste, but could some of the many definitions
> at the top of ixp4xx_npe.c go in the header file here?
It's actually not only a matter of taste, they are private
to the .c file and I don't want to make them available to the
public (but sure, I don't like them in .c either, I think nobody
likes such definitions anywhere but they have to exist somewhere).
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 7:05 ` Michael-Luke Jones
@ 2007-05-08 13:57 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 13:57 UTC (permalink / raw)
To: Michael-Luke Jones
Cc: Jeff Garzik, netdev, lkml, Russell King, ARM Linux Mailing List
Michael-Luke Jones <mlj28@cam.ac.uk> writes:
> Already in mach-ixp4xx, so can just be called qmgr.c
Same here.
>> +#define QUEUE_IRQ_SRC_NEARLY_FULL 2
>> +#define QUEUE_IRQ_SRC_FULL 3
>> +#define QUEUE_IRQ_SRC_NOT_EMPTY 4
>> +#define QUEUE_IRQ_SRC_NOT_NEARLY_EMPTY 5
>> +#define QUEUE_IRQ_SRC_NOT_NEARLY_FULL 6
>> +#define QUEUE_IRQ_SRC_NOT_FULL 7
>
> Here, unlike ixp4xx_npe.c defines are in qmgr.h - that seems a bit
> more natural.
Because they are public interface :-)
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 11:32 ` Lennert Buytenhek
@ 2007-05-08 14:12 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 14:12 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
Lennert Buytenhek <buytenh@wantstofly.org> writes:
> The queue manager interrupts should probably be implemented as an
> irqchip, in the same way that GPIO interrupts are implemented. (I.e.
> allocate 'real' interrupt numbers for them, and use the interrupt
> cascade mechanism.) You probably want to have separate irqchips for
> the upper and lower halves, too. This way, drivers can just use
> request_irq() instead of having to bother with platform-specific
> qmgr_set_irq() methods.
Is there a sample somewhere?
> As with Christian's driver, I don't know whether an SRAM allocator
> makes much sense. We can just set up a static allocation map for the
> in-tree drivers and leave out the allocator altogether. I.e. I don't
> think it's worth the complexity (and just because the butt-ugly Intel
> code has an allocator isn't a very good reason. :-)
It's a very simple allocator. I don't think we have enough SRAM
without it. For now it would work but it's probably too small
for all potential users at a time.
There may be up to 6 Ethernet ports (not sure about hardware
status, not yet supported even by Intel) - 7 queues * 128 entries
each = ~ 3.5 KB. Add 2 long queues (RX) for HSS and something
for TX, and then crypto, and maybe other things.
The current allocator has potential problems, but they can be
solved internally (fragmentation, but we tend to use only
128-entry queues (RX and TX-ready Ethernet pool) and short,
16-entry ones (TX) - easy to deal with).
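The "very simple allocator" being defended here can be sketched as a first-fit run allocator over the queue SRAM (illustrative user-space model; the block granularity and SRAM size are assumptions, not the driver's actual layout):

```c
#include <stdint.h>
#include <assert.h>

/* First-fit allocator over queue SRAM, tracked in fixed-size blocks.
 * SRAM_BLOCKS is illustrative (e.g. 8 KB of SRAM in 16-word units). */
#define SRAM_BLOCKS 128
static uint8_t sram_used[SRAM_BLOCKS];

/* Returns the first block index of a free run of 'blocks', or -1. */
static int sram_alloc(int blocks)
{
    int start, i;

    for (start = 0; start + blocks <= SRAM_BLOCKS; start++) {
        for (i = 0; i < blocks; i++)
            if (sram_used[start + i])
                break;
        if (i == blocks) {
            for (i = 0; i < blocks; i++)
                sram_used[start + i] = 1;
            return start;
        }
        start += i;     /* skip past the used block we hit */
    }
    return -1;          /* the run-time -ENOMEM case discussed below */
}

static void sram_free(int start, int blocks)
{
    while (blocks--)
        sram_used[start++] = 0;
}
```

The worked arithmetic above fits this model: 7 queues of 128 four-byte entries per port is 7 * 128 * 4 = 3584 bytes, i.e. roughly 3.5 KB per additional Ethernet port.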
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-08 11:37 ` Lennert Buytenhek
@ 2007-05-08 14:31 ` Krzysztof Halasa
-1 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 14:31 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
Lennert Buytenhek <buytenh@wantstofly.org> writes:
>> +/* Built-in 10/100 Ethernet MAC interfaces */
>> +static struct mac_plat_info ixdp425_plat_mac[] = {
>> + {
>> + .phy = 0,
>> + .rxq = 3,
>> + }, {
>> + .phy = 1,
>> + .rxq = 4,
>> + }
>> +};
>
> As with Christian's driver (I'm feeling like a bit of a broken record
> here :-), putting knowledge of which queue to use (which is firmware-
> specific) in the _board_ support file is almost certainly wrong.
>
> I would just put the port number in there, and let the ethernet
> driver map the port number to the hardware queue number. After all,
> the ethernet driver knows which queues the firmware uses, while the
> board support code doesn't.
No, quite the opposite. The board code knows its set of hardware
interfaces etc. and can let Ethernet driver use, say, HSS queues.
The driver can't know that.
It would make sense if we had many queues, but it doesn't seem
the case (perhaps the upper queues could be used for some
purposes, but Intel's code doesn't use them either and they
probably know better).
>> +static inline void set_regbits(u32 bits, u32 __iomem *reg)
>> +{
>> + __raw_writel(__raw_readl(reg) | bits, reg);
>> +}
>> +static inline void clr_regbits(u32 bits, u32 __iomem *reg)
>> +{
>> + __raw_writel(__raw_readl(reg) & ~bits, reg);
>> +}
>
> I generally discourage the use of such wrappers, as it often makes
> people forget that the set and clear operations are not atomic, and
> it ignores the fact that some of the other bits in the register you
> are modifying might have side-effects.
Without them the code in question is hardly readable, I pick the need
to remember about non-atomicity and possible side effects instead :-)
I've outlined the current versions in a separate mail, generally
2 patches are marked "v.2" and one "v.3".
--
Krzysztof Halasa
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 14:12 ` Krzysztof Halasa
@ 2007-05-08 14:40 ` Lennert Buytenhek
-1 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 14:40 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
On Tue, May 08, 2007 at 04:12:17PM +0200, Krzysztof Halasa wrote:
> > The queue manager interrupts should probably be implemented as an
> > irqchip, in the same way that GPIO interrupts are implemented. (I.e.
> > allocate 'real' interrupt numbers for them, and use the interrupt
> > cascade mechanism.) You probably want to have separate irqchips for
> > the upper and lower halves, too. This way, drivers can just use
> > request_irq() instead of having to bother with platform-specific
> > qmgr_set_irq() methods.
>
> Is there a sample somewhere?
See for example arch/arm/mach-ep93xx/core.c, handling of the A/B/F
port GPIO interrupts.
In a nutshell, it goes like this.
1) Allocate a set of IRQ numbers. E.g. in include/asm-arm/arch-ixp4xx/irqs.h:
#define IRQ_IXP4XX_QUEUE_0 64
#define IRQ_IXP4XX_QUEUE_1 65
[...]
Adjust NR_IRQS, too.
2) Implement interrupt chip functions:
static void ixp4xx_queue_low_irq_mask_ack(unsigned int irq)
{
[...]
}
static void ixp4xx_queue_low_irq_mask(unsigned int irq)
{
[...]
}
static void ixp4xx_queue_low_irq_unmask(unsigned int irq)
{
[...]
}
static void ixp4xx_queue_low_irq_set_type(unsigned int irq)
{
[...]
}
static struct irq_chip ixp4xx_queue_low_irq_chip = {
.name = "QMGR low",
.ack = ixp4xx_queue_low_irq_mask_ack,
.mask = ixp4xx_queue_low_irq_mask,
.unmask = ixp4xx_queue_low_irq_unmask,
.set_type = ixp4xx_queue_low_irq_set_type,
};
3) Hook up the queue interrupts:
for (i = IRQ_IXP4XX_QUEUE_0; i <= IRQ_IXP4XX_QUEUE_31; i++) {
set_irq_chip(i, &ixp4xx_queue_low_irq_chip);
set_irq_handler(i, handle_level_irq);
set_irq_flags(i, IRQF_VALID);
}
4) Implement an interrupt handler for the parent interrupt:
static void ixp4xx_qmgr_low_irq_handler(unsigned int irq, struct irq_desc *desc)
{
u32 status;
int i;
status = __raw_readl(IXP4XX_WHATEVER_QMGR_LOW_STATUS_REGISTER);
for (i = 0; i < 32; i++) {
if (status & (1 << i)) {
desc = irq_desc + IRQ_IXP4XX_QUEUE_0 + i;
desc_handle_irq(IRQ_IXP4XX_QUEUE_0 + i, desc);
}
}
}
5) Hook up the parent interrupt:
set_irq_chained_handler(IRQ_IXP4XX_QM1, ixp4xx_qmgr_low_irq_handler);
Or something like that.
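The demux step from point 4 can be isolated as a standalone function (hedged sketch; `IRQ_IXP4XX_QUEUE_0` is from the proposal above, and the handler callback stands in for the kernel's `desc_handle_irq()` machinery):

```c
#include <stdint.h>
#include <assert.h>

#define IRQ_IXP4XX_QUEUE_0 64

/* Walk the QMGR status word and dispatch one virtual IRQ per set bit.
 * Returns the number of queue interrupts dispatched. */
static int ixp4xx_qmgr_demux(uint32_t status,
                             void (*handle)(int irq, void *ctx), void *ctx)
{
    int i, n = 0;

    for (i = 0; i < 32; i++) {
        if (status & (1u << i)) {
            handle(IRQ_IXP4XX_QUEUE_0 + i, ctx);
            n++;
        }
    }
    return n;
}

/* Trivial recording handler for demonstration. */
static int last_irq;
static void record_irq(int irq, void *ctx)
{
    (void)ctx;
    last_irq = irq;
}
```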
> > As with Christian's driver, I don't know whether an SRAM allocator
> > makes much sense. We can just set up a static allocation map for the
> > in-tree drivers and leave out the allocator altogether. I.e. I don't
> > think it's worth the complexity (and just because the butt-ugly Intel
> > code has an allocator isn't a very good reason. :-)
>
> It's a very simple allocator. I don't whink we have enough SRAM
> without it. For now it would work but it's probably too small
> for all potential users at a time.
>
> There may be up to 6 Ethernet ports (not sure about hardware
> status, not yet supported even by Intel) - 7 queues * 128 entries
> each = ~ 3.5 KB. Add 2 long queues (RX) for HSS and something
> for TX, and then crypto, and maybe other things.
You're unlikely to be using all of those at the same time, though.
And what do you do if the user does compile all of these features into
his kernel and then tries to use them all at the same time? Return
-ENOMEM?
Shouldn't we make sure that at least the features that are compiled in
can be used at the same time? If you want that guarantee, then you
might as well determine the SRAM map at compile time.
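The compile-time alternative argued for here amounts to a fixed region map with a build-time overflow check (names and sizes are illustrative; the negative-array-size trick is the classic pre-C11 static assertion):

```c
#include <assert.h>

#define QMGR_SRAM_WORDS   2048          /* total queue SRAM, in 32-bit words */

/* Fixed carve-up per compiled-in user. */
#define ETH0_SRAM_BASE    0
#define ETH0_SRAM_WORDS   (7 * 128)     /* 7 queues of 128 entries */
#define HSS0_SRAM_BASE    (ETH0_SRAM_BASE + ETH0_SRAM_WORDS)
#define HSS0_SRAM_WORDS   (2 * 128 + 2 * 16)  /* 2 long RX + 2 short TX queues */
#define SRAM_END          (HSS0_SRAM_BASE + HSS0_SRAM_WORDS)

/* Build fails here instead of returning -ENOMEM at run time. */
typedef char sram_map_fits[(SRAM_END <= QMGR_SRAM_WORDS) ? 1 : -1];
```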
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-08 14:31 ` Krzysztof Halasa
@ 2007-05-08 14:53 ` Lennert Buytenhek
-1 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-08 14:53 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
On Tue, May 08, 2007 at 04:31:12PM +0200, Krzysztof Halasa wrote:
> >> +/* Built-in 10/100 Ethernet MAC interfaces */
> >> +static struct mac_plat_info ixdp425_plat_mac[] = {
> >> + {
> >> + .phy = 0,
> >> + .rxq = 3,
> >> + }, {
> >> + .phy = 1,
> >> + .rxq = 4,
> >> + }
> >> +};
> >
> > As with Christian's driver (I'm feeling like a bit of a broken record
> > here :-), putting knowledge of which queue to use (which is firmware-
> > specific) in the _board_ support file is almost certainly wrong.
> >
> > I would just put the port number in there, and let the ethernet
> > driver map the port number to the hardware queue number. After all,
> > the ethernet driver knows which queues the firmware uses, while the
> > board support code doesn't.
>
> No, quite the opposite. The board code knows its set of hardware
> interfaces etc. and can let Ethernet driver use, say, HSS queues.
> The driver can't know that.
You are attacking a point that I did not make.
The board support code knows such things as that the "front ethernet
port" on the board is connected to the CPU's MII port number #2, but
the board support code does _not_ know that MII port number #2
corresponds to "ixp4xx hardware queue #5."
If Intel puts out a firmware update next month, and your ethernet
driver is modified to take advantage of the new features in that
firmware and starts depending on the newer version of that firmware,
we will have to modify every ixp4xx board support file in case the
firmware update modifies the ixp4xx queue numbers in use. The
mapping from hardware ports (MII port #0, MII port #6, HSS port
#42, whatever) to ixp4xx hardware queue numbers (0-63) should _not_
be put in every single ixp4xx board support file.
Even if you only change the
(in board support file)
.rxq = 4,
line to something like this instead:
(in some ixp4xx-specific or driver-specific header file)
#define IXP4XX_MII_PORT_1_RX_QUEUE 4
(in board support file)
.rxq = IXP4XX_MII_PORT_1_RX_QUEUE,
then you have removed this dependency, and you only have to update
one place if you move to a newer firmware version.
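Spelled out as a compilable sketch (only `IXP4XX_MII_PORT_1_RX_QUEUE`, the queue numbers 3/4, and the `mac_plat_info` name appear in the messages above; the port-0 macro and the struct layout are extrapolations):

```c
#include <assert.h>

/* Driver-owned header: the firmware-specific queue mapping lives
 * here, so a firmware update means touching one file instead of
 * every board support file. */
#define IXP4XX_MII_PORT_0_RX_QUEUE 3
#define IXP4XX_MII_PORT_1_RX_QUEUE 4

struct mac_plat_info {
	int phy;	/* MII PHY address */
	int rxq;	/* hardware RX queue feeding this port */
};

/* Board support file: refers to ports by name, not by raw
 * hardware queue number. */
static struct mac_plat_info ixdp425_plat_mac[] = {
	{ .phy = 0, .rxq = IXP4XX_MII_PORT_0_RX_QUEUE },
	{ .phy = 1, .rxq = IXP4XX_MII_PORT_1_RX_QUEUE },
};
```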
> > I generally discourage the use of such wrappers, as it often makes
> > people forget that the set and clear operations are not atomic, and
> > it ignores the fact that some of the other bits in the register you
> > are modifying might have side-effects.
>
> Without them the code in question is hardly readable,
You can read Polish; how can you complain about code readability? :-))
*runs*
> I pick the need to remember about non-atomicity and possible side
> effects instead :-)
Sure, point taken, it's just that the person after you might not
remember..
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 14:40 ` Lennert Buytenhek
@ 2007-05-08 16:59 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 16:59 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
Lennert Buytenhek <buytenh@wantstofly.org> writes:
> See for example arch/arm/mach-ep93xx/core.c, handling of the A/B/F
> port GPIO interrupts.
>
> In a nutshell, it goes like this.
Thanks, I will investigate.
>> There may be up to 6 Ethernet ports (not sure about hardware
>> status, not yet supported even by Intel) - 7 queues * 128 entries
>> each = ~ 3.5 KB. Add 2 long queues (RX) for HSS and something
>> for TX, and then crypto, and maybe other things.
>
> You're unlikely to be using all of those at the same time, though.
That's the point.
> And what do you do if the user does compile all of these features into
> his kernel and then tries to use them all at the same time? Return
> -ENOMEM?
If he is able to do so, yes - there is nothing we can do. But
I suspect a single machine would not have all possible hardware.
The problem is, we don't know what it would have, so it must be
dynamic.
> Shouldn't we make sure that at least the features that are compiled in
> can be used at the same time?
We can't - hardware capabilities limit that. A general purpose
distribution would probably want to compile in everything (perhaps
as modules).
> If you want that guarantee, then you
> might as well determine the SRAM map at compile time.
That would be most limiting with IMHO no visible advantage.
--
Krzysztof Halasa
* Re: [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS
2007-05-08 14:53 ` Lennert Buytenhek
@ 2007-05-08 17:17 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-08 17:17 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
Lennert Buytenhek <buytenh@wantstofly.org> writes:
> The board support code knows such things as that the "front ethernet
> port" on the board is connected to the CPU's MII port number #2, but
> the board support code does _not_ know that MII port number #2
> corresponds to "ixp4xx hardware queue #5."
Sure. And I don't want it to know.
It has to pick any available queue for RX, that is. If the
board code knows it uses ETH connected to NPE-B and NPE-C, and
HSS-0 connected (obviously) to NPE-A, and it wants some crypto
functions etc., it can pick a queue which normally belongs to
HSS-1. If the code knows the board has both HSS and only NPE-B
Ethernet, it can use one of NPE-C Ethernet's queues. It's that
simple.
The Ethernet (and HSS etc.) driver knows it has to use queue 24 for
NPE-B Ethernet's TX and 27 for RX-free and so on; this is fixed in
the firmware, so I don't let the board code mess with that. The
Ethernet RX queue is different, we can just make something up and
tell the NPE about that.
That's BTW the same thing you would want to do with SRAM - except
that an SRAM allocator is technically possible, while making
queue assignments requires knowledge of the hardware.
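A sketch of the board-side queue choice described above (the struct follows the thread's `mac_plat_info` example; the board itself is hypothetical, with queue 0 taken from the HSS-1 packet-RX entry in the queue ID list quoted below):

```c
#include <assert.h>

/* Queue 0 normally belongs to HSS-1's packet RX (see the queue ID
 * list in ixp4xx_hss.c).  A board that has no HSS-1 port knows this
 * queue is free and can hand it to an Ethernet port for RX. */
#define HSS1_PKT_RX_QUEUE 0

struct mac_plat_info {
	int phy;	/* MII PHY address */
	int rxq;	/* board-chosen RX queue */
};

/* Hypothetical board support file: NPE-C Ethernet only, no HSS-1,
 * so HSS-1's RX queue is repurposed for Ethernet RX. */
static struct mac_plat_info board_mac[] = {
	{ .phy = 2, .rxq = HSS1_PKT_RX_QUEUE },
};
```

The driver never hard-codes this number; it simply tells the NPE to deliver RX traffic to whatever queue the board handed it.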
> If Intel puts out a firmware update next month, and your ethernet
> driver is modified to take advantage of the new features in that
> firmware and starts depending on the newer version of that firmware,
> we will have to modify every ixp4xx board support file in case the
> firmware update modifies the ixp4xx queue numbers in use.
Nope, we just modify Ethernet driver:
drivers/net/arm/ixp4xx_eth.c:
#define TX_QUEUE(plat) (NPE_ID(port) + 23)
#define RXFREE_QUEUE(plat) (NPE_ID(port) + 26)
#define TXDONE_QUEUE 31
> The
> mapping from hardware ports (MII port #0, MII port #6, HSS port
> #42, whatever) to ixp4xx hardware queue numbers (0-63) should _not_
> be put in every single ixp4xx board support file.
I've never considered doing that :-)
drivers/net/wan/ixp4xx_hss.c:
/* Queue IDs */
#define HSS0_CHL_RXTRIG_QUEUE 12 /* orig size = 32 dwords */
#define HSS0_PKT_RX_QUEUE 13 /* orig size = 32 dwords */
#define HSS0_PKT_TX0_QUEUE 14 /* orig size = 16 dwords */
#define HSS0_PKT_TX1_QUEUE 15
#define HSS0_PKT_TX2_QUEUE 16
#define HSS0_PKT_TX3_QUEUE 17
#define HSS0_PKT_RXFREE0_QUEUE 18 /* orig size = 16 dwords */
#define HSS0_PKT_RXFREE1_QUEUE 19
#define HSS0_PKT_RXFREE2_QUEUE 20
#define HSS0_PKT_RXFREE3_QUEUE 21
#define HSS0_PKT_TXDONE_QUEUE 22 /* orig size = 64 dwords */
#define HSS1_CHL_RXTRIG_QUEUE 10
#define HSS1_PKT_RX_QUEUE 0
#define HSS1_PKT_TX0_QUEUE 5
#define HSS1_PKT_TX1_QUEUE 6
#define HSS1_PKT_TX2_QUEUE 7
#define HSS1_PKT_TX3_QUEUE 8
#define HSS1_PKT_RXFREE0_QUEUE 1
#define HSS1_PKT_RXFREE1_QUEUE 2
#define HSS1_PKT_RXFREE2_QUEUE 3
#define HSS1_PKT_RXFREE3_QUEUE 4
#define HSS1_PKT_TXDONE_QUEUE 9
>> Without them the code in question is hardly readable,
>
> You can read Polish, how can you complain about code readability. :-))
Well, you may have a point, but I also care about others :-)
--
Krzysztof Halasa
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-08 16:59 ` Krzysztof Halasa
@ 2007-05-09 10:21 ` Lennert Buytenhek
0 siblings, 0 replies; 88+ messages in thread
From: Lennert Buytenhek @ 2007-05-09 10:21 UTC (permalink / raw)
To: Krzysztof Halasa
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
On Tue, May 08, 2007 at 06:59:36PM +0200, Krzysztof Halasa wrote:
> >> There may be up to 6 Ethernet ports (not sure about hardware
> >> status, not yet supported even by Intel) - 7 queues * 128 entries
> >> each = ~ 3.5 KB. Add 2 long queues (RX) for HSS and something
> >> for TX, and then crypto, and maybe other things.
> >
> > You're unlikely to be using all of those at the same time, though.
>
> That's the point.
>
> > And what do you do if the user does compile all of these features into
> > his kernel and then tries to use them all at the same time? Return
> > -ENOMEM?
>
> If he is able to do so, yes - there is nothing we can do. But
> I suspect a single machine would not have all possible hardware.
> The problem is, we don't know what would it have, so it must be
> dynamic.
Well, you _would_ like to have a way to make sure that all the
capabilities on the board can be used. If you have a future ixp4xx
based board with 16 ethernet ports, you don't want 'ifconfig eth7 up'
to give you -ENOMEM just because we ran out of SRAM.
The way I see it, that means that you do want to scale back your
other SRAM allocations if you know that you're going to need a lot
of SRAM (say, for ethernet RX/TX queues.)
Either you can do this with an ugly hack à la:
/*
* The FOO board has many ethernet ports, and runs out of
* SRAM prematurely if we use the default TX/RX ring sizes.
*/
#ifdef CONFIG_MACH_IXP483_FOO_BOARD
#define IXP4XX_ETH_RXTX_QUEUE_SIZE 32
#else
#define IXP4XX_ETH_RXTX_QUEUE_SIZE 256
#endif
Or you can put this knowledge in the board support code (cleaner, IMHO.)
E.g. let arch/arm/mach-ixp4xx/nslu2.c decide, at platform device
instantiation time, which region of queue SRAM can be used by which
queue, and take static allocations for things like the crypto unit
into account. (This is just one form of that idea, there are many
different variations.)
That way, you can _guarantee_ that you'll always have enough SRAM
to be able to use the functionality that is exposed on the board you
are running on (which is a desirable property, IMHO), which is
something that you can't achieve with an allocator, as far as I can
see.
I'm not per se against the allocator, I just think that there are
problems (running out of SRAM, fragmentation) that can't be solved
by the allocator alone (SRAM users have to be aware which other SRAM
users there are in the system, while the idea of the allocator is to
insulate these users from each other), and any solution that solves
those two problems IMHO also automatically solves the problem that
the allocator is trying to solve.
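One way to read the proposal above as code — a hypothetical board-provided SRAM map, with all names and region sizes invented for illustration (the 8 KB total is the IXP4xx queue manager SRAM size):

```c
#include <assert.h>

/* Hypothetical sketch of a board-provided SRAM map: the board support
 * code carves the 8 KB of queue SRAM into regions at platform-device
 * registration time, so every feature exposed on the board is
 * guaranteed to fit. */
#define QMGR_SRAM_WORDS 2048	/* 8 KB of 32-bit words */

struct sram_region {
	unsigned start;	/* first word */
	unsigned len;	/* length in words */
};

/* Example static map for an imaginary board: two Ethernet ports
 * plus a crypto unit, laid out back to back with no overlap. */
static const struct sram_region board_sram_map[] = {
	{ .start = 0,    .len = 896 },	/* eth0: 7 queues x 128 entries */
	{ .start = 896,  .len = 896 },	/* eth1 */
	{ .start = 1792, .len = 256 },	/* crypto */
};

/* Sanity check: regions are contiguous, in order, and fit in SRAM. */
static int sram_map_valid(const struct sram_region *map, unsigned n)
{
	unsigned i, next = 0;

	for (i = 0; i < n; i++) {
		if (map[i].start != next)
			return 0;
		next += map[i].len;
	}
	return next <= QMGR_SRAM_WORDS;
}
```

The point of the static map is exactly the guarantee argued for above: the check can be done once at boot, rather than failing with -ENOMEM at `ifconfig up` time.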
* Re: [PATCH] Intel IXP4xx network drivers v.3 - QMGR
2007-05-09 10:21 ` Lennert Buytenhek
@ 2007-05-10 14:08 ` Krzysztof Halasa
0 siblings, 0 replies; 88+ messages in thread
From: Krzysztof Halasa @ 2007-05-10 14:08 UTC (permalink / raw)
To: Lennert Buytenhek
Cc: Michael-Luke Jones, Jeff Garzik, netdev, lkml, Russell King,
ARM Linux Mailing List
Lennert Buytenhek <buytenh@wantstofly.org> writes:
> The way I see it, that means that you do want to scale back your
> other SRAM allocations if you know that you're going to need a lot
> of SRAM (say, for ethernet RX/TX queues.)
Yep, I will then add "queue_size" parameter to the platform data.
Or something like that.
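Such a platform-data knob might look like the following sketch; the field and macro names are hypothetical, and only the idea of a per-board "queue_size" comes from the message above:

```c
#include <assert.h>

/* Hypothetical platform data with a per-port queue size, so boards
 * with many ports can scale their SRAM use down without #ifdefs in
 * the driver. */
#define DEFAULT_QUEUE_SIZE 128
#define MAX_QUEUE_SIZE     128

struct eth_plat_info {
	int phy;
	int rxq;
	int queue_size;	/* 0 means "use the driver default" */
};

/* Driver-side helper: apply the default and clamp to what the
 * queue manager supports. */
static int effective_queue_size(const struct eth_plat_info *pi)
{
	int size = pi->queue_size ? pi->queue_size : DEFAULT_QUEUE_SIZE;

	return size > MAX_QUEUE_SIZE ? MAX_QUEUE_SIZE : size;
}

/* Two imaginary ports: one scaled down by the board, one default. */
static const struct eth_plat_info small_port = {
	.phy = 0, .rxq = 3, .queue_size = 32,
};
static const struct eth_plat_info default_port = {
	.phy = 1, .rxq = 4, .queue_size = 0,
};
```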
> Or you can put this knowledge in the board support code (cleaner, IMHO.)
Sure.
> That way, you can _guarantee_ that you'll always have enough SRAM
> to be able to use the functionality that is exposed on the board you
> are running on (which is a desirable property, IMHO), which is
> something that you can't achieve with an allocator, as far as I can
> see.
I'd have to put the SRAM addresses in the board code instead. Certainly
not required at this point, and perhaps it will never be needed.
--
Krzysztof Halasa
end of thread, other threads:[~2007-05-10 14:08 UTC | newest]
Thread overview: 88+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-05-06 23:46 [PATCH 0/3] Intel IXP4xx network drivers Krzysztof Halasa
2007-05-07 0:06 ` [PATCH 1/3] WAN Kconfig: change "depends on HDLC" to "select" Krzysztof Halasa
2007-05-07 1:44 ` Roman Zippel
2007-05-07 9:35 ` Krzysztof Halasa
2007-05-07 11:22 ` Roman Zippel
2007-05-07 11:56 ` Krzysztof Halasa
2007-05-07 13:17 ` Roman Zippel
2007-05-07 13:21 ` Jeff Garzik
2007-05-07 13:46 ` Roman Zippel
2007-05-07 16:50 ` Krzysztof Halasa
2007-05-07 17:07 ` Roman Zippel
2007-05-07 18:15 ` Satyam Sharma
2007-05-07 20:31 ` Jeff Garzik
2007-05-07 20:49 ` Satyam Sharma
2007-05-07 20:50 ` Randy Dunlap
2007-05-07 22:39 ` Satyam Sharma
2007-05-07 22:52 ` Randy Dunlap
2007-05-07 20:57 ` Roman Zippel
2007-05-07 20:54 ` Krzysztof Halasa
2007-05-07 21:02 ` [PATCH] Use menuconfig objects II - netdev/wan Krzysztof Halasa
2007-05-07 21:08 ` [PATCH 1a/3] WAN Kconfig: change "depends on HDLC" to "select" Krzysztof Halasa
2007-05-07 0:07 ` [PATCH 2/3] ARM: include IXP4xx "fuses" support Krzysztof Halasa
2007-05-07 5:24 ` Alexey Zaytsev
2007-05-07 10:24 ` Krzysztof Halasa
2007-05-07 0:07 ` [PATCH 3/3] Intel IXP4xx network drivers Krzysztof Halasa
2007-05-07 12:59 ` Michael-Luke Jones
2007-05-07 17:12 ` Krzysztof Halasa
2007-05-07 17:52 ` Christian Hohnstaedt
2007-05-07 20:00 ` Krzysztof Halasa
2007-05-08 11:48 ` Lennert Buytenhek
2007-05-08 13:47 ` Krzysztof Halasa
2007-05-07 18:14 ` Michael-Luke Jones
2007-05-07 19:57 ` Krzysztof Halasa
2007-05-07 20:18 ` Michael-Luke Jones
2007-05-08 11:46 ` Lennert Buytenhek
2007-05-08 0:11 ` [PATCH] Intel IXP4xx network drivers v.2 Krzysztof Halasa
2007-05-08 0:36 ` [PATCH] Intel IXP4xx network drivers v.2 - NPE Krzysztof Halasa
2007-05-08 7:02 ` Michael-Luke Jones
2007-05-08 13:56 ` Krzysztof Halasa
2007-05-08 0:46 ` [PATCH] Intel IXP4xx network drivers v.3 - QMGR Krzysztof Halasa
2007-05-08 7:05 ` Michael-Luke Jones
2007-05-08 13:57 ` Krzysztof Halasa
2007-05-08 11:32 ` Lennert Buytenhek
2007-05-08 12:47 ` Alexey Zaytsev
2007-05-08 12:59 ` Lennert Buytenhek
2007-05-08 14:12 ` Krzysztof Halasa
2007-05-08 14:40 ` Lennert Buytenhek
2007-05-08 16:59 ` Krzysztof Halasa
2007-05-09 10:21 ` Lennert Buytenhek
2007-05-10 14:08 ` Krzysztof Halasa
2007-05-08 1:19 ` [PATCH] Intel IXP4xx network drivers v.2 - Ethernet and HSS Krzysztof Halasa
2007-05-08 5:28 ` Jeff Garzik
2007-05-08 7:22 ` Michael-Luke Jones
2007-05-08 11:37 ` Lennert Buytenhek
2007-05-08 14:31 ` Krzysztof Halasa
2007-05-08 14:53 ` Lennert Buytenhek
2007-05-08 17:17 ` Krzysztof Halasa
2007-05-08 11:40 ` [PATCH 3/3] Intel IXP4xx network drivers Lennert Buytenhek
2007-05-07 10:27 ` [PATCH 2a/3] " Krzysztof Halasa
2007-05-07 20:39 ` [PATCH 0/3] " Leon Woestenberg
2007-05-07 21:21 ` Krzysztof Halasa
2007-05-08 1:40 ` Krzysztof Halasa