* [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot
@ 2013-07-02 15:15 Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file Dan Murphy
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC
  To: u-boot

This patch series has been generated in an effort to get comments on
the implementation of the DWC3 and xHCI code within U-Boot.

The v3 series adds the xHCI backport from the Linux kernel.

The first patch is the one of major concern, as it attempts to
commonize the USB headers so that they can be re-used, and to prepare
them for future USB code backports from the Linux kernel.

This code compiles for omap5 and omap4 but fails for am335x.
Before I invest any more time in this, I would like to know whether
there are any comments on the overall implementation.


 Makefile                                |    1 +
 arch/arm/cpu/armv7/omap5/hw_data.c      |   14 +
 arch/arm/cpu/armv7/omap5/prcm-regs.c    |    1 +
 arch/arm/include/asm/arch-omap5/clock.h |    4 +
 arch/arm/include/asm/omap_common.h      |    1 +
 common/cmd_usb.c                        |    6 +-
 common/usb.c                            |    1 +
 common/usb_hub.c                        |    1 +
 drivers/usb/dwc3/Makefile               |   53 +
 drivers/usb/dwc3/core.c                 |  853 ++++++
 drivers/usb/dwc3/core.h                 |  990 +++++++
 drivers/usb/dwc3/dwc3-omap.c            |  507 ++++
 drivers/usb/dwc3/dwc3-omap.h            |   41 +
 drivers/usb/dwc3/dwc3-uboot.c           |  384 +++
 drivers/usb/dwc3/ep0.c                  | 1089 +++++++
 drivers/usb/dwc3/gadget.c               | 2819 ++++++++++++++++++
 drivers/usb/dwc3/gadget.h               |  196 ++
 drivers/usb/dwc3/host.c                 |  108 +
 drivers/usb/dwc3/io.h                   |   81 +
 drivers/usb/host/Makefile               |    7 +
 drivers/usb/host/xhci-ext-caps.h        |  167 ++
 drivers/usb/host/xhci-hub.c             | 1231 ++++++++
 drivers/usb/host/xhci-mem.c             | 2554 ++++++++++++++++
 drivers/usb/host/xhci-plat.c            |  207 ++
 drivers/usb/host/xhci-ring.c            | 4059 ++++++++++++++++++++++++++
 drivers/usb/host/xhci.c                 | 4815 +++++++++++++++++++++++++++++++
 drivers/usb/host/xhci.h                 | 1872 ++++++++++++
 drivers/usb/musb-new/musb_host.h        |    1 +
 drivers/usb/musb-new/usb-compat.h       |   30 -
 include/asm-generic/scatterlist.h       |   34 +
 include/configs/omap5_common.h          |   10 +
 include/linux/usb/ch11.h                |  279 ++
 include/linux/usb/gadget.h              |  184 +-
 include/linux/usb/hcd.h                 |  680 +++++
 include/linux/usb/linux-compat.h        |  280 ++
 include/linux/usb/usb-compat.h          | 1965 +++++++++++++
 include/linux/usb/usb-mod-devicetable.h |  131 +
 include/usb.h                           |  153 +-
 include/usb/lin_gadget_compat.h         |   29 +-
 39 files changed, 25593 insertions(+), 245 deletions(-)


* [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 15:18   ` Nishanth Menon
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 2/7] USB: Adapt the usb-compat.h to uboot and fix compiler errors Dan Murphy
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC
  To: u-boot

Backport the kernel USB header file include/linux/usb.h, which
contains the structures and constants used by the Linux kernel drivers.
Rename usb.h to usb-compat.h so that it is not confused with the
U-Boot include/usb.h file.

Kernel base commit ID: aa4f608478acb7ed69dfcff4f3c404100b78ac49

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 include/linux/usb/usb-compat.h | 1815 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1815 insertions(+)
 create mode 100644 include/linux/usb/usb-compat.h

diff --git a/include/linux/usb/usb-compat.h b/include/linux/usb/usb-compat.h
new file mode 100644
index 0000000..a0bee5a
--- /dev/null
+++ b/include/linux/usb/usb-compat.h
@@ -0,0 +1,1815 @@
+#ifndef __LINUX_USB_H
+#define __LINUX_USB_H
+
+#include <linux/mod_devicetable.h>
+#include <linux/usb/ch9.h>
+
+#define USB_MAJOR			180
+#define USB_DEVICE_MAJOR		189
+
+
+#ifdef __KERNEL__
+
+#include <linux/errno.h>        /* for -ENODEV */
+#include <linux/delay.h>	/* for mdelay() */
+#include <linux/interrupt.h>	/* for in_interrupt() */
+#include <linux/list.h>		/* for struct list_head */
+#include <linux/kref.h>		/* for struct kref */
+#include <linux/device.h>	/* for struct device */
+#include <linux/fs.h>		/* for struct file_operations */
+#include <linux/completion.h>	/* for struct completion */
+#include <linux/sched.h>	/* for current && schedule_timeout */
+#include <linux/mutex.h>	/* for struct mutex */
+#include <linux/pm_runtime.h>	/* for runtime PM */
+
+struct usb_device;
+struct usb_driver;
+struct wusb_dev;
+
+/*-------------------------------------------------------------------------*/
+
+/*
+ * Host-side wrappers for standard USB descriptors ... these are parsed
+ * from the data provided by devices.  Parsing turns them from a flat
+ * sequence of descriptors into a hierarchy:
+ *
+ *  - devices have one (usually) or more configs;
+ *  - configs have one (often) or more interfaces;
+ *  - interfaces have one (usually) or more settings;
+ *  - each interface setting has zero or (usually) more endpoints.
+ *  - a SuperSpeed endpoint has a companion descriptor
+ *
+ * And there might be other descriptors mixed in with those.
+ *
+ * Devices may also have class-specific or vendor-specific descriptors.
+ */
+
+struct ep_device;
+
+/**
+ * struct usb_host_endpoint - host-side endpoint descriptor and queue
+ * @desc: descriptor for this endpoint, wMaxPacketSize in native byteorder
+ * @ss_ep_comp: SuperSpeed companion descriptor for this endpoint
+ * @urb_list: urbs queued to this endpoint; maintained by usbcore
+ * @hcpriv: for use by HCD; typically holds hardware dma queue head (QH)
+ *	with one or more transfer descriptors (TDs) per urb
+ * @ep_dev: ep_device for sysfs info
+ * @extra: descriptors following this endpoint in the configuration
+ * @extralen: how many bytes of "extra" are valid
+ * @enabled: URBs may be submitted to this endpoint
+ *
+ * USB requests are always queued to a given endpoint, identified by a
+ * descriptor within an active interface in a given USB configuration.
+ */
+struct usb_host_endpoint {
+	struct usb_endpoint_descriptor		desc;
+	struct usb_ss_ep_comp_descriptor	ss_ep_comp;
+	struct list_head		urb_list;
+	void				*hcpriv;
+	struct ep_device		*ep_dev;	/* For sysfs info */
+
+	unsigned char *extra;   /* Extra descriptors */
+	int extralen;
+	int enabled;
+};
+
+/* host-side wrapper for one interface setting's parsed descriptors */
+struct usb_host_interface {
+	struct usb_interface_descriptor	desc;
+
+	int extralen;
+	unsigned char *extra;   /* Extra descriptors */
+
+	/* array of desc.bNumEndpoints endpoints associated with this
+	 * interface setting.  these will be in no particular order.
+	 */
+	struct usb_host_endpoint *endpoint;
+
+	char *string;		/* iInterface string, if present */
+};
+
+enum usb_interface_condition {
+	USB_INTERFACE_UNBOUND = 0,
+	USB_INTERFACE_BINDING,
+	USB_INTERFACE_BOUND,
+	USB_INTERFACE_UNBINDING,
+};
+
+/**
+ * struct usb_interface - what usb device drivers talk to
+ * @altsetting: array of interface structures, one for each alternate
+ *	setting that may be selected.  Each one includes a set of
+ *	endpoint configurations.  They will be in no particular order.
+ * @cur_altsetting: the current altsetting.
+ * @num_altsetting: number of altsettings defined.
+ * @intf_assoc: interface association descriptor
+ * @minor: the minor number assigned to this interface, if this
+ *	interface is bound to a driver that uses the USB major number.
+ *	If this interface does not use the USB major, this field should
+ *	be unused.  The driver should set this value in the probe()
+ *	function of the driver, after it has been assigned a minor
+ *	number from the USB core by calling usb_register_dev().
+ * @condition: binding state of the interface: not bound, binding
+ *	(in probe()), bound to a driver, or unbinding (in disconnect())
+ * @sysfs_files_created: sysfs attributes exist
+ * @ep_devs_created: endpoint child pseudo-devices exist
+ * @unregistering: flag set when the interface is being unregistered
+ * @needs_remote_wakeup: flag set when the driver requires remote-wakeup
+ *	capability during autosuspend.
+ * @needs_altsetting0: flag set when a set-interface request for altsetting 0
+ *	has been deferred.
+ * @needs_binding: flag set when the driver should be re-probed or unbound
+ *	following a reset or suspend operation it doesn't support.
+ * @dev: driver model's view of this device
+ * @usb_dev: if an interface is bound to the USB major, this will point
+ *	to the sysfs representation for that device.
+ * @pm_usage_cnt: PM usage counter for this interface
+ * @reset_ws: Used for scheduling resets from atomic context.
+ * @reset_running: set to 1 if the interface is currently running a
+ *      queued reset so that usb_cancel_queued_reset() doesn't try to
+ *      remove from the workqueue when running inside the worker
+ *      thread. See __usb_queue_reset_device().
+ * @resetting_device: USB core reset the device, so use alt setting 0 as
+ *	current; needs bandwidth alloc after reset.
+ *
+ * USB device drivers attach to interfaces on a physical device.  Each
+ * interface encapsulates a single high level function, such as feeding
+ * an audio stream to a speaker or reporting a change in a volume control.
+ * Many USB devices only have one interface.  The protocol used to talk to
+ * an interface's endpoints can be defined in a usb "class" specification,
+ * or by a product's vendor.  The (default) control endpoint is part of
+ * every interface, but is never listed among the interface's descriptors.
+ *
+ * The driver that is bound to the interface can use standard driver model
+ * calls such as dev_get_drvdata() on the dev member of this structure.
+ *
+ * Each interface may have alternate settings.  The initial configuration
+ * of a device sets altsetting 0, but the device driver can change
+ * that setting using usb_set_interface().  Alternate settings are often
+ * used to control the use of periodic endpoints, such as by having
+ * different endpoints use different amounts of reserved USB bandwidth.
+ * All standards-conformant USB devices that use isochronous endpoints
+ * will use them in non-default settings.
+ *
+ * The USB specification says that alternate setting numbers must run from
+ * 0 to one less than the total number of alternate settings.  But some
+ * devices manage to mess this up, and the structures aren't necessarily
+ * stored in numerical order anyhow.  Use usb_altnum_to_altsetting() to
+ * look up an alternate setting in the altsetting array based on its number.
+ */
+struct usb_interface {
+	/* array of alternate settings for this interface,
+	 * stored in no particular order */
+	struct usb_host_interface *altsetting;
+
+	struct usb_host_interface *cur_altsetting;	/* the currently
+					 * active alternate setting */
+	unsigned num_altsetting;	/* number of alternate settings */
+
+	/* If there is an interface association descriptor then it will list
+	 * the associated interfaces */
+	struct usb_interface_assoc_descriptor *intf_assoc;
+
+	int minor;			/* minor number this interface is
+					 * bound to */
+	enum usb_interface_condition condition;		/* state of binding */
+	unsigned sysfs_files_created:1;	/* the sysfs attributes exist */
+	unsigned ep_devs_created:1;	/* endpoint "devices" exist */
+	unsigned unregistering:1;	/* unregistration is in progress */
+	unsigned needs_remote_wakeup:1;	/* driver requires remote wakeup */
+	unsigned needs_altsetting0:1;	/* switch to altsetting 0 is pending */
+	unsigned needs_binding:1;	/* needs delayed unbind/rebind */
+	unsigned reset_running:1;
+	unsigned resetting_device:1;	/* true: bandwidth alloc after reset */
+
+	struct device dev;		/* interface specific device info */
+	struct device *usb_dev;
+	atomic_t pm_usage_cnt;		/* usage counter for autosuspend */
+	struct work_struct reset_ws;	/* for resets in atomic context */
+};
+#define	to_usb_interface(d) container_of(d, struct usb_interface, dev)
+
+static inline void *usb_get_intfdata(struct usb_interface *intf)
+{
+	return dev_get_drvdata(&intf->dev);
+}
+
+static inline void usb_set_intfdata(struct usb_interface *intf, void *data)
+{
+	dev_set_drvdata(&intf->dev, data);
+}
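+
+/*
+ * Example (illustrative sketch; "foo" and struct foo_priv are
+ * hypothetical): an interface driver typically stores its private
+ * state in probe() and retrieves it again in disconnect().
+ *
+ *	static int foo_probe(struct usb_interface *intf,
+ *			     const struct usb_device_id *id)
+ *	{
+ *		struct foo_priv *priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ *
+ *		if (!priv)
+ *			return -ENOMEM;
+ *		usb_set_intfdata(intf, priv);
+ *		return 0;
+ *	}
+ *
+ *	static void foo_disconnect(struct usb_interface *intf)
+ *	{
+ *		struct foo_priv *priv = usb_get_intfdata(intf);
+ *
+ *		usb_set_intfdata(intf, NULL);
+ *		kfree(priv);
+ *	}
+ */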
+
+struct usb_interface *usb_get_intf(struct usb_interface *intf);
+void usb_put_intf(struct usb_interface *intf);
+
+/* this maximum is arbitrary */
+#define USB_MAXINTERFACES	32
+#define USB_MAXIADS		(USB_MAXINTERFACES/2)
+
+/**
+ * struct usb_interface_cache - long-term representation of a device interface
+ * @num_altsetting: number of altsettings defined.
+ * @ref: reference counter.
+ * @altsetting: variable-length array of interface structures, one for
+ *	each alternate setting that may be selected.  Each one includes a
+ *	set of endpoint configurations.  They will be in no particular order.
+ *
+ * These structures persist for the lifetime of a usb_device, unlike
+ * struct usb_interface (which persists only as long as its configuration
+ * is installed).  The altsetting arrays can be accessed through these
+ * structures at any time, permitting comparison of configurations and
+ * providing support for the /proc/bus/usb/devices pseudo-file.
+ */
+struct usb_interface_cache {
+	unsigned num_altsetting;	/* number of alternate settings */
+	struct kref ref;		/* reference counter */
+
+	/* variable-length array of alternate settings for this interface,
+	 * stored in no particular order */
+	struct usb_host_interface altsetting[0];
+};
+#define	ref_to_usb_interface_cache(r) \
+		container_of(r, struct usb_interface_cache, ref)
+#define	altsetting_to_usb_interface_cache(a) \
+		container_of(a, struct usb_interface_cache, altsetting[0])
+
+/**
+ * struct usb_host_config - representation of a device's configuration
+ * @desc: the device's configuration descriptor.
+ * @string: pointer to the cached version of the iConfiguration string, if
+ *	present for this configuration.
+ * @intf_assoc: list of any interface association descriptors in this config
+ * @interface: array of pointers to usb_interface structures, one for each
+ *	interface in the configuration.  The number of interfaces is stored
+ *	in desc.bNumInterfaces.  These pointers are valid only while the
+ *	configuration is active.
+ * @intf_cache: array of pointers to usb_interface_cache structures, one
+ *	for each interface in the configuration.  These structures exist
+ *	for the entire life of the device.
+ * @extra: pointer to buffer containing all extra descriptors associated
+ *	with this configuration (those preceding the first interface
+ *	descriptor).
+ * @extralen: length of the extra descriptors buffer.
+ *
+ * USB devices may have multiple configurations, but only one can be active
+ * at any time.  Each encapsulates a different operational environment;
+ * for example, a dual-speed device would have separate configurations for
+ * full-speed and high-speed operation.  The number of configurations
+ * available is stored in the device descriptor as bNumConfigurations.
+ *
+ * A configuration can contain multiple interfaces.  Each corresponds to
+ * a different function of the USB device, and all are available whenever
+ * the configuration is active.  The USB standard says that interfaces
+ * are supposed to be numbered from 0 to desc.bNumInterfaces-1, but a lot
+ * of devices get this wrong.  In addition, the interface array is not
+ * guaranteed to be sorted in numerical order.  Use usb_ifnum_to_if() to
+ * look up an interface entry based on its number.
+ *
+ * Device drivers should not attempt to activate configurations.  The choice
+ * of which configuration to install is a policy decision based on such
+ * considerations as available power, functionality provided, and the user's
+ * desires (expressed through userspace tools).  However, drivers can call
+ * usb_reset_configuration() to reinitialize the current configuration and
+ * all its interfaces.
+ */
+struct usb_host_config {
+	struct usb_config_descriptor	desc;
+
+	char *string;		/* iConfiguration string, if present */
+
+	/* List of any Interface Association Descriptors in this
+	 * configuration. */
+	struct usb_interface_assoc_descriptor *intf_assoc[USB_MAXIADS];
+
+	/* the interfaces associated with this configuration,
+	 * stored in no particular order */
+	struct usb_interface *interface[USB_MAXINTERFACES];
+
+	/* Interface information available even when this is not the
+	 * active configuration */
+	struct usb_interface_cache *intf_cache[USB_MAXINTERFACES];
+
+	unsigned char *extra;   /* Extra descriptors */
+	int extralen;
+};
+
+/* USB2.0 and USB3.0 device BOS descriptor set */
+struct usb_host_bos {
+	struct usb_bos_descriptor	*desc;
+
+	/* wireless cap descriptor is handled by wusb */
+	struct usb_ext_cap_descriptor	*ext_cap;
+	struct usb_ss_cap_descriptor	*ss_cap;
+	struct usb_ss_container_id_descriptor	*ss_id;
+};
+
+int __usb_get_extra_descriptor(char *buffer, unsigned size,
+	unsigned char type, void **ptr);
+#define usb_get_extra_descriptor(ifpoint, type, ptr) \
+				__usb_get_extra_descriptor((ifpoint)->extra, \
+				(ifpoint)->extralen, \
+				type, (void **)ptr)
+
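+/*
+ * Example (illustrative; the 0x24 descriptor type and struct
+ * foo_cs_descriptor are hypothetical): search the extra descriptors
+ * that follow an interface setting for a class-specific descriptor.
+ *
+ *	struct foo_cs_descriptor *cs;
+ *
+ *	if (!usb_get_extra_descriptor(intf->cur_altsetting, 0x24, &cs))
+ *		... cs now points at the matching descriptor ...
+ */
+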
+/* ----------------------------------------------------------------------- */
+
+/* USB device number allocation bitmap */
+struct usb_devmap {
+	unsigned long devicemap[128 / (8*sizeof(unsigned long))];
+};
+
+/*
+ * Allocated per bus (tree of devices) we have:
+ */
+struct usb_bus {
+	struct device *controller;	/* host/master side hardware */
+	int busnum;			/* Bus number (in order of reg) */
+	const char *bus_name;		/* stable id (PCI slot_name etc) */
+	u8 uses_dma;			/* Does the host controller use DMA? */
+	u8 uses_pio_for_control;	/*
+					 * Does the host controller use PIO
+					 * for control transfers?
+					 */
+	u8 otg_port;			/* 0, or number of OTG/HNP port */
+	unsigned is_b_host:1;		/* true during some HNP roleswitches */
+	unsigned b_hnp_enable:1;	/* OTG: did A-Host enable HNP? */
+	unsigned no_stop_on_short:1;    /*
+					 * Quirk: some controllers don't stop
+					 * the ep queue on a short transfer
+					 * with the URB_SHORT_NOT_OK flag set.
+					 */
+	unsigned sg_tablesize;		/* 0 or largest number of sg list entries */
+
+	int devnum_next;		/* Next open device number in
+					 * round-robin allocation */
+
+	struct usb_devmap devmap;	/* device address allocation map */
+	struct usb_device *root_hub;	/* Root hub */
+	struct usb_bus *hs_companion;	/* Companion EHCI bus, if any */
+	struct list_head bus_list;	/* list of busses */
+
+	int bandwidth_allocated;	/* on this bus: how much of the time
+					 * reserved for periodic (intr/iso)
+					 * requests is used, on average?
+					 * Units: microseconds/frame.
+					 * Limits: Full/low speed reserve 90%,
+					 * while high speed reserves 80%.
+					 */
+	int bandwidth_int_reqs;		/* number of Interrupt requests */
+	int bandwidth_isoc_reqs;	/* number of Isoc. requests */
+
+	unsigned resuming_ports;	/* bit array: resuming root-hub ports */
+
+#if defined(CONFIG_USB_MON) || defined(CONFIG_USB_MON_MODULE)
+	struct mon_bus *mon_bus;	/* non-null when associated */
+	int monitored;			/* non-zero when monitored */
+#endif
+};
+
+/* ----------------------------------------------------------------------- */
+
+/* This is arbitrary.
+ * From USB 2.0 spec Table 11-13, offset 7, a hub can
+ * have up to 255 ports. The most yet reported is 10.
+ *
+ * Current Wireless USB host hardware (Intel i1480 for example) allows
+ * up to 22 devices to connect. Upcoming hardware might raise that
+ * limit. Because the arrays need to add a bit for hub status data, we
+ * do 31, so plus one evens out to four bytes.
+ */
+#define USB_MAXCHILDREN		(31)
+
+struct usb_tt;
+
+enum usb_device_removable {
+	USB_DEVICE_REMOVABLE_UNKNOWN = 0,
+	USB_DEVICE_REMOVABLE,
+	USB_DEVICE_FIXED,
+};
+
+enum usb_port_connect_type {
+	USB_PORT_CONNECT_TYPE_UNKNOWN = 0,
+	USB_PORT_CONNECT_TYPE_HOT_PLUG,
+	USB_PORT_CONNECT_TYPE_HARD_WIRED,
+	USB_PORT_NOT_USED,
+};
+
+/*
+ * USB 3.0 Link Power Management (LPM) parameters.
+ *
+ * PEL and SEL are USB 3.0 Link PM latencies for device-initiated LPM exit.
+ * MEL is the USB 3.0 Link PM latency for host-initiated LPM exit.
+ * All three are stored in nanoseconds.
+ */
+struct usb3_lpm_parameters {
+	/*
+	 * Maximum exit latency (MEL) for the host to send a packet to the
+	 * device (either a Ping for isoc endpoints, or a data packet for
+	 * interrupt endpoints), the hubs to decode the packet, and for all hubs
+	 * in the path to transition the links to U0.
+	 */
+	unsigned int mel;
+	/*
+	 * Maximum exit latency for a device-initiated LPM transition to bring
+	 * all links into U0.  Abbreviated as "PEL" in section 9.4.12 of the USB
+	 * 3.0 spec, with no explanation of what "P" stands for.  "Path"?
+	 */
+	unsigned int pel;
+
+	/*
+	 * The System Exit Latency (SEL) includes PEL, and three other
+	 * latencies.  After a device initiates a U0 transition, it will take
+	 * some time from when the device sends the ERDY to when it will finally
+	 * receive the data packet.  Basically, SEL should be the worst-case
+	 * latency from when a device starts initiating a U0 transition to when
+	 * it will get data.
+	 */
+	unsigned int sel;
+	/*
+	 * The idle timeout value that is currently programmed into the parent
+	 * hub for this device.  When the timer counts to zero, the parent hub
+	 * will initiate an LPM transition to either U1 or U2.
+	 */
+	int timeout;
+};
+
+/**
+ * struct usb_device - kernel's representation of a USB device
+ * @devnum: device number; address on a USB bus
+ * @devpath: device ID string for use in messages (e.g., /port/...)
+ * @route: tree topology hex string for use with xHCI
+ * @state: device state: configured, not attached, etc.
+ * @speed: device speed: high/full/low (or error)
+ * @tt: Transaction Translator info; used with low/full speed dev, highspeed hub
+ * @ttport: device port on that tt hub
+ * @toggle: one bit for each endpoint, with ([0] = IN, [1] = OUT) endpoints
+ * @parent: our hub, unless we're the root
+ * @bus: bus we're part of
+ * @ep0: endpoint 0 data (default control pipe)
+ * @dev: generic device interface
+ * @descriptor: USB device descriptor
+ * @bos: USB device BOS descriptor set
+ * @config: all of the device's configs
+ * @actconfig: the active configuration
+ * @ep_in: array of IN endpoints
+ * @ep_out: array of OUT endpoints
+ * @rawdescriptors: raw descriptors for each config
+ * @bus_mA: Current available from the bus
+ * @portnum: parent port number (origin 1)
+ * @level: number of USB hub ancestors
+ * @can_submit: URBs may be submitted
+ * @persist_enabled:  USB_PERSIST enabled for this device
+ * @have_langid: whether string_langid is valid
+ * @authorized: policy has said we can use it;
+ *	(user space) policy determines if we authorize this device to be
+ *	used or not. By default, wired USB devices are authorized.
+ *	WUSB devices are not, until we authorize them from user space.
+ *	FIXME -- complete doc
+ * @authenticated: Crypto authentication passed
+ * @wusb: device is Wireless USB
+ * @lpm_capable: device supports LPM
+ * @usb2_hw_lpm_capable: device can perform USB2 hardware LPM
+ * @usb2_hw_lpm_enabled: USB2 hardware LPM enabled
+ * @usb3_lpm_enabled: USB3 hardware LPM enabled
+ * @string_langid: language ID for strings
+ * @product: iProduct string, if present (static)
+ * @manufacturer: iManufacturer string, if present (static)
+ * @serial: iSerialNumber string, if present (static)
+ * @filelist: usbfs files that are open to this device
+ * @maxchild: number of ports if hub
+ * @quirks: quirks of the whole device
+ * @urbnum: number of URBs submitted for the whole device
+ * @active_duration: total time device is not suspended
+ * @connect_time: time device was first connected
+ * @do_remote_wakeup:  remote wakeup should be enabled
+ * @reset_resume: needs reset instead of resume
+ * @port_is_suspended: the upstream port is suspended (L2 or U3)
+ * @wusb_dev: if this is a Wireless USB device, link to the WUSB
+ *	specific data for the device.
+ * @slot_id: Slot ID assigned by xHCI
+ * @removable: Device can be physically removed from this port
+ * @u1_params: exit latencies for USB3 U1 LPM state, and hub-initiated timeout.
+ * @u2_params: exit latencies for USB3 U2 LPM state, and hub-initiated timeout.
+ * @lpm_disable_count: Ref count used by usb_disable_lpm() and usb_enable_lpm()
+ *	to keep track of the number of functions that require USB 3.0 Link Power
+ *	Management to be disabled for this usb_device.  This count should only
+ *	be manipulated by those functions, while the bandwidth_mutex is held.
+ *
+ * Notes:
+ * Usbcore drivers should not set usbdev->state directly.  Instead use
+ * usb_set_device_state().
+ */
+struct usb_device {
+	int		devnum;
+	char		devpath[16];
+	u32		route;
+	enum usb_device_state	state;
+	enum usb_device_speed	speed;
+
+	struct usb_tt	*tt;
+	int		ttport;
+
+	unsigned int toggle[2];
+
+	struct usb_device *parent;
+	struct usb_bus *bus;
+	struct usb_host_endpoint ep0;
+
+	struct device dev;
+
+	struct usb_device_descriptor descriptor;
+	struct usb_host_bos *bos;
+	struct usb_host_config *config;
+
+	struct usb_host_config *actconfig;
+	struct usb_host_endpoint *ep_in[16];
+	struct usb_host_endpoint *ep_out[16];
+
+	char **rawdescriptors;
+
+	unsigned short bus_mA;
+	u8 portnum;
+	u8 level;
+
+	unsigned can_submit:1;
+	unsigned persist_enabled:1;
+	unsigned have_langid:1;
+	unsigned authorized:1;
+	unsigned authenticated:1;
+	unsigned wusb:1;
+	unsigned lpm_capable:1;
+	unsigned usb2_hw_lpm_capable:1;
+	unsigned usb2_hw_lpm_enabled:1;
+	unsigned usb3_lpm_enabled:1;
+	int string_langid;
+
+	/* static strings from the device */
+	char *product;
+	char *manufacturer;
+	char *serial;
+
+	struct list_head filelist;
+
+	int maxchild;
+
+	u32 quirks;
+	atomic_t urbnum;
+
+	unsigned long active_duration;
+
+#ifdef CONFIG_PM
+	unsigned long connect_time;
+
+	unsigned do_remote_wakeup:1;
+	unsigned reset_resume:1;
+	unsigned port_is_suspended:1;
+#endif
+	struct wusb_dev *wusb_dev;
+	int slot_id;
+	enum usb_device_removable removable;
+	struct usb3_lpm_parameters u1_params;
+	struct usb3_lpm_parameters u2_params;
+	unsigned lpm_disable_count;
+};
+#define	to_usb_device(d) container_of(d, struct usb_device, dev)
+
+static inline struct usb_device *interface_to_usbdev(struct usb_interface *intf)
+{
+	return to_usb_device(intf->dev.parent);
+}
+
+extern struct usb_device *usb_get_dev(struct usb_device *dev);
+extern void usb_put_dev(struct usb_device *dev);
+extern struct usb_device *usb_hub_find_child(struct usb_device *hdev,
+	int port1);
+
+/**
+ * usb_hub_for_each_child - iterate over all child devices on the hub
+ * @hdev:  USB device belonging to the usb hub
+ * @port1: portnum associated with child device
+ * @child: child device pointer
+ */
+#define usb_hub_for_each_child(hdev, port1, child) \
+	for (port1 = 1,	child =	usb_hub_find_child(hdev, port1); \
+			port1 <= hdev->maxchild; \
+			child = usb_hub_find_child(hdev, ++port1)) \
+		if (!child) continue; else
+
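+/*
+ * Example (illustrative): count the devices currently connected below
+ * a hub "hdev"; ports with nothing attached are skipped by the macro.
+ *
+ *	struct usb_device *child;
+ *	int port1, connected = 0;
+ *
+ *	usb_hub_for_each_child(hdev, port1, child)
+ *		connected++;
+ */
+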
+/* USB device locking */
+#define usb_lock_device(udev)		device_lock(&(udev)->dev)
+#define usb_unlock_device(udev)		device_unlock(&(udev)->dev)
+#define usb_trylock_device(udev)	device_trylock(&(udev)->dev)
+extern int usb_lock_device_for_reset(struct usb_device *udev,
+				     const struct usb_interface *iface);
+
+/* USB port reset for device reinitialization */
+extern int usb_reset_device(struct usb_device *dev);
+extern void usb_queue_reset_device(struct usb_interface *dev);
+
+#ifdef CONFIG_ACPI
+extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+	bool enable);
+extern bool usb_acpi_power_manageable(struct usb_device *hdev, int index);
+#else
+static inline int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+	bool enable) { return 0; }
+static inline bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
+	{ return true; }
+#endif
+
+/* USB autosuspend and autoresume */
+#ifdef CONFIG_PM_RUNTIME
+extern void usb_enable_autosuspend(struct usb_device *udev);
+extern void usb_disable_autosuspend(struct usb_device *udev);
+
+extern int usb_autopm_get_interface(struct usb_interface *intf);
+extern void usb_autopm_put_interface(struct usb_interface *intf);
+extern int usb_autopm_get_interface_async(struct usb_interface *intf);
+extern void usb_autopm_put_interface_async(struct usb_interface *intf);
+extern void usb_autopm_get_interface_no_resume(struct usb_interface *intf);
+extern void usb_autopm_put_interface_no_suspend(struct usb_interface *intf);
+
+static inline void usb_mark_last_busy(struct usb_device *udev)
+{
+	pm_runtime_mark_last_busy(&udev->dev);
+}
+
+#else
+
+static inline int usb_enable_autosuspend(struct usb_device *udev)
+{ return 0; }
+static inline int usb_disable_autosuspend(struct usb_device *udev)
+{ return 0; }
+
+static inline int usb_autopm_get_interface(struct usb_interface *intf)
+{ return 0; }
+static inline int usb_autopm_get_interface_async(struct usb_interface *intf)
+{ return 0; }
+
+static inline void usb_autopm_put_interface(struct usb_interface *intf)
+{ }
+static inline void usb_autopm_put_interface_async(struct usb_interface *intf)
+{ }
+static inline void usb_autopm_get_interface_no_resume(
+		struct usb_interface *intf)
+{ }
+static inline void usb_autopm_put_interface_no_suspend(
+		struct usb_interface *intf)
+{ }
+static inline void usb_mark_last_busy(struct usb_device *udev)
+{ }
+#endif
+
+extern int usb_disable_lpm(struct usb_device *udev);
+extern void usb_enable_lpm(struct usb_device *udev);
+/* Same as above, but these functions lock/unlock the bandwidth_mutex. */
+extern int usb_unlocked_disable_lpm(struct usb_device *udev);
+extern void usb_unlocked_enable_lpm(struct usb_device *udev);
+
+extern int usb_disable_ltm(struct usb_device *udev);
+extern void usb_enable_ltm(struct usb_device *udev);
+
+static inline bool usb_device_supports_ltm(struct usb_device *udev)
+{
+	if (udev->speed != USB_SPEED_SUPER || !udev->bos || !udev->bos->ss_cap)
+		return false;
+	return udev->bos->ss_cap->bmAttributes & USB_LTM_SUPPORT;
+}
+
+
+/*-------------------------------------------------------------------------*/
+
+/* for drivers using iso endpoints */
+extern int usb_get_current_frame_number(struct usb_device *usb_dev);
+
+/* Sets up a group of bulk endpoints to support multiple stream IDs. */
+extern int usb_alloc_streams(struct usb_interface *interface,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		unsigned int num_streams, gfp_t mem_flags);
+
+/* Reverts a group of bulk endpoints back to not using stream IDs. */
+extern void usb_free_streams(struct usb_interface *interface,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		gfp_t mem_flags);
+
+/* used these for multi-interface device registration */
+extern int usb_driver_claim_interface(struct usb_driver *driver,
+			struct usb_interface *iface, void *priv);
+
+/**
+ * usb_interface_claimed - returns true iff an interface is claimed
+ * @iface: the interface being checked
+ *
+ * Returns true (nonzero) iff the interface is claimed, else false (zero).
+ * Callers must own the driver model's usb bus readlock.  So driver
+ * probe() entries don't need extra locking, but other call contexts
+ * may need to explicitly claim that lock.
+ *
+ */
+static inline int usb_interface_claimed(struct usb_interface *iface)
+{
+	return (iface->dev.driver != NULL);
+}
+
+extern void usb_driver_release_interface(struct usb_driver *driver,
+			struct usb_interface *iface);
+const struct usb_device_id *usb_match_id(struct usb_interface *interface,
+					 const struct usb_device_id *id);
+extern int usb_match_one_id(struct usb_interface *interface,
+			    const struct usb_device_id *id);
+
+extern struct usb_interface *usb_find_interface(struct usb_driver *drv,
+		int minor);
+extern struct usb_interface *usb_ifnum_to_if(const struct usb_device *dev,
+		unsigned ifnum);
+extern struct usb_host_interface *usb_altnum_to_altsetting(
+		const struct usb_interface *intf, unsigned int altnum);
+extern struct usb_host_interface *usb_find_alt_setting(
+		struct usb_host_config *config,
+		unsigned int iface_num,
+		unsigned int alt_num);
+
+
+/**
+ * usb_make_path - returns stable device path in the usb tree
+ * @dev: the device whose path is being constructed
+ * @buf: where to put the string
+ * @size: how big is "buf"?
+ *
+ * Returns length of the string (> 0) or negative if size was too small.
+ *
+ * This identifier is intended to be "stable", reflecting physical paths in
+ * hardware such as physical bus addresses for host controllers or ports on
+ * USB hubs.  That makes it stay the same until systems are physically
+ * reconfigured, by re-cabling a tree of USB devices or by moving USB host
+ * controllers.  Adding and removing devices, including virtual root hubs
+ * in host controller driver modules, does not change these path identifiers;
+ * neither does rebooting or re-enumerating.  These are more useful identifiers
+ * than changeable ("unstable") ones like bus numbers or device addresses.
+ *
+ * With a partial exception for devices connected to USB 2.0 root hubs, these
+ * identifiers are also predictable.  So long as the device tree isn't changed,
+ * plugging any USB device into a given hub port always gives it the same path.
+ * Because of the use of "companion" controllers, devices connected to ports on
+ * USB 2.0 root hubs (EHCI host controllers) will get one path ID if they are
+ * high speed, and a different one if they are full or low speed.
+ */
+static inline int usb_make_path(struct usb_device *dev, char *buf, size_t size)
+{
+	int actual;
+	actual = snprintf(buf, size, "usb-%s-%s", dev->bus->bus_name,
+			  dev->devpath);
+	return (actual >= (int)size) ? -1 : actual;
+}
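+
+/*
+ * Example (illustrative): log a device's stable topology path.
+ *
+ *	char path[64];
+ *
+ *	if (usb_make_path(udev, path, sizeof(path)) > 0)
+ *		dev_info(&udev->dev, "stable path: %s\n", path);
+ */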
+
+/*-------------------------------------------------------------------------*/
+
+#define USB_DEVICE_ID_MATCH_DEVICE \
+		(USB_DEVICE_ID_MATCH_VENDOR | USB_DEVICE_ID_MATCH_PRODUCT)
+#define USB_DEVICE_ID_MATCH_DEV_RANGE \
+		(USB_DEVICE_ID_MATCH_DEV_LO | USB_DEVICE_ID_MATCH_DEV_HI)
+#define USB_DEVICE_ID_MATCH_DEVICE_AND_VERSION \
+		(USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_DEV_RANGE)
+#define USB_DEVICE_ID_MATCH_DEV_INFO \
+		(USB_DEVICE_ID_MATCH_DEV_CLASS | \
+		USB_DEVICE_ID_MATCH_DEV_SUBCLASS | \
+		USB_DEVICE_ID_MATCH_DEV_PROTOCOL)
+#define USB_DEVICE_ID_MATCH_INT_INFO \
+		(USB_DEVICE_ID_MATCH_INT_CLASS | \
+		USB_DEVICE_ID_MATCH_INT_SUBCLASS | \
+		USB_DEVICE_ID_MATCH_INT_PROTOCOL)
+
+/**
+ * USB_DEVICE - macro used to describe a specific usb device
+ * @vend: the 16 bit USB Vendor ID
+ * @prod: the 16 bit USB Product ID
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific device.
+ */
+#define USB_DEVICE(vend, prod) \
+	.match_flags = USB_DEVICE_ID_MATCH_DEVICE, \
+	.idVendor = (vend), \
+	.idProduct = (prod)
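+
+/*
+ * Example (illustrative; the vendor/product IDs are placeholders and
+ * the empty entry terminates the table):
+ *
+ *	static const struct usb_device_id foo_id_table[] = {
+ *		{ USB_DEVICE(0x1234, 0x5678) },
+ *		{ }
+ *	};
+ *	MODULE_DEVICE_TABLE(usb, foo_id_table);
+ */
+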
+/**
+ * USB_DEVICE_VER - describe a specific usb device with a version range
+ * @vend: the 16 bit USB Vendor ID
+ * @prod: the 16 bit USB Product ID
+ * @lo: the bcdDevice_lo value
+ * @hi: the bcdDevice_hi value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific device, with a version range.
+ */
+#define USB_DEVICE_VER(vend, prod, lo, hi) \
+	.match_flags = USB_DEVICE_ID_MATCH_DEVICE_AND_VERSION, \
+	.idVendor = (vend), \
+	.idProduct = (prod), \
+	.bcdDevice_lo = (lo), \
+	.bcdDevice_hi = (hi)
+
+/**
+ * USB_DEVICE_INTERFACE_CLASS - describe a usb device with a specific interface class
+ * @vend: the 16 bit USB Vendor ID
+ * @prod: the 16 bit USB Product ID
+ * @cl: bInterfaceClass value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific interface class of devices.
+ */
+#define USB_DEVICE_INTERFACE_CLASS(vend, prod, cl) \
+	.match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+		       USB_DEVICE_ID_MATCH_INT_CLASS, \
+	.idVendor = (vend), \
+	.idProduct = (prod), \
+	.bInterfaceClass = (cl)
+
+/**
+ * USB_DEVICE_INTERFACE_PROTOCOL - describe a usb device with a specific interface protocol
+ * @vend: the 16 bit USB Vendor ID
+ * @prod: the 16 bit USB Product ID
+ * @pr: bInterfaceProtocol value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific interface protocol of devices.
+ */
+#define USB_DEVICE_INTERFACE_PROTOCOL(vend, prod, pr) \
+	.match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+		       USB_DEVICE_ID_MATCH_INT_PROTOCOL, \
+	.idVendor = (vend), \
+	.idProduct = (prod), \
+	.bInterfaceProtocol = (pr)
+
+/**
+ * USB_DEVICE_INTERFACE_NUMBER - describe a usb device with a specific interface number
+ * @vend: the 16 bit USB Vendor ID
+ * @prod: the 16 bit USB Product ID
+ * @num: bInterfaceNumber value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific interface number of devices.
+ */
+#define USB_DEVICE_INTERFACE_NUMBER(vend, prod, num) \
+	.match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+		       USB_DEVICE_ID_MATCH_INT_NUMBER, \
+	.idVendor = (vend), \
+	.idProduct = (prod), \
+	.bInterfaceNumber = (num)
+
+/**
+ * USB_DEVICE_INFO - macro used to describe a class of usb devices
+ * @cl: bDeviceClass value
+ * @sc: bDeviceSubClass value
+ * @pr: bDeviceProtocol value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific class of devices.
+ */
+#define USB_DEVICE_INFO(cl, sc, pr) \
+	.match_flags = USB_DEVICE_ID_MATCH_DEV_INFO, \
+	.bDeviceClass = (cl), \
+	.bDeviceSubClass = (sc), \
+	.bDeviceProtocol = (pr)
+
+/**
+ * USB_INTERFACE_INFO - macro used to describe a class of usb interfaces
+ * @cl: bInterfaceClass value
+ * @sc: bInterfaceSubClass value
+ * @pr: bInterfaceProtocol value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific class of interfaces.
+ */
+#define USB_INTERFACE_INFO(cl, sc, pr) \
+	.match_flags = USB_DEVICE_ID_MATCH_INT_INFO, \
+	.bInterfaceClass = (cl), \
+	.bInterfaceSubClass = (sc), \
+	.bInterfaceProtocol = (pr)
+
+/**
+ * USB_DEVICE_AND_INTERFACE_INFO - describe a specific usb device with a class of usb interfaces
+ * @vend: the 16 bit USB Vendor ID
+ * @prod: the 16 bit USB Product ID
+ * @cl: bInterfaceClass value
+ * @sc: bInterfaceSubClass value
+ * @pr: bInterfaceProtocol value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific device with a specific class of interfaces.
+ *
+ * This is especially useful when explicitly matching devices that have
+ * vendor specific bDeviceClass values, but standards-compliant interfaces.
+ */
+#define USB_DEVICE_AND_INTERFACE_INFO(vend, prod, cl, sc, pr) \
+	.match_flags = USB_DEVICE_ID_MATCH_INT_INFO \
+		| USB_DEVICE_ID_MATCH_DEVICE, \
+	.idVendor = (vend), \
+	.idProduct = (prod), \
+	.bInterfaceClass = (cl), \
+	.bInterfaceSubClass = (sc), \
+	.bInterfaceProtocol = (pr)
+
+/**
+ * USB_VENDOR_AND_INTERFACE_INFO - describe a specific usb vendor with a class of usb interfaces
+ * @vend: the 16 bit USB Vendor ID
+ * @cl: bInterfaceClass value
+ * @sc: bInterfaceSubClass value
+ * @pr: bInterfaceProtocol value
+ *
+ * This macro is used to create a struct usb_device_id that matches a
+ * specific vendor with a specific class of interfaces.
+ *
+ * This is especially useful when explicitly matching devices that have
+ * vendor specific bDeviceClass values, but standards-compliant interfaces.
+ */
+#define USB_VENDOR_AND_INTERFACE_INFO(vend, cl, sc, pr) \
+	.match_flags = USB_DEVICE_ID_MATCH_INT_INFO \
+		| USB_DEVICE_ID_MATCH_VENDOR, \
+	.idVendor = (vend), \
+	.bInterfaceClass = (cl), \
+	.bInterfaceSubClass = (sc), \
+	.bInterfaceProtocol = (pr)
+
+/* ----------------------------------------------------------------------- */
+
+/* Stuff for dynamic usb ids */
+struct usb_dynids {
+	spinlock_t lock;
+	struct list_head list;
+};
+
+struct usb_dynid {
+	struct list_head node;
+	struct usb_device_id id;
+};
+
+extern ssize_t usb_store_new_id(struct usb_dynids *dynids,
+				struct device_driver *driver,
+				const char *buf, size_t count);
+
+extern ssize_t usb_show_dynids(struct usb_dynids *dynids, char *buf);
+
+/**
+ * struct usbdrv_wrap - wrapper for driver-model structure
+ * @driver: The driver-model core driver structure.
+ * @for_devices: Non-zero for device drivers, 0 for interface drivers.
+ */
+struct usbdrv_wrap {
+	struct device_driver driver;
+	int for_devices;
+};
+
+/**
+ * struct usb_driver - identifies USB interface driver to usbcore
+ * @name: The driver name should be unique among USB drivers,
+ *	and should normally be the same as the module name.
+ * @probe: Called to see if the driver is willing to manage a particular
+ *	interface on a device.  If it is, probe returns zero and uses
+ *	usb_set_intfdata() to associate driver-specific data with the
+ *	interface.  It may also use usb_set_interface() to specify the
+ *	appropriate altsetting.  If unwilling to manage the interface,
+ *	return -ENODEV; if genuine IO errors occurred, return an
+ *	appropriate negative errno value.
+ * @disconnect: Called when the interface is no longer accessible, usually
+ *	because its device has been (or is being) disconnected or the
+ *	driver module is being unloaded.
+ * @unlocked_ioctl: Used for drivers that want to talk to userspace through
+ *	the "usbfs" filesystem.  This lets devices provide ways to
+ *	expose information to user space regardless of where they
+ *	do (or don't) show up otherwise in the filesystem.
+ * @suspend: Called when the device is going to be suspended by the
+ *	system either from system sleep or runtime suspend context. The
+ *	return value will be ignored in system sleep context, so do NOT
+ *	try to continue using the device if suspend fails in this case.
+ *	Instead, let the resume or reset-resume routine recover from
+ *	the failure.
+ * @resume: Called when the device is being resumed by the system.
+ * @reset_resume: Called when the suspended device has been reset instead
+ *	of being resumed.
+ * @pre_reset: Called by usb_reset_device() when the device is about to be
+ *	reset.  This routine must not return until the driver has no active
+ *	URBs for the device, and no more URBs may be submitted until the
+ *	post_reset method is called.
+ * @post_reset: Called by usb_reset_device() after the device
+ *	has been reset
+ * @id_table: USB drivers use ID table to support hotplugging.
+ *	Export this with MODULE_DEVICE_TABLE(usb,...).  This must be set
+ *	or your driver's probe function will never get called.
+ * @dynids: used internally to hold the list of dynamically added device
+ *	ids for this driver.
+ * @drvwrap: Driver-model core structure wrapper.
+ * @no_dynamic_id: if set to 1, the USB core will not allow dynamic ids to be
+ *	added to this driver by preventing the sysfs file from being created.
+ * @supports_autosuspend: if set to 0, the USB core will not allow autosuspend
+ *	for interfaces bound to this driver.
+ * @soft_unbind: if set to 1, the USB core will not kill URBs and disable
+ *	endpoints before calling the driver's disconnect method.
+ * @disable_hub_initiated_lpm: if set to 0, the USB core will not allow hubs
+ *	to initiate lower power link state transitions when an idle timeout
+ *	occurs.  Device-initiated USB 3.0 link PM will still be allowed.
+ *
+ * USB interface drivers must provide a name, probe() and disconnect()
+ * methods, and an id_table.  Other driver fields are optional.
+ *
+ * The id_table is used in hotplugging.  It holds a set of descriptors,
+ * and specialized data may be associated with each entry.  That table
+ * is used by both user and kernel mode hotplugging support.
+ *
+ * The probe() and disconnect() methods are called in a context where
+ * they can sleep, but they should avoid abusing the privilege.  Most
+ * work to connect to a device should be done when the device is opened,
+ * and undone at the last close.  The disconnect code needs to address
+ * concurrency issues with respect to open() and close() methods, as
+ * well as forcing all pending I/O requests to complete (by unlinking
+ * them as necessary, and blocking until the unlinks complete).
+ */
+struct usb_driver {
+	const char *name;
+
+	int (*probe) (struct usb_interface *intf,
+		      const struct usb_device_id *id);
+
+	void (*disconnect) (struct usb_interface *intf);
+
+	int (*unlocked_ioctl) (struct usb_interface *intf, unsigned int code,
+			void *buf);
+
+	int (*suspend) (struct usb_interface *intf, pm_message_t message);
+	int (*resume) (struct usb_interface *intf);
+	int (*reset_resume)(struct usb_interface *intf);
+
+	int (*pre_reset)(struct usb_interface *intf);
+	int (*post_reset)(struct usb_interface *intf);
+
+	const struct usb_device_id *id_table;
+
+	struct usb_dynids dynids;
+	struct usbdrv_wrap drvwrap;
+	unsigned int no_dynamic_id:1;
+	unsigned int supports_autosuspend:1;
+	unsigned int disable_hub_initiated_lpm:1;
+	unsigned int soft_unbind:1;
+};
+#define	to_usb_driver(d) container_of(d, struct usb_driver, drvwrap.driver)
+
+/**
+ * struct usb_device_driver - identifies USB device driver to usbcore
+ * @name: The driver name should be unique among USB drivers,
+ *	and should normally be the same as the module name.
+ * @probe: Called to see if the driver is willing to manage a particular
+ *	device.  If it is, probe returns zero and uses dev_set_drvdata()
+ *	to associate driver-specific data with the device.  If unwilling
+ *	to manage the device, return a negative errno value.
+ * @disconnect: Called when the device is no longer accessible, usually
+ *	because it has been (or is being) disconnected or the driver's
+ *	module is being unloaded.
+ * @suspend: Called when the device is going to be suspended by the system.
+ * @resume: Called when the device is being resumed by the system.
+ * @drvwrap: Driver-model core structure wrapper.
+ * @supports_autosuspend: if set to 0, the USB core will not allow autosuspend
+ *	for devices bound to this driver.
+ *
+ * USB drivers must provide all the fields listed above except drvwrap.
+ */
+struct usb_device_driver {
+	const char *name;
+
+	int (*probe) (struct usb_device *udev);
+	void (*disconnect) (struct usb_device *udev);
+
+	int (*suspend) (struct usb_device *udev, pm_message_t message);
+	int (*resume) (struct usb_device *udev, pm_message_t message);
+	struct usbdrv_wrap drvwrap;
+	unsigned int supports_autosuspend:1;
+};
+#define	to_usb_device_driver(d) container_of(d, struct usb_device_driver, \
+		drvwrap.driver)
+
+extern struct bus_type usb_bus_type;
+
+/**
+ * struct usb_class_driver - identifies a USB driver that wants to use the USB major number
+ * @name: the usb class device name for this driver.  Will show up in sysfs.
+ * @devnode: Callback to provide a naming hint for a possible
+ *	device node to create.
+ * @fops: pointer to the struct file_operations of this driver.
+ * @minor_base: the start of the minor range for this driver.
+ *
+ * This structure is used for the usb_register_dev() and
+ * usb_unregister_dev() functions, to consolidate a number of the
+ * parameters used for them.
+ */
+struct usb_class_driver {
+	char *name;
+	char *(*devnode)(struct device *dev, umode_t *mode);
+	const struct file_operations *fops;
+	int minor_base;
+};
+
+/*
+ * use these in module_init()/module_exit()
+ * and don't forget MODULE_DEVICE_TABLE(usb, ...)
+ */
+extern int usb_register_driver(struct usb_driver *, struct module *,
+			       const char *);
+
+/* use a define to avoid include chaining to get THIS_MODULE & friends */
+#define usb_register(driver) \
+	usb_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)
+
+extern void usb_deregister(struct usb_driver *);
+
+/**
+ * module_usb_driver() - Helper macro for registering a USB driver
+ * @__usb_driver: usb_driver struct
+ *
+ * Helper macro for USB drivers which do not do anything special in module
+ * init/exit. This eliminates a lot of boilerplate. Each module may only
+ * use this macro once, and calling it replaces module_init() and module_exit()
+ */
+#define module_usb_driver(__usb_driver) \
+	module_driver(__usb_driver, usb_register, \
+		       usb_deregister)
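+
+/*
+ * Example (illustrative skeleton; foo_probe, foo_disconnect and
+ * foo_id_table are assumed to be defined as in the examples above):
+ *
+ *	static struct usb_driver foo_driver = {
+ *		.name		= "foo",
+ *		.probe		= foo_probe,
+ *		.disconnect	= foo_disconnect,
+ *		.id_table	= foo_id_table,
+ *	};
+ *	module_usb_driver(foo_driver);
+ */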
+
+extern int usb_register_device_driver(struct usb_device_driver *,
+			struct module *);
+extern void usb_deregister_device_driver(struct usb_device_driver *);
+
+extern int usb_register_dev(struct usb_interface *intf,
+			    struct usb_class_driver *class_driver);
+extern void usb_deregister_dev(struct usb_interface *intf,
+			       struct usb_class_driver *class_driver);
+
+extern int usb_disabled(void);
+
+/* ----------------------------------------------------------------------- */
+
+/*
+ * URB support, for asynchronous request completions
+ */
+
+/*
+ * urb->transfer_flags:
+ *
+ * Note: URB_DIR_IN/OUT is automatically set in usb_submit_urb().
+ */
+#define URB_SHORT_NOT_OK	0x0001	/* report short reads as errors */
+#define URB_ISO_ASAP		0x0002	/* iso-only; use the first unexpired
+					 * slot in the schedule */
+#define URB_NO_TRANSFER_DMA_MAP	0x0004	/* urb->transfer_dma valid on submit */
+#define URB_NO_FSBR		0x0020	/* UHCI-specific */
+#define URB_ZERO_PACKET		0x0040	/* Finish bulk OUT with short packet */
+#define URB_NO_INTERRUPT	0x0080	/* HINT: no non-error interrupt
+					 * needed */
+#define URB_FREE_BUFFER		0x0100	/* Free transfer buffer with the URB */
+
+/* The following flags are used internally by usbcore and HCDs */
+#define URB_DIR_IN		0x0200	/* Transfer from device to host */
+#define URB_DIR_OUT		0
+#define URB_DIR_MASK		URB_DIR_IN
+
+#define URB_DMA_MAP_SINGLE	0x00010000	/* Non-scatter-gather mapping */
+#define URB_DMA_MAP_PAGE	0x00020000	/* HCD-unsupported S-G */
+#define URB_DMA_MAP_SG		0x00040000	/* HCD-supported S-G */
+#define URB_MAP_LOCAL		0x00080000	/* HCD-local-memory mapping */
+#define URB_SETUP_MAP_SINGLE	0x00100000	/* Setup packet DMA mapped */
+#define URB_SETUP_MAP_LOCAL	0x00200000	/* HCD-local setup packet */
+#define URB_DMA_SG_COMBINED	0x00400000	/* S-G entries were combined */
+#define URB_ALIGNED_TEMP_BUFFER	0x00800000	/* Temp buffer was alloc'd */
+
+struct usb_iso_packet_descriptor {
+	unsigned int offset;
+	unsigned int length;		/* expected length */
+	unsigned int actual_length;
+	int status;
+};
+
+struct urb;
+
+struct usb_anchor {
+	struct list_head urb_list;
+	wait_queue_head_t wait;
+	spinlock_t lock;
+	unsigned int poisoned:1;
+};
+
+static inline void init_usb_anchor(struct usb_anchor *anchor)
+{
+	INIT_LIST_HEAD(&anchor->urb_list);
+	init_waitqueue_head(&anchor->wait);
+	spin_lock_init(&anchor->lock);
+}
+
+typedef void (*usb_complete_t)(struct urb *);
+
+/**
+ * struct urb - USB Request Block
+ * @urb_list: For use by current owner of the URB.
+ * @anchor_list: membership in the list of an anchor
+ * @anchor: to anchor URBs to a common mooring
+ * @ep: Points to the endpoint's data structure.  Will eventually
+ *	replace @pipe.
+ * @pipe: Holds endpoint number, direction, type, and more.
+ *	Create these values with the eight macros available;
+ *	usb_{snd,rcv}TYPEpipe(dev,endpoint), where the TYPE is "ctrl"
+ *	(control), "bulk", "int" (interrupt), or "iso" (isochronous).
+ *	For example usb_sndbulkpipe() or usb_rcvintpipe().  Endpoint
+ *	numbers range from zero to fifteen.  Note that "in" endpoint two
+ *	is a different endpoint (and pipe) from "out" endpoint two.
+ *	The current configuration controls the existence, type, and
+ *	maximum packet size of any given endpoint.
+ * @stream_id: the endpoint's stream ID for bulk streams
+ * @dev: Identifies the USB device to perform the request.
+ * @status: This is read in non-iso completion functions to get the
+ *	status of the particular request.  ISO requests only use it
+ *	to tell whether the URB was unlinked; detailed status for
+ *	each frame is in the fields of the iso_frame-desc.
+ * @transfer_flags: A variety of flags may be used to affect how URB
+ *	submission, unlinking, or operation are handled.  Different
+ *	kinds of URB can use different flags.
+ * @transfer_buffer:  This identifies the buffer to (or from) which the I/O
+ *	request will be performed unless URB_NO_TRANSFER_DMA_MAP is set
+ *	(however, do not leave garbage in transfer_buffer even then).
+ *	This buffer must be suitable for DMA; allocate it with
+ *	kmalloc() or equivalent.  For transfers to "in" endpoints, contents
+ *	of this buffer will be modified.  This buffer is used for the data
+ *	stage of control transfers.
+ * @transfer_dma: When transfer_flags includes URB_NO_TRANSFER_DMA_MAP,
+ *	the device driver is saying that it provided this DMA address,
+ *	which the host controller driver should use in preference to the
+ *	transfer_buffer.
+ * @sg: scatter gather buffer list
+ * @num_mapped_sgs: (internal) number of mapped sg entries
+ * @num_sgs: number of entries in the sg list
+ * @transfer_buffer_length: How big is transfer_buffer.  The transfer may
+ *	be broken up into chunks according to the current maximum packet
+ *	size for the endpoint, which is a function of the configuration
+ *	and is encoded in the pipe.  When the length is zero, neither
+ *	transfer_buffer nor transfer_dma is used.
+ * @actual_length: This is read in non-iso completion functions, and
+ *	it tells how many bytes (out of transfer_buffer_length) were
+ *	transferred.  It will normally be the same as requested, unless
+ *	either an error was reported or a short read was performed.
+ *	The URB_SHORT_NOT_OK transfer flag may be used to make such
+ *	short reads be reported as errors.
+ * @setup_packet: Only used for control transfers, this points to eight bytes
+ *	of setup data.  Control transfers always start by sending this data
+ *	to the device.  Then transfer_buffer is read or written, if needed.
+ * @setup_dma: DMA pointer for the setup packet.  The caller must not use
+ *	this field; setup_packet must point to a valid buffer.
+ * @start_frame: Returns the initial frame for isochronous transfers.
+ * @number_of_packets: Lists the number of ISO transfer buffers.
+ * @interval: Specifies the polling interval for interrupt or isochronous
+ *	transfers.  The units are frames (milliseconds) for full and low
+ *	speed devices, and microframes (1/8 millisecond) for highspeed
+ *	and SuperSpeed devices.
+ * @error_count: Returns the number of ISO transfers that reported errors.
+ * @context: For use in completion functions.  This normally points to
+ *	request-specific driver context.
+ * @complete: Completion handler. This URB is passed as the parameter to the
+ *	completion function.  The completion function may then do what
+ *	it likes with the URB, including resubmitting or freeing it.
+ * @iso_frame_desc: Used to provide arrays of ISO transfer buffers and to
+ *	collect the transfer status for each buffer.
+ *
+ * This structure identifies USB transfer requests.  URBs must be allocated by
+ * calling usb_alloc_urb() and freed with a call to usb_free_urb().
+ * Initialization may be done using various usb_fill_*_urb() functions.  URBs
+ * are submitted using usb_submit_urb(), and pending requests may be canceled
+ * using usb_unlink_urb() or usb_kill_urb().
+ *
+ * Data Transfer Buffers:
+ *
+ * Normally drivers provide I/O buffers allocated with kmalloc() or otherwise
+ * taken from the general page pool.  That is provided by transfer_buffer
+ * (control requests also use setup_packet), and host controller drivers
+ * perform a dma mapping (and unmapping) for each buffer transferred.  Those
+ * mapping operations can be expensive on some platforms (perhaps using a dma
+ * bounce buffer or talking to an IOMMU),
+ * although they're cheap on commodity x86 and ppc hardware.
+ *
+ * Alternatively, drivers may pass the URB_NO_TRANSFER_DMA_MAP transfer flag,
+ * which tells the host controller driver that no such mapping is needed for
+ * the transfer_buffer since
+ * the device driver is DMA-aware.  For example, a device driver might
+ * allocate a DMA buffer with usb_alloc_coherent() or call usb_buffer_map().
+ * When this transfer flag is provided, host controller drivers will
+ * attempt to use the dma address found in the transfer_dma
+ * field rather than determining a dma address themselves.
+ *
+ * Note that transfer_buffer must still be set if the controller
+ * does not support DMA (as indicated by bus.uses_dma) and when talking
+ * to the root hub.  If you have to transfer between a highmem zone and the
+ * device on such a controller, create a bounce buffer or bail out with an
+ * error.
+ * If transfer_buffer cannot be set (is in highmem) and the controller is DMA
+ * capable, assign NULL to it, so that usbmon knows not to use the value.
+ * The setup_packet must always be set, so it cannot be located in highmem.
+ *
+ * Initialization:
+ *
+ * All URBs submitted must initialize the dev, pipe, transfer_flags (may be
+ * zero), and complete fields.  All URBs must also initialize
+ * transfer_buffer and transfer_buffer_length.  They may provide the
+ * URB_SHORT_NOT_OK transfer flag, indicating that short reads are
+ * to be treated as errors; that flag is invalid for write requests.
+ *
+ * Bulk URBs may
+ * use the URB_ZERO_PACKET transfer flag, indicating that bulk OUT transfers
+ * should always terminate with a short packet, even if it means adding an
+ * extra zero length packet.
+ *
+ * Control URBs must provide a valid pointer in the setup_packet field.
+ * Unlike the transfer_buffer, the setup_packet may not be mapped for DMA
+ * beforehand.
+ *
+ * Interrupt URBs must provide an interval, saying how often (in milliseconds
+ * or, for highspeed devices, 125 microsecond units)
+ * to poll for transfers.  After the URB has been submitted, the interval
+ * field reflects how the transfer was actually scheduled.
+ * The polling interval may be more frequent than requested.
+ * For example, some controllers have a maximum interval of 32 milliseconds,
+ * while others support intervals of up to 1024 milliseconds.
+ * Isochronous URBs also have transfer intervals.  (Note that for isochronous
+ * endpoints, as well as high speed interrupt endpoints, the encoding of
+ * the transfer interval in the endpoint descriptor is logarithmic.
+ * Device drivers must convert that value to linear units themselves.)
+ *
+ * If an isochronous endpoint queue isn't already running, the host
+ * controller will schedule a new URB to start as soon as bandwidth
+ * utilization allows.  If the queue is running then a new URB will be
+ * scheduled to start in the first transfer slot following the end of the
+ * preceding URB, if that slot has not already expired.  If the slot has
+ * expired (which can happen when IRQ delivery is delayed for a long time),
+ * the scheduling behavior depends on the URB_ISO_ASAP flag.  If the flag
+ * is clear then the URB will be scheduled to start in the expired slot,
+ * implying that some of its packets will not be transferred; if the flag
+ * is set then the URB will be scheduled in the first unexpired slot,
+ * breaking the queue's synchronization.  Upon URB completion, the
+ * start_frame field will be set to the (micro)frame number in which the
+ * transfer was scheduled.  Ranges for frame counter values are HC-specific
+ * and can go from as low as 256 to as high as 65536 frames.
+ *
+ * Isochronous URBs have a different data transfer model, in part because
+ * the quality of service is only "best effort".  Callers provide specially
+ * allocated URBs, with number_of_packets worth of iso_frame_desc structures
+ * at the end.  Each such packet is an individual ISO transfer.  Isochronous
+ * URBs are normally queued, submitted by drivers to arrange that
+ * transfers are at least double buffered, and then explicitly resubmitted
+ * in completion handlers, so
+ * that data (such as audio or video) streams at as constant a rate as the
+ * host controller scheduler can support.
+ *
+ * Completion Callbacks:
+ *
+ * The completion callback is made in_interrupt(), and one of the first
+ * things that a completion handler should do is check the status field.
+ * The status field is provided for all URBs.  It is used to report
+ * unlinked URBs, and status for all non-ISO transfers.  It should not
+ * be examined before the URB is returned to the completion handler.
+ *
+ * The context field is normally used to link URBs back to the relevant
+ * driver or request state.
+ *
+ * When the completion callback is invoked for non-isochronous URBs, the
+ * actual_length field tells how many bytes were transferred.  This field
+ * is updated even when the URB terminated with an error or was unlinked.
+ *
+ * ISO transfer status is reported in the status and actual_length fields
+ * of the iso_frame_desc array, and the number of errors is reported in
+ * error_count.  Completion callbacks for ISO transfers will normally
+ * (re)submit URBs to ensure a constant transfer rate.
+ *
+ * Note that even fields marked "public" should not be touched by the driver
+ * when the urb is owned by the hcd, that is, since the call to
+ * usb_submit_urb() till the entry into the completion routine.
+ */
+struct urb {
+	/* private: usb core and host controller only fields in the urb */
+	struct kref kref;		/* reference count of the URB */
+	void *hcpriv;			/* private data for host controller */
+	atomic_t use_count;		/* concurrent submissions counter */
+	atomic_t reject;		/* submissions will fail */
+	int unlinked;			/* unlink error code */
+
+	/* public: documented fields in the urb that can be used by drivers */
+	struct list_head urb_list;	/* list head for use by the urb's
+					 * current owner */
+	struct list_head anchor_list;	/* the URB may be anchored */
+	struct usb_anchor *anchor;
+	struct usb_device *dev;		/* (in) pointer to associated device */
+	struct usb_host_endpoint *ep;	/* (internal) pointer to endpoint */
+	unsigned int pipe;		/* (in) pipe information */
+	unsigned int stream_id;		/* (in) stream ID */
+	int status;			/* (return) non-ISO status */
+	unsigned int transfer_flags;	/* (in) URB_SHORT_NOT_OK | ...*/
+	void *transfer_buffer;		/* (in) associated data buffer */
+	dma_addr_t transfer_dma;	/* (in) dma addr for transfer_buffer */
+	struct scatterlist *sg;		/* (in) scatter gather buffer list */
+	int num_mapped_sgs;		/* (internal) mapped sg entries */
+	int num_sgs;			/* (in) number of entries in the sg list */
+	u32 transfer_buffer_length;	/* (in) data buffer length */
+	u32 actual_length;		/* (return) actual transfer length */
+	unsigned char *setup_packet;	/* (in) setup packet (control only) */
+	dma_addr_t setup_dma;		/* (in) dma addr for setup_packet */
+	int start_frame;		/* (modify) start frame (ISO) */
+	int number_of_packets;		/* (in) number of ISO packets */
+	int interval;			/* (modify) transfer interval
+					 * (INT/ISO) */
+	int error_count;		/* (return) number of ISO errors */
+	void *context;			/* (in) context for completion */
+	usb_complete_t complete;	/* (in) completion routine */
+	struct usb_iso_packet_descriptor iso_frame_desc[0];
+					/* (in) ISO ONLY */
+};
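
For illustration, a minimal sketch of the allocate/fill/submit life cycle
described above; endpoint 1 and the caller-supplied buffer are assumptions,
and the fill helper used here is defined later in this file:

static void sketch_complete(struct urb *urb)
{
	/* Check urb->status first; then urb->actual_length bytes are valid. */
}

static int sketch_submit(struct usb_device *udev, void *buf, int len)
{
	struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);	/* no ISO packets */
	int ret;

	if (!urb)
		return -ENOMEM;
	usb_fill_bulk_urb(urb, udev, usb_rcvbulkpipe(udev, 1),
			  buf, len, sketch_complete, NULL);
	ret = usb_submit_urb(urb, GFP_KERNEL);
	usb_free_urb(urb);	/* drop our reference; the core holds its own */
	return ret;
}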
+
+/* ----------------------------------------------------------------------- */
+
+/**
+ * usb_fill_control_urb - initializes a control urb
+ * @urb: pointer to the urb to initialize.
+ * @dev: pointer to the struct usb_device for this urb.
+ * @pipe: the endpoint pipe
+ * @setup_packet: pointer to the setup_packet buffer
+ * @transfer_buffer: pointer to the transfer buffer
+ * @buffer_length: length of the transfer buffer
+ * @complete_fn: pointer to the usb_complete_t function
+ * @context: what to set the urb context to.
+ *
+ * Initializes a control urb with the proper information needed to submit
+ * it to a device.
+ */
+static inline void usb_fill_control_urb(struct urb *urb,
+					struct usb_device *dev,
+					unsigned int pipe,
+					unsigned char *setup_packet,
+					void *transfer_buffer,
+					int buffer_length,
+					usb_complete_t complete_fn,
+					void *context)
+{
+	urb->dev = dev;
+	urb->pipe = pipe;
+	urb->setup_packet = setup_packet;
+	urb->transfer_buffer = transfer_buffer;
+	urb->transfer_buffer_length = buffer_length;
+	urb->complete = complete_fn;
+	urb->context = context;
+}
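
As a usage sketch (illustrative only), filling a control URB for a standard
GET_STATUS request; struct usb_ctrlrequest and the USB_* request constants
are assumed to come from ch9.h:

static void sketch_fill_get_status(struct urb *urb, struct usb_device *udev,
				   struct usb_ctrlrequest *setup, void *buf,
				   usb_complete_t complete_fn)
{
	setup->bRequestType = USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_DEVICE;
	setup->bRequest = USB_REQ_GET_STATUS;
	setup->wValue = 0;
	setup->wIndex = 0;
	setup->wLength = cpu_to_le16(2);	/* device status is two bytes */
	usb_fill_control_urb(urb, udev, usb_rcvctrlpipe(udev, 0),
			     (unsigned char *)setup, buf, 2,
			     complete_fn, NULL);
}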
+
+/**
+ * usb_fill_bulk_urb - helper to initialize a bulk urb
+ * @urb: pointer to the urb to initialize.
+ * @dev: pointer to the struct usb_device for this urb.
+ * @pipe: the endpoint pipe
+ * @transfer_buffer: pointer to the transfer buffer
+ * @buffer_length: length of the transfer buffer
+ * @complete_fn: pointer to the usb_complete_t function
+ * @context: what to set the urb context to.
+ *
+ * Initializes a bulk urb with the proper information needed to submit it
+ * to a device.
+ */
+static inline void usb_fill_bulk_urb(struct urb *urb,
+				     struct usb_device *dev,
+				     unsigned int pipe,
+				     void *transfer_buffer,
+				     int buffer_length,
+				     usb_complete_t complete_fn,
+				     void *context)
+{
+	urb->dev = dev;
+	urb->pipe = pipe;
+	urb->transfer_buffer = transfer_buffer;
+	urb->transfer_buffer_length = buffer_length;
+	urb->complete = complete_fn;
+	urb->context = context;
+}
+
+/**
+ * usb_fill_int_urb - helper to initialize an interrupt urb
+ * @urb: pointer to the urb to initialize.
+ * @dev: pointer to the struct usb_device for this urb.
+ * @pipe: the endpoint pipe
+ * @transfer_buffer: pointer to the transfer buffer
+ * @buffer_length: length of the transfer buffer
+ * @complete_fn: pointer to the usb_complete_t function
+ * @context: what to set the urb context to.
+ * @interval: what to set the urb interval to, encoded like
+ *	the endpoint descriptor's bInterval value.
+ *
+ * Initializes an interrupt urb with the proper information needed to submit
+ * it to a device.
+ *
+ * Note that High Speed and SuperSpeed interrupt endpoints use a logarithmic
+ * encoding of the endpoint interval, and express polling intervals in
+ * microframes (eight per millisecond) rather than in frames (one per
+ * millisecond).
+ *
+ * Wireless USB also uses the logarithmic encoding, but specifies it in units of
+ * 128us instead of 125us.  For Wireless USB devices, the interval is passed
+ * through to the host controller, rather than being translated into microframe
+ * units.
+ */
+static inline void usb_fill_int_urb(struct urb *urb,
+				    struct usb_device *dev,
+				    unsigned int pipe,
+				    void *transfer_buffer,
+				    int buffer_length,
+				    usb_complete_t complete_fn,
+				    void *context,
+				    int interval)
+{
+	urb->dev = dev;
+	urb->pipe = pipe;
+	urb->transfer_buffer = transfer_buffer;
+	urb->transfer_buffer_length = buffer_length;
+	urb->complete = complete_fn;
+	urb->context = context;
+	if (dev->speed == USB_SPEED_HIGH || dev->speed == USB_SPEED_SUPER)
+		urb->interval = 1 << (interval - 1);
+	else
+		urb->interval = interval;
+	urb->start_frame = -1;
+}
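
A sketch of the interval conversion above, assuming endpoint 1 is an
interrupt-IN endpoint; on a high speed device a descriptor bInterval of 4
becomes urb->interval = 1 << (4 - 1) = 8 microframes, i.e. a 1 ms polling
period:

static void sketch_fill_int(struct urb *urb, struct usb_device *udev,
			    void *buf, int len, usb_complete_t complete_fn)
{
	/*
	 * Pass bInterval straight from the endpoint descriptor; the
	 * helper above linearizes the logarithmic encoding for high
	 * and SuperSpeed devices.
	 */
	usb_fill_int_urb(urb, udev, usb_rcvintpipe(udev, 1),
			 buf, len, complete_fn, NULL, 4);
}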
+
+extern void usb_init_urb(struct urb *urb);
+extern struct urb *usb_alloc_urb(int iso_packets, gfp_t mem_flags);
+extern void usb_free_urb(struct urb *urb);
+#define usb_put_urb usb_free_urb
+extern struct urb *usb_get_urb(struct urb *urb);
+extern int usb_submit_urb(struct urb *urb, gfp_t mem_flags);
+extern int usb_unlink_urb(struct urb *urb);
+extern void usb_kill_urb(struct urb *urb);
+extern void usb_poison_urb(struct urb *urb);
+extern void usb_unpoison_urb(struct urb *urb);
+extern void usb_block_urb(struct urb *urb);
+extern void usb_kill_anchored_urbs(struct usb_anchor *anchor);
+extern void usb_poison_anchored_urbs(struct usb_anchor *anchor);
+extern void usb_unpoison_anchored_urbs(struct usb_anchor *anchor);
+extern void usb_unlink_anchored_urbs(struct usb_anchor *anchor);
+extern void usb_anchor_urb(struct urb *urb, struct usb_anchor *anchor);
+extern void usb_unanchor_urb(struct urb *urb);
+extern int usb_wait_anchor_empty_timeout(struct usb_anchor *anchor,
+					 unsigned int timeout);
+extern struct urb *usb_get_from_anchor(struct usb_anchor *anchor);
+extern void usb_scuttle_anchored_urbs(struct usb_anchor *anchor);
+extern int usb_anchor_empty(struct usb_anchor *anchor);
+
+#define usb_unblock_urb	usb_unpoison_urb
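
A sketch of the anchor helpers declared above, assuming init_usb_anchor()
has already initialized the anchor:

static void sketch_submit_anchored(struct usb_anchor *anchor, struct urb *urb)
{
	usb_anchor_urb(urb, anchor);		/* track the in-flight URB */
	if (usb_submit_urb(urb, GFP_KERNEL))
		usb_unanchor_urb(urb);		/* submission never happened */
}

/* Later, e.g. on driver teardown, usb_kill_anchored_urbs(anchor) cancels
 * every URB still tracked by the anchor. */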
+
+/**
+ * usb_urb_dir_in - check if an URB describes an IN transfer
+ * @urb: URB to be checked
+ *
+ * Returns 1 if @urb describes an IN transfer (device-to-host),
+ * otherwise 0.
+ */
+static inline int usb_urb_dir_in(struct urb *urb)
+{
+	return (urb->transfer_flags & URB_DIR_MASK) == URB_DIR_IN;
+}
+
+/**
+ * usb_urb_dir_out - check if an URB describes an OUT transfer
+ * @urb: URB to be checked
+ *
+ * Returns 1 if @urb describes an OUT transfer (host-to-device),
+ * otherwise 0.
+ */
+static inline int usb_urb_dir_out(struct urb *urb)
+{
+	return (urb->transfer_flags & URB_DIR_MASK) == URB_DIR_OUT;
+}
+
+void *usb_alloc_coherent(struct usb_device *dev, size_t size,
+	gfp_t mem_flags, dma_addr_t *dma);
+void usb_free_coherent(struct usb_device *dev, size_t size,
+	void *addr, dma_addr_t dma);
+
+#if 0
+struct urb *usb_buffer_map(struct urb *urb);
+void usb_buffer_dmasync(struct urb *urb);
+void usb_buffer_unmap(struct urb *urb);
+#endif
+
+struct scatterlist;
+int usb_buffer_map_sg(const struct usb_device *dev, int is_in,
+		      struct scatterlist *sg, int nents);
+#if 0
+void usb_buffer_dmasync_sg(const struct usb_device *dev, int is_in,
+			   struct scatterlist *sg, int n_hw_ents);
+#endif
+void usb_buffer_unmap_sg(const struct usb_device *dev, int is_in,
+			 struct scatterlist *sg, int n_hw_ents);
+
+/*-------------------------------------------------------------------*
+ *                         SYNCHRONOUS CALL SUPPORT                  *
+ *-------------------------------------------------------------------*/
+
+extern int usb_control_msg(struct usb_device *dev, unsigned int pipe,
+	__u8 request, __u8 requesttype, __u16 value, __u16 index,
+	void *data, __u16 size, int timeout);
+extern int usb_interrupt_msg(struct usb_device *usb_dev, unsigned int pipe,
+	void *data, int len, int *actual_length, int timeout);
+extern int usb_bulk_msg(struct usb_device *usb_dev, unsigned int pipe,
+	void *data, int len, int *actual_length,
+	int timeout);
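
A sketch of the synchronous API, reading the device descriptor over
endpoint 0; the request constants are assumed from ch9.h, and
USB_CTRL_GET_TIMEOUT is defined a few lines below:

static int sketch_get_device_desc(struct usb_device *udev,
				  struct usb_device_descriptor *desc)
{
	return usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
			       USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
			       USB_DT_DEVICE << 8, 0,
			       desc, sizeof(*desc), USB_CTRL_GET_TIMEOUT);
}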
+
+/* wrappers around usb_control_msg() for the most common standard requests */
+extern int usb_get_descriptor(struct usb_device *dev, unsigned char desctype,
+	unsigned char descindex, void *buf, int size);
+extern int usb_get_status(struct usb_device *dev,
+	int type, int target, void *data);
+extern int usb_string(struct usb_device *dev, int index,
+	char *buf, size_t size);
+
+/* wrappers that also update important state inside usbcore */
+extern int usb_clear_halt(struct usb_device *dev, int pipe);
+extern int usb_reset_configuration(struct usb_device *dev);
+extern int usb_set_interface(struct usb_device *dev, int ifnum, int alternate);
+extern void usb_reset_endpoint(struct usb_device *dev, unsigned int epaddr);
+
+/* this request isn't really synchronous, but it belongs with the others */
+extern int usb_driver_set_configuration(struct usb_device *udev, int config);
+
+/*
+ * timeouts, in milliseconds, used for sending/receiving control messages
+ * they typically complete within a few frames (msec) after they're issued
+ * USB identifies 5 second timeouts, maybe more in a few cases, and a few
+ * slow devices (like some MGE Ellipse UPSes) actually push that limit.
+ */
+#define USB_CTRL_GET_TIMEOUT	5000
+#define USB_CTRL_SET_TIMEOUT	5000
+
+
+/**
+ * struct usb_sg_request - support for scatter/gather I/O
+ * @status: zero indicates success, else negative errno
+ * @bytes: counts bytes transferred.
+ *
+ * These requests are initialized using usb_sg_init(), and then are used
+ * as request handles passed to usb_sg_wait() or usb_sg_cancel().  Most
+ * members of the request object aren't for driver access.
+ *
+ * The status and bytecount values are valid only after usb_sg_wait()
+ * returns.  If the status is zero, then the bytecount matches the total
+ * from the request.
+ *
+ * After an error completion, drivers may need to clear a halt condition
+ * on the endpoint.
+ */
+struct usb_sg_request {
+	int			status;
+	size_t			bytes;
+
+	/* private:
+	 * members below are private to usbcore,
+	 * and are not provided for driver access!
+	 */
+	spinlock_t		lock;
+
+	struct usb_device	*dev;
+	int			pipe;
+
+	int			entries;
+	struct urb		**urbs;
+
+	int			count;
+	struct completion	complete;
+};
+
+int usb_sg_init(
+	struct usb_sg_request	*io,
+	struct usb_device	*dev,
+	unsigned		pipe,
+	unsigned		period,
+	struct scatterlist	*sg,
+	int			nents,
+	size_t			length,
+	gfp_t			mem_flags
+);
+void usb_sg_cancel(struct usb_sg_request *io);
+void usb_sg_wait(struct usb_sg_request *io);
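
A sketch of the scatter/gather helpers, assuming sg already describes nents
buffers destined for a bulk-OUT endpoint (endpoint 2 here is arbitrary):

static int sketch_sg_write(struct usb_device *udev, struct scatterlist *sg,
			   int nents, size_t total)
{
	struct usb_sg_request io;
	int ret;

	ret = usb_sg_init(&io, udev, usb_sndbulkpipe(udev, 2), 0,
			  sg, nents, total, GFP_KERNEL);
	if (ret)
		return ret;
	usb_sg_wait(&io);	/* blocks until the transfer completes */
	return io.status;	/* zero on success, negative errno on error */
}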
+
+
+/* ----------------------------------------------------------------------- */
+
+/*
+ * For various legacy reasons, Linux has a small cookie that's paired with
+ * a struct usb_device to identify an endpoint queue.  Queue characteristics
+ * are defined by the endpoint's descriptor.  This cookie is called a "pipe",
+ * an unsigned int encoded as:
+ *
+ *  - direction:	bit 7		(0 = Host-to-Device [Out],
+ *					 1 = Device-to-Host [In] ...
+ *					like endpoint bEndpointAddress)
+ *  - device address:	bits 8-14       ... bit positions known to uhci-hcd
+ *  - endpoint:		bits 15-18      ... bit positions known to uhci-hcd
+ *  - pipe type:	bits 30-31	(00 = isochronous, 01 = interrupt,
+ *					 10 = control, 11 = bulk)
+ *
+ * Given the device address and endpoint descriptor, pipes are redundant.
+ */
+
+/* NOTE:  these are not the standard USB_ENDPOINT_XFER_* values!! */
+/* (yet ... they're the values used by usbfs) */
+#define PIPE_ISOCHRONOUS		0
+#define PIPE_INTERRUPT			1
+#define PIPE_CONTROL			2
+#define PIPE_BULK			3
+
+#define usb_pipein(pipe)	((pipe) & USB_DIR_IN)
+#define usb_pipeout(pipe)	(!usb_pipein(pipe))
+
+#define usb_pipedevice(pipe)	(((pipe) >> 8) & 0x7f)
+#define usb_pipeendpoint(pipe)	(((pipe) >> 15) & 0xf)
+
+#define usb_pipetype(pipe)	(((pipe) >> 30) & 3)
+#define usb_pipeisoc(pipe)	(usb_pipetype((pipe)) == PIPE_ISOCHRONOUS)
+#define usb_pipeint(pipe)	(usb_pipetype((pipe)) == PIPE_INTERRUPT)
+#define usb_pipecontrol(pipe)	(usb_pipetype((pipe)) == PIPE_CONTROL)
+#define usb_pipebulk(pipe)	(usb_pipetype((pipe)) == PIPE_BULK)
+
+static inline unsigned int __create_pipe(struct usb_device *dev,
+		unsigned int endpoint)
+{
+	return (dev->devnum << 8) | (endpoint << 15);
+}
+
+/* Create various pipes... */
+#define usb_sndctrlpipe(dev, endpoint)	\
+	((PIPE_CONTROL << 30) | __create_pipe(dev, endpoint))
+#define usb_rcvctrlpipe(dev, endpoint)	\
+	((PIPE_CONTROL << 30) | __create_pipe(dev, endpoint) | USB_DIR_IN)
+#define usb_sndisocpipe(dev, endpoint)	\
+	((PIPE_ISOCHRONOUS << 30) | __create_pipe(dev, endpoint))
+#define usb_rcvisocpipe(dev, endpoint)	\
+	((PIPE_ISOCHRONOUS << 30) | __create_pipe(dev, endpoint) | USB_DIR_IN)
+#define usb_sndbulkpipe(dev, endpoint)	\
+	((PIPE_BULK << 30) | __create_pipe(dev, endpoint))
+#define usb_rcvbulkpipe(dev, endpoint)	\
+	((PIPE_BULK << 30) | __create_pipe(dev, endpoint) | USB_DIR_IN)
+#define usb_sndintpipe(dev, endpoint)	\
+	((PIPE_INTERRUPT << 30) | __create_pipe(dev, endpoint))
+#define usb_rcvintpipe(dev, endpoint)	\
+	((PIPE_INTERRUPT << 30) | __create_pipe(dev, endpoint) | USB_DIR_IN)
+
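To illustrate the encoding, a sketch that builds a pipe cookie and notes how
the macros above decode it again:

static void sketch_pipe_roundtrip(struct usb_device *udev)
{
	unsigned int pipe = usb_rcvbulkpipe(udev, 1);	/* bulk IN, ep 1 */

	/*
	 * The cookie decodes back into its parts:
	 *   usb_pipetype(pipe)     == PIPE_BULK
	 *   usb_pipein(pipe)       != 0
	 *   usb_pipedevice(pipe)   == udev->devnum
	 *   usb_pipeendpoint(pipe) == 1
	 */
	(void)pipe;
}
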
+static inline struct usb_host_endpoint *
+usb_pipe_endpoint(struct usb_device *dev, unsigned int pipe)
+{
+	struct usb_host_endpoint **eps;
+	eps = usb_pipein(pipe) ? dev->ep_in : dev->ep_out;
+	return eps[usb_pipeendpoint(pipe)];
+}
+
+/*-------------------------------------------------------------------------*/
+
+static inline __u16
+usb_maxpacket(struct usb_device *udev, int pipe, int is_out)
+{
+	struct usb_host_endpoint	*ep;
+	unsigned			epnum = usb_pipeendpoint(pipe);
+
+	if (is_out) {
+		WARN_ON(usb_pipein(pipe));
+		ep = udev->ep_out[epnum];
+	} else {
+		WARN_ON(usb_pipeout(pipe));
+		ep = udev->ep_in[epnum];
+	}
+	if (!ep)
+		return 0;
+
+	/* NOTE:  only 0x07ff bits are for packet size... */
+	return usb_endpoint_maxp(&ep->desc);
+}
+
+/* ----------------------------------------------------------------------- */
+
+/* translate USB error codes to codes user space understands */
+static inline int usb_translate_errors(int error_code)
+{
+	switch (error_code) {
+	case 0:
+	case -ENOMEM:
+	case -ENODEV:
+	case -EOPNOTSUPP:
+		return error_code;
+	default:
+		return -EIO;
+	}
+}
+
+/* Events from the usb core */
+#define USB_DEVICE_ADD		0x0001
+#define USB_DEVICE_REMOVE	0x0002
+#define USB_BUS_ADD		0x0003
+#define USB_BUS_REMOVE		0x0004
+extern void usb_register_notify(struct notifier_block *nb);
+extern void usb_unregister_notify(struct notifier_block *nb);
+
+/* debugfs stuff */
+extern struct dentry *usb_debug_root;
+
+#endif  /* __KERNEL__ */
+
+#endif
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 2/7] USB: Adapt the usb-compat.h to uboot and fix compiler errors
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 3/7] USB: Initial kernel back port of the dwc3 kernel code Dan Murphy
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC (permalink / raw)
  To: u-boot

Adapt usb-compat.h to uBoot.

Use #ifndef __UBOOT__ around code that is not applicable to uBoot.
Use #ifdef __UBOOT__ to add code that is uBoot specific.

Create linux-compat.h - Linux kernel compatibility definitions that do not
exist in uBoot.  The compatibility definitions from lin_gadget_compat.h have
been moved into this file as well.

Create usb-mod-devicetable.h - a partial back port of mod_devicetable.h from
the linux kernel, taking only the portion needed for USB.

Existing header files were modified to pick up the new header files.

Currently musb will not compile.

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 drivers/usb/musb-new/musb_host.h        |    1 +
 drivers/usb/musb-new/usb-compat.h       |   30 ----
 include/linux/usb/gadget.h              |  184 +++++++++++++++++++-----
 include/linux/usb/linux-compat.h        |  233 +++++++++++++++++++++++++++++++
 include/linux/usb/usb-compat.h          |  186 +++++++++++++++++++++---
 include/linux/usb/usb-mod-devicetable.h |  131 +++++++++++++++++
 include/usb.h                           |  119 +---------------
 include/usb/lin_gadget_compat.h         |   29 +---
 8 files changed, 684 insertions(+), 229 deletions(-)
 create mode 100644 include/linux/usb/linux-compat.h
 create mode 100644 include/linux/usb/usb-mod-devicetable.h

diff --git a/drivers/usb/musb-new/musb_host.h b/drivers/usb/musb-new/musb_host.h
index ebebe0c..49cb094 100644
--- a/drivers/usb/musb-new/musb_host.h
+++ b/drivers/usb/musb-new/musb_host.h
@@ -35,6 +35,7 @@
 #ifndef _MUSB_HOST_H
 #define _MUSB_HOST_H
 #ifdef __UBOOT__
+#include <linux/usb/usb-compat.h>
 #include "usb-compat.h"
 #endif
 
diff --git a/drivers/usb/musb-new/usb-compat.h b/drivers/usb/musb-new/usb-compat.h
index 27f656f..bdb5f0e 100644
--- a/drivers/usb/musb-new/usb-compat.h
+++ b/drivers/usb/musb-new/usb-compat.h
@@ -6,13 +6,6 @@
 struct usb_hcd {
 	void *hcd_priv;
 };
-
-struct usb_host_endpoint {
-	struct usb_endpoint_descriptor		desc;
-	struct list_head urb_list;
-	void *hcpriv;
-};
-
 /*
  * urb->transfer_flags:
  *
@@ -20,29 +13,6 @@ struct usb_host_endpoint {
  */
 #define URB_SHORT_NOT_OK	0x0001	/* report short reads as errors */
 #define URB_ZERO_PACKET		0x0040	/* Finish bulk OUT with short packet */
-
-struct urb;
-
-typedef void (*usb_complete_t)(struct urb *);
-
-struct urb {
-	void *hcpriv;			/* private data for host controller */
-	struct list_head urb_list;	/* list head for use by the urb's
-					 * current owner */
-	struct usb_device *dev;		/* (in) pointer to associated device */
-	struct usb_host_endpoint *ep;	/* (internal) pointer to endpoint */
-	unsigned int pipe;		/* (in) pipe information */
-	int status;			/* (return) non-ISO status */
-	unsigned int transfer_flags;	/* (in) URB_SHORT_NOT_OK | ...*/
-	void *transfer_buffer;		/* (in) associated data buffer */
-	dma_addr_t transfer_dma;	/* (in) dma addr for transfer_buffer */
-	u32 transfer_buffer_length;	/* (in) data buffer length */
-	u32 actual_length;		/* (return) actual transfer length */
-	unsigned char *setup_packet;	/* (in) setup packet (control only) */
-	int start_frame;		/* (modify) start frame (ISO) */
-	usb_complete_t complete;	/* (in) completion routine */
-};
-
 #define usb_hcd_link_urb_to_ep(hcd, urb)	({		\
 	int ret = 0;						\
 	list_add_tail(&urb->urb_list, &urb->ep->urb_list);	\
diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
index 220d068..f987ff2 100644
--- a/include/linux/usb/gadget.h
+++ b/include/linux/usb/gadget.h
@@ -18,7 +18,11 @@
 #ifndef __LINUX_USB_GADGET_H
 #define __LINUX_USB_GADGET_H
 
+#include <common.h>
 #include <linux/list.h>
+#include <linux/usb/linux-compat.h>
+
+#define USB_GADGET_DELAYED_STATUS       0x7fff	/* Impossibly large value */
 
 struct usb_ep;
 
@@ -83,6 +87,11 @@ struct usb_request {
 	unsigned		length;
 	dma_addr_t		dma;
 
+	struct scatterlist	*sg;
+	unsigned		num_sgs;
+	unsigned		num_mapped_sgs;
+
+	unsigned		stream_id:16;
 	unsigned		no_interrupt:1;
 	unsigned		zero:1;
 	unsigned		short_not_ok:1;
@@ -119,6 +128,8 @@ struct usb_ep_ops {
 	int (*dequeue) (struct usb_ep *ep, struct usb_request *req);
 
 	int (*set_halt) (struct usb_ep *ep, int value);
+	int (*set_wedge) (struct usb_ep *ep);
+
 	int (*fifo_status) (struct usb_ep *ep);
 	void (*fifo_flush) (struct usb_ep *ep);
 };
@@ -140,10 +151,17 @@ struct usb_ep_ops {
  */
 struct usb_ep {
 	void			*driver_data;
+
 	const char		*name;
 	const struct usb_ep_ops	*ops;
 	struct list_head	ep_list;
 	unsigned		maxpacket:16;
+	unsigned		max_streams:16;
+	unsigned		mult:2;
+	unsigned		maxburst:5;
+	u8			address;
+	const struct usb_endpoint_descriptor	*desc;
+	const struct usb_ss_ep_comp_descriptor	*comp_desc;
 };
 
 /*-------------------------------------------------------------------------*/
@@ -390,10 +408,120 @@ static inline void usb_ep_fifo_flush(struct usb_ep *ep)
 		ep->ops->fifo_flush(ep);
 }
 
+/*-------------------------------------------------------------------------*/
+
+struct usb_dcd_config_params {
+	__u8  bU1devExitLat;	/* U1 Device exit Latency */
+#define USB_DEFAULT_U1_DEV_EXIT_LAT	0x01	/* Less than 1 microsec */
+	__le16 bU2DevExitLat;	/* U2 Device exit Latency */
+#define USB_DEFAULT_U2_DEV_EXIT_LAT	0x1F4	/* Less than 500 microsec */
+};
 
 /*-------------------------------------------------------------------------*/
 
 struct usb_gadget;
+struct usb_gadget_driver;
+
+/*-------------------------------------------------------------------------*/
+
+/* utility to simplify dealing with string descriptors */
+
+/**
+ * struct usb_string - wraps a C string and its USB id
+ * @id:the (nonzero) ID for this string
+ * @s:the string, in UTF-8 encoding
+ *
+ * If you're using usb_gadget_get_string(), use this to wrap a string
+ * together with its ID.
+ */
+struct usb_string {
+	u8			id;
+	const char		*s;
+};
+
+/**
+ * struct usb_gadget_strings - a set of USB strings in a given language
+ * @language:identifies the strings' language (0x0409 for en-us)
+ * @strings:array of strings with their ids
+ *
+ * If you're using usb_gadget_get_string(), use this to wrap all the
+ * strings for a given language.
+ */
+struct usb_gadget_strings {
+	u16			language;	/* 0x0409 for en-us */
+	struct usb_string	*strings;
+};
+
+struct usb_gadget_string_container {
+	struct list_head        list;
+	u8                      *stash[0];
+};
+
+/* put descriptor for string with that id into buf (buflen >= 256) */
+int usb_gadget_get_string(struct usb_gadget_strings *table, int id, u8 *buf);
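
A sketch of how a gadget driver might populate these tables before calling
usb_gadget_get_string(); the IDs and text are illustrative only:

static struct usb_string sketch_strings[] = {
	{ 1, "Texas Instruments" },	/* iManufacturer */
	{ 2, "DWC3 Gadget" },		/* iProduct */
	{ }				/* end of list */
};

static struct usb_gadget_strings sketch_stringtab = {
	.language = 0x0409,		/* en-us */
	.strings  = sketch_strings,
};

/* usb_gadget_get_string(&sketch_stringtab, 1, buf) fills buf (>= 256 bytes)
 * with the UTF-16LE string descriptor for "Texas Instruments". */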
+
+/*-------------------------------------------------------------------------*/
+
+/* utility to simplify managing config descriptors */
+
+/* write vector of descriptors into buffer */
+int usb_descriptor_fillbuf(void *, unsigned,
+		const struct usb_descriptor_header **);
+
+/* build config descriptor from single descriptor vector */
+int usb_gadget_config_buf(const struct usb_config_descriptor *config,
+	void *buf, unsigned buflen, const struct usb_descriptor_header **desc);
+
+/* copy a NULL-terminated vector of descriptors */
+struct usb_descriptor_header **usb_copy_descriptors(
+		struct usb_descriptor_header **);
+
+/**
+ * usb_free_descriptors - free descriptors returned by usb_copy_descriptors()
+ * @v: vector of descriptors
+ */
+static inline void usb_free_descriptors(struct usb_descriptor_header **v)
+{
+	kfree(v);
+}
+
+struct usb_function;
+int usb_assign_descriptors(struct usb_function *f,
+		struct usb_descriptor_header **fs,
+		struct usb_descriptor_header **hs,
+		struct usb_descriptor_header **ss);
+void usb_free_all_descriptors(struct usb_function *f);
+
+/*-------------------------------------------------------------------------*/
+
+/* utility to simplify map/unmap of usb_requests to/from DMA */
+
+extern int usb_gadget_map_request(struct usb_gadget *gadget,
+		struct usb_request *req, int is_in);
+
+extern void usb_gadget_unmap_request(struct usb_gadget *gadget,
+		struct usb_request *req, int is_in);
+
+/*-------------------------------------------------------------------------*/
+
+/* utility to set gadget state properly */
+
+extern void usb_gadget_set_state(struct usb_gadget *gadget,
+		enum usb_device_state state);
+
+/*-------------------------------------------------------------------------*/
+
+/* utility wrapping a simple endpoint selection policy */
+
+extern struct usb_ep *usb_ep_autoconfig(struct usb_gadget *,
+			struct usb_endpoint_descriptor *);
+
+
+extern struct usb_ep *usb_ep_autoconfig_ss(struct usb_gadget *,
+			struct usb_endpoint_descriptor *,
+			struct usb_ss_ep_comp_descriptor *);
+
+extern void usb_ep_autoconfig_reset(struct usb_gadget *);
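
A sketch of the endpoint-selection helpers declared above, assuming a gadget
function binding a single endpoint from its descriptor:

static struct usb_ep *sketch_bind_ep(struct usb_gadget *g,
				     struct usb_endpoint_descriptor *desc,
				     void *owner)
{
	struct usb_ep *ep = usb_ep_autoconfig(g, desc);

	if (!ep)
		return NULL;		/* no matching hardware endpoint */
	ep->driver_data = owner;	/* mark the endpoint as claimed */
	return ep;
}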
 
 /* the rest of the api to the controller hardware: device operations,
  * which don't involve endpoints (or i/o).
@@ -407,11 +535,11 @@ struct usb_gadget_ops {
 	int	(*pullup) (struct usb_gadget *, int is_on);
 	int	(*ioctl)(struct usb_gadget *,
 				unsigned code, unsigned long param);
-};
-
-struct device {
-	void		*driver_data;	/* data private to the driver */
-	void            *device_data;   /* data private to the device */
+	void	(*get_config_params)(struct usb_dcd_config_params *);
+	int	(*udc_start)(struct usb_gadget *,
+			struct usb_gadget_driver *);
+	int	(*udc_stop)(struct usb_gadget *,
+			struct usb_gadget_driver *);
 };
 
 /**
@@ -462,6 +590,9 @@ struct usb_gadget {
 	struct usb_ep			*ep0;
 	struct list_head		ep_list;	/* of usb_ep */
 	enum usb_device_speed		speed;
+	enum usb_device_speed		max_speed;
+	enum usb_device_state		state;
+	unsigned			sg_supported:1;
 	unsigned			is_dualspeed:1;
 	unsigned			is_otg:1;
 	unsigned			is_a_peripheral:1;
@@ -470,6 +601,9 @@ struct usb_gadget {
 	unsigned			a_alt_hnp_support:1;
 	const char			*name;
 	struct device			dev;
+	unsigned			out_epnum;
+	unsigned			in_epnum;
+
 };
 
 static inline void set_gadget_data(struct usb_gadget *gadget, void *data)
@@ -756,16 +890,22 @@ static inline int usb_gadget_disconnect(struct usb_gadget *gadget)
  * power is maintained.
  */
 struct usb_gadget_driver {
-	enum usb_device_speed	speed;
-	int			(*bind)(struct usb_gadget *);
+	char			*function;
+	enum usb_device_speed	max_speed;
+	int			(*bind)(struct usb_gadget *gadget,
+					struct usb_gadget_driver *driver);
 	void			(*unbind)(struct usb_gadget *);
 	int			(*setup)(struct usb_gadget *,
 					const struct usb_ctrlrequest *);
 	void			(*disconnect)(struct usb_gadget *);
 	void			(*suspend)(struct usb_gadget *);
 	void			(*resume)(struct usb_gadget *);
-};
 
+	/* FIXME support safe rmmod */
+	struct device_driver	driver;
+	enum usb_device_speed   speed;
+
+};
 
 /*-------------------------------------------------------------------------*/
 
@@ -806,34 +946,6 @@ int usb_gadget_unregister_driver(struct usb_gadget_driver *driver);
 
 /*-------------------------------------------------------------------------*/
 
-/* utility to simplify dealing with string descriptors */
-
-/**
- * struct usb_string - wraps a C string and its USB id
- * @id:the (nonzero) ID for this string
- * @s:the string, in UTF-8 encoding
- *
- * If you're using usb_gadget_get_string(), use this to wrap a string
- * together with its ID.
- */
-struct usb_string {
-	u8			id;
-	const char		*s;
-};
-
-/**
- * struct usb_gadget_strings - a set of USB strings in a given language
- * @language:identifies the strings' language (0x0409 for en-us)
- * @strings:array of strings with their ids
- *
- * If you're using usb_gadget_get_string(), use this to wrap all the
- * strings for a given language.
- */
-struct usb_gadget_strings {
-	u16			language;	/* 0x0409 for en-us */
-	struct usb_string	*strings;
-};
-
 /* put descriptor for string with that id into buf (buflen >= 256) */
 int usb_gadget_get_string(struct usb_gadget_strings *table, int id, u8 *buf);
 
diff --git a/include/linux/usb/linux-compat.h b/include/linux/usb/linux-compat.h
new file mode 100644
index 0000000..9850f44
--- /dev/null
+++ b/include/linux/usb/linux-compat.h
@@ -0,0 +1,233 @@
+/*
+ * linux-compat.h -- linux compatibility header file
+ *
+ * Copyright (C) 2013
+ * Texas Instruments Incorporated.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#ifndef __LINUX_COMPAT_H__
+#define __LINUX_COMPAT_H__
+
+#include <common.h>
+#include <malloc.h>
+#include <linux/list.h>
+#include <linux/compat.h>
+#include <asm-generic/errno.h>
+
+#define __init
+#define __devinit
+#define __devinitdata
+#define __devinitconst
+#define __iomem
+
+struct unused {};
+typedef struct unused unused_t;
+
+typedef int irqreturn_t;
+typedef unused_t spinlock_t;
+
+struct work_struct {};
+
+struct timer_list {};
+struct notifier_block {};
+
+typedef unsigned long dmaaddr_t;
+#define BUS_ID_SIZE		20
+
+struct device {
+	struct device		*parent;
+	struct class		*class;
+	char bus_id[BUS_ID_SIZE];	/* position on parent bus */
+	dev_t devt;	/* dev_t, creates the sysfs "dev" */
+	void (*release)(struct device *dev);
+	void *driver_data;	/* data private to the driver */
+	void *device_data;   /* data private to the device */
+};
+
+/**
+ * struct device_driver - The basic device driver structure
+ * @name:	Name of the device driver.
+ * @bus:	The bus which the device of this driver belongs to.
+ * @suppress_bind_attrs: Disables bind/unbind via sysfs.
+ *
+ * The device driver-model tracks all of the drivers known to the system.
+ * The main reason for this tracking is to enable the driver core to match
+ * up drivers with new devices. Once drivers are known objects within the
+ * system, however, a number of other things become possible. Device drivers
+ * can export information and configuration variables that are independent
+ * of any specific device.
+ */
+struct device_driver {
+	const char		*name;
+	struct bus_type		*bus;
+
+	bool suppress_bind_attrs;	/* disables bind/unbind via sysfs */
+
+};
+
+/*
+ * Loop over each sg element, following the pointer to a new list if necessary
+ */
+#define for_each_sg(sglist, sg, nr, __i)	\
+	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
+
+/* Errors */
+#define	EOPNOTSUPP 95
+
+#define setup_timer(timer, func, data) do {} while (0)
+#define del_timer_sync(timer) do {} while (0)
+#define schedule_work(work) do {} while (0)
+#define INIT_WORK(work, fun) do {} while (0)
+
+#define cpu_relax() do {} while (0)
+
+#define dev_dbg(dev, fmt, args...)		\
+	debug(fmt, ##args)
+#define dev_vdbg(dev, fmt, args...)		\
+	debug(fmt, ##args)
+#define dev_info(dev, fmt, args...)		\
+	printf(fmt, ##args)
+#define dev_err(dev, fmt, args...)		\
+	printf(fmt, ##args)
+#define printk printf
+
+#define WARN(condition, fmt, args...) ({	\
+	int ret_warn = !!(condition);		\
+	if (ret_warn)				\
+		printf(fmt, ##args);		\
+	ret_warn; })
+
+#define KERN_DEBUG
+#define KERN_NOTICE
+#define KERN_WARNING
+#define KERN_ERR
+#define WARN_ON_ONCE WARN_ON
+#define BUILD_BUG_ON_NOT_POWER_OF_2(n) (0)
+
+#define kfree(ptr) free(ptr)
+
+/* common */
+#define spin_lock_init(...)
+#define spin_lock(...)
+#define spin_lock_irqsave(lock, flags) do { debug("%lu\n", flags); } while (0)
+#define spin_unlock(...)
+#define spin_unlock_irqrestore(lock, flags) do {flags = 0; } while (0)
+#define disable_irq(...)
+#define enable_irq(...)
+
+#define mutex_init(...)
+#define mutex_lock(...)
+#define mutex_unlock(...)
+
+#define GFP_KERNEL	0
+
+#define IRQ_HANDLED	1
+
+#define	EOPNOTSUPP	95
+#define ENOTSUPP	524	/* Operation is not supported */
+
+#define BITS_PER_BYTE				8
+#define BITS_TO_LONGS(nr) \
+	DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+#define DECLARE_BITMAP(name, bits) \
+	unsigned long name[BITS_TO_LONGS(bits)]
+
+#define small_const_nbits(nbits) \
+	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
+
+/* TODO: Figure out these structs */
+static inline int pm_runtime_get_sync(struct device *dev) { return 0; }
+#define pm_runtime_put(dev) do {} while (0)
+#define pm_runtime_put_sync(dev) do {} while (0)
+#define pm_runtime_use_autosuspend(dev) do {} while (0)
+#define pm_runtime_set_autosuspend_delay(dev, delay) do {} while (0)
+#define pm_runtime_enable(dev) do {} while (0)
+#define pm_runtime_disable(dev) do {} while (0)
+
+#define MODULE_DESCRIPTION(desc)
+#define MODULE_AUTHOR(author)
+#define MODULE_LICENSE(license)
+#define MODULE_ALIAS(alias)
+#define module_param(name, type, perm)
+#define MODULE_PARM_DESC(name, desc)
+#define EXPORT_SYMBOL_GPL(name)
+#define MODULE_DEVICE_TABLE(type, name)
+
+#define writesl(a, d, s) __raw_writesl((unsigned long)a, d, s)
+#define readsl(a, d, s) __raw_readsl((unsigned long)a, d, s)
+#define writesw(a, d, s) __raw_writesw((unsigned long)a, d, s)
+#define readsw(a, d, s) __raw_readsw((unsigned long)a, d, s)
+#define writesb(a, d, s) __raw_writesb((unsigned long)a, d, s)
+#define readsb(a, d, s) __raw_readsb((unsigned long)a, d, s)
+
+/* IRQ */
+#define IRQ_NONE 0
+#define IRQ_WAKE_THREAD 0
+#define IRQF_SHARED 0
+#define IRQF_ONESHOT 0
+#define GFP_KERNEL	0
+
+#define dev_set_drvdata(dev, data) do {} while (0)
+
+#define disable_irq_wake(irq) do {} while (0)
+#define enable_irq_wake(irq) -EINVAL
+#define free_irq(irq, data) do {} while (0)
+#define request_irq(nr, f, flags, nm, data) 0
+
+#define device_init_wakeup(dev, a) do {} while (0)
+
+#define platform_data device_data
+
+#ifndef wmb
+#define wmb()			asm volatile (""   : : : "memory")
+#endif
+
+#define msleep(a)	udelay((a) * 1000)
+
+/*
+ * Map U-Boot config options to Linux ones
+ */
+#ifdef CONFIG_OMAP34XX
+#define CONFIG_SOC_OMAP3430
+#endif
+
+/* TODO: Figure out these structs
+static inline int IS_ENABLED(option) { return option; };*/
+#define lower_32_bits(n) ((u32)(n))
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define PTR_ALIGN(p, a)		((typeof(p))ALIGN((unsigned long)(p), (a)))
+#define IS_ALIGNED(x, a)		(((x) & ((typeof(x))(a) - 1)) == 0)
+#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
+
+
+static inline void *dma_alloc_coherent(void *dev, size_t size,
+		dma_addr_t *dma_handle,	gfp_t gfp)
+{
+	void *p;
+
+	p = malloc(size);
+	*dma_handle = (unsigned long)p;
+	return p;
+}
+
+static inline void dma_free_coherent(struct device *dev, size_t size,
+		void *vaddr, dma_addr_t bus)
+{
+	free(vaddr);
+}
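
A hedged sketch of what these malloc-backed stubs imply for callers; the
1:1 mapping is a U-Boot assumption, not something this header enforces:

static inline void sketch_dma_stub_use(void)
{
	dma_addr_t dma;
	void *trb = dma_alloc_coherent(NULL, 64, &dma, GFP_KERNEL);

	/*
	 * dma == (unsigned long)trb: the stub assumes an identity
	 * mapping and does no cache maintenance, so drivers must
	 * flush/invalidate around DMA themselves.
	 */
	dma_free_coherent(NULL, 64, trb, dma);
}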
+
+#endif /* __LINUX_COMPAT_H__ */
diff --git a/include/linux/usb/usb-compat.h b/include/linux/usb/usb-compat.h
index a0bee5a..a46d1e6 100644
--- a/include/linux/usb/usb-compat.h
+++ b/include/linux/usb/usb-compat.h
@@ -1,15 +1,36 @@
+/*
+ * Copyright (C) 2013
+ * Texas Instruments Incorporated.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.	 See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ *
+ * Note: This code has been backported from linux source include/linux/usb.h
+ * At the time of the backport the linux source file did not contain a
+ * license or copyright notice.
+ *
+ */
+
 #ifndef __LINUX_USB_H
 #define __LINUX_USB_H
 
+#define __UBOOT__
+
+#ifndef __UBOOT__
 #include <linux/mod_devicetable.h>
 #include <linux/usb/ch9.h>
-
-#define USB_MAJOR			180
-#define USB_DEVICE_MAJOR		189
-
-
-#ifdef __KERNEL__
-
 #include <linux/errno.h>        /* for -ENODEV */
 #include <linux/delay.h>	/* for mdelay() */
 #include <linux/interrupt.h>	/* for in_interrupt() */
@@ -22,6 +43,20 @@
 #include <linux/mutex.h>	/* for struct mutex */
 #include <linux/pm_runtime.h>	/* for runtime PM */
 
+#else
+
+#include <common.h>
+#include <linux/list.h>		/* for struct list_head */
+#include <linux/usb/ch9.h>
+#include <linux/usb/linux-compat.h>
+
+#include "usb-mod-devicetable.h"
+
+#endif
+
+#define USB_MAJOR			180
+#define USB_DEVICE_MAJOR		189
+
 struct usb_device;
 struct usb_driver;
 struct wusb_dev;
@@ -95,6 +130,16 @@ enum usb_interface_condition {
 	USB_INTERFACE_UNBINDING,
 };
 
+#ifdef __UBOOT__
+enum {
+	/* Maximum packet size; encoded as 0,1,2,3 = 8,16,32,64 */
+	PACKET_SIZE_8   = 0,
+	PACKET_SIZE_16  = 1,
+	PACKET_SIZE_32  = 2,
+	PACKET_SIZE_64  = 3,
+};
+#endif
+
 /**
  * struct usb_interface - what usb device drivers talk to
  * @altsetting: array of interface structures, one for each alternate
@@ -169,7 +214,14 @@ struct usb_interface {
 	/* If there is an interface association descriptor then it will list
 	 * the associated interfaces */
 	struct usb_interface_assoc_descriptor *intf_assoc;
-
+#ifdef __UBOOT__
+#define USB_MAXENDPOINTS		16
+	struct usb_interface_descriptor desc;
+	struct usb_endpoint_descriptor ep_desc[USB_MAXENDPOINTS];
+	unsigned char	no_of_ep;
+	unsigned char	act_altsetting;
+	struct usb_ss_ep_comp_descriptor ss_ep_comp_desc[USB_MAXENDPOINTS];
+#endif
 	int minor;			/* minor number this interface is
 					 * bound to */
 	enum usb_interface_condition condition;		/* state of binding */
@@ -184,19 +236,31 @@ struct usb_interface {
 
 	struct device dev;		/* interface specific device info */
 	struct device *usb_dev;
+#ifndef __UBOOT__
 	atomic_t pm_usage_cnt;		/* usage counter for autosuspend */
 	struct work_struct reset_ws;	/* for resets in atomic context */
+#endif
 };
 #define	to_usb_interface(d) container_of(d, struct usb_interface, dev)
 
 static inline void *usb_get_intfdata(struct usb_interface *intf)
 {
+#ifndef __UBOOT__
 	return dev_get_drvdata(&intf->dev);
+#else
+	/* TODO Need to implement this to return the correct pointer */
+	return NULL;
+#endif
 }
 
 static inline void usb_set_intfdata(struct usb_interface *intf, void *data)
 {
+#ifndef __UBOOT__
 	dev_set_drvdata(&intf->dev, data);
+#else
+	/* TODO Need to implement this to store the data pointer */
+	return;
+#endif
 }
 
 struct usb_interface *usb_get_intf(struct usb_interface *intf);
@@ -222,17 +286,29 @@ void usb_put_intf(struct usb_interface *intf);
  */
 struct usb_interface_cache {
 	unsigned num_altsetting;	/* number of alternate settings */
+#ifndef __UBOOT__
 	struct kref ref;		/* reference counter */
-
+#endif
 	/* variable-length array of alternate settings for this interface,
 	 * stored in no particular order */
 	struct usb_host_interface altsetting[0];
 };
+
 #define	ref_to_usb_interface_cache(r) \
 		container_of(r, struct usb_interface_cache, ref)
 #define	altsetting_to_usb_interface_cache(a) \
 		container_of(a, struct usb_interface_cache, altsetting[0])
 
+#ifdef __UBOOT__
+/* Configuration information.. */
+struct usb_config {
+	struct usb_config_descriptor desc;
+
+	unsigned char	no_of_if;	/* number of interfaces */
+	struct usb_interface if_desc[USB_MAXINTERFACES];
+} __attribute__ ((packed));
+#endif
+
 /**
  * struct usb_host_config - representation of a device's configuration
  * @desc: the device's configuration descriptor.
@@ -501,6 +577,35 @@ struct usb3_lpm_parameters {
 struct usb_device {
 	int		devnum;
 	char		devpath[16];
+#ifdef __UBOOT__
+	/* Legacy uBoot usb_device definitions */
+	char	mf[32];			/* manufacturer */
+	char	prod[32];		/* product */
+
+	struct usb_device *children[USB_MAXCHILDREN];
+	struct usb_config config; /* config descriptor */
+
+	int epmaxpacketin[16];		/* INput endpoint specific maximums */
+	int epmaxpacketout[16];		/* OUTput endpoint specific maximums */
+	int configno;			/* selected config number */
+	int act_len;			/* transferred bytes */
+	int portnr;
+	/* Maximum packet size; one of: PACKET_SIZE_* */
+	int maxpacketsize;
+	int irq_act_len;		/* transferred bytes */
+	int (*irq_handle)(struct usb_device *dev);
+
+	/* endpoint halts; one bit per endpoint # & direction;
+	 * [0] = IN, [1] = OUT
+	 */
+	unsigned int halted[2];
+	unsigned long status;
+	unsigned long irq_status;
+
+	void *controller;		/* hardware controller private data */
+	void *privptr;
+#endif
+
 	u32		route;
 	enum usb_device_state	state;
 	enum usb_device_speed	speed;
@@ -518,8 +623,13 @@ struct usb_device {
 
 	struct usb_device_descriptor descriptor;
 	struct usb_host_bos *bos;
-	struct usb_host_config *config;
 
+#ifndef __UBOOT__
+	/* This conflicts with legacy uBoot config */
+	struct usb_host_config *config;
+#else
+	struct usb_host_config *host_config;
+#endif
 	struct usb_host_config *actconfig;
 	struct usb_host_endpoint *ep_in[16];
 	struct usb_host_endpoint *ep_out[16];
@@ -552,8 +662,9 @@ struct usb_device {
 	int maxchild;
 
 	u32 quirks;
+#ifndef __UBOOT__
 	atomic_t urbnum;
-
+#endif
 	unsigned long active_duration;
 
 #ifdef CONFIG_PM
@@ -707,7 +818,11 @@ extern int usb_driver_claim_interface(struct usb_driver *driver,
  */
 static inline int usb_interface_claimed(struct usb_interface *iface)
 {
+#ifndef __UBOOT__
 	return (iface->dev.driver != NULL);
+#else
+	return true;
+#endif
 }
 
 extern void usb_driver_release_interface(struct usb_driver *driver,
@@ -1032,11 +1147,11 @@ struct usb_driver {
 
 	int (*unlocked_ioctl) (struct usb_interface *intf, unsigned int code,
 			void *buf);
-
+#ifdef CONFIG_PM
 	int (*suspend) (struct usb_interface *intf, pm_message_t message);
 	int (*resume) (struct usb_interface *intf);
 	int (*reset_resume)(struct usb_interface *intf);
-
+#endif
 	int (*pre_reset)(struct usb_interface *intf);
 	int (*post_reset)(struct usb_interface *intf);
 
@@ -1075,9 +1190,11 @@ struct usb_device_driver {
 
 	int (*probe) (struct usb_device *udev);
 	void (*disconnect) (struct usb_device *udev);
-
+#ifdef CONFIG_PM
 	int (*suspend) (struct usb_device *udev, pm_message_t message);
 	int (*resume) (struct usb_device *udev, pm_message_t message);
+#endif
+
 	struct usbdrv_wrap drvwrap;
 	unsigned int supports_autosuspend:1;
 };
@@ -1104,7 +1221,7 @@ struct usb_class_driver {
 	const struct file_operations *fops;
 	int minor_base;
 };
-
+#ifndef __UBOOT__
 /*
  * use these in module_init()/module_exit()
  * and don't forget MODULE_DEVICE_TABLE(usb, ...)
@@ -1140,6 +1257,7 @@ extern void usb_deregister_dev(struct usb_interface *intf,
 			       struct usb_class_driver *class_driver);
 
 extern int usb_disabled(void);
+#endif /* __UBOOT__ */
 
 /* ----------------------------------------------------------------------- */
 
@@ -1187,7 +1305,9 @@ struct urb;
 
 struct usb_anchor {
 	struct list_head urb_list;
+#ifndef __UBOOT__
 	wait_queue_head_t wait;
+#endif
 	spinlock_t lock;
 	unsigned int poisoned:1;
 };
@@ -1195,7 +1315,9 @@ struct usb_anchor {
 static inline void init_usb_anchor(struct usb_anchor *anchor)
 {
 	INIT_LIST_HEAD(&anchor->urb_list);
+#ifndef __UBOOT__
 	init_waitqueue_head(&anchor->wait);
+#endif
 	spin_lock_init(&anchor->lock);
 }
 
@@ -1383,11 +1505,15 @@ typedef void (*usb_complete_t)(struct urb *);
  * usb_submit_urb() till the entry into the completion routine.
  */
 struct urb {
+#ifndef __UBOOT__
 	/* private: usb core and host controller only fields in the urb */
 	struct kref kref;		/* reference count of the URB */
+#endif
 	void *hcpriv;			/* private data for host controller */
+#ifndef __UBOOT__
 	atomic_t use_count;		/* concurrent submissions counter */
 	atomic_t reject;		/* submissions will fail */
+#endif
 	int unlinked;			/* unlink error code */
 
 	/* public: documented fields in the urb that can be used by drivers */
@@ -1403,13 +1529,17 @@ struct urb {
 	unsigned int transfer_flags;	/* (in) URB_SHORT_NOT_OK | ...*/
 	void *transfer_buffer;		/* (in) associated data buffer */
 	dma_addr_t transfer_dma;	/* (in) dma addr for transfer_buffer */
+#ifndef __UBOOT__
 	struct scatterlist *sg;		/* (in) scatter gather buffer list */
+#endif
 	int num_mapped_sgs;		/* (internal) mapped sg entries */
 	int num_sgs;			/* (in) number of entries in the sg list */
 	u32 transfer_buffer_length;	/* (in) data buffer length */
 	u32 actual_length;		/* (return) actual transfer length */
 	unsigned char *setup_packet;	/* (in) setup packet (control only) */
+#ifndef __UBOOT__
 	dma_addr_t setup_dma;		/* (in) dma addr for setup_packet */
+#endif
 	int start_frame;		/* (modify) start frame (ISO) */
 	int number_of_packets;		/* (in) number of ISO packets */
 	int interval;			/* (modify) transfer interval
@@ -1604,7 +1734,7 @@ void usb_buffer_unmap_sg(const struct usb_device *dev, int is_in,
 /*-------------------------------------------------------------------*
  *                         SYNCHRONOUS CALL SUPPORT                  *
  *-------------------------------------------------------------------*/
-
+#ifndef __UBOOT__
 extern int usb_control_msg(struct usb_device *dev, unsigned int pipe,
 	__u8 request, __u8 requesttype, __u16 value, __u16 index,
 	void *data, __u16 size, int timeout);
@@ -1630,6 +1760,7 @@ extern void usb_reset_endpoint(struct usb_device *dev, unsigned int epaddr);
 
 /* this request isn't really synchronous, but it belongs with the others */
 extern int usb_driver_set_configuration(struct usb_device *udev, int config);
+#endif
 
 /*
  * timeouts, in milliseconds, used for sending/receiving control messages
@@ -1674,7 +1805,9 @@ struct usb_sg_request {
 	struct urb		**urbs;
 
 	int			count;
+#ifndef __UBOOT__
 	struct completion	complete;
+#endif
 };
 
 int usb_sg_init(
@@ -1762,7 +1895,7 @@ usb_pipe_endpoint(struct usb_device *dev, unsigned int pipe)
 }
 
 /*-------------------------------------------------------------------------*/
-
+#ifndef __UBOOT__
 static inline __u16
 usb_maxpacket(struct usb_device *udev, int pipe, int is_out)
 {
@@ -1770,10 +1903,16 @@ usb_maxpacket(struct usb_device *udev, int pipe, int is_out)
 	unsigned			epnum = usb_pipeendpoint(pipe);
 
 	if (is_out) {
+#ifndef __UBOOT__
+		/* Fix this warning */
 		WARN_ON(usb_pipein(pipe));
+#endif
 		ep = udev->ep_out[epnum];
 	} else {
+#ifndef __UBOOT__
+		/* Fix this warning */
 		WARN_ON(usb_pipeout(pipe));
+#endif
 		ep = udev->ep_in[epnum];
 	}
 	if (!ep)
@@ -1782,6 +1921,7 @@ usb_maxpacket(struct usb_device *udev, int pipe, int is_out)
 	/* NOTE:  only 0x07ff bits are for packet size... */
 	return usb_endpoint_maxp(&ep->desc);
 }
+#endif
 
 /* ----------------------------------------------------------------------- */
 
@@ -1810,6 +1950,16 @@ extern void usb_unregister_notify(struct notifier_block *nb);
 /* debugfs stuff */
 extern struct dentry *usb_debug_root;
 
-#endif  /* __KERNEL__ */
+#ifdef __UBOOT__
+/* device request (setup) */
+struct devrequest {
+	unsigned char	requesttype;
+	unsigned char	request;
+	unsigned short	value;
+	unsigned short	index;
+	unsigned short	length;
+} __attribute__ ((packed));
 
 #endif
+
+#endif  /* __LINUX_USB_H */
diff --git a/include/linux/usb/usb-mod-devicetable.h b/include/linux/usb/usb-mod-devicetable.h
new file mode 100644
index 0000000..c90691c
--- /dev/null
+++ b/include/linux/usb/usb-mod-devicetable.h
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2013
+ * Texas Instruments Incorporated.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.	 See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ *
+ * Note: This code has been derived from the linux source file mod_devicetable.h
+ *
+ */
+
+#ifndef LINUX_MOD_DEVICETABLE_H
+#define LINUX_MOD_DEVICETABLE_H
+
+#define __UBOOT__
+/**
+ * struct usb_device_id - identifies USB devices for probing and hotplugging
+ * @match_flags: Bit mask controlling which of the other fields are used to match
+ *	against new devices.  Any field except for driver_info may be used,
+ *	although some only make sense in conjunction with other fields.
+ *	This is usually set by a USB_DEVICE_*() macro, which sets all
+ *	other fields in this structure except for driver_info.
+ * @idVendor: USB vendor ID for a device; numbers are assigned
+ *	by the USB forum to its members.
+ * @idProduct: Vendor-assigned product ID.
+ * @bcdDevice_lo: Low end of range of vendor-assigned product version numbers.
+ *	This is also used to identify individual product versions, for
+ *	a range consisting of a single device.
+ * @bcdDevice_hi: High end of version number range.  The range of product
+ *	versions is inclusive.
+ * @bDeviceClass: Class of device; numbers are assigned
+ *	by the USB forum.  Products may choose to implement classes,
+ *	or be vendor-specific.  Device classes specify behavior of all
+ *	the interfaces on a device.
+ * @bDeviceSubClass: Subclass of device; associated with bDeviceClass.
+ * @bDeviceProtocol: Protocol of device; associated with bDeviceClass.
+ * @bInterfaceClass: Class of interface; numbers are assigned
+ *	by the USB forum.  Products may choose to implement classes,
+ *	or be vendor-specific.  Interface classes specify behavior only
+ *	of a given interface; other interfaces may support other classes.
+ * @bInterfaceSubClass: Subclass of interface; associated with bInterfaceClass.
+ * @bInterfaceProtocol: Protocol of interface; associated with bInterfaceClass.
+ * @bInterfaceNumber: Number of interface; composite devices may use
+ *	fixed interface numbers to differentiate between vendor-specific
+ *	interfaces.
+ * @driver_info: Holds information used by the driver.  Usually it holds
+ *	a pointer to a descriptor understood by the driver, or perhaps
+ *	device flags.
+ *
+ * In most cases, drivers will create a table of device IDs by using
+ * USB_DEVICE(), or similar macros designed for that purpose.
+ * They will then export it to userspace using MODULE_DEVICE_TABLE(),
+ * and provide it to the USB core through their usb_driver structure.
+ *
+ * See the usb_match_id() function for information about how matches are
+ * performed.  Briefly, you will normally use one of several macros to help
+ * construct these entries.  Each entry you provide will either identify
+ * one or more specific products, or will identify a class of products
+ * which have agreed to behave the same.  You should put the more specific
+ * matches towards the beginning of your table, so that driver_info can
+ * record quirks of specific products.
+ */
+struct usb_device_id {
+	/* which fields to match against? */
+	__u16		match_flags;
+
+	/* Used for product specific matches; range is inclusive */
+	__u16		idVendor;
+	__u16		idProduct;
+	__u16		bcdDevice_lo;
+	__u16		bcdDevice_hi;
+
+	/* Used for device class matches */
+	__u8		bDeviceClass;
+	__u8		bDeviceSubClass;
+	__u8		bDeviceProtocol;
+
+	/* Used for interface class matches */
+	__u8		bInterfaceClass;
+	__u8		bInterfaceSubClass;
+	__u8		bInterfaceProtocol;
+
+	/* Used for vendor-specific interface matches */
+	__u8		bInterfaceNumber;
+#ifndef __UBOOT__
+	/* not matched against */
+	kernel_ulong_t	driver_info
+		__attribute__((aligned(sizeof(kernel_ulong_t))));
+#endif
+};
+
+/* Some useful macros to use to create struct usb_device_id */
+#define USB_DEVICE_ID_MATCH_VENDOR		0x0001
+#define USB_DEVICE_ID_MATCH_PRODUCT		0x0002
+#define USB_DEVICE_ID_MATCH_DEV_LO		0x0004
+#define USB_DEVICE_ID_MATCH_DEV_HI		0x0008
+#define USB_DEVICE_ID_MATCH_DEV_CLASS		0x0010
+#define USB_DEVICE_ID_MATCH_DEV_SUBCLASS	0x0020
+#define USB_DEVICE_ID_MATCH_DEV_PROTOCOL	0x0040
+#define USB_DEVICE_ID_MATCH_INT_CLASS		0x0080
+#define USB_DEVICE_ID_MATCH_INT_SUBCLASS	0x0100
+#define USB_DEVICE_ID_MATCH_INT_PROTOCOL	0x0200
+#define USB_DEVICE_ID_MATCH_INT_NUMBER		0x0400
+
+#define HID_ANY_ID				(~0)
+#define HID_BUS_ANY				0xffff
+#define HID_GROUP_ANY				0x0000
+
+struct hid_device_id {
+	__u16 bus;
+	__u16 group;
+	__u32 vendor;
+	__u32 product;
+#ifndef __UBOOT__
+	kernel_ulong_t driver_data;
+#endif
+};
+
+#endif /* LINUX_MOD_DEVICETABLE_H */
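
As the kernel-doc above notes, drivers normally build these tables with the
USB_DEVICE*() helper macros rather than filling the fields by hand. A minimal
sketch, assuming the usual helpers from the backported usb headers are
available (the vendor/product IDs below are placeholders, not taken from this
series):

	/* Illustrative only -- IDs are placeholders */
	static const struct usb_device_id example_ids[] = {
		{ USB_DEVICE(0x0451, 0x1234) },		  /* one product */
		{ USB_DEVICE_INFO(USB_CLASS_HUB, 0, 0) }, /* a device class */
		{ }					  /* terminating entry */
	};

Per the comment above, the more specific entries go first, so that
driver_info (where it is compiled in) can carry per-product quirks.
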
diff --git a/include/usb.h b/include/usb.h
index d7b082d..0598972 100644
--- a/include/usb.h
+++ b/include/usb.h
@@ -28,6 +28,7 @@
 
 #include <usb_defs.h>
 #include <linux/usb/ch9.h>
+#include <linux/usb/usb-compat.h>
 
 /*
  * The EHCI spec says that we must align to at least 32 bytes.  However,
@@ -45,9 +46,7 @@
 
 #define USB_MAX_DEVICE			32
 #define USB_MAXCONFIG			8
-#define USB_MAXINTERFACES		8
 #define USB_MAXENDPOINTS		16
-#define USB_MAXCHILDREN			8	/* This is arbitrary */
 #define USB_MAX_HUB			16
 
 #define USB_CNTL_TIMEOUT 100 /* 100ms timeout */
@@ -58,92 +57,6 @@
  */
 #define USB_TIMEOUT_MS(pipe) (usb_pipebulk(pipe) ? 5000 : 1000)
 
-/* device request (setup) */
-struct devrequest {
-	unsigned char	requesttype;
-	unsigned char	request;
-	unsigned short	value;
-	unsigned short	index;
-	unsigned short	length;
-} __attribute__ ((packed));
-
-/* Interface */
-struct usb_interface {
-	struct usb_interface_descriptor desc;
-
-	unsigned char	no_of_ep;
-	unsigned char	num_altsetting;
-	unsigned char	act_altsetting;
-
-	struct usb_endpoint_descriptor ep_desc[USB_MAXENDPOINTS];
-	/*
-	 * Super Speed Device will have Super Speed Endpoint
-	 * Companion Descriptor  (section 9.6.7 of usb 3.0 spec)
-	 * Revision 1.0 June 6th 2011
-	 */
-	struct usb_ss_ep_comp_descriptor ss_ep_comp_desc[USB_MAXENDPOINTS];
-} __attribute__ ((packed));
-
-/* Configuration information.. */
-struct usb_config {
-	struct usb_config_descriptor desc;
-
-	unsigned char	no_of_if;	/* number of interfaces */
-	struct usb_interface if_desc[USB_MAXINTERFACES];
-} __attribute__ ((packed));
-
-enum {
-	/* Maximum packet size; encoded as 0,1,2,3 = 8,16,32,64 */
-	PACKET_SIZE_8   = 0,
-	PACKET_SIZE_16  = 1,
-	PACKET_SIZE_32  = 2,
-	PACKET_SIZE_64  = 3,
-};
-
-struct usb_device {
-	int	devnum;			/* Device number on USB bus */
-	int	speed;			/* full/low/high */
-	char	mf[32];			/* manufacturer */
-	char	prod[32];		/* product */
-	char	serial[32];		/* serial number */
-
-	/* Maximum packet size; one of: PACKET_SIZE_* */
-	int maxpacketsize;
-	/* one bit for each endpoint ([0] = IN, [1] = OUT) */
-	unsigned int toggle[2];
-	/* endpoint halts; one bit per endpoint # & direction;
-	 * [0] = IN, [1] = OUT
-	 */
-	unsigned int halted[2];
-	int epmaxpacketin[16];		/* INput endpoint specific maximums */
-	int epmaxpacketout[16];		/* OUTput endpoint specific maximums */
-
-	int configno;			/* selected config number */
-	/* Device Descriptor */
-	struct usb_device_descriptor descriptor
-		__attribute__((aligned(ARCH_DMA_MINALIGN)));
-	struct usb_config config; /* config descriptor */
-
-	int have_langid;		/* whether string_langid is valid yet */
-	int string_langid;		/* language ID for strings */
-	int (*irq_handle)(struct usb_device *dev);
-	unsigned long irq_status;
-	int irq_act_len;		/* transfered bytes */
-	void *privptr;
-	/*
-	 * Child devices -  if this is a hub device
-	 * Each instance needs its own set of data structures.
-	 */
-	unsigned long status;
-	int act_len;			/* transfered bytes */
-	int maxchild;			/* Number of ports if hub */
-	int portnr;
-	struct usb_device *parent;
-	struct usb_device *children[USB_MAXCHILDREN];
-
-	void *controller;		/* hardware controller private data */
-};
-
 /**********************************************************************
  * this is how the lowlevel part communicate with the outer world
  */
@@ -155,7 +68,7 @@ struct usb_device {
 	defined(CONFIG_USB_OMAP3) || defined(CONFIG_USB_DA8XX) || \
 	defined(CONFIG_USB_BLACKFIN) || defined(CONFIG_USB_AM35X) || \
 	defined(CONFIG_USB_MUSB_DSPS) || defined(CONFIG_USB_MUSB_AM35X) || \
-	defined(CONFIG_USB_MUSB_OMAP2PLUS)
+	defined(CONFIG_USB_MUSB_OMAP2PLUS) || defined(CONFIG_USB_DWC3)
 
 int usb_lowlevel_init(int index, void **controller);
 int usb_lowlevel_stop(int index);
@@ -226,7 +139,6 @@ int usb_bulk_msg(struct usb_device *dev, unsigned int pipe,
 int usb_submit_int_msg(struct usb_device *dev, unsigned long pipe,
 			void *buffer, int transfer_len, int interval);
 int usb_disable_asynch(int disable);
-int usb_maxpacket(struct usb_device *dev, unsigned long pipe);
 int usb_get_configuration_no(struct usb_device *dev, unsigned char *buffer,
 				int cfgno);
 int usb_get_report(struct usb_device *dev, int ifnum, unsigned char type,
@@ -295,32 +207,7 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate);
  * specification, so that much of the uhci driver can just mask the bits
  * appropriately.
  */
-/* Create various pipes... */
-#define create_pipe(dev,endpoint) \
-		(((dev)->devnum << 8) | ((endpoint) << 15) | \
-		(dev)->maxpacketsize)
 #define default_pipe(dev) ((dev)->speed << 26)
-
-#define usb_sndctrlpipe(dev, endpoint)	((PIPE_CONTROL << 30) | \
-					 create_pipe(dev, endpoint))
-#define usb_rcvctrlpipe(dev, endpoint)	((PIPE_CONTROL << 30) | \
-					 create_pipe(dev, endpoint) | \
-					 USB_DIR_IN)
-#define usb_sndisocpipe(dev, endpoint)	((PIPE_ISOCHRONOUS << 30) | \
-					 create_pipe(dev, endpoint))
-#define usb_rcvisocpipe(dev, endpoint)	((PIPE_ISOCHRONOUS << 30) | \
-					 create_pipe(dev, endpoint) | \
-					 USB_DIR_IN)
-#define usb_sndbulkpipe(dev, endpoint)	((PIPE_BULK << 30) | \
-					 create_pipe(dev, endpoint))
-#define usb_rcvbulkpipe(dev, endpoint)	((PIPE_BULK << 30) | \
-					 create_pipe(dev, endpoint) | \
-					 USB_DIR_IN)
-#define usb_sndintpipe(dev, endpoint)	((PIPE_INTERRUPT << 30) | \
-					 create_pipe(dev, endpoint))
-#define usb_rcvintpipe(dev, endpoint)	((PIPE_INTERRUPT << 30) | \
-					 create_pipe(dev, endpoint) | \
-					 USB_DIR_IN)
 #define usb_snddefctrl(dev)		((PIPE_CONTROL << 30) | \
 					 default_pipe(dev))
 #define usb_rcvdefctrl(dev)		((PIPE_CONTROL << 30) | \
@@ -343,8 +230,6 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate);
 #define usb_packetid(pipe)	(((pipe) & USB_DIR_IN) ? USB_PID_IN : \
 				 USB_PID_OUT)
 
-#define usb_pipeout(pipe)	((((pipe) >> 7) & 1) ^ 1)
-#define usb_pipein(pipe)	(((pipe) >> 7) & 1)
 #define usb_pipedevice(pipe)	(((pipe) >> 8) & 0x7f)
 #define usb_pipe_endpdev(pipe)	(((pipe) >> 8) & 0x7ff)
 #define usb_pipeendpoint(pipe)	(((pipe) >> 15) & 0xf)
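
The create_pipe() encoding itself is unchanged by this move; the pipe word
still packs the PIPE_* type, direction, device number, and endpoint into one
unsigned long, now supplied by usb-compat.h. A quick sketch of how the
remaining accessors pull those fields back out (illustrative only, assuming
dev is a configured struct usb_device):

	unsigned long pipe = usb_rcvbulkpipe(dev, 1);	/* bulk IN, ep 1 */

	usb_pipetype(pipe);	/* == PIPE_BULK, from bits 31:30 */
	usb_pipein(pipe);	/* non-zero: the USB_DIR_IN bit is set */
	usb_pipeendpoint(pipe);	/* == 1 */
	usb_pipedevice(pipe);	/* == dev->devnum */
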
diff --git a/include/usb/lin_gadget_compat.h b/include/usb/lin_gadget_compat.h
index 5bdcb8d..93305e0 100644
--- a/include/usb/lin_gadget_compat.h
+++ b/include/usb/lin_gadget_compat.h
@@ -24,34 +24,7 @@
 #define __LIN_COMPAT_H__
 
 #include <linux/compat.h>
-
-/* common */
-#define spin_lock_init(...)
-#define spin_lock(...)
-#define spin_lock_irqsave(lock, flags) do { debug("%lu\n", flags); } while (0)
-#define spin_unlock(...)
-#define spin_unlock_irqrestore(lock, flags) do {flags = 0; } while (0)
-#define disable_irq(...)
-#define enable_irq(...)
-
-#define mutex_init(...)
-#define mutex_lock(...)
-#define mutex_unlock(...)
-
-#define GFP_KERNEL	0
-
-#define IRQ_HANDLED	1
-
-#define ENOTSUPP	524	/* Operation is not supported */
-
-#define BITS_PER_BYTE				8
-#define BITS_TO_LONGS(nr) \
-	DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
-#define DECLARE_BITMAP(name, bits) \
-	unsigned long name[BITS_TO_LONGS(bits)]
-
-#define small_const_nbits(nbits) \
-	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
+#include <linux/usb/linux-compat.h>
 
 static inline void bitmap_zero(unsigned long *dst, int nbits)
 {
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 3/7] USB: Initial kernel back port of the dwc3 kernel code
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 2/7] USB: Adapt the usb-compat.h to uboot and fix compiler errors Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 4/7] USB: dwc3: dwc3 code adaption for uBoot Dan Murphy
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC (permalink / raw)
  To: u-boot

Initial backport of the dwc3 kernel code.

Kernel commit ID: aa4f608478acb7ed69dfcff4f3c404100b78ac49

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 drivers/usb/dwc3/core.c      |  779 ++++++++++++
 drivers/usb/dwc3/core.h      |  939 ++++++++++++++
 drivers/usb/dwc3/dwc3-omap.c |  481 ++++++++
 drivers/usb/dwc3/ep0.c       | 1064 ++++++++++++++++
 drivers/usb/dwc3/gadget.c    | 2754 ++++++++++++++++++++++++++++++++++++++++++
 drivers/usb/dwc3/gadget.h    |  194 +++
 drivers/usb/dwc3/host.c      |   87 ++
 drivers/usb/dwc3/io.h        |   66 +
 8 files changed, 6364 insertions(+)
 create mode 100644 drivers/usb/dwc3/core.c
 create mode 100644 drivers/usb/dwc3/core.h
 create mode 100644 drivers/usb/dwc3/dwc3-omap.c
 create mode 100644 drivers/usb/dwc3/ep0.c
 create mode 100644 drivers/usb/dwc3/gadget.c
 create mode 100644 drivers/usb/dwc3/gadget.h
 create mode 100644 drivers/usb/dwc3/host.c
 create mode 100644 drivers/usb/dwc3/io.h

diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
new file mode 100644
index 0000000..c35d49d
--- /dev/null
+++ b/drivers/usb/dwc3/core.c
@@ -0,0 +1,779 @@
+/**
+ * core.c - DesignWare USB3 DRD Controller Core file
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+#include <linux/io.h>
+#include <linux/list.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+
+#include <linux/usb/otg.h>
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+
+#include "core.h"
+#include "gadget.h"
+#include "io.h"
+
+#include "debug.h"
+
+static char *maximum_speed = "super";
+module_param(maximum_speed, charp, 0);
+MODULE_PARM_DESC(maximum_speed, "Maximum supported speed.");
+
+/* -------------------------------------------------------------------------- */
+
+void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
+{
+	u32 reg;
+
+	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+	reg &= ~(DWC3_GCTL_PRTCAPDIR(DWC3_GCTL_PRTCAP_OTG));
+	reg |= DWC3_GCTL_PRTCAPDIR(mode);
+	dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+}
+
+/**
+ * dwc3_core_soft_reset - Issues core soft reset and PHY reset
+ * @dwc: pointer to our context structure
+ */
+static void dwc3_core_soft_reset(struct dwc3 *dwc)
+{
+	u32		reg;
+
+	/* Before Resetting PHY, put Core in Reset */
+	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+	reg |= DWC3_GCTL_CORESOFTRESET;
+	dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+
+	/* Assert USB3 PHY reset */
+	reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+	reg |= DWC3_GUSB3PIPECTL_PHYSOFTRST;
+	dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg);
+
+	/* Assert USB2 PHY reset */
+	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+	reg |= DWC3_GUSB2PHYCFG_PHYSOFTRST;
+	dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+
+	usb_phy_init(dwc->usb2_phy);
+	usb_phy_init(dwc->usb3_phy);
+	mdelay(100);
+
+	/* Clear USB3 PHY reset */
+	reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+	reg &= ~DWC3_GUSB3PIPECTL_PHYSOFTRST;
+	dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg);
+
+	/* Clear USB2 PHY reset */
+	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+	reg &= ~DWC3_GUSB2PHYCFG_PHYSOFTRST;
+	dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+
+	mdelay(100);
+
+	/* After PHYs are stable we can take Core out of reset state */
+	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+	reg &= ~DWC3_GCTL_CORESOFTRESET;
+	dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+}
+
+/**
+ * dwc3_free_one_event_buffer - Frees one event buffer
+ * @dwc: Pointer to our controller context structure
+ * @evt: Pointer to event buffer to be freed
+ */
+static void dwc3_free_one_event_buffer(struct dwc3 *dwc,
+		struct dwc3_event_buffer *evt)
+{
+	dma_free_coherent(dwc->dev, evt->length, evt->buf, evt->dma);
+}
+
+/**
+ * dwc3_alloc_one_event_buffer - Allocates one event buffer structure
+ * @dwc: Pointer to our controller context structure
+ * @length: size of the event buffer
+ *
+ * Returns a pointer to the allocated event buffer structure on success
+ * otherwise ERR_PTR(errno).
+ */
+static struct dwc3_event_buffer *dwc3_alloc_one_event_buffer(struct dwc3 *dwc,
+		unsigned length)
+{
+	struct dwc3_event_buffer	*evt;
+
+	evt = devm_kzalloc(dwc->dev, sizeof(*evt), GFP_KERNEL);
+	if (!evt)
+		return ERR_PTR(-ENOMEM);
+
+	evt->dwc	= dwc;
+	evt->length	= length;
+	evt->buf	= dma_alloc_coherent(dwc->dev, length,
+			&evt->dma, GFP_KERNEL);
+	if (!evt->buf)
+		return ERR_PTR(-ENOMEM);
+
+	return evt;
+}
+
+/**
+ * dwc3_free_event_buffers - frees all allocated event buffers
+ * @dwc: Pointer to our controller context structure
+ */
+static void dwc3_free_event_buffers(struct dwc3 *dwc)
+{
+	struct dwc3_event_buffer	*evt;
+	int i;
+
+	for (i = 0; i < dwc->num_event_buffers; i++) {
+		evt = dwc->ev_buffs[i];
+		if (evt)
+			dwc3_free_one_event_buffer(dwc, evt);
+	}
+}
+
+/**
+ * dwc3_alloc_event_buffers - Allocates @num event buffers of size @length
+ * @dwc: pointer to our controller context structure
+ * @length: size of event buffer
+ *
+ * Returns 0 on success otherwise negative errno. In the error case, dwc
+ * may contain some buffers allocated but not all which were requested.
+ */
+static int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned length)
+{
+	int			num;
+	int			i;
+
+	num = DWC3_NUM_INT(dwc->hwparams.hwparams1);
+	dwc->num_event_buffers = num;
+
+	dwc->ev_buffs = devm_kzalloc(dwc->dev, sizeof(*dwc->ev_buffs) * num,
+			GFP_KERNEL);
+	if (!dwc->ev_buffs) {
+		dev_err(dwc->dev, "can't allocate event buffers array\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < num; i++) {
+		struct dwc3_event_buffer	*evt;
+
+		evt = dwc3_alloc_one_event_buffer(dwc, length);
+		if (IS_ERR(evt)) {
+			dev_err(dwc->dev, "can't allocate event buffer\n");
+			return PTR_ERR(evt);
+		}
+		dwc->ev_buffs[i] = evt;
+	}
+
+	return 0;
+}
+
+/**
+ * dwc3_event_buffers_setup - setup our allocated event buffers
+ * @dwc: pointer to our controller context structure
+ *
+ * Returns 0 on success otherwise negative errno.
+ */
+static int dwc3_event_buffers_setup(struct dwc3 *dwc)
+{
+	struct dwc3_event_buffer	*evt;
+	int				n;
+
+	for (n = 0; n < dwc->num_event_buffers; n++) {
+		evt = dwc->ev_buffs[n];
+		dev_dbg(dwc->dev, "Event buf %p dma %08llx length %d\n",
+				evt->buf, (unsigned long long) evt->dma,
+				evt->length);
+
+		evt->lpos = 0;
+
+		dwc3_writel(dwc->regs, DWC3_GEVNTADRLO(n),
+				lower_32_bits(evt->dma));
+		dwc3_writel(dwc->regs, DWC3_GEVNTADRHI(n),
+				upper_32_bits(evt->dma));
+		dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(n),
+				evt->length & 0xffff);
+		dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(n), 0);
+	}
+
+	return 0;
+}
+
+static void dwc3_event_buffers_cleanup(struct dwc3 *dwc)
+{
+	struct dwc3_event_buffer	*evt;
+	int				n;
+
+	for (n = 0; n < dwc->num_event_buffers; n++) {
+		evt = dwc->ev_buffs[n];
+
+		evt->lpos = 0;
+
+		dwc3_writel(dwc->regs, DWC3_GEVNTADRLO(n), 0);
+		dwc3_writel(dwc->regs, DWC3_GEVNTADRHI(n), 0);
+		dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(n), 0);
+		dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(n), 0);
+	}
+}
+
+static void dwc3_core_num_eps(struct dwc3 *dwc)
+{
+	struct dwc3_hwparams	*parms = &dwc->hwparams;
+
+	dwc->num_in_eps = DWC3_NUM_IN_EPS(parms);
+	dwc->num_out_eps = DWC3_NUM_EPS(parms) - dwc->num_in_eps;
+
+	dev_vdbg(dwc->dev, "found %d IN and %d OUT endpoints\n",
+			dwc->num_in_eps, dwc->num_out_eps);
+}
+
+static void dwc3_cache_hwparams(struct dwc3 *dwc)
+{
+	struct dwc3_hwparams	*parms = &dwc->hwparams;
+
+	parms->hwparams0 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS0);
+	parms->hwparams1 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS1);
+	parms->hwparams2 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS2);
+	parms->hwparams3 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS3);
+	parms->hwparams4 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS4);
+	parms->hwparams5 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS5);
+	parms->hwparams6 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS6);
+	parms->hwparams7 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS7);
+	parms->hwparams8 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS8);
+}
+
+/**
+ * dwc3_core_init - Low-level initialization of DWC3 Core
+ * @dwc: Pointer to our controller context structure
+ *
+ * Returns 0 on success otherwise negative errno.
+ */
+static int dwc3_core_init(struct dwc3 *dwc)
+{
+	unsigned long		timeout;
+	u32			reg;
+	int			ret;
+
+	reg = dwc3_readl(dwc->regs, DWC3_GSNPSID);
+	/* This should read as U3 followed by revision number */
+	if ((reg & DWC3_GSNPSID_MASK) != 0x55330000) {
+		dev_err(dwc->dev, "this is not a DesignWare USB3 DRD Core\n");
+		ret = -ENODEV;
+		goto err0;
+	}
+	dwc->revision = reg;
+
+	/* issue device SoftReset too */
+	timeout = jiffies + msecs_to_jiffies(500);
+	dwc3_writel(dwc->regs, DWC3_DCTL, DWC3_DCTL_CSFTRST);
+	do {
+		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+		if (!(reg & DWC3_DCTL_CSFTRST))
+			break;
+
+		if (time_after(jiffies, timeout)) {
+			dev_err(dwc->dev, "Reset Timed Out\n");
+			ret = -ETIMEDOUT;
+			goto err0;
+		}
+
+		cpu_relax();
+	} while (true);
+
+	dwc3_core_soft_reset(dwc);
+
+	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+	reg &= ~DWC3_GCTL_SCALEDOWN_MASK;
+	reg &= ~DWC3_GCTL_DISSCRAMBLE;
+
+	switch (DWC3_GHWPARAMS1_EN_PWROPT(dwc->hwparams.hwparams1)) {
+	case DWC3_GHWPARAMS1_EN_PWROPT_CLK:
+		reg &= ~DWC3_GCTL_DSBLCLKGTNG;
+		break;
+	default:
+		dev_dbg(dwc->dev, "No power optimization available\n");
+	}
+
+	/*
+	 * WORKAROUND: DWC3 revisions <1.90a have a bug
+	 * where the device can fail to connect at SuperSpeed
+	 * and falls back to high-speed mode which causes
+	 * the device to enter a Connect/Disconnect loop
+	 */
+	if (dwc->revision < DWC3_REVISION_190A)
+		reg |= DWC3_GCTL_U2RSTECN;
+
+	dwc3_core_num_eps(dwc);
+
+	dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+
+	return 0;
+
+err0:
+	return ret;
+}
+
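The jiffies-based softreset poll in dwc3_core_init() above is one of the
spots a U-Boot adaptation has to rework, since U-Boot has no jiffies. A
sketch of the same loop on top of U-Boot's get_timer() millisecond timer,
keeping the 500 ms budget (this is not the series' literal code):

	ulong start = get_timer(0);

	dwc3_writel(dwc->regs, DWC3_DCTL, DWC3_DCTL_CSFTRST);
	do {
		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
		if (!(reg & DWC3_DCTL_CSFTRST))
			break;
		if (get_timer(start) > 500) {	/* timeout in ms */
			dev_err(dwc->dev, "Reset Timed Out\n");
			return -ETIMEDOUT;
		}
	} while (1);
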
+static void dwc3_core_exit(struct dwc3 *dwc)
+{
+	usb_phy_shutdown(dwc->usb2_phy);
+	usb_phy_shutdown(dwc->usb3_phy);
+}
+
+#define DWC3_ALIGN_MASK		(16 - 1)
+
+static int dwc3_probe(struct platform_device *pdev)
+{
+	struct device_node	*node = pdev->dev.of_node;
+	struct resource		*res;
+	struct dwc3		*dwc;
+	struct device		*dev = &pdev->dev;
+
+	int			ret = -ENOMEM;
+
+	void __iomem		*regs;
+	void			*mem;
+
+	u8			mode;
+
+	mem = devm_kzalloc(dev, sizeof(*dwc) + DWC3_ALIGN_MASK, GFP_KERNEL);
+	if (!mem) {
+		dev_err(dev, "not enough memory\n");
+		return -ENOMEM;
+	}
+	dwc = PTR_ALIGN(mem, DWC3_ALIGN_MASK + 1);
+	dwc->mem = mem;
+
+	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+	if (!res) {
+		dev_err(dev, "missing IRQ\n");
+		return -ENODEV;
+	}
+	dwc->xhci_resources[1].start = res->start;
+	dwc->xhci_resources[1].end = res->end;
+	dwc->xhci_resources[1].flags = res->flags;
+	dwc->xhci_resources[1].name = res->name;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		dev_err(dev, "missing memory resource\n");
+		return -ENODEV;
+	}
+	dwc->xhci_resources[0].start = res->start;
+	dwc->xhci_resources[0].end = dwc->xhci_resources[0].start +
+					DWC3_XHCI_REGS_END;
+	dwc->xhci_resources[0].flags = res->flags;
+	dwc->xhci_resources[0].name = res->name;
+
+	 /*
+	  * Request memory region but exclude xHCI regs,
+	  * since it will be requested by the xhci-plat driver.
+	  */
+	res = devm_request_mem_region(dev, res->start + DWC3_GLOBALS_REGS_START,
+			resource_size(res) - DWC3_GLOBALS_REGS_START,
+			dev_name(dev));
+	if (!res) {
+		dev_err(dev, "can't request mem region\n");
+		return -ENOMEM;
+	}
+
+	regs = devm_ioremap_nocache(dev, res->start, resource_size(res));
+	if (!regs) {
+		dev_err(dev, "ioremap failed\n");
+		return -ENOMEM;
+	}
+
+	if (node) {
+		dwc->usb2_phy = devm_usb_get_phy_by_phandle(dev, "usb-phy", 0);
+		dwc->usb3_phy = devm_usb_get_phy_by_phandle(dev, "usb-phy", 1);
+	} else {
+		dwc->usb2_phy = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);
+		dwc->usb3_phy = devm_usb_get_phy(dev, USB_PHY_TYPE_USB3);
+	}
+
+	if (IS_ERR(dwc->usb2_phy)) {
+		ret = PTR_ERR(dwc->usb2_phy);
+
+		/*
+		 * if -ENXIO is returned, it means PHY layer wasn't
+		 * enabled, so it makes no sense to return -EPROBE_DEFER
+		 * in that case, since no PHY driver will ever probe.
+		 */
+		if (ret == -ENXIO)
+			return ret;
+
+		dev_err(dev, "no usb2 phy configured\n");
+		return -EPROBE_DEFER;
+	}
+
+	if (IS_ERR(dwc->usb3_phy)) {
+		ret = PTR_ERR(dwc->usb3_phy);
+
+		/*
+		 * if -ENXIO is returned, it means PHY layer wasn't
+		 * enabled, so it makes no sense to return -EPROBE_DEFER
+		 * in that case, since no PHY driver will ever probe.
+		 */
+		if (ret == -ENXIO)
+			return ret;
+
+		dev_err(dev, "no usb3 phy configured\n");
+		return -EPROBE_DEFER;
+	}
+
+	usb_phy_set_suspend(dwc->usb2_phy, 0);
+	usb_phy_set_suspend(dwc->usb3_phy, 0);
+
+	spin_lock_init(&dwc->lock);
+	platform_set_drvdata(pdev, dwc);
+
+	dwc->regs	= regs;
+	dwc->regs_size	= resource_size(res);
+	dwc->dev	= dev;
+
+	dev->dma_mask	= dev->parent->dma_mask;
+	dev->dma_parms	= dev->parent->dma_parms;
+	dma_set_coherent_mask(dev, dev->parent->coherent_dma_mask);
+
+	if (!strncmp("super", maximum_speed, 5))
+		dwc->maximum_speed = DWC3_DCFG_SUPERSPEED;
+	else if (!strncmp("high", maximum_speed, 4))
+		dwc->maximum_speed = DWC3_DCFG_HIGHSPEED;
+	else if (!strncmp("full", maximum_speed, 4))
+		dwc->maximum_speed = DWC3_DCFG_FULLSPEED1;
+	else if (!strncmp("low", maximum_speed, 3))
+		dwc->maximum_speed = DWC3_DCFG_LOWSPEED;
+	else
+		dwc->maximum_speed = DWC3_DCFG_SUPERSPEED;
+
+	dwc->needs_fifo_resize = of_property_read_bool(node, "tx-fifo-resize");
+
+	pm_runtime_enable(dev);
+	pm_runtime_get_sync(dev);
+	pm_runtime_forbid(dev);
+
+	dwc3_cache_hwparams(dwc);
+
+	ret = dwc3_alloc_event_buffers(dwc, DWC3_EVENT_BUFFERS_SIZE);
+	if (ret) {
+		dev_err(dwc->dev, "failed to allocate event buffers\n");
+		ret = -ENOMEM;
+		goto err0;
+	}
+
+	ret = dwc3_core_init(dwc);
+	if (ret) {
+		dev_err(dev, "failed to initialize core\n");
+		goto err0;
+	}
+
+	ret = dwc3_event_buffers_setup(dwc);
+	if (ret) {
+		dev_err(dwc->dev, "failed to setup event buffers\n");
+		goto err1;
+	}
+
+	if (IS_ENABLED(CONFIG_USB_DWC3_HOST))
+		mode = DWC3_MODE_HOST;
+	else if (IS_ENABLED(CONFIG_USB_DWC3_GADGET))
+		mode = DWC3_MODE_DEVICE;
+	else
+		mode = DWC3_MODE_DRD;
+
+	switch (mode) {
+	case DWC3_MODE_DEVICE:
+		dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_DEVICE);
+		ret = dwc3_gadget_init(dwc);
+		if (ret) {
+			dev_err(dev, "failed to initialize gadget\n");
+			goto err2;
+		}
+		break;
+	case DWC3_MODE_HOST:
+		dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_HOST);
+		ret = dwc3_host_init(dwc);
+		if (ret) {
+			dev_err(dev, "failed to initialize host\n");
+			goto err2;
+		}
+		break;
+	case DWC3_MODE_DRD:
+		dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_OTG);
+		ret = dwc3_host_init(dwc);
+		if (ret) {
+			dev_err(dev, "failed to initialize host\n");
+			goto err2;
+		}
+
+		ret = dwc3_gadget_init(dwc);
+		if (ret) {
+			dev_err(dev, "failed to initialize gadget\n");
+			goto err2;
+		}
+		break;
+	default:
+		dev_err(dev, "Unsupported mode of operation %d\n", mode);
+		goto err2;
+	}
+	dwc->mode = mode;
+
+	ret = dwc3_debugfs_init(dwc);
+	if (ret) {
+		dev_err(dev, "failed to initialize debugfs\n");
+		goto err3;
+	}
+
+	pm_runtime_allow(dev);
+
+	return 0;
+
+err3:
+	switch (mode) {
+	case DWC3_MODE_DEVICE:
+		dwc3_gadget_exit(dwc);
+		break;
+	case DWC3_MODE_HOST:
+		dwc3_host_exit(dwc);
+		break;
+	case DWC3_MODE_DRD:
+		dwc3_host_exit(dwc);
+		dwc3_gadget_exit(dwc);
+		break;
+	default:
+		/* do nothing */
+		break;
+	}
+
+err2:
+	dwc3_event_buffers_cleanup(dwc);
+
+err1:
+	dwc3_core_exit(dwc);
+
+err0:
+	dwc3_free_event_buffers(dwc);
+
+	return ret;
+}
+
+static int dwc3_remove(struct platform_device *pdev)
+{
+	struct dwc3	*dwc = platform_get_drvdata(pdev);
+
+	usb_phy_set_suspend(dwc->usb2_phy, 1);
+	usb_phy_set_suspend(dwc->usb3_phy, 1);
+
+	pm_runtime_put(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	dwc3_debugfs_exit(dwc);
+
+	switch (dwc->mode) {
+	case DWC3_MODE_DEVICE:
+		dwc3_gadget_exit(dwc);
+		break;
+	case DWC3_MODE_HOST:
+		dwc3_host_exit(dwc);
+		break;
+	case DWC3_MODE_DRD:
+		dwc3_host_exit(dwc);
+		dwc3_gadget_exit(dwc);
+		break;
+	default:
+		/* do nothing */
+		break;
+	}
+
+	dwc3_event_buffers_cleanup(dwc);
+	dwc3_free_event_buffers(dwc);
+	dwc3_core_exit(dwc);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int dwc3_prepare(struct device *dev)
+{
+	struct dwc3	*dwc = dev_get_drvdata(dev);
+	unsigned long	flags;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	switch (dwc->mode) {
+	case DWC3_MODE_DEVICE:
+	case DWC3_MODE_DRD:
+		dwc3_gadget_prepare(dwc);
+		/* FALLTHROUGH */
+	case DWC3_MODE_HOST:
+	default:
+		dwc3_event_buffers_cleanup(dwc);
+		break;
+	}
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return 0;
+}
+
+static void dwc3_complete(struct device *dev)
+{
+	struct dwc3	*dwc = dev_get_drvdata(dev);
+	unsigned long	flags;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	switch (dwc->mode) {
+	case DWC3_MODE_DEVICE:
+	case DWC3_MODE_DRD:
+		dwc3_gadget_complete(dwc);
+		/* FALLTHROUGH */
+	case DWC3_MODE_HOST:
+	default:
+		dwc3_event_buffers_setup(dwc);
+		break;
+	}
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+}
+
+static int dwc3_suspend(struct device *dev)
+{
+	struct dwc3	*dwc = dev_get_drvdata(dev);
+	unsigned long	flags;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	switch (dwc->mode) {
+	case DWC3_MODE_DEVICE:
+	case DWC3_MODE_DRD:
+		dwc3_gadget_suspend(dwc);
+		/* FALLTHROUGH */
+	case DWC3_MODE_HOST:
+	default:
+		/* do nothing */
+		break;
+	}
+
+	dwc->gctl = dwc3_readl(dwc->regs, DWC3_GCTL);
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	usb_phy_shutdown(dwc->usb3_phy);
+	usb_phy_shutdown(dwc->usb2_phy);
+
+	return 0;
+}
+
+static int dwc3_resume(struct device *dev)
+{
+	struct dwc3	*dwc = dev_get_drvdata(dev);
+	unsigned long	flags;
+
+	usb_phy_init(dwc->usb3_phy);
+	usb_phy_init(dwc->usb2_phy);
+	msleep(100);
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	dwc3_writel(dwc->regs, DWC3_GCTL, dwc->gctl);
+
+	switch (dwc->mode) {
+	case DWC3_MODE_DEVICE:
+	case DWC3_MODE_DRD:
+		dwc3_gadget_resume(dwc);
+		/* FALLTHROUGH */
+	case DWC3_MODE_HOST:
+	default:
+		/* do nothing */
+		break;
+	}
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	pm_runtime_disable(dev);
+	pm_runtime_set_active(dev);
+	pm_runtime_enable(dev);
+
+	return 0;
+}
+
+static const struct dev_pm_ops dwc3_dev_pm_ops = {
+	.prepare	= dwc3_prepare,
+	.complete	= dwc3_complete,
+
+	SET_SYSTEM_SLEEP_PM_OPS(dwc3_suspend, dwc3_resume)
+};
+
+#define DWC3_PM_OPS	&(dwc3_dev_pm_ops)
+#else
+#define DWC3_PM_OPS	NULL
+#endif
+
+#ifdef CONFIG_OF
+static const struct of_device_id of_dwc3_match[] = {
+	{
+		.compatible = "synopsys,dwc3"
+	},
+	{ },
+};
+MODULE_DEVICE_TABLE(of, of_dwc3_match);
+#endif
+
+static struct platform_driver dwc3_driver = {
+	.probe		= dwc3_probe,
+	.remove		= dwc3_remove,
+	.driver		= {
+		.name	= "dwc3",
+		.of_match_table	= of_match_ptr(of_dwc3_match),
+		.pm	= DWC3_PM_OPS,
+	},
+};
+
+module_platform_driver(dwc3_driver);
+
+MODULE_ALIAS("platform:dwc3");
+MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_DESCRIPTION("DesignWare USB3 DRD Controller Driver");
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
new file mode 100644
index 0000000..b69d322
--- /dev/null
+++ b/drivers/usb/dwc3/core.h
@@ -0,0 +1,939 @@
+/**
+ * core.h - DesignWare USB3 DRD Core Header
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DRIVERS_USB_DWC3_CORE_H
+#define __DRIVERS_USB_DWC3_CORE_H
+
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/ioport.h>
+#include <linux/list.h>
+#include <linux/dma-mapping.h>
+#include <linux/mm.h>
+#include <linux/debugfs.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+
+/* Global constants */
+#define DWC3_EP0_BOUNCE_SIZE	512
+#define DWC3_ENDPOINTS_NUM	32
+#define DWC3_XHCI_RESOURCES_NUM	2
+
+#define DWC3_EVENT_SIZE		4	/* bytes */
+#define DWC3_EVENT_MAX_NUM	64	/* 2 events/endpoint */
+#define DWC3_EVENT_BUFFERS_SIZE	(DWC3_EVENT_SIZE * DWC3_EVENT_MAX_NUM)
+#define DWC3_EVENT_TYPE_MASK	0xfe
+
+#define DWC3_EVENT_TYPE_DEV	0
+#define DWC3_EVENT_TYPE_CARKIT	3
+#define DWC3_EVENT_TYPE_I2C	4
+
+#define DWC3_DEVICE_EVENT_DISCONNECT		0
+#define DWC3_DEVICE_EVENT_RESET			1
+#define DWC3_DEVICE_EVENT_CONNECT_DONE		2
+#define DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE	3
+#define DWC3_DEVICE_EVENT_WAKEUP		4
+#define DWC3_DEVICE_EVENT_HIBER_REQ		5
+#define DWC3_DEVICE_EVENT_EOPF			6
+#define DWC3_DEVICE_EVENT_SOF			7
+#define DWC3_DEVICE_EVENT_ERRATIC_ERROR		9
+#define DWC3_DEVICE_EVENT_CMD_CMPL		10
+#define DWC3_DEVICE_EVENT_OVERFLOW		11
+
+#define DWC3_GEVNTCOUNT_MASK	0xfffc
+#define DWC3_GSNPSID_MASK	0xffff0000
+#define DWC3_GSNPSREV_MASK	0xffff
+
+/* DWC3 registers memory space boundaries */
+#define DWC3_XHCI_REGS_START		0x0
+#define DWC3_XHCI_REGS_END		0x7fff
+#define DWC3_GLOBALS_REGS_START		0xc100
+#define DWC3_GLOBALS_REGS_END		0xc6ff
+#define DWC3_DEVICE_REGS_START		0xc700
+#define DWC3_DEVICE_REGS_END		0xcbff
+#define DWC3_OTG_REGS_START		0xcc00
+#define DWC3_OTG_REGS_END		0xccff
+
+/* Global Registers */
+#define DWC3_GSBUSCFG0		0xc100
+#define DWC3_GSBUSCFG1		0xc104
+#define DWC3_GTXTHRCFG		0xc108
+#define DWC3_GRXTHRCFG		0xc10c
+#define DWC3_GCTL		0xc110
+#define DWC3_GEVTEN		0xc114
+#define DWC3_GSTS		0xc118
+#define DWC3_GSNPSID		0xc120
+#define DWC3_GGPIO		0xc124
+#define DWC3_GUID		0xc128
+#define DWC3_GUCTL		0xc12c
+#define DWC3_GBUSERRADDR0	0xc130
+#define DWC3_GBUSERRADDR1	0xc134
+#define DWC3_GPRTBIMAP0		0xc138
+#define DWC3_GPRTBIMAP1		0xc13c
+#define DWC3_GHWPARAMS0		0xc140
+#define DWC3_GHWPARAMS1		0xc144
+#define DWC3_GHWPARAMS2		0xc148
+#define DWC3_GHWPARAMS3		0xc14c
+#define DWC3_GHWPARAMS4		0xc150
+#define DWC3_GHWPARAMS5		0xc154
+#define DWC3_GHWPARAMS6		0xc158
+#define DWC3_GHWPARAMS7		0xc15c
+#define DWC3_GDBGFIFOSPACE	0xc160
+#define DWC3_GDBGLTSSM		0xc164
+#define DWC3_GPRTBIMAP_HS0	0xc180
+#define DWC3_GPRTBIMAP_HS1	0xc184
+#define DWC3_GPRTBIMAP_FS0	0xc188
+#define DWC3_GPRTBIMAP_FS1	0xc18c
+
+#define DWC3_GUSB2PHYCFG(n)	(0xc200 + (n * 0x04))
+#define DWC3_GUSB2I2CCTL(n)	(0xc240 + (n * 0x04))
+
+#define DWC3_GUSB2PHYACC(n)	(0xc280 + (n * 0x04))
+
+#define DWC3_GUSB3PIPECTL(n)	(0xc2c0 + (n * 0x04))
+
+#define DWC3_GTXFIFOSIZ(n)	(0xc300 + (n * 0x04))
+#define DWC3_GRXFIFOSIZ(n)	(0xc380 + (n * 0x04))
+
+#define DWC3_GEVNTADRLO(n)	(0xc400 + (n * 0x10))
+#define DWC3_GEVNTADRHI(n)	(0xc404 + (n * 0x10))
+#define DWC3_GEVNTSIZ(n)	(0xc408 + (n * 0x10))
+#define DWC3_GEVNTCOUNT(n)	(0xc40c + (n * 0x10))
+
+#define DWC3_GHWPARAMS8		0xc600
+
+/* Device Registers */
+#define DWC3_DCFG		0xc700
+#define DWC3_DCTL		0xc704
+#define DWC3_DEVTEN		0xc708
+#define DWC3_DSTS		0xc70c
+#define DWC3_DGCMDPAR		0xc710
+#define DWC3_DGCMD		0xc714
+#define DWC3_DALEPENA		0xc720
+#define DWC3_DEPCMDPAR2(n)	(0xc800 + (n * 0x10))
+#define DWC3_DEPCMDPAR1(n)	(0xc804 + (n * 0x10))
+#define DWC3_DEPCMDPAR0(n)	(0xc808 + (n * 0x10))
+#define DWC3_DEPCMD(n)		(0xc80c + (n * 0x10))
+
+/* OTG Registers */
+#define DWC3_OCFG		0xcc00
+#define DWC3_OCTL		0xcc04
+#define DWC3_OEVT		0xcc08
+#define DWC3_OEVTEN		0xcc0C
+#define DWC3_OSTS		0xcc10
+
+/* Bit fields */
+
+/* Global Configuration Register */
+#define DWC3_GCTL_PWRDNSCALE(n)	((n) << 19)
+#define DWC3_GCTL_U2RSTECN	(1 << 16)
+#define DWC3_GCTL_RAMCLKSEL(x)	(((x) & DWC3_GCTL_CLK_MASK) << 6)
+#define DWC3_GCTL_CLK_BUS	(0)
+#define DWC3_GCTL_CLK_PIPE	(1)
+#define DWC3_GCTL_CLK_PIPEHALF	(2)
+#define DWC3_GCTL_CLK_MASK	(3)
+
+#define DWC3_GCTL_PRTCAP(n)	(((n) & (3 << 12)) >> 12)
+#define DWC3_GCTL_PRTCAPDIR(n)	((n) << 12)
+#define DWC3_GCTL_PRTCAP_HOST	1
+#define DWC3_GCTL_PRTCAP_DEVICE	2
+#define DWC3_GCTL_PRTCAP_OTG	3
+
+#define DWC3_GCTL_CORESOFTRESET		(1 << 11)
+#define DWC3_GCTL_SCALEDOWN(n)		((n) << 4)
+#define DWC3_GCTL_SCALEDOWN_MASK	DWC3_GCTL_SCALEDOWN(3)
+#define DWC3_GCTL_DISSCRAMBLE		(1 << 3)
+#define DWC3_GCTL_GBLHIBERNATIONEN	(1 << 1)
+#define DWC3_GCTL_DSBLCLKGTNG		(1 << 0)
+
+/* Global USB2 PHY Configuration Register */
+#define DWC3_GUSB2PHYCFG_PHYSOFTRST	(1 << 31)
+#define DWC3_GUSB2PHYCFG_SUSPHY		(1 << 6)
+
+/* Global USB3 PIPE Control Register */
+#define DWC3_GUSB3PIPECTL_PHYSOFTRST	(1 << 31)
+#define DWC3_GUSB3PIPECTL_SUSPHY	(1 << 17)
+
+/* Global TX Fifo Size Register */
+#define DWC3_GTXFIFOSIZ_TXFDEF(n)	((n) & 0xffff)
+#define DWC3_GTXFIFOSIZ_TXFSTADDR(n)	((n) & 0xffff0000)
+
+/* Global HWPARAMS1 Register */
+#define DWC3_GHWPARAMS1_EN_PWROPT(n)	(((n) & (3 << 24)) >> 24)
+#define DWC3_GHWPARAMS1_EN_PWROPT_NO	0
+#define DWC3_GHWPARAMS1_EN_PWROPT_CLK	1
+#define DWC3_GHWPARAMS1_EN_PWROPT_HIB	2
+#define DWC3_GHWPARAMS1_PWROPT(n)	((n) << 24)
+#define DWC3_GHWPARAMS1_PWROPT_MASK	DWC3_GHWPARAMS1_PWROPT(3)
+
+/* Global HWPARAMS4 Register */
+#define DWC3_GHWPARAMS4_HIBER_SCRATCHBUFS(n)	(((n) & (0x0f << 13)) >> 13)
+#define DWC3_MAX_HIBER_SCRATCHBUFS		15
+
+/* Device Configuration Register */
+#define DWC3_DCFG_LPM_CAP	(1 << 22)
+#define DWC3_DCFG_DEVADDR(addr)	((addr) << 3)
+#define DWC3_DCFG_DEVADDR_MASK	DWC3_DCFG_DEVADDR(0x7f)
+
+#define DWC3_DCFG_SPEED_MASK	(7 << 0)
+#define DWC3_DCFG_SUPERSPEED	(4 << 0)
+#define DWC3_DCFG_HIGHSPEED	(0 << 0)
+#define DWC3_DCFG_FULLSPEED2	(1 << 0)
+#define DWC3_DCFG_LOWSPEED	(2 << 0)
+#define DWC3_DCFG_FULLSPEED1	(3 << 0)
+
+#define DWC3_DCFG_LPM_CAP	(1 << 22)
+
+/* Device Control Register */
+#define DWC3_DCTL_RUN_STOP	(1 << 31)
+#define DWC3_DCTL_CSFTRST	(1 << 30)
+#define DWC3_DCTL_LSFTRST	(1 << 29)
+
+#define DWC3_DCTL_HIRD_THRES_MASK	(0x1f << 24)
+#define DWC3_DCTL_HIRD_THRES(n)	((n) << 24)
+
+#define DWC3_DCTL_APPL1RES	(1 << 23)
+
+/* These apply for core versions 1.87a and earlier */
+#define DWC3_DCTL_TRGTULST_MASK		(0x0f << 17)
+#define DWC3_DCTL_TRGTULST(n)		((n) << 17)
+#define DWC3_DCTL_TRGTULST_U2		(DWC3_DCTL_TRGTULST(2))
+#define DWC3_DCTL_TRGTULST_U3		(DWC3_DCTL_TRGTULST(3))
+#define DWC3_DCTL_TRGTULST_SS_DIS	(DWC3_DCTL_TRGTULST(4))
+#define DWC3_DCTL_TRGTULST_RX_DET	(DWC3_DCTL_TRGTULST(5))
+#define DWC3_DCTL_TRGTULST_SS_INACT	(DWC3_DCTL_TRGTULST(6))
+
+/* These apply for core versions 1.94a and later */
+#define DWC3_DCTL_KEEP_CONNECT	(1 << 19)
+#define DWC3_DCTL_L1_HIBER_EN	(1 << 18)
+#define DWC3_DCTL_CRS		(1 << 17)
+#define DWC3_DCTL_CSS		(1 << 16)
+
+#define DWC3_DCTL_INITU2ENA	(1 << 12)
+#define DWC3_DCTL_ACCEPTU2ENA	(1 << 11)
+#define DWC3_DCTL_INITU1ENA	(1 << 10)
+#define DWC3_DCTL_ACCEPTU1ENA	(1 << 9)
+#define DWC3_DCTL_TSTCTRL_MASK	(0xf << 1)
+
+#define DWC3_DCTL_ULSTCHNGREQ_MASK	(0x0f << 5)
+#define DWC3_DCTL_ULSTCHNGREQ(n) (((n) << 5) & DWC3_DCTL_ULSTCHNGREQ_MASK)
+
+#define DWC3_DCTL_ULSTCHNG_NO_ACTION	(DWC3_DCTL_ULSTCHNGREQ(0))
+#define DWC3_DCTL_ULSTCHNG_SS_DISABLED	(DWC3_DCTL_ULSTCHNGREQ(4))
+#define DWC3_DCTL_ULSTCHNG_RX_DETECT	(DWC3_DCTL_ULSTCHNGREQ(5))
+#define DWC3_DCTL_ULSTCHNG_SS_INACTIVE	(DWC3_DCTL_ULSTCHNGREQ(6))
+#define DWC3_DCTL_ULSTCHNG_RECOVERY	(DWC3_DCTL_ULSTCHNGREQ(8))
+#define DWC3_DCTL_ULSTCHNG_COMPLIANCE	(DWC3_DCTL_ULSTCHNGREQ(10))
+#define DWC3_DCTL_ULSTCHNG_LOOPBACK	(DWC3_DCTL_ULSTCHNGREQ(11))
+
+/* Device Event Enable Register */
+#define DWC3_DEVTEN_VNDRDEVTSTRCVEDEN	(1 << 12)
+#define DWC3_DEVTEN_EVNTOVERFLOWEN	(1 << 11)
+#define DWC3_DEVTEN_CMDCMPLTEN		(1 << 10)
+#define DWC3_DEVTEN_ERRTICERREN		(1 << 9)
+#define DWC3_DEVTEN_SOFEN		(1 << 7)
+#define DWC3_DEVTEN_EOPFEN		(1 << 6)
+#define DWC3_DEVTEN_HIBERNATIONREQEVTEN	(1 << 5)
+#define DWC3_DEVTEN_WKUPEVTEN		(1 << 4)
+#define DWC3_DEVTEN_ULSTCNGEN		(1 << 3)
+#define DWC3_DEVTEN_CONNECTDONEEN	(1 << 2)
+#define DWC3_DEVTEN_USBRSTEN		(1 << 1)
+#define DWC3_DEVTEN_DISCONNEVTEN	(1 << 0)
+
+/* Device Status Register */
+#define DWC3_DSTS_DCNRD			(1 << 29)
+
+/* This applies for core versions 1.87a and earlier */
+#define DWC3_DSTS_PWRUPREQ		(1 << 24)
+
+/* These apply for core versions 1.94a and later */
+#define DWC3_DSTS_RSS			(1 << 25)
+#define DWC3_DSTS_SSS			(1 << 24)
+
+#define DWC3_DSTS_COREIDLE		(1 << 23)
+#define DWC3_DSTS_DEVCTRLHLT		(1 << 22)
+
+#define DWC3_DSTS_USBLNKST_MASK		(0x0f << 18)
+#define DWC3_DSTS_USBLNKST(n)		(((n) & DWC3_DSTS_USBLNKST_MASK) >> 18)
+
+#define DWC3_DSTS_RXFIFOEMPTY		(1 << 17)
+
+#define DWC3_DSTS_SOFFN_MASK		(0x3fff << 3)
+#define DWC3_DSTS_SOFFN(n)		(((n) & DWC3_DSTS_SOFFN_MASK) >> 3)
+
+#define DWC3_DSTS_CONNECTSPD		(7 << 0)
+
+#define DWC3_DSTS_SUPERSPEED		(4 << 0)
+#define DWC3_DSTS_HIGHSPEED		(0 << 0)
+#define DWC3_DSTS_FULLSPEED2		(1 << 0)
+#define DWC3_DSTS_LOWSPEED		(2 << 0)
+#define DWC3_DSTS_FULLSPEED1		(3 << 0)
+
+/* Device Generic Command Register */
+#define DWC3_DGCMD_SET_LMP		0x01
+#define DWC3_DGCMD_SET_PERIODIC_PAR	0x02
+#define DWC3_DGCMD_XMIT_FUNCTION	0x03
+
+/* These apply for core versions 1.94a and later */
+#define DWC3_DGCMD_SET_SCRATCHPAD_ADDR_LO	0x04
+#define DWC3_DGCMD_SET_SCRATCHPAD_ADDR_HI	0x05
+
+#define DWC3_DGCMD_SELECTED_FIFO_FLUSH	0x09
+#define DWC3_DGCMD_ALL_FIFO_FLUSH	0x0a
+#define DWC3_DGCMD_SET_ENDPOINT_NRDY	0x0c
+#define DWC3_DGCMD_RUN_SOC_BUS_LOOPBACK	0x10
+
+#define DWC3_DGCMD_STATUS(n)		(((n) >> 15) & 1)
+#define DWC3_DGCMD_CMDACT		(1 << 10)
+#define DWC3_DGCMD_CMDIOC		(1 << 8)
+
+/* Device Generic Command Parameter Register */
+#define DWC3_DGCMDPAR_FORCE_LINKPM_ACCEPT	(1 << 0)
+#define DWC3_DGCMDPAR_FIFO_NUM(n)		((n) << 0)
+#define DWC3_DGCMDPAR_RX_FIFO			(0 << 5)
+#define DWC3_DGCMDPAR_TX_FIFO			(1 << 5)
+#define DWC3_DGCMDPAR_LOOPBACK_DIS		(0 << 0)
+#define DWC3_DGCMDPAR_LOOPBACK_ENA		(1 << 0)
+
+/* Device Endpoint Command Register */
+#define DWC3_DEPCMD_PARAM_SHIFT		16
+#define DWC3_DEPCMD_PARAM(x)		((x) << DWC3_DEPCMD_PARAM_SHIFT)
+#define DWC3_DEPCMD_GET_RSC_IDX(x)     (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f)
+#define DWC3_DEPCMD_STATUS(x)		(((x) >> 15) & 1)
+#define DWC3_DEPCMD_HIPRI_FORCERM	(1 << 11)
+#define DWC3_DEPCMD_CMDACT		(1 << 10)
+#define DWC3_DEPCMD_CMDIOC		(1 << 8)
+
+#define DWC3_DEPCMD_DEPSTARTCFG		(0x09 << 0)
+#define DWC3_DEPCMD_ENDTRANSFER		(0x08 << 0)
+#define DWC3_DEPCMD_UPDATETRANSFER	(0x07 << 0)
+#define DWC3_DEPCMD_STARTTRANSFER	(0x06 << 0)
+#define DWC3_DEPCMD_CLEARSTALL		(0x05 << 0)
+#define DWC3_DEPCMD_SETSTALL		(0x04 << 0)
+/* This applies for core versions 1.90a and earlier */
+#define DWC3_DEPCMD_GETSEQNUMBER	(0x03 << 0)
+/* This applies for core versions 1.94a and later */
+#define DWC3_DEPCMD_GETEPSTATE		(0x03 << 0)
+#define DWC3_DEPCMD_SETTRANSFRESOURCE	(0x02 << 0)
+#define DWC3_DEPCMD_SETEPCONFIG		(0x01 << 0)
+
+/* The EP number goes 0..31 so ep0 is always out and ep1 is always in */
+#define DWC3_DALEPENA_EP(n)		(1 << n)
+
+#define DWC3_DEPCMD_TYPE_CONTROL	0
+#define DWC3_DEPCMD_TYPE_ISOC		1
+#define DWC3_DEPCMD_TYPE_BULK		2
+#define DWC3_DEPCMD_TYPE_INTR		3
+
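Taken together, the DEPCMD* definitions describe the endpoint-command
handshake: write the parameter registers, write the command opcode with
CMDACT set, then poll until the controller clears CMDACT and check the
status field. A minimal sketch (not the driver's actual helper; epnum and
param0 are assumed to be in scope):

	u32 reg;

	dwc3_writel(dwc->regs, DWC3_DEPCMDPAR0(epnum), param0);
	dwc3_writel(dwc->regs, DWC3_DEPCMD(epnum),
		    DWC3_DEPCMD_SETEPCONFIG | DWC3_DEPCMD_CMDACT);
	do {
		reg = dwc3_readl(dwc->regs, DWC3_DEPCMD(epnum));
	} while (reg & DWC3_DEPCMD_CMDACT);	/* hw clears when done */

	if (DWC3_DEPCMD_STATUS(reg))		/* non-zero means error */
		return -EINVAL;
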
+/* Structures */
+
+struct dwc3_trb;
+
+/**
+ * struct dwc3_event_buffer - Software event buffer representation
+ * @buf: _THE_ buffer
+ * @length: size of this buffer
+ * @lpos: event offset
+ * @count: cache of last read event count register
+ * @flags: flags related to this event buffer
+ * @dma: dma_addr_t
+ * @dwc: pointer to DWC controller
+ */
+struct dwc3_event_buffer {
+	void			*buf;
+	unsigned		length;
+	unsigned int		lpos;
+	unsigned int		count;
+	unsigned int		flags;
+
+#define DWC3_EVENT_PENDING	BIT(0)
+
+	dma_addr_t		dma;
+
+	struct dwc3		*dwc;
+};
+
+#define DWC3_EP_FLAG_STALLED	(1 << 0)
+#define DWC3_EP_FLAG_WEDGED	(1 << 1)
+
+#define DWC3_EP_DIRECTION_TX	true
+#define DWC3_EP_DIRECTION_RX	false
+
+#define DWC3_TRB_NUM		32
+#define DWC3_TRB_MASK		(DWC3_TRB_NUM - 1)
+
+/**
+ * struct dwc3_ep - device side endpoint representation
+ * @endpoint: usb endpoint
+ * @request_list: list of requests for this endpoint
+ * @req_queued: list of requests on this ep which have TRBs setup
+ * @trb_pool: array of transaction buffers
+ * @trb_pool_dma: dma address of @trb_pool
+ * @free_slot: next slot which is going to be used
+ * @busy_slot: first slot which is owned by HW
+ * @comp_desc: pointer to the SuperSpeed endpoint companion descriptor
+ * @dwc: pointer to DWC controller
+ * @flags: endpoint flags (wedged, stalled, ...)
+ * @current_trb: index of current used trb
+ * @number: endpoint number (1 - 15)
+ * @type: set to bmAttributes & USB_ENDPOINT_XFERTYPE_MASK
+ * @resource_index: Resource transfer index
+ * @interval: the interval on which the ISOC transfer is started
+ * @name: a human readable name e.g. ep1out-bulk
+ * @direction: true for TX, false for RX
+ * @stream_capable: true when streams are enabled
+ */
+struct dwc3_ep {
+	struct usb_ep		endpoint;
+	struct list_head	request_list;
+	struct list_head	req_queued;
+
+	struct dwc3_trb		*trb_pool;
+	dma_addr_t		trb_pool_dma;
+	u32			free_slot;
+	u32			busy_slot;
+	const struct usb_ss_ep_comp_descriptor *comp_desc;
+	struct dwc3		*dwc;
+
+	unsigned		flags;
+#define DWC3_EP_ENABLED		(1 << 0)
+#define DWC3_EP_STALL		(1 << 1)
+#define DWC3_EP_WEDGE		(1 << 2)
+#define DWC3_EP_BUSY		(1 << 4)
+#define DWC3_EP_PENDING_REQUEST	(1 << 5)
+#define DWC3_EP_MISSED_ISOC	(1 << 6)
+
+	/* This last one is specific to EP0 */
+#define DWC3_EP0_DIR_IN		(1 << 31)
+
+	unsigned		current_trb;
+
+	u8			number;
+	u8			type;
+	u8			resource_index;
+	u32			interval;
+
+	char			name[20];
+
+	unsigned		direction:1;
+	unsigned		stream_capable:1;
+};
+
+enum dwc3_phy {
+	DWC3_PHY_UNKNOWN = 0,
+	DWC3_PHY_USB3,
+	DWC3_PHY_USB2,
+};
+
+enum dwc3_ep0_next {
+	DWC3_EP0_UNKNOWN = 0,
+	DWC3_EP0_COMPLETE,
+	DWC3_EP0_NRDY_DATA,
+	DWC3_EP0_NRDY_STATUS,
+};
+
+enum dwc3_ep0_state {
+	EP0_UNCONNECTED		= 0,
+	EP0_SETUP_PHASE,
+	EP0_DATA_PHASE,
+	EP0_STATUS_PHASE,
+};
+
+enum dwc3_link_state {
+	/* In SuperSpeed */
+	DWC3_LINK_STATE_U0		= 0x00, /* in HS, means ON */
+	DWC3_LINK_STATE_U1		= 0x01,
+	DWC3_LINK_STATE_U2		= 0x02, /* in HS, means SLEEP */
+	DWC3_LINK_STATE_U3		= 0x03, /* in HS, means SUSPEND */
+	DWC3_LINK_STATE_SS_DIS		= 0x04,
+	DWC3_LINK_STATE_RX_DET		= 0x05, /* in HS, means Early Suspend */
+	DWC3_LINK_STATE_SS_INACT	= 0x06,
+	DWC3_LINK_STATE_POLL		= 0x07,
+	DWC3_LINK_STATE_RECOV		= 0x08,
+	DWC3_LINK_STATE_HRESET		= 0x09,
+	DWC3_LINK_STATE_CMPLY		= 0x0a,
+	DWC3_LINK_STATE_LPBK		= 0x0b,
+	DWC3_LINK_STATE_RESET		= 0x0e,
+	DWC3_LINK_STATE_RESUME		= 0x0f,
+	DWC3_LINK_STATE_MASK		= 0x0f,
+};
+
+/* TRB Length, PCM and Status */
+#define DWC3_TRB_SIZE_MASK	(0x00ffffff)
+#define DWC3_TRB_SIZE_LENGTH(n)	((n) & DWC3_TRB_SIZE_MASK)
+#define DWC3_TRB_SIZE_PCM1(n)	(((n) & 0x03) << 24)
+#define DWC3_TRB_SIZE_TRBSTS(n)	(((n) & (0x0f << 28)) >> 28)
+
+#define DWC3_TRBSTS_OK			0
+#define DWC3_TRBSTS_MISSED_ISOC		1
+#define DWC3_TRBSTS_SETUP_PENDING	2
+#define DWC3_TRB_STS_XFER_IN_PROG	4
+
+/* TRB Control */
+#define DWC3_TRB_CTRL_HWO		(1 << 0)
+#define DWC3_TRB_CTRL_LST		(1 << 1)
+#define DWC3_TRB_CTRL_CHN		(1 << 2)
+#define DWC3_TRB_CTRL_CSP		(1 << 3)
+#define DWC3_TRB_CTRL_TRBCTL(n)		(((n) & 0x3f) << 4)
+#define DWC3_TRB_CTRL_ISP_IMI		(1 << 10)
+#define DWC3_TRB_CTRL_IOC		(1 << 11)
+#define DWC3_TRB_CTRL_SID_SOFN(n)	(((n) & 0xffff) << 14)
+
+#define DWC3_TRBCTL_NORMAL		DWC3_TRB_CTRL_TRBCTL(1)
+#define DWC3_TRBCTL_CONTROL_SETUP	DWC3_TRB_CTRL_TRBCTL(2)
+#define DWC3_TRBCTL_CONTROL_STATUS2	DWC3_TRB_CTRL_TRBCTL(3)
+#define DWC3_TRBCTL_CONTROL_STATUS3	DWC3_TRB_CTRL_TRBCTL(4)
+#define DWC3_TRBCTL_CONTROL_DATA	DWC3_TRB_CTRL_TRBCTL(5)
+#define DWC3_TRBCTL_ISOCHRONOUS_FIRST	DWC3_TRB_CTRL_TRBCTL(6)
+#define DWC3_TRBCTL_ISOCHRONOUS		DWC3_TRB_CTRL_TRBCTL(7)
+#define DWC3_TRBCTL_LINK_TRB		DWC3_TRB_CTRL_TRBCTL(8)
+
+/**
+ * struct dwc3_trb - transfer request block (hw format)
+ * @bpl: DW0-3
+ * @bph: DW4-7
+ * @size: DW8-B
+ * @ctrl: DWC-F
+ */
+struct dwc3_trb {
+	u32		bpl;
+	u32		bph;
+	u32		size;
+	u32		ctrl;
+} __packed;
+
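Preparing a simple single-buffer transfer then reduces to filling those four
words, with the HWO (hardware-owned) bit set last so the controller never
sees a half-written TRB. A sketch of a normal bulk TRB, given a struct
dwc3_trb *trb from the pool (illustrative; buf_dma and length are assumed):

	trb->bpl  = lower_32_bits(buf_dma);
	trb->bph  = upper_32_bits(buf_dma);
	trb->size = DWC3_TRB_SIZE_LENGTH(length);
	trb->ctrl = DWC3_TRBCTL_NORMAL | DWC3_TRB_CTRL_LST |
		    DWC3_TRB_CTRL_IOC | DWC3_TRB_CTRL_HWO;
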
+/**
+ * dwc3_hwparams - copy of HWPARAMS registers
+ * @hwparams0 - GHWPARAMS0
+ * @hwparams1 - GHWPARAMS1
+ * @hwparams2 - GHWPARAMS2
+ * @hwparams3 - GHWPARAMS3
+ * @hwparams4 - GHWPARAMS4
+ * @hwparams5 - GHWPARAMS5
+ * @hwparams6 - GHWPARAMS6
+ * @hwparams7 - GHWPARAMS7
+ * @hwparams8 - GHWPARAMS8
+ */
+struct dwc3_hwparams {
+	u32	hwparams0;
+	u32	hwparams1;
+	u32	hwparams2;
+	u32	hwparams3;
+	u32	hwparams4;
+	u32	hwparams5;
+	u32	hwparams6;
+	u32	hwparams7;
+	u32	hwparams8;
+};
+
+/* HWPARAMS0 */
+#define DWC3_MODE(n)		((n) & 0x7)
+
+#define DWC3_MODE_DEVICE	0
+#define DWC3_MODE_HOST		1
+#define DWC3_MODE_DRD		2
+#define DWC3_MODE_HUB		3
+
+#define DWC3_MDWIDTH(n)		(((n) & 0xff00) >> 8)
+
+/* HWPARAMS1 */
+#define DWC3_NUM_INT(n)		(((n) & (0x3f << 15)) >> 15)
+
+/* HWPARAMS3 */
+#define DWC3_NUM_IN_EPS_MASK	(0x1f << 18)
+#define DWC3_NUM_EPS_MASK	(0x3f << 12)
+#define DWC3_NUM_EPS(p)		(((p)->hwparams3 &		\
+			(DWC3_NUM_EPS_MASK)) >> 12)
+#define DWC3_NUM_IN_EPS(p)	(((p)->hwparams3 &		\
+			(DWC3_NUM_IN_EPS_MASK)) >> 18)
+
+/* HWPARAMS7 */
+#define DWC3_RAM1_DEPTH(n)	((n) & 0xffff)
+
+struct dwc3_request {
+	struct usb_request	request;
+	struct list_head	list;
+	struct dwc3_ep		*dep;
+	u32			start_slot;
+
+	u8			epnum;
+	struct dwc3_trb		*trb;
+	dma_addr_t		trb_dma;
+
+	unsigned		direction:1;
+	unsigned		mapped:1;
+	unsigned		queued:1;
+};
+
+/*
+ * struct dwc3_scratchpad_array - hibernation scratchpad array
+ * (format defined by hw)
+ */
+struct dwc3_scratchpad_array {
+	__le64	dma_adr[DWC3_MAX_HIBER_SCRATCHBUFS];
+};
+
+/**
+ * struct dwc3 - representation of our controller
+ * @ctrl_req: usb control request which is used for ep0
+ * @ep0_trb: trb which is used for the ctrl_req
+ * @ep0_bounce: bounce buffer for ep0
+ * @setup_buf: used while processing STD USB requests
+ * @ctrl_req_addr: dma address of ctrl_req
+ * @ep0_trb_addr: dma address of ep0_trb
+ * @ep0_usb_req: dummy req used while handling STD USB requests
+ * @ep0_bounce_addr: dma address of ep0_bounce
+ * @lock: for synchronizing
+ * @dev: pointer to our struct device
+ * @xhci: pointer to our xHCI child
+ * @ev_buffs: array of pointers to our event buffers
+ * @gadget: device side representation of the peripheral controller
+ * @gadget_driver: pointer to the gadget driver
+ * @regs: base address for our registers
+ * @regs_size: address space size
+ * @num_event_buffers: calculated number of event buffers
+ * @u1u2: only used on revisions <1.83a for workaround
+ * @maximum_speed: maximum speed requested (mainly for testing purposes)
+ * @revision: revision register contents
+ * @mode: mode of operation
+ * @usb2_phy: pointer to USB2 PHY
+ * @usb3_phy: pointer to USB3 PHY
+ * @dcfg: saved contents of DCFG register
+ * @gctl: saved contents of GCTL register
+ * @is_selfpowered: true when we are selfpowered
+ * @three_stage_setup: set if we perform a three phase setup
+ * @ep0_bounced: true when we used bounce buffer
+ * @ep0_expect_in: true when we expect a DATA IN transfer
+ * @start_config_issued: true when StartConfig command has been issued
+ * @setup_packet_pending: true when there's a Setup Packet in FIFO. Workaround
+ * @needs_fifo_resize: not all users might want fifo resizing, flag it
+ * @resize_fifos: tells us it's ok to reconfigure our TxFIFO sizes.
+ * @isoch_delay: wValue from Set Isochronous Delay request;
+ * @u2sel: parameter from Set SEL request.
+ * @u2pel: parameter from Set SEL request.
+ * @u1sel: parameter from Set SEL request.
+ * @u1pel: parameter from Set SEL request.
+ * @num_out_eps: number of out endpoints
+ * @num_in_eps: number of in endpoints
+ * @ep0_next_event: hold the next expected event
+ * @ep0state: state of endpoint zero
+ * @link_state: link state
+ * @speed: device speed (super, high, full, low)
+ * @mem: points to start of memory which is used for this struct.
+ * @hwparams: copy of hwparams registers
+ * @root: debugfs root folder pointer
+ */
+struct dwc3 {
+	struct usb_ctrlrequest	*ctrl_req;
+	struct dwc3_trb		*ep0_trb;
+	void			*ep0_bounce;
+	u8			*setup_buf;
+	dma_addr_t		ctrl_req_addr;
+	dma_addr_t		ep0_trb_addr;
+	dma_addr_t		ep0_bounce_addr;
+	struct dwc3_request	ep0_usb_req;
+
+	/* device lock */
+	spinlock_t		lock;
+
+	struct device		*dev;
+
+	struct platform_device	*xhci;
+	struct resource		xhci_resources[DWC3_XHCI_RESOURCES_NUM];
+
+	struct dwc3_event_buffer **ev_buffs;
+	struct dwc3_ep		*eps[DWC3_ENDPOINTS_NUM];
+
+	struct usb_gadget	gadget;
+	struct usb_gadget_driver *gadget_driver;
+
+	struct usb_phy		*usb2_phy;
+	struct usb_phy		*usb3_phy;
+
+	void __iomem		*regs;
+	size_t			regs_size;
+
+	/* used for suspend/resume */
+	u32			dcfg;
+	u32			gctl;
+
+	u32			num_event_buffers;
+	u32			u1u2;
+	u32			maximum_speed;
+	u32			revision;
+	u32			mode;
+
+#define DWC3_REVISION_173A	0x5533173a
+#define DWC3_REVISION_175A	0x5533175a
+#define DWC3_REVISION_180A	0x5533180a
+#define DWC3_REVISION_183A	0x5533183a
+#define DWC3_REVISION_185A	0x5533185a
+#define DWC3_REVISION_187A	0x5533187a
+#define DWC3_REVISION_188A	0x5533188a
+#define DWC3_REVISION_190A	0x5533190a
+#define DWC3_REVISION_194A	0x5533194a
+#define DWC3_REVISION_200A	0x5533200a
+#define DWC3_REVISION_202A	0x5533202a
+#define DWC3_REVISION_210A	0x5533210a
+#define DWC3_REVISION_220A	0x5533220a
+#define DWC3_REVISION_230A	0x5533230a
+#define DWC3_REVISION_240A	0x5533240a
+#define DWC3_REVISION_250A	0x5533250a
+
+	unsigned		is_selfpowered:1;
+	unsigned		three_stage_setup:1;
+	unsigned		ep0_bounced:1;
+	unsigned		ep0_expect_in:1;
+	unsigned		start_config_issued:1;
+	unsigned		setup_packet_pending:1;
+	unsigned		delayed_status:1;
+	unsigned		needs_fifo_resize:1;
+	unsigned		resize_fifos:1;
+	unsigned		pullups_connected:1;
+
+	enum dwc3_ep0_next	ep0_next_event;
+	enum dwc3_ep0_state	ep0state;
+	enum dwc3_link_state	link_state;
+
+	u16			isoch_delay;
+	u16			u2sel;
+	u16			u2pel;
+	u8			u1sel;
+	u8			u1pel;
+
+	u8			speed;
+
+	u8			num_out_eps;
+	u8			num_in_eps;
+
+	void			*mem;
+
+	struct dwc3_hwparams	hwparams;
+	struct dentry		*root;
+	struct debugfs_regset32	*regset;
+
+	u8			test_mode;
+	u8			test_mode_nr;
+};
+
+/* -------------------------------------------------------------------------- */
+
+/* -------------------------------------------------------------------------- */
+
+struct dwc3_event_type {
+	u32	is_devspec:1;
+	u32	type:6;
+	u32	reserved8_31:25;
+} __packed;
+
+#define DWC3_DEPEVT_XFERCOMPLETE	0x01
+#define DWC3_DEPEVT_XFERINPROGRESS	0x02
+#define DWC3_DEPEVT_XFERNOTREADY	0x03
+#define DWC3_DEPEVT_RXTXFIFOEVT		0x04
+#define DWC3_DEPEVT_STREAMEVT		0x06
+#define DWC3_DEPEVT_EPCMDCMPLT		0x07
+
+/**
+ * struct dwc3_event_depvt - Device Endpoint Events
+ * @one_bit: indicates this is an endpoint event (not used)
+ * @endpoint_number: number of the endpoint
+ * @endpoint_event: The event we have:
+ *	0x00	- Reserved
+ *	0x01	- XferComplete
+ *	0x02	- XferInProgress
+ *	0x03	- XferNotReady
+ *	0x04	- RxTxFifoEvt (IN->Underrun, OUT->Overrun)
+ *	0x05	- Reserved
+ *	0x06	- StreamEvt
+ *	0x07	- EPCmdCmplt
+ * @reserved11_10: Reserved, don't use.
+ * @status: Indicates the status of the event. Refer to databook for
+ *	more information.
+ * @parameters: Parameters of the current event. Refer to databook for
+ *	more information.
+ */
+struct dwc3_event_depevt {
+	u32	one_bit:1;
+	u32	endpoint_number:5;
+	u32	endpoint_event:4;
+	u32	reserved11_10:2;
+	u32	status:4;
+
+/* Within XferNotReady */
+#define DEPEVT_STATUS_TRANSFER_ACTIVE	(1 << 3)
+
+/* Within XferComplete */
+#define DEPEVT_STATUS_BUSERR	(1 << 0)
+#define DEPEVT_STATUS_SHORT	(1 << 1)
+#define DEPEVT_STATUS_IOC	(1 << 2)
+#define DEPEVT_STATUS_LST	(1 << 3)
+
+/* Stream event only */
+#define DEPEVT_STREAMEVT_FOUND		1
+#define DEPEVT_STREAMEVT_NOTFOUND	2
+
+/* Control-only Status */
+#define DEPEVT_STATUS_CONTROL_DATA	1
+#define DEPEVT_STATUS_CONTROL_STATUS	2
+
+	u32	parameters:16;
+} __packed;
+
+/**
+ * struct dwc3_event_devt - Device Events
+ * @one_bit: indicates this is a non-endpoint event (not used)
+ * @device_event: indicates it's a device event. Should read as 0x00
+ * @type: indicates the type of device event.
+ *	0	- DisconnEvt
+ *	1	- USBRst
+ *	2	- ConnectDone
+ *	3	- ULStChng
+ *	4	- WkUpEvt
+ *	5	- Reserved
+ *	6	- EOPF
+ *	7	- SOF
+ *	8	- Reserved
+ *	9	- ErrticErr
+ *	10	- CmdCmplt
+ *	11	- EvntOverflow
+ *	12	- VndrDevTstRcved
+ * @reserved15_12: Reserved, not used
+ * @event_info: Information about this event
+ * @reserved31_24: Reserved, not used
+ */
+struct dwc3_event_devt {
+	u32	one_bit:1;
+	u32	device_event:7;
+	u32	type:4;
+	u32	reserved15_12:4;
+	u32	event_info:8;
+	u32	reserved31_24:8;
+} __packed;
+
+/**
+ * struct dwc3_event_gevt - Other Core Events
+ * @one_bit: indicates this is a non-endpoint event (not used)
+ * @device_event: indicates it's (0x03) Carkit or (0x04) I2C event.
+ * @phy_port_number: self-explanatory
+ * @reserved31_12: Reserved, not used.
+ */
+struct dwc3_event_gevt {
+	u32	one_bit:1;
+	u32	device_event:7;
+	u32	phy_port_number:4;
+	u32	reserved31_12:20;
+} __packed;
+
+/**
+ * union dwc3_event - representation of Event Buffer contents
+ * @raw: raw 32-bit event
+ * @type: the type of the event
+ * @depevt: Device Endpoint Event
+ * @devt: Device Event
+ * @gevt: Global Event
+ */
+union dwc3_event {
+	u32				raw;
+	struct dwc3_event_type		type;
+	struct dwc3_event_depevt	depevt;
+	struct dwc3_event_devt		devt;
+	struct dwc3_event_gevt		gevt;
+};
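+
+/*
+ * Usage sketch: assign a raw 32-bit event-buffer word to .raw, then
+ * inspect .type.is_devspec to decide whether to decode it via .depevt
+ * (endpoint events) or via .devt (device-wide events).
+ */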
+
+/*
+ * DWC3 Features to be used as Driver Data
+ */
+
+#define DWC3_HAS_PERIPHERAL		BIT(0)
+#define DWC3_HAS_XHCI			BIT(1)
+#define DWC3_HAS_OTG			BIT(3)
+
+/* prototypes */
+void dwc3_set_mode(struct dwc3 *dwc, u32 mode);
+int dwc3_gadget_resize_tx_fifos(struct dwc3 *dwc);
+
+#if IS_ENABLED(CONFIG_USB_DWC3_HOST) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
+int dwc3_host_init(struct dwc3 *dwc);
+void dwc3_host_exit(struct dwc3 *dwc);
+#else
+static inline int dwc3_host_init(struct dwc3 *dwc)
+{ return 0; }
+static inline void dwc3_host_exit(struct dwc3 *dwc)
+{ }
+#endif
+
+#if IS_ENABLED(CONFIG_USB_DWC3_GADGET) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
+int dwc3_gadget_init(struct dwc3 *dwc);
+void dwc3_gadget_exit(struct dwc3 *dwc);
+#else
+static inline int dwc3_gadget_init(struct dwc3 *dwc)
+{ return 0; }
+static inline void dwc3_gadget_exit(struct dwc3 *dwc)
+{ }
+#endif
+
+/* power management interface */
+#if !IS_ENABLED(CONFIG_USB_DWC3_HOST)
+int dwc3_gadget_prepare(struct dwc3 *dwc);
+void dwc3_gadget_complete(struct dwc3 *dwc);
+int dwc3_gadget_suspend(struct dwc3 *dwc);
+int dwc3_gadget_resume(struct dwc3 *dwc);
+#else
+static inline int dwc3_gadget_prepare(struct dwc3 *dwc)
+{
+	return 0;
+}
+
+static inline void dwc3_gadget_complete(struct dwc3 *dwc)
+{
+}
+
+static inline int dwc3_gadget_suspend(struct dwc3 *dwc)
+{
+	return 0;
+}
+
+static inline int dwc3_gadget_resume(struct dwc3 *dwc)
+{
+	return 0;
+}
+#endif /* !IS_ENABLED(CONFIG_USB_DWC3_HOST) */
+
+#endif /* __DRIVERS_USB_DWC3_CORE_H */
diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
new file mode 100644
index 0000000..34638b9
--- /dev/null
+++ b/drivers/usb/dwc3/dwc3-omap.c
@@ -0,0 +1,481 @@
+/**
+ * dwc3-omap.c - OMAP Specific Glue layer
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/platform_data/dwc3-omap.h>
+#include <linux/usb/dwc3-omap.h>
+#include <linux/pm_runtime.h>
+#include <linux/dma-mapping.h>
+#include <linux/ioport.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+
+#include <linux/usb/otg.h>
+
+/*
+ * All these registers belong to OMAP's Wrapper around the
+ * DesignWare USB3 Core.
+ */
+
+#define USBOTGSS_REVISION			0x0000
+#define USBOTGSS_SYSCONFIG			0x0010
+#define USBOTGSS_IRQ_EOI			0x0020
+#define USBOTGSS_IRQSTATUS_RAW_0		0x0024
+#define USBOTGSS_IRQSTATUS_0			0x0028
+#define USBOTGSS_IRQENABLE_SET_0		0x002c
+#define USBOTGSS_IRQENABLE_CLR_0		0x0030
+#define USBOTGSS_IRQSTATUS_RAW_1		0x0034
+#define USBOTGSS_IRQSTATUS_1			0x0038
+#define USBOTGSS_IRQENABLE_SET_1		0x003c
+#define USBOTGSS_IRQENABLE_CLR_1		0x0040
+#define USBOTGSS_UTMI_OTG_CTRL			0x0080
+#define USBOTGSS_UTMI_OTG_STATUS		0x0084
+#define USBOTGSS_MMRAM_OFFSET			0x0100
+#define USBOTGSS_FLADJ				0x0104
+#define USBOTGSS_DEBUG_CFG			0x0108
+#define USBOTGSS_DEBUG_DATA			0x010c
+
+/* SYSCONFIG REGISTER */
+#define USBOTGSS_SYSCONFIG_DMADISABLE		(1 << 16)
+
+/* IRQ_EOI REGISTER */
+#define USBOTGSS_IRQ_EOI_LINE_NUMBER		(1 << 0)
+
+/* IRQS0 BITS */
+#define USBOTGSS_IRQO_COREIRQ_ST		(1 << 0)
+
+/* IRQ1 BITS */
+#define USBOTGSS_IRQ1_DMADISABLECLR		(1 << 17)
+#define USBOTGSS_IRQ1_OEVT			(1 << 16)
+#define USBOTGSS_IRQ1_DRVVBUS_RISE		(1 << 13)
+#define USBOTGSS_IRQ1_CHRGVBUS_RISE		(1 << 12)
+#define USBOTGSS_IRQ1_DISCHRGVBUS_RISE		(1 << 11)
+#define USBOTGSS_IRQ1_IDPULLUP_RISE		(1 << 8)
+#define USBOTGSS_IRQ1_DRVVBUS_FALL		(1 << 5)
+#define USBOTGSS_IRQ1_CHRGVBUS_FALL		(1 << 4)
+#define USBOTGSS_IRQ1_DISCHRGVBUS_FALL		(1 << 3)
+#define USBOTGSS_IRQ1_IDPULLUP_FALL		(1 << 0)
+
+/* UTMI_OTG_CTRL REGISTER */
+#define USBOTGSS_UTMI_OTG_CTRL_DRVVBUS		(1 << 5)
+#define USBOTGSS_UTMI_OTG_CTRL_CHRGVBUS		(1 << 4)
+#define USBOTGSS_UTMI_OTG_CTRL_DISCHRGVBUS	(1 << 3)
+#define USBOTGSS_UTMI_OTG_CTRL_IDPULLUP		(1 << 0)
+
+/* UTMI_OTG_STATUS REGISTER */
+#define USBOTGSS_UTMI_OTG_STATUS_SW_MODE	(1 << 31)
+#define USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT	(1 << 9)
+#define USBOTGSS_UTMI_OTG_STATUS_TXBITSTUFFENABLE (1 << 8)
+#define USBOTGSS_UTMI_OTG_STATUS_IDDIG		(1 << 4)
+#define USBOTGSS_UTMI_OTG_STATUS_SESSEND	(1 << 3)
+#define USBOTGSS_UTMI_OTG_STATUS_SESSVALID	(1 << 2)
+#define USBOTGSS_UTMI_OTG_STATUS_VBUSVALID	(1 << 1)
+
+struct dwc3_omap {
+	/* device lock */
+	spinlock_t		lock;
+
+	struct device		*dev;
+
+	int			irq;
+	void __iomem		*base;
+
+	u32			utmi_otg_status;
+
+	u32			dma_status:1;
+};
+
+static struct dwc3_omap		*_omap;
+
+static inline u32 dwc3_omap_readl(void __iomem *base, u32 offset)
+{
+	return readl(base + offset);
+}
+
+static inline void dwc3_omap_writel(void __iomem *base, u32 offset, u32 value)
+{
+	writel(value, base + offset);
+}
+
+int dwc3_omap_mailbox(enum omap_dwc3_vbus_id_status status)
+{
+	u32			val;
+	struct dwc3_omap	*omap = _omap;
+
+	if (!omap)
+		return -EPROBE_DEFER;
+
+	switch (status) {
+	case OMAP_DWC3_ID_GROUND:
+		dev_dbg(omap->dev, "ID GND\n");
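+
+		/*
+		 * A grounded ID pin means an A-plug is attached: clearing
+		 * IDDIG makes the core behave as host, while SESSVALID and
+		 * POWERPRESENT report a usable session to it.
+		 */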
+
+		val = dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_STATUS);
+		val &= ~(USBOTGSS_UTMI_OTG_STATUS_IDDIG
+				| USBOTGSS_UTMI_OTG_STATUS_VBUSVALID
+				| USBOTGSS_UTMI_OTG_STATUS_SESSEND);
+		val |= USBOTGSS_UTMI_OTG_STATUS_SESSVALID
+				| USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT;
+		dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS, val);
+		break;
+
+	case OMAP_DWC3_VBUS_VALID:
+		dev_dbg(omap->dev, "VBUS Connect\n");
+
+		val = dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_STATUS);
+		val &= ~USBOTGSS_UTMI_OTG_STATUS_SESSEND;
+		val |= USBOTGSS_UTMI_OTG_STATUS_IDDIG
+				| USBOTGSS_UTMI_OTG_STATUS_VBUSVALID
+				| USBOTGSS_UTMI_OTG_STATUS_SESSVALID
+				| USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT;
+		dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS, val);
+		break;
+
+	case OMAP_DWC3_ID_FLOAT:
+	case OMAP_DWC3_VBUS_OFF:
+		dev_dbg(omap->dev, "VBUS Disconnect\n");
+
+		val = dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_STATUS);
+		val &= ~(USBOTGSS_UTMI_OTG_STATUS_SESSVALID
+				| USBOTGSS_UTMI_OTG_STATUS_VBUSVALID
+				| USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT);
+		val |= USBOTGSS_UTMI_OTG_STATUS_SESSEND
+				| USBOTGSS_UTMI_OTG_STATUS_IDDIG;
+		dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS, val);
+		break;
+
+	default:
+		dev_dbg(omap->dev, "UNKNOWN ID/VBUS status\n");
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dwc3_omap_mailbox);
+
+static irqreturn_t dwc3_omap_interrupt(int irq, void *_omap)
+{
+	struct dwc3_omap	*omap = _omap;
+	u32			reg;
+
+	spin_lock(&omap->lock);
+
+	reg = dwc3_omap_readl(omap->base, USBOTGSS_IRQSTATUS_1);
+
+	if (reg & USBOTGSS_IRQ1_DMADISABLECLR) {
+		dev_dbg(omap->dev, "DMA Disable was Cleared\n");
+		omap->dma_status = false;
+	}
+
+	if (reg & USBOTGSS_IRQ1_OEVT)
+		dev_dbg(omap->dev, "OTG Event\n");
+
+	if (reg & USBOTGSS_IRQ1_DRVVBUS_RISE)
+		dev_dbg(omap->dev, "DRVVBUS Rise\n");
+
+	if (reg & USBOTGSS_IRQ1_CHRGVBUS_RISE)
+		dev_dbg(omap->dev, "CHRGVBUS Rise\n");
+
+	if (reg & USBOTGSS_IRQ1_DISCHRGVBUS_RISE)
+		dev_dbg(omap->dev, "DISCHRGVBUS Rise\n");
+
+	if (reg & USBOTGSS_IRQ1_IDPULLUP_RISE)
+		dev_dbg(omap->dev, "IDPULLUP Rise\n");
+
+	if (reg & USBOTGSS_IRQ1_DRVVBUS_FALL)
+		dev_dbg(omap->dev, "DRVVBUS Fall\n");
+
+	if (reg & USBOTGSS_IRQ1_CHRGVBUS_FALL)
+		dev_dbg(omap->dev, "CHRGVBUS Fall\n");
+
+	if (reg & USBOTGSS_IRQ1_DISCHRGVBUS_FALL)
+		dev_dbg(omap->dev, "DISCHRGVBUS Fall\n");
+
+	if (reg & USBOTGSS_IRQ1_IDPULLUP_FALL)
+		dev_dbg(omap->dev, "IDPULLUP Fall\n");
+
+	dwc3_omap_writel(omap->base, USBOTGSS_IRQSTATUS_1, reg);
+
+	reg = dwc3_omap_readl(omap->base, USBOTGSS_IRQSTATUS_0);
+	dwc3_omap_writel(omap->base, USBOTGSS_IRQSTATUS_0, reg);
+
+	spin_unlock(&omap->lock);
+
+	return IRQ_HANDLED;
+}
+
+static int dwc3_omap_remove_core(struct device *dev, void *c)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+
+	platform_device_unregister(pdev);
+
+	return 0;
+}
+
+static void dwc3_omap_enable_irqs(struct dwc3_omap *omap)
+{
+	u32			reg;
+
+	/* enable all IRQs */
+	reg = USBOTGSS_IRQO_COREIRQ_ST;
+	dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_SET_0, reg);
+
+	reg = (USBOTGSS_IRQ1_OEVT |
+			USBOTGSS_IRQ1_DRVVBUS_RISE |
+			USBOTGSS_IRQ1_CHRGVBUS_RISE |
+			USBOTGSS_IRQ1_DISCHRGVBUS_RISE |
+			USBOTGSS_IRQ1_IDPULLUP_RISE |
+			USBOTGSS_IRQ1_DRVVBUS_FALL |
+			USBOTGSS_IRQ1_CHRGVBUS_FALL |
+			USBOTGSS_IRQ1_DISCHRGVBUS_FALL |
+			USBOTGSS_IRQ1_IDPULLUP_FALL);
+
+	dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_SET_1, reg);
+}
+
+static void dwc3_omap_disable_irqs(struct dwc3_omap *omap)
+{
+	/* disable all IRQs */
+	dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_SET_1, 0x00);
+	dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_SET_0, 0x00);
+}
+
+static u64 dwc3_omap_dma_mask = DMA_BIT_MASK(32);
+
+static int dwc3_omap_probe(struct platform_device *pdev)
+{
+	struct device_node	*node = pdev->dev.of_node;
+
+	struct dwc3_omap	*omap;
+	struct resource		*res;
+	struct device		*dev = &pdev->dev;
+
+	int			ret = -ENOMEM;
+	int			irq;
+
+	int			utmi_mode = 0;
+
+	u32			reg;
+
+	void __iomem		*base;
+
+	if (!node) {
+		dev_err(dev, "device node not found\n");
+		return -EINVAL;
+	}
+
+	omap = devm_kzalloc(dev, sizeof(*omap), GFP_KERNEL);
+	if (!omap) {
+		dev_err(dev, "not enough memory\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, omap);
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(dev, "missing IRQ resource\n");
+		return -EINVAL;
+	}
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		dev_err(dev, "missing memory base resource\n");
+		return -EINVAL;
+	}
+
+	base = devm_ioremap_nocache(dev, res->start, resource_size(res));
+	if (!base) {
+		dev_err(dev, "ioremap failed\n");
+		return -ENOMEM;
+	}
+
+	spin_lock_init(&omap->lock);
+
+	omap->dev	= dev;
+	omap->irq	= irq;
+	omap->base	= base;
+	dev->dma_mask	= &dwc3_omap_dma_mask;
+
+	/*
+	 * REVISIT if we ever have two instances of the wrapper, we will be
+	 * in big trouble
+	 */
+	_omap	= omap;
+
+	pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "get_sync failed with err %d\n", ret);
+		return ret;
+	}
+
+	reg = dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_STATUS);
+
+	of_property_read_u32(node, "utmi-mode", &utmi_mode);
+
+	switch (utmi_mode) {
+	case DWC3_OMAP_UTMI_MODE_SW:
+		reg |= USBOTGSS_UTMI_OTG_STATUS_SW_MODE;
+		break;
+	case DWC3_OMAP_UTMI_MODE_HW:
+		reg &= ~USBOTGSS_UTMI_OTG_STATUS_SW_MODE;
+		break;
+	default:
+		dev_dbg(dev, "UNKNOWN utmi mode %d\n", utmi_mode);
+	}
+
+	dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS, reg);
+
+	/* check the DMA Status */
+	reg = dwc3_omap_readl(omap->base, USBOTGSS_SYSCONFIG);
+	omap->dma_status = !!(reg & USBOTGSS_SYSCONFIG_DMADISABLE);
+
+	ret = devm_request_irq(dev, omap->irq, dwc3_omap_interrupt, 0,
+			"dwc3-omap", omap);
+	if (ret) {
+		dev_err(dev, "failed to request IRQ #%d --> %d\n",
+				omap->irq, ret);
+		return ret;
+	}
+
+	dwc3_omap_enable_irqs(omap);
+
+	ret = of_platform_populate(node, NULL, NULL, dev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to create dwc3 core\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int dwc3_omap_remove(struct platform_device *pdev)
+{
+	struct dwc3_omap	*omap = platform_get_drvdata(pdev);
+
+	dwc3_omap_disable_irqs(omap);
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core);
+
+	return 0;
+}
+
+static const struct of_device_id of_dwc3_match[] = {
+	{
+		.compatible =	"ti,dwc3"
+	},
+	{ },
+};
+MODULE_DEVICE_TABLE(of, of_dwc3_match);
+
+#ifdef CONFIG_PM_SLEEP
+static int dwc3_omap_prepare(struct device *dev)
+{
+	struct dwc3_omap	*omap = dev_get_drvdata(dev);
+
+	dwc3_omap_disable_irqs(omap);
+
+	return 0;
+}
+
+static void dwc3_omap_complete(struct device *dev)
+{
+	struct dwc3_omap	*omap = dev_get_drvdata(dev);
+
+	dwc3_omap_enable_irqs(omap);
+}
+
+static int dwc3_omap_suspend(struct device *dev)
+{
+	struct dwc3_omap	*omap = dev_get_drvdata(dev);
+
+	omap->utmi_otg_status = dwc3_omap_readl(omap->base,
+			USBOTGSS_UTMI_OTG_STATUS);
+
+	return 0;
+}
+
+static int dwc3_omap_resume(struct device *dev)
+{
+	struct dwc3_omap	*omap = dev_get_drvdata(dev);
+
+	dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS,
+			omap->utmi_otg_status);
+
+	pm_runtime_disable(dev);
+	pm_runtime_set_active(dev);
+	pm_runtime_enable(dev);
+
+	return 0;
+}
+
+static const struct dev_pm_ops dwc3_omap_dev_pm_ops = {
+	.prepare	= dwc3_omap_prepare,
+	.complete	= dwc3_omap_complete,
+
+	SET_SYSTEM_SLEEP_PM_OPS(dwc3_omap_suspend, dwc3_omap_resume)
+};
+
+#define DEV_PM_OPS	(&dwc3_omap_dev_pm_ops)
+#else
+#define DEV_PM_OPS	NULL
+#endif /* CONFIG_PM_SLEEP */
+
+static struct platform_driver dwc3_omap_driver = {
+	.probe		= dwc3_omap_probe,
+	.remove		= dwc3_omap_remove,
+	.driver		= {
+		.name	= "omap-dwc3",
+		.of_match_table	= of_dwc3_match,
+		.pm	= DEV_PM_OPS,
+	},
+};
+
+module_platform_driver(dwc3_omap_driver);
+
+MODULE_ALIAS("platform:omap-dwc3");
+MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_DESCRIPTION("DesignWare USB3 OMAP Glue Layer");
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
new file mode 100644
index 0000000..5acbb94
--- /dev/null
+++ b/drivers/usb/dwc3/ep0.c
@@ -0,0 +1,1064 @@
+/**
+ * ep0.c - DesignWare USB3 DRD Controller Endpoint 0 Handling
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/list.h>
+#include <linux/dma-mapping.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+#include <linux/usb/composite.h>
+
+#include "core.h"
+#include "gadget.h"
+#include "io.h"
+
+static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep);
+static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
+		struct dwc3_ep *dep, struct dwc3_request *req);
+
+static const char *dwc3_ep0_state_string(enum dwc3_ep0_state state)
+{
+	switch (state) {
+	case EP0_UNCONNECTED:
+		return "Unconnected";
+	case EP0_SETUP_PHASE:
+		return "Setup Phase";
+	case EP0_DATA_PHASE:
+		return "Data Phase";
+	case EP0_STATUS_PHASE:
+		return "Status Phase";
+	default:
+		return "UNKNOWN";
+	}
+}
+
+static int dwc3_ep0_start_trans(struct dwc3 *dwc, u8 epnum, dma_addr_t buf_dma,
+		u32 len, u32 type)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+	struct dwc3_trb			*trb;
+	struct dwc3_ep			*dep;
+
+	int				ret;
+
+	dep = dwc->eps[epnum];
+	if (dep->flags & DWC3_EP_BUSY) {
+		dev_vdbg(dwc->dev, "%s: still busy\n", dep->name);
+		return 0;
+	}
+
+	trb = dwc->ep0_trb;
+
+	trb->bpl = lower_32_bits(buf_dma);
+	trb->bph = upper_32_bits(buf_dma);
+	trb->size = len;
+	trb->ctrl = type;
+
+	trb->ctrl |= (DWC3_TRB_CTRL_HWO
+			| DWC3_TRB_CTRL_LST
+			| DWC3_TRB_CTRL_IOC
+			| DWC3_TRB_CTRL_ISP_IMI);
+
+	memset(&params, 0, sizeof(params));
+	params.param0 = upper_32_bits(dwc->ep0_trb_addr);
+	params.param1 = lower_32_bits(dwc->ep0_trb_addr);
+
+	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
+			DWC3_DEPCMD_STARTTRANSFER, &params);
+	if (ret < 0) {
+		dev_dbg(dwc->dev, "failed to send STARTTRANSFER command\n");
+		return ret;
+	}
+
+	dep->flags |= DWC3_EP_BUSY;
+	dep->resource_index = dwc3_gadget_ep_get_transfer_index(dwc,
+			dep->number);
+
+	dwc->ep0_next_event = DWC3_EP0_COMPLETE;
+
+	return 0;
+}
+
+static int __dwc3_gadget_ep0_queue(struct dwc3_ep *dep,
+		struct dwc3_request *req)
+{
+	struct dwc3		*dwc = dep->dwc;
+
+	req->request.actual	= 0;
+	req->request.status	= -EINPROGRESS;
+	req->epnum		= dep->number;
+
+	list_add_tail(&req->list, &dep->request_list);
+
+	/*
+	 * Gadget driver might not be quick enough to queue a request
+	 * before we get a Transfer Not Ready event on this endpoint.
+	 *
+	 * In that case, we will set DWC3_EP_PENDING_REQUEST. When that
+	 * flag is set, it's telling us that as soon as Gadget queues the
+	 * required request, we should kick the transfer here because the
+	 * IRQ we were waiting for is long gone.
+	 */
+	if (dep->flags & DWC3_EP_PENDING_REQUEST) {
+		unsigned	direction;
+
+		direction = !!(dep->flags & DWC3_EP0_DIR_IN);
+
+		if (dwc->ep0state != EP0_DATA_PHASE) {
+			dev_WARN(dwc->dev, "Unexpected pending request\n");
+			return 0;
+		}
+
+		__dwc3_ep0_do_control_data(dwc, dwc->eps[direction], req);
+
+		dep->flags &= ~(DWC3_EP_PENDING_REQUEST |
+				DWC3_EP0_DIR_IN);
+
+		return 0;
+	}
+
+	/*
+	 * In case gadget driver asked us to delay the STATUS phase,
+	 * handle it here.
+	 */
+	if (dwc->delayed_status) {
+		unsigned	direction;
+
+		direction = !dwc->ep0_expect_in;
+		dwc->delayed_status = false;
+
+		if (dwc->ep0state == EP0_STATUS_PHASE)
+			__dwc3_ep0_do_control_status(dwc, dwc->eps[direction]);
+		else
+			dev_dbg(dwc->dev, "too early for delayed status\n");
+
+		return 0;
+	}
+
+	/*
+	 * Unfortunately we have uncovered a limitation wrt the Data Phase.
+	 *
+	 * Section 9.4 says we can wait for the XferNotReady(DATA) event to
+ * come before issuing the Start Transfer command, but if we do, we will
+	 * miss situations where the host starts another SETUP phase instead of
+	 * the DATA phase.  Such cases happen at least on TD.7.6 of the Link
+	 * Layer Compliance Suite.
+	 *
+	 * The problem surfaces due to the fact that in case of back-to-back
+	 * SETUP packets there will be no XferNotReady(DATA) generated and we
+	 * will be stuck waiting for XferNotReady(DATA) forever.
+	 *
+	 * By looking at tables 9-13 and 9-14 of the Databook, we can see that
+	 * it tells us to start Data Phase right away. It also mentions that if
+	 * we receive a SETUP phase instead of the DATA phase, core will issue
+	 * XferComplete for the DATA phase, before actually initiating it in
+	 * the wire, with the TRB's status set to "SETUP_PENDING". Such status
+	 * can only be used to print some debugging logs, as the core expects
+	 * us to go through to the STATUS phase and start a CONTROL_STATUS TRB,
+	 * just so it completes right away, without transferring anything and,
+	 * only then, we can go back to the SETUP phase.
+	 *
+	 * Because of this scenario, SNPS decided to change the programming
+	 * model of control transfers and support on-demand transfers only for
+	 * the STATUS phase. To fix the issue we have now, we will always wait
+	 * for gadget driver to queue the DATA phase's struct usb_request, then
+	 * start it right away.
+	 *
+	 * If we're actually in a 2-stage transfer, we will wait for
+	 * XferNotReady(STATUS).
+	 */
+	if (dwc->three_stage_setup) {
+		unsigned        direction;
+
+		direction = dwc->ep0_expect_in;
+		dwc->ep0state = EP0_DATA_PHASE;
+
+		__dwc3_ep0_do_control_data(dwc, dwc->eps[direction], req);
+
+		dep->flags &= ~DWC3_EP0_DIR_IN;
+	}
+
+	return 0;
+}
+
+int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
+		gfp_t gfp_flags)
+{
+	struct dwc3_request		*req = to_dwc3_request(request);
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+
+	unsigned long			flags;
+
+	int				ret;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	if (!dep->endpoint.desc) {
+		dev_dbg(dwc->dev, "trying to queue request %p to disabled %s\n",
+				request, dep->name);
+		ret = -ESHUTDOWN;
+		goto out;
+	}
+
+	/* we share one TRB for ep0/1 */
+	if (!list_empty(&dep->request_list)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	dev_vdbg(dwc->dev, "queueing request %p to %s length %d, state '%s'\n",
+			request, dep->name, request->length,
+			dwc3_ep0_state_string(dwc->ep0state));
+
+	ret = __dwc3_gadget_ep0_queue(dep, req);
+
+out:
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+{
+	struct dwc3_ep		*dep;
+
+	/* reinitialize physical ep1 */
+	dep = dwc->eps[1];
+	dep->flags = DWC3_EP_ENABLED;
+
+	/* stall is always issued on EP0 */
+	dep = dwc->eps[0];
+	__dwc3_gadget_ep_set_halt(dep, 1);
+	dep->flags = DWC3_EP_ENABLED;
+	dwc->delayed_status = false;
+
+	if (!list_empty(&dep->request_list)) {
+		struct dwc3_request	*req;
+
+		req = next_request(&dep->request_list);
+		dwc3_gadget_giveback(dep, req, -ECONNRESET);
+	}
+
+	dwc->ep0state = EP0_SETUP_PHASE;
+	dwc3_ep0_out_start(dwc);
+}
+
+int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value)
+{
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+
+	dwc3_ep0_stall_and_restart(dwc);
+
+	return 0;
+}
+
+void dwc3_ep0_out_start(struct dwc3 *dwc)
+{
+	int				ret;
+
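+	/* a SETUP packet is always exactly 8 bytes long (USB 2.0 ch9) */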
+	ret = dwc3_ep0_start_trans(dwc, 0, dwc->ctrl_req_addr, 8,
+			DWC3_TRBCTL_CONTROL_SETUP);
+	WARN_ON(ret < 0);
+}
+
+static struct dwc3_ep *dwc3_wIndex_to_dep(struct dwc3 *dwc, __le16 wIndex_le)
+{
+	struct dwc3_ep		*dep;
+	u32			windex = le16_to_cpu(wIndex_le);
+	u32			epnum;
+
+	epnum = (windex & USB_ENDPOINT_NUMBER_MASK) << 1;
+	if ((windex & USB_ENDPOINT_DIR_MASK) == USB_DIR_IN)
+		epnum |= 1;
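+	/* e.g. wIndex 0x81 (EP1 IN) yields physical endpoint number 3 */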
+
+	dep = dwc->eps[epnum];
+	if (dep->flags & DWC3_EP_ENABLED)
+		return dep;
+
+	return NULL;
+}
+
+static void dwc3_ep0_status_cmpl(struct usb_ep *ep, struct usb_request *req)
+{
+}
+
+/*
+ * Handle GET_STATUS; see USB 2.0 spec, ch 9.4.5
+ */
+static int dwc3_ep0_handle_status(struct dwc3 *dwc,
+		struct usb_ctrlrequest *ctrl)
+{
+	struct dwc3_ep		*dep;
+	u32			recip;
+	u32			reg;
+	u16			usb_status = 0;
+	__le16			*response_pkt;
+
+	recip = ctrl->bRequestType & USB_RECIP_MASK;
+	switch (recip) {
+	case USB_RECIP_DEVICE:
+		/*
+		 * LTM will be set once we know how to set this in HW.
+		 */
+		usb_status |= dwc->is_selfpowered << USB_DEVICE_SELF_POWERED;
+
+		if (dwc->speed == DWC3_DSTS_SUPERSPEED) {
+			reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+			if (reg & DWC3_DCTL_INITU1ENA)
+				usb_status |= 1 << USB_DEV_STAT_U1_ENABLED;
+			if (reg & DWC3_DCTL_INITU2ENA)
+				usb_status |= 1 << USB_DEV_STAT_U2_ENABLED;
+		}
+
+		break;
+
+	case USB_RECIP_INTERFACE:
+		/*
+		 * Function Remote Wake Capable	D0
+		 * Function Remote Wakeup	D1
+		 */
+		break;
+
+	case USB_RECIP_ENDPOINT:
+		dep = dwc3_wIndex_to_dep(dwc, ctrl->wIndex);
+		if (!dep)
+			return -EINVAL;
+
+		if (dep->flags & DWC3_EP_STALL)
+			usb_status = 1 << USB_ENDPOINT_HALT;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	response_pkt = (__le16 *) dwc->setup_buf;
+	*response_pkt = cpu_to_le16(usb_status);
+
+	dep = dwc->eps[0];
+	dwc->ep0_usb_req.dep = dep;
+	dwc->ep0_usb_req.request.length = sizeof(*response_pkt);
+	dwc->ep0_usb_req.request.buf = dwc->setup_buf;
+	dwc->ep0_usb_req.request.complete = dwc3_ep0_status_cmpl;
+
+	return __dwc3_gadget_ep0_queue(dep, &dwc->ep0_usb_req);
+}
+
+static int dwc3_ep0_handle_feature(struct dwc3 *dwc,
+		struct usb_ctrlrequest *ctrl, int set)
+{
+	struct dwc3_ep		*dep;
+	u32			recip;
+	u32			wValue;
+	u32			wIndex;
+	u32			reg;
+	int			ret;
+	enum usb_device_state	state;
+
+	wValue = le16_to_cpu(ctrl->wValue);
+	wIndex = le16_to_cpu(ctrl->wIndex);
+	recip = ctrl->bRequestType & USB_RECIP_MASK;
+	state = dwc->gadget.state;
+
+	switch (recip) {
+	case USB_RECIP_DEVICE:
+
+		switch (wValue) {
+		case USB_DEVICE_REMOTE_WAKEUP:
+			break;
+		/*
+		 * 9.4.1 says this is valid only for SS; in AddressState, only for
+		 * default control pipe
+		 */
+		case USB_DEVICE_U1_ENABLE:
+			if (state != USB_STATE_CONFIGURED)
+				return -EINVAL;
+			if (dwc->speed != DWC3_DSTS_SUPERSPEED)
+				return -EINVAL;
+
+			reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+			if (set)
+				reg |= DWC3_DCTL_INITU1ENA;
+			else
+				reg &= ~DWC3_DCTL_INITU1ENA;
+			dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+			break;
+
+		case USB_DEVICE_U2_ENABLE:
+			if (state != USB_STATE_CONFIGURED)
+				return -EINVAL;
+			if (dwc->speed != DWC3_DSTS_SUPERSPEED)
+				return -EINVAL;
+
+			reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+			if (set)
+				reg |= DWC3_DCTL_INITU2ENA;
+			else
+				reg &= ~DWC3_DCTL_INITU2ENA;
+			dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+			break;
+
+		case USB_DEVICE_LTM_ENABLE:
+			return -EINVAL;
+
+		case USB_DEVICE_TEST_MODE:
+			if ((wIndex & 0xff) != 0)
+				return -EINVAL;
+			if (!set)
+				return -EINVAL;
+
+			dwc->test_mode_nr = wIndex >> 8;
+			dwc->test_mode = true;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+
+	case USB_RECIP_INTERFACE:
+		switch (wValue) {
+		case USB_INTRF_FUNC_SUSPEND:
+			if (wIndex & USB_INTRF_FUNC_SUSPEND_LP)
+				/* XXX enable Low power suspend */
+				;
+			if (wIndex & USB_INTRF_FUNC_SUSPEND_RW)
+				/* XXX enable remote wakeup */
+				;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+
+	case USB_RECIP_ENDPOINT:
+		switch (wValue) {
+		case USB_ENDPOINT_HALT:
+			dep = dwc3_wIndex_to_dep(dwc, wIndex);
+			if (!dep)
+				return -EINVAL;
+			ret = __dwc3_gadget_ep_set_halt(dep, set);
+			if (ret)
+				return -EINVAL;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int dwc3_ep0_set_address(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+{
+	enum usb_device_state state = dwc->gadget.state;
+	u32 addr;
+	u32 reg;
+
+	addr = le16_to_cpu(ctrl->wValue);
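+	/* USB device addresses are only 7 bits wide, so > 127 is invalid */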
+	if (addr > 127) {
+		dev_dbg(dwc->dev, "invalid device address %d\n", addr);
+		return -EINVAL;
+	}
+
+	if (state == USB_STATE_CONFIGURED) {
+		dev_dbg(dwc->dev, "trying to set address when configured\n");
+		return -EINVAL;
+	}
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+	reg &= ~(DWC3_DCFG_DEVADDR_MASK);
+	reg |= DWC3_DCFG_DEVADDR(addr);
+	dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+
+	if (addr)
+		usb_gadget_set_state(&dwc->gadget, USB_STATE_ADDRESS);
+	else
+		usb_gadget_set_state(&dwc->gadget, USB_STATE_DEFAULT);
+
+	return 0;
+}
+
+static int dwc3_ep0_delegate_req(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+{
+	int ret;
+
+	spin_unlock(&dwc->lock);
+	ret = dwc->gadget_driver->setup(&dwc->gadget, ctrl);
+	spin_lock(&dwc->lock);
+	return ret;
+}
+
+static int dwc3_ep0_set_config(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+{
+	enum usb_device_state state = dwc->gadget.state;
+	u32 cfg;
+	int ret;
+	u32 reg;
+
+	dwc->start_config_issued = false;
+	cfg = le16_to_cpu(ctrl->wValue);
+
+	switch (state) {
+	case USB_STATE_DEFAULT:
+		return -EINVAL;
+
+	case USB_STATE_ADDRESS:
+		ret = dwc3_ep0_delegate_req(dwc, ctrl);
+		/* if the cfg matches and the cfg is non zero */
+		if (cfg && (!ret || (ret == USB_GADGET_DELAYED_STATUS))) {
+			usb_gadget_set_state(&dwc->gadget,
+					USB_STATE_CONFIGURED);
+
+			/*
+			 * Enable transition to U1/U2 state when
+			 * nothing is pending from application.
+			 */
+			reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+			reg |= (DWC3_DCTL_ACCEPTU1ENA | DWC3_DCTL_ACCEPTU2ENA);
+			dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
+			dwc->resize_fifos = true;
+			dev_dbg(dwc->dev, "resize fifos flag SET\n");
+		}
+		break;
+
+	case USB_STATE_CONFIGURED:
+		ret = dwc3_ep0_delegate_req(dwc, ctrl);
+		if (!cfg)
+			usb_gadget_set_state(&dwc->gadget,
+					USB_STATE_ADDRESS);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static void dwc3_ep0_set_sel_cmpl(struct usb_ep *ep, struct usb_request *req)
+{
+	struct dwc3_ep	*dep = to_dwc3_ep(ep);
+	struct dwc3	*dwc = dep->dwc;
+
+	u32		param = 0;
+	u32		reg;
+
+	struct timing {
+		u8	u1sel;
+		u8	u1pel;
+		u16	u2sel;
+		u16	u2pel;
+	} __packed timing;
+
+	int		ret;
+
+	memcpy(&timing, req->buf, sizeof(timing));
+
+	dwc->u1sel = timing.u1sel;
+	dwc->u1pel = timing.u1pel;
+	dwc->u2sel = le16_to_cpu(timing.u2sel);
+	dwc->u2pel = le16_to_cpu(timing.u2pel);
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+	if (reg & DWC3_DCTL_INITU2ENA)
+		param = dwc->u2pel;
+	if (reg & DWC3_DCTL_INITU1ENA)
+		param = dwc->u1pel;
+
+	/*
+	 * According to Synopsys Databook, if parameter is
+	 * greater than 125, a value of zero should be
+	 * programmed in the register.
+	 */
+	if (param > 125)
+		param = 0;
+
+	/* now that we have the time, issue DGCMD Set Sel */
+	ret = dwc3_send_gadget_generic_command(dwc,
+			DWC3_DGCMD_SET_PERIODIC_PAR, param);
+	WARN_ON(ret < 0);
+}
+
+static int dwc3_ep0_set_sel(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+{
+	struct dwc3_ep	*dep;
+	enum usb_device_state state = dwc->gadget.state;
+	u16		wLength;
+	u16		wValue;
+
+	if (state == USB_STATE_DEFAULT)
+		return -EINVAL;
+
+	wValue = le16_to_cpu(ctrl->wValue);
+	wLength = le16_to_cpu(ctrl->wLength);
+
+	if (wLength != 6) {
+		dev_err(dwc->dev, "Set SEL should be 6 bytes, got %d\n",
+				wLength);
+		return -EINVAL;
+	}
+
+	/*
+	 * To handle Set SEL we need to receive 6 bytes from Host. So let's
+	 * queue a usb_request for 6 bytes.
+	 *
+	 * Remember, though, this controller can't handle non-wMaxPacketSize
+	 * aligned transfers on the OUT direction, so we queue a request for
+	 * wMaxPacketSize instead.
+	 */
+	dep = dwc->eps[0];
+	dwc->ep0_usb_req.dep = dep;
+	dwc->ep0_usb_req.request.length = dep->endpoint.maxpacket;
+	dwc->ep0_usb_req.request.buf = dwc->setup_buf;
+	dwc->ep0_usb_req.request.complete = dwc3_ep0_set_sel_cmpl;
+
+	return __dwc3_gadget_ep0_queue(dep, &dwc->ep0_usb_req);
+}
+
+static int dwc3_ep0_set_isoch_delay(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+{
+	u16		wLength;
+	u16		wValue;
+	u16		wIndex;
+
+	wValue = le16_to_cpu(ctrl->wValue);
+	wLength = le16_to_cpu(ctrl->wLength);
+	wIndex = le16_to_cpu(ctrl->wIndex);
+
+	if (wIndex || wLength)
+		return -EINVAL;
+
+	/*
+	 * REVISIT It's unclear from Databook what to do with this
+	 * value. For now, just cache it.
+	 */
+	dwc->isoch_delay = wValue;
+
+	return 0;
+}
+
+static int dwc3_ep0_std_request(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+{
+	int ret;
+
+	switch (ctrl->bRequest) {
+	case USB_REQ_GET_STATUS:
+		dev_vdbg(dwc->dev, "USB_REQ_GET_STATUS\n");
+		ret = dwc3_ep0_handle_status(dwc, ctrl);
+		break;
+	case USB_REQ_CLEAR_FEATURE:
+		dev_vdbg(dwc->dev, "USB_REQ_CLEAR_FEATURE\n");
+		ret = dwc3_ep0_handle_feature(dwc, ctrl, 0);
+		break;
+	case USB_REQ_SET_FEATURE:
+		dev_vdbg(dwc->dev, "USB_REQ_SET_FEATURE\n");
+		ret = dwc3_ep0_handle_feature(dwc, ctrl, 1);
+		break;
+	case USB_REQ_SET_ADDRESS:
+		dev_vdbg(dwc->dev, "USB_REQ_SET_ADDRESS\n");
+		ret = dwc3_ep0_set_address(dwc, ctrl);
+		break;
+	case USB_REQ_SET_CONFIGURATION:
+		dev_vdbg(dwc->dev, "USB_REQ_SET_CONFIGURATION\n");
+		ret = dwc3_ep0_set_config(dwc, ctrl);
+		break;
+	case USB_REQ_SET_SEL:
+		dev_vdbg(dwc->dev, "USB_REQ_SET_SEL\n");
+		ret = dwc3_ep0_set_sel(dwc, ctrl);
+		break;
+	case USB_REQ_SET_ISOCH_DELAY:
+		dev_vdbg(dwc->dev, "USB_REQ_SET_ISOCH_DELAY\n");
+		ret = dwc3_ep0_set_isoch_delay(dwc, ctrl);
+		break;
+	default:
+		dev_vdbg(dwc->dev, "Forwarding to gadget driver\n");
+		ret = dwc3_ep0_delegate_req(dwc, ctrl);
+		break;
+	}
+
+	return ret;
+}
+
+static void dwc3_ep0_inspect_setup(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	struct usb_ctrlrequest *ctrl = dwc->ctrl_req;
+	int ret = -EINVAL;
+	u32 len;
+
+	if (!dwc->gadget_driver)
+		goto out;
+
+	len = le16_to_cpu(ctrl->wLength);
+	if (!len) {
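+		/* no DATA stage: this is a two-stage (SETUP + STATUS) transfer */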
+		dwc->three_stage_setup = false;
+		dwc->ep0_expect_in = false;
+		dwc->ep0_next_event = DWC3_EP0_NRDY_STATUS;
+	} else {
+		dwc->three_stage_setup = true;
+		dwc->ep0_expect_in = !!(ctrl->bRequestType & USB_DIR_IN);
+		dwc->ep0_next_event = DWC3_EP0_NRDY_DATA;
+	}
+
+	if ((ctrl->bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD)
+		ret = dwc3_ep0_std_request(dwc, ctrl);
+	else
+		ret = dwc3_ep0_delegate_req(dwc, ctrl);
+
+	if (ret == USB_GADGET_DELAYED_STATUS)
+		dwc->delayed_status = true;
+
+out:
+	if (ret < 0)
+		dwc3_ep0_stall_and_restart(dwc);
+}
+
+static void dwc3_ep0_complete_data(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	struct dwc3_request	*r = NULL;
+	struct usb_request	*ur;
+	struct dwc3_trb		*trb;
+	struct dwc3_ep		*ep0;
+	u32			transferred;
+	u32			status;
+	u32			length;
+	u8			epnum;
+
+	epnum = event->endpoint_number;
+	ep0 = dwc->eps[0];
+
+	dwc->ep0_next_event = DWC3_EP0_NRDY_STATUS;
+
+	r = next_request(&ep0->request_list);
+	ur = &r->request;
+
+	trb = dwc->ep0_trb;
+
+	status = DWC3_TRB_SIZE_TRBSTS(trb->size);
+	if (status == DWC3_TRBSTS_SETUP_PENDING) {
+		dev_dbg(dwc->dev, "Setup Pending received\n");
+
+		if (r)
+			dwc3_gadget_giveback(ep0, r, -ECONNRESET);
+
+		return;
+	}
+
+	length = trb->size & DWC3_TRB_SIZE_MASK;
+
+	if (dwc->ep0_bounced) {
+		unsigned transfer_size = ur->length;
+		unsigned maxp = ep0->endpoint.maxpacket;
+
+		transfer_size += (maxp - (transfer_size % maxp));
+		transferred = min_t(u32, ur->length,
+				transfer_size - length);
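+		/*
+		 * Worked example with assumed values: maxp = 512 and
+		 * ur->length = 6. The TRB is armed for a full 512-byte
+		 * packet, so after the host sends 6 bytes the TRB reports
+		 * length = 506 and transferred = min(6, 512 - 506) = 6.
+		 */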
+		memcpy(ur->buf, dwc->ep0_bounce, transferred);
+	} else {
+		transferred = ur->length - length;
+	}
+
+	ur->actual += transferred;
+
+	if ((epnum & 1) && ur->actual < ur->length) {
+		/* for some reason we did not get everything out */
+
+		dwc3_ep0_stall_and_restart(dwc);
+	} else {
+		/*
+		 * handle the case where we have to send a zero packet. This
+		 * seems to be the case when req.length > maxpacket. Could it be?
+		 */
+		if (r)
+			dwc3_gadget_giveback(ep0, r, 0);
+	}
+}
+
+static void dwc3_ep0_complete_status(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	struct dwc3_request	*r;
+	struct dwc3_ep		*dep;
+	struct dwc3_trb		*trb;
+	u32			status;
+
+	dep = dwc->eps[0];
+	trb = dwc->ep0_trb;
+
+	if (!list_empty(&dep->request_list)) {
+		r = next_request(&dep->request_list);
+
+		dwc3_gadget_giveback(dep, r, 0);
+	}
+
+	if (dwc->test_mode) {
+		int ret;
+
+		ret = dwc3_gadget_set_test_mode(dwc, dwc->test_mode_nr);
+		if (ret < 0) {
+			dev_dbg(dwc->dev, "Invalid Test #%d\n",
+					dwc->test_mode_nr);
+			dwc3_ep0_stall_and_restart(dwc);
+			return;
+		}
+	}
+
+	status = DWC3_TRB_SIZE_TRBSTS(trb->size);
+	if (status == DWC3_TRBSTS_SETUP_PENDING)
+		dev_dbg(dwc->dev, "Setup Pending received\n");
+
+	dwc->ep0state = EP0_SETUP_PHASE;
+	dwc3_ep0_out_start(dwc);
+}
+
+static void dwc3_ep0_xfer_complete(struct dwc3 *dwc,
+			const struct dwc3_event_depevt *event)
+{
+	struct dwc3_ep		*dep = dwc->eps[event->endpoint_number];
+
+	dep->flags &= ~DWC3_EP_BUSY;
+	dep->resource_index = 0;
+	dwc->setup_packet_pending = false;
+
+	switch (dwc->ep0state) {
+	case EP0_SETUP_PHASE:
+		dev_vdbg(dwc->dev, "Inspecting Setup Bytes\n");
+		dwc3_ep0_inspect_setup(dwc, event);
+		break;
+
+	case EP0_DATA_PHASE:
+		dev_vdbg(dwc->dev, "Data Phase\n");
+		dwc3_ep0_complete_data(dwc, event);
+		break;
+
+	case EP0_STATUS_PHASE:
+		dev_vdbg(dwc->dev, "Status Phase\n");
+		dwc3_ep0_complete_status(dwc, event);
+		break;
+	default:
+		WARN(true, "UNKNOWN ep0state %d\n", dwc->ep0state);
+	}
+}
+
+static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
+		struct dwc3_ep *dep, struct dwc3_request *req)
+{
+	int			ret;
+
+	req->direction = !!dep->number;
+
+	if (req->request.length == 0) {
+		ret = dwc3_ep0_start_trans(dwc, dep->number,
+				dwc->ctrl_req_addr, 0,
+				DWC3_TRBCTL_CONTROL_DATA);
+	} else if (!IS_ALIGNED(req->request.length, dep->endpoint.maxpacket)
+			&& (dep->number == 0)) {
+		u32	transfer_size;
+		u32	maxpacket;
+
+		ret = usb_gadget_map_request(&dwc->gadget, &req->request,
+				dep->number);
+		if (ret) {
+			dev_dbg(dwc->dev, "failed to map request\n");
+			return;
+		}
+
+		WARN_ON(req->request.length > DWC3_EP0_BOUNCE_SIZE);
+
+		maxpacket = dep->endpoint.maxpacket;
+		transfer_size = roundup(req->request.length, maxpacket);
+
+		dwc->ep0_bounced = true;
+
+		/*
+		 * REVISIT in case request length is bigger than
+		 * DWC3_EP0_BOUNCE_SIZE we will need two chained
+		 * TRBs to handle the transfer.
+		 */
+		ret = dwc3_ep0_start_trans(dwc, dep->number,
+				dwc->ep0_bounce_addr, transfer_size,
+				DWC3_TRBCTL_CONTROL_DATA);
+	} else {
+		ret = usb_gadget_map_request(&dwc->gadget, &req->request,
+				dep->number);
+		if (ret) {
+			dev_dbg(dwc->dev, "failed to map request\n");
+			return;
+		}
+
+		ret = dwc3_ep0_start_trans(dwc, dep->number, req->request.dma,
+				req->request.length, DWC3_TRBCTL_CONTROL_DATA);
+	}
+
+	WARN_ON(ret < 0);
+}
+
+static int dwc3_ep0_start_control_status(struct dwc3_ep *dep)
+{
+	struct dwc3		*dwc = dep->dwc;
+	u32			type;
+
+	type = dwc->three_stage_setup ? DWC3_TRBCTL_CONTROL_STATUS3
+		: DWC3_TRBCTL_CONTROL_STATUS2;
+
+	return dwc3_ep0_start_trans(dwc, dep->number,
+			dwc->ctrl_req_addr, 0, type);
+}
+
+static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep)
+{
+	if (dwc->resize_fifos) {
+		dev_dbg(dwc->dev, "starting to resize fifos\n");
+		dwc3_gadget_resize_tx_fifos(dwc);
+		dwc->resize_fifos = 0;
+	}
+
+	WARN_ON(dwc3_ep0_start_control_status(dep));
+}
+
+static void dwc3_ep0_do_control_status(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	struct dwc3_ep		*dep = dwc->eps[event->endpoint_number];
+
+	__dwc3_ep0_do_control_status(dwc, dep);
+}
+
+static void dwc3_ep0_end_control_data(struct dwc3 *dwc, struct dwc3_ep *dep)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+	u32			cmd;
+	int			ret;
+
+	if (!dep->resource_index)
+		return;
+
+	cmd = DWC3_DEPCMD_ENDTRANSFER;
+	cmd |= DWC3_DEPCMD_CMDIOC;
+	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
+	memset(&params, 0, sizeof(params));
+	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params);
+	WARN_ON_ONCE(ret);
+	dep->resource_index = 0;
+}
+
+static void dwc3_ep0_xfernotready(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	dwc->setup_packet_pending = true;
+
+	switch (event->status) {
+	case DEPEVT_STATUS_CONTROL_DATA:
+		dev_vdbg(dwc->dev, "Control Data\n");
+
+		/*
+		 * We already have a DATA transfer in the controller's cache,
+		 * if we receive a XferNotReady(DATA) we will ignore it, unless
+		 * it's for the wrong direction.
+		 *
+		 * In that case, we must issue END_TRANSFER command to the Data
+		 * Phase we already have started and issue SetStall on the
+		 * control endpoint.
+		 */
+		if (dwc->ep0_expect_in != event->endpoint_number) {
+			struct dwc3_ep	*dep = dwc->eps[dwc->ep0_expect_in];
+
+			dev_vdbg(dwc->dev, "Wrong direction for Data phase\n");
+			dwc3_ep0_end_control_data(dwc, dep);
+			dwc3_ep0_stall_and_restart(dwc);
+			return;
+		}
+
+		break;
+
+	case DEPEVT_STATUS_CONTROL_STATUS:
+		if (dwc->ep0_next_event != DWC3_EP0_NRDY_STATUS)
+			return;
+
+		dev_vdbg(dwc->dev, "Control Status\n");
+
+		dwc->ep0state = EP0_STATUS_PHASE;
+
+		if (dwc->delayed_status) {
+			WARN_ON_ONCE(event->endpoint_number != 1);
+			dev_vdbg(dwc->dev, "Mass Storage delayed status\n");
+			return;
+		}
+
+		dwc3_ep0_do_control_status(dwc, event);
+	}
+}
+
+void dwc3_ep0_interrupt(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	u8			epnum = event->endpoint_number;
+
+	dev_dbg(dwc->dev, "%s while ep%d%s in state '%s'\n",
+			dwc3_ep_event_string(event->endpoint_event),
+			epnum >> 1, (epnum & 1) ? "in" : "out",
+			dwc3_ep0_state_string(dwc->ep0state));
+
+	switch (event->endpoint_event) {
+	case DWC3_DEPEVT_XFERCOMPLETE:
+		dwc3_ep0_xfer_complete(dwc, event);
+		break;
+
+	case DWC3_DEPEVT_XFERNOTREADY:
+		dwc3_ep0_xfernotready(dwc, event);
+		break;
+
+	case DWC3_DEPEVT_XFERINPROGRESS:
+	case DWC3_DEPEVT_RXTXFIFOEVT:
+	case DWC3_DEPEVT_STREAMEVT:
+	case DWC3_DEPEVT_EPCMDCMPLT:
+		break;
+	}
+}
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
new file mode 100644
index 0000000..2b6e7e0
--- /dev/null
+++ b/drivers/usb/dwc3/gadget.c
@@ -0,0 +1,2754 @@
+/**
+ * gadget.c - DesignWare USB3 DRD Controller Gadget Framework Link
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/list.h>
+#include <linux/dma-mapping.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+
+#include "core.h"
+#include "gadget.h"
+#include "io.h"
+
+/**
+ * dwc3_gadget_set_test_mode - Enables USB2 Test Modes
+ * @dwc: pointer to our context structure
+ * @mode: the mode to set (J, K SE0 NAK, Force Enable)
+ *
+ * Caller should take care of locking. This function will
+ * return 0 on success or -EINVAL if wrong Test Selector
+ * is passed
+ */
+int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode)
+{
+	u32		reg;
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+	reg &= ~DWC3_DCTL_TSTCTRL_MASK;
+
+	switch (mode) {
+	case TEST_J:
+	case TEST_K:
+	case TEST_SE0_NAK:
+	case TEST_PACKET:
+	case TEST_FORCE_EN:
+		reg |= mode << 1;
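+		/* the selector lives in DCTL.TstCtl (bits 4:1), hence the shift */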
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
+	return 0;
+}
+
+/**
+ * dwc3_gadget_set_link_state - Sets USB Link to a particular State
+ * @dwc: pointer to our context structure
+ * @state: the state to put link into
+ *
+ * Caller should take care of locking. This function will
+ * return 0 on success or -ETIMEDOUT.
+ */
+int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state)
+{
+	int		retries = 10000;
+	u32		reg;
+
+	/*
+	 * Wait until device controller is ready. Only applies to 1.94a and
+	 * later RTL.
+	 */
+	if (dwc->revision >= DWC3_REVISION_194A) {
+		while (--retries) {
+			reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+			if (reg & DWC3_DSTS_DCNRD)
+				udelay(5);
+			else
+				break;
+		}
+
+		if (retries <= 0)
+			return -ETIMEDOUT;
+	}
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+	reg &= ~DWC3_DCTL_ULSTCHNGREQ_MASK;
+
+	/* set requested state */
+	reg |= DWC3_DCTL_ULSTCHNGREQ(state);
+	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
+	/*
+	 * The following code is racy when called from dwc3_gadget_wakeup,
+	 * and is not needed, at least on newer versions
+	 */
+	if (dwc->revision >= DWC3_REVISION_194A)
+		return 0;
+
+	/* wait for a change in DSTS */
+	retries = 10000;
+	while (--retries) {
+		reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+
+		if (DWC3_DSTS_USBLNKST(reg) == state)
+			return 0;
+
+		udelay(5);
+	}
+
+	dev_vdbg(dwc->dev, "link state change request timed out\n");
+
+	return -ETIMEDOUT;
+}
+
+/**
+ * dwc3_gadget_resize_tx_fifos - reallocate fifo spaces for current use-case
+ * @dwc: pointer to our context structure
+ *
+ * This function will do a best-effort FIFO allocation in order
+ * to improve FIFO usage and throughput, while still allowing
+ * us to enable as many endpoints as possible.
+ *
+ * Keep in mind that this operation is highly dependent on the
+ * configured size of RAM1 (which holds the TxFIFOs), the number
+ * of endpoints enabled in coreConsultant, and the width of the
+ * master bus.
+ *
+ * In the ideal world, we would always be able to satisfy the
+ * following equation:
+ *
+ * ((512 + 2 * MDWIDTH-Bytes) + (Number of IN Endpoints - 1) * \
+ * (3 * (1024 + MDWIDTH-Bytes) + MDWIDTH-Bytes)) / MDWIDTH-Bytes
+ *
+ * Unfortunately, due to many variables that's not always the case.
+ */
+int dwc3_gadget_resize_tx_fifos(struct dwc3 *dwc)
+{
+	int		last_fifo_depth = 0;
+	int		ram1_depth;
+	int		fifo_size;
+	int		mdwidth;
+	int		num;
+
+	if (!dwc->needs_fifo_resize)
+		return 0;
+
+	ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
+	mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
+
+	/* MDWIDTH is represented in bits, we need it in bytes */
+	mdwidth >>= 3;
+
+	/*
+	 * FIXME For now we will only allocate 1 wMaxPacketSize space
+	 * for each enabled endpoint, later patches will come to
+	 * improve this algorithm so that we better use the internal
+	 * FIFO space
+	 */
+	for (num = 0; num < DWC3_ENDPOINTS_NUM; num++) {
+		struct dwc3_ep	*dep = dwc->eps[num];
+		int		fifo_number = dep->number >> 1;
+		int		mult = 1;
+		int		tmp;
+
+		if (!(dep->number & 1))
+			continue;
+
+		if (!(dep->flags & DWC3_EP_ENABLED))
+			continue;
+
+		if (usb_endpoint_xfer_bulk(dep->endpoint.desc)
+				|| usb_endpoint_xfer_isoc(dep->endpoint.desc))
+			mult = 3;
+
+		/*
+		 * REVISIT: the following assumes we will always have enough
+		 * space available on the FIFO RAM for all possible use cases.
+		 * Make sure that's true somehow and change FIFO allocation
+		 * accordingly.
+		 *
+		 * If we have Bulk or Isochronous endpoints, we want
+		 * them to be able to be very, very fast. So we're giving
+		 * those endpoints a fifo_size which is enough for 3 full
+		 * packets
+		 */
+		tmp = mult * (dep->endpoint.maxpacket + mdwidth);
+		tmp += mdwidth;
+
+		fifo_size = DIV_ROUND_UP(tmp, mdwidth);
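+		/*
+		 * Example with assumed values: a bulk endpoint with
+		 * maxpacket = 1024 on an 8-byte wide master bus gives
+		 * tmp = 3 * (1024 + 8) + 8 = 3104 bytes, i.e. a depth of
+		 * DIV_ROUND_UP(3104, 8) = 388 RAM words for this FIFO.
+		 */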
+
+		fifo_size |= (last_fifo_depth << 16);
+
+		dev_vdbg(dwc->dev, "%s: Fifo Addr %04x Size %d\n",
+				dep->name, last_fifo_depth, fifo_size & 0xffff);
+
+		dwc3_writel(dwc->regs, DWC3_GTXFIFOSIZ(fifo_number),
+				fifo_size);
+
+		last_fifo_depth += (fifo_size & 0xffff);
+	}
+
+	return 0;
+}
+
+void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
+		int status)
+{
+	struct dwc3			*dwc = dep->dwc;
+	int				i;
+
+	if (req->queued) {
+		i = 0;
+		do {
+			dep->busy_slot++;
+			/*
+			 * Skip LINK TRB. We can't use req->trb and check for
+			 * DWC3_TRBCTL_LINK_TRB because it points to the TRB we
+			 * just completed (not the LINK TRB).
+			 */
+			if (((dep->busy_slot & DWC3_TRB_MASK) ==
+				DWC3_TRB_NUM - 1) &&
+				usb_endpoint_xfer_isoc(dep->endpoint.desc))
+				dep->busy_slot++;
+		} while (++i < req->request.num_mapped_sgs);
+		req->queued = false;
+	}
+	list_del(&req->list);
+	req->trb = NULL;
+
+	if (req->request.status == -EINPROGRESS)
+		req->request.status = status;
+
+	if (dwc->ep0_bounced && dep->number == 0)
+		dwc->ep0_bounced = false;
+	else
+		usb_gadget_unmap_request(&dwc->gadget, &req->request,
+				req->direction);
+
+	dev_dbg(dwc->dev, "request %p from %s completed %d/%d ===> %d\n",
+			req, dep->name, req->request.actual,
+			req->request.length, status);
+
+	spin_unlock(&dwc->lock);
+	req->request.complete(&dep->endpoint, &req->request);
+	spin_lock(&dwc->lock);
+}
+
+static const char *dwc3_gadget_ep_cmd_string(u8 cmd)
+{
+	switch (cmd) {
+	case DWC3_DEPCMD_DEPSTARTCFG:
+		return "Start New Configuration";
+	case DWC3_DEPCMD_ENDTRANSFER:
+		return "End Transfer";
+	case DWC3_DEPCMD_UPDATETRANSFER:
+		return "Update Transfer";
+	case DWC3_DEPCMD_STARTTRANSFER:
+		return "Start Transfer";
+	case DWC3_DEPCMD_CLEARSTALL:
+		return "Clear Stall";
+	case DWC3_DEPCMD_SETSTALL:
+		return "Set Stall";
+	case DWC3_DEPCMD_GETEPSTATE:
+		return "Get Endpoint State";
+	case DWC3_DEPCMD_SETTRANSFRESOURCE:
+		return "Set Endpoint Transfer Resource";
+	case DWC3_DEPCMD_SETEPCONFIG:
+		return "Set Endpoint Configuration";
+	default:
+		return "UNKNOWN command";
+	}
+}
+
+int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param)
+{
+	u32		timeout = 500;
+	u32		reg;
+
+	dwc3_writel(dwc->regs, DWC3_DGCMDPAR, param);
+	dwc3_writel(dwc->regs, DWC3_DGCMD, cmd | DWC3_DGCMD_CMDACT);
+
+	do {
+		reg = dwc3_readl(dwc->regs, DWC3_DGCMD);
+		if (!(reg & DWC3_DGCMD_CMDACT)) {
+			dev_vdbg(dwc->dev, "Command Complete --> %d\n",
+					DWC3_DGCMD_STATUS(reg));
+			return 0;
+		}
+
+		/*
+		 * We can't sleep here, because it's also called from
+		 * interrupt context.
+		 */
+		timeout--;
+		if (!timeout)
+			return -ETIMEDOUT;
+		udelay(1);
+	} while (1);
+}
+
+int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+		unsigned cmd, struct dwc3_gadget_ep_cmd_params *params)
+{
+	struct dwc3_ep		*dep = dwc->eps[ep];
+	u32			timeout = 500;
+	u32			reg;
+
+	dev_vdbg(dwc->dev, "%s: cmd '%s' params %08x %08x %08x\n",
+			dep->name,
+			dwc3_gadget_ep_cmd_string(cmd), params->param0,
+			params->param1, params->param2);
+
+	dwc3_writel(dwc->regs, DWC3_DEPCMDPAR0(ep), params->param0);
+	dwc3_writel(dwc->regs, DWC3_DEPCMDPAR1(ep), params->param1);
+	dwc3_writel(dwc->regs, DWC3_DEPCMDPAR2(ep), params->param2);
+
+	dwc3_writel(dwc->regs, DWC3_DEPCMD(ep), cmd | DWC3_DEPCMD_CMDACT);
+	do {
+		reg = dwc3_readl(dwc->regs, DWC3_DEPCMD(ep));
+		if (!(reg & DWC3_DEPCMD_CMDACT)) {
+			dev_vdbg(dwc->dev, "Command Complete --> %d\n",
+					DWC3_DEPCMD_STATUS(reg));
+			return 0;
+		}
+
+		/*
+		 * We can't sleep here, because it is also called from
+		 * interrupt context.
+		 */
+		timeout--;
+		if (!timeout)
+			return -ETIMEDOUT;
+
+		udelay(1);
+	} while (1);
+}
+
+static dma_addr_t dwc3_trb_dma_offset(struct dwc3_ep *dep,
+		struct dwc3_trb *trb)
+{
+	u32		offset = (char *) trb - (char *) dep->trb_pool;
+
+	return dep->trb_pool_dma + offset;
+}
+
+static int dwc3_alloc_trb_pool(struct dwc3_ep *dep)
+{
+	struct dwc3		*dwc = dep->dwc;
+
+	if (dep->trb_pool)
+		return 0;
+
+	if (dep->number == 0 || dep->number == 1)
+		return 0;
+
+	dep->trb_pool = dma_alloc_coherent(dwc->dev,
+			sizeof(struct dwc3_trb) * DWC3_TRB_NUM,
+			&dep->trb_pool_dma, GFP_KERNEL);
+	if (!dep->trb_pool) {
+		dev_err(dep->dwc->dev, "failed to allocate trb pool for %s\n",
+				dep->name);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void dwc3_free_trb_pool(struct dwc3_ep *dep)
+{
+	struct dwc3		*dwc = dep->dwc;
+
+	dma_free_coherent(dwc->dev, sizeof(struct dwc3_trb) * DWC3_TRB_NUM,
+			dep->trb_pool, dep->trb_pool_dma);
+
+	dep->trb_pool = NULL;
+	dep->trb_pool_dma = 0;
+}
+
+static int dwc3_gadget_start_config(struct dwc3 *dwc, struct dwc3_ep *dep)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+	u32			cmd;
+
+	memset(&params, 0x00, sizeof(params));
+
+	if (dep->number != 1) {
+		cmd = DWC3_DEPCMD_DEPSTARTCFG;
+		/* XferRscIdx == 0 for ep0 and 2 for the remaining */
+		if (dep->number > 1) {
+			if (dwc->start_config_issued)
+				return 0;
+			dwc->start_config_issued = true;
+			cmd |= DWC3_DEPCMD_PARAM(2);
+		}
+
+		return dwc3_send_gadget_ep_cmd(dwc, 0, cmd, &params);
+	}
+
+	return 0;
+}
+
+static int dwc3_gadget_set_ep_config(struct dwc3 *dwc, struct dwc3_ep *dep,
+		const struct usb_endpoint_descriptor *desc,
+		const struct usb_ss_ep_comp_descriptor *comp_desc,
+		bool ignore)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+
+	memset(&params, 0x00, sizeof(params));
+
+	params.param0 = DWC3_DEPCFG_EP_TYPE(usb_endpoint_type(desc))
+		| DWC3_DEPCFG_MAX_PACKET_SIZE(usb_endpoint_maxp(desc));
+
+	/* Burst size is only needed in SuperSpeed mode */
+	if (dwc->gadget.speed == USB_SPEED_SUPER) {
+		u32 burst = dep->endpoint.maxburst - 1;
+
+		params.param0 |= DWC3_DEPCFG_BURST_SIZE(burst);
+	}
+
+	if (ignore)
+		params.param0 |= DWC3_DEPCFG_IGN_SEQ_NUM;
+
+	params.param1 = DWC3_DEPCFG_XFER_COMPLETE_EN
+		| DWC3_DEPCFG_XFER_NOT_READY_EN;
+
+	if (usb_ss_max_streams(comp_desc) && usb_endpoint_xfer_bulk(desc)) {
+		params.param1 |= DWC3_DEPCFG_STREAM_CAPABLE
+			| DWC3_DEPCFG_STREAM_EVENT_EN;
+		dep->stream_capable = true;
+	}
+
+	if (usb_endpoint_xfer_isoc(desc))
+		params.param1 |= DWC3_DEPCFG_XFER_IN_PROGRESS_EN;
+
+	/*
+	 * We are doing 1:1 mapping for endpoints, meaning
	 * Physical Endpoint 2 maps to Logical Endpoint 2 and
+	 * so on. We consider the direction bit as part of the physical
+	 * endpoint number. So USB endpoint 0x81 is 0x03.
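+	 * In general, physical = (logical << 1) | direction: ep1out
+	 * maps to 2, ep1in to 3, ep2out to 4, and so on.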
+	 */
+	params.param1 |= DWC3_DEPCFG_EP_NUMBER(dep->number);
+
+	/*
+	 * We must use the lower 16 TX FIFOs even though
+	 * HW might have more
+	 */
+	if (dep->direction)
+		params.param0 |= DWC3_DEPCFG_FIFO_NUMBER(dep->number >> 1);
+
+	if (desc->bInterval) {
+		params.param1 |= DWC3_DEPCFG_BINTERVAL_M1(desc->bInterval - 1);
+		dep->interval = 1 << (desc->bInterval - 1);
+	}
+
+	return dwc3_send_gadget_ep_cmd(dwc, dep->number,
+			DWC3_DEPCMD_SETEPCONFIG, &params);
+}
+
+static int dwc3_gadget_set_xfer_resource(struct dwc3 *dwc, struct dwc3_ep *dep)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+
+	memset(&params, 0x00, sizeof(params));
+
+	params.param0 = DWC3_DEPXFERCFG_NUM_XFER_RES(1);
+
+	return dwc3_send_gadget_ep_cmd(dwc, dep->number,
+			DWC3_DEPCMD_SETTRANSFRESOURCE, &params);
+}
+
+/**
+ * __dwc3_gadget_ep_enable - Initializes a HW endpoint
+ * @dep: endpoint to be initialized
+ * @desc: USB Endpoint Descriptor
+ *
+ * Caller should take care of locking
+ */
+static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep,
+		const struct usb_endpoint_descriptor *desc,
+		const struct usb_ss_ep_comp_descriptor *comp_desc,
+		bool ignore)
+{
+	struct dwc3		*dwc = dep->dwc;
+	u32			reg;
+	int			ret = -ENOMEM;
+
+	if (!(dep->flags & DWC3_EP_ENABLED)) {
+		ret = dwc3_gadget_start_config(dwc, dep);
+		if (ret)
+			return ret;
+	}
+
+	ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc, ignore);
+	if (ret)
+		return ret;
+
+	if (!(dep->flags & DWC3_EP_ENABLED)) {
+		struct dwc3_trb	*trb_st_hw;
+		struct dwc3_trb	*trb_link;
+
+		ret = dwc3_gadget_set_xfer_resource(dwc, dep);
+		if (ret)
+			return ret;
+
+		dep->endpoint.desc = desc;
+		dep->comp_desc = comp_desc;
+		dep->type = usb_endpoint_type(desc);
+		dep->flags |= DWC3_EP_ENABLED;
+
+		reg = dwc3_readl(dwc->regs, DWC3_DALEPENA);
+		reg |= DWC3_DALEPENA_EP(dep->number);
+		dwc3_writel(dwc->regs, DWC3_DALEPENA, reg);
+
+		if (!usb_endpoint_xfer_isoc(desc))
+			return 0;
+
+		/* Link TRB for ISOC. The HWO bit is never reset */
+		trb_st_hw = &dep->trb_pool[0];
+
+		trb_link = &dep->trb_pool[DWC3_TRB_NUM - 1];
+		memset(trb_link, 0, sizeof(*trb_link));
+
+		trb_link->bpl = lower_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw));
+		trb_link->bph = upper_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw));
+		trb_link->ctrl |= DWC3_TRBCTL_LINK_TRB;
+		trb_link->ctrl |= DWC3_TRB_CTRL_HWO;
+	}
+
+	return 0;
+}
+
+static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum);
+static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
+{
+	struct dwc3_request		*req;
+
+	if (!list_empty(&dep->req_queued)) {
+		dwc3_stop_active_transfer(dwc, dep->number);
+
+		/* - giveback all requests to gadget driver */
+		while (!list_empty(&dep->req_queued)) {
+			req = next_request(&dep->req_queued);
+
+			dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
+		}
+	}
+
+	while (!list_empty(&dep->request_list)) {
+		req = next_request(&dep->request_list);
+
+		dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
+	}
+}
+
+/**
+ * __dwc3_gadget_ep_disable - Disables a HW endpoint
+ * @dep: the endpoint to disable
+ *
+ * This function also removes requests which are currently being processed
+ * by the hardware and those which are not yet scheduled.
+ * Caller should take care of locking.
+ */
+static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
+{
+	struct dwc3		*dwc = dep->dwc;
+	u32			reg;
+
+	dwc3_remove_requests(dwc, dep);
+
+	reg = dwc3_readl(dwc->regs, DWC3_DALEPENA);
+	reg &= ~DWC3_DALEPENA_EP(dep->number);
+	dwc3_writel(dwc->regs, DWC3_DALEPENA, reg);
+
+	dep->stream_capable = false;
+	dep->endpoint.desc = NULL;
+	dep->comp_desc = NULL;
+	dep->type = 0;
+	dep->flags = 0;
+
+	return 0;
+}
+
+/* -------------------------------------------------------------------------- */
+
+static int dwc3_gadget_ep0_enable(struct usb_ep *ep,
+		const struct usb_endpoint_descriptor *desc)
+{
+	return -EINVAL;
+}
+
+static int dwc3_gadget_ep0_disable(struct usb_ep *ep)
+{
+	return -EINVAL;
+}
+
+/* -------------------------------------------------------------------------- */
+
+static int dwc3_gadget_ep_enable(struct usb_ep *ep,
+		const struct usb_endpoint_descriptor *desc)
+{
+	struct dwc3_ep			*dep;
+	struct dwc3			*dwc;
+	unsigned long			flags;
+	int				ret;
+
+	if (!ep || !desc || desc->bDescriptorType != USB_DT_ENDPOINT) {
+		pr_debug("dwc3: invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (!desc->wMaxPacketSize) {
+		pr_debug("dwc3: missing wMaxPacketSize\n");
+		return -EINVAL;
+	}
+
+	dep = to_dwc3_ep(ep);
+	dwc = dep->dwc;
+
+	if (dep->flags & DWC3_EP_ENABLED) {
+		dev_WARN_ONCE(dwc->dev, true, "%s is already enabled\n",
+				dep->name);
+		return 0;
+	}
+
+	switch (usb_endpoint_type(desc)) {
+	case USB_ENDPOINT_XFER_CONTROL:
+		strlcat(dep->name, "-control", sizeof(dep->name));
+		break;
+	case USB_ENDPOINT_XFER_ISOC:
+		strlcat(dep->name, "-isoc", sizeof(dep->name));
+		break;
+	case USB_ENDPOINT_XFER_BULK:
+		strlcat(dep->name, "-bulk", sizeof(dep->name));
+		break;
+	case USB_ENDPOINT_XFER_INT:
+		strlcat(dep->name, "-int", sizeof(dep->name));
+		break;
+	default:
+		dev_err(dwc->dev, "invalid endpoint transfer type\n");
+	}
+
+	dev_vdbg(dwc->dev, "Enabling %s\n", dep->name);
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc, false);
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static int dwc3_gadget_ep_disable(struct usb_ep *ep)
+{
+	struct dwc3_ep			*dep;
+	struct dwc3			*dwc;
+	unsigned long			flags;
+	int				ret;
+
+	if (!ep) {
+		pr_debug("dwc3: invalid parameters\n");
+		return -EINVAL;
+	}
+
+	dep = to_dwc3_ep(ep);
+	dwc = dep->dwc;
+
+	if (!(dep->flags & DWC3_EP_ENABLED)) {
+		dev_WARN_ONCE(dwc->dev, true, "%s is already disabled\n",
+				dep->name);
+		return 0;
+	}
+
+	snprintf(dep->name, sizeof(dep->name), "ep%d%s",
+			dep->number >> 1,
+			(dep->number & 1) ? "in" : "out");
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	ret = __dwc3_gadget_ep_disable(dep);
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static struct usb_request *dwc3_gadget_ep_alloc_request(struct usb_ep *ep,
+	gfp_t gfp_flags)
+{
+	struct dwc3_request		*req;
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+
+	req = kzalloc(sizeof(*req), gfp_flags);
+	if (!req) {
+		dev_err(dwc->dev, "not enough memory\n");
+		return NULL;
+	}
+
+	req->epnum	= dep->number;
+	req->dep	= dep;
+
+	return &req->request;
+}
+
+static void dwc3_gadget_ep_free_request(struct usb_ep *ep,
+		struct usb_request *request)
+{
+	struct dwc3_request		*req = to_dwc3_request(request);
+
+	kfree(req);
+}
+
+/**
+ * dwc3_prepare_one_trb - setup one TRB from one request
+ * @dep: endpoint for which this request is prepared
+ * @req: dwc3_request pointer
+ */
+static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
+		struct dwc3_request *req, dma_addr_t dma,
+		unsigned length, unsigned last, unsigned chain, unsigned node)
+{
+	struct dwc3		*dwc = dep->dwc;
+	struct dwc3_trb		*trb;
+
+	dev_vdbg(dwc->dev, "%s: req %p dma %08llx length %d%s%s\n",
+			dep->name, req, (unsigned long long) dma,
+			length, last ? " last" : "",
+			chain ? " chain" : "");
+
+	/* Skip the LINK-TRB on ISOC */
+	if (((dep->free_slot & DWC3_TRB_MASK) == DWC3_TRB_NUM - 1) &&
+			usb_endpoint_xfer_isoc(dep->endpoint.desc))
+		dep->free_slot++;
+
+	trb = &dep->trb_pool[dep->free_slot & DWC3_TRB_MASK];
+
+	if (!req->trb) {
+		dwc3_gadget_move_request_queued(req);
+		req->trb = trb;
+		req->trb_dma = dwc3_trb_dma_offset(dep, trb);
+		req->start_slot = dep->free_slot & DWC3_TRB_MASK;
+	}
+
+	dep->free_slot++;
+
+	trb->size = DWC3_TRB_SIZE_LENGTH(length);
+	trb->bpl = lower_32_bits(dma);
+	trb->bph = upper_32_bits(dma);
+
+	switch (usb_endpoint_type(dep->endpoint.desc)) {
+	case USB_ENDPOINT_XFER_CONTROL:
+		trb->ctrl = DWC3_TRBCTL_CONTROL_SETUP;
+		break;
+
+	case USB_ENDPOINT_XFER_ISOC:
+		if (!node)
+			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS_FIRST;
+		else
+			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
+
+		if (!req->request.no_interrupt && !chain)
+			trb->ctrl |= DWC3_TRB_CTRL_IOC;
+		break;
+
+	case USB_ENDPOINT_XFER_BULK:
+	case USB_ENDPOINT_XFER_INT:
+		trb->ctrl = DWC3_TRBCTL_NORMAL;
+		break;
+	default:
+		/*
+		 * This is only possible with faulty memory because we
+		 * checked it already :)
+		 */
+		BUG();
+	}
+
+	if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+		trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
+		trb->ctrl |= DWC3_TRB_CTRL_CSP;
+	} else if (last) {
+		trb->ctrl |= DWC3_TRB_CTRL_LST;
+	}
+
+	if (chain)
+		trb->ctrl |= DWC3_TRB_CTRL_CHN;
+
+	if (usb_endpoint_xfer_bulk(dep->endpoint.desc) && dep->stream_capable)
+		trb->ctrl |= DWC3_TRB_CTRL_SID_SOFN(req->request.stream_id);
+
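+	/* set HWO last, so the controller never sees a half-built TRB */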
+	trb->ctrl |= DWC3_TRB_CTRL_HWO;
+}
+
+/*
+ * dwc3_prepare_trbs - setup TRBs from requests
+ * @dep: endpoint for which requests are being prepared
+ * @starting: true if the endpoint is idle and no requests are queued.
+ *
+ * The function goes through the requests list and sets up TRBs for the
+ * transfers. The function returns once there are no more TRBs available or
+ * it runs out of requests.
+ */
+static void dwc3_prepare_trbs(struct dwc3_ep *dep, bool starting)
+{
+	struct dwc3_request	*req, *n;
+	u32			trbs_left;
+	u32			max;
+	unsigned int		last_one = 0;
+
+	BUILD_BUG_ON_NOT_POWER_OF_2(DWC3_TRB_NUM);
+
+	/* the first request must not be queued */
+	trbs_left = (dep->busy_slot - dep->free_slot) & DWC3_TRB_MASK;
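+	/*
+	 * busy_slot and free_slot only ever grow, so their masked
+	 * difference is the number of TRB slots still available.
+	 */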
+
+	/* Can't wrap around on a non-isoc EP since there's no link TRB */
+	if (!usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+		max = DWC3_TRB_NUM - (dep->free_slot & DWC3_TRB_MASK);
+		if (trbs_left > max)
+			trbs_left = max;
+	}
+
+	/*
+	 * If the busy and free slots are equal, the ring is either full or
+	 * empty. If we are starting to process requests then it is empty;
+	 * otherwise it is full and we do nothing.
+	 */
+	if (!trbs_left) {
+		if (!starting)
+			return;
+		trbs_left = DWC3_TRB_NUM;
+		/*
+		 * In case we start from scratch, we queue the ISOC requests
+		 * starting from slot 1. This is done because we use ring
+		 * buffer and have no LST bit to stop us. Instead, we place
+		 * IOC bit every TRB_NUM/4. We try to avoid having an interrupt
+		 * after the first request so we start at slot 1 and have
+		 * 7 requests proceed before we hit the first IOC.
+		 * Other transfer types don't use the ring buffer and are
+		 * processed from the first TRB until the last one. Since we
+		 * don't wrap around we have to start at the beginning.
+		 */
+		if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+			dep->busy_slot = 1;
+			dep->free_slot = 1;
+		} else {
+			dep->busy_slot = 0;
+			dep->free_slot = 0;
+		}
+	}
+
+	/* The last TRB is a link TRB, not used for xfer */
+	if ((trbs_left <= 1) && usb_endpoint_xfer_isoc(dep->endpoint.desc))
+		return;
+
+	list_for_each_entry_safe(req, n, &dep->request_list, list) {
+		unsigned	length;
+		dma_addr_t	dma;
+		last_one = false;
+
+		if (req->request.num_mapped_sgs > 0) {
+			struct usb_request *request = &req->request;
+			struct scatterlist *sg = request->sg;
+			struct scatterlist *s;
+			int		i;
+
+			for_each_sg(sg, s, request->num_mapped_sgs, i) {
+				unsigned chain = true;
+
+				length = sg_dma_len(s);
+				dma = sg_dma_address(s);
+
+				if (i == (request->num_mapped_sgs - 1) ||
+						sg_is_last(s)) {
+					if (list_is_last(&req->list,
+							&dep->request_list))
+						last_one = true;
+					chain = false;
+				}
+
+				trbs_left--;
+				if (!trbs_left)
+					last_one = true;
+
+				if (last_one)
+					chain = false;
+
+				dwc3_prepare_one_trb(dep, req, dma, length,
+						last_one, chain, i);
+
+				if (last_one)
+					break;
+			}
+		} else {
+			dma = req->request.dma;
+			length = req->request.length;
+			trbs_left--;
+
+			if (!trbs_left)
+				last_one = 1;
+
+			/* Is this the last request? */
+			if (list_is_last(&req->list, &dep->request_list))
+				last_one = 1;
+
+			dwc3_prepare_one_trb(dep, req, dma, length,
+					last_one, false, 0);
+
+			if (last_one)
+				break;
+		}
+	}
+}
+
+static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep, u16 cmd_param,
+		int start_new)
+{
+	struct dwc3_gadget_ep_cmd_params params;
+	struct dwc3_request		*req;
+	struct dwc3			*dwc = dep->dwc;
+	int				ret;
+	u32				cmd;
+
+	if (start_new && (dep->flags & DWC3_EP_BUSY)) {
+		dev_vdbg(dwc->dev, "%s: endpoint busy\n", dep->name);
+		return -EBUSY;
+	}
+	dep->flags &= ~DWC3_EP_PENDING_REQUEST;
+
+	/*
+	 * If we are getting here after a short-out-packet we don't enqueue any
+	 * new requests as we try to set the IOC bit only on the last request.
+	 */
+	if (start_new) {
+		if (list_empty(&dep->req_queued))
+			dwc3_prepare_trbs(dep, start_new);
+
+		/* req points to the first request which will be sent */
+		req = next_request(&dep->req_queued);
+	} else {
+		dwc3_prepare_trbs(dep, start_new);
+
+		/*
+		 * req points to the first request where HWO changed from 0 to 1
+		 */
+		req = next_request(&dep->req_queued);
+	}
+	if (!req) {
+		dep->flags |= DWC3_EP_PENDING_REQUEST;
+		return 0;
+	}
+
+	memset(&params, 0, sizeof(params));
+
+	if (start_new) {
+		params.param0 = upper_32_bits(req->trb_dma);
+		params.param1 = lower_32_bits(req->trb_dma);
+		cmd = DWC3_DEPCMD_STARTTRANSFER;
+	} else {
+		cmd = DWC3_DEPCMD_UPDATETRANSFER;
+	}
+
+	cmd |= DWC3_DEPCMD_PARAM(cmd_param);
+	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params);
+	if (ret < 0) {
+		dev_dbg(dwc->dev, "failed to send STARTTRANSFER command\n");
+
+		/*
+		 * FIXME we need to iterate over the list of requests
+		 * here and stop, unmap, free and del each of the linked
+		 * requests instead of what we do now.
+		 */
+		usb_gadget_unmap_request(&dwc->gadget, &req->request,
+				req->direction);
+		list_del(&req->list);
+		return ret;
+	}
+
+	dep->flags |= DWC3_EP_BUSY;
+
+	if (start_new) {
+		dep->resource_index = dwc3_gadget_ep_get_transfer_index(dwc,
+				dep->number);
+		WARN_ON_ONCE(!dep->resource_index);
+	}
+
+	return 0;
+}
+
+static void __dwc3_gadget_start_isoc(struct dwc3 *dwc,
+		struct dwc3_ep *dep, u32 cur_uf)
+{
+	u32 uf;
+
+	if (list_empty(&dep->request_list)) {
+		dev_vdbg(dwc->dev, "ISOC ep %s run out for requests.\n",
+			dep->name);
+		dep->flags |= DWC3_EP_PENDING_REQUEST;
+		return;
+	}
+
+	/* 4 micro frames in the future */
+	uf = cur_uf + dep->interval * 4;
+
+	__dwc3_gadget_kick_transfer(dep, uf, 1);
+}
+
+static void dwc3_gadget_start_isoc(struct dwc3 *dwc,
+		struct dwc3_ep *dep, const struct dwc3_event_depevt *event)
+{
+	u32 cur_uf, mask;
+
+	mask = ~(dep->interval - 1);
+	cur_uf = event->parameters & mask;
+
+	__dwc3_gadget_start_isoc(dwc, dep, cur_uf);
+}
+
+static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
+{
+	struct dwc3		*dwc = dep->dwc;
+	int			ret;
+
+	req->request.actual	= 0;
+	req->request.status	= -EINPROGRESS;
+	req->direction		= dep->direction;
+	req->epnum		= dep->number;
+
+	/*
+	 * We only add to our list of requests now and
+	 * start consuming the list once we get XferNotReady
+	 * IRQ.
+	 *
+	 * That way, we avoid doing anything that we don't need
+	 * to do now and defer it until the point we receive a
+	 * particular token from the Host side.
+	 *
+	 * This will also avoid Host cancelling URBs due to too
+	 * many NAKs.
+	 */
+	ret = usb_gadget_map_request(&dwc->gadget, &req->request,
+			dep->direction);
+	if (ret)
+		return ret;
+
+	list_add_tail(&req->list, &dep->request_list);
+
+	/*
+	 * There are a few special cases:
+	 *
+	 * 1. XferNotReady with empty list of requests. We need to kick the
+	 *    transfer here in that situation, otherwise we will be NAKing
+	 *    forever. If we get XferNotReady before gadget driver has a
+	 *    chance to queue a request, we will ACK the IRQ but won't be
+	 *    able to receive the data until the next request is queued.
+	 *    The following code is handling exactly that.
+	 *
+	 */
+	if (dep->flags & DWC3_EP_PENDING_REQUEST) {
+		/*
+		 * If xfernotready is already elapsed and it is a case
+		 * of isoc transfer, then issue END TRANSFER, so that
+		 * you can receive xfernotready again and can have
+		 * notion of current microframe.
+		 */
+		if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+			if (list_empty(&dep->req_queued)) {
+				dwc3_stop_active_transfer(dwc, dep->number);
+				dep->flags = DWC3_EP_ENABLED;
+			}
+			return 0;
+		}
+
+		ret = __dwc3_gadget_kick_transfer(dep, 0, true);
+		if (ret && ret != -EBUSY)
+			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
+					dep->name);
+		return ret;
+	}
+
+	/*
+	 * 2. XferInProgress on Isoc EP with an active transfer. We need to
+	 *    kick the transfer here after queuing a request, otherwise the
+	 *    core may not see the modified TRB(s).
+	 */
+	if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+			(dep->flags & DWC3_EP_BUSY) &&
+			!(dep->flags & DWC3_EP_MISSED_ISOC)) {
+		WARN_ON_ONCE(!dep->resource_index);
+		ret = __dwc3_gadget_kick_transfer(dep, dep->resource_index,
+				false);
+		if (ret && ret != -EBUSY)
+			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
+					dep->name);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request,
+	gfp_t gfp_flags)
+{
+	struct dwc3_request		*req = to_dwc3_request(request);
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+
+	unsigned long			flags;
+
+	int				ret;
+
+	if (!dep->endpoint.desc) {
+		dev_dbg(dwc->dev, "trying to queue request %p to disabled %s\n",
+				request, ep->name);
+		return -ESHUTDOWN;
+	}
+
+	dev_vdbg(dwc->dev, "queing request %p to %s length %d\n",
+			request, ep->name, request->length);
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	ret = __dwc3_gadget_ep_queue(dep, req);
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
+		struct usb_request *request)
+{
+	struct dwc3_request		*req = to_dwc3_request(request);
+	struct dwc3_request		*r = NULL;
+
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+
+	unsigned long			flags;
+	int				ret = 0;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	list_for_each_entry(r, &dep->request_list, list) {
+		if (r == req)
+			break;
+	}
+
+	if (r != req) {
+		list_for_each_entry(r, &dep->req_queued, list) {
+			if (r == req)
+				break;
+		}
+		if (r == req) {
+			/* wait until it is processed */
+			dwc3_stop_active_transfer(dwc, dep->number);
+			goto out1;
+		}
+		dev_err(dwc->dev, "request %p was not queued to %s\n",
+				request, ep->name);
+		ret = -EINVAL;
+		goto out0;
+	}
+
+out1:
+	/* giveback the request */
+	dwc3_gadget_giveback(dep, req, -ECONNRESET);
+
+out0:
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value)
+{
+	struct dwc3_gadget_ep_cmd_params	params;
+	struct dwc3				*dwc = dep->dwc;
+	int					ret;
+
+	memset(&params, 0x00, sizeof(params));
+
+	if (value) {
+		ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
+			DWC3_DEPCMD_SETSTALL, &params);
+		if (ret)
+			dev_err(dwc->dev, "failed to %s STALL on %s\n",
+					value ? "set" : "clear",
+					dep->name);
+		else
+			dep->flags |= DWC3_EP_STALL;
+	} else {
+		if (dep->flags & DWC3_EP_WEDGE)
+			return 0;
+
+		ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
+			DWC3_DEPCMD_CLEARSTALL, &params);
+		if (ret)
+			dev_err(dwc->dev, "failed to %s STALL on %s\n",
+					value ? "set" : "clear",
+					dep->name);
+		else
+			dep->flags &= ~DWC3_EP_STALL;
+	}
+
+	return ret;
+}
+
+static int dwc3_gadget_ep_set_halt(struct usb_ep *ep, int value)
+{
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+
+	unsigned long			flags;
+
+	int				ret;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+		dev_err(dwc->dev, "%s is of Isochronous type\n", dep->name);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	ret = __dwc3_gadget_ep_set_halt(dep, value);
+out:
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static int dwc3_gadget_ep_set_wedge(struct usb_ep *ep)
+{
+	struct dwc3_ep			*dep = to_dwc3_ep(ep);
+	struct dwc3			*dwc = dep->dwc;
+	unsigned long			flags;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	dep->flags |= DWC3_EP_WEDGE;
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	if (dep->number == 0 || dep->number == 1)
+		return dwc3_gadget_ep0_set_halt(ep, 1);
+	else
+		return dwc3_gadget_ep_set_halt(ep, 1);
+}
+
+/* -------------------------------------------------------------------------- */
+
+static struct usb_endpoint_descriptor dwc3_gadget_ep0_desc = {
+	.bLength	= USB_DT_ENDPOINT_SIZE,
+	.bDescriptorType = USB_DT_ENDPOINT,
+	.bmAttributes	= USB_ENDPOINT_XFER_CONTROL,
+};
+
+static const struct usb_ep_ops dwc3_gadget_ep0_ops = {
+	.enable		= dwc3_gadget_ep0_enable,
+	.disable	= dwc3_gadget_ep0_disable,
+	.alloc_request	= dwc3_gadget_ep_alloc_request,
+	.free_request	= dwc3_gadget_ep_free_request,
+	.queue		= dwc3_gadget_ep0_queue,
+	.dequeue	= dwc3_gadget_ep_dequeue,
+	.set_halt	= dwc3_gadget_ep0_set_halt,
+	.set_wedge	= dwc3_gadget_ep_set_wedge,
+};
+
+static const struct usb_ep_ops dwc3_gadget_ep_ops = {
+	.enable		= dwc3_gadget_ep_enable,
+	.disable	= dwc3_gadget_ep_disable,
+	.alloc_request	= dwc3_gadget_ep_alloc_request,
+	.free_request	= dwc3_gadget_ep_free_request,
+	.queue		= dwc3_gadget_ep_queue,
+	.dequeue	= dwc3_gadget_ep_dequeue,
+	.set_halt	= dwc3_gadget_ep_set_halt,
+	.set_wedge	= dwc3_gadget_ep_set_wedge,
+};
+
+/* -------------------------------------------------------------------------- */
+
+static int dwc3_gadget_get_frame(struct usb_gadget *g)
+{
+	struct dwc3		*dwc = gadget_to_dwc(g);
+	u32			reg;
+
+	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+	return DWC3_DSTS_SOFFN(reg);
+}
+
+static int dwc3_gadget_wakeup(struct usb_gadget *g)
+{
+	struct dwc3		*dwc = gadget_to_dwc(g);
+
+	unsigned long		timeout;
+	unsigned long		flags;
+
+	u32			reg;
+
+	int			ret = 0;
+
+	u8			link_state;
+	u8			speed;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	/*
+	 * According to the Databook, a Remote Wakeup request should
+	 * be issued only when the device is in the Early Suspend state.
+	 *
+	 * We can check that via USB Link State bits in DSTS register.
+	 */
+	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+
+	speed = reg & DWC3_DSTS_CONNECTSPD;
+	if (speed == DWC3_DSTS_SUPERSPEED) {
+		dev_dbg(dwc->dev, "no wakeup on SuperSpeed\n");
+		ret = -EINVAL;
+		goto out;
+	}
+
+	link_state = DWC3_DSTS_USBLNKST(reg);
+
+	switch (link_state) {
+	case DWC3_LINK_STATE_RX_DET:	/* in HS, means Early Suspend */
+	case DWC3_LINK_STATE_U3:	/* in HS, means SUSPEND */
+		break;
+	default:
+		dev_dbg(dwc->dev, "can't wakeup from link state %d\n",
+				link_state);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	ret = dwc3_gadget_set_link_state(dwc, DWC3_LINK_STATE_RECOV);
+	if (ret < 0) {
+		dev_err(dwc->dev, "failed to put link in Recovery\n");
+		goto out;
+	}
+
+	/* Recent versions do this automatically */
+	if (dwc->revision < DWC3_REVISION_194A) {
+		/* write zeroes to Link Change Request */
+		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+		reg &= ~DWC3_DCTL_ULSTCHNGREQ_MASK;
+		dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+	}
+
+	/* poll until Link State changes to ON */
+	timeout = jiffies + msecs_to_jiffies(100);
+
+	while (!time_after(jiffies, timeout)) {
+		reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+
+		/* in HS, means ON */
+		if (DWC3_DSTS_USBLNKST(reg) == DWC3_LINK_STATE_U0)
+			break;
+	}
+
+	if (DWC3_DSTS_USBLNKST(reg) != DWC3_LINK_STATE_U0) {
+		dev_err(dwc->dev, "failed to send remote wakeup\n");
+		ret = -EINVAL;
+	}
+
+out:
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static int dwc3_gadget_set_selfpowered(struct usb_gadget *g,
+		int is_selfpowered)
+{
+	struct dwc3		*dwc = gadget_to_dwc(g);
+	unsigned long		flags;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	dwc->is_selfpowered = !!is_selfpowered;
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return 0;
+}
+
+static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+{
+	u32			reg;
+	u32			timeout = 500;
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+	if (is_on) {
+		if (dwc->revision <= DWC3_REVISION_187A) {
+			reg &= ~DWC3_DCTL_TRGTULST_MASK;
+			reg |= DWC3_DCTL_TRGTULST_RX_DET;
+		}
+
+		if (dwc->revision >= DWC3_REVISION_194A)
+			reg &= ~DWC3_DCTL_KEEP_CONNECT;
+		reg |= DWC3_DCTL_RUN_STOP;
+		dwc->pullups_connected = true;
+	} else {
+		reg &= ~DWC3_DCTL_RUN_STOP;
+		dwc->pullups_connected = false;
+	}
+
+	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
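+	/* poll DSTS.DEVCTRLHLT until the core acknowledges the change */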
+	do {
+		reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+		if (is_on) {
+			if (!(reg & DWC3_DSTS_DEVCTRLHLT))
+				break;
+		} else {
+			if (reg & DWC3_DSTS_DEVCTRLHLT)
+				break;
+		}
+		timeout--;
+		if (!timeout)
+			return -ETIMEDOUT;
+		udelay(1);
+	} while (1);
+
+	dev_vdbg(dwc->dev, "gadget %s data soft-%s\n",
+			dwc->gadget_driver
+			? dwc->gadget_driver->function : "no-function",
+			is_on ? "connect" : "disconnect");
+
+	return 0;
+}
+
+static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+{
+	struct dwc3		*dwc = gadget_to_dwc(g);
+	unsigned long		flags;
+	int			ret;
+
+	is_on = !!is_on;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+	ret = dwc3_gadget_run_stop(dwc, is_on);
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static void dwc3_gadget_enable_irq(struct dwc3 *dwc)
+{
+	u32			reg;
+
+	/* Enable all but Start and End of Frame IRQs */
+	reg = (DWC3_DEVTEN_VNDRDEVTSTRCVEDEN |
+			DWC3_DEVTEN_EVNTOVERFLOWEN |
+			DWC3_DEVTEN_CMDCMPLTEN |
+			DWC3_DEVTEN_ERRTICERREN |
+			DWC3_DEVTEN_WKUPEVTEN |
+			DWC3_DEVTEN_ULSTCNGEN |
+			DWC3_DEVTEN_CONNECTDONEEN |
+			DWC3_DEVTEN_USBRSTEN |
+			DWC3_DEVTEN_DISCONNEVTEN);
+
+	dwc3_writel(dwc->regs, DWC3_DEVTEN, reg);
+}
+
+static void dwc3_gadget_disable_irq(struct dwc3 *dwc)
+{
+	/* mask all interrupts */
+	dwc3_writel(dwc->regs, DWC3_DEVTEN, 0x00);
+}
+
+static irqreturn_t dwc3_interrupt(int irq, void *_dwc);
+static irqreturn_t dwc3_thread_interrupt(int irq, void *_dwc);
+
+static int dwc3_gadget_start(struct usb_gadget *g,
+		struct usb_gadget_driver *driver)
+{
+	struct dwc3		*dwc = gadget_to_dwc(g);
+	struct dwc3_ep		*dep;
+	unsigned long		flags;
+	int			ret = 0;
+	int			irq;
+	u32			reg;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	if (dwc->gadget_driver) {
+		dev_err(dwc->dev, "%s is already bound to %s\n",
+				dwc->gadget.name,
+				dwc->gadget_driver->driver.name);
+		ret = -EBUSY;
+		goto err0;
+	}
+
+	dwc->gadget_driver	= driver;
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+	reg &= ~(DWC3_DCFG_SPEED_MASK);
+
+	/**
+	 * WORKAROUND: DWC3 revision < 2.20a have an issue
+	 * which would cause metastability state on Run/Stop
+	 * bit if we try to force the IP to USB2-only mode.
+	 *
+	 * Because of that, we cannot configure the IP to any
+	 * speed other than the SuperSpeed
+	 *
+	 * Refers to:
+	 *
+	 * STAR#9000525659: Clock Domain Crossing on DCTL in
+	 * USB 2.0 Mode
+	 */
+	if (dwc->revision < DWC3_REVISION_220A)
+		reg |= DWC3_DCFG_SUPERSPEED;
+	else
+		reg |= dwc->maximum_speed;
+	dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+
+	dwc->start_config_issued = false;
+
+	/* Start with SuperSpeed Default */
+	dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
+
+	dep = dwc->eps[0];
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	if (ret) {
+		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
+		goto err0;
+	}
+
+	dep = dwc->eps[1];
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	if (ret) {
+		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
+		goto err1;
+	}
+
+	/* begin to receive SETUP packets */
+	dwc->ep0state = EP0_SETUP_PHASE;
+	dwc3_ep0_out_start(dwc);
+
+	irq = platform_get_irq(to_platform_device(dwc->dev), 0);
+	ret = request_threaded_irq(irq, dwc3_interrupt, dwc3_thread_interrupt,
+			IRQF_SHARED | IRQF_ONESHOT, "dwc3", dwc);
+	if (ret) {
+		dev_err(dwc->dev, "failed to request irq #%d --> %d\n",
+				irq, ret);
+		goto err1;
+	}
+
+	dwc3_gadget_enable_irq(dwc);
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return 0;
+
+err1:
+	__dwc3_gadget_ep_disable(dwc->eps[0]);
+
+err0:
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static int dwc3_gadget_stop(struct usb_gadget *g,
+		struct usb_gadget_driver *driver)
+{
+	struct dwc3		*dwc = gadget_to_dwc(g);
+	unsigned long		flags;
+	int			irq;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	dwc3_gadget_disable_irq(dwc);
+	irq = platform_get_irq(to_platform_device(dwc->dev), 0);
+	free_irq(irq, dwc);
+
+	__dwc3_gadget_ep_disable(dwc->eps[0]);
+	__dwc3_gadget_ep_disable(dwc->eps[1]);
+
+	dwc->gadget_driver	= NULL;
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return 0;
+}
+
+static const struct usb_gadget_ops dwc3_gadget_ops = {
+	.get_frame		= dwc3_gadget_get_frame,
+	.wakeup			= dwc3_gadget_wakeup,
+	.set_selfpowered	= dwc3_gadget_set_selfpowered,
+	.pullup			= dwc3_gadget_pullup,
+	.udc_start		= dwc3_gadget_start,
+	.udc_stop		= dwc3_gadget_stop,
+};
+
+/* -------------------------------------------------------------------------- */
+
+static int dwc3_gadget_init_hw_endpoints(struct dwc3 *dwc,
+		u8 num, u32 direction)
+{
+	struct dwc3_ep			*dep;
+	u8				i;
+
+	for (i = 0; i < num; i++) {
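+		/* the physical endpoint number is (index << 1) | direction */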
+		u8 epnum = (i << 1) | (!!direction);
+
+		dep = kzalloc(sizeof(*dep), GFP_KERNEL);
+		if (!dep) {
+			dev_err(dwc->dev, "can't allocate endpoint %d\n",
+					epnum);
+			return -ENOMEM;
+		}
+
+		dep->dwc = dwc;
+		dep->number = epnum;
+		dwc->eps[epnum] = dep;
+
+		snprintf(dep->name, sizeof(dep->name), "ep%d%s", epnum >> 1,
+				(epnum & 1) ? "in" : "out");
+
+		dep->endpoint.name = dep->name;
+		dep->direction = (epnum & 1);
+
+		if (epnum == 0 || epnum == 1) {
+			dep->endpoint.maxpacket = 512;
+			dep->endpoint.maxburst = 1;
+			dep->endpoint.ops = &dwc3_gadget_ep0_ops;
+			if (!epnum)
+				dwc->gadget.ep0 = &dep->endpoint;
+		} else {
+			int		ret;
+
+			dep->endpoint.maxpacket = 1024;
+			dep->endpoint.max_streams = 15;
+			dep->endpoint.ops = &dwc3_gadget_ep_ops;
+			list_add_tail(&dep->endpoint.ep_list,
+					&dwc->gadget.ep_list);
+
+			ret = dwc3_alloc_trb_pool(dep);
+			if (ret)
+				return ret;
+		}
+
+		INIT_LIST_HEAD(&dep->request_list);
+		INIT_LIST_HEAD(&dep->req_queued);
+	}
+
+	return 0;
+}
+
+static int dwc3_gadget_init_endpoints(struct dwc3 *dwc)
+{
+	int				ret;
+
+	INIT_LIST_HEAD(&dwc->gadget.ep_list);
+
+	ret = dwc3_gadget_init_hw_endpoints(dwc, dwc->num_out_eps, 0);
+	if (ret < 0) {
+		dev_vdbg(dwc->dev, "failed to allocate OUT endpoints\n");
+		return ret;
+	}
+
+	ret = dwc3_gadget_init_hw_endpoints(dwc, dwc->num_in_eps, 1);
+	if (ret < 0) {
+		dev_vdbg(dwc->dev, "failed to allocate IN endpoints\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void dwc3_gadget_free_endpoints(struct dwc3 *dwc)
+{
+	struct dwc3_ep			*dep;
+	u8				epnum;
+
+	for (epnum = 0; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
+		dep = dwc->eps[epnum];
+		if (!dep)
+			continue;
+
+		dwc3_free_trb_pool(dep);
+
+		if (epnum != 0 && epnum != 1)
+			list_del(&dep->endpoint.ep_list);
+
+		kfree(dep);
+	}
+}
+
+/* -------------------------------------------------------------------------- */
+
+static int __dwc3_cleanup_done_trbs(struct dwc3 *dwc, struct dwc3_ep *dep,
+		struct dwc3_request *req, struct dwc3_trb *trb,
+		const struct dwc3_event_depevt *event, int status)
+{
+	unsigned int		count;
+	unsigned int		s_pkt = 0;
+	unsigned int		trb_status;
+
+	if ((trb->ctrl & DWC3_TRB_CTRL_HWO) && status != -ESHUTDOWN)
+		/*
+		 * We continue despite the error. There is not much we
+		 * can do. If we don't clean it up we loop forever. If
+		 * we skip the TRB then it gets overwritten after a
+		 * while since we use them in a ring buffer. A BUG()
+		 * would help. Let's hope that if this occurs, someone
+		 * fixes the root cause instead of looking away :)
+		 */
+		dev_err(dwc->dev, "%s's TRB (%p) still owned by HW\n",
+				dep->name, trb);
+	count = trb->size & DWC3_TRB_SIZE_MASK;
+
+	if (dep->direction) {
+		if (count) {
+			trb_status = DWC3_TRB_SIZE_TRBSTS(trb->size);
+			if (trb_status == DWC3_TRBSTS_MISSED_ISOC) {
+				dev_dbg(dwc->dev, "incomplete IN transfer %s\n",
+						dep->name);
+				/*
+				 * If a missed isoc occurred and there is
+				 * no request queued, then issue END
+				 * TRANSFER, so that the core generates the
+				 * next xfernotready and we will issue
+				 * a fresh START TRANSFER.
+				 * If there are still queued requests,
+				 * then wait; do not issue either END
+				 * or UPDATE TRANSFER, just attach the next
+				 * request in request_list during
+				 * giveback. If any future queued request
+				 * is successfully transferred then we
+				 * will issue UPDATE TRANSFER for all
+				 * requests in the request_list.
+				 */
+				dep->flags |= DWC3_EP_MISSED_ISOC;
+			} else {
+				dev_err(dwc->dev, "incomplete IN transfer %s\n",
+						dep->name);
+				status = -ECONNRESET;
+			}
+		} else {
+			dep->flags &= ~DWC3_EP_MISSED_ISOC;
+		}
+	} else {
+		if (count && (event->status & DEPEVT_STATUS_SHORT))
+			s_pkt = 1;
+	}
+
+	/*
+	 * We assume here we will always receive the entire data block
+	 * which we should receive. Meaning, if we program RX to
+	 * receive 4K but we receive only 2K, we assume that's all we
+	 * should receive and we simply bounce the request back to the
+	 * gadget driver for further processing.
+	 */
+	req->request.actual += req->request.length - count;
+	if (s_pkt)
+		return 1;
+	if ((event->status & DEPEVT_STATUS_LST) &&
+			(trb->ctrl & (DWC3_TRB_CTRL_LST |
+				DWC3_TRB_CTRL_HWO)))
+		return 1;
+	if ((event->status & DEPEVT_STATUS_IOC) &&
+			(trb->ctrl & DWC3_TRB_CTRL_IOC))
+		return 1;
+	return 0;
+}
+
+static int dwc3_cleanup_done_reqs(struct dwc3 *dwc, struct dwc3_ep *dep,
+		const struct dwc3_event_depevt *event, int status)
+{
+	struct dwc3_request	*req;
+	struct dwc3_trb		*trb;
+	unsigned int		slot;
+	unsigned int		i;
+	int			ret;
+
+	do {
+		req = next_request(&dep->req_queued);
+		if (!req) {
+			WARN_ON_ONCE(1);
+			return 1;
+		}
+		i = 0;
+		do {
+			slot = req->start_slot + i;
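+			/* skip over the link TRB slot on isoc endpoints */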
+			if ((slot == DWC3_TRB_NUM - 1) &&
+				usb_endpoint_xfer_isoc(dep->endpoint.desc))
+				slot++;
+			slot %= DWC3_TRB_NUM;
+			trb = &dep->trb_pool[slot];
+
+			ret = __dwc3_cleanup_done_trbs(dwc, dep, req, trb,
+					event, status);
+			if (ret)
+				break;
+		} while (++i < req->request.num_mapped_sgs);
+
+		dwc3_gadget_giveback(dep, req, status);
+
+		if (ret)
+			break;
+	} while (1);
+
+	if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+			list_empty(&dep->req_queued)) {
+		if (list_empty(&dep->request_list)) {
+			/*
+			 * If there is no entry in request list then do
+			 * not issue END TRANSFER now. Just set PENDING
+			 * flag, so that END TRANSFER is issued when an
+			 * entry is added into request list.
+			 */
+			dep->flags = DWC3_EP_PENDING_REQUEST;
+		} else {
+			dwc3_stop_active_transfer(dwc, dep->number);
+			dep->flags = DWC3_EP_ENABLED;
+		}
+		return 1;
+	}
+
+	if ((event->status & DEPEVT_STATUS_IOC) &&
+			(trb->ctrl & DWC3_TRB_CTRL_IOC))
+		return 0;
+	return 1;
+}
+
+static void dwc3_endpoint_transfer_complete(struct dwc3 *dwc,
+		struct dwc3_ep *dep, const struct dwc3_event_depevt *event,
+		int start_new)
+{
+	unsigned		status = 0;
+	int			clean_busy;
+
+	if (event->status & DEPEVT_STATUS_BUSERR)
+		status = -ECONNRESET;
+
+	clean_busy = dwc3_cleanup_done_reqs(dwc, dep, event, status);
+	if (clean_busy)
+		dep->flags &= ~DWC3_EP_BUSY;
+
+	/*
+	 * WORKAROUND: This is the 2nd half of U1/U2 -> U0 workaround.
+	 * See dwc3_gadget_linksts_change_interrupt() for 1st half.
+	 */
+	if (dwc->revision < DWC3_REVISION_183A) {
+		u32		reg;
+		int		i;
+
+		for (i = 0; i < DWC3_ENDPOINTS_NUM; i++) {
+			dep = dwc->eps[i];
+
+			if (!(dep->flags & DWC3_EP_ENABLED))
+				continue;
+
+			if (!list_empty(&dep->req_queued))
+				return;
+		}
+
+		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+		reg |= dwc->u1u2;
+		dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
+		dwc->u1u2 = 0;
+	}
+}
+
+static void dwc3_endpoint_interrupt(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event)
+{
+	struct dwc3_ep		*dep;
+	u8			epnum = event->endpoint_number;
+
+	dep = dwc->eps[epnum];
+
+	if (!(dep->flags & DWC3_EP_ENABLED))
+		return;
+
+	dev_vdbg(dwc->dev, "%s: %s\n", dep->name,
+			dwc3_ep_event_string(event->endpoint_event));
+
+	if (epnum == 0 || epnum == 1) {
+		dwc3_ep0_interrupt(dwc, event);
+		return;
+	}
+
+	switch (event->endpoint_event) {
+	case DWC3_DEPEVT_XFERCOMPLETE:
+		dep->resource_index = 0;
+
+		if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+			dev_dbg(dwc->dev, "%s is an Isochronous endpoint\n",
+					dep->name);
+			return;
+		}
+
+		dwc3_endpoint_transfer_complete(dwc, dep, event, 1);
+		break;
+	case DWC3_DEPEVT_XFERINPROGRESS:
+		if (!usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+			dev_dbg(dwc->dev, "%s is not an Isochronous endpoint\n",
+					dep->name);
+			return;
+		}
+
+		dwc3_endpoint_transfer_complete(dwc, dep, event, 0);
+		break;
+	case DWC3_DEPEVT_XFERNOTREADY:
+		if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
+			dwc3_gadget_start_isoc(dwc, dep, event);
+		} else {
+			int ret;
+
+			dev_vdbg(dwc->dev, "%s: reason %s\n",
+					dep->name, event->status &
+					DEPEVT_STATUS_TRANSFER_ACTIVE
+					? "Transfer Active"
+					: "Transfer Not Active");
+
+			ret = __dwc3_gadget_kick_transfer(dep, 0, 1);
+			if (!ret || ret == -EBUSY)
+				return;
+
+			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
+					dep->name);
+		}
+
+		break;
+	case DWC3_DEPEVT_STREAMEVT:
+		if (!usb_endpoint_xfer_bulk(dep->endpoint.desc)) {
+			dev_err(dwc->dev, "Stream event for non-Bulk %s\n",
+					dep->name);
+			return;
+		}
+
+		switch (event->status) {
+		case DEPEVT_STREAMEVT_FOUND:
+			dev_vdbg(dwc->dev, "Stream %d found and started\n",
+					event->parameters);
+
+			break;
+		case DEPEVT_STREAMEVT_NOTFOUND:
+			/* FALLTHROUGH */
+		default:
+			dev_dbg(dwc->dev, "Couldn't find suitable stream\n");
+		}
+		break;
+	case DWC3_DEPEVT_RXTXFIFOEVT:
+		dev_dbg(dwc->dev, "%s FIFO Overrun\n", dep->name);
+		break;
+	case DWC3_DEPEVT_EPCMDCMPLT:
+		dev_vdbg(dwc->dev, "Endpoint Command Complete\n");
+		break;
+	}
+}
+
+static void dwc3_disconnect_gadget(struct dwc3 *dwc)
+{
+	if (dwc->gadget_driver && dwc->gadget_driver->disconnect) {
+		spin_unlock(&dwc->lock);
+		dwc->gadget_driver->disconnect(&dwc->gadget);
+		spin_lock(&dwc->lock);
+	}
+}
+
+static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum)
+{
+	struct dwc3_ep *dep;
+	struct dwc3_gadget_ep_cmd_params params;
+	u32 cmd;
+	int ret;
+
+	dep = dwc->eps[epnum];
+
+	if (!dep->resource_index)
+		return;
+
+	/*
+	 * NOTICE: We are violating what the Databook says about the
+	 * EndTransfer command. Ideally we would _always_ wait for the
+	 * EndTransfer Command Completion IRQ, but that's causing too
+	 * much trouble synchronizing between us and gadget driver.
+	 *
+	 * We have discussed this with the IP Provider and it was
+	 * suggested to giveback all requests here, but give HW some
+	 * extra time to synchronize with the interconnect. We're using
+	 * an arbitrary 100us delay for that.
+	 *
+	 * Note also that a similar handling was tested by Synopsys
+	 * (thanks a lot Paul) and nothing bad has come out of it.
+	 * In short, what we're doing is:
+	 *
+	 * - Issue EndTransfer WITH CMDIOC bit set
+	 * - Wait 100us
+	 */
+
+	cmd = DWC3_DEPCMD_ENDTRANSFER;
+	cmd |= DWC3_DEPCMD_HIPRI_FORCERM | DWC3_DEPCMD_CMDIOC;
+	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
+	memset(&params, 0, sizeof(params));
+	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params);
+	WARN_ON_ONCE(ret);
+	dep->resource_index = 0;
+	dep->flags &= ~DWC3_EP_BUSY;
+	udelay(100);
+}
+
+static void dwc3_stop_active_transfers(struct dwc3 *dwc)
+{
+	u32 epnum;
+
+	for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
+		struct dwc3_ep *dep;
+
+		dep = dwc->eps[epnum];
+		if (!dep)
+			continue;
+
+		if (!(dep->flags & DWC3_EP_ENABLED))
+			continue;
+
+		dwc3_remove_requests(dwc, dep);
+	}
+}
+
+static void dwc3_clear_stall_all_ep(struct dwc3 *dwc)
+{
+	u32 epnum;
+
+	for (epnum = 1; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
+		struct dwc3_ep *dep;
+		struct dwc3_gadget_ep_cmd_params params;
+		int ret;
+
+		dep = dwc->eps[epnum];
+		if (!dep)
+			continue;
+
+		if (!(dep->flags & DWC3_EP_STALL))
+			continue;
+
+		dep->flags &= ~DWC3_EP_STALL;
+
+		memset(&params, 0, sizeof(params));
+		ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
+				DWC3_DEPCMD_CLEARSTALL, &params);
+		WARN_ON_ONCE(ret);
+	}
+}
+
+static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc)
+{
+	u32			reg;
+
+	dev_vdbg(dwc->dev, "%s\n", __func__);
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+	reg &= ~DWC3_DCTL_INITU1ENA;
+	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
+	reg &= ~DWC3_DCTL_INITU2ENA;
+	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+
+	dwc3_disconnect_gadget(dwc);
+	dwc->start_config_issued = false;
+
+	dwc->gadget.speed = USB_SPEED_UNKNOWN;
+	dwc->setup_packet_pending = false;
+}
+
+static void dwc3_gadget_usb3_phy_suspend(struct dwc3 *dwc, int suspend)
+{
+	u32			reg;
+
+	reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+
+	if (suspend)
+		reg |= DWC3_GUSB3PIPECTL_SUSPHY;
+	else
+		reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
+
+	dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg);
+}
+
+static void dwc3_gadget_usb2_phy_suspend(struct dwc3 *dwc, int suspend)
+{
+	u32			reg;
+
+	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+
+	if (suspend)
+		reg |= DWC3_GUSB2PHYCFG_SUSPHY;
+	else
+		reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+
+	dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+}
+
+static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
+{
+	u32			reg;
+
+	dev_vdbg(dwc->dev, "%s\n", __func__);
+
+	/*
+	 * WORKAROUND: DWC3 revisions <1.88a have an issue which
+	 * would cause a missing Disconnect Event if there's a
+	 * pending Setup Packet in the FIFO.
+	 *
+	 * There's no suggested workaround on the official Bug
+	 * report, which states that "unless the driver/application
+	 * is doing any special handling of a disconnect event,
+	 * there is no functional issue".
+	 *
+	 * Unfortunately, it turns out that we _do_ some special
+	 * handling of a disconnect event, namely complete all
+	 * pending transfers, notify gadget driver of the
+	 * disconnection, and so on.
+	 *
+	 * Our suggested workaround is to follow the Disconnect
+	 * Event steps here, instead, based on a setup_packet_pending
+	 * flag. Such flag gets set whenever we have a XferNotReady
+	 * event on EP0 and gets cleared on XferComplete for the
+	 * same endpoint.
+	 *
+	 * Refers to:
+	 *
+	 * STAR#9000466709: RTL: Device : Disconnect event not
+	 * generated if setup packet pending in FIFO
+	 */
+	if (dwc->revision < DWC3_REVISION_188A) {
+		if (dwc->setup_packet_pending)
+			dwc3_gadget_disconnect_interrupt(dwc);
+	}
+
+	/* after reset -> Default State */
+	usb_gadget_set_state(&dwc->gadget, USB_STATE_DEFAULT);
+
+	/* Recent versions support automatic phy suspend and don't need this */
+	if (dwc->revision < DWC3_REVISION_194A) {
+		/* Resume PHYs */
+		dwc3_gadget_usb2_phy_suspend(dwc, false);
+		dwc3_gadget_usb3_phy_suspend(dwc, false);
+	}
+
+	if (dwc->gadget.speed != USB_SPEED_UNKNOWN)
+		dwc3_disconnect_gadget(dwc);
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+	reg &= ~DWC3_DCTL_TSTCTRL_MASK;
+	dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+	dwc->test_mode = false;
+
+	dwc3_stop_active_transfers(dwc);
+	dwc3_clear_stall_all_ep(dwc);
+	dwc->start_config_issued = false;
+
+	/* Reset device address to zero */
+	reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+	reg &= ~(DWC3_DCFG_DEVADDR_MASK);
+	dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+}
+
+static void dwc3_update_ram_clk_sel(struct dwc3 *dwc, u32 speed)
+{
+	u32 reg;
+	u32 usb30_clock = DWC3_GCTL_CLK_BUS;
+
+	/*
+	 * We change the clock only at SuperSpeed, though it is not clear
+	 * why we would want to; maybe it becomes part of the power saving
+	 * plan.
+	 */
+
+	if (speed != DWC3_DSTS_SUPERSPEED)
+		return;
+
+	/*
+	 * RAMClkSel is reset to 0 after USB reset, so it must be reprogrammed
+	 * each time on Connect Done.
+	 */
+	if (!usb30_clock)
+		return;
+
+	reg = dwc3_readl(dwc->regs, DWC3_GCTL);
+	reg |= DWC3_GCTL_RAMCLKSEL(usb30_clock);
+	dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+}
+
+static void dwc3_gadget_phy_suspend(struct dwc3 *dwc, u8 speed)
+{
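+	/* suspend whichever PHY is unused at the current connect speed */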
+	switch (speed) {
+	case USB_SPEED_SUPER:
+		dwc3_gadget_usb2_phy_suspend(dwc, true);
+		break;
+	case USB_SPEED_HIGH:
+	case USB_SPEED_FULL:
+	case USB_SPEED_LOW:
+		dwc3_gadget_usb3_phy_suspend(dwc, true);
+		break;
+	}
+}
+
+static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+{
+	struct dwc3_ep		*dep;
+	int			ret;
+	u32			reg;
+	u8			speed;
+
+	dev_vdbg(dwc->dev, "%s\n", __func__);
+
+	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+	speed = reg & DWC3_DSTS_CONNECTSPD;
+	dwc->speed = speed;
+
+	dwc3_update_ram_clk_sel(dwc, speed);
+
+	switch (speed) {
+	case DWC3_DCFG_SUPERSPEED:
+		/*
+		 * WORKAROUND: DWC3 revisions <1.90a have an issue which
+		 * would cause a missing USB3 Reset event.
+		 *
+		 * In such situations, we should force a USB3 Reset
+		 * event by calling our dwc3_gadget_reset_interrupt()
+		 * routine.
+		 *
+		 * Refers to:
+		 *
+		 * STAR#9000483510: RTL: SS : USB3 reset event may
+		 * not be generated always when the link enters poll
+		 */
+		if (dwc->revision < DWC3_REVISION_190A)
+			dwc3_gadget_reset_interrupt(dwc);
+
+		dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
+		dwc->gadget.ep0->maxpacket = 512;
+		dwc->gadget.speed = USB_SPEED_SUPER;
+		break;
+	case DWC3_DCFG_HIGHSPEED:
+		dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(64);
+		dwc->gadget.ep0->maxpacket = 64;
+		dwc->gadget.speed = USB_SPEED_HIGH;
+		break;
+	case DWC3_DCFG_FULLSPEED2:
+	case DWC3_DCFG_FULLSPEED1:
+		dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(64);
+		dwc->gadget.ep0->maxpacket = 64;
+		dwc->gadget.speed = USB_SPEED_FULL;
+		break;
+	case DWC3_DCFG_LOWSPEED:
+		dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(8);
+		dwc->gadget.ep0->maxpacket = 8;
+		dwc->gadget.speed = USB_SPEED_LOW;
+		break;
+	}
+
+	/* Enable USB2 LPM Capability */
+
+	if ((dwc->revision > DWC3_REVISION_194A)
+			&& (speed != DWC3_DCFG_SUPERSPEED)) {
+		reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+		reg |= DWC3_DCFG_LPM_CAP;
+		dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+
+		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+		reg &= ~(DWC3_DCTL_HIRD_THRES_MASK | DWC3_DCTL_L1_HIBER_EN);
+
+		/*
+		 * TODO: This should be configurable. For now using
+		 * maximum allowed HIRD threshold value of 0b1100
+		 */
+		reg |= DWC3_DCTL_HIRD_THRES(12);
+
+		dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+	}
+
+	/* Recent versions support automatic phy suspend and don't need this */
+	if (dwc->revision < DWC3_REVISION_194A) {
+		/* Suspend unneeded PHY */
+		dwc3_gadget_phy_suspend(dwc, dwc->gadget.speed);
+	}
+
+	dep = dwc->eps[0];
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true);
+	if (ret) {
+		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
+		return;
+	}
+
+	dep = dwc->eps[1];
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true);
+	if (ret) {
+		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
+		return;
+	}
+
+	/*
+	 * Configure PHY via GUSB3PIPECTLn if required.
+	 *
+	 * Update GTXFIFOSIZn
+	 *
+	 * In both cases reset values should be sufficient.
+	 */
+}
+
+static void dwc3_gadget_wakeup_interrupt(struct dwc3 *dwc)
+{
+	dev_vdbg(dwc->dev, "%s\n", __func__);
+
+	/*
+	 * TODO take core out of low power mode when that's
+	 * implemented.
+	 */
+
+	dwc->gadget_driver->resume(&dwc->gadget);
+}
+
+static void dwc3_gadget_linksts_change_interrupt(struct dwc3 *dwc,
+		unsigned int evtinfo)
+{
+	enum dwc3_link_state	next = evtinfo & DWC3_LINK_STATE_MASK;
+	unsigned int		pwropt;
+
+	/*
+	 * WORKAROUND: DWC3 < 2.50a have an issue when configured without
+	 * Hibernation mode enabled which would show up when device detects
+	 * host-initiated U3 exit.
+	 *
+	 * In that case, device will generate a Link State Change Interrupt
+	 * from U3 to RESUME which is only necessary if Hibernation is
+	 * configured in.
+	 *
+	 * There are no functional changes due to such spurious event and we
+	 * just need to ignore it.
+	 *
+	 * Refers to:
+	 *
+	 * STAR#9000570034 RTL: SS Resume event generated in non-Hibernation
+	 * operational mode
+	 */
+	pwropt = DWC3_GHWPARAMS1_EN_PWROPT(dwc->hwparams.hwparams1);
+	if ((dwc->revision < DWC3_REVISION_250A) &&
+			(pwropt != DWC3_GHWPARAMS1_EN_PWROPT_HIB)) {
+		if ((dwc->link_state == DWC3_LINK_STATE_U3) &&
+				(next == DWC3_LINK_STATE_RESUME)) {
+			dev_vdbg(dwc->dev, "ignoring transition U3 -> Resume\n");
+			return;
+		}
+	}
+
+	/*
+	 * WORKAROUND: DWC3 Revisions <1.83a have an issue which, depending
+	 * on the link partner, the USB session might do multiple entry/exit
+	 * of low power states before a transfer takes place.
+	 *
+	 * Due to this problem, we might experience lower throughput. The
+	 * suggested workaround is to disable DCTL[12:9] bits if we're
+	 * transitioning from U1/U2 to U0 and enable those bits again
+	 * after a transfer completes and there are no pending transfers
+	 * on any of the enabled endpoints.
+	 *
+	 * This is the first half of that workaround.
+	 *
+	 * Refers to:
+	 *
+	 * STAR#9000446952: RTL: Device SS : if U1/U2 ->U0 takes >128us
+	 * core send LGO_Ux entering U0
+	 */
+	if (dwc->revision < DWC3_REVISION_183A) {
+		if (next == DWC3_LINK_STATE_U0) {
+			u32	u1u2;
+			u32	reg;
+
+			switch (dwc->link_state) {
+			case DWC3_LINK_STATE_U1:
+			case DWC3_LINK_STATE_U2:
+				reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+				u1u2 = reg & (DWC3_DCTL_INITU2ENA
+						| DWC3_DCTL_ACCEPTU2ENA
+						| DWC3_DCTL_INITU1ENA
+						| DWC3_DCTL_ACCEPTU1ENA);
+
+				if (!dwc->u1u2)
+					dwc->u1u2 = reg & u1u2;
+
+				reg &= ~u1u2;
+
+				dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+				break;
+			default:
+				/* do nothing */
+				break;
+			}
+		}
+	}
+
+	dwc->link_state = next;
+
+	dev_vdbg(dwc->dev, "%s link %d\n", __func__, dwc->link_state);
+}
+
+static void dwc3_gadget_interrupt(struct dwc3 *dwc,
+		const struct dwc3_event_devt *event)
+{
+	switch (event->type) {
+	case DWC3_DEVICE_EVENT_DISCONNECT:
+		dwc3_gadget_disconnect_interrupt(dwc);
+		break;
+	case DWC3_DEVICE_EVENT_RESET:
+		dwc3_gadget_reset_interrupt(dwc);
+		break;
+	case DWC3_DEVICE_EVENT_CONNECT_DONE:
+		dwc3_gadget_conndone_interrupt(dwc);
+		break;
+	case DWC3_DEVICE_EVENT_WAKEUP:
+		dwc3_gadget_wakeup_interrupt(dwc);
+		break;
+	case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE:
+		dwc3_gadget_linksts_change_interrupt(dwc, event->event_info);
+		break;
+	case DWC3_DEVICE_EVENT_EOPF:
+		dev_vdbg(dwc->dev, "End of Periodic Frame\n");
+		break;
+	case DWC3_DEVICE_EVENT_SOF:
+		dev_vdbg(dwc->dev, "Start of Periodic Frame\n");
+		break;
+	case DWC3_DEVICE_EVENT_ERRATIC_ERROR:
+		dev_vdbg(dwc->dev, "Erratic Error\n");
+		break;
+	case DWC3_DEVICE_EVENT_CMD_CMPL:
+		dev_vdbg(dwc->dev, "Command Complete\n");
+		break;
+	case DWC3_DEVICE_EVENT_OVERFLOW:
+		dev_vdbg(dwc->dev, "Overflow\n");
+		break;
+	default:
+		dev_dbg(dwc->dev, "UNKNOWN IRQ %d\n", event->type);
+	}
+}
+
+static void dwc3_process_event_entry(struct dwc3 *dwc,
+		const union dwc3_event *event)
+{
+	/* Endpoint IRQ, handle it and return early */
+	if (event->type.is_devspec == 0) {
+		/* depevt */
+		return dwc3_endpoint_interrupt(dwc, &event->depevt);
+	}
+
+	switch (event->type.type) {
+	case DWC3_EVENT_TYPE_DEV:
+		dwc3_gadget_interrupt(dwc, &event->devt);
+		break;
+	/* REVISIT what to do with Carkit and I2C events ? */
+	default:
+		dev_err(dwc->dev, "UNKNOWN IRQ type %d\n", event->raw);
+	}
+}
+
+static irqreturn_t dwc3_thread_interrupt(int irq, void *_dwc)
+{
+	struct dwc3 *dwc = _dwc;
+	unsigned long flags;
+	irqreturn_t ret = IRQ_NONE;
+	int i;
+
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	for (i = 0; i < dwc->num_event_buffers; i++) {
+		struct dwc3_event_buffer *evt;
+		int			left;
+
+		evt = dwc->ev_buffs[i];
+		left = evt->count;
+
+		if (!(evt->flags & DWC3_EVENT_PENDING))
+			continue;
+
+		while (left > 0) {
+			union dwc3_event event;
+
+			event.raw = *(u32 *) (evt->buf + evt->lpos);
+
+			dwc3_process_event_entry(dwc, &event);
+
+			/*
+			 * FIXME we wrap around correctly to the next entry as
+			 * almost all entries are 4 bytes in size. There is one
+			 * entry of 12 bytes: a regular entry followed by
+			 * 8 bytes of data. ATM it is not clear how things are
+			 * laid out if such an entry lands next to the buffer
+			 * boundary, so worry about that once we try to handle
+			 * it.
+			 */
+			evt->lpos = (evt->lpos + 4) % DWC3_EVENT_BUFFERS_SIZE;
+			left -= 4;
+
+			dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(i), 4);
+		}
+
+		evt->count = 0;
+		evt->flags &= ~DWC3_EVENT_PENDING;
+		ret = IRQ_HANDLED;
+	}
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return ret;
+}
+
+static irqreturn_t dwc3_process_event_buf(struct dwc3 *dwc, u32 buf)
+{
+	struct dwc3_event_buffer *evt;
+	u32 count;
+
+	evt = dwc->ev_buffs[buf];
+
+	count = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(buf));
+	count &= DWC3_GEVNTCOUNT_MASK;
+	if (!count)
+		return IRQ_NONE;
+
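+	/* mark events pending; the threaded handler drains the buffer */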
+	evt->count = count;
+	evt->flags |= DWC3_EVENT_PENDING;
+
+	return IRQ_WAKE_THREAD;
+}
+
+static irqreturn_t dwc3_interrupt(int irq, void *_dwc)
+{
+	struct dwc3			*dwc = _dwc;
+	int				i;
+	irqreturn_t			ret = IRQ_NONE;
+
+	spin_lock(&dwc->lock);
+
+	for (i = 0; i < dwc->num_event_buffers; i++) {
+		irqreturn_t status;
+
+		status = dwc3_process_event_buf(dwc, i);
+		if (status == IRQ_WAKE_THREAD)
+			ret = status;
+	}
+
+	spin_unlock(&dwc->lock);
+
+	return ret;
+}
+
+/**
+ * dwc3_gadget_init - Initializes gadget related registers
+ * @dwc: pointer to our controller context structure
+ *
+ * Returns 0 on success otherwise negative errno.
+ */
+int dwc3_gadget_init(struct dwc3 *dwc)
+{
+	u32					reg;
+	int					ret;
+
+	dwc->ctrl_req = dma_alloc_coherent(dwc->dev, sizeof(*dwc->ctrl_req),
+			&dwc->ctrl_req_addr, GFP_KERNEL);
+	if (!dwc->ctrl_req) {
+		dev_err(dwc->dev, "failed to allocate ctrl request\n");
+		ret = -ENOMEM;
+		goto err0;
+	}
+
+	dwc->ep0_trb = dma_alloc_coherent(dwc->dev, sizeof(*dwc->ep0_trb),
+			&dwc->ep0_trb_addr, GFP_KERNEL);
+	if (!dwc->ep0_trb) {
+		dev_err(dwc->dev, "failed to allocate ep0 trb\n");
+		ret = -ENOMEM;
+		goto err1;
+	}
+
+	dwc->setup_buf = kzalloc(DWC3_EP0_BOUNCE_SIZE, GFP_KERNEL);
+	if (!dwc->setup_buf) {
+		dev_err(dwc->dev, "failed to allocate setup buffer\n");
+		ret = -ENOMEM;
+		goto err2;
+	}
+
+	dwc->ep0_bounce = dma_alloc_coherent(dwc->dev,
+			DWC3_EP0_BOUNCE_SIZE, &dwc->ep0_bounce_addr,
+			GFP_KERNEL);
+	if (!dwc->ep0_bounce) {
+		dev_err(dwc->dev, "failed to allocate ep0 bounce buffer\n");
+		ret = -ENOMEM;
+		goto err3;
+	}
+
+	dwc->gadget.ops			= &dwc3_gadget_ops;
+	dwc->gadget.max_speed		= USB_SPEED_SUPER;
+	dwc->gadget.speed		= USB_SPEED_UNKNOWN;
+	dwc->gadget.sg_supported	= true;
+	dwc->gadget.name		= "dwc3-gadget";
+
+	/*
+	 * REVISIT: Here we should clear all pending IRQs to be
+	 * sure we're starting from a well known location.
+	 */
+
+	ret = dwc3_gadget_init_endpoints(dwc);
+	if (ret)
+		goto err4;
+
+	reg = dwc3_readl(dwc->regs, DWC3_DCFG);
+	reg |= DWC3_DCFG_LPM_CAP;
+	dwc3_writel(dwc->regs, DWC3_DCFG, reg);
+
+	/* Enable USB2 LPM and automatic phy suspend only on recent versions */
+	if (dwc->revision >= DWC3_REVISION_194A) {
+		dwc3_gadget_usb2_phy_suspend(dwc, false);
+		dwc3_gadget_usb3_phy_suspend(dwc, false);
+	}
+
+	ret = usb_add_gadget_udc(dwc->dev, &dwc->gadget);
+	if (ret) {
+		dev_err(dwc->dev, "failed to register udc\n");
+		goto err5;
+	}
+
+	return 0;
+
+err5:
+	dwc3_gadget_free_endpoints(dwc);
+
+err4:
+	dma_free_coherent(dwc->dev, DWC3_EP0_BOUNCE_SIZE,
+			dwc->ep0_bounce, dwc->ep0_bounce_addr);
+
+err3:
+	kfree(dwc->setup_buf);
+
+err2:
+	dma_free_coherent(dwc->dev, sizeof(*dwc->ep0_trb),
+			dwc->ep0_trb, dwc->ep0_trb_addr);
+
+err1:
+	dma_free_coherent(dwc->dev, sizeof(*dwc->ctrl_req),
+			dwc->ctrl_req, dwc->ctrl_req_addr);
+
+err0:
+	return ret;
+}
+
+/* -------------------------------------------------------------------------- */
+
+void dwc3_gadget_exit(struct dwc3 *dwc)
+{
+	usb_del_gadget_udc(&dwc->gadget);
+
+	dwc3_gadget_free_endpoints(dwc);
+
+	dma_free_coherent(dwc->dev, DWC3_EP0_BOUNCE_SIZE,
+			dwc->ep0_bounce, dwc->ep0_bounce_addr);
+
+	kfree(dwc->setup_buf);
+
+	dma_free_coherent(dwc->dev, sizeof(*dwc->ep0_trb),
+			dwc->ep0_trb, dwc->ep0_trb_addr);
+
+	dma_free_coherent(dwc->dev, sizeof(*dwc->ctrl_req),
+			dwc->ctrl_req, dwc->ctrl_req_addr);
+}
+
+int dwc3_gadget_prepare(struct dwc3 *dwc)
+{
+	if (dwc->pullups_connected)
+		dwc3_gadget_disable_irq(dwc);
+
+	return 0;
+}
+
+void dwc3_gadget_complete(struct dwc3 *dwc)
+{
+	if (dwc->pullups_connected) {
+		dwc3_gadget_enable_irq(dwc);
+		dwc3_gadget_run_stop(dwc, true);
+	}
+}
+
+int dwc3_gadget_suspend(struct dwc3 *dwc)
+{
+	__dwc3_gadget_ep_disable(dwc->eps[0]);
+	__dwc3_gadget_ep_disable(dwc->eps[1]);
+
+	dwc->dcfg = dwc3_readl(dwc->regs, DWC3_DCFG);
+
+	return 0;
+}
+
+int dwc3_gadget_resume(struct dwc3 *dwc)
+{
+	struct dwc3_ep		*dep;
+	int			ret;
+
+	/* Start with SuperSpeed Default */
+	dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
+
+	dep = dwc->eps[0];
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	if (ret)
+		goto err0;
+
+	dep = dwc->eps[1];
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	if (ret)
+		goto err1;
+
+	/* begin to receive SETUP packets */
+	dwc->ep0state = EP0_SETUP_PHASE;
+	dwc3_ep0_out_start(dwc);
+
+	dwc3_writel(dwc->regs, DWC3_DCFG, dwc->dcfg);
+
+	return 0;
+
+err1:
+	__dwc3_gadget_ep_disable(dwc->eps[0]);
+
+err0:
+	return ret;
+}
diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
new file mode 100644
index 0000000..99e6d72
--- /dev/null
+++ b/drivers/usb/dwc3/gadget.h
@@ -0,0 +1,194 @@
+/**
+ * gadget.h - DesignWare USB3 DRD Gadget Header
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DRIVERS_USB_DWC3_GADGET_H
+#define __DRIVERS_USB_DWC3_GADGET_H
+
+#include <linux/list.h>
+#include <linux/usb/gadget.h>
+#include "io.h"
+
+struct dwc3;
+#define to_dwc3_ep(ep)		(container_of(ep, struct dwc3_ep, endpoint))
+#define gadget_to_dwc(g)	(container_of(g, struct dwc3, gadget))
+
+/* DEPCFG parameter 1 */
+#define DWC3_DEPCFG_INT_NUM(n)		((n) << 0)
+#define DWC3_DEPCFG_XFER_COMPLETE_EN	(1 << 8)
+#define DWC3_DEPCFG_XFER_IN_PROGRESS_EN	(1 << 9)
+#define DWC3_DEPCFG_XFER_NOT_READY_EN	(1 << 10)
+#define DWC3_DEPCFG_FIFO_ERROR_EN	(1 << 11)
+#define DWC3_DEPCFG_STREAM_EVENT_EN	(1 << 13)
+#define DWC3_DEPCFG_BINTERVAL_M1(n)	((n) << 16)
+#define DWC3_DEPCFG_STREAM_CAPABLE	(1 << 24)
+#define DWC3_DEPCFG_EP_NUMBER(n)	((n) << 25)
+#define DWC3_DEPCFG_BULK_BASED		(1 << 30)
+#define DWC3_DEPCFG_FIFO_BASED		(1 << 31)
+
+/* DEPCFG parameter 0 */
+#define DWC3_DEPCFG_EP_TYPE(n)		((n) << 1)
+#define DWC3_DEPCFG_MAX_PACKET_SIZE(n)	((n) << 3)
+#define DWC3_DEPCFG_FIFO_NUMBER(n)	((n) << 17)
+#define DWC3_DEPCFG_BURST_SIZE(n)	((n) << 22)
+#define DWC3_DEPCFG_DATA_SEQ_NUM(n)	((n) << 26)
+/* This applies for core versions earlier than 1.94a */
+#define DWC3_DEPCFG_IGN_SEQ_NUM		(1 << 31)
+/* These apply for core versions 1.94a and later */
+#define DWC3_DEPCFG_ACTION_INIT		(0 << 30)
+#define DWC3_DEPCFG_ACTION_RESTORE	(1 << 30)
+#define DWC3_DEPCFG_ACTION_MODIFY	(2 << 30)
+
+/* DEPXFERCFG parameter 0 */
+#define DWC3_DEPXFERCFG_NUM_XFER_RES(n)	((n) & 0xffff)
+
+struct dwc3_gadget_ep_cmd_params {
+	u32	param2;
+	u32	param1;
+	u32	param0;
+};
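+
+/*
+ * Example (sketch only, not code the driver uses as-is): configuring a
+ * bulk endpoint with a 512-byte maximum packet size would combine the
+ * fields above roughly as
+ *
+ *	struct dwc3_gadget_ep_cmd_params params = { 0 };
+ *
+ *	params.param0 = DWC3_DEPCFG_EP_TYPE(USB_ENDPOINT_XFER_BULK)
+ *			| DWC3_DEPCFG_MAX_PACKET_SIZE(512);
+ *	params.param1 = DWC3_DEPCFG_XFER_COMPLETE_EN
+ *			| DWC3_DEPCFG_EP_NUMBER(dep->number);
+ *	dwc3_send_gadget_ep_cmd(dwc, dep->number,
+ *			DWC3_DEPCMD_SETEPCONFIG, &params);
+ *
+ * where dep/dwc are assumed locals and DWC3_DEPCMD_SETEPCONFIG is
+ * assumed to come from core.h.
+ */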
+
+/* -------------------------------------------------------------------------- */
+
+#define to_dwc3_request(r)	(container_of(r, struct dwc3_request, request))
+
+static inline struct dwc3_request *next_request(struct list_head *list)
+{
+	if (list_empty(list))
+		return NULL;
+
+	return list_first_entry(list, struct dwc3_request, list);
+}
+
+static inline void dwc3_gadget_move_request_queued(struct dwc3_request *req)
+{
+	struct dwc3_ep		*dep = req->dep;
+
+	req->queued = true;
+	list_move_tail(&req->list, &dep->req_queued);
+}
+
+void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
+		int status);
+
+int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode);
+int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state);
+
+void dwc3_ep0_interrupt(struct dwc3 *dwc,
+		const struct dwc3_event_depevt *event);
+void dwc3_ep0_out_start(struct dwc3 *dwc);
+int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value);
+int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
+		gfp_t gfp_flags);
+int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value);
+int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+		unsigned cmd, struct dwc3_gadget_ep_cmd_params *params);
+int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param);
+
+/**
+ * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW
+ * @dwc: DesignWare USB3 Pointer
+ * @number: DWC endpoint number
+ *
+ * Caller should take care of locking
+ */
+static inline u32 dwc3_gadget_ep_get_transfer_index(struct dwc3 *dwc, u8 number)
+{
+	u32			res_id;
+
+	res_id = dwc3_readl(dwc->regs, DWC3_DEPCMD(number));
+
+	return DWC3_DEPCMD_GET_RSC_IDX(res_id);
+}
+
+/**
+ * dwc3_gadget_event_string - returns event name
+ * @event: the event code
+ */
+static inline const char *dwc3_gadget_event_string(u8 event)
+{
+	switch (event) {
+	case DWC3_DEVICE_EVENT_DISCONNECT:
+		return "Disconnect";
+	case DWC3_DEVICE_EVENT_RESET:
+		return "Reset";
+	case DWC3_DEVICE_EVENT_CONNECT_DONE:
+		return "Connection Done";
+	case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE:
+		return "Link Status Change";
+	case DWC3_DEVICE_EVENT_WAKEUP:
+		return "WakeUp";
+	case DWC3_DEVICE_EVENT_EOPF:
+		return "End-Of-Frame";
+	case DWC3_DEVICE_EVENT_SOF:
+		return "Start-Of-Frame";
+	case DWC3_DEVICE_EVENT_ERRATIC_ERROR:
+		return "Erratic Error";
+	case DWC3_DEVICE_EVENT_CMD_CMPL:
+		return "Command Complete";
+	case DWC3_DEVICE_EVENT_OVERFLOW:
+		return "Overflow";
+	}
+
+	return "UNKNOWN";
+}
+
+/**
+ * dwc3_ep_event_string - returns event name
+ * @event: the event code
+ */
+static inline const char *dwc3_ep_event_string(u8 event)
+{
+	switch (event) {
+	case DWC3_DEPEVT_XFERCOMPLETE:
+		return "Transfer Complete";
+	case DWC3_DEPEVT_XFERINPROGRESS:
+		return "Transfer In-Progress";
+	case DWC3_DEPEVT_XFERNOTREADY:
+		return "Transfer Not Ready";
+	case DWC3_DEPEVT_RXTXFIFOEVT:
+		return "FIFO";
+	case DWC3_DEPEVT_STREAMEVT:
+		return "Stream";
+	case DWC3_DEPEVT_EPCMDCMPLT:
+		return "Endpoint Command Complete";
+	}
+
+	return "UNKNOWN";
+}
+
+#endif /* __DRIVERS_USB_DWC3_GADGET_H */
diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
new file mode 100644
index 0000000..0fa1846
--- /dev/null
+++ b/drivers/usb/dwc3/host.c
@@ -0,0 +1,87 @@
+/**
+ * host.c - DesignWare USB3 DRD Controller Host Glue
+ *
+ * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/platform_device.h>
+
+#include "core.h"
+
+int dwc3_host_init(struct dwc3 *dwc)
+{
+	struct platform_device	*xhci;
+	int			ret;
+
+	xhci = platform_device_alloc("xhci-hcd", PLATFORM_DEVID_AUTO);
+	if (!xhci) {
+		dev_err(dwc->dev, "couldn't allocate xHCI device\n");
+		ret = -ENOMEM;
+		goto err0;
+	}
+
+	dma_set_coherent_mask(&xhci->dev, dwc->dev->coherent_dma_mask);
+
+	xhci->dev.parent	= dwc->dev;
+	xhci->dev.dma_mask	= dwc->dev->dma_mask;
+	xhci->dev.dma_parms	= dwc->dev->dma_parms;
+
+	dwc->xhci = xhci;
+
+	ret = platform_device_add_resources(xhci, dwc->xhci_resources,
+						DWC3_XHCI_RESOURCES_NUM);
+	if (ret) {
+		dev_err(dwc->dev, "couldn't add resources to xHCI device\n");
+		goto err1;
+	}
+
+	ret = platform_device_add(xhci);
+	if (ret) {
+		dev_err(dwc->dev, "failed to register xHCI device\n");
+		goto err1;
+	}
+
+	return 0;
+
+err1:
+	platform_device_put(xhci);
+
+err0:
+	return ret;
+}
+
+void dwc3_host_exit(struct dwc3 *dwc)
+{
+	platform_device_unregister(dwc->xhci);
+}
diff --git a/drivers/usb/dwc3/io.h b/drivers/usb/dwc3/io.h
new file mode 100644
index 0000000..a50f76b
--- /dev/null
+++ b/drivers/usb/dwc3/io.h
@@ -0,0 +1,66 @@
+/**
+ * io.h - DesignWare USB3 DRD IO Header
+ *
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DRIVERS_USB_DWC3_IO_H
+#define __DRIVERS_USB_DWC3_IO_H
+
+#include <linux/io.h>
+
+#include "core.h"
+
+static inline u32 dwc3_readl(void __iomem *base, u32 offset)
+{
+	/*
+	 * We requested the mem region starting from the Globals address
+	 * space, see dwc3_probe in core.c.
+	 * However, the offsets are given starting from xHCI address space.
+	 */
+	return readl(base + (offset - DWC3_GLOBALS_REGS_START));
+}
+
+static inline void dwc3_writel(void __iomem *base, u32 offset, u32 value)
+{
+	/*
+	 * We requested the mem region starting from the Globals address
+	 * space, see dwc3_probe in core.c.
+	 * However, the offsets are given starting from xHCI address space.
+	 */
+	writel(value, base + (offset - DWC3_GLOBALS_REGS_START));
+}
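+
+/*
+ * Usage sketch: with the mapping above, reading the controller
+ * ID/revision register (an xHCI-space offset defined in core.h) is
+ *
+ *	u32 revision = dwc3_readl(dwc->regs, DWC3_GSNPSID);
+ *
+ * even though dwc->regs was mapped starting at the Globals region.
+ */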
+
+#endif /* __DRIVERS_USB_DWC3_IO_H */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 4/7] USB: dwc3: dwc3 code adaption for uBoot
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
                   ` (2 preceding siblings ...)
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 3/7] USB: Initial kernel back port of the dwc3 kernel code Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 5/7] omap5: usb: Add usb otg clocks and enable Dan Murphy
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC (permalink / raw)
  To: u-boot

All code not applicable to uBoot is ifdef'd out with
#ifndef __UBOOT__, as is done in the musb-new directory.

This code has not been fully debugged or exercised.
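
As a rough illustration (sketch only), each back-ported file is fenced
like this:

	#define __UBOOT__
	#ifndef __UBOOT__
	#include <linux/slab.h>		/* Linux-only headers */
	#else
	#include <common.h>		/* uBoot replacements */
	#include <linux/usb/linux-compat.h>
	#endif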

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 Makefile                      |    1 +
 drivers/usb/dwc3/Makefile     |   53 ++++++
 drivers/usb/dwc3/core.c       |   90 +++++++++-
 drivers/usb/dwc3/core.h       |   53 +++++-
 drivers/usb/dwc3/dwc3-omap.c  |   28 ++-
 drivers/usb/dwc3/dwc3-omap.h  |   41 +++++
 drivers/usb/dwc3/dwc3-uboot.c |  384 +++++++++++++++++++++++++++++++++++++++++
 drivers/usb/dwc3/ep0.c        |   31 +++-
 drivers/usb/dwc3/gadget.c     |   75 +++++++-
 drivers/usb/dwc3/gadget.h     |    2 +
 drivers/usb/dwc3/host.c       |   27 ++-
 drivers/usb/dwc3/io.h         |   15 ++
 12 files changed, 779 insertions(+), 21 deletions(-)
 create mode 100644 drivers/usb/dwc3/Makefile
 create mode 100644 drivers/usb/dwc3/dwc3-omap.h
 create mode 100644 drivers/usb/dwc3/dwc3-uboot.c

diff --git a/Makefile b/Makefile
index fdaddb9..e4a6264 100644
--- a/Makefile
+++ b/Makefile
@@ -327,6 +327,7 @@ LIBS-y += drivers/usb/gadget/libusb_gadget.o
 LIBS-y += drivers/usb/host/libusb_host.o
 LIBS-y += drivers/usb/musb/libusb_musb.o
 LIBS-y += drivers/usb/musb-new/libusb_musb-new.o
+LIBS-y += drivers/usb/dwc3/libusb_dwc3.o
 LIBS-y += drivers/usb/phy/libusb_phy.o
 LIBS-y += drivers/usb/ulpi/libusb_ulpi.o
 LIBS-y += drivers/video/libvideo.o
diff --git a/drivers/usb/dwc3/Makefile b/drivers/usb/dwc3/Makefile
new file mode 100644
index 0000000..0d589cc
--- /dev/null
+++ b/drivers/usb/dwc3/Makefile
@@ -0,0 +1,53 @@
+#
+# (C) Copyright 2013
+# Texas Instruments Incorporated.
+#
+# Author: Dan Murphy <dmurphy@ti.com>
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 2 of
+# the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.	See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+# MA 02111-1307 USA
+#
+
+include $(TOPDIR)/config.mk
+
+LIB	:= $(obj)libusb_dwc3.o
+
+COBJS-$(CONFIG_USB_DWC3) += core.o dwc3-uboot.o
+COBJS-$(CONFIG_USB_DWC3_GADGET) += gadget.o ep0.o
+COBJS-$(CONFIG_USB_DWC3_HOST) += host.o
+COBJS-$(CONFIG_USB_DWC3_OMAP) += dwc3-omap.o
+
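+# The back-ported sources still carry unused variables and labels inside
+# the __UBOOT__ fencing, so silence those warnings for now.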
+CFLAGS_NO_WARN := $(call cc-option,-Wno-unused-variable) \
+			$(call cc-option,-Wno-unused-label)
+CFLAGS += $(CFLAGS_NO_WARN)
+
+COBJS	:= $(COBJS-y)
+SRCS	:= $(COBJS:.o=.c)
+OBJS	:= $(addprefix $(obj),$(COBJS))
+
+all:	$(LIB)
+
+$(LIB):	$(obj).depend $(OBJS)
+	$(call cmd_link_o_target, $(OBJS))
+
+#########################################################################
+
+# defines $(obj).depend target
+include $(SRCTREE)/rules.mk
+
+sinclude $(obj).depend
+
+#########################################################################
+
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index c35d49d..1bf1882 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -36,6 +38,9 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
@@ -51,14 +56,28 @@
 #include <linux/of.h>
 
 #include <linux/usb/otg.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+
+#else
+#include <common.h>
+
+#include <linux/err.h>
 #include <linux/usb/ch9.h>
 #include <linux/usb/gadget.h>
+#include <linux/usb/linux-compat.h>
+
+#endif
 
 #include "core.h"
 #include "gadget.h"
 #include "io.h"
 
+#ifndef __UBOOT__
+/* TODO: Need to move over the debug files */
 #include "debug.h"
+#endif
 
 static char *maximum_speed = "super";
 module_param(maximum_speed, charp, 0);
@@ -98,9 +117,11 @@ static void dwc3_core_soft_reset(struct dwc3 *dwc)
 	reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
 	reg |= DWC3_GUSB2PHYCFG_PHYSOFTRST;
 	dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
-
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 	usb_phy_init(dwc->usb2_phy);
 	usb_phy_init(dwc->usb3_phy);
+#endif
 	mdelay(100);
 
 	/* Clear USB3 PHY reset */
@@ -145,7 +166,11 @@ static struct dwc3_event_buffer *dwc3_alloc_one_event_buffer(struct dwc3 *dwc,
 {
 	struct dwc3_event_buffer	*evt;
 
+#ifndef __UBOOT__
 	evt = devm_kzalloc(dwc->dev, sizeof(*evt), GFP_KERNEL);
+#else
+	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
+#endif
 	if (!evt)
 		return ERR_PTR(-ENOMEM);
 
@@ -163,7 +188,11 @@ static struct dwc3_event_buffer *dwc3_alloc_one_event_buffer(struct dwc3 *dwc,
  * dwc3_free_event_buffers - frees all allocated event buffers
  * @dwc: Pointer to our controller context structure
  */
+#ifndef __UBOOT__
 static void dwc3_free_event_buffers(struct dwc3 *dwc)
+#else
+void dwc3_free_event_buffers(struct dwc3 *dwc)
+#endif
 {
 	struct dwc3_event_buffer	*evt;
 	int i;
@@ -183,16 +212,23 @@ static void dwc3_free_event_buffers(struct dwc3 *dwc)
  * Returns 0 on success otherwise negative errno. In the error case, dwc
  * may contain some buffers allocated but not all which were requested.
  */
+#ifndef __UBOOT__
 static int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned length)
+#else
+int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned length)
+#endif
 {
 	int			num;
 	int			i;
 
 	num = DWC3_NUM_INT(dwc->hwparams.hwparams1);
 	dwc->num_event_buffers = num;
-
+#ifndef __UBOOT__
 	dwc->ev_buffs = devm_kzalloc(dwc->dev, sizeof(*dwc->ev_buffs) * num,
 			GFP_KERNEL);
+#else
+	dwc->ev_buffs = kzalloc(sizeof(*dwc->ev_buffs) * num, GFP_KERNEL);
+#endif
 	if (!dwc->ev_buffs) {
 		dev_err(dwc->dev, "can't allocate event buffers array\n");
 		return -ENOMEM;
@@ -218,7 +254,11 @@ static int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned length)
  *
  * Returns 0 on success otherwise negative errno.
  */
+#ifndef __UBOOT__
 static int dwc3_event_buffers_setup(struct dwc3 *dwc)
+#else
+int dwc3_event_buffers_setup(struct dwc3 *dwc)
+#endif
 {
 	struct dwc3_event_buffer	*evt;
 	int				n;
@@ -242,8 +282,11 @@ static int dwc3_event_buffers_setup(struct dwc3 *dwc)
 
 	return 0;
 }
-
+#ifndef __UBOOT__
 static void dwc3_event_buffers_cleanup(struct dwc3 *dwc)
+#else
+void dwc3_event_buffers_cleanup(struct dwc3 *dwc)
+#endif
 {
 	struct dwc3_event_buffer	*evt;
 	int				n;
@@ -270,8 +313,11 @@ static void dwc3_core_num_eps(struct dwc3 *dwc)
 	dev_vdbg(dwc->dev, "found %d IN and %d OUT endpoints\n",
 			dwc->num_in_eps, dwc->num_out_eps);
 }
-
+#ifndef __UBOOT__
 static void dwc3_cache_hwparams(struct dwc3 *dwc)
+#else
+void dwc3_cache_hwparams(struct dwc3 *dwc)
+#endif
 {
 	struct dwc3_hwparams	*parms = &dwc->hwparams;
 
@@ -292,7 +338,11 @@ static void dwc3_cache_hwparams(struct dwc3 *dwc)
  *
  * Returns 0 on success otherwise negative errno.
  */
+#ifndef __UBOOT__
 static int dwc3_core_init(struct dwc3 *dwc)
+#else
+int dwc3_core_init(struct dwc3 *dwc)
+#endif
 {
 	unsigned long		timeout;
 	u32			reg;
@@ -308,14 +358,25 @@ static int dwc3_core_init(struct dwc3 *dwc)
 	dwc->revision = reg;
 
 	/* issue device SoftReset too */
+#ifndef __UBOOT__
 	timeout = jiffies + msecs_to_jiffies(500);
+#else
+	timeout = 500;
+#endif
 	dwc3_writel(dwc->regs, DWC3_DCTL, DWC3_DCTL_CSFTRST);
 	do {
 		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
 		if (!(reg & DWC3_DCTL_CSFTRST))
 			break;
 
+#ifndef __UBOOT__
 		if (time_after(jiffies, timeout)) {
+#else
+		mdelay(1);
+		timeout--;
+
+		if (!timeout) {
+#endif
 			dev_err(dwc->dev, "Reset Timed Out\n");
 			ret = -ETIMEDOUT;
 			goto err0;
@@ -356,13 +417,21 @@ static int dwc3_core_init(struct dwc3 *dwc)
 err0:
 	return ret;
 }
-
+#ifndef __UBOOT__
 static void dwc3_core_exit(struct dwc3 *dwc)
+#else
+void dwc3_core_exit(struct dwc3 *dwc)
+#endif
 {
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 	usb_phy_shutdown(dwc->usb2_phy);
 	usb_phy_shutdown(dwc->usb3_phy);
+#endif
 }
 
+#ifndef __UBOOT__
+
 #define DWC3_ALIGN_MASK		(16 - 1)
 
 static int dwc3_probe(struct platform_device *pdev)
@@ -699,10 +768,11 @@ static int dwc3_suspend(struct device *dev)
 
 	dwc->gctl = dwc3_readl(dwc->regs, DWC3_GCTL);
 	spin_unlock_irqrestore(&dwc->lock, flags);
-
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 	usb_phy_shutdown(dwc->usb3_phy);
 	usb_phy_shutdown(dwc->usb2_phy);
-
+#endif
 	return 0;
 }
 
@@ -710,9 +780,11 @@ static int dwc3_resume(struct device *dev)
 {
 	struct dwc3	*dwc = dev_get_drvdata(dev);
 	unsigned long	flags;
-
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 	usb_phy_init(dwc->usb3_phy);
 	usb_phy_init(dwc->usb2_phy);
+#endif
 	msleep(100);
 
 	spin_lock_irqsave(&dwc->lock, flags);
@@ -773,6 +845,8 @@ static struct platform_driver dwc3_driver = {
 
 module_platform_driver(dwc3_driver);
 
+#endif /* __UBOOT__ */
+
 MODULE_ALIAS("platform:dwc3");
 MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>");
 MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index b69d322..43e81f9 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -39,6 +41,8 @@
 #ifndef __DRIVERS_USB_DWC3_CORE_H
 #define __DRIVERS_USB_DWC3_CORE_H
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/device.h>
 #include <linux/spinlock.h>
 #include <linux/ioport.h>
@@ -50,6 +54,15 @@
 #include <linux/usb/ch9.h>
 #include <linux/usb/gadget.h>
 
+#else
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+#include <linux/compiler.h>
+#include <linux/usb/linux-compat.h>
+
+#endif
+
 /* Global constants */
 #define DWC3_EP0_BOUNCE_SIZE	512
 #define DWC3_ENDPOINTS_NUM	32
@@ -676,7 +689,9 @@ struct dwc3 {
 	struct device		*dev;
 
 	struct platform_device	*xhci;
+#ifndef __UBOOT__
 	struct resource		xhci_resources[DWC3_XHCI_RESOURCES_NUM];
+#endif
 
 	struct dwc3_event_buffer **ev_buffs;
 	struct dwc3_ep		*eps[DWC3_ENDPOINTS_NUM];
@@ -889,7 +904,15 @@ union dwc3_event {
 void dwc3_set_mode(struct dwc3 *dwc, u32 mode);
 int dwc3_gadget_resize_tx_fifos(struct dwc3 *dwc);
 
+#ifndef __UBOOT__
+/* TODO rework this for uboot */
 #if IS_ENABLED(CONFIG_USB_DWC3_HOST) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
+#endif
+#endif /* __UBOOT__ */
+
+#if defined(CONFIG_USB_DWC3_HOST) || \
+	defined(CONFIG_USB_DWC3_DUAL_ROLE)
+
 int dwc3_host_init(struct dwc3 *dwc);
 void dwc3_host_exit(struct dwc3 *dwc);
 #else
@@ -899,7 +922,15 @@ static inline void dwc3_host_exit(struct dwc3 *dwc)
 { }
 #endif
 
+#ifndef __UBOOT__
+/* TODO rework this for uboot */
 #if IS_ENABLED(CONFIG_USB_DWC3_GADGET) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
+#endif
+#endif /* __UBOOT__ */
+
+#if defined(CONFIG_USB_DWC3_GADGET) || \
+	defined(CONFIG_USB_DWC3_DUAL_ROLE)
+
 int dwc3_gadget_init(struct dwc3 *dwc);
 void dwc3_gadget_exit(struct dwc3 *dwc);
 #else
@@ -910,11 +941,20 @@ static inline void dwc3_gadget_exit(struct dwc3 *dwc)
 #endif
 
 /* power management interface */
+#ifndef __UBOOT__
+/* TODO rework this for uboot */
 #if !IS_ENABLED(CONFIG_USB_DWC3_HOST)
+#endif
+#endif /* __UBOOT__ */
+#if defined(CONFIG_USB_DWC3_HOST)
+
 int dwc3_gadget_prepare(struct dwc3 *dwc);
 void dwc3_gadget_complete(struct dwc3 *dwc);
 int dwc3_gadget_suspend(struct dwc3 *dwc);
 int dwc3_gadget_resume(struct dwc3 *dwc);
 #else
 static inline int dwc3_gadget_prepare(struct dwc3 *dwc)
 {
@@ -934,6 +974,17 @@ static inline int dwc3_gadget_resume(struct dwc3 *dwc)
 {
 	return 0;
 }
-#endif /* !IS_ENABLED(CONFIG_USB_DWC3_HOST) */
+#endif /* CONFIG_USB_DWC3_HOST */
+
+#ifdef __UBOOT__
+int dwc3_core_init(struct dwc3 *dwc);
+int dwc3_event_buffers_setup(struct dwc3 *dwc);
+void dwc3_core_exit(struct dwc3 *dwc);
+int dwc3_alloc_event_buffers(struct dwc3 *dwc, unsigned length);
+void dwc3_free_event_buffers(struct dwc3 *dwc);
+void dwc3_event_buffers_cleanup(struct dwc3 *dwc);
+void dwc3_cache_hwparams(struct dwc3 *dwc);
+#endif
 
 #endif /* __DRIVERS_USB_DWC3_CORE_H */
diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
index 34638b9..3dc31d2 100644
--- a/drivers/usb/dwc3/dwc3-omap.c
+++ b/drivers/usb/dwc3/dwc3-omap.c
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -36,6 +38,9 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
@@ -53,6 +58,16 @@
 
 #include <linux/usb/otg.h>
 
+#else
+
+#include <linux/usb/linux-compat.h>
+#include <usb/lin_gadget_compat.h>
+
+#include "io.h"
+#include "dwc3-omap.h"
+
+#endif
+
 /*
  * All these registers belong to OMAP's Wrapper around the
  * DesignWare USB3 Core.
@@ -144,8 +159,11 @@ int dwc3_omap_mailbox(enum omap_dwc3_vbus_id_status status)
 	struct dwc3_omap	*omap = _omap;
 
 	if (!omap)
+#ifndef __UBOOT__
 		return -EPROBE_DEFER;
-
+#else
+		return -EINVAL;
+#endif
 	switch (status) {
 	case OMAP_DWC3_ID_GROUND:
 		dev_dbg(omap->dev, "ID GND\n");
@@ -243,6 +261,7 @@ static irqreturn_t dwc3_omap_interrupt(int irq, void *_omap)
 	return IRQ_HANDLED;
 }
 
+#ifndef __UBOOT__
 static int dwc3_omap_remove_core(struct device *dev, void *c)
 {
 	struct platform_device *pdev = to_platform_device(dev);
@@ -251,8 +270,13 @@ static int dwc3_omap_remove_core(struct device *dev, void *c)
 
 	return 0;
 }
+#endif
 
+#ifndef __UBOOT__
 static void dwc3_omap_enable_irqs(struct dwc3_omap *omap)
+#else
+void dwc3_omap_enable_irqs(struct dwc3_omap *omap)
+#endif
 {
 	u32			reg;
 
@@ -280,6 +304,7 @@ static void dwc3_omap_disable_irqs(struct dwc3_omap *omap)
 	dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_SET_0, 0x00);
 }
 
+#ifndef __UBOOT__
 static u64 dwc3_omap_dma_mask = DMA_BIT_MASK(32);
 
 static int dwc3_omap_probe(struct platform_device *pdev)
@@ -474,6 +499,7 @@ static struct platform_driver dwc3_omap_driver = {
 };
 
 module_platform_driver(dwc3_omap_driver);
+#endif /* __UBOOT__ */
 
 MODULE_ALIAS("platform:omap-dwc3");
 MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>");
diff --git a/drivers/usb/dwc3/dwc3-omap.h b/drivers/usb/dwc3/dwc3-omap.h
new file mode 100644
index 0000000..e80a89f
--- /dev/null
+++ b/drivers/usb/dwc3/dwc3-omap.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2013 by Texas Instruments
+ *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
+ * The Inventra Controller Driver for Linux is free software; you
+ * can redistribute it and/or modify it under the terms of the GNU
+ * General Public License version 2 as published by the Free Software
+ * Foundation.
+ */
+
+#ifndef __DWC3_OMAP_H__
+#define __DWC3_OMAP_H__
+
+enum omap_dwc3_vbus_id_status {
+	OMAP_DWC3_UNKNOWN = 0,
+	OMAP_DWC3_ID_GROUND,
+	OMAP_DWC3_ID_FLOAT,
+	OMAP_DWC3_VBUS_VALID,
+	OMAP_DWC3_VBUS_OFF,
+};
+
+enum dwc3_omap_utmi_mode {
+	DWC3_OMAP_UTMI_MODE_UNKNOWN = 0,
+	DWC3_OMAP_UTMI_MODE_HW,
+	DWC3_OMAP_UTMI_MODE_SW,
+};
+
+#if defined(CONFIG_USB_DWC3)
+extern int dwc3_omap_mailbox(enum omap_dwc3_vbus_id_status status);
+#else
+static inline int dwc3_omap_mailbox(enum omap_dwc3_vbus_id_status status)
+{
+	return -ENODEV;
+}
+#endif
+
+#endif	/* __DWC3_OMAP_H__ */
diff --git a/drivers/usb/dwc3/dwc3-uboot.c b/drivers/usb/dwc3/dwc3-uboot.c
new file mode 100644
index 0000000..3732462
--- /dev/null
+++ b/drivers/usb/dwc3/dwc3-uboot.c
@@ -0,0 +1,384 @@
+/**
+ * dwc3-uboot.c - dwc3 uboot initialization
+ *
+ * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Authors: Dan Murphy <dmurphy@ti.com>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <usb.h>
+#include <linux/usb/linux-compat.h>
+#include <linux/usb/usb-compat.h>
+#include <usb_defs.h>
+
+#include "core.h"
+#include "io.h"
+#include "dwc3-omap.h"
+
+#define DWC3_ALIGN_MASK		(16 - 1)
+
+static char *maximum_speed = "super";
+
+#include <asm/omap_common.h>
+/* IRQS0 BITS */
+#define USBOTGSS_IRQO_COREIRQ_ST		(1 << 0)
+
+/* IRQ1 BITS */
+#define USBOTGSS_IRQ1_DMADISABLECLR		(1 << 17)
+#define USBOTGSS_IRQ1_OEVT			(1 << 16)
+#define USBOTGSS_IRQ1_DRVVBUS_RISE		(1 << 13)
+#define USBOTGSS_IRQ1_CHRGVBUS_RISE		(1 << 12)
+#define USBOTGSS_IRQ1_DISCHRGVBUS_RISE		(1 << 11)
+#define USBOTGSS_IRQ1_IDPULLUP_RISE		(1 << 8)
+#define USBOTGSS_IRQ1_DRVVBUS_FALL		(1 << 5)
+#define USBOTGSS_IRQ1_CHRGVBUS_FALL		(1 << 4)
+#define USBOTGSS_IRQ1_DISCHRGVBUS_FALL		(1 << 3)
+#define USBOTGSS_IRQ1_IDPULLUP_FALL		(1 << 0)
+
+#define USBOTGSS_IRQENABLE_SET_0	0x4A02002c
+#define USBOTGSS_IRQENABLE_SET_1	0x4A02003c
+#define USBOTGSS_SYSCONFIG		0x4A020010
+#define USBOTGSS_IRQSTATUS_0		0x4A020028
+#define USBOTGSS_IRQSTATUS_1		0x4A020038
+#define USBOTGSS_UTMI_OTG_STATUS	0x4A020084
+
+struct usb_dpll_params {
+	u16	m;
+	u8	n;
+	u8	freq:3;
+	u8	sd;
+	u32	mf;
+};
+
+static struct usb_dpll_params omap_usb3_dpll_params[6] = {
+	{1250, 5, 4, 20, 0},		/* 12 MHz */
+	{0, 0, 0, 0, 0},		/* for 13 MHz TBD   */
+	{3125, 20, 4, 20, 0},		/* 16.8 MHz */
+	{1172, 8, 4, 20, 65537},	/* 19.2 MHz */
+	{1250, 12, 4, 20, 0},		/* 26 MHz */
+	{3125, 47, 4, 20, 92843},	/* 38.4 MHz */
+};
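+/* One row per supported sys_clk rate, as noted on the entries above. */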
+
+#define USB3_PHY_PLL_CONFIGURATION1	0x4A084C0C
+#define USB3_PHY_PLL_REGN_MASK		0xFE
+#define USB3_PHY_PLL_REGN_SHIFT		1
+#define USB3_PHY_PLL_REGM_MASK		0x1FFE00
+#define USB3_PHY_PLL_REGM_SHIFT		9
+#define USB3_PHY_PLL_CONFIGURATION2	0x4A084C10
+#define USB3_PHY_PLL_SELFREQDCO_MASK	0xE
+#define USB3_PHY_PLL_SELFREQDCO_SHIFT	1
+#define USB3_PHY_PLL_CONFIGURATION4     0x4A084C20
+#define USB3_PHY_PLL_REGM_F_MASK	0x3FFFF
+#define USB3_PHY_PLL_REGM_F_SHIFT	0
+#define USB3_PHY_PLL_CONFIGURATION3	0x4A084C14
+#define USB3_PHY_PLL_SD_MASK		0x3FC00
+#define USB3_PHY_PLL_SD_SHIFT		9
+#define USB3_PHY_CONTROL_PHY_POWER_USB	0x4A002370
+#define USB3_PWRCTL_CLK_CMD_MASK	0x3FE000
+#define USB3_PWRCTL_CLK_FREQ_MASK	0xFFC
+#define USB3_PHY_PARTIAL_RX_POWERON     (1<<6)
+#define USB3_PHY_TX_RX_POWERON		0x3
+#define USB3_PWRCTL_CLK_CMD_SHIFT	14
+#define USB3_PWRCTL_CLK_FREQ_SHIFT	22
+#define USB3_PHY_PLL_IDLE		1
+
+#define USB3_PHY_PLL_STATUS	0x4A084C04
+#define USB3_PHY_PLL_TICOPWDN   0x10000
+#define USB3_PHY_PLL_LOCK	0x2
+#define CONTROL_DEV_CONF	0x4A002300
+#define CONTROL_DEV_CONF_USBPHY_PD	1
+
+#define USB3_PHY_PLL_GO		0x4A084C08
+#define USB3_PHY_SET_PLL_GO	1
+
+void setup_usb(void)
+{
+	u32			val;
+	u32			retry;
+	u8 vali;
+	writel(0x118, 0x4A0029EC);
+	writel(0x1180000, 0x4A0029F0);
+	writel(0x118, 0x4A0029F4);
+	/* Turn on 32K AON clk */
+	writel(0x100, 0x4A008640);
+	/* Setting USBOTGSS_SYSCONFIG set to NO idle  */
+	val = readl(0x4A020010);
+	writel(0x10034, 0x4A020010);
+
+	/* Set the IRQ Enables */
+	/* Clear status */
+	val = readl(USBOTGSS_UTMI_OTG_STATUS);
+	writel(val, USBOTGSS_UTMI_OTG_STATUS);
+	/* Enable interrupts */
+	writel(0x1, USBOTGSS_IRQENABLE_SET_0);
+	writel(0x13939, USBOTGSS_IRQENABLE_SET_1);
+	/* Check for non zero status */
+	val = readl(USBOTGSS_IRQSTATUS_1);
+	writel(val, USBOTGSS_IRQSTATUS_1);
+	val = readl(USBOTGSS_IRQSTATUS_0);
+	writel(val, USBOTGSS_IRQSTATUS_0);
+}
+
+#ifdef CONFIG_USB_DWC3_HOST
+static struct usb_hcd hcd;
+static enum usb_device_speed host_speed;
+
+#define DWC_HOST_TIMEOUT	0x3ffffff
+
+static struct usb_host_endpoint hep;
+static struct urb urb;
+
+static void dwc3_host_complete_urb(struct urb *urb)
+{
+	urb->dev->status &= ~USB_ST_NOT_PROC;
+	urb->dev->act_len = urb->actual_length;
+
+	return;
+}
+
+static struct urb *construct_urb(struct usb_device *dev, int endpoint_type,
+				unsigned long pipe, void *buffer, int len,
+				struct devrequest *setup, int interval)
+{
+	int epnum = usb_pipeendpoint(pipe);
+	int is_in = usb_pipein(pipe);
+
+	memset(&urb, 0, sizeof(struct urb));
+	memset(&hep, 0, sizeof(struct usb_host_endpoint));
+
+	INIT_LIST_HEAD(&hep.urb_list);
+
+	INIT_LIST_HEAD(&urb.urb_list);
+	urb.ep = &hep;
+	urb.complete = dwc3_host_complete_urb;
+	urb.status = -EINPROGRESS;
+	urb.dev = dev;
+	urb.pipe = pipe;
+	urb.transfer_buffer = buffer;
+	urb.transfer_dma = (unsigned long)buffer;
+	urb.transfer_buffer_length = len;
+	urb.setup_packet = (unsigned char *)setup;
+
+	urb.ep->desc.wMaxPacketSize =
+		__cpu_to_le16(is_in ? dev->epmaxpacketin[epnum] :
+				dev->epmaxpacketout[epnum]);
+	urb.ep->desc.bmAttributes = endpoint_type;
+	urb.ep->desc.bEndpointAddress =
+		(is_in ? USB_DIR_IN : USB_DIR_OUT) | epnum;
+	urb.ep->desc.bInterval = interval;
+
+	return &urb;
+}
+
+static int submit_urb(struct usb_hcd *hcd, struct urb *urb)
+{
+	/* TODO: actual transfer submission is not implemented yet */
+	return 0;
+}
+
+int submit_control_msg(struct usb_device *dev, unsigned long pipe,
+			void *buffer, int len, struct devrequest *setup)
+{
+	struct urb *urb = construct_urb(dev, USB_ENDPOINT_XFER_CONTROL, pipe,
+					buffer, len, setup, 0);
+
+	/* Fix speed for non hub-attached devices */
+	if (!dev->parent)
+		dev->speed = host_speed;
+
+	return submit_urb(&hcd, urb);
+}
+
+int submit_bulk_msg(struct usb_device *dev, unsigned long pipe,
+					void *buffer, int len)
+{
+	struct urb *urb = construct_urb(dev, USB_ENDPOINT_XFER_BULK, pipe,
+					buffer, len, NULL, 0);
+	return submit_urb(&hcd, urb);
+}
+
+int submit_int_msg(struct usb_device *dev, unsigned long pipe,
+				void *buffer, int len, int interval)
+{
+	struct urb *urb = construct_urb(dev, USB_ENDPOINT_XFER_INT, pipe,
+					buffer, len, NULL, interval);
+	return submit_urb(&hcd, urb);
+}
+
+/* The init sequence was abstracted from the core.c probe function */
+int usb_lowlevel_init(int index, void **controller)
+{
+	struct dwc3 *dwc;
+	int ret = -ENOMEM;
+	void *mem;
+	u8 mode;
+
+	mem = kzalloc(sizeof(*dwc) + DWC3_ALIGN_MASK, GFP_KERNEL);
+	if (!mem) {
+		printf("not enough memory\n");
+		return -ENOMEM;
+	}
+	dwc = PTR_ALIGN(mem, DWC3_ALIGN_MASK + 1);
+	dwc->mem = mem;
+	dwc->regs = (void *)0x4A030000;
+
+	if (!strncmp("super", maximum_speed, 5))
+		dwc->maximum_speed = DWC3_DCFG_SUPERSPEED;
+	else if (!strncmp("high", maximum_speed, 4))
+		dwc->maximum_speed = DWC3_DCFG_HIGHSPEED;
+	else if (!strncmp("full", maximum_speed, 4))
+		dwc->maximum_speed = DWC3_DCFG_FULLSPEED1;
+	else if (!strncmp("low", maximum_speed, 3))
+		dwc->maximum_speed = DWC3_DCFG_LOWSPEED;
+	else
+		dwc->maximum_speed = DWC3_DCFG_SUPERSPEED;
+
+	setup_usb();
+
+	dwc3_cache_hwparams(dwc);
+
+	ret = dwc3_alloc_event_buffers(dwc, DWC3_EVENT_BUFFERS_SIZE);
+	if (ret) {
+		dev_err(dwc->dev, "failed to allocate event buffers\n");
+		ret = -ENOMEM;
+		goto err0;
+	}
+
+	ret = dwc3_core_init(dwc);
+	if (ret) {
+		dev_err(dwc->dev, "failed to initialize core\n");
+		goto err0;
+	}
+
+	ret = dwc3_event_buffers_setup(dwc);
+	if (ret) {
+		dev_err(dwc->dev, "failed to setup event buffers\n");
+		goto err1;
+	}
+/* TODO: Figure out how to enable this
+	if (CONFIG_USB_DWC3_HOST)
+		mode = DWC3_MODE_HOST;
+	else if (CONFIG_USB_DWC3_GADGET)
+		mode = DWC3_MODE_DEVICE;
+	else
+		mode = DWC3_MODE_DRD;
+*/
+	mode = DWC3_MODE_HOST;
+	switch (mode) {
+	case DWC3_MODE_DEVICE:
+		dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_DEVICE);
+		ret = dwc3_gadget_init(dwc);
+		if (ret) {
+			printf("failed to initialize gadget\n");
+			goto err2;
+		}
+		break;
+	case DWC3_MODE_HOST:
+		dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_HOST);
+		ret = dwc3_host_init(dwc);
+		if (ret) {
+			printf("failed to initialize host\n");
+			goto err2;
+		}
+		break;
+	case DWC3_MODE_DRD:
+		dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_OTG);
+		ret = dwc3_host_init(dwc);
+		if (ret) {
+			printf("failed to initialize host\n");
+			goto err2;
+		}
+
+		ret = dwc3_gadget_init(dwc);
+		if (ret) {
+			printf("failed to initialize gadget\n");
+			goto err2;
+		}
+		break;
+	default:
+		printf("Unsupported mode of operation %d\n", mode);
+		goto err2;
+	}
+	dwc->mode = mode;
+
+	return 0;
+
+err2:
+	dwc3_event_buffers_cleanup(dwc);
+
+err1:
+	dwc3_core_exit(dwc);
+
+err0:
+	dwc3_free_event_buffers(dwc);
+
+	return ret;
+}
+
+int usb_lowlevel_stop(int index)
+{
+	return 0;
+}
+#endif /* CONFIG_USB_DWC3_HOST */
+
+#ifdef CONFIG_USB_DWC3_GADGET
+int usb_gadget_unregister_driver(struct usb_gadget_driver *driver)
+{
+	return 0;
+}
+
+static struct dwc3 *gadget;
+int usb_gadget_register_driver(struct usb_gadget_driver *driver)
+{
+	int ret;
+
+	if (!driver || /* driver->speed < USB_SPEED_FULL ||*/ !driver->bind ||
+	    !driver->setup) {
+		printf("bad parameter.\n");
+		return -EINVAL;
+	}
+
+	if (!gadget) {
+		printf("Controller uninitialized\n");
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+int usb_gadget_handle_interrupts(void)
+{
+	return 0;
+}
+#endif /* CONFIG_USB_DWC3_GADGET */
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 5acbb94..423301e 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -36,6 +38,9 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
+
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
@@ -50,6 +55,18 @@
 #include <linux/usb/gadget.h>
 #include <linux/usb/composite.h>
 
+#else
+
+#include <common.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+#include <linux/usb/composite.h>
+#include <linux/usb/linux-compat.h>
+#include <usb/lin_gadget_compat.h>
+
+#endif
+
 #include "core.h"
 #include "gadget.h"
 #include "io.h"
@@ -147,7 +164,12 @@ static int __dwc3_gadget_ep0_queue(struct dwc3_ep *dep,
 		direction = !!(dep->flags & DWC3_EP0_DIR_IN);
 
 		if (dwc->ep0state != EP0_DATA_PHASE) {
+
+#ifndef __UBOOT__
 			dev_WARN(dwc->dev, "Unexpected pending request\n");
+#else
+			printf("Unexpected pending request\n");
+#endif
 			return 0;
 		}
 
@@ -902,14 +924,15 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
 			&& (dep->number == 0)) {
 		u32	transfer_size;
 		u32	maxpacket;
-
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 		ret = usb_gadget_map_request(&dwc->gadget, &req->request,
 				dep->number);
 		if (ret) {
 			dev_dbg(dwc->dev, "failed to map request\n");
 			return;
 		}
-
+#endif
 		WARN_ON(req->request.length > DWC3_EP0_BOUNCE_SIZE);
 
 		maxpacket = dep->endpoint.maxpacket;
@@ -926,13 +949,15 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
 				dwc->ep0_bounce_addr, transfer_size,
 				DWC3_TRBCTL_CONTROL_DATA);
 	} else {
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 		ret = usb_gadget_map_request(&dwc->gadget, &req->request,
 				dep->number);
 		if (ret) {
 			dev_dbg(dwc->dev, "failed to map request\n");
 			return;
 		}
-
+#endif
 		ret = dwc3_ep0_start_trans(dwc, dep->number, req->request.dma,
 				req->request.length, DWC3_TRBCTL_CONTROL_DATA);
 	}
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 2b6e7e0..694762d 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -36,6 +38,9 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
+
 #include <linux/kernel.h>
 #include <linux/delay.h>
 #include <linux/slab.h>
@@ -50,6 +55,16 @@
 #include <linux/usb/ch9.h>
 #include <linux/usb/gadget.h>
 
+#else
+
+#include <common.h>
+#include <linux/usb/ch9.h>
+#include <linux/usb/gadget.h>
+#include <linux/usb/linux-compat.h>
+#include <usb/lin_gadget_compat.h>
+
+#endif
+
 #include "core.h"
 #include "gadget.h"
 #include "io.h"
@@ -268,8 +283,12 @@ void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
 	if (dwc->ep0_bounced && dep->number == 0)
 		dwc->ep0_bounced = false;
-	else
+#ifndef __UBOOT__
+	/* FIX THIS DM */
+	else
 		usb_gadget_unmap_request(&dwc->gadget, &req->request,
 				req->direction);
+#endif
 
 	dev_dbg(dwc->dev, "request %p from %s completed %d/%d ===> %d\n",
 			req, dep->name, req->request.actual,
@@ -654,23 +673,44 @@ static int dwc3_gadget_ep_enable(struct usb_ep *ep,
 	dwc = dep->dwc;
 
 	if (dep->flags & DWC3_EP_ENABLED) {
+
+#ifndef __UBOOT__
 		dev_WARN_ONCE(dwc->dev, true, "%s is already enabled\n",
 				dep->name);
+#else
+		printf("%s is already enabled\n", dep->name);
+#endif
 		return 0;
 	}
 
 	switch (usb_endpoint_type(desc)) {
 	case USB_ENDPOINT_XFER_CONTROL:
+#ifndef __UBOOT__
 		strlcat(dep->name, "-control", sizeof(dep->name));
+#else
+		strcat(dep->name, "-control");
+#endif
 		break;
 	case USB_ENDPOINT_XFER_ISOC:
+#ifndef __UBOOT__
 		strlcat(dep->name, "-isoc", sizeof(dep->name));
+#else
+		strcat(dep->name, "-isoc");
+#endif
 		break;
 	case USB_ENDPOINT_XFER_BULK:
+#ifndef __UBOOT__
 		strlcat(dep->name, "-bulk", sizeof(dep->name));
+#else
+		strcat(dep->name, "-bulk");
+#endif
 		break;
 	case USB_ENDPOINT_XFER_INT:
+#ifndef __UBOOT__
 		strlcat(dep->name, "-int", sizeof(dep->name));
+#else
+		strcat(dep->name, "-int");
+#endif
 		break;
 	default:
 		dev_err(dwc->dev, "invalid endpoint transfer type\n");
@@ -701,8 +741,12 @@ static int dwc3_gadget_ep_disable(struct usb_ep *ep)
 	dwc = dep->dwc;
 
 	if (!(dep->flags & DWC3_EP_ENABLED)) {
+#ifndef __UBOOT__
 		dev_WARN_ONCE(dwc->dev, true, "%s is already disabled\n",
 				dep->name);
+#else
+		printf("%s is already disabled\n", dep->name);
+#endif
 		return 0;
 	}
 
@@ -896,6 +940,9 @@ static void dwc3_prepare_trbs(struct dwc3_ep *dep, bool starting)
 			struct scatterlist *s;
 			int		i;
 
+			printf("Fix this\n");
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 			for_each_sg(sg, s, request->num_mapped_sgs, i) {
 				unsigned chain = true;
 
@@ -923,6 +970,7 @@ static void dwc3_prepare_trbs(struct dwc3_ep *dep, bool starting)
 				if (last_one)
 					break;
 			}
+#endif
 		} else {
 			dma = req->request.dma;
 			length = req->request.length;
@@ -1002,8 +1050,11 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep, u16 cmd_param,
 		 * here and stop, unmap, free and del each of the linked
 		 * requests instead of what we do now.
 		 */
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 		usb_gadget_unmap_request(&dwc->gadget, &req->request,
 				req->direction);
+#endif
 		list_del(&req->list);
 		return ret;
 	}
@@ -1070,11 +1121,13 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
 	 * This will also avoid Host cancelling URBs due to too
 	 * many NAKs.
 	 */
+#ifndef __UBOOT__
+	/* FIX THIS DM */
 	ret = usb_gadget_map_request(&dwc->gadget, &req->request,
 			dep->direction);
 	if (ret)
 		return ret;
-
+#endif
 	list_add_tail(&req->list, &dep->request_list);
 
 	/*
@@ -1376,14 +1429,23 @@ static int dwc3_gadget_wakeup(struct usb_gadget *g)
 	}
 
 	/* poll until Link State changes to ON */
+#ifndef __UBOOT__
 	timeout = jiffies + msecs_to_jiffies(100);
-
 	while (!time_after(jiffies, timeout)) {
+#else
+	timeout = 100;
+	while (timeout != 0) {
+#endif
+
 		reg = dwc3_readl(dwc->regs, DWC3_DSTS);
 
 		/* in HS, means ON */
 		if (DWC3_DSTS_USBLNKST(reg) == DWC3_LINK_STATE_U0)
 			break;
+#ifdef __UBOOT__
+		mdelay(1);
+		timeout--;
+#endif
 	}
 
 	if (DWC3_DSTS_USBLNKST(reg) != DWC3_LINK_STATE_U0) {
@@ -1564,7 +1626,7 @@ static int dwc3_gadget_start(struct usb_gadget *g,
 	/* begin to receive SETUP packets */
 	dwc->ep0state = EP0_SETUP_PHASE;
 	dwc3_ep0_out_start(dwc);
-
+#ifndef __UBOOT__
 	irq = platform_get_irq(to_platform_device(dwc->dev), 0);
 	ret = request_threaded_irq(irq, dwc3_interrupt, dwc3_thread_interrupt,
 			IRQF_SHARED | IRQF_ONESHOT, "dwc3", dwc);
@@ -1573,7 +1635,7 @@ static int dwc3_gadget_start(struct usb_gadget *g,
 				irq, ret);
 		goto err1;
 	}
-
+#endif /* __UBOOT__ */
 	dwc3_gadget_enable_irq(dwc);
 
 	spin_unlock_irqrestore(&dwc->lock, flags);
@@ -1599,8 +1661,10 @@ static int dwc3_gadget_stop(struct usb_gadget *g,
 	spin_lock_irqsave(&dwc->lock, flags);
 
 	dwc3_gadget_disable_irq(dwc);
+#ifndef __UBOOT__
 	irq = platform_get_irq(to_platform_device(dwc->dev), 0);
 	free_irq(irq, dwc);
+#endif
 
 	__dwc3_gadget_ep_disable(dwc->eps[0]);
 	__dwc3_gadget_ep_disable(dwc->eps[1]);
@@ -2644,12 +2708,13 @@ int dwc3_gadget_init(struct dwc3 *dwc)
 		dwc3_gadget_usb3_phy_suspend(dwc, false);
 	}
 
+#ifndef __UBOOT__
 	ret = usb_add_gadget_udc(dwc->dev, &dwc->gadget);
 	if (ret) {
 		dev_err(dwc->dev, "failed to register udc\n");
 		goto err5;
 	}
-
+#endif
 	return 0;
 
 err5:
diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
index 99e6d72..23c57fe 100644
--- a/drivers/usb/dwc3/gadget.h
+++ b/drivers/usb/dwc3/gadget.h
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
index 0fa1846..0171b6c 100644
--- a/drivers/usb/dwc3/host.c
+++ b/drivers/usb/dwc3/host.c
@@ -5,6 +5,8 @@
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -35,30 +37,44 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/platform_device.h>
+#endif
 
 #include "core.h"
 
 int dwc3_host_init(struct dwc3 *dwc)
 {
+#ifndef __UBOOT__
 	struct platform_device	*xhci;
+#else
+	void *xhci;
+#endif
 	int			ret;
-
+#ifndef __UBOOT__
 	xhci = platform_device_alloc("xhci-hcd", PLATFORM_DEVID_AUTO);
 	if (!xhci) {
 		dev_err(dwc->dev, "couldn't allocate xHCI device\n");
 		ret = -ENOMEM;
 		goto err0;
 	}
-
 	dma_set_coherent_mask(&xhci->dev, dwc->dev->coherent_dma_mask);
 
 	xhci->dev.parent	= dwc->dev;
 	xhci->dev.dma_mask	= dwc->dev->dma_mask;
 	xhci->dev.dma_parms	= dwc->dev->dma_parms;
 
+#else
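+	/*
+	 * uBoot placeholder: there is no platform bus here, so just
+	 * reserve a context block (sized after struct dwc3 as a
+	 * stand-in) for the xHCI glue to use.
+	 */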
+	xhci = kzalloc(sizeof(*dwc), GFP_KERNEL);
+	if (!xhci) {
+		dev_err(dwc->dev, "not enough memory\n");
+		return -ENOMEM;
+	}
+#endif
+
 	dwc->xhci = xhci;
 
+#ifndef __UBOOT__
 	ret = platform_device_add_resources(xhci, dwc->xhci_resources,
 						DWC3_XHCI_RESOURCES_NUM);
 	if (ret) {
@@ -71,11 +87,14 @@ int dwc3_host_init(struct dwc3 *dwc)
 		dev_err(dwc->dev, "failed to register xHCI device\n");
 		goto err1;
 	}
-
+#endif
 	return 0;
 
 err1:
+#ifndef __UBOOT__
 	platform_device_put(xhci);
+#endif
 
 err0:
 	return ret;
@@ -83,5 +102,7 @@ err0:
 
 void dwc3_host_exit(struct dwc3 *dwc)
 {
+#ifndef __UBOOT__
 	platform_device_unregister(dwc->xhci);
+#endif
 }
diff --git a/drivers/usb/dwc3/io.h b/drivers/usb/dwc3/io.h
index a50f76b..2b0895a 100644
--- a/drivers/usb/dwc3/io.h
+++ b/drivers/usb/dwc3/io.h
@@ -6,6 +6,8 @@
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  *
+ * Back-ported by: Dan Murphy <dmurphy@ti.com>
+ *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
@@ -39,7 +41,12 @@
 #ifndef __DRIVERS_USB_DWC3_IO_H
 #define __DRIVERS_USB_DWC3_IO_H
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/io.h>
+#else
+#include <asm/io.h>
+#endif
 
 #include "core.h"
 
@@ -50,7 +57,11 @@ static inline u32 dwc3_readl(void __iomem *base, u32 offset)
 	 * space, see dwc3_probe in core.c.
 	 * However, the offsets are given starting from xHCI address space.
 	 */
+#ifndef __UBOOT__
 	return readl(base + (offset - DWC3_GLOBALS_REGS_START));
+#else
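+	/*
+	 * In the uBoot build, dwc->regs is assumed to hold the
+	 * controller's absolute base address (see dwc3-uboot.c), so the
+	 * offset is applied without rebasing.
+	 */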
+	return readl(base + offset);
+#endif
 }
 
 static inline void dwc3_writel(void __iomem *base, u32 offset, u32 value)
@@ -60,7 +71,11 @@ static inline void dwc3_writel(void __iomem *base, u32 offset, u32 value)
 	 * space, see dwc3_probe in core.c.
 	 * However, the offsets are given starting from xHCI address space.
 	 */
+#ifndef __UBOOT__
 	writel(value, base + (offset - DWC3_GLOBALS_REGS_START));
+#else
+	writel(value, (base + offset));
+#endif
 }
 
 #endif /* __DRIVERS_USB_DWC3_IO_H */
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 5/7] omap5: usb: Add usb otg clocks and enable
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
                   ` (3 preceding siblings ...)
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 4/7] USB: dwc3: dwc3 code adaption for uBoot Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 6/7] USB: Add xhci linux kernel host driver Dan Murphy
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC (permalink / raw)
  To: u-boot

Add and enable the USB OTG clocks.

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 arch/arm/cpu/armv7/omap5/hw_data.c      |   14 ++++++++++++++
 arch/arm/cpu/armv7/omap5/prcm-regs.c    |    1 +
 arch/arm/include/asm/arch-omap5/clock.h |    4 ++++
 arch/arm/include/asm/omap_common.h      |    1 +
 common/cmd_usb.c                        |    6 +++++-
 include/configs/omap5_common.h          |    9 +++++++++
 6 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/arm/cpu/armv7/omap5/hw_data.c b/arch/arm/cpu/armv7/omap5/hw_data.c
index 56cf1f8..7b88338 100644
--- a/arch/arm/cpu/armv7/omap5/hw_data.c
+++ b/arch/arm/cpu/armv7/omap5/hw_data.c
@@ -456,6 +456,20 @@ void enable_basic_clocks(void)
 			OPTFCLKEN_SCRM_PER_MASK);
 	setbits_le32((*prcm)->cm_wkupaon_scrm_clkctrl,
 			OPTFCLKEN_SCRM_CORE_MASK);
+
+	/* TODO: wrap this with USB defines */
+	/* Setting OCP2SCP1 register */
+	setbits_le32((*prcm)->cm_l3init_ocp2scp1_clkctrl,
+			MODULE_CLKCTRL_MODULEMODE_HW_AUTO);
+
+	/* Select USB OTG SS clock */
+	setbits_le32((*prcm)->cm_l3init_usb_otg_ss_clkctrl,
+			(MODULE_CLKCTRL_MODULEMODE_HW_AUTO |
+			 OPTFCLKEN_USB_OTG_SS_FCLK_MASK));
+
+	/* Setting l3init register */
+	setbits_le32((*prcm)->cm_l3init_clkstctrl,
+			OPTFCLKEN_USB_OTG_SS_FCLK_MASK);
+
 }
 
 void enable_basic_uboot_clocks(void)
diff --git a/arch/arm/cpu/armv7/omap5/prcm-regs.c b/arch/arm/cpu/armv7/omap5/prcm-regs.c
index e839ff5..f90da58 100644
--- a/arch/arm/cpu/armv7/omap5/prcm-regs.c
+++ b/arch/arm/cpu/armv7/omap5/prcm-regs.c
@@ -718,6 +718,7 @@ struct prcm_regs const omap5_es2_prcm = {
 	.cm_l3init_p1500_clkctrl = 0x4a009678,
 	.cm_l3init_fsusb_clkctrl = 0x4a0096d0,
 	.cm_l3init_ocp2scp1_clkctrl = 0x4a0096e0,
+	.cm_l3init_usb_otg_ss_clkctrl = 0x4a0096f0,
 
 	/* prm irqstatus regs */
 	.prm_irqstatus_mpu_2 = 0x4ae06014,
diff --git a/arch/arm/include/asm/arch-omap5/clock.h b/arch/arm/include/asm/arch-omap5/clock.h
index 4d2765d..24ef2bc 100644
--- a/arch/arm/include/asm/arch-omap5/clock.h
+++ b/arch/arm/include/asm/arch-omap5/clock.h
@@ -165,6 +165,10 @@
 /* CM_L3INIT_USBPHY_CLKCTRL */
 #define USBPHY_CLKCTRL_OPTFCLKEN_PHY_48M_MASK	8
 
+/* CM_L3INIT_USB_OTG_SS_CLKCTRL */
+#define OPTFCLKEN_USB_OTG_SS_FCLK_SHIFT		8
+#define OPTFCLKEN_USB_OTG_SS_FCLK_MASK		(1 << 8)
+
 /* CM_MPU_MPU_CLKCTRL */
 #define MPU_CLKCTRL_CLKSEL_EMIF_DIV_MODE_SHIFT	24
 #define MPU_CLKCTRL_CLKSEL_EMIF_DIV_MODE_MASK	(3 << 24)
diff --git a/arch/arm/include/asm/omap_common.h b/arch/arm/include/asm/omap_common.h
index 0dbe81b..11cc870 100644
--- a/arch/arm/include/asm/omap_common.h
+++ b/arch/arm/include/asm/omap_common.h
@@ -241,6 +241,7 @@ struct prcm_regs {
 	u32 cm_l3init_p1500_clkctrl;
 	u32 cm_l3init_fsusb_clkctrl;
 	u32 cm_l3init_ocp2scp1_clkctrl;
+	u32 cm_l3init_usb_otg_ss_clkctrl;
 
 	u32 prm_irqstatus_mpu_2;
 
diff --git a/common/cmd_usb.c b/common/cmd_usb.c
index 70e803b..816fb23 100644
--- a/common/cmd_usb.c
+++ b/common/cmd_usb.c
@@ -31,6 +31,7 @@
 #include <asm/unaligned.h>
 #include <part.h>
 #include <usb.h>
+#include <linux/usb/usb-compat.h>
 
 #ifdef CONFIG_USB_STORAGE
 static int usb_stor_curr_dev = -1; /* current device */
@@ -160,6 +161,7 @@ static void usb_display_string(struct usb_device *dev, int index)
 
 static void usb_display_desc(struct usb_device *dev)
 {
+#if 0
 	if (dev->descriptor.bDescriptorType == USB_DT_DEVICE) {
 		printf("%d: %s,  USB Revision %x.%x\n", dev->devnum,
 		usb_get_class_desc(dev->config.if_desc[0].desc.bInterfaceClass),
@@ -189,7 +191,7 @@ static void usb_display_desc(struct usb_device *dev)
 			(dev->descriptor.bcdDevice>>8) & 0xff,
 			dev->descriptor.bcdDevice & 0xff);
 	}
-
+#endif
 }
 
 static void usb_display_conf_desc(struct usb_config_descriptor *config,
@@ -350,10 +352,12 @@ static void usb_show_tree_graph(struct usb_device *dev, char *pre)
 	pre[index++] = ' ';
 	pre[index++] = has_child ? '|' : ' ';
 	pre[index] = 0;
+#if 0
 	printf(" %s (%s, %dmA)\n", usb_get_class_desc(
 					dev->config.if_desc[0].desc.bInterfaceClass),
 					portspeed(dev->speed),
 					dev->config.desc.bMaxPower * 2);
+#endif
 	if (strlen(dev->mf) || strlen(dev->prod) || strlen(dev->serial))
 		printf(" %s  %s %s %s\n", pre, dev->mf, dev->prod, dev->serial);
 	printf(" %s\n", pre);
diff --git a/include/configs/omap5_common.h b/include/configs/omap5_common.h
index b87ee42..1df553e 100644
--- a/include/configs/omap5_common.h
+++ b/include/configs/omap5_common.h
@@ -96,6 +96,15 @@
 
 #define CONFIG_SYS_CONSOLE_IS_IN_ENV
 
+/* USB */
+#define CONFIG_CMD_USB
+#define CONFIG_USB_STORAGE
+#define CONFIG_USB_DWC3
+#define CONFIG_USB_DWC3_GADGET
+#define CONFIG_USB_DWC3_DUAL_ROLE
+#define CONFIG_USB_DWC3_OMAP
+#define CONFIG_USB_DWC3_HOST
+
 /* Flash */
 #define CONFIG_SYS_NO_FLASH
 
-- 
1.7.9.5
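
One caveat with the clock hunk in this patch: setbits_le32() only requests
the module mode, it does not wait for the module to come out of idle. A
hedged sketch of the usual OMAP wait loop; the MODULE_CLKCTRL_IDLEST_*
names are an assumption based on the existing omap_common.h conventions,
and the helper name is hypothetical:

	/* Poll CLKCTRL until the module reports fully functional */
	static void wait_for_usb_otg_ss_ready(u32 clkctrl_addr)
	{
		u32 idlest;

		do {
			idlest = (readl(clkctrl_addr) &
					MODULE_CLKCTRL_IDLEST_MASK) >>
					MODULE_CLKCTRL_IDLEST_SHIFT;
		} while (idlest != MODULE_CLKCTRL_IDLEST_FULLY_FUNCTIONAL);
	}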

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 6/7] USB: Add xhci linux kernel host driver
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
                   ` (4 preceding siblings ...)
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 5/7] omap5: usb: Add usb otg clocks and enable Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 7/7] USB: Modify the xHCI to adapt to the uBoot code base Dan Murphy
  2013-07-02 21:43 ` [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Wolfgang Denk
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC (permalink / raw)
  To: u-boot

Add the xHCI host driver from the Linux kernel

Kernel base commit ID: aa4f608478acb7ed69dfcff4f3c404100b78ac49
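
For review purposes: the xhci_readl()/xhci_writel() accessors used
throughout the files below are, in the kernel version this is based on,
plain wrappers declared in xhci.h, roughly:

	/* As declared in the kernel's xhci.h of this vintage (sketch) */
	static inline unsigned int xhci_readl(const struct xhci_hcd *xhci,
			__le32 __iomem *regs)
	{
		return readl(regs);
	}

	static inline void xhci_writel(struct xhci_hcd *xhci,
			const unsigned int val, __le32 __iomem *regs)
	{
		writel(val, regs);
	}

so a later U-Boot adaptation only needs to touch these two helpers rather
than every call site.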

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 drivers/usb/host/xhci-ext-caps.h |  155 ++
 drivers/usb/host/xhci-hub.c      | 1216 ++++++++++
 drivers/usb/host/xhci-mem.c      | 2467 ++++++++++++++++++++
 drivers/usb/host/xhci-pci.c      |  356 +++
 drivers/usb/host/xhci-plat.c     |  205 ++
 drivers/usb/host/xhci-ring.c     | 4011 ++++++++++++++++++++++++++++++++
 drivers/usb/host/xhci.c          | 4769 ++++++++++++++++++++++++++++++++++++++
 drivers/usb/host/xhci.h          | 1856 +++++++++++++++
 include/linux/usb/ch11.h         |  266 +++
 include/linux/usb/hcd.h          |  672 ++++++
 10 files changed, 15973 insertions(+)
 create mode 100644 drivers/usb/host/xhci-ext-caps.h
 create mode 100644 drivers/usb/host/xhci-hub.c
 create mode 100644 drivers/usb/host/xhci-mem.c
 create mode 100644 drivers/usb/host/xhci-pci.c
 create mode 100644 drivers/usb/host/xhci-plat.c
 create mode 100644 drivers/usb/host/xhci-ring.c
 create mode 100644 drivers/usb/host/xhci.c
 create mode 100644 drivers/usb/host/xhci.h
 create mode 100644 include/linux/usb/ch11.h
 create mode 100644 include/linux/usb/hcd.h

diff --git a/drivers/usb/host/xhci-ext-caps.h b/drivers/usb/host/xhci-ext-caps.h
new file mode 100644
index 0000000..377f424
--- /dev/null
+++ b/drivers/usb/host/xhci-ext-caps.h
@@ -0,0 +1,155 @@
+/*
+ * xHCI host controller driver
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+/* Up to 16 ms to halt an HC */
+#define XHCI_MAX_HALT_USEC	(16*1000)
+/* HC not running - set to 1 when run/stop bit is cleared. */
+#define XHCI_STS_HALT		(1<<0)
+
+/* HCCPARAMS offset from PCI base address */
+#define XHCI_HCC_PARAMS_OFFSET	0x10
+/* HCCPARAMS contains the first extended capability pointer */
+#define XHCI_HCC_EXT_CAPS(p)	(((p)>>16)&0xffff)
+
+/* Command and Status registers offset from the Operational Registers address */
+#define XHCI_CMD_OFFSET		0x00
+#define XHCI_STS_OFFSET		0x04
+
+#define XHCI_MAX_EXT_CAPS		50
+
+/* Capability Register */
+/* bits 7:0 - how long is the Capabilities register */
+#define XHCI_HC_LENGTH(p)	(((p)>>00)&0x00ff)
+
+/* Extended capability register fields */
+#define XHCI_EXT_CAPS_ID(p)	(((p)>>0)&0xff)
+#define XHCI_EXT_CAPS_NEXT(p)	(((p)>>8)&0xff)
+#define	XHCI_EXT_CAPS_VAL(p)	((p)>>16)
+/* Extended capability IDs - ID 0 reserved */
+#define XHCI_EXT_CAPS_LEGACY	1
+#define XHCI_EXT_CAPS_PROTOCOL	2
+#define XHCI_EXT_CAPS_PM	3
+#define XHCI_EXT_CAPS_VIRT	4
+#define XHCI_EXT_CAPS_ROUTE	5
+/* IDs 6-9 reserved */
+#define XHCI_EXT_CAPS_DEBUG	10
+/* USB Legacy Support Capability - section 7.1.1 */
+#define XHCI_HC_BIOS_OWNED	(1 << 16)
+#define XHCI_HC_OS_OWNED	(1 << 24)
+
+/* USB Legacy Support Capability - section 7.1.1 */
+/* Add this offset, plus the value of xECP in HCCPARAMS to the base address */
+#define XHCI_LEGACY_SUPPORT_OFFSET	(0x00)
+
+/* USB Legacy Support Control and Status Register  - section 7.1.2 */
+/* Add this offset, plus the value of xECP in HCCPARAMS to the base address */
+#define XHCI_LEGACY_CONTROL_OFFSET	(0x04)
+/* bits 1:3, 5:12, and 17:19 need to be preserved; bits 21:28 should be zero */
+#define	XHCI_LEGACY_DISABLE_SMI		((0x7 << 1) + (0xff << 5) + (0x7 << 17))
+#define XHCI_LEGACY_SMI_EVENTS		(0x7 << 29)
+
+/* USB 2.0 xHCI 0.96 L1C capability - section 7.2.2.1.3.2 */
+#define XHCI_L1C               (1 << 16)
+
+/* USB 2.0 xHCI 1.0 hardware LMP capability - section 7.2.2.1.3.2 */
+#define XHCI_HLC               (1 << 19)
+
+/* command register values to disable interrupts and halt the HC */
+/* start/stop HC execution - do not write unless HC is halted*/
+#define XHCI_CMD_RUN		(1 << 0)
+/* Event Interrupt Enable - get irq when EINT bit is set in USBSTS register */
+#define XHCI_CMD_EIE		(1 << 2)
+/* Host System Error Interrupt Enable - get irq when HSEIE bit set in USBSTS */
+#define XHCI_CMD_HSEIE		(1 << 3)
+/* Enable Wrap Event - '1' means xHC generates an event when MFINDEX wraps. */
+#define XHCI_CMD_EWE		(1 << 10)
+
+#define XHCI_IRQS		(XHCI_CMD_EIE | XHCI_CMD_HSEIE | XHCI_CMD_EWE)
+
+/* true: Controller Not Ready to accept doorbell or op reg writes after reset */
+#define XHCI_STS_CNR		(1 << 11)
+
+#include <linux/io.h>
+
+/**
+ * Return the next extended capability pointer register.
+ *
+ * @base	PCI register base address.
+ *
+ * @ext_offset	Offset of the 32-bit register that contains the extended
+ * capabilites pointer.  If searching for the first extended capability, pass
+ * in XHCI_HCC_PARAMS_OFFSET.  If searching for the next extended capability,
+ * pass in the offset of the current extended capability register.
+ *
+ * Returns 0 if there is no next extended capability register or returns the register offset
+ * from the PCI registers base address.
+ */
+static inline int xhci_find_next_cap_offset(void __iomem *base, int ext_offset)
+{
+	u32 next;
+
+	next = readl(base + ext_offset);
+
+	if (ext_offset == XHCI_HCC_PARAMS_OFFSET) {
+		/* Find the first extended capability */
+		next = XHCI_HCC_EXT_CAPS(next);
+		ext_offset = 0;
+	} else {
+		/* Find the next extended capability */
+		next = XHCI_EXT_CAPS_NEXT(next);
+	}
+
+	if (!next)
+		return 0;
+	/*
+	 * Address calculation from offset of extended capabilities
+	 * (or HCCPARAMS) register - see section 5.3.6 and section 7.
+	 */
+	return ext_offset + (next << 2);
+}
+
+/**
+ * Find the offset of the extended capabilities with capability ID id.
+ *
+ * @base PCI MMIO registers base address.
+ * @ext_offset Offset from base of the first extended capability to look at,
+ * 		or the address of HCCPARAMS.
+ * @id Extended capability ID to search for.
+ *
+ * This uses an arbitrary limit of XHCI_MAX_EXT_CAPS extended capabilities
+ * to make sure that the list doesn't contain a loop.
+ */
+static inline int xhci_find_ext_cap_by_id(void __iomem *base, int ext_offset, int id)
+{
+	u32 val;
+	int limit = XHCI_MAX_EXT_CAPS;
+
+	while (ext_offset && limit > 0) {
+		val = readl(base + ext_offset);
+		if (XHCI_EXT_CAPS_ID(val) == id)
+			break;
+		ext_offset = xhci_find_next_cap_offset(base, ext_offset);
+		limit--;
+	}
+	if (limit > 0)
+		return ext_offset;
+	return 0;
+}
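
A short usage sketch for the two walkers above, following the pattern the
kernel's PCI quirk code uses to take the controller away from the BIOS
(macros as defined earlier in this header; the helper name here is
hypothetical):

	/* Sketch: claim OS ownership of the controller from the BIOS.
	 * 'base' is the mapped xHCI capability register base.
	 */
	static void xhci_claim_os_ownership(void __iomem *base)
	{
		int offset;
		u32 val;

		offset = xhci_find_next_cap_offset(base,
						   XHCI_HCC_PARAMS_OFFSET);
		offset = xhci_find_ext_cap_by_id(base, offset,
						 XHCI_EXT_CAPS_LEGACY);
		if (!offset)
			return;

		val = readl(base + offset + XHCI_LEGACY_SUPPORT_OFFSET);
		writel(val | XHCI_HC_OS_OWNED,
		       base + offset + XHCI_LEGACY_SUPPORT_OFFSET);
	}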
diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
new file mode 100644
index 0000000..187a3ec
--- /dev/null
+++ b/drivers/usb/host/xhci-hub.c
@@ -0,0 +1,1216 @@
+/*
+ * xHCI host controller driver
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/gfp.h>
+#include <asm/unaligned.h>
+
+#include "xhci.h"
+
+#define	PORT_WAKE_BITS	(PORT_WKOC_E | PORT_WKDISC_E | PORT_WKCONN_E)
+#define	PORT_RWC_BITS	(PORT_CSC | PORT_PEC | PORT_WRC | PORT_OCC | \
+			 PORT_RC | PORT_PLC | PORT_PE)
+
+/* USB 3.0 BOS descriptor and a capability descriptor, combined */
+static u8 usb_bos_descriptor [] = {
+	USB_DT_BOS_SIZE,		/*  __u8 bLength, 5 bytes */
+	USB_DT_BOS,			/*  __u8 bDescriptorType */
+	0x0F, 0x00,			/*  __le16 wTotalLength, 15 bytes */
+	0x1,				/*  __u8 bNumDeviceCaps */
+	/* First device capability */
+	USB_DT_USB_SS_CAP_SIZE,		/*  __u8 bLength, 10 bytes */
+	USB_DT_DEVICE_CAPABILITY,	/* Device Capability */
+	USB_SS_CAP_TYPE,		/* bDevCapabilityType, SUPERSPEED_USB */
+	0x00,				/* bmAttributes, LTM off by default */
+	USB_5GBPS_OPERATION, 0x00,	/* wSpeedsSupported, 5Gbps only */
+	0x03,				/* bFunctionalitySupport,
+					   USB 3.0 speed only */
+	0x00,				/* bU1DevExitLat, set later. */
+	0x00, 0x00			/* __le16 bU2DevExitLat, set later. */
+};
+
+
+static void xhci_common_hub_descriptor(struct xhci_hcd *xhci,
+		struct usb_hub_descriptor *desc, int ports)
+{
+	u16 temp;
+
+	desc->bPwrOn2PwrGood = 10;	/* xhci section 5.4.9 says 20ms max */
+	desc->bHubContrCurrent = 0;
+
+	desc->bNbrPorts = ports;
+	temp = 0;
+	/* Bits 1:0 - support per-port power switching, or power always on */
+	if (HCC_PPC(xhci->hcc_params))
+		temp |= HUB_CHAR_INDV_PORT_LPSM;
+	else
+		temp |= HUB_CHAR_NO_LPSM;
+	/* Bit  2 - root hubs are not part of a compound device */
+	/* Bits 4:3 - individual port over current protection */
+	temp |= HUB_CHAR_INDV_PORT_OCPM;
+	/* Bits 6:5 - no TTs in root ports */
+	/* Bit  7 - no port indicators */
+	desc->wHubCharacteristics = cpu_to_le16(temp);
+}
+
+/* Fill in the USB 2.0 roothub descriptor */
+static void xhci_usb2_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+		struct usb_hub_descriptor *desc)
+{
+	int ports;
+	u16 temp;
+	__u8 port_removable[(USB_MAXCHILDREN + 1 + 7) / 8];
+	u32 portsc;
+	unsigned int i;
+
+	ports = xhci->num_usb2_ports;
+
+	xhci_common_hub_descriptor(xhci, desc, ports);
+	desc->bDescriptorType = USB_DT_HUB;
+	temp = 1 + (ports / 8);
+	desc->bDescLength = USB_DT_HUB_NONVAR_SIZE + 2 * temp;
+
+	/* The Device Removable bits are reported on a byte granularity.
+	 * If the port doesn't exist within that byte, the bit is set to 0.
+	 */
+	memset(port_removable, 0, sizeof(port_removable));
+	for (i = 0; i < ports; i++) {
+		portsc = xhci_readl(xhci, xhci->usb2_ports[i]);
+		/* If a device is removable, PORTSC reports a 0, same as in the
+		 * hub descriptor DeviceRemovable bits.
+		 */
+		if (portsc & PORT_DEV_REMOVE)
+			/* This math is hairy because bit 0 of DeviceRemovable
+			 * is reserved, and bit 1 is for port 1, etc.
+			 */
+			port_removable[(i + 1) / 8] |= 1 << ((i + 1) % 8);
+	}
+
+	/* ch11.h defines a hub descriptor that has room for USB_MAXCHILDREN
+	 * ports on it.  The USB 2.0 specification says that there are two
+	 * variable length fields at the end of the hub descriptor:
+	 * DeviceRemovable and PortPwrCtrlMask.  But since we can have less than
+	 * USB_MAXCHILDREN ports, we may need to use the DeviceRemovable array
+	 * to set PortPwrCtrlMask bits.  PortPwrCtrlMask must always be set to
+	 * 0xFF, so we initialize the both arrays (DeviceRemovable and
+	 * PortPwrCtrlMask) to 0xFF.  Then we set the DeviceRemovable for each
+	 * set of ports that actually exist.
+	 */
+	memset(desc->u.hs.DeviceRemovable, 0xff,
+			sizeof(desc->u.hs.DeviceRemovable));
+	memset(desc->u.hs.PortPwrCtrlMask, 0xff,
+			sizeof(desc->u.hs.PortPwrCtrlMask));
+
+	for (i = 0; i < (ports + 1 + 7) / 8; i++)
+		memset(&desc->u.hs.DeviceRemovable[i], port_removable[i],
+				sizeof(__u8));
+}
+
+/* Fill in the USB 3.0 roothub descriptor */
+static void xhci_usb3_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+		struct usb_hub_descriptor *desc)
+{
+	int ports;
+	u16 port_removable;
+	u32 portsc;
+	unsigned int i;
+
+	ports = xhci->num_usb3_ports;
+	xhci_common_hub_descriptor(xhci, desc, ports);
+	desc->bDescriptorType = USB_DT_SS_HUB;
+	desc->bDescLength = USB_DT_SS_HUB_SIZE;
+
+	/* header decode latency should be zero for roothubs,
+	 * see section 4.23.5.2.
+	 */
+	desc->u.ss.bHubHdrDecLat = 0;
+	desc->u.ss.wHubDelay = 0;
+
+	port_removable = 0;
+	/* bit 0 is reserved, bit 1 is for port 1, etc. */
+	for (i = 0; i < ports; i++) {
+		portsc = xhci_readl(xhci, xhci->usb3_ports[i]);
+		if (portsc & PORT_DEV_REMOVE)
+			port_removable |= 1 << (i + 1);
+	}
+
+	desc->u.ss.DeviceRemovable = cpu_to_le16(port_removable);
+}
+
+static void xhci_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+		struct usb_hub_descriptor *desc)
+{
+
+	if (hcd->speed == HCD_USB3)
+		xhci_usb3_hub_descriptor(hcd, xhci, desc);
+	else
+		xhci_usb2_hub_descriptor(hcd, xhci, desc);
+
+}
+
+static unsigned int xhci_port_speed(unsigned int port_status)
+{
+	if (DEV_LOWSPEED(port_status))
+		return USB_PORT_STAT_LOW_SPEED;
+	if (DEV_HIGHSPEED(port_status))
+		return USB_PORT_STAT_HIGH_SPEED;
+	/*
+	 * FIXME: Yes, we should check for full speed, but the core uses that as
+	 * a default in portspeed() in usb/core/hub.c (which is the only place
+	 * USB_PORT_STAT_*_SPEED is used).
+	 */
+	return 0;
+}
+
+/*
+ * These bits are Read Only (RO) and should be saved and written to the
+ * registers: 0, 3, 10:13, 30
+ * connect status, over-current status, port speed, and device removable.
+ * connect status and port speed are also sticky - meaning they're in
+ * the AUX well and they aren't changed by a hot, warm, or cold reset.
+ */
+#define	XHCI_PORT_RO	((1<<0) | (1<<3) | (0xf<<10) | (1<<30))
+/*
+ * These bits are RW; writing a 0 clears the bit, writing a 1 sets the bit:
+ * bits 5:8, 9, 14:15, 25:27
+ * link state, port power, port indicator state, "wake on" enable state
+ */
+#define XHCI_PORT_RWS	((0xf<<5) | (1<<9) | (0x3<<14) | (0x7<<25))
+/*
+ * These bits are RW; writing a 1 sets the bit, writing a 0 has no effect:
+ * bit 4 (port reset)
+ */
+#define	XHCI_PORT_RW1S	((1<<4))
+/*
+ * These bits are RW; writing a 1 clears the bit, writing a 0 has no effect:
+ * bits 1, 17, 18, 19, 20, 21, 22, 23
+ * port enable/disable, and
+ * change bits: connect, PED, warm port reset changed (reserved zero for USB 2.0 ports),
+ * over-current, reset, link state, and L1 change
+ */
+#define XHCI_PORT_RW1CS	((1<<1) | (0x7f<<17))
+/*
+ * Bit 16 is RW, and writing a '1' to it causes the link state control to be
+ * latched in
+ */
+#define	XHCI_PORT_RW	((1<<16))
+/*
+ * These bits are Reserved Zero (RsvdZ) and zero should be written to them:
+ * bits 2, 24, 28:31
+ */
+#define	XHCI_PORT_RZ	((1<<2) | (1<<24) | (0xf<<28))
+
+/*
+ * Given a port state, this function returns a value that would result in the
+ * port being in the same state, if the value was written to the port status
+ * control register.
+ * Save Read Only (RO) bits and save read/write bits where
+ * writing a 0 clears the bit and writing a 1 sets the bit (RWS).
+ * For all other types (RW1S, RW1CS, RW, and RZ), writing a '0' has no effect.
+ */
+u32 xhci_port_state_to_neutral(u32 state)
+{
+	/* Save read-only status and port state */
+	return (state & XHCI_PORT_RO) | (state & XHCI_PORT_RWS);
+}
+
+/*
+ * find slot id based on port number.
+ * @port: The one-based port number from one of the two split roothubs.
+ */
+int xhci_find_slot_id_by_port(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+		u16 port)
+{
+	int slot_id;
+	int i;
+	enum usb_device_speed speed;
+
+	slot_id = 0;
+	for (i = 0; i < MAX_HC_SLOTS; i++) {
+		if (!xhci->devs[i])
+			continue;
+		speed = xhci->devs[i]->udev->speed;
+		if (((speed == USB_SPEED_SUPER) == (hcd->speed == HCD_USB3))
+				&& xhci->devs[i]->fake_port == port) {
+			slot_id = i;
+			break;
+		}
+	}
+
+	return slot_id;
+}
+
+/*
+ * Stop device.
+ * It issues a stop endpoint command for EPs 0 to 30 and waits for the last
+ * command to complete.
+ * 'suspend' is set to 1 if the suspend bit needs to be set in the command.
+ */
+static int xhci_stop_device(struct xhci_hcd *xhci, int slot_id, int suspend)
+{
+	struct xhci_virt_device *virt_dev;
+	struct xhci_command *cmd;
+	unsigned long flags;
+	int timeleft;
+	int ret;
+	int i;
+
+	ret = 0;
+	virt_dev = xhci->devs[slot_id];
+	cmd = xhci_alloc_command(xhci, false, true, GFP_NOIO);
+	if (!cmd) {
+		xhci_dbg(xhci, "Couldn't allocate command structure.\n");
+		return -ENOMEM;
+	}
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	for (i = LAST_EP_INDEX; i > 0; i--) {
+		if (virt_dev->eps[i].ring && virt_dev->eps[i].ring->dequeue)
+			xhci_queue_stop_endpoint(xhci, slot_id, i, suspend);
+	}
+	cmd->command_trb = xhci->cmd_ring->enqueue;
+	list_add_tail(&cmd->cmd_list, &virt_dev->cmd_list);
+	xhci_queue_stop_endpoint(xhci, slot_id, 0, suspend);
+	xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* Wait for last stop endpoint command to finish */
+	timeleft = wait_for_completion_interruptible_timeout(
+			cmd->completion,
+			USB_CTRL_SET_TIMEOUT);
+	if (timeleft <= 0) {
+		xhci_warn(xhci, "%s while waiting for stop endpoint command\n",
+				timeleft == 0 ? "Timeout" : "Signal");
+		spin_lock_irqsave(&xhci->lock, flags);
+		/* The timeout might have raced with the event ring handler, so
+		 * only delete from the list if the item isn't poisoned.
+		 */
+		if (cmd->cmd_list.next != LIST_POISON1)
+			list_del(&cmd->cmd_list);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		ret = -ETIME;
+		goto command_cleanup;
+	}
+
+command_cleanup:
+	xhci_free_command(xhci, cmd);
+	return ret;
+}
+
+/*
+ * Ring the device: ring all of its doorbells unconditionally.
+ */
+void xhci_ring_device(struct xhci_hcd *xhci, int slot_id)
+{
+	int i;
+
+	for (i = 0; i < LAST_EP_INDEX + 1; i++)
+		if (xhci->devs[slot_id]->eps[i].ring &&
+		    xhci->devs[slot_id]->eps[i].ring->dequeue)
+			xhci_ring_ep_doorbell(xhci, slot_id, i, 0);
+
+	return;
+}
+
+static void xhci_disable_port(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+		u16 wIndex, __le32 __iomem *addr, u32 port_status)
+{
+	/* Don't allow the USB core to disable SuperSpeed ports. */
+	if (hcd->speed == HCD_USB3) {
+		xhci_dbg(xhci, "Ignoring request to disable "
+				"SuperSpeed port.\n");
+		return;
+	}
+
+	/* Write 1 to disable the port */
+	xhci_writel(xhci, port_status | PORT_PE, addr);
+	port_status = xhci_readl(xhci, addr);
+	xhci_dbg(xhci, "disable port, actual port %d status  = 0x%x\n",
+			wIndex, port_status);
+}
+
+static void xhci_clear_port_change_bit(struct xhci_hcd *xhci, u16 wValue,
+		u16 wIndex, __le32 __iomem *addr, u32 port_status)
+{
+	char *port_change_bit;
+	u32 status;
+
+	switch (wValue) {
+	case USB_PORT_FEAT_C_RESET:
+		status = PORT_RC;
+		port_change_bit = "reset";
+		break;
+	case USB_PORT_FEAT_C_BH_PORT_RESET:
+		status = PORT_WRC;
+		port_change_bit = "warm(BH) reset";
+		break;
+	case USB_PORT_FEAT_C_CONNECTION:
+		status = PORT_CSC;
+		port_change_bit = "connect";
+		break;
+	case USB_PORT_FEAT_C_OVER_CURRENT:
+		status = PORT_OCC;
+		port_change_bit = "over-current";
+		break;
+	case USB_PORT_FEAT_C_ENABLE:
+		status = PORT_PEC;
+		port_change_bit = "enable/disable";
+		break;
+	case USB_PORT_FEAT_C_SUSPEND:
+		status = PORT_PLC;
+		port_change_bit = "suspend/resume";
+		break;
+	case USB_PORT_FEAT_C_PORT_LINK_STATE:
+		status = PORT_PLC;
+		port_change_bit = "link state";
+		break;
+	default:
+		/* Should never happen */
+		return;
+	}
+	/* Change bits are all write 1 to clear */
+	xhci_writel(xhci, port_status | status, addr);
+	port_status = xhci_readl(xhci, addr);
+	xhci_dbg(xhci, "clear port %s change, actual port %d status  = 0x%x\n",
+			port_change_bit, wIndex, port_status);
+}
+
+static int xhci_get_ports(struct usb_hcd *hcd, __le32 __iomem ***port_array)
+{
+	int max_ports;
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+
+	if (hcd->speed == HCD_USB3) {
+		max_ports = xhci->num_usb3_ports;
+		*port_array = xhci->usb3_ports;
+	} else {
+		max_ports = xhci->num_usb2_ports;
+		*port_array = xhci->usb2_ports;
+	}
+
+	return max_ports;
+}
+
+void xhci_set_link_state(struct xhci_hcd *xhci, __le32 __iomem **port_array,
+				int port_id, u32 link_state)
+{
+	u32 temp;
+
+	temp = xhci_readl(xhci, port_array[port_id]);
+	temp = xhci_port_state_to_neutral(temp);
+	temp &= ~PORT_PLS_MASK;
+	temp |= PORT_LINK_STROBE | link_state;
+	xhci_writel(xhci, temp, port_array[port_id]);
+}
+
+static void xhci_set_remote_wake_mask(struct xhci_hcd *xhci,
+		__le32 __iomem **port_array, int port_id, u16 wake_mask)
+{
+	u32 temp;
+
+	temp = xhci_readl(xhci, port_array[port_id]);
+	temp = xhci_port_state_to_neutral(temp);
+
+	if (wake_mask & USB_PORT_FEAT_REMOTE_WAKE_CONNECT)
+		temp |= PORT_WKCONN_E;
+	else
+		temp &= ~PORT_WKCONN_E;
+
+	if (wake_mask & USB_PORT_FEAT_REMOTE_WAKE_DISCONNECT)
+		temp |= PORT_WKDISC_E;
+	else
+		temp &= ~PORT_WKDISC_E;
+
+	if (wake_mask & USB_PORT_FEAT_REMOTE_WAKE_OVER_CURRENT)
+		temp |= PORT_WKOC_E;
+	else
+		temp &= ~PORT_WKOC_E;
+
+	xhci_writel(xhci, temp, port_array[port_id]);
+}
+
+/* Test and clear port RWC bit */
+void xhci_test_and_clear_bit(struct xhci_hcd *xhci, __le32 __iomem **port_array,
+				int port_id, u32 port_bit)
+{
+	u32 temp;
+
+	temp = xhci_readl(xhci, port_array[port_id]);
+	if (temp & port_bit) {
+		temp = xhci_port_state_to_neutral(temp);
+		temp |= port_bit;
+		xhci_writel(xhci, temp, port_array[port_id]);
+	}
+}
+
+/* Update the link status for a SuperSpeed port */
+static void xhci_hub_report_link_state(u32 *status, u32 status_reg)
+{
+	u32 pls = status_reg & PORT_PLS_MASK;
+
+	/* Resume is an xHCI-internal link state.
+	 * Do not report it to the usb core.
+	 */
+	if (pls == XDEV_RESUME)
+		return;
+
+	/* When the CAS bit is set, a warm reset
+	 * should be performed on the port.
+	 */
+	if (status_reg & PORT_CAS) {
+		/* The CAS bit can be set while the port is
+		 * in any link state.
+		 * Only roothubs have CAS bit, so we
+		 * pretend to be in compliance mode
+		 * unless we're already in compliance
+		 * or the inactive state.
+		 */
+		if (pls != USB_SS_PORT_LS_COMP_MOD &&
+		    pls != USB_SS_PORT_LS_SS_INACTIVE) {
+			pls = USB_SS_PORT_LS_COMP_MOD;
+		}
+		/* Return also connection bit -
+		 * hub state machine resets port
+		 * when this bit is set.
+		 */
+		pls |= USB_PORT_STAT_CONNECTION;
+	} else {
+		/*
+		 * If CAS bit isn't set but the Port is already at
+		 * Compliance Mode, fake a connection so the USB core
+		 * notices the Compliance state and resets the port.
+		 * This resolves an issue generated by the SN65LVPE502CP
+		 * in which sometimes the port enters compliance mode
+		 * caused by a delay on the host-device negotiation.
+		 */
+		if (pls == USB_SS_PORT_LS_COMP_MOD)
+			pls |= USB_PORT_STAT_CONNECTION;
+	}
+
+	/* update status field */
+	*status |= pls;
+}
+
+/*
+ * Function for Compliance Mode Quirk.
+ *
+ * This function verifies that all xHC USB3 ports have entered U0; if so,
+ * the compliance mode timer is deleted. A port won't enter
+ * compliance mode if it has previously entered U0.
+ */
+void xhci_del_comp_mod_timer(struct xhci_hcd *xhci, u32 status, u16 wIndex)
+{
+	u32 all_ports_seen_u0 = ((1 << xhci->num_usb3_ports)-1);
+	bool port_in_u0 = ((status & PORT_PLS_MASK) == XDEV_U0);
+
+	if (!(xhci->quirks & XHCI_COMP_MODE_QUIRK))
+		return;
+
+	if ((xhci->port_status_u0 != all_ports_seen_u0) && port_in_u0) {
+		xhci->port_status_u0 |= 1 << wIndex;
+		if (xhci->port_status_u0 == all_ports_seen_u0) {
+			del_timer_sync(&xhci->comp_mode_recovery_timer);
+			xhci_dbg(xhci, "All USB3 ports have entered U0 already!\n");
+			xhci_dbg(xhci, "Compliance Mode Recovery Timer Deleted.\n");
+		}
+	}
+}
+
+int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+		u16 wIndex, char *buf, u16 wLength)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	int max_ports;
+	unsigned long flags;
+	u32 temp, status;
+	int retval = 0;
+	__le32 __iomem **port_array;
+	int slot_id;
+	struct xhci_bus_state *bus_state;
+	u16 link_state = 0;
+	u16 wake_mask = 0;
+	u16 timeout = 0;
+
+	max_ports = xhci_get_ports(hcd, &port_array);
+	bus_state = &xhci->bus_state[hcd_index(hcd)];
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	switch (typeReq) {
+	case GetHubStatus:
+		/* No power source, over-current reported per port */
+		memset(buf, 0, 4);
+		break;
+	case GetHubDescriptor:
+		/* Check to make sure userspace is asking for the USB 3.0 hub
+		 * descriptor for the USB 3.0 roothub.  If not, we stall the
+		 * endpoint, like external hubs do.
+		 */
+		if (hcd->speed == HCD_USB3 &&
+				(wLength < USB_DT_SS_HUB_SIZE ||
+				 wValue != (USB_DT_SS_HUB << 8))) {
+			xhci_dbg(xhci, "Wrong hub descriptor type for "
+					"USB 3.0 roothub.\n");
+			goto error;
+		}
+		xhci_hub_descriptor(hcd, xhci,
+				(struct usb_hub_descriptor *) buf);
+		break;
+	case DeviceRequest | USB_REQ_GET_DESCRIPTOR:
+		if ((wValue & 0xff00) != (USB_DT_BOS << 8))
+			goto error;
+
+		if (hcd->speed != HCD_USB3)
+			goto error;
+
+		/* Set the U1 and U2 exit latencies. */
+		memcpy(buf, &usb_bos_descriptor,
+				USB_DT_BOS_SIZE + USB_DT_USB_SS_CAP_SIZE);
+		temp = xhci_readl(xhci, &xhci->cap_regs->hcs_params3);
+		buf[12] = HCS_U1_LATENCY(temp);
+		put_unaligned_le16(HCS_U2_LATENCY(temp), &buf[13]);
+
+		/* Indicate whether the host has LTM support. */
+		temp = xhci_readl(xhci, &xhci->cap_regs->hcc_params);
+		if (HCC_LTC(temp))
+			buf[8] |= USB_LTM_SUPPORT;
+
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return USB_DT_BOS_SIZE + USB_DT_USB_SS_CAP_SIZE;
+	case GetPortStatus:
+		if (!wIndex || wIndex > max_ports)
+			goto error;
+		wIndex--;
+		status = 0;
+		temp = xhci_readl(xhci, port_array[wIndex]);
+		if (temp == 0xffffffff) {
+			retval = -ENODEV;
+			break;
+		}
+		xhci_dbg(xhci, "get port status, actual port %d status  = 0x%x\n", wIndex, temp);
+
+		/* wPortChange bits */
+		if (temp & PORT_CSC)
+			status |= USB_PORT_STAT_C_CONNECTION << 16;
+		if (temp & PORT_PEC)
+			status |= USB_PORT_STAT_C_ENABLE << 16;
+		if ((temp & PORT_OCC))
+			status |= USB_PORT_STAT_C_OVERCURRENT << 16;
+		if ((temp & PORT_RC))
+			status |= USB_PORT_STAT_C_RESET << 16;
+		/* USB3.0 only */
+		if (hcd->speed == HCD_USB3) {
+			if ((temp & PORT_PLC))
+				status |= USB_PORT_STAT_C_LINK_STATE << 16;
+			if ((temp & PORT_WRC))
+				status |= USB_PORT_STAT_C_BH_RESET << 16;
+		}
+
+		if (hcd->speed != HCD_USB3) {
+			if ((temp & PORT_PLS_MASK) == XDEV_U3
+					&& (temp & PORT_POWER))
+				status |= USB_PORT_STAT_SUSPEND;
+		}
+		if ((temp & PORT_PLS_MASK) == XDEV_RESUME &&
+				!DEV_SUPERSPEED(temp)) {
+			if ((temp & PORT_RESET) || !(temp & PORT_PE))
+				goto error;
+			if (time_after_eq(jiffies,
+					bus_state->resume_done[wIndex])) {
+				xhci_dbg(xhci, "Resume USB2 port %d\n",
+					wIndex + 1);
+				bus_state->resume_done[wIndex] = 0;
+				clear_bit(wIndex, &bus_state->resuming_ports);
+				xhci_set_link_state(xhci, port_array, wIndex,
+							XDEV_U0);
+				xhci_dbg(xhci, "set port %d resume\n",
+					wIndex + 1);
+				slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+								 wIndex + 1);
+				if (!slot_id) {
+					xhci_dbg(xhci, "slot_id is zero\n");
+					goto error;
+				}
+				xhci_ring_device(xhci, slot_id);
+				bus_state->port_c_suspend |= 1 << wIndex;
+				bus_state->suspended_ports &= ~(1 << wIndex);
+			} else {
+				/*
+				 * The resume has been signaling for less than
+				 * 20ms. Report the port status as SUSPEND,
+				 * let the usbcore check port status again
+				 * and clear resume signaling later.
+				 */
+				status |= USB_PORT_STAT_SUSPEND;
+			}
+		}
+		if ((temp & PORT_PLS_MASK) == XDEV_U0
+			&& (temp & PORT_POWER)
+			&& (bus_state->suspended_ports & (1 << wIndex))) {
+			bus_state->suspended_ports &= ~(1 << wIndex);
+			if (hcd->speed != HCD_USB3)
+				bus_state->port_c_suspend |= 1 << wIndex;
+		}
+		if (temp & PORT_CONNECT) {
+			status |= USB_PORT_STAT_CONNECTION;
+			status |= xhci_port_speed(temp);
+		}
+		if (temp & PORT_PE)
+			status |= USB_PORT_STAT_ENABLE;
+		if (temp & PORT_OC)
+			status |= USB_PORT_STAT_OVERCURRENT;
+		if (temp & PORT_RESET)
+			status |= USB_PORT_STAT_RESET;
+		if (temp & PORT_POWER) {
+			if (hcd->speed == HCD_USB3)
+				status |= USB_SS_PORT_STAT_POWER;
+			else
+				status |= USB_PORT_STAT_POWER;
+		}
+		/* Update Port Link State for super speed ports*/
+		if (hcd->speed == HCD_USB3) {
+			xhci_hub_report_link_state(&status, temp);
+			/*
+			 * Verify if all USB3 Ports Have entered U0 already.
+			 * Delete Compliance Mode Timer if so.
+			 */
+			xhci_del_comp_mod_timer(xhci, temp, wIndex);
+		}
+		if (bus_state->port_c_suspend & (1 << wIndex))
+			status |= 1 << USB_PORT_FEAT_C_SUSPEND;
+		xhci_dbg(xhci, "Get port status returned 0x%x\n", status);
+		put_unaligned(cpu_to_le32(status), (__le32 *) buf);
+		break;
+	case SetPortFeature:
+		if (wValue == USB_PORT_FEAT_LINK_STATE)
+			link_state = (wIndex & 0xff00) >> 3;
+		if (wValue == USB_PORT_FEAT_REMOTE_WAKE_MASK)
+			wake_mask = wIndex & 0xff00;
+		/* The MSB of wIndex is the U1/U2 timeout */
+		timeout = (wIndex & 0xff00) >> 8;
+		wIndex &= 0xff;
+		if (!wIndex || wIndex > max_ports)
+			goto error;
+		wIndex--;
+		temp = xhci_readl(xhci, port_array[wIndex]);
+		if (temp == 0xffffffff) {
+			retval = -ENODEV;
+			break;
+		}
+		temp = xhci_port_state_to_neutral(temp);
+		/* FIXME: What new port features do we need to support? */
+		switch (wValue) {
+		case USB_PORT_FEAT_SUSPEND:
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			if ((temp & PORT_PLS_MASK) != XDEV_U0) {
+				/* Resume the port to U0 first */
+				xhci_set_link_state(xhci, port_array, wIndex,
+							XDEV_U0);
+				spin_unlock_irqrestore(&xhci->lock, flags);
+				msleep(10);
+				spin_lock_irqsave(&xhci->lock, flags);
+			}
+			/* Per the spec, software should not attempt to suspend
+			 * a port unless the port reports that it is in the
+			 * enabled (PED = '1', PLS < '3') state.
+			 */
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			if ((temp & PORT_PE) == 0 || (temp & PORT_RESET)
+				|| (temp & PORT_PLS_MASK) >= XDEV_U3) {
+				xhci_warn(xhci, "USB core suspending device "
+					  "not in U0/U1/U2.\n");
+				goto error;
+			}
+
+			slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+					wIndex + 1);
+			if (!slot_id) {
+				xhci_warn(xhci, "slot_id is zero\n");
+				goto error;
+			}
+			/* unlock to execute stop endpoint commands */
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			xhci_stop_device(xhci, slot_id, 1);
+			spin_lock_irqsave(&xhci->lock, flags);
+
+			xhci_set_link_state(xhci, port_array, wIndex, XDEV_U3);
+
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			msleep(10); /* wait device to enter */
+			spin_lock_irqsave(&xhci->lock, flags);
+
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			bus_state->suspended_ports |= 1 << wIndex;
+			break;
+		case USB_PORT_FEAT_LINK_STATE:
+			temp = xhci_readl(xhci, port_array[wIndex]);
+
+			/* Disable port */
+			if (link_state == USB_SS_PORT_LS_SS_DISABLED) {
+				xhci_dbg(xhci, "Disable port %d\n", wIndex);
+				temp = xhci_port_state_to_neutral(temp);
+				/*
+				 * Clear all change bits, so that we get a new
+				 * connection event.
+				 */
+				temp |= PORT_CSC | PORT_PEC | PORT_WRC |
+					PORT_OCC | PORT_RC | PORT_PLC |
+					PORT_CEC;
+				xhci_writel(xhci, temp | PORT_PE,
+					port_array[wIndex]);
+				temp = xhci_readl(xhci, port_array[wIndex]);
+				break;
+			}
+
+			/* Put link in RxDetect (enable port) */
+			if (link_state == USB_SS_PORT_LS_RX_DETECT) {
+				xhci_dbg(xhci, "Enable port %d\n", wIndex);
+				xhci_set_link_state(xhci, port_array, wIndex,
+						link_state);
+				temp = xhci_readl(xhci, port_array[wIndex]);
+				break;
+			}
+
+			/* Software should not attempt to set
+			 * port link state above '3' (U3) and the port
+			 * must be enabled.
+			 */
+			if ((temp & PORT_PE) == 0 ||
+				(link_state > USB_SS_PORT_LS_U3)) {
+				xhci_warn(xhci, "Cannot set link state.\n");
+				goto error;
+			}
+
+			if (link_state == USB_SS_PORT_LS_U3) {
+				slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+						wIndex + 1);
+				if (slot_id) {
+					/* unlock to execute stop endpoint
+					 * commands */
+					spin_unlock_irqrestore(&xhci->lock,
+								flags);
+					xhci_stop_device(xhci, slot_id, 1);
+					spin_lock_irqsave(&xhci->lock, flags);
+				}
+			}
+
+			xhci_set_link_state(xhci, port_array, wIndex,
+						link_state);
+
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			msleep(20); /* wait device to enter */
+			spin_lock_irqsave(&xhci->lock, flags);
+
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			if (link_state == USB_SS_PORT_LS_U3)
+				bus_state->suspended_ports |= 1 << wIndex;
+			break;
+		case USB_PORT_FEAT_POWER:
+			/*
+			 * Turn on ports, even if there isn't per-port switching.
+			 * HC will report connect events even before this is set.
+			 * However, khubd will ignore the roothub events until
+			 * the roothub is registered.
+			 */
+			xhci_writel(xhci, temp | PORT_POWER,
+					port_array[wIndex]);
+
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			xhci_dbg(xhci, "set port power, actual port %d status  = 0x%x\n", wIndex, temp);
+
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			temp = usb_acpi_power_manageable(hcd->self.root_hub,
+					wIndex);
+			if (temp)
+				usb_acpi_set_power_state(hcd->self.root_hub,
+						wIndex, true);
+			spin_lock_irqsave(&xhci->lock, flags);
+			break;
+		case USB_PORT_FEAT_RESET:
+			temp = (temp | PORT_RESET);
+			xhci_writel(xhci, temp, port_array[wIndex]);
+
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			xhci_dbg(xhci, "set port reset, actual port %d status  = 0x%x\n", wIndex, temp);
+			break;
+		case USB_PORT_FEAT_REMOTE_WAKE_MASK:
+			xhci_set_remote_wake_mask(xhci, port_array,
+					wIndex, wake_mask);
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			xhci_dbg(xhci, "set port remote wake mask, "
+					"actual port %d status  = 0x%x\n",
+					wIndex, temp);
+			break;
+		case USB_PORT_FEAT_BH_PORT_RESET:
+			temp |= PORT_WR;
+			xhci_writel(xhci, temp, port_array[wIndex]);
+
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			break;
+		case USB_PORT_FEAT_U1_TIMEOUT:
+			if (hcd->speed != HCD_USB3)
+				goto error;
+			temp = xhci_readl(xhci, port_array[wIndex] + 1);
+			temp &= ~PORT_U1_TIMEOUT_MASK;
+			temp |= PORT_U1_TIMEOUT(timeout);
+			xhci_writel(xhci, temp, port_array[wIndex] + 1);
+			break;
+		case USB_PORT_FEAT_U2_TIMEOUT:
+			if (hcd->speed != HCD_USB3)
+				goto error;
+			temp = xhci_readl(xhci, port_array[wIndex] + 1);
+			temp &= ~PORT_U2_TIMEOUT_MASK;
+			temp |= PORT_U2_TIMEOUT(timeout);
+			xhci_writel(xhci, temp, port_array[wIndex] + 1);
+			break;
+		default:
+			goto error;
+		}
+		/* unblock any posted writes */
+		temp = xhci_readl(xhci, port_array[wIndex]);
+		break;
+	case ClearPortFeature:
+		if (!wIndex || wIndex > max_ports)
+			goto error;
+		wIndex--;
+		temp = xhci_readl(xhci, port_array[wIndex]);
+		if (temp == 0xffffffff) {
+			retval = -ENODEV;
+			break;
+		}
+		/* FIXME: What new port features do we need to support? */
+		temp = xhci_port_state_to_neutral(temp);
+		switch (wValue) {
+		case USB_PORT_FEAT_SUSPEND:
+			temp = xhci_readl(xhci, port_array[wIndex]);
+			xhci_dbg(xhci, "clear USB_PORT_FEAT_SUSPEND\n");
+			xhci_dbg(xhci, "PORTSC %04x\n", temp);
+			if (temp & PORT_RESET)
+				goto error;
+			if ((temp & PORT_PLS_MASK) == XDEV_U3) {
+				if ((temp & PORT_PE) == 0)
+					goto error;
+
+				xhci_set_link_state(xhci, port_array, wIndex,
+							XDEV_RESUME);
+				spin_unlock_irqrestore(&xhci->lock, flags);
+				msleep(20);
+				spin_lock_irqsave(&xhci->lock, flags);
+				xhci_set_link_state(xhci, port_array, wIndex,
+							XDEV_U0);
+			}
+			bus_state->port_c_suspend |= 1 << wIndex;
+
+			slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+					wIndex + 1);
+			if (!slot_id) {
+				xhci_dbg(xhci, "slot_id is zero\n");
+				goto error;
+			}
+			xhci_ring_device(xhci, slot_id);
+			break;
+		case USB_PORT_FEAT_C_SUSPEND:
+			bus_state->port_c_suspend &= ~(1 << wIndex);
+		case USB_PORT_FEAT_C_RESET:
+		case USB_PORT_FEAT_C_BH_PORT_RESET:
+		case USB_PORT_FEAT_C_CONNECTION:
+		case USB_PORT_FEAT_C_OVER_CURRENT:
+		case USB_PORT_FEAT_C_ENABLE:
+		case USB_PORT_FEAT_C_PORT_LINK_STATE:
+			xhci_clear_port_change_bit(xhci, wValue, wIndex,
+					port_array[wIndex], temp);
+			break;
+		case USB_PORT_FEAT_ENABLE:
+			xhci_disable_port(hcd, xhci, wIndex,
+					port_array[wIndex], temp);
+			break;
+		case USB_PORT_FEAT_POWER:
+			xhci_writel(xhci, temp & ~PORT_POWER,
+				port_array[wIndex]);
+
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			temp = usb_acpi_power_manageable(hcd->self.root_hub,
+					wIndex);
+			if (temp)
+				usb_acpi_set_power_state(hcd->self.root_hub,
+						wIndex, false);
+			spin_lock_irqsave(&xhci->lock, flags);
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+error:
+		/* "stall" on error */
+		retval = -EPIPE;
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return retval;
+}
+
+/*
+ * Returns 0 if the status hasn't changed, or the number of bytes in buf.
+ * Ports are 0-indexed from the HCD point of view,
+ * and 1-indexed from the USB core point of view.
+ *
+ * Note that the status change bits will be cleared as soon as a port status
+ * change event is generated, so we use the saved status from that event.
+ */
+int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+{
+	unsigned long flags;
+	u32 temp, status;
+	u32 mask;
+	int i, retval;
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	int max_ports;
+	__le32 __iomem **port_array;
+	struct xhci_bus_state *bus_state;
+	bool reset_change = false;
+
+	max_ports = xhci_get_ports(hcd, &port_array);
+	bus_state = &xhci->bus_state[hcd_index(hcd)];
+
+	/* Initial status is no changes */
+	retval = (max_ports + 8) / 8;
+	memset(buf, 0, retval);
+
+	/*
+	 * Inform the usbcore about resume-in-progress by returning
+	 * a non-zero value even if there are no status changes.
+	 */
+	status = bus_state->resuming_ports;
+
+	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	/* For each port, did anything change?  If so, set that bit in buf. */
+	for (i = 0; i < max_ports; i++) {
+		temp = xhci_readl(xhci, port_array[i]);
+		if (temp == 0xffffffff) {
+			retval = -ENODEV;
+			break;
+		}
+		if ((temp & mask) != 0 ||
+			(bus_state->port_c_suspend & 1 << i) ||
+			(bus_state->resume_done[i] && time_after_eq(
+			    jiffies, bus_state->resume_done[i]))) {
+			buf[(i + 1) / 8] |= 1 << (i + 1) % 8;
+			status = 1;
+		}
+		if ((temp & PORT_RC))
+			reset_change = true;
+	}
+	if (!status && !reset_change) {
+		xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);
+		clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return status ? retval : 0;
+}
+
+#ifdef CONFIG_PM
+
+int xhci_bus_suspend(struct usb_hcd *hcd)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	int max_ports, port_index;
+	__le32 __iomem **port_array;
+	struct xhci_bus_state *bus_state;
+	unsigned long flags;
+
+	max_ports = xhci_get_ports(hcd, &port_array);
+	bus_state = &xhci->bus_state[hcd_index(hcd)];
+
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	if (hcd->self.root_hub->do_remote_wakeup) {
+		if (bus_state->resuming_ports) {
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			xhci_dbg(xhci, "suspend failed because "
+						"a port is resuming\n");
+			return -EBUSY;
+		}
+	}
+
+	port_index = max_ports;
+	bus_state->bus_suspended = 0;
+	while (port_index--) {
+		/* suspend the port if the port is not suspended */
+		u32 t1, t2;
+		int slot_id;
+
+		t1 = xhci_readl(xhci, port_array[port_index]);
+		t2 = xhci_port_state_to_neutral(t1);
+
+		if ((t1 & PORT_PE) && !(t1 & PORT_PLS_MASK)) {
+			xhci_dbg(xhci, "port %d not suspended\n", port_index);
+			slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+					port_index + 1);
+			if (slot_id) {
+				spin_unlock_irqrestore(&xhci->lock, flags);
+				xhci_stop_device(xhci, slot_id, 1);
+				spin_lock_irqsave(&xhci->lock, flags);
+			}
+			t2 &= ~PORT_PLS_MASK;
+			t2 |= PORT_LINK_STROBE | XDEV_U3;
+			set_bit(port_index, &bus_state->bus_suspended);
+		}
+		/* USB core sets remote wake mask for USB 3.0 hubs,
+		 * including the USB 3.0 roothub, but only if CONFIG_PM_RUNTIME
+		 * is enabled, so also enable remote wake here.
+		 */
+		if (hcd->self.root_hub->do_remote_wakeup) {
+			if (t1 & PORT_CONNECT) {
+				t2 |= PORT_WKOC_E | PORT_WKDISC_E;
+				t2 &= ~PORT_WKCONN_E;
+			} else {
+				t2 |= PORT_WKOC_E | PORT_WKCONN_E;
+				t2 &= ~PORT_WKDISC_E;
+			}
+		} else
+			t2 &= ~PORT_WAKE_BITS;
+
+		t1 = xhci_port_state_to_neutral(t1);
+		if (t1 != t2)
+			xhci_writel(xhci, t2, port_array[port_index]);
+
+		if (hcd->speed != HCD_USB3) {
+			/* enable remote wake up for USB 2.0 */
+			__le32 __iomem *addr;
+			u32 tmp;
+
+			/* Add one to the port status register address to get
+			 * the port power control register address.
+			 */
+			addr = port_array[port_index] + 1;
+			tmp = xhci_readl(xhci, addr);
+			tmp |= PORT_RWE;
+			xhci_writel(xhci, tmp, addr);
+		}
+	}
+	hcd->state = HC_STATE_SUSPENDED;
+	bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return 0;
+}
+
+int xhci_bus_resume(struct usb_hcd *hcd)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	int max_ports, port_index;
+	__le32 __iomem **port_array;
+	struct xhci_bus_state *bus_state;
+	u32 temp;
+	unsigned long flags;
+
+	max_ports = xhci_get_ports(hcd, &port_array);
+	bus_state = &xhci->bus_state[hcd_index(hcd)];
+
+	if (time_before(jiffies, bus_state->next_statechange))
+		msleep(5);
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	if (!HCD_HW_ACCESSIBLE(hcd)) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return -ESHUTDOWN;
+	}
+
+	/* delay the irqs */
+	temp = xhci_readl(xhci, &xhci->op_regs->command);
+	temp &= ~CMD_EIE;
+	xhci_writel(xhci, temp, &xhci->op_regs->command);
+
+	port_index = max_ports;
+	while (port_index--) {
+		/* Check whether the port needs to be resumed. If so,
+		   resume the port and disable remote wakeup. */
+		u32 temp;
+		int slot_id;
+
+		temp = xhci_readl(xhci, port_array[port_index]);
+		if (DEV_SUPERSPEED(temp))
+			temp &= ~(PORT_RWC_BITS | PORT_CEC | PORT_WAKE_BITS);
+		else
+			temp &= ~(PORT_RWC_BITS | PORT_WAKE_BITS);
+		if (test_bit(port_index, &bus_state->bus_suspended) &&
+		    (temp & PORT_PLS_MASK)) {
+			if (DEV_SUPERSPEED(temp)) {
+				xhci_set_link_state(xhci, port_array,
+							port_index, XDEV_U0);
+			} else {
+				xhci_set_link_state(xhci, port_array,
+						port_index, XDEV_RESUME);
+
+				spin_unlock_irqrestore(&xhci->lock, flags);
+				msleep(20);
+				spin_lock_irqsave(&xhci->lock, flags);
+
+				xhci_set_link_state(xhci, port_array,
+							port_index, XDEV_U0);
+			}
+			/* wait for the port to enter U0 and report port link
+			 * state change.
+			 */
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			msleep(20);
+			spin_lock_irqsave(&xhci->lock, flags);
+
+			/* Clear PLC */
+			xhci_test_and_clear_bit(xhci, port_array, port_index,
+						PORT_PLC);
+
+			slot_id = xhci_find_slot_id_by_port(hcd,
+					xhci, port_index + 1);
+			if (slot_id)
+				xhci_ring_device(xhci, slot_id);
+		} else
+			xhci_writel(xhci, temp, port_array[port_index]);
+
+		if (hcd->speed != HCD_USB3) {
+			/* disable remote wake up for USB 2.0 */
+			__le32 __iomem *addr;
+			u32 tmp;
+
+			/* Add one to the port status register address to get
+			 * the port power control register address.
+			 */
+			addr = port_array[port_index] + 1;
+			tmp = xhci_readl(xhci, addr);
+			tmp &= ~PORT_RWE;
+			xhci_writel(xhci, tmp, addr);
+		}
+	}
+
+	(void) xhci_readl(xhci, &xhci->op_regs->command);
+
+	bus_state->next_statechange = jiffies + msecs_to_jiffies(5);
+	/* re-enable irqs */
+	temp = xhci_readl(xhci, &xhci->op_regs->command);
+	temp |= CMD_EIE;
+	xhci_writel(xhci, temp, &xhci->op_regs->command);
+	temp = xhci_readl(xhci, &xhci->op_regs->command);
+
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return 0;
+}
+
+#endif	/* CONFIG_PM */
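
Condensing the GetPortStatus path of xhci_hub_control() above: the raw
PORTSC value is folded into the 32-bit word the USB core expects, with
wPortStatus in the low half and wPortChange in the high half. A minimal
sketch for the two most common bits (PORTSC field masks as defined in
xhci.h):

	/* Inside the GetPortStatus case, after range-checking wIndex */
	u32 portsc = xhci_readl(xhci, port_array[wIndex]);
	u32 status = 0;

	if (portsc & PORT_CONNECT)	/* current connect status */
		status |= USB_PORT_STAT_CONNECTION;
	if (portsc & PORT_CSC)		/* connect status change */
		status |= USB_PORT_STAT_C_CONNECTION << 16;

	put_unaligned(cpu_to_le32(status), (__le32 *) buf);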
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
new file mode 100644
index 0000000..2cfc465
--- /dev/null
+++ b/drivers/usb/host/xhci-mem.c
@@ -0,0 +1,2467 @@
+/*
+ * xHCI host controller driver
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/usb.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/dmapool.h>
+
+#include "xhci.h"
+
+/*
+ * Allocates a generic ring segment from the ring pool, sets the dma address,
+ * initializes the segment to zero, and sets the private next pointer to NULL.
+ *
+ * Section 4.11.1.1:
+ * "All components of all Command and Transfer TRBs shall be initialized to '0'"
+ */
+static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci,
+					unsigned int cycle_state, gfp_t flags)
+{
+	struct xhci_segment *seg;
+	dma_addr_t	dma;
+	int		i;
+
+	seg = kzalloc(sizeof *seg, flags);
+	if (!seg)
+		return NULL;
+
+	seg->trbs = dma_pool_alloc(xhci->segment_pool, flags, &dma);
+	if (!seg->trbs) {
+		kfree(seg);
+		return NULL;
+	}
+
+	memset(seg->trbs, 0, TRB_SEGMENT_SIZE);
+	/* If the cycle state is 0, set the cycle bit to 1 for all the TRBs */
+	if (cycle_state == 0) {
+		for (i = 0; i < TRBS_PER_SEGMENT; i++)
+			seg->trbs[i].link.control |= TRB_CYCLE;
+	}
+	seg->dma = dma;
+	seg->next = NULL;
+
+	return seg;
+}
+
+static void xhci_segment_free(struct xhci_hcd *xhci, struct xhci_segment *seg)
+{
+	if (seg->trbs) {
+		dma_pool_free(xhci->segment_pool, seg->trbs, seg->dma);
+		seg->trbs = NULL;
+	}
+	kfree(seg);
+}
+
+static void xhci_free_segments_for_ring(struct xhci_hcd *xhci,
+				struct xhci_segment *first)
+{
+	struct xhci_segment *seg;
+
+	seg = first->next;
+	while (seg != first) {
+		struct xhci_segment *next = seg->next;
+		xhci_segment_free(xhci, seg);
+		seg = next;
+	}
+	xhci_segment_free(xhci, first);
+}
+
+/*
+ * Make the prev segment point to the next segment.
+ *
+ * Change the last TRB in the prev segment to be a Link TRB which points to the
+ * DMA address of the next segment.  The caller needs to set any Link TRB
+ * related flags, such as End TRB, Toggle Cycle, and no snoop.
+ */
+static void xhci_link_segments(struct xhci_hcd *xhci, struct xhci_segment *prev,
+		struct xhci_segment *next, enum xhci_ring_type type)
+{
+	u32 val;
+
+	if (!prev || !next)
+		return;
+	prev->next = next;
+	if (type != TYPE_EVENT) {
+		prev->trbs[TRBS_PER_SEGMENT-1].link.segment_ptr =
+			cpu_to_le64(next->dma);
+
+		/* Set the last TRB in the segment to have a TRB type ID of Link TRB */
+		val = le32_to_cpu(prev->trbs[TRBS_PER_SEGMENT-1].link.control);
+		val &= ~TRB_TYPE_BITMASK;
+		val |= TRB_TYPE(TRB_LINK);
+		/* Always set the chain bit with 0.95 hardware */
+		/* Set chain bit for isoc rings on AMD 0.96 host */
+		if (xhci_link_trb_quirk(xhci) ||
+				(type == TYPE_ISOC &&
+				 (xhci->quirks & XHCI_AMD_0x96_HOST)))
+			val |= TRB_CHAIN;
+		prev->trbs[TRBS_PER_SEGMENT-1].link.control = cpu_to_le32(val);
+	}
+}
+
+/*
+ * Link the ring to the new segments.
+ * Set Toggle Cycle for the new ring if needed.
+ */
+static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring,
+		struct xhci_segment *first, struct xhci_segment *last,
+		unsigned int num_segs)
+{
+	struct xhci_segment *next;
+
+	if (!ring || !first || !last)
+		return;
+
+	next = ring->enq_seg->next;
+	xhci_link_segments(xhci, ring->enq_seg, first, ring->type);
+	xhci_link_segments(xhci, last, next, ring->type);
+	ring->num_segs += num_segs;
+	ring->num_trbs_free += (TRBS_PER_SEGMENT - 1) * num_segs;
+
+	if (ring->type != TYPE_EVENT && ring->enq_seg == ring->last_seg) {
+		ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control
+			&= ~cpu_to_le32(LINK_TOGGLE);
+		last->trbs[TRBS_PER_SEGMENT-1].link.control
+			|= cpu_to_le32(LINK_TOGGLE);
+		ring->last_seg = last;
+	}
+}
+
+/* XXX: Do we need the hcd structure in all these functions? */
+void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring)
+{
+	if (!ring)
+		return;
+
+	if (ring->first_seg)
+		xhci_free_segments_for_ring(xhci, ring->first_seg);
+
+	kfree(ring);
+}
+
+static void xhci_initialize_ring_info(struct xhci_ring *ring,
+					unsigned int cycle_state)
+{
+	/* The ring is empty, so the enqueue pointer == dequeue pointer */
+	ring->enqueue = ring->first_seg->trbs;
+	ring->enq_seg = ring->first_seg;
+	ring->dequeue = ring->enqueue;
+	ring->deq_seg = ring->first_seg;
+	/* The ring is initialized to 0. The producer must write 1 to the cycle
+	 * bit to handover ownership of the TRB, so PCS = 1.  The consumer must
+	 * compare CCS to the cycle bit to check ownership, so CCS = 1.
+	 *
+	 * New rings are initialized with cycle state equal to 1; if we are
+	 * handling ring expansion, set the cycle state equal to the old ring.
+	 */
+	ring->cycle_state = cycle_state;
+	/* Not necessary for new rings, but needed for re-initialized rings */
+	ring->enq_updates = 0;
+	ring->deq_updates = 0;
+
+	/*
+	 * Each segment has a link TRB, and leave an extra TRB for SW
+	 * accounting purpose
+	 */
+	ring->num_trbs_free = ring->num_segs * (TRBS_PER_SEGMENT - 1) - 1;
+}
+
+/* Allocate segments and link them for a ring */
+static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci,
+		struct xhci_segment **first, struct xhci_segment **last,
+		unsigned int num_segs, unsigned int cycle_state,
+		enum xhci_ring_type type, gfp_t flags)
+{
+	struct xhci_segment *prev;
+
+	prev = xhci_segment_alloc(xhci, cycle_state, flags);
+	if (!prev)
+		return -ENOMEM;
+	num_segs--;
+
+	*first = prev;
+	while (num_segs > 0) {
+		struct xhci_segment	*next;
+
+		next = xhci_segment_alloc(xhci, cycle_state, flags);
+		if (!next) {
+			prev = *first;
+			while (prev) {
+				next = prev->next;
+				xhci_segment_free(xhci, prev);
+				prev = next;
+			}
+			return -ENOMEM;
+		}
+		xhci_link_segments(xhci, prev, next, type);
+
+		prev = next;
+		num_segs--;
+	}
+	xhci_link_segments(xhci, prev, *first, type);
+	*last = prev;
+
+	return 0;
+}
+
+/**
+ * Create a new ring with zero or more segments.
+ *
+ * Link each segment together into a ring.
+ * Set the end flag and the cycle toggle bit on the last segment.
+ * See section 4.9.1 and figures 15 and 16.
+ */
+static struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci,
+		unsigned int num_segs, unsigned int cycle_state,
+		enum xhci_ring_type type, gfp_t flags)
+{
+	struct xhci_ring	*ring;
+	int ret;
+
+	ring = kzalloc(sizeof *(ring), flags);
+	if (!ring)
+		return NULL;
+
+	ring->num_segs = num_segs;
+	INIT_LIST_HEAD(&ring->td_list);
+	ring->type = type;
+	if (num_segs == 0)
+		return ring;
+
+	ret = xhci_alloc_segments_for_ring(xhci, &ring->first_seg,
+			&ring->last_seg, num_segs, cycle_state, type, flags);
+	if (ret)
+		goto fail;
+
+	/* Only event ring does not use link TRB */
+	if (type != TYPE_EVENT) {
+		/* See section 4.9.2.1 and 6.4.4.1 */
+		ring->last_seg->trbs[TRBS_PER_SEGMENT - 1].link.control |=
+			cpu_to_le32(LINK_TOGGLE);
+	}
+	xhci_initialize_ring_info(ring, cycle_state);
+	return ring;
+
+fail:
+	kfree(ring);
+	return NULL;
+}
+
+void xhci_free_or_cache_endpoint_ring(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		unsigned int ep_index)
+{
+	int rings_cached;
+
+	rings_cached = virt_dev->num_rings_cached;
+	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
+		virt_dev->ring_cache[rings_cached] =
+			virt_dev->eps[ep_index].ring;
+		virt_dev->num_rings_cached++;
+		xhci_dbg(xhci, "Cached old ring, "
+				"%d ring%s cached\n",
+				virt_dev->num_rings_cached,
+				(virt_dev->num_rings_cached > 1) ? "s" : "");
+	} else {
+		xhci_ring_free(xhci, virt_dev->eps[ep_index].ring);
+		xhci_dbg(xhci, "Ring cache full (%d rings), "
+				"freeing ring\n",
+				virt_dev->num_rings_cached);
+	}
+	virt_dev->eps[ep_index].ring = NULL;
+}
+
+/* Zero an endpoint ring (except for link TRBs) and move the enqueue and dequeue
+ * pointers to the beginning of the ring.
+ */
+static void xhci_reinit_cached_ring(struct xhci_hcd *xhci,
+			struct xhci_ring *ring, unsigned int cycle_state,
+			enum xhci_ring_type type)
+{
+	struct xhci_segment	*seg = ring->first_seg;
+	int i;
+
+	do {
+		memset(seg->trbs, 0,
+				sizeof(union xhci_trb)*TRBS_PER_SEGMENT);
+		if (cycle_state == 0) {
+			for (i = 0; i < TRBS_PER_SEGMENT; i++)
+				seg->trbs[i].link.control |= TRB_CYCLE;
+		}
+		/* All endpoint rings have link TRBs */
+		xhci_link_segments(xhci, seg, seg->next, type);
+		seg = seg->next;
+	} while (seg != ring->first_seg);
+	ring->type = type;
+	xhci_initialize_ring_info(ring, cycle_state);
+	/* td list should be empty since all URBs have been cancelled,
+	 * but just in case...
+	 */
+	INIT_LIST_HEAD(&ring->td_list);
+}
+
+/*
+ * Expand an existing ring.
+ * Look for a cached ring or allocate a new ring with the same number of segments,
+ * and link the two rings.
+ */
+int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
+				unsigned int num_trbs, gfp_t flags)
+{
+	struct xhci_segment	*first;
+	struct xhci_segment	*last;
+	unsigned int		num_segs;
+	unsigned int		num_segs_needed;
+	int			ret;
+
+	num_segs_needed = (num_trbs + (TRBS_PER_SEGMENT - 1) - 1) /
+				(TRBS_PER_SEGMENT - 1);
+
+	/* Allocate the number of segments we need, or double the ring size */
+	num_segs = ring->num_segs > num_segs_needed ?
+			ring->num_segs : num_segs_needed;
+
+	ret = xhci_alloc_segments_for_ring(xhci, &first, &last,
+			num_segs, ring->cycle_state, ring->type, flags);
+	if (ret)
+		return -ENOMEM;
+
+	xhci_link_rings(xhci, ring, first, last, num_segs);
+	xhci_dbg(xhci, "ring expansion succeed, now has %d segments\n",
+			ring->num_segs);
+
+	return 0;
+}
+
+#define CTX_SIZE(_hcc) (HCC_64BYTE_CONTEXT(_hcc) ? 64 : 32)
+
+static struct xhci_container_ctx *xhci_alloc_container_ctx(struct xhci_hcd *xhci,
+						    int type, gfp_t flags)
+{
+	struct xhci_container_ctx *ctx = kzalloc(sizeof(*ctx), flags);
+	if (!ctx)
+		return NULL;
+
+	BUG_ON((type != XHCI_CTX_TYPE_DEVICE) && (type != XHCI_CTX_TYPE_INPUT));
+	ctx->type = type;
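+	/* A context array is 32 entries of 32 bytes each, or 64 bytes with
+	 * the 64-byte context capability; an input context carries one extra
+	 * entry for the input control context.
+	 */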
+	ctx->size = HCC_64BYTE_CONTEXT(xhci->hcc_params) ? 2048 : 1024;
+	if (type == XHCI_CTX_TYPE_INPUT)
+		ctx->size += CTX_SIZE(xhci->hcc_params);
+
+	ctx->bytes = dma_pool_alloc(xhci->device_pool, flags, &ctx->dma);
+	if (!ctx->bytes) {
+		kfree(ctx);
+		return NULL;
+	}
+	memset(ctx->bytes, 0, ctx->size);
+	return ctx;
+}
+
+static void xhci_free_container_ctx(struct xhci_hcd *xhci,
+			     struct xhci_container_ctx *ctx)
+{
+	if (!ctx)
+		return;
+	dma_pool_free(xhci->device_pool, ctx->bytes, ctx->dma);
+	kfree(ctx);
+}
+
+struct xhci_input_control_ctx *xhci_get_input_control_ctx(struct xhci_hcd *xhci,
+					      struct xhci_container_ctx *ctx)
+{
+	BUG_ON(ctx->type != XHCI_CTX_TYPE_INPUT);
+	return (struct xhci_input_control_ctx *)ctx->bytes;
+}
+
+struct xhci_slot_ctx *xhci_get_slot_ctx(struct xhci_hcd *xhci,
+					struct xhci_container_ctx *ctx)
+{
+	if (ctx->type == XHCI_CTX_TYPE_DEVICE)
+		return (struct xhci_slot_ctx *)ctx->bytes;
+
+	return (struct xhci_slot_ctx *)
+		(ctx->bytes + CTX_SIZE(xhci->hcc_params));
+}
+
+struct xhci_ep_ctx *xhci_get_ep_ctx(struct xhci_hcd *xhci,
+				    struct xhci_container_ctx *ctx,
+				    unsigned int ep_index)
+{
+	/* The slot context occupies entry 0, so endpoint contexts start at
+	 * entry 1; an input context also has the input control context at
+	 * entry 0, pushing everything down by one more.
+	 */
+	ep_index++;
+	if (ctx->type == XHCI_CTX_TYPE_INPUT)
+		ep_index++;
+
+	return (struct xhci_ep_ctx *)
+		(ctx->bytes + (ep_index * CTX_SIZE(xhci->hcc_params)));
+}
+
+
+/***************** Streams structures manipulation *************************/
+
+static void xhci_free_stream_ctx(struct xhci_hcd *xhci,
+		unsigned int num_stream_ctxs,
+		struct xhci_stream_ctx *stream_ctx, dma_addr_t dma)
+{
+	struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+
+	if (num_stream_ctxs > MEDIUM_STREAM_ARRAY_SIZE)
+		dma_free_coherent(&pdev->dev,
+				sizeof(struct xhci_stream_ctx)*num_stream_ctxs,
+				stream_ctx, dma);
+	else if (num_stream_ctxs <= SMALL_STREAM_ARRAY_SIZE)
+		dma_pool_free(xhci->small_streams_pool,
+				stream_ctx, dma);
+	else
+		dma_pool_free(xhci->medium_streams_pool,
+				stream_ctx, dma);
+}
+
+/*
+ * The stream context array for each endpoint with bulk streams enabled can
+ * vary in size, based on:
+ *  - how many streams the endpoint supports,
+ *  - the maximum primary stream array size the host controller supports,
+ *  - and how many streams the device driver asks for.
+ *
+ * The stream context array must be a power of 2, and can be as small as
+ * 64 bytes or as large as 1MB.
+ */
+static struct xhci_stream_ctx *xhci_alloc_stream_ctx(struct xhci_hcd *xhci,
+		unsigned int num_stream_ctxs, dma_addr_t *dma,
+		gfp_t mem_flags)
+{
+	struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+
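+	/* Pick an allocation tier by array size: the small or medium
+	 * dma_pool where the array fits, full coherent DMA otherwise.
+	 */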
+	if (num_stream_ctxs > MEDIUM_STREAM_ARRAY_SIZE)
+		return dma_alloc_coherent(&pdev->dev,
+				sizeof(struct xhci_stream_ctx)*num_stream_ctxs,
+				dma, mem_flags);
+	else if (num_stream_ctxs <= SMALL_STREAM_ARRAY_SIZE)
+		return dma_pool_alloc(xhci->small_streams_pool,
+				mem_flags, dma);
+	else
+		return dma_pool_alloc(xhci->medium_streams_pool,
+				mem_flags, dma);
+}
+
+struct xhci_ring *xhci_dma_to_transfer_ring(
+		struct xhci_virt_ep *ep,
+		u64 address)
+{
+	if (ep->ep_state & EP_HAS_STREAMS)
+		return radix_tree_lookup(&ep->stream_info->trb_address_map,
+				address >> TRB_SEGMENT_SHIFT);
+	return ep->ring;
+}
+
+/* Only use this when you know stream_info is valid */
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+static struct xhci_ring *dma_to_stream_ring(
+		struct xhci_stream_info *stream_info,
+		u64 address)
+{
+	return radix_tree_lookup(&stream_info->trb_address_map,
+			address >> TRB_SEGMENT_SHIFT);
+}
+#endif	/* CONFIG_USB_XHCI_HCD_DEBUGGING */
+
+struct xhci_ring *xhci_stream_id_to_ring(
+		struct xhci_virt_device *dev,
+		unsigned int ep_index,
+		unsigned int stream_id)
+{
+	struct xhci_virt_ep *ep = &dev->eps[ep_index];
+
+	if (stream_id == 0)
+		return ep->ring;
+	if (!ep->stream_info)
+		return NULL;
+
+	if (stream_id > ep->stream_info->num_streams)
+		return NULL;
+	return ep->stream_info->stream_rings[stream_id];
+}
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+static int xhci_test_radix_tree(struct xhci_hcd *xhci,
+		unsigned int num_streams,
+		struct xhci_stream_info *stream_info)
+{
+	u32 cur_stream;
+	struct xhci_ring *cur_ring;
+	u64 addr;
+
+	for (cur_stream = 1; cur_stream < num_streams; cur_stream++) {
+		struct xhci_ring *mapped_ring;
+		int trb_size = sizeof(union xhci_trb);
+
+		cur_ring = stream_info->stream_rings[cur_stream];
+		for (addr = cur_ring->first_seg->dma;
+				addr < cur_ring->first_seg->dma + TRB_SEGMENT_SIZE;
+				addr += trb_size) {
+			mapped_ring = dma_to_stream_ring(stream_info, addr);
+			if (cur_ring != mapped_ring) {
+				xhci_warn(xhci, "WARN: DMA address 0x%08llx "
+						"didn't map to stream ID %u; "
+						"mapped to ring %p\n",
+						(unsigned long long) addr,
+						cur_stream,
+						mapped_ring);
+				return -EINVAL;
+			}
+		}
+		/* One TRB after the end of the ring segment shouldn't return a
+		 * pointer to the current ring (although it may be a part of a
+		 * different ring).
+		 */
+		mapped_ring = dma_to_stream_ring(stream_info, addr);
+		if (mapped_ring != cur_ring) {
+			/* One TRB before should also fail */
+			addr = cur_ring->first_seg->dma - trb_size;
+			mapped_ring = dma_to_stream_ring(stream_info, addr);
+		}
+		if (mapped_ring == cur_ring) {
+			xhci_warn(xhci, "WARN: Bad DMA address 0x%08llx "
+					"mapped to valid stream ID %u; "
+					"mapped ring = %p\n",
+					(unsigned long long) addr,
+					cur_stream,
+					mapped_ring);
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+#endif	/* CONFIG_USB_XHCI_HCD_DEBUGGING */
+
+/*
+ * Change an endpoint's internal structure so it supports stream IDs.  The
+ * number of requested streams includes stream 0, which cannot be used by device
+ * drivers.
+ *
+ * The number of stream contexts in the stream context array may be bigger than
+ * the number of streams the driver wants to use.  This is because the number of
+ * stream context array entries must be a power of two.
+ *
+ * We need a radix tree for mapping physical addresses of TRBs to which stream
+ * ID they belong to.  We need to do this because the host controller won't tell
+ * us which stream ring the TRB came from.  We could store the stream ID in an
+ * event data TRB, but that doesn't help us for the cancellation case, since the
+ * endpoint may stop before it reaches that event data TRB.
+ *
+ * The radix tree maps the upper portion of the TRB DMA address to a ring
+ * segment that has the same upper portion of DMA addresses.  For example, say I
+ * have segments of size 1KB, that are always 64-byte aligned.  A segment may
 * start at 0x10c91000 and end at 0x10c913f0.  If I drop the lower 10 bits, the
+ * key to the stream ID is 0x43244.  I can use the DMA address of the TRB to
+ * pass the radix tree a key to get the right stream ID:
+ *
+ * 	0x10c90fff >> 10 = 0x43243
+ * 	0x10c912c0 >> 10 = 0x43244
+ * 	0x10c91400 >> 10 = 0x43245
+ *
+ * Obviously, only those TRBs with DMA addresses that are within the segment
+ * will make the radix tree return the stream ID for that ring.
+ *
+ * Caveats for the radix tree:
+ *
 * The radix tree uses an unsigned long as its key.  On 32-bit systems, an
+ * unsigned long will be 32-bits; on a 64-bit system an unsigned long will be
+ * 64-bits.  Since we only request 32-bit DMA addresses, we can use that as the
+ * key on 32-bit or 64-bit systems (it would also be fine if we asked for 64-bit
+ * PCI DMA addresses on a 64-bit system).  There might be a problem on 32-bit
+ * extended systems (where the DMA address can be bigger than 32-bits),
+ * if we allow the PCI dma mask to be bigger than 32-bits.  So don't do that.
+ */
+struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
+		unsigned int num_stream_ctxs,
+		unsigned int num_streams, gfp_t mem_flags)
+{
+	struct xhci_stream_info *stream_info;
+	u32 cur_stream;
+	struct xhci_ring *cur_ring;
+	unsigned long key;
+	u64 addr;
+	int ret;
+
+	xhci_dbg(xhci, "Allocating %u streams and %u "
+			"stream context array entries.\n",
+			num_streams, num_stream_ctxs);
+	if (xhci->cmd_ring_reserved_trbs == MAX_RSVD_CMD_TRBS) {
+		xhci_dbg(xhci, "Command ring has no reserved TRBs available\n");
+		return NULL;
+	}
+	xhci->cmd_ring_reserved_trbs++;
+
+	stream_info = kzalloc(sizeof(struct xhci_stream_info), mem_flags);
+	if (!stream_info)
+		goto cleanup_trbs;
+
+	stream_info->num_streams = num_streams;
+	stream_info->num_stream_ctxs = num_stream_ctxs;
+
+	/* Initialize the array of virtual pointers to stream rings. */
+	stream_info->stream_rings = kzalloc(
+			sizeof(struct xhci_ring *)*num_streams,
+			mem_flags);
+	if (!stream_info->stream_rings)
+		goto cleanup_info;
+
+	/* Initialize the array of DMA addresses for stream rings for the HW. */
+	stream_info->stream_ctx_array = xhci_alloc_stream_ctx(xhci,
+			num_stream_ctxs, &stream_info->ctx_array_dma,
+			mem_flags);
+	if (!stream_info->stream_ctx_array)
+		goto cleanup_ctx;
+	memset(stream_info->stream_ctx_array, 0,
+			sizeof(struct xhci_stream_ctx)*num_stream_ctxs);
+
+	/* Allocate everything needed to free the stream rings later */
+	stream_info->free_streams_command =
+		xhci_alloc_command(xhci, true, true, mem_flags);
+	if (!stream_info->free_streams_command)
+		goto cleanup_ctx_array;
+
+	INIT_RADIX_TREE(&stream_info->trb_address_map, GFP_ATOMIC);
+
+	/* Allocate rings for all the streams that the driver will use,
+	 * and add their segment DMA addresses to the radix tree.
+	 * Stream 0 is reserved.
+	 */
+	for (cur_stream = 1; cur_stream < num_streams; cur_stream++) {
+		stream_info->stream_rings[cur_stream] =
+			xhci_ring_alloc(xhci, 2, 1, TYPE_STREAM, mem_flags);
+		cur_ring = stream_info->stream_rings[cur_stream];
+		if (!cur_ring)
+			goto cleanup_rings;
+		cur_ring->stream_id = cur_stream;
+		/* Set deq ptr, cycle bit, and stream context type; the ring
+		 * base is 16-byte aligned, so bits 3:1 can hold the stream
+		 * context type (SCT) and bit 0 the dequeue cycle state.
+		 */
+		addr = cur_ring->first_seg->dma |
+			SCT_FOR_CTX(SCT_PRI_TR) |
+			cur_ring->cycle_state;
+		stream_info->stream_ctx_array[cur_stream].stream_ring =
+			cpu_to_le64(addr);
+		xhci_dbg(xhci, "Setting stream %d ring ptr to 0x%08llx\n",
+				cur_stream, (unsigned long long) addr);
+
+		key = (unsigned long)
+			(cur_ring->first_seg->dma >> TRB_SEGMENT_SHIFT);
+		ret = radix_tree_insert(&stream_info->trb_address_map,
+				key, cur_ring);
+		if (ret) {
+			xhci_ring_free(xhci, cur_ring);
+			stream_info->stream_rings[cur_stream] = NULL;
+			goto cleanup_rings;
+		}
+	}
+	/* Leave the other unused stream ring pointers in the stream context
+	 * array initialized to zero.  This will cause the xHC to give us an
+	 * error if the device asks for a stream ID we don't have setup (if it
+	 * was any other way, the host controller would assume the ring is
+	 * "empty" and wait forever for data to be queued to that stream ID).
+	 */
+#if XHCI_DEBUG
+	/* Do a little test on the radix tree to make sure it returns the
+	 * correct values.
+	 */
+	if (xhci_test_radix_tree(xhci, num_streams, stream_info))
+		goto cleanup_rings;
+#endif
+
+	return stream_info;
+
+cleanup_rings:
+	for (cur_stream = 1; cur_stream < num_streams; cur_stream++) {
+		cur_ring = stream_info->stream_rings[cur_stream];
+		if (cur_ring) {
+			addr = cur_ring->first_seg->dma;
+			radix_tree_delete(&stream_info->trb_address_map,
+					addr >> TRB_SEGMENT_SHIFT);
+			xhci_ring_free(xhci, cur_ring);
+			stream_info->stream_rings[cur_stream] = NULL;
+		}
+	}
+	xhci_free_command(xhci, stream_info->free_streams_command);
+cleanup_ctx_array:
+	xhci_free_stream_ctx(xhci, num_stream_ctxs,
+			stream_info->stream_ctx_array,
+			stream_info->ctx_array_dma);
+cleanup_ctx:
+	kfree(stream_info->stream_rings);
+cleanup_info:
+	kfree(stream_info);
+cleanup_trbs:
+	xhci->cmd_ring_reserved_trbs--;
+	return NULL;
+}
+
+/*
+ * Sets the MaxPStreams field and the Linear Stream Array field.
+ * Sets the dequeue pointer to the stream context array.
+ */
+void xhci_setup_streams_ep_input_ctx(struct xhci_hcd *xhci,
+		struct xhci_ep_ctx *ep_ctx,
+		struct xhci_stream_info *stream_info)
+{
+	u32 max_primary_streams;
+	/* MaxPStreams is the number of stream context array entries, not the
+	 * number we're actually using.  Must be in 2^(MaxPstreams + 1) format.
+	 * fls(0) = 0, fls(0x1) = 1, fls(0x10) = 5, fls(0x100) = 9, so for
+	 * num_stream_ctxs == 256, fls() returns 9 and MaxPStreams is 7.
+	 */
+	max_primary_streams = fls(stream_info->num_stream_ctxs) - 2;
+	xhci_dbg(xhci, "Setting number of stream ctx array entries to %u\n",
+			1 << (max_primary_streams + 1));
+	ep_ctx->ep_info &= cpu_to_le32(~EP_MAXPSTREAMS_MASK);
+	ep_ctx->ep_info |= cpu_to_le32(EP_MAXPSTREAMS(max_primary_streams)
+				       | EP_HAS_LSA);
+	ep_ctx->deq  = cpu_to_le64(stream_info->ctx_array_dma);
+}
+
+/*
+ * Sets the MaxPStreams field and the Linear Stream Array field to 0.
+ * Reinstalls the "normal" endpoint ring (at its previous dequeue mark,
+ * not at the beginning of the ring).
+ */
+void xhci_setup_no_streams_ep_input_ctx(struct xhci_hcd *xhci,
+		struct xhci_ep_ctx *ep_ctx,
+		struct xhci_virt_ep *ep)
+{
+	dma_addr_t addr;
+	ep_ctx->ep_info &= cpu_to_le32(~(EP_MAXPSTREAMS_MASK | EP_HAS_LSA));
+	addr = xhci_trb_virt_to_dma(ep->ring->deq_seg, ep->ring->dequeue);
+	ep_ctx->deq  = cpu_to_le64(addr | ep->ring->cycle_state);
+}
+
+/* Frees all stream contexts associated with the endpoint.
+ *
+ * Caller should fix the endpoint context streams fields.
+ */
+void xhci_free_stream_info(struct xhci_hcd *xhci,
+		struct xhci_stream_info *stream_info)
+{
+	int cur_stream;
+	struct xhci_ring *cur_ring;
+	dma_addr_t addr;
+
+	if (!stream_info)
+		return;
+
+	for (cur_stream = 1; cur_stream < stream_info->num_streams;
+			cur_stream++) {
+		cur_ring = stream_info->stream_rings[cur_stream];
+		if (cur_ring) {
+			addr = cur_ring->first_seg->dma;
+			radix_tree_delete(&stream_info->trb_address_map,
+					addr >> TRB_SEGMENT_SHIFT);
+			xhci_ring_free(xhci, cur_ring);
+			stream_info->stream_rings[cur_stream] = NULL;
+		}
+	}
+	xhci_free_command(xhci, stream_info->free_streams_command);
+	xhci->cmd_ring_reserved_trbs--;
+	if (stream_info->stream_ctx_array)
+		xhci_free_stream_ctx(xhci,
+				stream_info->num_stream_ctxs,
+				stream_info->stream_ctx_array,
+				stream_info->ctx_array_dma);
+
+	kfree(stream_info->stream_rings);
+	kfree(stream_info);
+}
+
+
+/***************** Device context manipulation *************************/
+
+static void xhci_init_endpoint_timer(struct xhci_hcd *xhci,
+		struct xhci_virt_ep *ep)
+{
+	init_timer(&ep->stop_cmd_timer);
+	ep->stop_cmd_timer.data = (unsigned long) ep;
+	ep->stop_cmd_timer.function = xhci_stop_endpoint_command_watchdog;
+	ep->xhci = xhci;
+}
+
+static void xhci_free_tt_info(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		int slot_id)
+{
+	struct list_head *tt_list_head;
+	struct xhci_tt_bw_info *tt_info, *next;
+	bool slot_found = false;
+
+	/* If the device never made it past the Set Address stage,
+	 * it may not have the real_port set correctly.
+	 */
+	if (virt_dev->real_port == 0 ||
+			virt_dev->real_port > HCS_MAX_PORTS(xhci->hcs_params1)) {
+		xhci_dbg(xhci, "Bad real port.\n");
+		return;
+	}
+
+	tt_list_head = &(xhci->rh_bw[virt_dev->real_port - 1].tts);
+	list_for_each_entry_safe(tt_info, next, tt_list_head, tt_list) {
+		/* Multi-TT hubs will have more than one entry */
+		if (tt_info->slot_id == slot_id) {
+			slot_found = true;
+			list_del(&tt_info->tt_list);
+			kfree(tt_info);
+		} else if (slot_found) {
+			break;
+		}
+	}
+}
+
+int xhci_alloc_tt_info(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct usb_device *hdev,
+		struct usb_tt *tt, gfp_t mem_flags)
+{
+	struct xhci_tt_bw_info		*tt_info;
+	unsigned int			num_ports;
+	int				i, j;
+
+	if (!tt->multi)
+		num_ports = 1;
+	else
+		num_ports = hdev->maxchild;
+
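+	/* A multi-TT hub needs one TT bandwidth structure per downstream
+	 * port; a single-TT hub shares one TT among all of its ports.
+	 */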
+	for (i = 0; i < num_ports; i++) {
+		struct xhci_interval_bw_table *bw_table;
+
+		tt_info = kzalloc(sizeof(*tt_info), mem_flags);
+		if (!tt_info)
+			goto free_tts;
+		INIT_LIST_HEAD(&tt_info->tt_list);
+		list_add(&tt_info->tt_list,
+				&xhci->rh_bw[virt_dev->real_port - 1].tts);
+		tt_info->slot_id = virt_dev->udev->slot_id;
+		if (tt->multi)
+			tt_info->ttport = i+1;
+		bw_table = &tt_info->bw_table;
+		for (j = 0; j < XHCI_MAX_INTERVAL; j++)
+			INIT_LIST_HEAD(&bw_table->interval_bw[j].endpoints);
+	}
+	return 0;
+
+free_tts:
+	xhci_free_tt_info(xhci, virt_dev, virt_dev->udev->slot_id);
+	return -ENOMEM;
+}
+
+
+/* All the xhci_tds in the ring's TD list should be freed at this point.
+ * Should be called with xhci->lock held if there is any chance the TT lists
+ * will be manipulated by the configure endpoint, allocate device, or update
+ * hub functions while this function is removing the TT entries from the list.
+ */
+void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
+{
+	struct xhci_virt_device *dev;
+	int i;
+	int old_active_eps = 0;
+
+	/* Slot ID 0 is reserved */
+	if (slot_id == 0 || !xhci->devs[slot_id])
+		return;
+
+	dev = xhci->devs[slot_id];
+	xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
+	if (!dev)
+		return;
+
+	if (dev->tt_info)
+		old_active_eps = dev->tt_info->active_eps;
+
+	for (i = 0; i < 31; ++i) {
+		if (dev->eps[i].ring)
+			xhci_ring_free(xhci, dev->eps[i].ring);
+		if (dev->eps[i].stream_info)
+			xhci_free_stream_info(xhci,
+					dev->eps[i].stream_info);
+		/* Endpoints on the TT/root port lists should have been removed
+		 * when usb_disable_device() was called for the device.
+		 * We can't drop them anyway, because the udev might have gone
+		 * away by this point, and we can't tell what speed it was.
+		 */
+		if (!list_empty(&dev->eps[i].bw_endpoint_list))
+			xhci_warn(xhci, "Slot %u endpoint %u "
+					"not removed from BW list!\n",
+					slot_id, i);
+	}
+	/* If this is a hub, free the TT(s) from the TT list */
+	xhci_free_tt_info(xhci, dev, slot_id);
+	/* If necessary, update the number of active TTs on this root port */
+	xhci_update_tt_active_eps(xhci, dev, old_active_eps);
+
+	if (dev->ring_cache) {
+		for (i = 0; i < dev->num_rings_cached; i++)
+			xhci_ring_free(xhci, dev->ring_cache[i]);
+		kfree(dev->ring_cache);
+	}
+
+	if (dev->in_ctx)
+		xhci_free_container_ctx(xhci, dev->in_ctx);
+	if (dev->out_ctx)
+		xhci_free_container_ctx(xhci, dev->out_ctx);
+
+	kfree(xhci->devs[slot_id]);
+	xhci->devs[slot_id] = NULL;
+}
+
+int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+		struct usb_device *udev, gfp_t flags)
+{
+	struct xhci_virt_device *dev;
+	int i;
+
+	/* Slot ID 0 is reserved */
+	if (slot_id == 0 || xhci->devs[slot_id]) {
+		xhci_warn(xhci, "Bad Slot ID %d\n", slot_id);
+		return 0;
+	}
+
+	xhci->devs[slot_id] = kzalloc(sizeof(*xhci->devs[slot_id]), flags);
+	if (!xhci->devs[slot_id])
+		return 0;
+	dev = xhci->devs[slot_id];
+
+	/* Allocate the (output) device context that will be used in the HC. */
+	dev->out_ctx = xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_DEVICE, flags);
+	if (!dev->out_ctx)
+		goto fail;
+
+	xhci_dbg(xhci, "Slot %d output ctx = 0x%llx (dma)\n", slot_id,
+			(unsigned long long)dev->out_ctx->dma);
+
+	/* Allocate the (input) device context for address device command */
+	dev->in_ctx = xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_INPUT, flags);
+	if (!dev->in_ctx)
+		goto fail;
+
+	xhci_dbg(xhci, "Slot %d input ctx = 0x%llx (dma)\n", slot_id,
+			(unsigned long long)dev->in_ctx->dma);
+
+	/* Initialize the cancellation list and watchdog timers for each ep */
+	for (i = 0; i < 31; i++) {
+		xhci_init_endpoint_timer(xhci, &dev->eps[i]);
+		INIT_LIST_HEAD(&dev->eps[i].cancelled_td_list);
+		INIT_LIST_HEAD(&dev->eps[i].bw_endpoint_list);
+	}
+
+	/* Allocate endpoint 0 ring */
+	dev->eps[0].ring = xhci_ring_alloc(xhci, 2, 1, TYPE_CTRL, flags);
+	if (!dev->eps[0].ring)
+		goto fail;
+
+	/* Allocate pointers to the ring cache */
+	dev->ring_cache = kzalloc(
+			sizeof(struct xhci_ring *)*XHCI_MAX_RINGS_CACHED,
+			flags);
+	if (!dev->ring_cache)
+		goto fail;
+	dev->num_rings_cached = 0;
+
+	init_completion(&dev->cmd_completion);
+	INIT_LIST_HEAD(&dev->cmd_list);
+	dev->udev = udev;
+
+	/* Point to output device context in dcbaa. */
+	xhci->dcbaa->dev_context_ptrs[slot_id] = cpu_to_le64(dev->out_ctx->dma);
+	xhci_dbg(xhci, "Set slot id %d dcbaa entry %p to 0x%llx\n",
+		 slot_id,
+		 &xhci->dcbaa->dev_context_ptrs[slot_id],
+		 le64_to_cpu(xhci->dcbaa->dev_context_ptrs[slot_id]));
+
+	return 1;
+fail:
+	xhci_free_virt_device(xhci, slot_id);
+	return 0;
+}
+
+void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci,
+		struct usb_device *udev)
+{
+	struct xhci_virt_device *virt_dev;
+	struct xhci_ep_ctx	*ep0_ctx;
+	struct xhci_ring	*ep_ring;
+
+	virt_dev = xhci->devs[udev->slot_id];
+	ep0_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, 0);
+	ep_ring = virt_dev->eps[0].ring;
+	/*
+	 * FIXME we don't keep track of the dequeue pointer very well after a
+	 * Set TR dequeue pointer, so we're setting the dequeue pointer of the
+	 * host to our enqueue pointer.  This should only be called after a
+	 * configured device has reset, so all control transfers should have
+	 * been completed or cancelled before the reset.
+	 */
+	ep0_ctx->deq = cpu_to_le64(xhci_trb_virt_to_dma(ep_ring->enq_seg,
+							ep_ring->enqueue)
+				   | ep_ring->cycle_state);
+}
+
+/*
+ * The xHCI roothub may have ports of differing speeds in any order in the port
+ * status registers.  xhci->port_array provides an array of the port speed for
+ * each offset into the port status registers.
+ *
+ * The xHCI hardware wants to know the roothub port number that the USB device
+ * is attached to (or the roothub port its ancestor hub is attached to).  All we
+ * know is the index of that port under either the USB 2.0 or the USB 3.0
+ * roothub, but that doesn't give us the real index into the HW port status
+ * registers. Call xhci_find_raw_port_number() to get real index.
+ */
+static u32 xhci_find_real_port_number(struct xhci_hcd *xhci,
+		struct usb_device *udev)
+{
+	struct usb_device *top_dev;
+	struct usb_hcd *hcd;
+
+	if (udev->speed == USB_SPEED_SUPER)
+		hcd = xhci->shared_hcd;
+	else
+		hcd = xhci->main_hcd;
+
+	for (top_dev = udev; top_dev->parent && top_dev->parent->parent;
+			top_dev = top_dev->parent)
+		/* Found device below root hub */;
+
+	return xhci_find_raw_port_number(hcd, top_dev->portnum);
+}
+
+/* Setup an xHCI virtual device for a Set Address command */
+int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev)
+{
+	struct xhci_virt_device *dev;
+	struct xhci_ep_ctx	*ep0_ctx;
+	struct xhci_slot_ctx    *slot_ctx;
+	u32			port_num;
+	struct usb_device *top_dev;
+
+	dev = xhci->devs[udev->slot_id];
+	/* Slot ID 0 is reserved */
+	if (udev->slot_id == 0 || !dev) {
+		xhci_warn(xhci, "Slot ID %d is not assigned to this device\n",
+				udev->slot_id);
+		return -EINVAL;
+	}
+	ep0_ctx = xhci_get_ep_ctx(xhci, dev->in_ctx, 0);
+	slot_ctx = xhci_get_slot_ctx(xhci, dev->in_ctx);
+
+	/* 3) Only the control endpoint is valid - one endpoint context */
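+	/* udev->route is the route string: four bits of parent hub port
+	 * number for every tier between the root port and this device.
+	 */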
+	slot_ctx->dev_info |= cpu_to_le32(LAST_CTX(1) | udev->route);
+	switch (udev->speed) {
+	case USB_SPEED_SUPER:
+		slot_ctx->dev_info |= cpu_to_le32(SLOT_SPEED_SS);
+		break;
+	case USB_SPEED_HIGH:
+		slot_ctx->dev_info |= cpu_to_le32(SLOT_SPEED_HS);
+		break;
+	case USB_SPEED_FULL:
+		slot_ctx->dev_info |= cpu_to_le32(SLOT_SPEED_FS);
+		break;
+	case USB_SPEED_LOW:
+		slot_ctx->dev_info |= cpu_to_le32(SLOT_SPEED_LS);
+		break;
+	case USB_SPEED_WIRELESS:
+		xhci_dbg(xhci, "FIXME xHCI doesn't support wireless speeds\n");
+		return -EINVAL;
+	default:
+		/* Speed was set earlier, this shouldn't happen. */
+		BUG();
+	}
+	/* Find the root hub port this device is under */
+	port_num = xhci_find_real_port_number(xhci, udev);
+	if (!port_num)
+		return -EINVAL;
+	slot_ctx->dev_info2 |= cpu_to_le32(ROOT_HUB_PORT(port_num));
+	/* Set the port number in the virtual_device to the faked port number */
+	for (top_dev = udev; top_dev->parent && top_dev->parent->parent;
+			top_dev = top_dev->parent)
+		/* Found device below root hub */;
+	dev->fake_port = top_dev->portnum;
+	dev->real_port = port_num;
+	xhci_dbg(xhci, "Set root hub portnum to %d\n", port_num);
+	xhci_dbg(xhci, "Set fake root hub portnum to %d\n", dev->fake_port);
+
+	/* Find the right bandwidth table that this device will be a part of.
+	 * If this is a full speed device attached directly to a root port (or a
+	 * descendant of one), it counts as a primary bandwidth domain, not a
+	 * secondary bandwidth domain under a TT.  An xhci_tt_info structure
+	 * will never be created for the HS root hub.
+	 */
+	if (!udev->tt || !udev->tt->hub->parent) {
+		dev->bw_table = &xhci->rh_bw[port_num - 1].bw_table;
+	} else {
+		struct xhci_root_port_bw_info *rh_bw;
+		struct xhci_tt_bw_info *tt_bw;
+
+		rh_bw = &xhci->rh_bw[port_num - 1];
+		/* Find the right TT. */
+		list_for_each_entry(tt_bw, &rh_bw->tts, tt_list) {
+			if (tt_bw->slot_id != udev->tt->hub->slot_id)
+				continue;
+
+			if (!dev->udev->tt->multi ||
+					(udev->tt->multi &&
+					 tt_bw->ttport == dev->udev->ttport)) {
+				dev->bw_table = &tt_bw->bw_table;
+				dev->tt_info = tt_bw;
+				break;
+			}
+		}
+		if (!dev->tt_info)
+			xhci_warn(xhci, "WARN: Didn't find a matching TT\n");
+	}
+
+	/* Is this a LS/FS device under an external HS hub? */
+	if (udev->tt && udev->tt->hub->parent) {
+		slot_ctx->tt_info = cpu_to_le32(udev->tt->hub->slot_id |
+						(udev->ttport << 8));
+		if (udev->tt->multi)
+			slot_ctx->dev_info |= cpu_to_le32(DEV_MTT);
+	}
+	xhci_dbg(xhci, "udev->tt = %p\n", udev->tt);
+	xhci_dbg(xhci, "udev->ttport = 0x%x\n", udev->ttport);
+
+	/* Step 4 - ring already allocated */
+	/* Step 5 */
+	ep0_ctx->ep_info2 = cpu_to_le32(EP_TYPE(CTRL_EP));
+	/*
+	 * XXX: Not sure about wireless USB devices.
+	 */
+	switch (udev->speed) {
+	case USB_SPEED_SUPER:
+		ep0_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(512));
+		break;
+	case USB_SPEED_HIGH:
+	/* USB core guesses at a 64-byte max packet first for FS devices */
+	case USB_SPEED_FULL:
+		ep0_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(64));
+		break;
+	case USB_SPEED_LOW:
+		ep0_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(8));
+		break;
+	case USB_SPEED_WIRELESS:
+		xhci_dbg(xhci, "FIXME xHCI doesn't support wireless speeds\n");
+		return -EINVAL;
+	default:
+		/* New speed? */
+		BUG();
+	}
+	/* EP 0 can handle "burst" sizes of 1, so Max Burst Size field is 0 */
+	ep0_ctx->ep_info2 |= cpu_to_le32(MAX_BURST(0) | ERROR_COUNT(3));
+
+	ep0_ctx->deq = cpu_to_le64(dev->eps[0].ring->first_seg->dma |
+				   dev->eps[0].ring->cycle_state);
+
+	/* Steps 7 and 8 were done in xhci_alloc_virt_device() */
+
+	return 0;
+}
+
+/*
+ * Convert an interval expressed as 2^(bInterval - 1) == interval into a
+ * straight exponent value: 2^n == interval.
+ */
+static unsigned int xhci_parse_exponent_interval(struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	unsigned int interval;
+
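+	/* bInterval must be 1 to 16 and encodes 2^(bInterval - 1); clamp
+	 * out-of-spec descriptors before converting to a plain exponent.
+	 */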
+	interval = clamp_val(ep->desc.bInterval, 1, 16) - 1;
+	if (interval != ep->desc.bInterval - 1)
+		dev_warn(&udev->dev,
+			 "ep %#x - rounding interval to %d %sframes\n",
+			 ep->desc.bEndpointAddress,
+			 1 << interval,
+			 udev->speed == USB_SPEED_FULL ? "" : "micro");
+
+	if (udev->speed == USB_SPEED_FULL) {
+		/*
+		 * Full speed isoc endpoints specify interval in frames,
+		 * not microframes. We are using microframes everywhere,
+		 * so adjust accordingly.
+		 */
+		interval += 3;	/* 1 frame = 2^3 uframes */
+	}
+
+	return interval;
+}
+
+/*
+ * Convert bInterval expressed in microframes (in 1-255 range) to exponent of
+ * microframes, rounded down to nearest power of 2.
+ */
+static unsigned int xhci_microframes_to_exponent(struct usb_device *udev,
+		struct usb_host_endpoint *ep, unsigned int desc_interval,
+		unsigned int min_exponent, unsigned int max_exponent)
+{
+	unsigned int interval;
+
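+	/* fls() returns the 1-based index of the highest set bit, so
+	 * fls(x) - 1 is floor(log2(x)) for x > 0.
+	 */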
+	interval = fls(desc_interval) - 1;
+	interval = clamp_val(interval, min_exponent, max_exponent);
+	if ((1 << interval) != desc_interval)
+		dev_warn(&udev->dev,
+			 "ep %#x - rounding interval to %d microframes, ep desc says %d microframes\n",
+			 ep->desc.bEndpointAddress,
+			 1 << interval,
+			 desc_interval);
+
+	return interval;
+}
+
+static unsigned int xhci_parse_microframe_interval(struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	if (ep->desc.bInterval == 0)
+		return 0;
+	return xhci_microframes_to_exponent(udev, ep,
+			ep->desc.bInterval, 0, 15);
+}
+
+static unsigned int xhci_parse_frame_interval(struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	return xhci_microframes_to_exponent(udev, ep,
+			ep->desc.bInterval * 8, 3, 10);
+}
+
+/* Return the polling or NAK interval.
+ *
+ * The polling interval is expressed in "microframes".  If xHCI's Interval field
+ * is set to N, it will service the endpoint every 2^(Interval)*125us.
+ *
+ * The NAK interval is one NAK per 1 to 255 microframes, or no NAKs if interval
+ * is set to 0.
+ */
+static unsigned int xhci_get_endpoint_interval(struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	unsigned int interval = 0;
+
+	switch (udev->speed) {
+	case USB_SPEED_HIGH:
+		/* Max NAK rate */
+		if (usb_endpoint_xfer_control(&ep->desc) ||
+		    usb_endpoint_xfer_bulk(&ep->desc)) {
+			interval = xhci_parse_microframe_interval(udev, ep);
+			break;
+		}
+		/* Fall through - SS and HS isoc/int have same decoding */
+
+	case USB_SPEED_SUPER:
+		if (usb_endpoint_xfer_int(&ep->desc) ||
+		    usb_endpoint_xfer_isoc(&ep->desc)) {
+			interval = xhci_parse_exponent_interval(udev, ep);
+		}
+		break;
+
+	case USB_SPEED_FULL:
+		if (usb_endpoint_xfer_isoc(&ep->desc)) {
+			interval = xhci_parse_exponent_interval(udev, ep);
+			break;
+		}
+		/*
+		 * Fall through for interrupt endpoint interval decoding
+		 * since it uses the same rules as low speed interrupt
+		 * endpoints.
+		 */
+
+	case USB_SPEED_LOW:
+		if (usb_endpoint_xfer_int(&ep->desc) ||
+		    usb_endpoint_xfer_isoc(&ep->desc)) {
+
+			interval = xhci_parse_frame_interval(udev, ep);
+		}
+		break;
+
+	default:
+		BUG();
+	}
+	return EP_INTERVAL(interval);
+}
+
+/* The "Mult" field in the endpoint context is only set for SuperSpeed isoc eps.
+ * High speed endpoint descriptors can define "the number of additional
+ * transaction opportunities per microframe", but that goes in the Max Burst
+ * endpoint context field.
+ */
+static u32 xhci_get_endpoint_mult(struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	if (udev->speed != USB_SPEED_SUPER ||
+			!usb_endpoint_xfer_isoc(&ep->desc))
+		return 0;
+	return ep->ss_ep_comp.bmAttributes;
+}
+
+static u32 xhci_get_endpoint_type(struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	int in;
+	u32 type;
+
+	in = usb_endpoint_dir_in(&ep->desc);
+	if (usb_endpoint_xfer_control(&ep->desc)) {
+		type = EP_TYPE(CTRL_EP);
+	} else if (usb_endpoint_xfer_bulk(&ep->desc)) {
+		if (in)
+			type = EP_TYPE(BULK_IN_EP);
+		else
+			type = EP_TYPE(BULK_OUT_EP);
+	} else if (usb_endpoint_xfer_isoc(&ep->desc)) {
+		if (in)
+			type = EP_TYPE(ISOC_IN_EP);
+		else
+			type = EP_TYPE(ISOC_OUT_EP);
+	} else if (usb_endpoint_xfer_int(&ep->desc)) {
+		if (in)
+			type = EP_TYPE(INT_IN_EP);
+		else
+			type = EP_TYPE(INT_OUT_EP);
+	} else {
+		BUG();
+	}
+	return type;
+}
+
+/* Return the maximum endpoint service interval time (ESIT) payload.
+ * Basically, this is the maxpacket size, multiplied by the burst size
+ * and mult size.
+ */
+static u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	int max_burst;
+	int max_packet;
+
+	/* Only applies for interrupt or isochronous endpoints */
+	if (usb_endpoint_xfer_control(&ep->desc) ||
+			usb_endpoint_xfer_bulk(&ep->desc))
+		return 0;
+
+	if (udev->speed == USB_SPEED_SUPER)
+		return le16_to_cpu(ep->ss_ep_comp.wBytesPerInterval);
+
+	max_packet = GET_MAX_PACKET(usb_endpoint_maxp(&ep->desc));
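+	/* Bits 12:11 of wMaxPacketSize encode up to two additional
+	 * transactions per microframe (USB 2.0, section 9.6.6).
+	 */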
+	max_burst = (usb_endpoint_maxp(&ep->desc) & 0x1800) >> 11;
+	/* A 0 in max burst means 1 transfer per ESIT */
+	return max_packet * (max_burst + 1);
+}
+
+/* Set up an endpoint with one ring segment.  Do not allocate stream rings.
+ * Drivers will have to call usb_alloc_streams() to do that.
+ */
+int xhci_endpoint_init(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct usb_device *udev,
+		struct usb_host_endpoint *ep,
+		gfp_t mem_flags)
+{
+	unsigned int ep_index;
+	struct xhci_ep_ctx *ep_ctx;
+	struct xhci_ring *ep_ring;
+	unsigned int max_packet;
+	unsigned int max_burst;
+	enum xhci_ring_type type;
+	u32 max_esit_payload;
+
+	ep_index = xhci_get_endpoint_index(&ep->desc);
+	ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+
+	type = usb_endpoint_type(&ep->desc);
+	/* Set up the endpoint ring */
+	virt_dev->eps[ep_index].new_ring =
+		xhci_ring_alloc(xhci, 2, 1, type, mem_flags);
+	if (!virt_dev->eps[ep_index].new_ring) {
+		/* Attempt to use the ring cache */
+		if (virt_dev->num_rings_cached == 0)
+			return -ENOMEM;
+		virt_dev->eps[ep_index].new_ring =
+			virt_dev->ring_cache[virt_dev->num_rings_cached];
+		virt_dev->ring_cache[virt_dev->num_rings_cached] = NULL;
+		virt_dev->num_rings_cached--;
+		xhci_reinit_cached_ring(xhci, virt_dev->eps[ep_index].new_ring,
+					1, type);
+	}
+	virt_dev->eps[ep_index].skip = false;
+	ep_ring = virt_dev->eps[ep_index].new_ring;
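+	/* Ring segments are at least 16-byte aligned, so bit 0 of the
+	 * dequeue pointer is free to carry the dequeue cycle state (DCS).
+	 */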
+	ep_ctx->deq = cpu_to_le64(ep_ring->first_seg->dma | ep_ring->cycle_state);
+
+	ep_ctx->ep_info = cpu_to_le32(xhci_get_endpoint_interval(udev, ep)
+				      | EP_MULT(xhci_get_endpoint_mult(udev, ep)));
+
+	/* FIXME dig Mult and streams info out of ep companion desc */
+
+	/* Allow 3 retries for everything but isoc;
+	 * CErr shall be set to 0 for Isoch endpoints.
+	 */
+	if (!usb_endpoint_xfer_isoc(&ep->desc))
+		ep_ctx->ep_info2 = cpu_to_le32(ERROR_COUNT(3));
+	else
+		ep_ctx->ep_info2 = cpu_to_le32(ERROR_COUNT(0));
+
+	ep_ctx->ep_info2 |= cpu_to_le32(xhci_get_endpoint_type(udev, ep));
+
+	/* Set the max packet size and max burst */
+	max_packet = GET_MAX_PACKET(usb_endpoint_maxp(&ep->desc));
+	max_burst = 0;
+	switch (udev->speed) {
+	case USB_SPEED_SUPER:
+		/* dig out max burst from ep companion desc */
+		max_burst = ep->ss_ep_comp.bMaxBurst;
+		break;
+	case USB_SPEED_HIGH:
+		/* Some devices get this wrong */
+		if (usb_endpoint_xfer_bulk(&ep->desc))
+			max_packet = 512;
+		/* bits 11:12 specify the number of additional transaction
+		 * opportunities per microframe (USB 2.0, section 9.6.6)
+		 */
+		if (usb_endpoint_xfer_isoc(&ep->desc) ||
+				usb_endpoint_xfer_int(&ep->desc)) {
+			max_burst = (usb_endpoint_maxp(&ep->desc)
+				     & 0x1800) >> 11;
+		}
+		break;
+	case USB_SPEED_FULL:
+	case USB_SPEED_LOW:
+		break;
+	default:
+		BUG();
+	}
+	ep_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(max_packet) |
+			MAX_BURST(max_burst));
+	max_esit_payload = xhci_get_max_esit_payload(xhci, udev, ep);
+	ep_ctx->tx_info = cpu_to_le32(MAX_ESIT_PAYLOAD_FOR_EP(max_esit_payload));
+
+	/*
+	 * XXX no idea how to calculate the average TRB buffer length for bulk
+	 * endpoints, as the driver gives us no clue how big each scatter gather
+	 * list entry (or buffer) is going to be.
+	 *
+	 * For isochronous and interrupt endpoints, we set it to the max
+	 * available, until we have new API in the USB core to allow drivers to
+	 * declare how much bandwidth they actually need.
+	 *
+	 * Normally, it would be calculated by taking the total of the buffer
+	 * lengths in the TD and then dividing by the number of TRBs in a TD,
+	 * including link TRBs, No-op TRBs, and Event data TRBs.  Since we don't
+	 * use Event Data TRBs, and we don't chain in a link TRB on short
+	 * transfers, we're basically dividing by 1.
+	 *
+	 * xHCI 1.0 specification indicates that the Average TRB Length should
+	 * be set to 8 for control endpoints.
+	 */
+	if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100)
+		ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8));
+	else
+		ep_ctx->tx_info |=
+			 cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(max_esit_payload));
+
+	/* FIXME Debug endpoint context */
+	return 0;
+}
+
+void xhci_endpoint_zero(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct usb_host_endpoint *ep)
+{
+	unsigned int ep_index;
+	struct xhci_ep_ctx *ep_ctx;
+
+	ep_index = xhci_get_endpoint_index(&ep->desc);
+	ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+
+	ep_ctx->ep_info = 0;
+	ep_ctx->ep_info2 = 0;
+	ep_ctx->deq = 0;
+	ep_ctx->tx_info = 0;
+	/* Don't free the endpoint ring until the set interface or configuration
+	 * request succeeds.
+	 */
+}
+
+void xhci_clear_endpoint_bw_info(struct xhci_bw_info *bw_info)
+{
+	bw_info->ep_interval = 0;
+	bw_info->mult = 0;
+	bw_info->num_packets = 0;
+	bw_info->max_packet_size = 0;
+	bw_info->type = 0;
+	bw_info->max_esit_payload = 0;
+}
+
+void xhci_update_bw_info(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_input_control_ctx *ctrl_ctx,
+		struct xhci_virt_device *virt_dev)
+{
+	struct xhci_bw_info *bw_info;
+	struct xhci_ep_ctx *ep_ctx;
+	unsigned int ep_type;
+	int i;
+
+	for (i = 1; i < 31; ++i) {
+		bw_info = &virt_dev->eps[i].bw_info;
+
+		/* We can't tell what endpoint type is being dropped, but
+		 * unconditionally clearing the bandwidth info for non-periodic
+		 * endpoints should be harmless because the info will never be
+		 * set in the first place.
+		 */
+		if (!EP_IS_ADDED(ctrl_ctx, i) && EP_IS_DROPPED(ctrl_ctx, i)) {
+			/* Dropped endpoint */
+			xhci_clear_endpoint_bw_info(bw_info);
+			continue;
+		}
+
+		if (EP_IS_ADDED(ctrl_ctx, i)) {
+			ep_ctx = xhci_get_ep_ctx(xhci, in_ctx, i);
+			ep_type = CTX_TO_EP_TYPE(le32_to_cpu(ep_ctx->ep_info2));
+
+			/* Ignore non-periodic endpoints */
+			if (ep_type != ISOC_OUT_EP && ep_type != INT_OUT_EP &&
+					ep_type != ISOC_IN_EP &&
+					ep_type != INT_IN_EP)
+				continue;
+
+			/* Added or changed endpoint */
+			bw_info->ep_interval = CTX_TO_EP_INTERVAL(
+					le32_to_cpu(ep_ctx->ep_info));
+			/* Number of packets and mult are zero-based in the
+			 * input context, but we want one-based for the
+			 * interval table.
+			 */
+			bw_info->mult = CTX_TO_EP_MULT(
+					le32_to_cpu(ep_ctx->ep_info)) + 1;
+			bw_info->num_packets = CTX_TO_MAX_BURST(
+					le32_to_cpu(ep_ctx->ep_info2)) + 1;
+			bw_info->max_packet_size = MAX_PACKET_DECODED(
+					le32_to_cpu(ep_ctx->ep_info2));
+			bw_info->type = ep_type;
+			bw_info->max_esit_payload = CTX_TO_MAX_ESIT_PAYLOAD(
+					le32_to_cpu(ep_ctx->tx_info));
+		}
+	}
+}
+
+/* Copy output xhci_ep_ctx to the input xhci_ep_ctx copy.
+ * Useful when you want to change one particular aspect of the endpoint and then
+ * issue a configure endpoint command.
+ */
+void xhci_endpoint_copy(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_container_ctx *out_ctx,
+		unsigned int ep_index)
+{
+	struct xhci_ep_ctx *out_ep_ctx;
+	struct xhci_ep_ctx *in_ep_ctx;
+
+	out_ep_ctx = xhci_get_ep_ctx(xhci, out_ctx, ep_index);
+	in_ep_ctx = xhci_get_ep_ctx(xhci, in_ctx, ep_index);
+
+	in_ep_ctx->ep_info = out_ep_ctx->ep_info;
+	in_ep_ctx->ep_info2 = out_ep_ctx->ep_info2;
+	in_ep_ctx->deq = out_ep_ctx->deq;
+	in_ep_ctx->tx_info = out_ep_ctx->tx_info;
+}
+
+/* Copy output xhci_slot_ctx to the input xhci_slot_ctx.
+ * Useful when you want to change one particular aspect of the endpoint and then
+ * issue a configure endpoint command.  Only the context entries field matters,
+ * but we'll copy the whole thing anyway.
+ */
+void xhci_slot_copy(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_container_ctx *out_ctx)
+{
+	struct xhci_slot_ctx *in_slot_ctx;
+	struct xhci_slot_ctx *out_slot_ctx;
+
+	in_slot_ctx = xhci_get_slot_ctx(xhci, in_ctx);
+	out_slot_ctx = xhci_get_slot_ctx(xhci, out_ctx);
+
+	in_slot_ctx->dev_info = out_slot_ctx->dev_info;
+	in_slot_ctx->dev_info2 = out_slot_ctx->dev_info2;
+	in_slot_ctx->tt_info = out_slot_ctx->tt_info;
+	in_slot_ctx->dev_state = out_slot_ctx->dev_state;
+}
+
+/* Set up the scratchpad buffer array and scratchpad buffers, if needed. */
+static int scratchpad_alloc(struct xhci_hcd *xhci, gfp_t flags)
+{
+	int i;
+	struct device *dev = xhci_to_hcd(xhci)->self.controller;
+	int num_sp = HCS_MAX_SCRATCHPAD(xhci->hcs_params2);
+
+	xhci_dbg(xhci, "Allocating %d scratchpad buffers\n", num_sp);
+
+	if (!num_sp)
+		return 0;
+
+	xhci->scratchpad = kzalloc(sizeof(*xhci->scratchpad), flags);
+	if (!xhci->scratchpad)
+		goto fail_sp;
+
+	xhci->scratchpad->sp_array = dma_alloc_coherent(dev,
+				     num_sp * sizeof(u64),
+				     &xhci->scratchpad->sp_dma, flags);
+	if (!xhci->scratchpad->sp_array)
+		goto fail_sp2;
+
+	xhci->scratchpad->sp_buffers = kzalloc(sizeof(void *) * num_sp, flags);
+	if (!xhci->scratchpad->sp_buffers)
+		goto fail_sp3;
+
+	xhci->scratchpad->sp_dma_buffers =
+		kzalloc(sizeof(dma_addr_t) * num_sp, flags);
+
+	if (!xhci->scratchpad->sp_dma_buffers)
+		goto fail_sp4;
+
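+	/* DCBAA entry 0 is reserved: it points at the scratchpad buffer
+	 * array rather than at a device context (xHCI spec section 4.20).
+	 */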
+	xhci->dcbaa->dev_context_ptrs[0] = cpu_to_le64(xhci->scratchpad->sp_dma);
+	for (i = 0; i < num_sp; i++) {
+		dma_addr_t dma;
+		void *buf = dma_alloc_coherent(dev, xhci->page_size, &dma,
+				flags);
+		if (!buf)
+			goto fail_sp5;
+
+		xhci->scratchpad->sp_array[i] = dma;
+		xhci->scratchpad->sp_buffers[i] = buf;
+		xhci->scratchpad->sp_dma_buffers[i] = dma;
+	}
+
+	return 0;
+
+ fail_sp5:
+	for (i = i - 1; i >= 0; i--) {
+		dma_free_coherent(dev, xhci->page_size,
+				    xhci->scratchpad->sp_buffers[i],
+				    xhci->scratchpad->sp_dma_buffers[i]);
+	}
+	kfree(xhci->scratchpad->sp_dma_buffers);
+
+ fail_sp4:
+	kfree(xhci->scratchpad->sp_buffers);
+
+ fail_sp3:
+	dma_free_coherent(dev, num_sp * sizeof(u64),
+			    xhci->scratchpad->sp_array,
+			    xhci->scratchpad->sp_dma);
+
+ fail_sp2:
+	kfree(xhci->scratchpad);
+	xhci->scratchpad = NULL;
+
+ fail_sp:
+	return -ENOMEM;
+}
+
+static void scratchpad_free(struct xhci_hcd *xhci)
+{
+	int num_sp;
+	int i;
+	struct pci_dev	*pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+
+	if (!xhci->scratchpad)
+		return;
+
+	num_sp = HCS_MAX_SCRATCHPAD(xhci->hcs_params2);
+
+	for (i = 0; i < num_sp; i++) {
+		dma_free_coherent(&pdev->dev, xhci->page_size,
+				    xhci->scratchpad->sp_buffers[i],
+				    xhci->scratchpad->sp_dma_buffers[i]);
+	}
+	kfree(xhci->scratchpad->sp_dma_buffers);
+	kfree(xhci->scratchpad->sp_buffers);
+	dma_free_coherent(&pdev->dev, num_sp * sizeof(u64),
+			    xhci->scratchpad->sp_array,
+			    xhci->scratchpad->sp_dma);
+	kfree(xhci->scratchpad);
+	xhci->scratchpad = NULL;
+}
+
+struct xhci_command *xhci_alloc_command(struct xhci_hcd *xhci,
+		bool allocate_in_ctx, bool allocate_completion,
+		gfp_t mem_flags)
+{
+	struct xhci_command *command;
+
+	command = kzalloc(sizeof(*command), mem_flags);
+	if (!command)
+		return NULL;
+
+	if (allocate_in_ctx) {
+		command->in_ctx =
+			xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_INPUT,
+					mem_flags);
+		if (!command->in_ctx) {
+			kfree(command);
+			return NULL;
+		}
+	}
+
+	if (allocate_completion) {
+		command->completion =
+			kzalloc(sizeof(struct completion), mem_flags);
+		if (!command->completion) {
+			xhci_free_container_ctx(xhci, command->in_ctx);
+			kfree(command);
+			return NULL;
+		}
+		init_completion(command->completion);
+	}
+
+	command->status = 0;
+	INIT_LIST_HEAD(&command->cmd_list);
+	return command;
+}
+
+void xhci_urb_free_priv(struct xhci_hcd *xhci, struct urb_priv *urb_priv)
+{
+	if (urb_priv) {
+		kfree(urb_priv->td[0]);
+		kfree(urb_priv);
+	}
+}
+
+void xhci_free_command(struct xhci_hcd *xhci,
+		struct xhci_command *command)
+{
+	xhci_free_container_ctx(xhci,
+			command->in_ctx);
+	kfree(command->completion);
+	kfree(command);
+}
+
+void xhci_mem_cleanup(struct xhci_hcd *xhci)
+{
+	struct pci_dev	*pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+	struct dev_info	*dev_info, *next;
+	struct xhci_cd  *cur_cd, *next_cd;
+	unsigned long	flags;
+	int size;
+	int i, j, num_ports;
+
+	/* Free the Event Ring Segment Table and the actual Event Ring */
+	size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);
+	if (xhci->erst.entries)
+		dma_free_coherent(&pdev->dev, size,
+				xhci->erst.entries, xhci->erst.erst_dma_addr);
+	xhci->erst.entries = NULL;
+	xhci_dbg(xhci, "Freed ERST\n");
+	if (xhci->event_ring)
+		xhci_ring_free(xhci, xhci->event_ring);
+	xhci->event_ring = NULL;
+	xhci_dbg(xhci, "Freed event ring\n");
+
+	if (xhci->lpm_command)
+		xhci_free_command(xhci, xhci->lpm_command);
+	xhci->cmd_ring_reserved_trbs = 0;
+	if (xhci->cmd_ring)
+		xhci_ring_free(xhci, xhci->cmd_ring);
+	xhci->cmd_ring = NULL;
+	xhci_dbg(xhci, "Freed command ring\n");
+	list_for_each_entry_safe(cur_cd, next_cd,
+			&xhci->cancel_cmd_list, cancel_cmd_list) {
+		list_del(&cur_cd->cancel_cmd_list);
+		kfree(cur_cd);
+	}
+
+	for (i = 1; i < MAX_HC_SLOTS; ++i)
+		xhci_free_virt_device(xhci, i);
+
+	if (xhci->segment_pool)
+		dma_pool_destroy(xhci->segment_pool);
+	xhci->segment_pool = NULL;
+	xhci_dbg(xhci, "Freed segment pool\n");
+
+	if (xhci->device_pool)
+		dma_pool_destroy(xhci->device_pool);
+	xhci->device_pool = NULL;
+	xhci_dbg(xhci, "Freed device context pool\n");
+
+	if (xhci->small_streams_pool)
+		dma_pool_destroy(xhci->small_streams_pool);
+	xhci->small_streams_pool = NULL;
+	xhci_dbg(xhci, "Freed small stream array pool\n");
+
+	if (xhci->medium_streams_pool)
+		dma_pool_destroy(xhci->medium_streams_pool);
+	xhci->medium_streams_pool = NULL;
+	xhci_dbg(xhci, "Freed medium stream array pool\n");
+
+	if (xhci->dcbaa)
+		dma_free_coherent(&pdev->dev, sizeof(*xhci->dcbaa),
+				xhci->dcbaa, xhci->dcbaa->dma);
+	xhci->dcbaa = NULL;
+
+	scratchpad_free(xhci);
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	list_for_each_entry_safe(dev_info, next, &xhci->lpm_failed_devs, list) {
+		list_del(&dev_info->list);
+		kfree(dev_info);
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	num_ports = HCS_MAX_PORTS(xhci->hcs_params1);
+	for (i = 0; i < num_ports; i++) {
+		struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table;
+		for (j = 0; j < XHCI_MAX_INTERVAL; j++) {
+			struct list_head *ep = &bwt->interval_bw[j].endpoints;
+			while (!list_empty(ep))
+				list_del_init(ep->next);
+		}
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct xhci_tt_bw_info *tt, *n;
+		list_for_each_entry_safe(tt, n, &xhci->rh_bw[i].tts, tt_list) {
+			list_del(&tt->tt_list);
+			kfree(tt);
+		}
+	}
+
+	xhci->num_usb2_ports = 0;
+	xhci->num_usb3_ports = 0;
+	xhci->num_active_eps = 0;
+	kfree(xhci->usb2_ports);
+	kfree(xhci->usb3_ports);
+	kfree(xhci->port_array);
+	kfree(xhci->rh_bw);
+
+	xhci->page_size = 0;
+	xhci->page_shift = 0;
+	xhci->bus_state[0].bus_suspended = 0;
+	xhci->bus_state[1].bus_suspended = 0;
+}
+
+static int xhci_test_trb_in_td(struct xhci_hcd *xhci,
+		struct xhci_segment *input_seg,
+		union xhci_trb *start_trb,
+		union xhci_trb *end_trb,
+		dma_addr_t input_dma,
+		struct xhci_segment *result_seg,
+		char *test_name, int test_number)
+{
+	unsigned long long start_dma;
+	unsigned long long end_dma;
+	struct xhci_segment *seg;
+
+	start_dma = xhci_trb_virt_to_dma(input_seg, start_trb);
+	end_dma = xhci_trb_virt_to_dma(input_seg, end_trb);
+
+	seg = trb_in_td(input_seg, start_trb, end_trb, input_dma);
+	if (seg != result_seg) {
+		xhci_warn(xhci, "WARN: %s TRB math test %d failed!\n",
+				test_name, test_number);
+		xhci_warn(xhci, "Tested TRB math w/ seg %p and "
+				"input DMA 0x%llx\n",
+				input_seg,
+				(unsigned long long) input_dma);
+		xhci_warn(xhci, "starting TRB %p (0x%llx DMA), "
+				"ending TRB %p (0x%llx DMA)\n",
+				start_trb, start_dma,
+				end_trb, end_dma);
+		xhci_warn(xhci, "Expected seg %p, got seg %p\n",
+				result_seg, seg);
+		return -1;
+	}
+	return 0;
+}
+
+/* TRB math checks for xhci_trb_in_td(), using the command and event rings. */
+static int xhci_check_trb_in_td_math(struct xhci_hcd *xhci, gfp_t mem_flags)
+{
+	struct {
+		dma_addr_t		input_dma;
+		struct xhci_segment	*result_seg;
+	} simple_test_vector [] = {
+		/* A zeroed DMA field should fail */
+		{ 0, NULL },
+		/* One TRB before the ring start should fail */
+		{ xhci->event_ring->first_seg->dma - 16, NULL },
+		/* One byte before the ring start should fail */
+		{ xhci->event_ring->first_seg->dma - 1, NULL },
+		/* Starting TRB should succeed */
+		{ xhci->event_ring->first_seg->dma, xhci->event_ring->first_seg },
+		/* Ending TRB should succeed */
+		{ xhci->event_ring->first_seg->dma + (TRBS_PER_SEGMENT - 1)*16,
+			xhci->event_ring->first_seg },
+		/* One byte after the ring end should fail */
+		{ xhci->event_ring->first_seg->dma + (TRBS_PER_SEGMENT - 1)*16 + 1, NULL },
+		/* One TRB after the ring end should fail */
+		{ xhci->event_ring->first_seg->dma + (TRBS_PER_SEGMENT)*16, NULL },
+		/* An address of all ones should fail */
+		{ (dma_addr_t) (~0), NULL },
+	};
+	struct {
+		struct xhci_segment	*input_seg;
+		union xhci_trb		*start_trb;
+		union xhci_trb		*end_trb;
+		dma_addr_t		input_dma;
+		struct xhci_segment	*result_seg;
+	} complex_test_vector [] = {
+		/* Test feeding a valid DMA address from a different ring */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = xhci->event_ring->first_seg->trbs,
+			.end_trb = &xhci->event_ring->first_seg->trbs[TRBS_PER_SEGMENT - 1],
+			.input_dma = xhci->cmd_ring->first_seg->dma,
+			.result_seg = NULL,
+		},
+		/* Test feeding a valid end TRB from a different ring */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = xhci->event_ring->first_seg->trbs,
+			.end_trb = &xhci->cmd_ring->first_seg->trbs[TRBS_PER_SEGMENT - 1],
+			.input_dma = xhci->cmd_ring->first_seg->dma,
+			.result_seg = NULL,
+		},
+		/* Test feeding a valid start and end TRB from a different ring */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = xhci->cmd_ring->first_seg->trbs,
+			.end_trb = &xhci->cmd_ring->first_seg->trbs[TRBS_PER_SEGMENT - 1],
+			.input_dma = xhci->cmd_ring->first_seg->dma,
+			.result_seg = NULL,
+		},
+		/* TRB in this ring, but after this TD */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = &xhci->event_ring->first_seg->trbs[0],
+			.end_trb = &xhci->event_ring->first_seg->trbs[3],
+			.input_dma = xhci->event_ring->first_seg->dma + 4*16,
+			.result_seg = NULL,
+		},
+		/* TRB in this ring, but before this TD */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = &xhci->event_ring->first_seg->trbs[3],
+			.end_trb = &xhci->event_ring->first_seg->trbs[6],
+			.input_dma = xhci->event_ring->first_seg->dma + 2*16,
+			.result_seg = NULL,
+		},
+		/* TRB in this ring, but after this wrapped TD */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = &xhci->event_ring->first_seg->trbs[TRBS_PER_SEGMENT - 3],
+			.end_trb = &xhci->event_ring->first_seg->trbs[1],
+			.input_dma = xhci->event_ring->first_seg->dma + 2*16,
+			.result_seg = NULL,
+		},
+		/* TRB in this ring, but before this wrapped TD */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = &xhci->event_ring->first_seg->trbs[TRBS_PER_SEGMENT - 3],
+			.end_trb = &xhci->event_ring->first_seg->trbs[1],
+			.input_dma = xhci->event_ring->first_seg->dma + (TRBS_PER_SEGMENT - 4)*16,
+			.result_seg = NULL,
+		},
+		/* TRB not in this ring, and we have a wrapped TD */
+		{	.input_seg = xhci->event_ring->first_seg,
+			.start_trb = &xhci->event_ring->first_seg->trbs[TRBS_PER_SEGMENT - 3],
+			.end_trb = &xhci->event_ring->first_seg->trbs[1],
+			.input_dma = xhci->cmd_ring->first_seg->dma + 2*16,
+			.result_seg = NULL,
+		},
+	};
+
+	unsigned int num_tests;
+	int i, ret;
+
+	num_tests = ARRAY_SIZE(simple_test_vector);
+	for (i = 0; i < num_tests; i++) {
+		ret = xhci_test_trb_in_td(xhci,
+				xhci->event_ring->first_seg,
+				xhci->event_ring->first_seg->trbs,
+				&xhci->event_ring->first_seg->trbs[TRBS_PER_SEGMENT - 1],
+				simple_test_vector[i].input_dma,
+				simple_test_vector[i].result_seg,
+				"Simple", i);
+		if (ret < 0)
+			return ret;
+	}
+
+	num_tests = ARRAY_SIZE(complex_test_vector);
+	for (i = 0; i < num_tests; i++) {
+		ret = xhci_test_trb_in_td(xhci,
+				complex_test_vector[i].input_seg,
+				complex_test_vector[i].start_trb,
+				complex_test_vector[i].end_trb,
+				complex_test_vector[i].input_dma,
+				complex_test_vector[i].result_seg,
+				"Complex", i);
+		if (ret < 0)
+			return ret;
+	}
+	xhci_dbg(xhci, "TRB math tests passed.\n");
+	return 0;
+}
+
+static void xhci_set_hc_event_deq(struct xhci_hcd *xhci)
+{
+	u64 temp;
+	dma_addr_t deq;
+
+	deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
+			xhci->event_ring->dequeue);
+	if (deq == 0 && !in_interrupt())
+		xhci_warn(xhci, "WARN something wrong with SW event ring "
+				"dequeue ptr.\n");
+	/* Update HC event ring dequeue pointer */
+	temp = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+	temp &= ERST_PTR_MASK;
+	/* Don't clear the EHB bit (which is RW1C) because
+	 * there might be more events to service.
+	 */
+	temp &= ~ERST_EHB;
+	xhci_dbg(xhci, "// Write event ring dequeue pointer, "
+			"preserving EHB bit\n");
+	xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK) | temp,
+			&xhci->ir_set->erst_dequeue);
+}
+
+static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
+		__le32 __iomem *addr, u8 major_revision)
+{
+	u32 temp, port_offset, port_count;
+	int i;
+
+	if (major_revision > 0x03) {
+		xhci_warn(xhci, "Ignoring unknown port speed, "
+				"Ext Cap %p, revision = 0x%x\n",
+				addr, major_revision);
+		/* Ignoring port protocol we can't understand. FIXME */
+		return;
+	}
+
+	/* Port offset and count in the third dword, see section 7.2 */
+	temp = xhci_readl(xhci, addr + 2);
+	port_offset = XHCI_EXT_PORT_OFF(temp);
+	port_count = XHCI_EXT_PORT_COUNT(temp);
+	xhci_dbg(xhci, "Ext Cap %p, port offset = %u, "
+			"count = %u, revision = 0x%x\n",
+			addr, port_offset, port_count, major_revision);
+	/* Port count includes the current port offset */
+	if (port_offset == 0 || (port_offset + port_count - 1) > num_ports)
+		/* WTF? "Valid values are '1' to MaxPorts" */
+		return;
+
+	/* Check the host's USB2 LPM capability */
+	if ((xhci->hci_version == 0x96) && (major_revision != 0x03) &&
+			(temp & XHCI_L1C)) {
+		xhci_dbg(xhci, "xHCI 0.96: support USB2 software lpm\n");
+		xhci->sw_lpm_support = 1;
+	}
+
+	if ((xhci->hci_version >= 0x100) && (major_revision != 0x03)) {
+		xhci_dbg(xhci, "xHCI 1.0: support USB2 software lpm\n");
+		xhci->sw_lpm_support = 1;
+		if (temp & XHCI_HLC) {
+			xhci_dbg(xhci, "xHCI 1.0: support USB2 hardware lpm\n");
+			xhci->hw_lpm_support = 1;
+		}
+	}
+
+	port_offset--;
+	for (i = port_offset; i < (port_offset + port_count); i++) {
+		/* Duplicate entry.  Ignore the port if the revisions differ. */
+		if (xhci->port_array[i] != 0) {
+			xhci_warn(xhci, "Duplicate port entry, Ext Cap %p,"
+					" port %u\n", addr, i);
+			xhci_warn(xhci, "Port was marked as USB %u, "
+					"duplicated as USB %u\n",
+					xhci->port_array[i], major_revision);
+			/* Only adjust the roothub port counts if we haven't
+			 * found a similar duplicate.
+			 */
+			if (xhci->port_array[i] != major_revision &&
+				xhci->port_array[i] != DUPLICATE_ENTRY) {
+				if (xhci->port_array[i] == 0x03)
+					xhci->num_usb3_ports--;
+				else
+					xhci->num_usb2_ports--;
+				xhci->port_array[i] = DUPLICATE_ENTRY;
+			}
+			/* FIXME: Should we disable the port? */
+			continue;
+		}
+		xhci->port_array[i] = major_revision;
+		if (major_revision == 0x03)
+			xhci->num_usb3_ports++;
+		else
+			xhci->num_usb2_ports++;
+	}
+	/* FIXME: Should we disable ports not in the Extended Capabilities? */
+}
+
+/*
+ * Scan the Extended Capabilities for the "Supported Protocol Capabilities" that
+ * specify what speeds each port is supposed to be.  We can't count on the port
+ * speed bits in the PORTSC register being correct until a device is connected,
+ * but we need to set up the two fake roothubs with the correct number of USB
+ * 3.0 and USB 2.0 ports at host controller initialization time.
+ */
+static int xhci_setup_port_arrays(struct xhci_hcd *xhci, gfp_t flags)
+{
+	__le32 __iomem *addr;
+	u32 offset;
+	unsigned int num_ports;
+	int i, j, port_index;
+
+	addr = &xhci->cap_regs->hcc_params;
+	offset = XHCI_HCC_EXT_CAPS(xhci_readl(xhci, addr));
+	if (offset == 0) {
+		xhci_err(xhci, "No Extended Capability registers, "
+				"unable to set up roothub.\n");
+		return -ENODEV;
+	}
+
+	num_ports = HCS_MAX_PORTS(xhci->hcs_params1);
+	xhci->port_array = kzalloc(sizeof(*xhci->port_array)*num_ports, flags);
+	if (!xhci->port_array)
+		return -ENOMEM;
+
+	xhci->rh_bw = kzalloc(sizeof(*xhci->rh_bw)*num_ports, flags);
+	if (!xhci->rh_bw)
+		return -ENOMEM;
+	for (i = 0; i < num_ports; i++) {
+		struct xhci_interval_bw_table *bw_table;
+
+		INIT_LIST_HEAD(&xhci->rh_bw[i].tts);
+		bw_table = &xhci->rh_bw[i].bw_table;
+		for (j = 0; j < XHCI_MAX_INTERVAL; j++)
+			INIT_LIST_HEAD(&bw_table->interval_bw[j].endpoints);
+	}
+
+	/*
+	 * For whatever reason, the first capability offset is from the
+	 * capability register base, not from the HCCPARAMS register.
+	 * See section 5.3.6 for offset calculation.
+	 */
+	addr = &xhci->cap_regs->hc_capbase + offset;
+	while (1) {
+		u32 cap_id;
+
+		cap_id = xhci_readl(xhci, addr);
+		if (XHCI_EXT_CAPS_ID(cap_id) == XHCI_EXT_CAPS_PROTOCOL)
+			xhci_add_in_port(xhci, num_ports, addr,
+					(u8) XHCI_EXT_PORT_MAJOR(cap_id));
+		offset = XHCI_EXT_CAPS_NEXT(cap_id);
+		if (!offset || (xhci->num_usb2_ports + xhci->num_usb3_ports)
+				== num_ports)
+			break;
+		/*
+		 * Once you're into the Extended Capabilities, the offset is
+		 * always relative to the register holding the offset.
+		 */
+		addr += offset;
+	}
+
+	if (xhci->num_usb2_ports == 0 && xhci->num_usb3_ports == 0) {
+		xhci_warn(xhci, "No ports on the roothubs?\n");
+		return -ENODEV;
+	}
+	xhci_dbg(xhci, "Found %u USB 2.0 ports and %u USB 3.0 ports.\n",
+			xhci->num_usb2_ports, xhci->num_usb3_ports);
+
+	/* Place limits on the number of roothub ports so that the hub
+	 * descriptors aren't longer than the USB core will allocate.
+	 */
+	if (xhci->num_usb3_ports > 15) {
+		xhci_dbg(xhci, "Limiting USB 3.0 roothub ports to 15.\n");
+		xhci->num_usb3_ports = 15;
+	}
+	if (xhci->num_usb2_ports > USB_MAXCHILDREN) {
+		xhci_dbg(xhci, "Limiting USB 2.0 roothub ports to %u.\n",
+				USB_MAXCHILDREN);
+		xhci->num_usb2_ports = USB_MAXCHILDREN;
+	}
+
+	/*
+	 * Note we could have all USB 3.0 ports, or all USB 2.0 ports.
+	 * Not sure how the USB core will handle a hub with no ports...
+	 */
+	if (xhci->num_usb2_ports) {
+		xhci->usb2_ports = kmalloc(sizeof(*xhci->usb2_ports)*
+				xhci->num_usb2_ports, flags);
+		if (!xhci->usb2_ports)
+			return -ENOMEM;
+
+		port_index = 0;
+		for (i = 0; i < num_ports; i++) {
+			if (xhci->port_array[i] == 0x03 ||
+					xhci->port_array[i] == 0 ||
+					xhci->port_array[i] == DUPLICATE_ENTRY)
+				continue;
+
+			xhci->usb2_ports[port_index] =
+				&xhci->op_regs->port_status_base +
+				NUM_PORT_REGS*i;
+			xhci_dbg(xhci, "USB 2.0 port at index %u, "
+					"addr = %p\n", i,
+					xhci->usb2_ports[port_index]);
+			port_index++;
+			if (port_index == xhci->num_usb2_ports)
+				break;
+		}
+	}
+	if (xhci->num_usb3_ports) {
+		xhci->usb3_ports = kmalloc(sizeof(*xhci->usb3_ports)*
+				xhci->num_usb3_ports, flags);
+		if (!xhci->usb3_ports)
+			return -ENOMEM;
+
+		port_index = 0;
+		for (i = 0; i < num_ports; i++)
+			if (xhci->port_array[i] == 0x03) {
+				xhci->usb3_ports[port_index] =
+					&xhci->op_regs->port_status_base +
+					NUM_PORT_REGS*i;
+				xhci_dbg(xhci, "USB 3.0 port at index %u, "
+						"addr = %p\n", i,
+						xhci->usb3_ports[port_index]);
+				port_index++;
+				if (port_index == xhci->num_usb3_ports)
+					break;
+			}
+	}
+	return 0;
+}
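+
+/*
+ * Worked example (assumed layout, for illustration only): a host whose
+ * Supported Protocol capabilities declare USB 2.0 for ports 1-4 and USB 3.0
+ * for ports 5-6 leaves port_array = {0x02, 0x02, 0x02, 0x02, 0x03, 0x03},
+ * num_usb2_ports = 4 and num_usb3_ports = 2 after the scan above.
+ */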
+
+int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+{
+	dma_addr_t	dma;
+	struct device	*dev = xhci_to_hcd(xhci)->self.controller;
+	unsigned int	val, val2;
+	u64		val_64;
+	struct xhci_segment	*seg;
+	u32 page_size, temp;
+	int i;
+
+	page_size = xhci_readl(xhci, &xhci->op_regs->page_size);
+	xhci_dbg(xhci, "Supported page size register = 0x%x\n", page_size);
+	for (i = 0; i < 16; i++) {
+		if ((0x1 & page_size) != 0)
+			break;
+		page_size = page_size >> 1;
+	}
+	if (i < 16)
+		xhci_dbg(xhci, "Supported page size of %iK\n", (1 << (i+12)) / 1024);
+	else
+		xhci_warn(xhci, "WARN: no supported page size\n");
+	/* Use 4K pages, since that's common and the minimum the HC supports */
+	xhci->page_shift = 12;
+	xhci->page_size = 1 << xhci->page_shift;
+	xhci_dbg(xhci, "HCD page size set to %iK\n", xhci->page_size / 1024);
+
+	/*
+	 * Program the Number of Device Slots Enabled field in the CONFIG
+	 * register with the max value of slots the HC can handle.
+	 */
+	val = HCS_MAX_SLOTS(xhci_readl(xhci, &xhci->cap_regs->hcs_params1));
+	xhci_dbg(xhci, "// xHC can handle at most %d device slots.\n",
+			(unsigned int) val);
+	val2 = xhci_readl(xhci, &xhci->op_regs->config_reg);
+	val |= (val2 & ~HCS_SLOTS_MASK);
+	xhci_dbg(xhci, "// Setting Max device slots reg = 0x%x.\n",
+			(unsigned int) val);
+	xhci_writel(xhci, val, &xhci->op_regs->config_reg);
+
+	/*
+	 * Section 5.4.8 - doorbell array must be
+	 * "physically contiguous and 64-byte (cache line) aligned".
+	 */
+	xhci->dcbaa = dma_alloc_coherent(dev, sizeof(*xhci->dcbaa), &dma,
+			GFP_KERNEL);
+	if (!xhci->dcbaa)
+		goto fail;
+	memset(xhci->dcbaa, 0, sizeof(*xhci->dcbaa));
+	xhci->dcbaa->dma = dma;
+	xhci_dbg(xhci, "// Device context base array address = 0x%llx (DMA), %p (virt)\n",
+			(unsigned long long)xhci->dcbaa->dma, xhci->dcbaa);
+	xhci_write_64(xhci, dma, &xhci->op_regs->dcbaa_ptr);
+
+	/*
+	 * Initialize the ring segment pool.  The ring must be a contiguous
+	 * structure comprised of TRBs.  The TRBs must be 16 byte aligned,
+	 * however, the command ring segment needs 64-byte aligned segments,
+	 * so we pick the greater alignment need.
+	 */
+	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+			TRB_SEGMENT_SIZE, 64, xhci->page_size);
+
+	/* See Table 46 and Note on Figure 55 */
+	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
+			2112, 64, xhci->page_size);
+	if (!xhci->segment_pool || !xhci->device_pool)
+		goto fail;
+
+	/* Linear stream context arrays don't have any boundary restrictions,
+	 * and only need to be 16-byte aligned.
+	 */
+	xhci->small_streams_pool =
+		dma_pool_create("xHCI 256 byte stream ctx arrays",
+			dev, SMALL_STREAM_ARRAY_SIZE, 16, 0);
+	xhci->medium_streams_pool =
+		dma_pool_create("xHCI 1KB stream ctx arrays",
+			dev, MEDIUM_STREAM_ARRAY_SIZE, 16, 0);
+	/* Any stream context array bigger than MEDIUM_STREAM_ARRAY_SIZE
+	 * will be allocated with dma_alloc_coherent()
+	 */
+
+	if (!xhci->small_streams_pool || !xhci->medium_streams_pool)
+		goto fail;
+
+	/* Set up the command ring to have one segment for now. */
+	xhci->cmd_ring = xhci_ring_alloc(xhci, 1, 1, TYPE_COMMAND, flags);
+	if (!xhci->cmd_ring)
+		goto fail;
+	INIT_LIST_HEAD(&xhci->cancel_cmd_list);
+	xhci_dbg(xhci, "Allocated command ring at %p\n", xhci->cmd_ring);
+	xhci_dbg(xhci, "First segment DMA is 0x%llx\n",
+			(unsigned long long)xhci->cmd_ring->first_seg->dma);
+
+	/* Set the address in the Command Ring Control register */
+	val_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+	val_64 = (val_64 & (u64) CMD_RING_RSVD_BITS) |
+		(xhci->cmd_ring->first_seg->dma & (u64) ~CMD_RING_RSVD_BITS) |
+		xhci->cmd_ring->cycle_state;
+	xhci_dbg(xhci, "// Setting command ring address to 0x%x\n", val);
+	xhci_write_64(xhci, val_64, &xhci->op_regs->cmd_ring);
+	xhci_dbg_cmd_ptrs(xhci);
+
+	xhci->lpm_command = xhci_alloc_command(xhci, true, true, flags);
+	if (!xhci->lpm_command)
+		goto fail;
+
+	/* Reserve one command ring TRB for disabling LPM.
+	 * Since the USB core grabs the shared usb_bus bandwidth mutex before
+	 * disabling LPM, we only need to reserve one TRB for all devices.
+	 */
+	xhci->cmd_ring_reserved_trbs++;
+
+	val = xhci_readl(xhci, &xhci->cap_regs->db_off);
+	val &= DBOFF_MASK;
+	xhci_dbg(xhci, "// Doorbell array is located at offset 0x%x"
+			" from cap regs base addr\n", val);
+	xhci->dba = (void __iomem *) xhci->cap_regs + val;
+	xhci_dbg_regs(xhci);
+	xhci_print_run_regs(xhci);
+	/* Set ir_set to interrupt register set 0 */
+	xhci->ir_set = &xhci->run_regs->ir_set[0];
+
+	/*
+	 * Event ring setup: Allocate a normal ring, but also setup
+	 * the event ring segment table (ERST).  Section 4.9.3.
+	 */
+	xhci_dbg(xhci, "// Allocating event ring\n");
+	xhci->event_ring = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT,
+						flags);
+	if (!xhci->event_ring)
+		goto fail;
+	if (xhci_check_trb_in_td_math(xhci, flags) < 0)
+		goto fail;
+
+	xhci->erst.entries = dma_alloc_coherent(dev,
+			sizeof(struct xhci_erst_entry) * ERST_NUM_SEGS, &dma,
+			GFP_KERNEL);
+	if (!xhci->erst.entries)
+		goto fail;
+	xhci_dbg(xhci, "// Allocated event ring segment table at 0x%llx\n",
+			(unsigned long long)dma);
+
+	memset(xhci->erst.entries, 0, sizeof(struct xhci_erst_entry)*ERST_NUM_SEGS);
+	xhci->erst.num_entries = ERST_NUM_SEGS;
+	xhci->erst.erst_dma_addr = dma;
+	xhci_dbg(xhci, "Set ERST to 0; private num segs = %i, virt addr = %p, dma addr = 0x%llx\n",
+			xhci->erst.num_entries,
+			xhci->erst.entries,
+			(unsigned long long)xhci->erst.erst_dma_addr);
+
+	/* set ring base address and size for each segment table entry */
+	for (val = 0, seg = xhci->event_ring->first_seg; val < ERST_NUM_SEGS; val++) {
+		struct xhci_erst_entry *entry = &xhci->erst.entries[val];
+		entry->seg_addr = cpu_to_le64(seg->dma);
+		entry->seg_size = cpu_to_le32(TRBS_PER_SEGMENT);
+		entry->rsvd = 0;
+		seg = seg->next;
+	}
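+
+	/*
+	 * At this point each ERST entry i describes event ring segment i;
+	 * with ERST_NUM_SEGS == 1, for example, a single entry points at
+	 * first_seg->dma and covers TRBS_PER_SEGMENT TRBs.
+	 */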
+
+	/* set ERST count with the number of entries in the segment table */
+	val = xhci_readl(xhci, &xhci->ir_set->erst_size);
+	val &= ERST_SIZE_MASK;
+	val |= ERST_NUM_SEGS;
+	xhci_dbg(xhci, "// Write ERST size = %i to ir_set 0 (some bits preserved)\n",
+			val);
+	xhci_writel(xhci, val, &xhci->ir_set->erst_size);
+
+	xhci_dbg(xhci, "// Set ERST entries to point to event ring.\n");
+	/* set the segment table base address */
+	xhci_dbg(xhci, "// Set ERST base address for ir_set 0 = 0x%llx\n",
+			(unsigned long long)xhci->erst.erst_dma_addr);
+	val_64 = xhci_read_64(xhci, &xhci->ir_set->erst_base);
+	val_64 &= ERST_PTR_MASK;
+	val_64 |= (xhci->erst.erst_dma_addr & (u64) ~ERST_PTR_MASK);
+	xhci_write_64(xhci, val_64, &xhci->ir_set->erst_base);
+
+	/* Set the event ring dequeue address */
+	xhci_set_hc_event_deq(xhci);
+	xhci_dbg(xhci, "Wrote ERST address to ir_set 0.\n");
+	xhci_print_ir_set(xhci, 0);
+
+	/*
+	 * XXX: Might need to set the Interrupter Moderation Register to
+	 * something other than the default (~1ms minimum between interrupts).
+	 * See section 5.5.1.2.
+	 */
+	init_completion(&xhci->addr_dev);
+	for (i = 0; i < MAX_HC_SLOTS; ++i)
+		xhci->devs[i] = NULL;
+	for (i = 0; i < USB_MAXCHILDREN; ++i) {
+		xhci->bus_state[0].resume_done[i] = 0;
+		xhci->bus_state[1].resume_done[i] = 0;
+	}
+
+	if (scratchpad_alloc(xhci, flags))
+		goto fail;
+	if (xhci_setup_port_arrays(xhci, flags))
+		goto fail;
+
+	INIT_LIST_HEAD(&xhci->lpm_failed_devs);
+
+	/* Enable USB 3.0 device notifications for function remote wake, which
+	 * is necessary for allowing USB 3.0 devices to do remote wakeup from
+	 * U3 (device suspend).
+	 */
+	temp = xhci_readl(xhci, &xhci->op_regs->dev_notification);
+	temp &= ~DEV_NOTE_MASK;
+	temp |= DEV_NOTE_FWAKE;
+	xhci_writel(xhci, temp, &xhci->op_regs->dev_notification);
+
+	return 0;
+
+fail:
+	xhci_warn(xhci, "Couldn't initialize memory\n");
+	xhci_halt(xhci);
+	xhci_reset(xhci);
+	xhci_mem_cleanup(xhci);
+	return -ENOMEM;
+}
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
new file mode 100644
index 0000000..1a30c38
--- /dev/null
+++ b/drivers/usb/host/xhci-pci.c
@@ -0,0 +1,356 @@
+/*
+ * xHCI host controller driver PCI Bus Glue.
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "xhci.h"
+
+/* Device for a quirk */
+#define PCI_VENDOR_ID_FRESCO_LOGIC	0x1b73
+#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK	0x1000
+#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400	0x1400
+
+#define PCI_VENDOR_ID_ETRON		0x1b6f
+#define PCI_DEVICE_ID_ASROCK_P67	0x7023
+
+static const char hcd_name[] = "xhci_hcd";
+
+/* called after powerup, by probe or system-pm "wakeup" */
+static int xhci_pci_reinit(struct xhci_hcd *xhci, struct pci_dev *pdev)
+{
+	/*
+	 * TODO: Implement finding debug ports later.
+	 * TODO: see if there are any quirks that need to be added to handle
+	 * new extended capabilities.
+	 */
+
+	/* PCI Memory-Write-Invalidate cycle support is optional (uncommon) */
+	if (!pci_set_mwi(pdev))
+		xhci_dbg(xhci, "MWI active\n");
+
+	xhci_dbg(xhci, "Finished xhci_pci_reinit\n");
+	return 0;
+}
+
+static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+{
+	struct pci_dev		*pdev = to_pci_dev(dev);
+
+	/* Look for vendor-specific quirks */
+	if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
+			(pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
+			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
+		if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+				pdev->revision == 0x0) {
+			xhci->quirks |= XHCI_RESET_EP_QUIRK;
+			xhci_dbg(xhci, "QUIRK: Fresco Logic xHC needs configure"
+					" endpoint cmd after reset endpoint\n");
+		}
+		/* Fresco Logic confirms: all revisions of this chip do not
+		 * support MSI, even though some of them claim to in their PCI
+		 * capabilities.
+		 */
+		xhci->quirks |= XHCI_BROKEN_MSI;
+		xhci_dbg(xhci, "QUIRK: Fresco Logic revision %u "
+				"has broken MSI implementation\n",
+				pdev->revision);
+		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+	}
+
+	if (pdev->vendor == PCI_VENDOR_ID_NEC)
+		xhci->quirks |= XHCI_NEC_HOST;
+
+	if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version == 0x96)
+		xhci->quirks |= XHCI_AMD_0x96_HOST;
+
+	/* AMD PLL quirk */
+	if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
+		xhci->quirks |= XHCI_AMD_PLL_FIX;
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+		xhci->quirks |= XHCI_LPM_SUPPORT;
+		xhci->quirks |= XHCI_INTEL_HOST;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+			pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) {
+		xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
+		xhci->quirks |= XHCI_EP_LIMIT_QUIRK;
+		xhci->limit_active_eps = 64;
+		xhci->quirks |= XHCI_SW_BW_CHECKING;
+		/*
+		 * PPT desktop boards DH77EB and DH77DF will power back on after
+		 * a few seconds of being shutdown.  The fix for this is to
+		 * switch the ports from xHCI to EHCI on shutdown.  We can't use
+		 * DMI information to find those particular boards (since each
+		 * vendor will change the board name), so we have to key off all
+		 * PPT chipsets.
+		 */
+		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+		xhci->quirks |= XHCI_AVOID_BEI;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+			pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
+		xhci->quirks |= XHCI_RESET_ON_RESUME;
+		xhci_dbg(xhci, "QUIRK: Resetting on resume\n");
+		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_VIA)
+		xhci->quirks |= XHCI_RESET_ON_RESUME;
+}
+
+/* called during probe() after chip reset completes */
+static int xhci_pci_setup(struct usb_hcd *hcd)
+{
+	struct xhci_hcd		*xhci;
+	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
+	int			retval;
+
+	retval = xhci_gen_setup(hcd, xhci_pci_quirks);
+	if (retval)
+		return retval;
+
+	xhci = hcd_to_xhci(hcd);
+	if (!usb_hcd_is_primary_hcd(hcd))
+		return 0;
+
+	pci_read_config_byte(pdev, XHCI_SBRN_OFFSET, &xhci->sbrn);
+	xhci_dbg(xhci, "Got SBRN %u\n", (unsigned int) xhci->sbrn);
+
+	/* Find any debug ports */
+	retval = xhci_pci_reinit(xhci, pdev);
+	if (!retval)
+		return retval;
+
+	kfree(xhci);
+	return retval;
+}
+
+/*
+ * We need to register our own PCI probe function (instead of the USB core's
+ * function) in order to create a second roothub under xHCI.
+ */
+static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+	int retval;
+	struct xhci_hcd *xhci;
+	struct hc_driver *driver;
+	struct usb_hcd *hcd;
+
+	driver = (struct hc_driver *)id->driver_data;
+	/* Register the USB 2.0 roothub.
+	 * FIXME: USB core must know to register the USB 2.0 roothub first.
+	 * This is sort of silly, because we could just set the HCD driver flags
+	 * to say USB 2.0, but I'm not sure what the implications would be in
+	 * the other parts of the HCD code.
+	 */
+	retval = usb_hcd_pci_probe(dev, id);
+
+	if (retval)
+		return retval;
+
+	/* USB 2.0 roothub is stored in the PCI device now. */
+	hcd = dev_get_drvdata(&dev->dev);
+	xhci = hcd_to_xhci(hcd);
+	xhci->shared_hcd = usb_create_shared_hcd(driver, &dev->dev,
+				pci_name(dev), hcd);
+	if (!xhci->shared_hcd) {
+		retval = -ENOMEM;
+		goto dealloc_usb2_hcd;
+	}
+
+	/* Set the xHCI pointer before xhci_pci_setup() (aka hcd_driver.reset)
+	 * is called by usb_add_hcd().
+	 */
+	*((struct xhci_hcd **) xhci->shared_hcd->hcd_priv) = xhci;
+
+	retval = usb_add_hcd(xhci->shared_hcd, dev->irq,
+			IRQF_SHARED);
+	if (retval)
+		goto put_usb3_hcd;
+	/* Roothub already marked as USB 3.0 speed */
+
+	/* We know the LPM timeout algorithms for this host, let the USB core
+	 * enable and disable LPM for devices under the USB 3.0 roothub.
+	 */
+	if (xhci->quirks & XHCI_LPM_SUPPORT)
+		hcd_to_bus(xhci->shared_hcd)->root_hub->lpm_capable = 1;
+
+	return 0;
+
+put_usb3_hcd:
+	usb_put_hcd(xhci->shared_hcd);
+dealloc_usb2_hcd:
+	usb_hcd_pci_remove(dev);
+	return retval;
+}
+
+static void xhci_pci_remove(struct pci_dev *dev)
+{
+	struct xhci_hcd *xhci;
+
+	xhci = hcd_to_xhci(pci_get_drvdata(dev));
+	if (xhci->shared_hcd) {
+		usb_remove_hcd(xhci->shared_hcd);
+		usb_put_hcd(xhci->shared_hcd);
+	}
+	usb_hcd_pci_remove(dev);
+	kfree(xhci);
+}
+
+#ifdef CONFIG_PM
+static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+
+	return xhci_suspend(xhci);
+}
+
+static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
+{
+	struct xhci_hcd		*xhci = hcd_to_xhci(hcd);
+	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
+	int			retval = 0;
+
+	/* The BIOS on systems with the Intel Panther Point chipset may or may
+	 * not support xHCI natively.  That means that during system resume, it
+	 * may switch the ports back to EHCI so that users can use their
+	 * keyboard to select a kernel from GRUB after resume from hibernate.
+	 *
+	 * The BIOS is supposed to remember whether the OS had xHCI ports
+	 * enabled before resume, and switch the ports back to xHCI when the
+	 * BIOS/OS semaphore is written, but we all know we can't trust BIOS
+	 * writers.
+	 *
+	 * Unconditionally switch the ports back to xHCI after a system resume.
+	 * We can't tell whether the EHCI or xHCI controller will be resumed
+	 * first, so we have to do the port switchover in both drivers.  Writing
+	 * a '1' to the port switchover registers should have no effect if the
+	 * port was already switched over.
+	 */
+	if (usb_is_intel_switchable_xhci(pdev))
+		usb_enable_xhci_ports(pdev);
+
+	retval = xhci_resume(xhci, hibernated);
+	return retval;
+}
+#endif /* CONFIG_PM */
+
+static const struct hc_driver xhci_pci_hc_driver = {
+	.description =		hcd_name,
+	.product_desc =		"xHCI Host Controller",
+	.hcd_priv_size =	sizeof(struct xhci_hcd *),
+
+	/*
+	 * generic hardware linkage
+	 */
+	.irq =			xhci_irq,
+	.flags =		HCD_MEMORY | HCD_USB3 | HCD_SHARED,
+
+	/*
+	 * basic lifecycle operations
+	 */
+	.reset =		xhci_pci_setup,
+	.start =		xhci_run,
+#ifdef CONFIG_PM
+	.pci_suspend =          xhci_pci_suspend,
+	.pci_resume =           xhci_pci_resume,
+#endif
+	.stop =			xhci_stop,
+	.shutdown =		xhci_shutdown,
+
+	/*
+	 * managing i/o requests and associated device resources
+	 */
+	.urb_enqueue =		xhci_urb_enqueue,
+	.urb_dequeue =		xhci_urb_dequeue,
+	.alloc_dev =		xhci_alloc_dev,
+	.free_dev =		xhci_free_dev,
+	.alloc_streams =	xhci_alloc_streams,
+	.free_streams =		xhci_free_streams,
+	.add_endpoint =		xhci_add_endpoint,
+	.drop_endpoint =	xhci_drop_endpoint,
+	.endpoint_reset =	xhci_endpoint_reset,
+	.check_bandwidth =	xhci_check_bandwidth,
+	.reset_bandwidth =	xhci_reset_bandwidth,
+	.address_device =	xhci_address_device,
+	.update_hub_device =	xhci_update_hub_device,
+	.reset_device =		xhci_discover_or_reset_device,
+
+	/*
+	 * scheduling support
+	 */
+	.get_frame_number =	xhci_get_frame,
+
+	/* Root hub support */
+	.hub_control =		xhci_hub_control,
+	.hub_status_data =	xhci_hub_status_data,
+	.bus_suspend =		xhci_bus_suspend,
+	.bus_resume =		xhci_bus_resume,
+	/*
+	 * call back when device connected and addressed
+	 */
+	.update_device =        xhci_update_device,
+	.set_usb2_hw_lpm =	xhci_set_usb2_hardware_lpm,
+	.enable_usb3_lpm_timeout =	xhci_enable_usb3_lpm_timeout,
+	.disable_usb3_lpm_timeout =	xhci_disable_usb3_lpm_timeout,
+	.find_raw_port_number =	xhci_find_raw_port_number,
+};
+
+/*-------------------------------------------------------------------------*/
+
+/* PCI driver selection metadata; PCI hotplugging uses this */
+static const struct pci_device_id pci_ids[] = { {
+	/* handle any USB 3.0 xHCI controller */
+	PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_XHCI, ~0),
+	.driver_data =	(unsigned long) &xhci_pci_hc_driver,
+	},
+	{ /* end: all zeroes */ }
+};
+MODULE_DEVICE_TABLE(pci, pci_ids);
+
+/* pci driver glue; this is a "new style" PCI driver module */
+static struct pci_driver xhci_pci_driver = {
+	.name =		(char *) hcd_name,
+	.id_table =	pci_ids,
+
+	.probe =	xhci_pci_probe,
+	.remove =	xhci_pci_remove,
+	/* suspend and resume implemented later */
+
+	.shutdown = 	usb_hcd_pci_shutdown,
+#ifdef CONFIG_PM_SLEEP
+	.driver = {
+		.pm = &usb_hcd_pci_pm_ops
+	},
+#endif
+};
+
+int __init xhci_register_pci(void)
+{
+	return pci_register_driver(&xhci_pci_driver);
+}
+
+void xhci_unregister_pci(void)
+{
+	pci_unregister_driver(&xhci_pci_driver);
+}
diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
new file mode 100644
index 0000000..df90fe5
--- /dev/null
+++ b/drivers/usb/host/xhci-plat.c
@@ -0,0 +1,205 @@
+/*
+ * xhci-plat.c - xHCI host controller driver platform Bus Glue.
+ *
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com
+ * Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ *
+ * A lot of code borrowed from the Linux xHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ */
+
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "xhci.h"
+
+static void xhci_plat_quirks(struct device *dev, struct xhci_hcd *xhci)
+{
+	/*
+	 * As of now platform drivers don't provide MSI support, so we ensure
+	 * here that the generic code does not try to make a pci_dev from our
+	 * dev struct in order to set up MSI.
+	 */
+	xhci->quirks |= XHCI_BROKEN_MSI;
+}
+
+/* called during probe() after chip reset completes */
+static int xhci_plat_setup(struct usb_hcd *hcd)
+{
+	return xhci_gen_setup(hcd, xhci_plat_quirks);
+}
+
+static const struct hc_driver xhci_plat_xhci_driver = {
+	.description =		"xhci-hcd",
+	.product_desc =		"xHCI Host Controller",
+	.hcd_priv_size =	sizeof(struct xhci_hcd *),
+
+	/*
+	 * generic hardware linkage
+	 */
+	.irq =			xhci_irq,
+	.flags =		HCD_MEMORY | HCD_USB3 | HCD_SHARED,
+
+	/*
+	 * basic lifecycle operations
+	 */
+	.reset =		xhci_plat_setup,
+	.start =		xhci_run,
+	.stop =			xhci_stop,
+	.shutdown =		xhci_shutdown,
+
+	/*
+	 * managing i/o requests and associated device resources
+	 */
+	.urb_enqueue =		xhci_urb_enqueue,
+	.urb_dequeue =		xhci_urb_dequeue,
+	.alloc_dev =		xhci_alloc_dev,
+	.free_dev =		xhci_free_dev,
+	.alloc_streams =	xhci_alloc_streams,
+	.free_streams =		xhci_free_streams,
+	.add_endpoint =		xhci_add_endpoint,
+	.drop_endpoint =	xhci_drop_endpoint,
+	.endpoint_reset =	xhci_endpoint_reset,
+	.check_bandwidth =	xhci_check_bandwidth,
+	.reset_bandwidth =	xhci_reset_bandwidth,
+	.address_device =	xhci_address_device,
+	.update_hub_device =	xhci_update_hub_device,
+	.reset_device =		xhci_discover_or_reset_device,
+
+	/*
+	 * scheduling support
+	 */
+	.get_frame_number =	xhci_get_frame,
+
+	/* Root hub support */
+	.hub_control =		xhci_hub_control,
+	.hub_status_data =	xhci_hub_status_data,
+	.bus_suspend =		xhci_bus_suspend,
+	.bus_resume =		xhci_bus_resume,
+};
+
+static int xhci_plat_probe(struct platform_device *pdev)
+{
+	const struct hc_driver	*driver;
+	struct xhci_hcd		*xhci;
+	struct resource         *res;
+	struct usb_hcd		*hcd;
+	int			ret;
+	int			irq;
+
+	if (usb_disabled())
+		return -ENODEV;
+
+	driver = &xhci_plat_xhci_driver;
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0)
+		return -ENODEV;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENODEV;
+
+	hcd = usb_create_hcd(driver, &pdev->dev, dev_name(&pdev->dev));
+	if (!hcd)
+		return -ENOMEM;
+
+	hcd->rsrc_start = res->start;
+	hcd->rsrc_len = resource_size(res);
+
+	if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len,
+				driver->description)) {
+		dev_dbg(&pdev->dev, "controller already in use\n");
+		ret = -EBUSY;
+		goto put_hcd;
+	}
+
+	hcd->regs = ioremap_nocache(hcd->rsrc_start, hcd->rsrc_len);
+	if (!hcd->regs) {
+		dev_dbg(&pdev->dev, "error mapping memory\n");
+		ret = -EFAULT;
+		goto release_mem_region;
+	}
+
+	ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+	if (ret)
+		goto unmap_registers;
+
+	/* USB 2.0 roothub is stored in the platform_device now. */
+	hcd = dev_get_drvdata(&pdev->dev);
+	xhci = hcd_to_xhci(hcd);
+	xhci->shared_hcd = usb_create_shared_hcd(driver, &pdev->dev,
+			dev_name(&pdev->dev), hcd);
+	if (!xhci->shared_hcd) {
+		ret = -ENOMEM;
+		goto dealloc_usb2_hcd;
+	}
+
+	/*
+	 * Set the xHCI pointer before xhci_plat_setup() (aka hcd_driver.reset)
+	 * is called by usb_add_hcd().
+	 */
+	*((struct xhci_hcd **) xhci->shared_hcd->hcd_priv) = xhci;
+
+	ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED);
+	if (ret)
+		goto put_usb3_hcd;
+
+	return 0;
+
+put_usb3_hcd:
+	usb_put_hcd(xhci->shared_hcd);
+
+dealloc_usb2_hcd:
+	usb_remove_hcd(hcd);
+
+unmap_registers:
+	iounmap(hcd->regs);
+
+release_mem_region:
+	release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
+
+put_hcd:
+	usb_put_hcd(hcd);
+
+	return ret;
+}
+
+static int xhci_plat_remove(struct platform_device *dev)
+{
+	struct usb_hcd	*hcd = platform_get_drvdata(dev);
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+
+	usb_remove_hcd(xhci->shared_hcd);
+	usb_put_hcd(xhci->shared_hcd);
+
+	usb_remove_hcd(hcd);
+	iounmap(hcd->regs);
+	usb_put_hcd(hcd);
+	kfree(xhci);
+
+	return 0;
+}
+
+static struct platform_driver usb_xhci_driver = {
+	.probe	= xhci_plat_probe,
+	.remove	= xhci_plat_remove,
+	.driver	= {
+		.name = "xhci-hcd",
+	},
+};
+MODULE_ALIAS("platform:xhci-hcd");
+
+int xhci_register_plat(void)
+{
+	return platform_driver_register(&usb_xhci_driver);
+}
+
+void xhci_unregister_plat(void)
+{
+	platform_driver_unregister(&usb_xhci_driver);
+}
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
new file mode 100644
index 0000000..1969c00
--- /dev/null
+++ b/drivers/usb/host/xhci-ring.c
@@ -0,0 +1,4011 @@
+/*
+ * xHCI host controller driver
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+/*
+ * Ring initialization rules:
+ * 1. Each segment is initialized to zero, except for link TRBs.
+ * 2. Ring cycle state = 0.  This represents Producer Cycle State (PCS) or
+ *    Consumer Cycle State (CCS), depending on ring function.
+ * 3. Enqueue pointer = dequeue pointer = address of first TRB in the segment.
+ *
+ * Ring behavior rules:
+ * 1. A ring is empty if enqueue == dequeue.  This means there will always be at
+ *    least one free TRB in the ring.  This is useful if you want to turn that
+ *    into a link TRB and expand the ring.
+ * 2. When incrementing an enqueue or dequeue pointer, if the next TRB is a
+ *    link TRB, then load the pointer with the address in the link TRB.  If the
+ *    link TRB had its toggle bit set, you may need to update the ring cycle
+ *    state (see cycle bit rules).  You may have to do this multiple times
+ *    until you reach a non-link TRB.
+ * 3. A ring is full if enqueue++ (for the definition of increment above)
+ *    equals the dequeue pointer.
+ *
+ * Cycle bit rules:
+ * 1. When a consumer increments a dequeue pointer and encounters a toggle bit
+ *    in a link TRB, it must toggle the ring cycle state.
+ * 2. When a producer increments an enqueue pointer and encounters a toggle bit
+ *    in a link TRB, it must toggle the ring cycle state.
+ *
+ * Producer rules:
+ * 1. Check if ring is full before you enqueue.
+ * 2. Write the ring cycle state to the cycle bit in the TRB you're enqueuing.
+ *    Update enqueue pointer between each write (which may update the ring
+ *    cycle state).
+ * 3. Notify consumer.  If SW is producer, it rings the doorbell for command
+ *    and endpoint rings.  If HC is the producer for the event ring, it
+ *    generates an interrupt according to interrupt moderation rules.
+ *
+ * Consumer rules:
+ * 1. Check if TRB belongs to you.  If the cycle bit == your ring cycle state,
+ *    the TRB is owned by the consumer.
+ * 2. Update dequeue pointer (which may update the ring cycle state) and
+ *    continue processing TRBs until you reach a TRB which is not owned by you.
+ * 3. Notify the producer.  SW is the consumer for the event ring, and it
+ *    updates event ring dequeue pointer.  HC is the consumer for the command and
+ *    endpoint rings; it generates events on the event ring for these.
+ */
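+
+/*
+ * Illustration of the consumer rule above (a sketch, not driver code):
+ * software owns an event TRB when the TRB's cycle bit matches the ring's
+ * cycle state, i.e. something like
+ *
+ *	if ((le32_to_cpu(event->event_cmd.flags) & TRB_CYCLE) ==
+ *	    ring->cycle_state)
+ *		consume the event;
+ *
+ * The event-ring handler applies this test before processing each event.
+ */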
+
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include "xhci.h"
+
+static int handle_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct xhci_event_cmd *event);
+
+/*
+ * Returns zero if the TRB isn't in this segment, otherwise it returns the DMA
+ * address of the TRB.
+ */
+dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg,
+		union xhci_trb *trb)
+{
+	unsigned long segment_offset;
+
+	if (!seg || !trb || trb < seg->trbs)
+		return 0;
+	/* offset in TRBs */
+	segment_offset = trb - seg->trbs;
+	if (segment_offset >= TRBS_PER_SEGMENT)
+		return 0;
+	return seg->dma + (segment_offset * sizeof(*trb));
+}
+
+/* Does this link TRB point to the first segment in a ring,
+ * or was the previous TRB the last TRB on the last segment in the ERST?
+ */
+static bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring,
+		struct xhci_segment *seg, union xhci_trb *trb)
+{
+	if (ring == xhci->event_ring)
+		return (trb == &seg->trbs[TRBS_PER_SEGMENT]) &&
+			(seg->next == xhci->event_ring->first_seg);
+	else
+		return le32_to_cpu(trb->link.control) & LINK_TOGGLE;
+}
+
+/* Is this TRB a link TRB or was the last TRB the last TRB in this event ring
+ * segment?  I.e. would the updated event TRB pointer step off the end of the
+ * event seg?
+ */
+static int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
+		struct xhci_segment *seg, union xhci_trb *trb)
+{
+	if (ring == xhci->event_ring)
+		return trb == &seg->trbs[TRBS_PER_SEGMENT];
+	else
+		return TRB_TYPE_LINK_LE32(trb->link.control);
+}
+
+static int enqueue_is_link_trb(struct xhci_ring *ring)
+{
+	struct xhci_link_trb *link = &ring->enqueue->link;
+	return TRB_TYPE_LINK_LE32(link->control);
+}
+
+/* Updates trb to point to the next TRB in the ring, and updates seg if the next
+ * TRB is in a new segment.  This does not skip over link TRBs, and it does not
+ * affect the ring dequeue or enqueue pointers.
+ */
+static void next_trb(struct xhci_hcd *xhci,
+		struct xhci_ring *ring,
+		struct xhci_segment **seg,
+		union xhci_trb **trb)
+{
+	if (last_trb(xhci, ring, *seg, *trb)) {
+		*seg = (*seg)->next;
+		*trb = ((*seg)->trbs);
+	} else {
+		(*trb)++;
+	}
+}
+
+/*
+ * See Cycle bit rules. SW is the consumer for the event ring only.
+ * Don't make a ring full of link TRBs.  That would be dumb and this would loop.
+ */
+static void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring)
+{
+	unsigned long long addr;
+
+	ring->deq_updates++;
+
+	/*
+	 * If this is not an event ring, and the dequeue pointer
+	 * is not on a link TRB, there is one more usable TRB
+	 */
+	if (ring->type != TYPE_EVENT &&
+			!last_trb(xhci, ring, ring->deq_seg, ring->dequeue))
+		ring->num_trbs_free++;
+
+	do {
+		/*
+		 * Update the dequeue pointer further if that was a link TRB or
+		 * we're at the end of an event ring segment (which doesn't have
+		 * link TRBs)
+		 */
+		if (last_trb(xhci, ring, ring->deq_seg, ring->dequeue)) {
+			if (ring->type == TYPE_EVENT &&
+					last_trb_on_last_seg(xhci, ring,
+						ring->deq_seg, ring->dequeue)) {
+				ring->cycle_state = (ring->cycle_state ? 0 : 1);
+			}
+			ring->deq_seg = ring->deq_seg->next;
+			ring->dequeue = ring->deq_seg->trbs;
+		} else {
+			ring->dequeue++;
+		}
+	} while (last_trb(xhci, ring, ring->deq_seg, ring->dequeue));
+
+	addr = (unsigned long long) xhci_trb_virt_to_dma(ring->deq_seg, ring->dequeue);
+}
+
+/*
+ * See Cycle bit rules. SW is the consumer for the event ring only.
+ * Don't make a ring full of link TRBs.  That would be dumb and this would loop.
+ *
+ * If we've just enqueued a TRB that is in the middle of a TD (meaning the
+ * chain bit is set), then set the chain bit in all the following link TRBs.
+ * If we've enqueued the last TRB in a TD, make sure the following link TRBs
+ * have their chain bit cleared (so that each Link TRB is a separate TD).
+ *
+ * Section 6.4.4.1 of the 0.95 spec says link TRBs cannot have the chain bit
+ * set, but other sections talk about dealing with the chain bit set.  This was
+ * fixed in the 0.96 specification errata, but we have to assume that all 0.95
+ * xHCI hardware can't handle the chain bit being cleared on a link TRB.
+ *
+ * @more_trbs_coming:	Will you enqueue more TRBs before calling
+ *			prepare_transfer()?
+ */
+static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
+			bool more_trbs_coming)
+{
+	u32 chain;
+	union xhci_trb *next;
+	unsigned long long addr;
+
+	chain = le32_to_cpu(ring->enqueue->generic.field[3]) & TRB_CHAIN;
+	/* If this is not an event ring, there is one less usable TRB */
+	if (ring->type != TYPE_EVENT &&
+			!last_trb(xhci, ring, ring->enq_seg, ring->enqueue))
+		ring->num_trbs_free--;
+	next = ++(ring->enqueue);
+
+	ring->enq_updates++;
+	/* Update the enqueue pointer further if that was a link TRB or we're at
+	 * the end of an event ring segment (which doesn't have link TRBs)
+	 */
+	while (last_trb(xhci, ring, ring->enq_seg, next)) {
+		if (ring->type != TYPE_EVENT) {
+			/*
+			 * If the caller doesn't plan on enqueueing more
+			 * TDs before ringing the doorbell, then we
+			 * don't want to give the link TRB to the
+			 * hardware just yet.  We'll give the link TRB
+			 * back in prepare_ring() just before we enqueue
+			 * the TD at the top of the ring.
+			 */
+			if (!chain && !more_trbs_coming)
+				break;
+
+			/* If we're not dealing with 0.95 hardware or
+			 * isoc rings on AMD 0.96 host,
+			 * carry over the chain bit of the previous TRB
+			 * (which may mean the chain bit is cleared).
+			 */
+			if (!(ring->type == TYPE_ISOC &&
+					(xhci->quirks & XHCI_AMD_0x96_HOST))
+						&& !xhci_link_trb_quirk(xhci)) {
+				next->link.control &=
+					cpu_to_le32(~TRB_CHAIN);
+				next->link.control |=
+					cpu_to_le32(chain);
+			}
+			/* Give this link TRB to the hardware */
+			wmb();
+			next->link.control ^= cpu_to_le32(TRB_CYCLE);
+
+			/* Toggle the cycle bit after the last ring segment. */
+			if (last_trb_on_last_seg(xhci, ring, ring->enq_seg, next)) {
+				ring->cycle_state = (ring->cycle_state ? 0 : 1);
+			}
+		}
+		ring->enq_seg = ring->enq_seg->next;
+		ring->enqueue = ring->enq_seg->trbs;
+		next = ring->enqueue;
+	}
+	addr = (unsigned long long) xhci_trb_virt_to_dma(ring->enq_seg, ring->enqueue);
+}
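+
+/*
+ * Worked example for inc_enq() (illustrative): if a mid-TD TRB was just
+ * queued with TRB_CHAIN set and the next slot is a link TRB, the loop above
+ * copies the chain bit into the link TRB so the HC follows it mid-TD, flips
+ * the link TRB's cycle bit to hand it to the hardware, and leaves the
+ * enqueue pointer at the first TRB of the next segment.
+ */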
+
+/*
+ * Check to see if there's room to enqueue num_trbs on the ring and make sure
+ * enqueue pointer will not advance into dequeue segment. See rules above.
+ */
+static inline int room_on_ring(struct xhci_hcd *xhci, struct xhci_ring *ring,
+		unsigned int num_trbs)
+{
+	int num_trbs_in_deq_seg;
+
+	if (ring->num_trbs_free < num_trbs)
+		return 0;
+
+	if (ring->type != TYPE_COMMAND && ring->type != TYPE_EVENT) {
+		num_trbs_in_deq_seg = ring->dequeue - ring->deq_seg->trbs;
+		if (ring->num_trbs_free < num_trbs + num_trbs_in_deq_seg)
+			return 0;
+	}
+
+	return 1;
+}
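+
+/*
+ * Example: with the dequeue pointer sitting 10 TRBs into its segment, a
+ * transfer ring must have num_trbs_free >= num_trbs + 10 before enqueueing,
+ * which guarantees the enqueue pointer cannot advance into the dequeue
+ * segment.
+ */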
+
+/* Ring the host controller doorbell after placing a command on the ring */
+void xhci_ring_cmd_db(struct xhci_hcd *xhci)
+{
+	if (!(xhci->cmd_ring_state & CMD_RING_STATE_RUNNING))
+		return;
+
+	xhci_dbg(xhci, "// Ding dong!\n");
+	xhci_writel(xhci, DB_VALUE_HOST, &xhci->dba->doorbell[0]);
+	/* Flush PCI posted writes */
+	xhci_readl(xhci, &xhci->dba->doorbell[0]);
+}
+
+static int xhci_abort_cmd_ring(struct xhci_hcd *xhci)
+{
+	u64 temp_64;
+	int ret;
+
+	xhci_dbg(xhci, "Abort command ring\n");
+
+	if (!(xhci->cmd_ring_state & CMD_RING_STATE_RUNNING)) {
+		xhci_dbg(xhci, "The command ring isn't running, "
+				"Have the command ring been stopped?\n");
+		return 0;
+	}
+
+	temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+	if (!(temp_64 & CMD_RING_RUNNING)) {
+		xhci_dbg(xhci, "Command ring had been stopped\n");
+		return 0;
+	}
+	xhci->cmd_ring_state = CMD_RING_STATE_ABORTED;
+	xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+			&xhci->op_regs->cmd_ring);
+
+	/* Section 4.6.1.2 of xHCI 1.0 spec says software should
+	 * time the completion of all xHCI commands, including
+	 * the Command Abort operation. If software doesn't see
+	 * CRR negated in a timely manner (e.g. longer than 5
+	 * seconds), then it should assume that there are
+	 * larger problems with the xHC and assert HCRST.
+	 */
+	ret = xhci_handshake(xhci, &xhci->op_regs->cmd_ring,
+			CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
+	if (ret < 0) {
+		xhci_err(xhci, "Stopped the command ring failed, "
+				"maybe the host is dead\n");
+		xhci->xhc_state |= XHCI_STATE_DYING;
+		xhci_quiesce(xhci);
+		xhci_halt(xhci);
+		return -ESHUTDOWN;
+	}
+
+	return 0;
+}
+
+static int xhci_queue_cd(struct xhci_hcd *xhci,
+		struct xhci_command *command,
+		union xhci_trb *cmd_trb)
+{
+	struct xhci_cd *cd;
+	cd = kzalloc(sizeof(struct xhci_cd), GFP_ATOMIC);
+	if (!cd)
+		return -ENOMEM;
+	INIT_LIST_HEAD(&cd->cancel_cmd_list);
+
+	cd->command = command;
+	cd->cmd_trb = cmd_trb;
+	list_add_tail(&cd->cancel_cmd_list, &xhci->cancel_cmd_list);
+
+	return 0;
+}
+
+/*
+ * Cancel a command that has been issued.
+ *
+ * Some commands may hang while waiting for acknowledgement from a
+ * USB device. That is outside of the xHC's ability to control and
+ * will leave the command ring blocked. When this occurs, software
+ * should intervene to recover the command ring.
+ * See Sections 4.6.1.1 and 4.6.1.2
+ */
+int xhci_cancel_cmd(struct xhci_hcd *xhci, struct xhci_command *command,
+		union xhci_trb *cmd_trb)
+{
+	int retval = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	if (xhci->xhc_state & XHCI_STATE_DYING) {
+		xhci_warn(xhci, "Abort the command ring,"
+				" but the xHCI is dead.\n");
+		retval = -ESHUTDOWN;
+		goto fail;
+	}
+
+	/* queue the command descriptor on cancel_cmd_list */
+	retval = xhci_queue_cd(xhci, command, cmd_trb);
+	if (retval) {
+		xhci_warn(xhci, "Queuing command descriptor failed.\n");
+		goto fail;
+	}
+
+	/* abort command ring */
+	retval = xhci_abort_cmd_ring(xhci);
+	if (retval) {
+		xhci_err(xhci, "Abort command ring failed\n");
+		if (unlikely(retval == -ESHUTDOWN)) {
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			usb_hc_died(xhci_to_hcd(xhci)->primary_hcd);
+			xhci_dbg(xhci, "xHCI host controller is dead.\n");
+			return retval;
+		}
+	}
+
+fail:
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return retval;
+}
+
+void xhci_ring_ep_doorbell(struct xhci_hcd *xhci,
+		unsigned int slot_id,
+		unsigned int ep_index,
+		unsigned int stream_id)
+{
+	__le32 __iomem *db_addr = &xhci->dba->doorbell[slot_id];
+	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
+	unsigned int ep_state = ep->ep_state;
+
+	/* Don't ring the doorbell for this endpoint if there are pending
+	 * cancellations because we don't want to interrupt processing.
+	 * We don't want to restart any stream rings if there's a set dequeue
+	 * pointer command pending because the device can choose to start any
+	 * stream once the endpoint is on the HW schedule.
+	 * FIXME - check all the stream rings for pending cancellations.
+	 */
+	if ((ep_state & EP_HALT_PENDING) || (ep_state & SET_DEQ_PENDING) ||
+	    (ep_state & EP_HALTED))
+		return;
+	xhci_writel(xhci, DB_VALUE(ep_index, stream_id), db_addr);
+	/* The CPU has better things to do at this point than wait for a
+	 * write-posting flush.  It'll get there soon enough.
+	 */
+}
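+
+/*
+ * For reference, DB_VALUE() as used above packs the DB Target into the low
+ * byte (endpoint index + 1, since target 1 is EP0) and the stream ID into
+ * bits 16-31; e.g. ringing ep_index 2 with stream 0 writes the value 3 to
+ * the slot's doorbell register.
+ */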
+
+/* Ring the doorbell for any rings with pending URBs */
+static void ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
+		unsigned int slot_id,
+		unsigned int ep_index)
+{
+	unsigned int stream_id;
+	struct xhci_virt_ep *ep;
+
+	ep = &xhci->devs[slot_id]->eps[ep_index];
+
+	/* A ring has pending URBs if its TD list is not empty */
+	if (!(ep->ep_state & EP_HAS_STREAMS)) {
+		if (!(list_empty(&ep->ring->td_list)))
+			xhci_ring_ep_doorbell(xhci, slot_id, ep_index, 0);
+		return;
+	}
+
+	for (stream_id = 1; stream_id < ep->stream_info->num_streams;
+			stream_id++) {
+		struct xhci_stream_info *stream_info = ep->stream_info;
+		if (!list_empty(&stream_info->stream_rings[stream_id]->td_list))
+			xhci_ring_ep_doorbell(xhci, slot_id, ep_index,
+						stream_id);
+	}
+}
+
+/*
+ * Find the segment that trb is in.  Start searching in start_seg.
+ * If we must move past a segment that has a link TRB with a toggle cycle state
+ * bit set, then we will toggle the value pointed at by cycle_state.
+ */
+static struct xhci_segment *find_trb_seg(
+		struct xhci_segment *start_seg,
+		union xhci_trb	*trb, int *cycle_state)
+{
+	struct xhci_segment *cur_seg = start_seg;
+	struct xhci_generic_trb *generic_trb;
+
+	while (cur_seg->trbs > trb ||
+			&cur_seg->trbs[TRBS_PER_SEGMENT - 1] < trb) {
+		generic_trb = &cur_seg->trbs[TRBS_PER_SEGMENT - 1].generic;
+		if (generic_trb->field[3] & cpu_to_le32(LINK_TOGGLE))
+			*cycle_state ^= 0x1;
+		cur_seg = cur_seg->next;
+		if (cur_seg == start_seg)
+			/* Looped over the entire list.  Oops! */
+			return NULL;
+	}
+	return cur_seg;
+}
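+
+/*
+ * Illustration (not driver code): on a three-segment ring where only the
+ * last segment's link TRB has LINK_TOGGLE set, a search walking segment
+ * 0 -> 1 -> 2 -> 0 flips *cycle_state exactly once, matching the single
+ * cycle flip the hardware sees per full trip around the ring.
+ */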
+
+static struct xhci_ring *xhci_triad_to_transfer_ring(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		unsigned int stream_id)
+{
+	struct xhci_virt_ep *ep;
+
+	ep = &xhci->devs[slot_id]->eps[ep_index];
+	/* Common case: no streams */
+	if (!(ep->ep_state & EP_HAS_STREAMS))
+		return ep->ring;
+
+	if (stream_id == 0) {
+		xhci_warn(xhci,
+				"WARN: Slot ID %u, ep index %u has streams, "
+				"but URB has no stream ID.\n",
+				slot_id, ep_index);
+		return NULL;
+	}
+
+	if (stream_id < ep->stream_info->num_streams)
+		return ep->stream_info->stream_rings[stream_id];
+
+	xhci_warn(xhci,
+			"WARN: Slot ID %u, ep index %u has "
+			"stream IDs 1 to %u allocated, "
+			"but stream ID %u is requested.\n",
+			slot_id, ep_index,
+			ep->stream_info->num_streams - 1,
+			stream_id);
+	return NULL;
+}
+
+/* Get the right ring for the given URB.
+ * If the endpoint supports streams, boundary check the URB's stream ID.
+ * If the endpoint doesn't support streams, return the singular endpoint ring.
+ */
+static struct xhci_ring *xhci_urb_to_transfer_ring(struct xhci_hcd *xhci,
+		struct urb *urb)
+{
+	return xhci_triad_to_transfer_ring(xhci, urb->dev->slot_id,
+		xhci_get_endpoint_index(&urb->ep->desc), urb->stream_id);
+}
+
+/*
+ * Move the xHC's endpoint ring dequeue pointer past cur_td.
+ * Record the new state of the xHC's endpoint ring dequeue segment,
+ * dequeue pointer, and new consumer cycle state in state.
+ * Update our internal representation of the ring's dequeue pointer.
+ *
+ * We do this in three jumps:
+ *  - First we update our new ring state to be the same as when the xHC stopped.
+ *  - Then we traverse the ring to find the segment that contains
+ *    the last TRB in the TD.  We toggle the xHC's new cycle state when we pass
+ *    any link TRBs with the toggle cycle bit set.
+ *  - Finally we move the dequeue state one TRB further, toggling the cycle bit
+ *    if we've moved it past a link TRB with the toggle cycle bit set.
+ *
+ * Some of the uses of xhci_generic_trb are grotty, but if they're done
+ * with correct __le32 accesses they should work fine.  Only users of this are
+ * in here.
+ */
+void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		unsigned int stream_id, struct xhci_td *cur_td,
+		struct xhci_dequeue_state *state)
+{
+	struct xhci_virt_device *dev = xhci->devs[slot_id];
+	struct xhci_ring *ep_ring;
+	struct xhci_generic_trb *trb;
+	struct xhci_ep_ctx *ep_ctx;
+	dma_addr_t addr;
+
+	ep_ring = xhci_triad_to_transfer_ring(xhci, slot_id,
+			ep_index, stream_id);
+	if (!ep_ring) {
+		xhci_warn(xhci, "WARN can't find new dequeue state "
+				"for invalid stream ID %u.\n",
+				stream_id);
+		return;
+	}
+	state->new_cycle_state = 0;
+	xhci_dbg(xhci, "Finding segment containing stopped TRB.\n");
+	state->new_deq_seg = find_trb_seg(cur_td->start_seg,
+			dev->eps[ep_index].stopped_trb,
+			&state->new_cycle_state);
+	if (!state->new_deq_seg) {
+		WARN_ON(1);
+		return;
+	}
+
+	/* Dig out the cycle state saved by the xHC during the stop ep cmd */
+	xhci_dbg(xhci, "Finding endpoint context\n");
+	ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
+	state->new_cycle_state = 0x1 & le64_to_cpu(ep_ctx->deq);
+
+	state->new_deq_ptr = cur_td->last_trb;
+	xhci_dbg(xhci, "Finding segment containing last TRB in TD.\n");
+	state->new_deq_seg = find_trb_seg(state->new_deq_seg,
+			state->new_deq_ptr,
+			&state->new_cycle_state);
+	if (!state->new_deq_seg) {
+		WARN_ON(1);
+		return;
+	}
+
+	trb = &state->new_deq_ptr->generic;
+	if (TRB_TYPE_LINK_LE32(trb->field[3]) &&
+	    (trb->field[3] & cpu_to_le32(LINK_TOGGLE)))
+		state->new_cycle_state ^= 0x1;
+	next_trb(xhci, ep_ring, &state->new_deq_seg, &state->new_deq_ptr);
+
+	/*
+	 * If there is only one segment in a ring, find_trb_seg()'s while loop
+	 * will not run, and it will return before it has a chance to see if it
+	 * needs to toggle the cycle bit.  It can't tell if the stalled transfer
+	 * ended just before the link TRB on a one-segment ring, or if the TD
+	 * wrapped around the top of the ring, because it doesn't have the TD in
+	 * question.  Look for the one-segment case where the stalled TRB's address
+	 * is greater than the new dequeue pointer address.
+	 */
+	if (ep_ring->first_seg == ep_ring->first_seg->next &&
+			state->new_deq_ptr < dev->eps[ep_index].stopped_trb)
+		state->new_cycle_state ^= 0x1;
+	xhci_dbg(xhci, "Cycle state = 0x%x\n", state->new_cycle_state);
+
+	/* Don't update the ring cycle state for the producer (us). */
+	xhci_dbg(xhci, "New dequeue segment = %p (virtual)\n",
+			state->new_deq_seg);
+	addr = xhci_trb_virt_to_dma(state->new_deq_seg, state->new_deq_ptr);
+	xhci_dbg(xhci, "New dequeue pointer = 0x%llx (DMA)\n",
+			(unsigned long long) addr);
+}
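+
+/*
+ * Example of the one-segment corner case handled above (illustrative): if
+ * the stalled TRB sits near the end of the segment and the TD's last TRB
+ * has already wrapped to the start, new_deq_ptr ends up at a lower address
+ * than stopped_trb even though no toggle link TRB was walked, hence the
+ * final address comparison and extra cycle flip.
+ */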
+
+/* flip_cycle means flip the cycle bit of all but the first and last TRB.
+ * (The last TRB actually points to the ring enqueue pointer, which is not part
+ * of this TD.)  This is used to remove partially enqueued isoc TDs from a ring.
+ */
+static void td_to_noop(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
+		struct xhci_td *cur_td, bool flip_cycle)
+{
+	struct xhci_segment *cur_seg;
+	union xhci_trb *cur_trb;
+
+	for (cur_seg = cur_td->start_seg, cur_trb = cur_td->first_trb;
+			true;
+			next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) {
+		if (TRB_TYPE_LINK_LE32(cur_trb->generic.field[3])) {
+			/* Unchain any chained Link TRBs, but
+			 * leave the pointers intact.
+			 */
+			cur_trb->generic.field[3] &= cpu_to_le32(~TRB_CHAIN);
+			/* Flip the cycle bit (link TRBs can't be the first
+			 * or last TRB).
+			 */
+			if (flip_cycle)
+				cur_trb->generic.field[3] ^=
+					cpu_to_le32(TRB_CYCLE);
+			xhci_dbg(xhci, "Cancel (unchain) link TRB\n");
+			xhci_dbg(xhci, "Address = %p (0x%llx dma); "
+					"in seg %p (0x%llx dma)\n",
+					cur_trb,
+					(unsigned long long)xhci_trb_virt_to_dma(cur_seg, cur_trb),
+					cur_seg,
+					(unsigned long long)cur_seg->dma);
+		} else {
+			cur_trb->generic.field[0] = 0;
+			cur_trb->generic.field[1] = 0;
+			cur_trb->generic.field[2] = 0;
+			/* Preserve only the cycle bit of this TRB */
+			cur_trb->generic.field[3] &= cpu_to_le32(TRB_CYCLE);
+			/* Flip the cycle bit except on the first or last TRB */
+			if (flip_cycle && cur_trb != cur_td->first_trb &&
+					cur_trb != cur_td->last_trb)
+				cur_trb->generic.field[3] ^=
+					cpu_to_le32(TRB_CYCLE);
+			cur_trb->generic.field[3] |= cpu_to_le32(
+				TRB_TYPE(TRB_TR_NOOP));
+			xhci_dbg(xhci, "TRB to noop at offset 0x%llx\n",
+					(unsigned long long)
+					xhci_trb_virt_to_dma(cur_seg, cur_trb));
+		}
+		if (cur_trb == cur_td->last_trb)
+			break;
+	}
+}
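+
+/*
+ * Net effect of td_to_noop() (sketch): a cancelled three-TRB TD becomes
+ * three TRB_TR_NOOP TRBs with only their cycle bits preserved (optionally
+ * flipped when removing partially enqueued isoc TDs), so the HC steps over
+ * them without moving data; chained link TRBs inside the TD keep their
+ * pointers but lose TRB_CHAIN.
+ */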
+
+static int queue_set_tr_deq(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index, unsigned int stream_id,
+		struct xhci_segment *deq_seg,
+		union xhci_trb *deq_ptr, u32 cycle_state);
+
+void xhci_queue_new_dequeue_state(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		unsigned int stream_id,
+		struct xhci_dequeue_state *deq_state)
+{
+	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
+
+	xhci_dbg(xhci, "Set TR Deq Ptr cmd, new deq seg = %p (0x%llx dma), "
+			"new deq ptr = %p (0x%llx dma), new cycle = %u\n",
+			deq_state->new_deq_seg,
+			(unsigned long long)deq_state->new_deq_seg->dma,
+			deq_state->new_deq_ptr,
+			(unsigned long long)xhci_trb_virt_to_dma(deq_state->new_deq_seg, deq_state->new_deq_ptr),
+			deq_state->new_cycle_state);
+	queue_set_tr_deq(xhci, slot_id, ep_index, stream_id,
+			deq_state->new_deq_seg,
+			deq_state->new_deq_ptr,
+			(u32) deq_state->new_cycle_state);
+	/* Stop the TD queueing code from ringing the doorbell until
+	 * this command completes.  The HC won't set the dequeue pointer
+	 * if the ring is running, and ringing the doorbell starts the
+	 * ring running.
+	 */
+	ep->ep_state |= SET_DEQ_PENDING;
+}
+
+static void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
+		struct xhci_virt_ep *ep)
+{
+	ep->ep_state &= ~EP_HALT_PENDING;
+	/* Can't del_timer_sync in interrupt, so we attempt to cancel.  If the
+	 * timer is running on another CPU, we don't decrement stop_cmds_pending
+	 * (since we didn't successfully stop the watchdog timer).
+	 */
+	if (del_timer(&ep->stop_cmd_timer))
+		ep->stop_cmds_pending--;
+}
+
+/* Must be called with xhci->lock held in interrupt context */
+static void xhci_giveback_urb_in_irq(struct xhci_hcd *xhci,
+		struct xhci_td *cur_td, int status, char *adjective)
+{
+	struct usb_hcd *hcd;
+	struct urb	*urb;
+	struct urb_priv	*urb_priv;
+
+	urb = cur_td->urb;
+	urb_priv = urb->hcpriv;
+	urb_priv->td_cnt++;
+	hcd = bus_to_hcd(urb->dev->bus);
+
+	/* Only giveback urb when this is the last td in urb */
+	if (urb_priv->td_cnt == urb_priv->length) {
+		if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
+			xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+			if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs	== 0) {
+				if (xhci->quirks & XHCI_AMD_PLL_FIX)
+					usb_amd_quirk_pll_enable();
+			}
+		}
+		usb_hcd_unlink_urb_from_ep(hcd, urb);
+
+		spin_unlock(&xhci->lock);
+		usb_hcd_giveback_urb(hcd, urb, status);
+		xhci_urb_free_priv(xhci, urb_priv);
+		spin_lock(&xhci->lock);
+	}
+}
+
+/*
+ * When we get a command completion for a Stop Endpoint Command, we need to
+ * unlink any cancelled TDs from the ring.  There are two ways to do that:
+ *
+ *  1. If the HW was in the middle of processing the TD that needs to be
+ *     cancelled, then we must move the ring's dequeue pointer past the last TRB
+ *     in the TD with a Set Dequeue Pointer Command.
+ *  2. Otherwise, we turn all the TRBs in the TD into No-op TRBs (with the chain
+ *     bit cleared) so that the HW will skip over them.
+ */
+static void handle_stopped_endpoint(struct xhci_hcd *xhci,
+		union xhci_trb *trb, struct xhci_event_cmd *event)
+{
+	unsigned int slot_id;
+	unsigned int ep_index;
+	struct xhci_virt_device *virt_dev;
+	struct xhci_ring *ep_ring;
+	struct xhci_virt_ep *ep;
+	struct list_head *entry;
+	struct xhci_td *cur_td = NULL;
+	struct xhci_td *last_unlinked_td;
+
+	struct xhci_dequeue_state deq_state;
+
+	if (unlikely(TRB_TO_SUSPEND_PORT(
+			     le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3])))) {
+		slot_id = TRB_TO_SLOT_ID(
+			le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3]));
+		virt_dev = xhci->devs[slot_id];
+		if (virt_dev)
+			handle_cmd_in_cmd_wait_list(xhci, virt_dev,
+				event);
+		else
+			xhci_warn(xhci, "Stop endpoint command "
+				"completion for disabled slot %u\n",
+				slot_id);
+		return;
+	}
+
+	memset(&deq_state, 0, sizeof(deq_state));
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(trb->generic.field[3]));
+	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+	ep = &xhci->devs[slot_id]->eps[ep_index];
+
+	if (list_empty(&ep->cancelled_td_list)) {
+		xhci_stop_watchdog_timer_in_irq(xhci, ep);
+		ep->stopped_td = NULL;
+		ep->stopped_trb = NULL;
+		ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+		return;
+	}
+
+	/* Fix up the ep ring first, so HW stops executing cancelled TDs.
+	 * We have the xHCI lock, so nothing can modify this list until we drop
+	 * it.  We're also in the event handler, so we can't get re-interrupted
+	 * if another Stop Endpoint command completes.
+	 */
+	list_for_each(entry, &ep->cancelled_td_list) {
+		cur_td = list_entry(entry, struct xhci_td, cancelled_td_list);
+		xhci_dbg(xhci, "Removing canceled TD starting at 0x%llx (dma).\n",
+				(unsigned long long)xhci_trb_virt_to_dma(
+					cur_td->start_seg, cur_td->first_trb));
+		ep_ring = xhci_urb_to_transfer_ring(xhci, cur_td->urb);
+		if (!ep_ring) {
+			/* This shouldn't happen unless a driver is mucking
+			 * with the stream ID after submission.  This will
+			 * leave the TD on the hardware ring, and the hardware
+			 * will try to execute it, and may access a buffer
+			 * that has already been freed.  In the best case, the
+			 * hardware will execute it, and the event handler will
+			 * ignore the completion event for that TD, since it was
+			 * removed from the td_list for that endpoint.  In
+			 * short, don't muck with the stream ID after
+			 * submission.
+			 */
+			xhci_warn(xhci, "WARN Cancelled URB %p "
+					"has invalid stream ID %u.\n",
+					cur_td->urb,
+					cur_td->urb->stream_id);
+			goto remove_finished_td;
+		}
+		/*
+		 * If we stopped on the TD we need to cancel, then we have to
+		 * move the xHC endpoint ring dequeue pointer past this TD.
+		 */
+		if (cur_td == ep->stopped_td)
+			xhci_find_new_dequeue_state(xhci, slot_id, ep_index,
+					cur_td->urb->stream_id,
+					cur_td, &deq_state);
+		else
+			td_to_noop(xhci, ep_ring, cur_td, false);
+remove_finished_td:
+		/*
+		 * The event handler won't see a completion for this TD anymore,
+		 * so remove it from the endpoint ring's TD list.  Keep it in
+		 * the cancelled TD list for URB completion later.
+		 */
+		list_del_init(&cur_td->td_list);
+	}
+	last_unlinked_td = cur_td;
+	xhci_stop_watchdog_timer_in_irq(xhci, ep);
+
+	/* If necessary, queue a Set Transfer Ring Dequeue Pointer command */
+	if (deq_state.new_deq_ptr && deq_state.new_deq_seg) {
+		xhci_queue_new_dequeue_state(xhci,
+				slot_id, ep_index,
+				ep->stopped_td->urb->stream_id,
+				&deq_state);
+		xhci_ring_cmd_db(xhci);
+	} else {
+		/* Otherwise ring the doorbell(s) to restart queued transfers */
+		ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+	}
+	ep->stopped_td = NULL;
+	ep->stopped_trb = NULL;
+
+	/*
+	 * Drop the lock and complete the URBs in the cancelled TD list.
+	 * New TDs to be cancelled might be added to the end of the list before
+	 * we can complete all the URBs for the TDs we already unlinked.
+	 * So stop when we've completed the URB for the last TD we unlinked.
+	 */
+	do {
+		cur_td = list_entry(ep->cancelled_td_list.next,
+				struct xhci_td, cancelled_td_list);
+		list_del_init(&cur_td->cancelled_td_list);
+
+		/* Clean up the cancelled URB */
+		/* Doesn't matter what we pass for status, since the core will
+		 * just overwrite it (because the URB has been unlinked).
+		 */
+		xhci_giveback_urb_in_irq(xhci, cur_td, 0, "cancelled");
+
+		/* Stop processing the cancelled list if the watchdog timer is
+		 * running.
+		 */
+		if (xhci->xhc_state & XHCI_STATE_DYING)
+			return;
+	} while (cur_td != last_unlinked_td);
+
+	/* Return to the event handler with xhci->lock re-acquired */
+}
+
+/* Watchdog timer function for when a stop endpoint command fails to complete.
+ * In this case, we assume the host controller is broken or dying or dead.  The
+ * host may still be completing some other events, so we have to be careful to
+ * let the event ring handler and the URB dequeueing/enqueueing functions know
+ * through xhci->state.
+ *
+ * The timer may also fire if the host takes a very long time to respond to the
+ * command, and the stop endpoint command completion handler cannot delete the
+ * timer before the timer function is called.  Another endpoint cancellation may
+ * sneak in before the timer function can grab the lock, and that may queue
+ * another stop endpoint command and add the timer back.  So we cannot use a
+ * simple flag to say whether there is a pending stop endpoint command for a
+ * particular endpoint.
+ *
+ * Instead we use a combination of that flag and a counter for the number of
+ * pending stop endpoint commands.  If the timer is the tail end of the last
+ * stop endpoint command, and the endpoint's command is still pending, we assume
+ * the host is dying.
+ */
+void xhci_stop_endpoint_command_watchdog(unsigned long arg)
+{
+	struct xhci_hcd *xhci;
+	struct xhci_virt_ep *ep;
+	struct xhci_virt_ep *temp_ep;
+	struct xhci_ring *ring;
+	struct xhci_td *cur_td;
+	int ret, i, j;
+	unsigned long flags;
+
+	ep = (struct xhci_virt_ep *) arg;
+	xhci = ep->xhci;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	ep->stop_cmds_pending--;
+	if (xhci->xhc_state & XHCI_STATE_DYING) {
+		xhci_dbg(xhci, "Stop EP timer ran, but another timer marked "
+				"xHCI as DYING, exiting.\n");
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return;
+	}
+	if (!(ep->stop_cmds_pending == 0 && (ep->ep_state & EP_HALT_PENDING))) {
+		xhci_dbg(xhci, "Stop EP timer ran, but no command pending, "
+				"exiting.\n");
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return;
+	}
+
+	xhci_warn(xhci, "xHCI host not responding to stop endpoint command.\n");
+	xhci_warn(xhci, "Assuming host is dying, halting host.\n");
+	/* Oops, HC is dead or dying or at least not responding to the stop
+	 * endpoint command.
+	 */
+	xhci->xhc_state |= XHCI_STATE_DYING;
+	/* Disable interrupts from the host controller and start halting it */
+	xhci_quiesce(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	ret = xhci_halt(xhci);
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	if (ret < 0) {
+		/* This is bad; the host is not responding to commands and it's
+		 * not allowing itself to be halted.  At least interrupts are
+		 * disabled. If we call usb_hc_died(), it will attempt to
+		 * disconnect all device drivers under this host.  Those
+		 * disconnect() methods will wait for all URBs to be unlinked,
+		 * so we must complete them.
+		 */
+		xhci_warn(xhci, "Non-responsive xHCI host is not halting.\n");
+		xhci_warn(xhci, "Completing active URBs anyway.\n");
+		/* We could turn all TDs on the rings to no-ops.  This won't
+		 * help if the host has cached part of the ring, and is slow if
+		 * we want to preserve the cycle bit.  Skip it and hope the host
+		 * doesn't touch the memory.
+		 */
+	}
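+	/* Give back all queued and cancelled URBs on every endpoint of
+	 * every device, with -ESHUTDOWN status.
+	 */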
+	for (i = 0; i < MAX_HC_SLOTS; i++) {
+		if (!xhci->devs[i])
+			continue;
+		for (j = 0; j < 31; j++) {
+			temp_ep = &xhci->devs[i]->eps[j];
+			ring = temp_ep->ring;
+			if (!ring)
+				continue;
+			xhci_dbg(xhci, "Killing URBs for slot ID %u, "
+					"ep index %u\n", i, j);
+			while (!list_empty(&ring->td_list)) {
+				cur_td = list_first_entry(&ring->td_list,
+						struct xhci_td,
+						td_list);
+				list_del_init(&cur_td->td_list);
+				if (!list_empty(&cur_td->cancelled_td_list))
+					list_del_init(&cur_td->cancelled_td_list);
+				xhci_giveback_urb_in_irq(xhci, cur_td,
+						-ESHUTDOWN, "killed");
+			}
+			while (!list_empty(&temp_ep->cancelled_td_list)) {
+				cur_td = list_first_entry(
+						&temp_ep->cancelled_td_list,
+						struct xhci_td,
+						cancelled_td_list);
+				list_del_init(&cur_td->cancelled_td_list);
+				xhci_giveback_urb_in_irq(xhci, cur_td,
+						-ESHUTDOWN, "killed");
+			}
+		}
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	xhci_dbg(xhci, "Calling usb_hc_died()\n");
+	usb_hc_died(xhci_to_hcd(xhci)->primary_hcd);
+	xhci_dbg(xhci, "xHCI host controller is dead.\n");
+}
+
+static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
+		struct xhci_virt_device *dev,
+		struct xhci_ring *ep_ring,
+		unsigned int ep_index)
+{
+	union xhci_trb *dequeue_temp;
+	int num_trbs_free_temp;
+	bool revert = false;
+
+	num_trbs_free_temp = ep_ring->num_trbs_free;
+	dequeue_temp = ep_ring->dequeue;
+
+	/* If we get two back-to-back stalls, and the first stalled transfer
+	 * ends just before a link TRB, the dequeue pointer will be left on
+	 * the link TRB by the code in the while loop.  So we have to update
+	 * the dequeue pointer one segment further, or we'll jump off
+	 * the segment into la-la-land.
+	 */
+	if (last_trb(xhci, ep_ring, ep_ring->deq_seg, ep_ring->dequeue)) {
+		ep_ring->deq_seg = ep_ring->deq_seg->next;
+		ep_ring->dequeue = ep_ring->deq_seg->trbs;
+	}
+
+	while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) {
+		/* We have more usable TRBs */
+		ep_ring->num_trbs_free++;
+		ep_ring->dequeue++;
+		if (last_trb(xhci, ep_ring, ep_ring->deq_seg,
+				ep_ring->dequeue)) {
+			if (ep_ring->dequeue ==
+					dev->eps[ep_index].queued_deq_ptr)
+				break;
+			ep_ring->deq_seg = ep_ring->deq_seg->next;
+			ep_ring->dequeue = ep_ring->deq_seg->trbs;
+		}
+		if (ep_ring->dequeue == dequeue_temp) {
+			revert = true;
+			break;
+		}
+	}
+
+	if (revert) {
+		xhci_dbg(xhci, "Unable to find new dequeue pointer\n");
+		ep_ring->num_trbs_free = num_trbs_free_temp;
+	}
+}
+
+/*
+ * When we get a completion for a Set Transfer Ring Dequeue Pointer command,
+ * we need to clear the set deq pending flag in the endpoint ring state, so that
+ * the TD queueing code can ring the doorbell again.  We also need to ring the
+ * endpoint doorbell to restart the ring, but only if there aren't more
+ * cancellations pending.
+ */
+static void handle_set_deq_completion(struct xhci_hcd *xhci,
+		struct xhci_event_cmd *event,
+		union xhci_trb *trb)
+{
+	unsigned int slot_id;
+	unsigned int ep_index;
+	unsigned int stream_id;
+	struct xhci_ring *ep_ring;
+	struct xhci_virt_device *dev;
+	struct xhci_ep_ctx *ep_ctx;
+	struct xhci_slot_ctx *slot_ctx;
+
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(trb->generic.field[3]));
+	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+	stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
+	dev = xhci->devs[slot_id];
+
+	ep_ring = xhci_stream_id_to_ring(dev, ep_index, stream_id);
+	if (!ep_ring) {
+		xhci_warn(xhci, "WARN Set TR deq ptr command for "
+				"freed stream ID %u\n",
+				stream_id);
+		/* XXX: Harmless??? */
+		dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING;
+		return;
+	}
+
+	ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
+	slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx);
+
+	if (GET_COMP_CODE(le32_to_cpu(event->status)) != COMP_SUCCESS) {
+		unsigned int ep_state;
+		unsigned int slot_state;
+
+		switch (GET_COMP_CODE(le32_to_cpu(event->status))) {
+		case COMP_TRB_ERR:
+			xhci_warn(xhci, "WARN Set TR Deq Ptr cmd invalid because "
+					"of stream ID configuration\n");
+			break;
+		case COMP_CTX_STATE:
+			xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed due "
+					"to incorrect slot or ep state.\n");
+			ep_state = le32_to_cpu(ep_ctx->ep_info);
+			ep_state &= EP_STATE_MASK;
+			slot_state = le32_to_cpu(slot_ctx->dev_state);
+			slot_state = GET_SLOT_STATE(slot_state);
+			xhci_dbg(xhci, "Slot state = %u, EP state = %u\n",
+					slot_state, ep_state);
+			break;
+		case COMP_EBADSLT:
+			xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed because "
+					"slot %u was not enabled.\n", slot_id);
+			break;
+		default:
+			xhci_warn(xhci, "WARN Set TR Deq Ptr cmd with unknown "
+					"completion code of %u.\n",
+				  GET_COMP_CODE(le32_to_cpu(event->status)));
+			break;
+		}
+		/* OK, what do we do now?  The endpoint state is hosed, and we
+		 * should never get to this point if the synchronization between
+		 * queueing and endpoint state were correct.  This might happen
+		 * if the device gets disconnected after we've finished
+		 * cancelling URBs, which might not be an error...
+		 */
+	} else {
+		xhci_dbg(xhci, "Successful Set TR Deq Ptr cmd, deq = @%08llx\n",
+			 le64_to_cpu(ep_ctx->deq));
+		if (xhci_trb_virt_to_dma(dev->eps[ep_index].queued_deq_seg,
+					 dev->eps[ep_index].queued_deq_ptr) ==
+		    (le64_to_cpu(ep_ctx->deq) & ~(EP_CTX_CYCLE_MASK))) {
+			/* Update the ring's dequeue segment and dequeue pointer
+			 * to reflect the new position.
+			 */
+			update_ring_for_set_deq_completion(xhci, dev,
+				ep_ring, ep_index);
+		} else {
+			xhci_warn(xhci, "Mismatch between completed Set TR Deq "
+					"Ptr command & xHCI internal state.\n");
+			xhci_warn(xhci, "ep deq seg = %p, deq ptr = %p\n",
+					dev->eps[ep_index].queued_deq_seg,
+					dev->eps[ep_index].queued_deq_ptr);
+		}
+	}
+
+	dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING;
+	dev->eps[ep_index].queued_deq_seg = NULL;
+	dev->eps[ep_index].queued_deq_ptr = NULL;
+	/* Restart any rings with pending URBs */
+	ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+}
+
+static void handle_reset_ep_completion(struct xhci_hcd *xhci,
+		struct xhci_event_cmd *event,
+		union xhci_trb *trb)
+{
+	int slot_id;
+	unsigned int ep_index;
+
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(trb->generic.field[3]));
+	ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+	/* This command will only fail if the endpoint wasn't halted,
+	 * but we don't care.
+	 */
+	xhci_dbg(xhci, "Ignoring reset ep completion code of %u\n",
+		 GET_COMP_CODE(le32_to_cpu(event->status)));
+
+	/* HW with the reset endpoint quirk needs to have a configure endpoint
+	 * command complete before the endpoint can be used.  Queue that here
+	 * because the HW can't handle two commands being queued in a row.
+	 */
+	if (xhci->quirks & XHCI_RESET_EP_QUIRK) {
+		xhci_dbg(xhci, "Queueing configure endpoint command\n");
+		xhci_queue_configure_endpoint(xhci,
+				xhci->devs[slot_id]->in_ctx->dma, slot_id,
+				false);
+		xhci_ring_cmd_db(xhci);
+	} else {
+		/* Clear our internal halted state and restart the ring(s) */
+		xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_HALTED;
+		ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+	}
+}
+
+/* Complete the command and delete it from the device's command queue.
+ */
+static void xhci_complete_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
+		struct xhci_command *command, u32 status)
+{
+	command->status = status;
+	list_del(&command->cmd_list);
+	if (command->completion)
+		complete(command->completion);
+	else
+		xhci_free_command(xhci, command);
+}
+
+/* Check to see if a command in the device's command queue matches this one.
+ * Signal the completion or free the command, and return 1.  Return 0 if the
+ * completed command isn't at the head of the command list.
+ */
+static int handle_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct xhci_event_cmd *event)
+{
+	struct xhci_command *command;
+
+	if (list_empty(&virt_dev->cmd_list))
+		return 0;
+
+	command = list_entry(virt_dev->cmd_list.next,
+			struct xhci_command, cmd_list);
+	if (xhci->cmd_ring->dequeue != command->command_trb)
+		return 0;
+
+	xhci_complete_cmd_in_cmd_wait_list(xhci, command,
+			GET_COMP_CODE(le32_to_cpu(event->status)));
+	return 1;
+}
+
+/*
+ * Find the command TRB that needs to be cancelled and modify it to a
+ * NO-OP command.  If the command is in the device's command wait
+ * list, finish and free it.
+ *
+ * If we can't find the command TRB, we assume it has already been
+ * executed.
+ */
+ */
+static void xhci_cmd_to_noop(struct xhci_hcd *xhci, struct xhci_cd *cur_cd)
+{
+	struct xhci_segment *cur_seg;
+	union xhci_trb *cmd_trb;
+	u32 cycle_state;
+
+	if (xhci->cmd_ring->dequeue == xhci->cmd_ring->enqueue)
+		return;
+
+	/* find the current segment of command ring */
+	cur_seg = find_trb_seg(xhci->cmd_ring->first_seg,
+			xhci->cmd_ring->dequeue, &cycle_state);
+
+	if (!cur_seg) {
+		xhci_warn(xhci, "Command ring mismatch, dequeue = %p %llx (dma)\n",
+				xhci->cmd_ring->dequeue,
+				(unsigned long long)
+				xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg,
+					xhci->cmd_ring->dequeue));
+		xhci_debug_ring(xhci, xhci->cmd_ring);
+		xhci_dbg_ring_ptrs(xhci, xhci->cmd_ring);
+		return;
+	}
+
+	/* find the command TRB matching the command descriptor (cd) on the ring */
+	for (cmd_trb = xhci->cmd_ring->dequeue;
+			cmd_trb != xhci->cmd_ring->enqueue;
+			next_trb(xhci, xhci->cmd_ring, &cur_seg, &cmd_trb)) {
+		/* If the trb is link trb, continue */
+		if (TRB_TYPE_LINK_LE32(cmd_trb->generic.field[3]))
+			continue;
+
+		if (cur_cd->cmd_trb == cmd_trb) {
+
+			/* If the command is in the device's command list,
+			 * finish it and free the command structure.
+			 */
+			if (cur_cd->command)
+				xhci_complete_cmd_in_cmd_wait_list(xhci,
+					cur_cd->command, COMP_CMD_STOP);
+
+			/* get cycle state from the original command trb */
+			cycle_state = le32_to_cpu(cmd_trb->generic.field[3])
+				& TRB_CYCLE;
+
+			/* modify the command trb to NO OP command */
+			cmd_trb->generic.field[0] = 0;
+			cmd_trb->generic.field[1] = 0;
+			cmd_trb->generic.field[2] = 0;
+			cmd_trb->generic.field[3] = cpu_to_le32(
+					TRB_TYPE(TRB_CMD_NOOP) | cycle_state);
+			break;
+		}
+	}
+}
+
+static void xhci_cancel_cmd_in_cd_list(struct xhci_hcd *xhci)
+{
+	struct xhci_cd *cur_cd, *next_cd;
+
+	if (list_empty(&xhci->cancel_cmd_list))
+		return;
+
+	list_for_each_entry_safe(cur_cd, next_cd,
+			&xhci->cancel_cmd_list, cancel_cmd_list) {
+		xhci_cmd_to_noop(xhci, cur_cd);
+		list_del(&cur_cd->cancel_cmd_list);
+		kfree(cur_cd);
+	}
+}
+
+/*
+ * Traverse the cancel_cmd_list.  If the command descriptor matching
+ * cmd_trb is found, free it and return 1; otherwise return 0.
+ */
+static int xhci_search_cmd_trb_in_cd_list(struct xhci_hcd *xhci,
+		union xhci_trb *cmd_trb)
+{
+	struct xhci_cd *cur_cd, *next_cd;
+
+	if (list_empty(&xhci->cancel_cmd_list))
+		return 0;
+
+	list_for_each_entry_safe(cur_cd, next_cd,
+			&xhci->cancel_cmd_list, cancel_cmd_list) {
+		if (cur_cd->cmd_trb == cmd_trb) {
+			if (cur_cd->command)
+				xhci_complete_cmd_in_cmd_wait_list(xhci,
+					cur_cd->command, COMP_CMD_STOP);
+			list_del(&cur_cd->cancel_cmd_list);
+			kfree(cur_cd);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * If cmd_trb_comp_code is COMP_CMD_ABORT, just check whether the TRB
+ * pointed to by the command ring dequeue pointer is the TRB we want to
+ * cancel.  If cmd_trb_comp_code is COMP_CMD_STOP, traverse the
+ * cancel_cmd_list and turn every command that has a command descriptor
+ * into a NO-OP TRB.
+ */
+static int handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+		int cmd_trb_comp_code)
+{
+	int cur_trb_is_good = 0;
+
+	/* Search for the command TRB pointed to by the command ring dequeue
+	 * pointer in the command descriptor list.  If it is found, free it.
+	 */
+	cur_trb_is_good = xhci_search_cmd_trb_in_cd_list(xhci,
+			xhci->cmd_ring->dequeue);
+
+	if (cmd_trb_comp_code == COMP_CMD_ABORT)
+		xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+	else if (cmd_trb_comp_code == COMP_CMD_STOP) {
+		/* traverse the cancel_cmd_list, cancelling each command
+		 * that has a command descriptor
+		 */
+		xhci_cancel_cmd_in_cd_list(xhci);
+
+		xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
+		/*
+		 * ring command ring doorbell again to restart the
+		 * command ring
+		 */
+		if (xhci->cmd_ring->dequeue != xhci->cmd_ring->enqueue)
+			xhci_ring_cmd_db(xhci);
+	}
+	return cur_trb_is_good;
+}
+
+static void handle_cmd_completion(struct xhci_hcd *xhci,
+		struct xhci_event_cmd *event)
+{
+	int slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+	u64 cmd_dma;
+	dma_addr_t cmd_dequeue_dma;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_virt_device *virt_dev;
+	unsigned int ep_index;
+	struct xhci_ring *ep_ring;
+	unsigned int ep_state;
+
+	cmd_dma = le64_to_cpu(event->cmd_trb);
+	cmd_dequeue_dma = xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg,
+			xhci->cmd_ring->dequeue);
+	/* Is the command ring deq ptr out of sync with the deq seg ptr? */
+	if (cmd_dequeue_dma == 0) {
+		xhci->error_bitmask |= 1 << 4;
+		return;
+	}
+	/* Does the DMA address match our internal dequeue pointer address? */
+	if (cmd_dma != (u64) cmd_dequeue_dma) {
+		xhci->error_bitmask |= 1 << 5;
+		return;
+	}
+
+	if ((GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_CMD_ABORT) ||
+		(GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_CMD_STOP)) {
+		/* If the return value is 0, the TRB pointed to by the
+		 * command ring dequeue pointer is a good TRB: one we did
+		 * not want to cancel, but that the host stopped anyway,
+		 * so handle it normally.  Otherwise, the driver should
+		 * invoke inc_deq() and return.
+		 */
+		if (handle_stopped_cmd_ring(xhci,
+				GET_COMP_CODE(le32_to_cpu(event->status)))) {
+			inc_deq(xhci, xhci->cmd_ring);
+			return;
+		}
+	}
+
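+	/* The completion event does not carry the command type, so decode
+	 * it from the command TRB at the ring's dequeue pointer.
+	 */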
+	switch (le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3])
+		& TRB_TYPE_BITMASK) {
+	case TRB_TYPE(TRB_ENABLE_SLOT):
+		if (GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_SUCCESS)
+			xhci->slot_id = slot_id;
+		else
+			xhci->slot_id = 0;
+		complete(&xhci->addr_dev);
+		break;
+	case TRB_TYPE(TRB_DISABLE_SLOT):
+		if (xhci->devs[slot_id]) {
+			if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)
+				/* Delete default control endpoint resources */
+				xhci_free_device_endpoint_resources(xhci,
+						xhci->devs[slot_id], true);
+			xhci_free_virt_device(xhci, slot_id);
+		}
+		break;
+	case TRB_TYPE(TRB_CONFIG_EP):
+		virt_dev = xhci->devs[slot_id];
+		if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event))
+			break;
+		/*
+		 * Configure endpoint commands can come from the USB core
+		 * configuration or alt setting changes, or because the HW
+		 * needed an extra configure endpoint command after a reset
+		 * endpoint command or streams were being configured.
+		 * If the command was for a halted endpoint, the xHCI driver
+		 * is not waiting on the configure endpoint command.
+		 */
+		ctrl_ctx = xhci_get_input_control_ctx(xhci,
+				virt_dev->in_ctx);
+		/* Input ctx add_flags are the endpoint index plus one */
+		ep_index = xhci_last_valid_endpoint(le32_to_cpu(ctrl_ctx->add_flags)) - 1;
+		/* A usb_set_interface() call directly after clearing a halted
+		 * condition may race on this quirky hardware.  Not worth
+		 * worrying about, since this is prototype hardware.  Not sure
+		 * if this will work for streams, but streams support was
+		 * untested on this prototype.
+		 */
+		if (xhci->quirks & XHCI_RESET_EP_QUIRK &&
+				ep_index != (unsigned int) -1 &&
+		    le32_to_cpu(ctrl_ctx->add_flags) - SLOT_FLAG ==
+		    le32_to_cpu(ctrl_ctx->drop_flags)) {
+			ep_ring = xhci->devs[slot_id]->eps[ep_index].ring;
+			ep_state = xhci->devs[slot_id]->eps[ep_index].ep_state;
+			if (!(ep_state & EP_HALTED))
+				goto bandwidth_change;
+			xhci_dbg(xhci, "Completed config ep cmd - "
+					"last ep index = %d, state = %d\n",
+					ep_index, ep_state);
+			/* Clear internal halted state and restart ring(s) */
+			xhci->devs[slot_id]->eps[ep_index].ep_state &=
+				~EP_HALTED;
+			ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+			break;
+		}
+bandwidth_change:
+		xhci_dbg(xhci, "Completed config ep cmd\n");
+		xhci->devs[slot_id]->cmd_status =
+			GET_COMP_CODE(le32_to_cpu(event->status));
+		complete(&xhci->devs[slot_id]->cmd_completion);
+		break;
+	case TRB_TYPE(TRB_EVAL_CONTEXT):
+		virt_dev = xhci->devs[slot_id];
+		if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event))
+			break;
+		xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(le32_to_cpu(event->status));
+		complete(&xhci->devs[slot_id]->cmd_completion);
+		break;
+	case TRB_TYPE(TRB_ADDR_DEV):
+		xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(le32_to_cpu(event->status));
+		complete(&xhci->addr_dev);
+		break;
+	case TRB_TYPE(TRB_STOP_RING):
+		handle_stopped_endpoint(xhci, xhci->cmd_ring->dequeue, event);
+		break;
+	case TRB_TYPE(TRB_SET_DEQ):
+		handle_set_deq_completion(xhci, event, xhci->cmd_ring->dequeue);
+		break;
+	case TRB_TYPE(TRB_CMD_NOOP):
+		break;
+	case TRB_TYPE(TRB_RESET_EP):
+		handle_reset_ep_completion(xhci, event, xhci->cmd_ring->dequeue);
+		break;
+	case TRB_TYPE(TRB_RESET_DEV):
+		xhci_dbg(xhci, "Completed reset device command.\n");
+		slot_id = TRB_TO_SLOT_ID(
+			le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3]));
+		virt_dev = xhci->devs[slot_id];
+		if (virt_dev)
+			handle_cmd_in_cmd_wait_list(xhci, virt_dev, event);
+		else
+			xhci_warn(xhci, "Reset device command completion "
+					"for disabled slot %u\n", slot_id);
+		break;
+	case TRB_TYPE(TRB_NEC_GET_FW):
+		if (!(xhci->quirks & XHCI_NEC_HOST)) {
+			xhci->error_bitmask |= 1 << 6;
+			break;
+		}
+		xhci_dbg(xhci, "NEC firmware version %2x.%02x\n",
+			 NEC_FW_MAJOR(le32_to_cpu(event->status)),
+			 NEC_FW_MINOR(le32_to_cpu(event->status)));
+		break;
+	default:
+		/* Skip over unknown commands on the event ring */
+		xhci->error_bitmask |= 1 << 6;
+		break;
+	}
+	inc_deq(xhci, xhci->cmd_ring);
+}
+
+static void handle_vendor_event(struct xhci_hcd *xhci,
+		union xhci_trb *event)
+{
+	u32 trb_type;
+
+	trb_type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->generic.field[3]));
+	xhci_dbg(xhci, "Vendor specific event TRB type = %u\n", trb_type);
+	if (trb_type == TRB_NEC_CMD_COMP && (xhci->quirks & XHCI_NEC_HOST))
+		handle_cmd_completion(xhci, &event->event_cmd);
+}
+
+/* @port_id: the one-based port ID from the hardware (indexed from array of all
+ * port registers -- USB 3.0 and USB 2.0).
+ *
+ * Returns a zero-based port number, which is suitable for indexing into each of
+ * the split roothubs' port arrays and bus state arrays.
+ * Add one to it in order to call xhci_find_slot_id_by_port.
+ */
+static unsigned int find_faked_portnum_from_hw_portnum(struct usb_hcd *hcd,
+		struct xhci_hcd *xhci, u32 port_id)
+{
+	unsigned int i;
+	unsigned int num_similar_speed_ports = 0;
+
+	/* port_id from the hardware is 1-based, but port_array[], usb3_ports[],
+	 * and usb2_ports are 0-based indexes.  Count the number of similar
+	 * speed ports, up to 1 port before this port.
+	 */
+	for (i = 0; i < (port_id - 1); i++) {
+		u8 port_speed = xhci->port_array[i];
+
+		/*
+		 * Skip ports that don't have known speeds, or have duplicate
+		 * Extended Capabilities port speed entries.
+		 */
+		if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
+			continue;
+
+		/*
+		 * USB 3.0 ports are always under a USB 3.0 hub.  USB 2.0 and
+		 * 1.1 ports are under the USB 2.0 hub.  If the port speed
+		 * matches the device speed, it's a similar speed port.
+		 */
+		if ((port_speed == 0x03) == (hcd->speed == HCD_USB3))
+			num_similar_speed_ports++;
+	}
+	return num_similar_speed_ports;
+}
+
+static void handle_device_notification(struct xhci_hcd *xhci,
+		union xhci_trb *event)
+{
+	u32 slot_id;
+	struct usb_device *udev;
+
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->generic.field[3]));
+	if (!xhci->devs[slot_id]) {
+		xhci_warn(xhci, "Device Notification event for "
+				"unused slot %u\n", slot_id);
+		return;
+	}
+
+	xhci_dbg(xhci, "Device Wake Notification event for slot ID %u\n",
+			slot_id);
+	udev = xhci->devs[slot_id]->udev;
+	if (udev && udev->parent)
+		usb_wakeup_notification(udev->parent, udev->portnum);
+}
+
+static void handle_port_status(struct xhci_hcd *xhci,
+		union xhci_trb *event)
+{
+	struct usb_hcd *hcd;
+	u32 port_id;
+	u32 temp, temp1;
+	int max_ports;
+	int slot_id;
+	unsigned int faked_port_index;
+	u8 major_revision;
+	struct xhci_bus_state *bus_state;
+	__le32 __iomem **port_array;
+	bool bogus_port_status = false;
+
+	/* Port status change events always have a successful completion code */
+	if (GET_COMP_CODE(le32_to_cpu(event->generic.field[2])) != COMP_SUCCESS) {
+		xhci_warn(xhci, "WARN: xHC returned failed port status event\n");
+		xhci->error_bitmask |= 1 << 8;
+	}
+	port_id = GET_PORT_ID(le32_to_cpu(event->generic.field[0]));
+	xhci_dbg(xhci, "Port Status Change Event for port %d\n", port_id);
+
+	max_ports = HCS_MAX_PORTS(xhci->hcs_params1);
+	if ((port_id <= 0) || (port_id > max_ports)) {
+		xhci_warn(xhci, "Invalid port id %d\n", port_id);
+		inc_deq(xhci, xhci->event_ring);
+		return;
+	}
+
+	/* Figure out which usb_hcd this port is attached to:
+	 * is it a USB 3.0 port or a USB 2.0/1.1 port?
+	 */
+	major_revision = xhci->port_array[port_id - 1];
+
+	/* Find the right roothub. */
+	hcd = xhci_to_hcd(xhci);
+	if ((major_revision == 0x03) != (hcd->speed == HCD_USB3))
+		hcd = xhci->shared_hcd;
+
+	if (major_revision == 0) {
+		xhci_warn(xhci, "Event for port %u not in "
+				"Extended Capabilities, ignoring.\n",
+				port_id);
+		bogus_port_status = true;
+		goto cleanup;
+	}
+	if (major_revision == DUPLICATE_ENTRY) {
+		xhci_warn(xhci, "Event for port %u duplicated in"
+				"Extended Capabilities, ignoring.\n",
+				port_id);
+		bogus_port_status = true;
+		goto cleanup;
+	}
+
+	/*
+	 * Hardware port IDs reported by a Port Status Change Event include USB
+	 * 3.0 and USB 2.0 ports.  We want to check if the port has reported a
+	 * resume event, but we first need to translate the hardware port ID
+	 * into the index into the ports on the correct split roothub, and the
+	 * correct bus_state structure.
+	 */
+	bus_state = &xhci->bus_state[hcd_index(hcd)];
+	if (hcd->speed == HCD_USB3)
+		port_array = xhci->usb3_ports;
+	else
+		port_array = xhci->usb2_ports;
+	/* Find the faked port hub number */
+	faked_port_index = find_faked_portnum_from_hw_portnum(hcd, xhci,
+			port_id);
+
+	temp = xhci_readl(xhci, port_array[faked_port_index]);
+	if (hcd->state == HC_STATE_SUSPENDED) {
+		xhci_dbg(xhci, "resume root hub\n");
+		usb_hcd_resume_root_hub(hcd);
+	}
+
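+	/* A port link-state change to the Resume state signals a
+	 * device-initiated remote wakeup.
+	 */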
+	if ((temp & PORT_PLC) && (temp & PORT_PLS_MASK) == XDEV_RESUME) {
+		xhci_dbg(xhci, "port resume event for port %d\n", port_id);
+
+		temp1 = xhci_readl(xhci, &xhci->op_regs->command);
+		if (!(temp1 & CMD_RUN)) {
+			xhci_warn(xhci, "xHC is not running.\n");
+			goto cleanup;
+		}
+
+		if (DEV_SUPERSPEED(temp)) {
+			xhci_dbg(xhci, "remote wake SS port %d\n", port_id);
+			/* Set a flag to say the port signaled remote wakeup,
+			 * so we can tell the difference between the end of
+			 * device and host initiated resume.
+			 */
+			bus_state->port_remote_wakeup |= 1 << faked_port_index;
+			xhci_test_and_clear_bit(xhci, port_array,
+					faked_port_index, PORT_PLC);
+			xhci_set_link_state(xhci, port_array, faked_port_index,
+						XDEV_U0);
+			/* Need to wait until the next link state change
+			 * indicates the device is actually in U0.
+			 */
+			bogus_port_status = true;
+			goto cleanup;
+		} else {
+			xhci_dbg(xhci, "resume HS port %d\n", port_id);
+			bus_state->resume_done[faked_port_index] = jiffies +
+				msecs_to_jiffies(20);
+			set_bit(faked_port_index, &bus_state->resuming_ports);
+			mod_timer(&hcd->rh_timer,
+				  bus_state->resume_done[faked_port_index]);
+			/* Do the rest in GetPortStatus */
+		}
+	}
+
+	if ((temp & PORT_PLC) && (temp & PORT_PLS_MASK) == XDEV_U0 &&
+			DEV_SUPERSPEED(temp)) {
+		xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
+		/* We've just brought the device into U0 through either the
+		 * Resume state after a device remote wakeup, or through the
+		 * U3Exit state after a host-initiated resume.  If it's a device
+		 * initiated remote wake, don't pass up the link state change,
+		 * so the roothub behavior is consistent with external
+		 * USB 3.0 hub behavior.
+		 */
+		slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+				faked_port_index + 1);
+		if (slot_id && xhci->devs[slot_id])
+			xhci_ring_device(xhci, slot_id);
+		if (bus_state->port_remote_wakeup & (1 << faked_port_index)) {
+			bus_state->port_remote_wakeup &=
+				~(1 << faked_port_index);
+			xhci_test_and_clear_bit(xhci, port_array,
+					faked_port_index, PORT_PLC);
+			usb_wakeup_notification(hcd->self.root_hub,
+					faked_port_index + 1);
+			bogus_port_status = true;
+			goto cleanup;
+		}
+	}
+
+	if (hcd->speed != HCD_USB3)
+		xhci_test_and_clear_bit(xhci, port_array, faked_port_index,
+					PORT_PLC);
+
+cleanup:
+	/* Update event ring dequeue pointer before dropping the lock */
+	inc_deq(xhci, xhci->event_ring);
+
+	/* Don't make the USB core poll the roothub if we got a bad port status
+	 * change event.  Besides, at that point we can't tell which roothub
+	 * (USB 2.0 or USB 3.0) to kick.
+	 */
+	if (bogus_port_status)
+		return;
+
+	/*
+	 * xHCI port-status-change events occur when the "or" of all the
+	 * status-change bits in the portsc register changes from 0 to 1.
+	 * New status changes won't cause an event if any other change
+	 * bits are still set.  When an event occurs, switch over to
+	 * polling to avoid losing status changes.
+	 */
+	xhci_dbg(xhci, "%s: starting port polling.\n", __func__);
+	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+	spin_unlock(&xhci->lock);
+	/* Pass this up to the core */
+	usb_hcd_poll_rh_status(hcd);
+	spin_lock(&xhci->lock);
+}
+
+/*
+ * This TD is defined by the TRBs starting at start_trb in start_seg and ending
+ * at end_trb, which may be in another segment.  If the suspect DMA address is a
+ * TRB in this TD, this function returns that TRB's segment.  Otherwise it
+ * returns 0.
+ */
+struct xhci_segment *trb_in_td(struct xhci_segment *start_seg,
+		union xhci_trb	*start_trb,
+		union xhci_trb	*end_trb,
+		dma_addr_t	suspect_dma)
+{
+	dma_addr_t start_dma;
+	dma_addr_t end_seg_dma;
+	dma_addr_t end_trb_dma;
+	struct xhci_segment *cur_seg;
+
+	start_dma = xhci_trb_virt_to_dma(start_seg, start_trb);
+	cur_seg = start_seg;
+
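+	/* Walk the circular segment list until we wrap back to start_seg */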
+	do {
+		if (start_dma == 0)
+			return NULL;
+		/* We may get an event for a Link TRB in the middle of a TD */
+		end_seg_dma = xhci_trb_virt_to_dma(cur_seg,
+				&cur_seg->trbs[TRBS_PER_SEGMENT - 1]);
+		/* If the end TRB isn't in this segment, this is set to 0 */
+		end_trb_dma = xhci_trb_virt_to_dma(cur_seg, end_trb);
+
+		if (end_trb_dma > 0) {
+			/* The end TRB is in this segment, so suspect should be here */
+			if (start_dma <= end_trb_dma) {
+				if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma)
+					return cur_seg;
+			} else {
+				/* Case for one segment with
+				 * a TD wrapped around to the top
+				 */
+				if ((suspect_dma >= start_dma &&
+							suspect_dma <= end_seg_dma) ||
+						(suspect_dma >= cur_seg->dma &&
+						 suspect_dma <= end_trb_dma))
+					return cur_seg;
+			}
+			return NULL;
+		} else {
+			/* Might still be somewhere in this segment */
+			if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma)
+				return cur_seg;
+		}
+		cur_seg = cur_seg->next;
+		start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]);
+	} while (cur_seg != start_seg);
+
+	return NULL;
+}
+
+static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		unsigned int stream_id,
+		struct xhci_td *td, union xhci_trb *event_trb)
+{
+	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
+	ep->ep_state |= EP_HALTED;
+	ep->stopped_td = td;
+	ep->stopped_trb = event_trb;
+	ep->stopped_stream = stream_id;
+
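+	/* Reset the endpoint to clear the halt, then queue a Set TR Dequeue
+	 * Pointer command to move the ring past the stalled TD.
+	 */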
+	xhci_queue_reset_ep(xhci, slot_id, ep_index);
+	xhci_cleanup_stalled_ring(xhci, td->urb->dev, ep_index);
+
+	ep->stopped_td = NULL;
+	ep->stopped_trb = NULL;
+	ep->stopped_stream = 0;
+
+	xhci_ring_cmd_db(xhci);
+}
+
+/* Check if an error has halted the endpoint ring.  The class driver will
+ * cleanup the halt for a non-default control endpoint if we indicate a stall.
+ * However, a babble and other errors also halt the endpoint ring, and the class
+ * driver won't clear the halt in that case, so we need to issue a Set Transfer
+ * Ring Dequeue Pointer command manually.
+ */
+static int xhci_requires_manual_halt_cleanup(struct xhci_hcd *xhci,
+		struct xhci_ep_ctx *ep_ctx,
+		unsigned int trb_comp_code)
+{
+	/* TRB completion codes that may require a manual halt cleanup */
+	if (trb_comp_code == COMP_TX_ERR ||
+			trb_comp_code == COMP_BABBLE ||
+			trb_comp_code == COMP_SPLIT_ERR)
+		/* The 0.95 spec says a babbling control endpoint
+		 * is not halted. The 0.96 spec says it is.  Some HW
+		 * claims to be 0.95 compliant, but it halts the control
+		 * endpoint anyway.  Check if a babble halted the
+		 * endpoint.
+		 */
+		if ((ep_ctx->ep_info & cpu_to_le32(EP_STATE_MASK)) ==
+		    cpu_to_le32(EP_STATE_HALTED))
+			return 1;
+
+	return 0;
+}
+
+int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code)
+{
+	if (trb_comp_code >= 224 && trb_comp_code <= 255) {
+		/* Vendor defined "informational" completion code,
+		 * treat as not-an-error.
+		 */
+		xhci_dbg(xhci, "Vendor defined info completion code %u\n",
+				trb_comp_code);
+		xhci_dbg(xhci, "Treating code as success.\n");
+		return 1;
+	}
+	return 0;
+}
+
+/*
+ * Finish TD processing and remove the TD from the TD list.
+ * Return 1 if the URB can be given back.
+ */
+static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+	union xhci_trb *event_trb, struct xhci_transfer_event *event,
+	struct xhci_virt_ep *ep, int *status, bool skip)
+{
+	struct xhci_virt_device *xdev;
+	struct xhci_ring *ep_ring;
+	unsigned int slot_id;
+	int ep_index;
+	struct urb *urb = NULL;
+	struct xhci_ep_ctx *ep_ctx;
+	int ret = 0;
+	struct urb_priv	*urb_priv;
+	u32 trb_comp_code;
+
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+	xdev = xhci->devs[slot_id];
+	ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
+	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+
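+	/* When skipping a missed TD, go straight to the cleanup/giveback path */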
+	if (skip)
+		goto td_cleanup;
+
+	if (trb_comp_code == COMP_STOP_INVAL ||
+			trb_comp_code == COMP_STOP) {
+		/* The Endpoint Stop Command completion will take care of any
+		 * stopped TDs.  A stopped TD may be restarted, so don't update
+		 * the ring dequeue pointer or take this TD off any lists yet.
+		 */
+		ep->stopped_td = td;
+		ep->stopped_trb = event_trb;
+		return 0;
+	} else {
+		if (trb_comp_code == COMP_STALL) {
+			/* The transfer is completed from the driver's
+			 * perspective, but we need to issue a set dequeue
+			 * command for this stalled endpoint to move the dequeue
+			 * pointer past the TD.  We can't do that here because
+			 * the halt condition must be cleared first.  Let the
+			 * USB class driver clear the stall later.
+			 */
+			ep->stopped_td = td;
+			ep->stopped_trb = event_trb;
+			ep->stopped_stream = ep_ring->stream_id;
+		} else if (xhci_requires_manual_halt_cleanup(xhci,
+					ep_ctx, trb_comp_code)) {
+			/* Other types of errors halt the endpoint, but the
+			 * class driver doesn't call usb_reset_endpoint() unless
+			 * the error is -EPIPE.  Clear the halted status in the
+			 * xHCI hardware manually.
+			 */
+			xhci_cleanup_halted_endpoint(xhci,
+					slot_id, ep_index, ep_ring->stream_id,
+					td, event_trb);
+		} else {
+			/* Update ring dequeue pointer */
+			while (ep_ring->dequeue != td->last_trb)
+				inc_deq(xhci, ep_ring);
+			inc_deq(xhci, ep_ring);
+		}
+
+td_cleanup:
+		/* Clean up the endpoint's TD list */
+		urb = td->urb;
+		urb_priv = urb->hcpriv;
+
+		/* Do one last check of the actual transfer length.
+		 * If the host controller said we transferred more data than
+		 * the buffer length, urb->actual_length will be a very big
+		 * number (since it's unsigned).  Play it safe and say we didn't
+		 * transfer anything.
+		 */
+		if (urb->actual_length > urb->transfer_buffer_length) {
+			xhci_warn(xhci, "URB transfer length is wrong, "
+					"xHC issue? req. len = %u, "
+					"act. len = %u\n",
+					urb->transfer_buffer_length,
+					urb->actual_length);
+			urb->actual_length = 0;
+			if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
+				*status = -EREMOTEIO;
+			else
+				*status = 0;
+		}
+		list_del_init(&td->td_list);
+		/* Was this TD slated to be cancelled but completed anyway? */
+		if (!list_empty(&td->cancelled_td_list))
+			list_del_init(&td->cancelled_td_list);
+
+		urb_priv->td_cnt++;
+		/* Giveback the urb when all the tds are completed */
+		if (urb_priv->td_cnt == urb_priv->length) {
+			ret = 1;
+			if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
+				xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+				if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs
+					== 0) {
+					if (xhci->quirks & XHCI_AMD_PLL_FIX)
+						usb_amd_quirk_pll_enable();
+				}
+			}
+		}
+	}
+
+	return ret;
+}
+
+/*
+ * Process control tds, update urb status and actual_length.
+ */
+static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+	union xhci_trb *event_trb, struct xhci_transfer_event *event,
+	struct xhci_virt_ep *ep, int *status)
+{
+	struct xhci_virt_device *xdev;
+	struct xhci_ring *ep_ring;
+	unsigned int slot_id;
+	int ep_index;
+	struct xhci_ep_ctx *ep_ctx;
+	u32 trb_comp_code;
+
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+	xdev = xhci->devs[slot_id];
+	ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
+	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+
+	switch (trb_comp_code) {
+	case COMP_SUCCESS:
+		if (event_trb == ep_ring->dequeue) {
+			xhci_warn(xhci, "WARN: Success on ctrl setup TRB "
+					"without IOC set??\n");
+			*status = -ESHUTDOWN;
+		} else if (event_trb != td->last_trb) {
+			xhci_warn(xhci, "WARN: Success on ctrl data TRB "
+					"without IOC set??\n");
+			*status = -ESHUTDOWN;
+		} else {
+			*status = 0;
+		}
+		break;
+	case COMP_SHORT_TX:
+		if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
+			*status = -EREMOTEIO;
+		else
+			*status = 0;
+		break;
+	case COMP_STOP_INVAL:
+	case COMP_STOP:
+		return finish_td(xhci, td, event_trb, event, ep, status, false);
+	default:
+		if (!xhci_requires_manual_halt_cleanup(xhci,
+					ep_ctx, trb_comp_code))
+			break;
+		xhci_dbg(xhci, "TRB error code %u, "
+				"halted endpoint index = %u\n",
+				trb_comp_code, ep_index);
+		/* else fall through */
+	case COMP_STALL:
+		/* Did we transfer part of the data (middle) phase? */
+		if (event_trb != ep_ring->dequeue &&
+				event_trb != td->last_trb)
+			td->urb->actual_length =
+				td->urb->transfer_buffer_length -
+				EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+		else
+			td->urb->actual_length = 0;
+
+		xhci_cleanup_halted_endpoint(xhci,
+			slot_id, ep_index, 0, td, event_trb);
+		return finish_td(xhci, td, event_trb, event, ep, status, true);
+	}
+	/*
+	 * Did we transfer any data, despite the errors that might have
+	 * happened?  I.e. did we get past the setup stage?
+	 */
+	if (event_trb != ep_ring->dequeue) {
+		/* The event was for the status stage */
+		if (event_trb == td->last_trb) {
+			if (td->urb->actual_length != 0) {
+				/* Don't overwrite a previously set error code */
+				if ((*status == -EINPROGRESS || *status == 0) &&
+						(td->urb->transfer_flags
+						 & URB_SHORT_NOT_OK))
+					/* Did we already see a short data
+					 * stage? */
+					*status = -EREMOTEIO;
+			} else {
+				td->urb->actual_length =
+					td->urb->transfer_buffer_length;
+			}
+		} else {
+			/* Maybe the event was for the data stage? */
+			td->urb->actual_length =
+				td->urb->transfer_buffer_length -
+				EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+			xhci_dbg(xhci, "Waiting for status "
+					"stage event\n");
+			return 0;
+		}
+	}
+
+	return finish_td(xhci, td, event_trb, event, ep, status, false);
+}
+
+/*
+ * Process isochronous tds, update urb packet status and actual_length.
+ */
+static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+	union xhci_trb *event_trb, struct xhci_transfer_event *event,
+	struct xhci_virt_ep *ep, int *status)
+{
+	struct xhci_ring *ep_ring;
+	struct urb_priv *urb_priv;
+	int idx;
+	int len = 0;
+	union xhci_trb *cur_trb;
+	struct xhci_segment *cur_seg;
+	struct usb_iso_packet_descriptor *frame;
+	u32 trb_comp_code;
+	bool skip_td = false;
+
+	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+	urb_priv = td->urb->hcpriv;
+	idx = urb_priv->td_cnt;
+	frame = &td->urb->iso_frame_desc[idx];
+
+	/* handle completion code */
+	switch (trb_comp_code) {
+	case COMP_SUCCESS:
+		if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0) {
+			frame->status = 0;
+			break;
+		}
+		if ((xhci->quirks & XHCI_TRUST_TX_LENGTH))
+			trb_comp_code = COMP_SHORT_TX;
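+		/* fall through */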
+	case COMP_SHORT_TX:
+		frame->status = td->urb->transfer_flags & URB_SHORT_NOT_OK ?
+				-EREMOTEIO : 0;
+		break;
+	case COMP_BW_OVER:
+		frame->status = -ECOMM;
+		skip_td = true;
+		break;
+	case COMP_BUFF_OVER:
+	case COMP_BABBLE:
+		frame->status = -EOVERFLOW;
+		skip_td = true;
+		break;
+	case COMP_DEV_ERR:
+	case COMP_STALL:
+	case COMP_TX_ERR:
+		frame->status = -EPROTO;
+		skip_td = true;
+		break;
+	case COMP_STOP:
+	case COMP_STOP_INVAL:
+		break;
+	default:
+		frame->status = -1;
+		break;
+	}
+
+	if (trb_comp_code == COMP_SUCCESS || skip_td) {
+		frame->actual_length = frame->length;
+		td->urb->actual_length += frame->length;
+	} else {
+		for (cur_trb = ep_ring->dequeue,
+		     cur_seg = ep_ring->deq_seg; cur_trb != event_trb;
+		     next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) {
+			if (!TRB_TYPE_NOOP_LE32(cur_trb->generic.field[3]) &&
+			    !TRB_TYPE_LINK_LE32(cur_trb->generic.field[3]))
+				len += TRB_LEN(le32_to_cpu(cur_trb->generic.field[2]));
+		}
+		len += TRB_LEN(le32_to_cpu(cur_trb->generic.field[2])) -
+			EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+
+		if (trb_comp_code != COMP_STOP_INVAL) {
+			frame->actual_length = len;
+			td->urb->actual_length += len;
+		}
+	}
+
+	return finish_td(xhci, td, event_trb, event, ep, status, false);
+}
+
+static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+			struct xhci_transfer_event *event,
+			struct xhci_virt_ep *ep, int *status)
+{
+	struct xhci_ring *ep_ring;
+	struct urb_priv *urb_priv;
+	struct usb_iso_packet_descriptor *frame;
+	int idx;
+
+	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+	urb_priv = td->urb->hcpriv;
+	idx = urb_priv->td_cnt;
+	frame = &td->urb->iso_frame_desc[idx];
+
+	/* The transfer is partly done. */
+	frame->status = -EXDEV;
+
+	/* report zero actual length for the skipped TD */
+	frame->actual_length = 0;
+
+	/* Update ring dequeue pointer */
+	while (ep_ring->dequeue != td->last_trb)
+		inc_deq(xhci, ep_ring);
+	inc_deq(xhci, ep_ring);
+
+	return finish_td(xhci, td, NULL, event, ep, status, true);
+}
+
+/*
+ * Process bulk and interrupt tds, update urb status and actual_length.
+ */
+static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
+	union xhci_trb *event_trb, struct xhci_transfer_event *event,
+	struct xhci_virt_ep *ep, int *status)
+{
+	struct xhci_ring *ep_ring;
+	union xhci_trb *cur_trb;
+	struct xhci_segment *cur_seg;
+	u32 trb_comp_code;
+
+	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+
+	switch (trb_comp_code) {
+	case COMP_SUCCESS:
+		/* Double check that the HW transferred everything. */
+		if (event_trb != td->last_trb ||
+		    EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+			xhci_warn(xhci, "WARN Successful completion "
+					"on short TX\n");
+			if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
+				*status = -EREMOTEIO;
+			else
+				*status = 0;
+			if ((xhci->quirks & XHCI_TRUST_TX_LENGTH))
+				trb_comp_code = COMP_SHORT_TX;
+		} else {
+			*status = 0;
+		}
+		break;
+	case COMP_SHORT_TX:
+		if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
+			*status = -EREMOTEIO;
+		else
+			*status = 0;
+		break;
+	default:
+		/* Others already handled above */
+		break;
+	}
+	if (trb_comp_code == COMP_SHORT_TX)
+		xhci_dbg(xhci, "ep %#x - asked for %d bytes, "
+				"%d bytes untransferred\n",
+				td->urb->ep->desc.bEndpointAddress,
+				td->urb->transfer_buffer_length,
+				EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)));
+	/* Fast path - was this the last TRB in the TD for this URB? */
+	if (event_trb == td->last_trb) {
+		if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+			td->urb->actual_length =
+				td->urb->transfer_buffer_length -
+				EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+			if (td->urb->transfer_buffer_length <
+					td->urb->actual_length) {
+				xhci_warn(xhci, "HC gave bad length "
+						"of %d bytes left\n",
+					  EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)));
+				td->urb->actual_length = 0;
+				if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
+					*status = -EREMOTEIO;
+				else
+					*status = 0;
+			}
+			/* Don't overwrite a previously set error code */
+			if (*status == -EINPROGRESS) {
+				if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
+					*status = -EREMOTEIO;
+				else
+					*status = 0;
+			}
+		} else {
+			td->urb->actual_length =
+				td->urb->transfer_buffer_length;
+			/* Ignore a short packet completion if the
+			 * untransferred length was zero.
+			 */
+			if (*status == -EREMOTEIO)
+				*status = 0;
+		}
+	} else {
+		/* Slow path - walk the list, starting from the dequeue
+		 * pointer, to get the actual length transferred.
+		 */
+		td->urb->actual_length = 0;
+		for (cur_trb = ep_ring->dequeue, cur_seg = ep_ring->deq_seg;
+				cur_trb != event_trb;
+				next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) {
+			if (!TRB_TYPE_NOOP_LE32(cur_trb->generic.field[3]) &&
+			    !TRB_TYPE_LINK_LE32(cur_trb->generic.field[3]))
+				td->urb->actual_length +=
+					TRB_LEN(le32_to_cpu(cur_trb->generic.field[2]));
+		}
+		/* If the ring didn't stop on a Link or No-op TRB, add
+		 * in the actual bytes transferred from the Normal TRB
+		 */
+		if (trb_comp_code != COMP_STOP_INVAL)
+			td->urb->actual_length +=
+				TRB_LEN(le32_to_cpu(cur_trb->generic.field[2])) -
+				EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+	}
+
+	return finish_td(xhci, td, event_trb, event, ep, status, false);
+}
+
+/*
+ * If this function returns an error condition, it means it got a Transfer
+ * event with a corrupted Slot ID, Endpoint ID, or TRB DMA address.
+ * At this point, the host controller is probably hosed and should be reset.
+ */
+static int handle_tx_event(struct xhci_hcd *xhci,
+		struct xhci_transfer_event *event)
+	__releases(&xhci->lock)
+	__acquires(&xhci->lock)
+{
+	struct xhci_virt_device *xdev;
+	struct xhci_virt_ep *ep;
+	struct xhci_ring *ep_ring;
+	unsigned int slot_id;
+	int ep_index;
+	struct xhci_td *td = NULL;
+	dma_addr_t event_dma;
+	struct xhci_segment *event_seg;
+	union xhci_trb *event_trb;
+	struct urb *urb = NULL;
+	int status = -EINPROGRESS;
+	struct urb_priv *urb_priv;
+	struct xhci_ep_ctx *ep_ctx;
+	struct list_head *tmp;
+	u32 trb_comp_code;
+	int ret = 0;
+	int td_num = 0;
+
+	slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+	xdev = xhci->devs[slot_id];
+	if (!xdev) {
+		xhci_err(xhci, "ERROR Transfer event pointed to bad slot\n");
+		xhci_err(xhci, "@%016llx %08x %08x %08x %08x\n",
+			 (unsigned long long) xhci_trb_virt_to_dma(
+				 xhci->event_ring->deq_seg,
+				 xhci->event_ring->dequeue),
+			 lower_32_bits(le64_to_cpu(event->buffer)),
+			 upper_32_bits(le64_to_cpu(event->buffer)),
+			 le32_to_cpu(event->transfer_len),
+			 le32_to_cpu(event->flags));
+		xhci_dbg(xhci, "Event ring:\n");
+		xhci_debug_segment(xhci, xhci->event_ring->deq_seg);
+		return -ENODEV;
+	}
+
+	/* Endpoint ID is 1 based, our index is zero based */
+	ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
+	ep = &xdev->eps[ep_index];
+	ep_ring = xhci_dma_to_transfer_ring(ep, le64_to_cpu(event->buffer));
+	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+	if (!ep_ring ||
+	    (le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) ==
+	    EP_STATE_DISABLED) {
+		xhci_err(xhci, "ERROR Transfer event for disabled endpoint "
+				"or incorrect stream ring\n");
+		xhci_err(xhci, "@%016llx %08x %08x %08x %08x\n",
+			 (unsigned long long) xhci_trb_virt_to_dma(
+				 xhci->event_ring->deq_seg,
+				 xhci->event_ring->dequeue),
+			 lower_32_bits(le64_to_cpu(event->buffer)),
+			 upper_32_bits(le64_to_cpu(event->buffer)),
+			 le32_to_cpu(event->transfer_len),
+			 le32_to_cpu(event->flags));
+		xhci_dbg(xhci, "Event ring:\n");
+		xhci_debug_segment(xhci, xhci->event_ring->deq_seg);
+		return -ENODEV;
+	}
+
+	/* Count the TDs on the ring if ep->skip is set */
+	if (ep->skip) {
+		list_for_each(tmp, &ep_ring->td_list)
+			td_num++;
+	}
+
+	event_dma = le64_to_cpu(event->buffer);
+	trb_comp_code = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
+	/* Look for common error cases */
+	switch (trb_comp_code) {
+	/* Skip codes that require special handling depending on
+	 * transfer type
+	 */
+	case COMP_SUCCESS:
+		if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0)
+			break;
+		if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
+			trb_comp_code = COMP_SHORT_TX;
+		else
+			xhci_warn_ratelimited(xhci,
+					"WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk?\n");
+	case COMP_SHORT_TX:
+		break;
+	case COMP_STOP:
+		xhci_dbg(xhci, "Stopped on Transfer TRB\n");
+		break;
+	case COMP_STOP_INVAL:
+		xhci_dbg(xhci, "Stopped on No-op or Link TRB\n");
+		break;
+	case COMP_STALL:
+		xhci_dbg(xhci, "Stalled endpoint\n");
+		ep->ep_state |= EP_HALTED;
+		status = -EPIPE;
+		break;
+	case COMP_TRB_ERR:
+		xhci_warn(xhci, "WARN: TRB error on endpoint\n");
+		status = -EILSEQ;
+		break;
+	case COMP_SPLIT_ERR:
+	case COMP_TX_ERR:
+		xhci_dbg(xhci, "Transfer error on endpoint\n");
+		status = -EPROTO;
+		break;
+	case COMP_BABBLE:
+		xhci_dbg(xhci, "Babble error on endpoint\n");
+		status = -EOVERFLOW;
+		break;
+	case COMP_DB_ERR:
+		xhci_warn(xhci, "WARN: HC couldn't access mem fast enough\n");
+		status = -ENOSR;
+		break;
+	case COMP_BW_OVER:
+		xhci_warn(xhci, "WARN: bandwidth overrun event on endpoint\n");
+		break;
+	case COMP_BUFF_OVER:
+		xhci_warn(xhci, "WARN: buffer overrun event on endpoint\n");
+		break;
+	case COMP_UNDERRUN:
+		/*
+		 * When the Isoch ring is empty, the xHC will generate
+		 * a Ring Overrun Event for IN Isoch endpoint or Ring
+		 * Underrun Event for OUT Isoch endpoint.
+		 */
+		xhci_dbg(xhci, "underrun event on endpoint\n");
+		if (!list_empty(&ep_ring->td_list))
+			xhci_dbg(xhci, "Underrun Event for slot %d ep %d "
+					"still with TDs queued?\n",
+				 TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
+				 ep_index);
+		goto cleanup;
+	case COMP_OVERRUN:
+		xhci_dbg(xhci, "overrun event on endpoint\n");
+		if (!list_empty(&ep_ring->td_list))
+			xhci_dbg(xhci, "Overrun Event for slot %d ep %d "
+					"still with TDs queued?\n",
+				 TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
+				 ep_index);
+		goto cleanup;
+	case COMP_DEV_ERR:
+		xhci_warn(xhci, "WARN: detect an incompatible device");
+		status = -EPROTO;
+		break;
+	case COMP_MISSED_INT:
+		/*
+		 * When a missed service error is encountered, the xHC may
+		 * have missed one or more isoc TDs.
+		 * Set the skip flag on the endpoint; the missed TDs will be
+		 * completed as short transfers the next time the ring is
+		 * processed.
+		 */
+		ep->skip = true;
+		xhci_dbg(xhci, "Miss service interval error, set skip flag\n");
+		goto cleanup;
+	default:
+		if (xhci_is_vendor_info_code(xhci, trb_comp_code)) {
+			status = 0;
+			break;
+		}
+		xhci_warn(xhci, "ERROR Unknown event condition, HC probably "
+				"busted\n");
+		goto cleanup;
+	}
+
+	do {
+		/* This TRB should be in the TD at the head of this ring's
+		 * TD list.
+		 */
+		if (list_empty(&ep_ring->td_list)) {
+			/*
+			 * A stopped endpoint may generate an extra completion
+			 * event if the device was suspended.  Don't print
+			 * warnings.
+			 */
+			if (!(trb_comp_code == COMP_STOP ||
+						trb_comp_code == COMP_STOP_INVAL)) {
+				xhci_warn(xhci, "WARN Event TRB for slot %d ep %d with no TDs queued?\n",
+						TRB_TO_SLOT_ID(le32_to_cpu(event->flags)),
+						ep_index);
+				xhci_dbg(xhci, "Event TRB with TRB type ID %u\n",
+						(le32_to_cpu(event->flags) &
+						 TRB_TYPE_BITMASK)>>10);
+				xhci_print_trb_offsets(xhci, (union xhci_trb *) event);
+			}
+			if (ep->skip) {
+				ep->skip = false;
+				xhci_dbg(xhci, "td_list is empty while skip "
+						"flag set. Clear skip flag.\n");
+			}
+			ret = 0;
+			goto cleanup;
+		}
+
+		/* With ep->skip set, td_num == 0 means we've skipped all the
+		 * TDs on the ep ring.
+		 */
+		if (ep->skip && td_num == 0) {
+			ep->skip = false;
+			xhci_dbg(xhci, "All tds on the ep_ring skipped. "
+						"Clear skip flag.\n");
+			ret = 0;
+			goto cleanup;
+		}
+
+		td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list);
+		if (ep->skip)
+			td_num--;
+
+		/* Is this a TRB in the currently executing TD? */
+		event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue,
+				td->last_trb, event_dma);
+
+		/*
+		 * Skip the Force Stopped Event.  The event_trb (event_dma) of
+		 * the FSE is not in the current TD pointed to by
+		 * ep_ring->dequeue, because the hardware dequeue pointer is
+		 * still at the previous TRB of the current TD.  The previous
+		 * TRB may be a Link TRB or the last TRB of the previous TD.
+		 * The command completion handler will take care of the rest.
+		 */
+		if (!event_seg && trb_comp_code == COMP_STOP_INVAL) {
+			ret = 0;
+			goto cleanup;
+		}
+
+		if (!event_seg) {
+			if (!ep->skip ||
+			    !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
+				/* Some host controllers give a spurious
+				 * successful event after a short transfer.
+				 * Ignore it.
+				 */
+				if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) &&
+						ep_ring->last_td_was_short) {
+					ep_ring->last_td_was_short = false;
+					ret = 0;
+					goto cleanup;
+				}
+				/* HC is busted, give up! */
+				xhci_err(xhci,
+					"ERROR Transfer event TRB DMA ptr not "
+					"part of current TD\n");
+				return -ESHUTDOWN;
+			}
+
+			ret = skip_isoc_td(xhci, td, event, ep, &status);
+			goto cleanup;
+		}
+		if (trb_comp_code == COMP_SHORT_TX)
+			ep_ring->last_td_was_short = true;
+		else
+			ep_ring->last_td_was_short = false;
+
+		if (ep->skip) {
+			xhci_dbg(xhci, "Found td. Clear skip flag.\n");
+			ep->skip = false;
+		}
+
+		event_trb = &event_seg->trbs[(event_dma - event_seg->dma) /
+						sizeof(*event_trb)];
+		/*
+		 * No-op TRB should not trigger interrupts.
+		 * If event_trb is a no-op TRB, it means the
+		 * corresponding TD has been cancelled. Just ignore
+		 * the TD.
+		 */
+		if (TRB_TYPE_NOOP_LE32(event_trb->generic.field[3])) {
+			xhci_dbg(xhci,
+				 "event_trb is a no-op TRB. Skip it\n");
+			goto cleanup;
+		}
+
+		/* Now update the urb's actual_length and give back to
+		 * the core
+		 */
+		if (usb_endpoint_xfer_control(&td->urb->ep->desc))
+			ret = process_ctrl_td(xhci, td, event_trb, event, ep,
+						 &status);
+		else if (usb_endpoint_xfer_isoc(&td->urb->ep->desc))
+			ret = process_isoc_td(xhci, td, event_trb, event, ep,
+						 &status);
+		else
+			ret = process_bulk_intr_td(xhci, td, event_trb, event,
+						 ep, &status);
+
+cleanup:
+		/*
+		 * Do not update the event ring dequeue pointer if ep->skip is
+		 * set; we will roll back to continue processing missed TDs.
+		 */
+		if (trb_comp_code == COMP_MISSED_INT || !ep->skip) {
+			inc_deq(xhci, xhci->event_ring);
+		}
+
+		if (ret) {
+			urb = td->urb;
+			urb_priv = urb->hcpriv;
+			/* Leave the TD around for the reset endpoint function
+			 * to use (but only if it's not a control endpoint,
+			 * since we already queued the Set TR Dequeue Pointer
+			 * command for stalled control endpoints).
+			 */
+			if (usb_endpoint_xfer_control(&urb->ep->desc) ||
+				(trb_comp_code != COMP_STALL &&
+					trb_comp_code != COMP_BABBLE))
+				xhci_urb_free_priv(xhci, urb_priv);
+			else
+				kfree(urb_priv);
+
+			usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);
+			if ((urb->actual_length != urb->transfer_buffer_length &&
+						(urb->transfer_flags &
+						 URB_SHORT_NOT_OK)) ||
+					(status != 0 &&
+					 !usb_endpoint_xfer_isoc(&urb->ep->desc)))
+				xhci_dbg(xhci, "Giveback URB %p, len = %d, "
+						"expected = %d, status = %d\n",
+						urb, urb->actual_length,
+						urb->transfer_buffer_length,
+						status);
+			spin_unlock(&xhci->lock);
+			/* EHCI, UHCI, and OHCI always unconditionally set the
+			 * urb->status of an isochronous endpoint to 0.
+			 */
+			if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
+				status = 0;
+			usb_hcd_giveback_urb(bus_to_hcd(urb->dev->bus), urb, status);
+			spin_lock(&xhci->lock);
+		}
+
+	/*
+	 * If ep->skip is set, there are missed TDs on the endpoint ring
+	 * that need to be taken care of. Process them as short transfers
+	 * until we reach the TD pointed to by the event.
+	 */
+	} while (ep->skip && trb_comp_code != COMP_MISSED_INT);
+
+	return 0;
+}
+
+/*
+ * This function handles all OS-owned events on the event ring.  It may drop
+ * xhci->lock between event processing (e.g. to pass up port status changes).
+ * Returns >0 for "possibly more events to process" (caller should call again),
+ * otherwise 0 if done.  In future, <0 returns should indicate error code.
+ */
+static int xhci_handle_event(struct xhci_hcd *xhci)
+{
+	union xhci_trb *event;
+	int update_ptrs = 1;
+	int ret;
+
+	if (!xhci->event_ring || !xhci->event_ring->dequeue) {
+		xhci->error_bitmask |= 1 << 1;
+		return 0;
+	}
+
+	event = xhci->event_ring->dequeue;
+	/* Does the HC or OS own the TRB? */
+	if ((le32_to_cpu(event->event_cmd.flags) & TRB_CYCLE) !=
+	    xhci->event_ring->cycle_state) {
+		xhci->error_bitmask |= 1 << 2;
+		return 0;
+	}
+
+	/*
+	 * Barrier between reading the TRB_CYCLE (valid) flag above and any
+	 * speculative reads of the event's flags/data below.
+	 */
+	rmb();
+	/* FIXME: Handle more event types. */
+	switch ((le32_to_cpu(event->event_cmd.flags) & TRB_TYPE_BITMASK)) {
+	case TRB_TYPE(TRB_COMPLETION):
+		handle_cmd_completion(xhci, &event->event_cmd);
+		break;
+	case TRB_TYPE(TRB_PORT_STATUS):
+		handle_port_status(xhci, event);
+		update_ptrs = 0;
+		break;
+	case TRB_TYPE(TRB_TRANSFER):
+		ret = handle_tx_event(xhci, &event->trans_event);
+		if (ret < 0)
+			xhci->error_bitmask |= 1 << 9;
+		else
+			update_ptrs = 0;
+		break;
+	case TRB_TYPE(TRB_DEV_NOTE):
+		handle_device_notification(xhci, event);
+		break;
+	default:
+		if ((le32_to_cpu(event->event_cmd.flags) & TRB_TYPE_BITMASK) >=
+		    TRB_TYPE(48))
+			handle_vendor_event(xhci, event);
+		else
+			xhci->error_bitmask |= 1 << 3;
+	}
+	/* Any of the above functions may drop and re-acquire the lock, so check
+	 * to make sure a watchdog timer didn't mark the host as non-responsive.
+	 */
+	if (xhci->xhc_state & XHCI_STATE_DYING) {
+		xhci_dbg(xhci, "xHCI host dying, returning from "
+				"event handler.\n");
+		return 0;
+	}
+
+	if (update_ptrs)
+		/* Update SW event ring dequeue pointer */
+		inc_deq(xhci, xhci->event_ring);
+
+	/* Are there more items on the event ring?  Caller will call us again to
+	 * check.
+	 */
+	return 1;
+}
+
+/*
+ * xHCI spec says we can get an interrupt, and if the HC has an error condition,
+ * we might get bad data out of the event ring.  Section 4.10.2.7 has a list of
+ * indicators of an event TRB error, but we check the status *first* to be safe.
+ */
+irqreturn_t xhci_irq(struct usb_hcd *hcd)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	u32 status;
+	u64 temp_64;
+	union xhci_trb *event_ring_deq;
+	dma_addr_t deq;
+
+	spin_lock(&xhci->lock);
+	/* Check if the xHC generated the interrupt, or the irq is shared */
+	status = xhci_readl(xhci, &xhci->op_regs->status);
+	if (status == 0xffffffff)
+		goto hw_died;
+
+	if (!(status & STS_EINT)) {
+		spin_unlock(&xhci->lock);
+		return IRQ_NONE;
+	}
+	if (status & STS_FATAL) {
+		xhci_warn(xhci, "WARNING: Host System Error\n");
+		xhci_halt(xhci);
+hw_died:
+		spin_unlock(&xhci->lock);
+		return -ESHUTDOWN;
+	}
+
+	/*
+	 * Clear the op reg interrupt status first,
+	 * so we can receive interrupts from other MSI-X interrupters.
+	 * Write 1 to clear the interrupt status.
+	 */
+	status |= STS_EINT;
+	xhci_writel(xhci, status, &xhci->op_regs->status);
+	/* FIXME when MSI-X is supported and there are multiple vectors */
+	/* Clear the MSI-X event interrupt status */
+
+	if (hcd->irq) {
+		u32 irq_pending;
+		/* Acknowledge the PCI interrupt */
+		irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending);
+		irq_pending |= IMAN_IP;
+		xhci_writel(xhci, irq_pending, &xhci->ir_set->irq_pending);
+	}
+
+	if (xhci->xhc_state & XHCI_STATE_DYING) {
+		xhci_dbg(xhci, "xHCI dying, ignoring interrupt. "
+				"Shouldn't IRQs be disabled?\n");
+		/* Clear the event handler busy flag (RW1C);
+		 * the event ring should be empty.
+		 */
+		temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+		xhci_write_64(xhci, temp_64 | ERST_EHB,
+				&xhci->ir_set->erst_dequeue);
+		spin_unlock(&xhci->lock);
+
+		return IRQ_HANDLED;
+	}
+
+	event_ring_deq = xhci->event_ring->dequeue;
+	/* FIXME this should be a delayed service routine
+	 * that clears the EHB.
+	 */
+	while (xhci_handle_event(xhci) > 0) {}
+
+	temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+	/* If necessary, update the HW's version of the event ring deq ptr. */
+	if (event_ring_deq != xhci->event_ring->dequeue) {
+		deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
+				xhci->event_ring->dequeue);
+		if (deq == 0)
+			xhci_warn(xhci, "WARN something wrong with SW event "
+					"ring dequeue ptr.\n");
+		/* Update HC event ring dequeue pointer */
+		temp_64 &= ERST_PTR_MASK;
+		temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
+	}
+
+	/* Clear the event handler busy flag (RW1C); event ring is empty. */
+	temp_64 |= ERST_EHB;
+	xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue);
+
+	spin_unlock(&xhci->lock);
+
+	return IRQ_HANDLED;
+}
+
+irqreturn_t xhci_msi_irq(int irq, struct usb_hcd *hcd)
+{
+	return xhci_irq(hcd);
+}
+
+/****		Endpoint Ring Operations	****/
+
+/*
+ * Generic function for queueing a TRB on a ring.
+ * The caller must have checked to make sure there's room on the ring.
+ *
+ * @more_trbs_coming:	Will you enqueue more TRBs before calling
+ *			prepare_transfer()?
+ */
+static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
+		bool more_trbs_coming,
+		u32 field1, u32 field2, u32 field3, u32 field4)
+{
+	struct xhci_generic_trb *trb;
+
+	trb = &ring->enqueue->generic;
+	trb->field[0] = cpu_to_le32(field1);
+	trb->field[1] = cpu_to_le32(field2);
+	trb->field[2] = cpu_to_le32(field3);
+	trb->field[3] = cpu_to_le32(field4);
+	inc_enq(xhci, ring, more_trbs_coming);
+}
+
+/*
+ * Does various checks on the endpoint ring, and makes it ready to queue num_trbs.
+ * FIXME allocate segments if the ring is full.
+ */
+static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
+		u32 ep_state, unsigned int num_trbs, gfp_t mem_flags)
+{
+	unsigned int num_trbs_needed;
+
+	/* Make sure the endpoint has been added to xHC schedule */
+	switch (ep_state) {
+	case EP_STATE_DISABLED:
+		/*
+		 * USB core changed config/interfaces without notifying us,
+		 * or hardware is reporting the wrong state.
+		 */
+		xhci_warn(xhci, "WARN urb submitted to disabled ep\n");
+		return -ENOENT;
+	case EP_STATE_ERROR:
+		xhci_warn(xhci, "WARN waiting for error on ep to be cleared\n");
+		/* FIXME event handling code for error needs to clear it */
+		/* XXX not sure if this should be -ENOENT or not */
+		return -EINVAL;
+	case EP_STATE_HALTED:
+		xhci_dbg(xhci, "WARN halted endpoint, queueing URB anyway.\n");
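+		/* Fall through: queue the URB anyway */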
+	case EP_STATE_STOPPED:
+	case EP_STATE_RUNNING:
+		break;
+	default:
+		xhci_err(xhci, "ERROR unknown endpoint state for ep\n");
+		/*
+		 * FIXME issue Configure Endpoint command to try to get the HC
+		 * back into a known state.
+		 */
+		return -EINVAL;
+	}
+
+	while (1) {
+		if (room_on_ring(xhci, ep_ring, num_trbs))
+			break;
+
+		if (ep_ring == xhci->cmd_ring) {
+			xhci_err(xhci, "Expanding the command ring is not supported\n");
+			return -ENOMEM;
+		}
+
+		xhci_dbg(xhci, "ERROR no room on ep ring, "
+					"try ring expansion\n");
+		num_trbs_needed = num_trbs - ep_ring->num_trbs_free;
+		if (xhci_ring_expansion(xhci, ep_ring, num_trbs_needed,
+					mem_flags)) {
+			xhci_err(xhci, "Ring expansion failed\n");
+			return -ENOMEM;
+		}
+	}
+
+	if (enqueue_is_link_trb(ep_ring)) {
+		struct xhci_ring *ring = ep_ring;
+		union xhci_trb *next;
+
+		next = ring->enqueue;
+
+		while (last_trb(xhci, ring, ring->enq_seg, next)) {
+			/* If we're not dealing with 0.95 hardware or isoc rings
+			 * on AMD 0.96 host, clear the chain bit.
+			 */
+			if (!xhci_link_trb_quirk(xhci) &&
+					!(ring->type == TYPE_ISOC &&
+					 (xhci->quirks & XHCI_AMD_0x96_HOST)))
+				next->link.control &= cpu_to_le32(~TRB_CHAIN);
+			else
+				next->link.control |= cpu_to_le32(TRB_CHAIN);
+
+			wmb();
+			next->link.control ^= cpu_to_le32(TRB_CYCLE);
+
+			/* Toggle the cycle bit after the last ring segment. */
+			if (last_trb_on_last_seg(xhci, ring, ring->enq_seg, next)) {
+				ring->cycle_state = (ring->cycle_state ? 0 : 1);
+			}
+			ring->enq_seg = ring->enq_seg->next;
+			ring->enqueue = ring->enq_seg->trbs;
+			next = ring->enqueue;
+		}
+	}
+
+	return 0;
+}
+
+static int prepare_transfer(struct xhci_hcd *xhci,
+		struct xhci_virt_device *xdev,
+		unsigned int ep_index,
+		unsigned int stream_id,
+		unsigned int num_trbs,
+		struct urb *urb,
+		unsigned int td_index,
+		gfp_t mem_flags)
+{
+	int ret;
+	struct urb_priv *urb_priv;
+	struct xhci_td	*td;
+	struct xhci_ring *ep_ring;
+	struct xhci_ep_ctx *ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+
+	ep_ring = xhci_stream_id_to_ring(xdev, ep_index, stream_id);
+	if (!ep_ring) {
+		xhci_dbg(xhci, "Can't prepare ring for bad stream ID %u\n",
+				stream_id);
+		return -EINVAL;
+	}
+
+	ret = prepare_ring(xhci, ep_ring,
+			   le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK,
+			   num_trbs, mem_flags);
+	if (ret)
+		return ret;
+
+	urb_priv = urb->hcpriv;
+	td = urb_priv->td[td_index];
+
+	INIT_LIST_HEAD(&td->td_list);
+	INIT_LIST_HEAD(&td->cancelled_td_list);
+
+	if (td_index == 0) {
+		ret = usb_hcd_link_urb_to_ep(bus_to_hcd(urb->dev->bus), urb);
+		if (unlikely(ret))
+			return ret;
+	}
+
+	td->urb = urb;
+	/* Add this TD to the tail of the endpoint ring's TD list */
+	list_add_tail(&td->td_list, &ep_ring->td_list);
+	td->start_seg = ep_ring->enq_seg;
+	td->first_trb = ep_ring->enqueue;
+
+	urb_priv->td[td_index] = td;
+
+	return 0;
+}
+
+static unsigned int count_sg_trbs_needed(struct xhci_hcd *xhci, struct urb *urb)
+{
+	int num_sgs, num_trbs, running_total, temp, i;
+	struct scatterlist *sg;
+
+	sg = NULL;
+	num_sgs = urb->num_mapped_sgs;
+	temp = urb->transfer_buffer_length;
+
+	num_trbs = 0;
+	for_each_sg(urb->sg, sg, num_sgs, i) {
+		unsigned int len = sg_dma_len(sg);
+
+		/* Scatter gather list entries may cross 64KB boundaries */
+		running_total = TRB_MAX_BUFF_SIZE -
+			(sg_dma_address(sg) & (TRB_MAX_BUFF_SIZE - 1));
+		running_total &= TRB_MAX_BUFF_SIZE - 1;
+		if (running_total != 0)
+			num_trbs++;
+
+		/* How many more 64KB chunks to transfer, how many more TRBs? */
+		while (running_total < sg_dma_len(sg) && running_total < temp) {
+			num_trbs++;
+			running_total += TRB_MAX_BUFF_SIZE;
+		}
+		len = min_t(int, len, temp);
+		temp -= len;
+		if (temp == 0)
+			break;
+	}
+	return num_trbs;
+}
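+
+/*
+ * Example with illustrative values: an sg entry at DMA address 0x2000f000
+ * with a length of 0x3000 needs two TRBs, since a TRB buffer may not cross
+ * a 64KB boundary: one TRB for the 0x1000 bytes up to 0x20010000, and one
+ * for the remaining 0x2000 bytes.
+ */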
+
+static void check_trb_math(struct urb *urb, int num_trbs, int running_total)
+{
+	if (num_trbs != 0)
+		dev_err(&urb->dev->dev, "%s - ep %#x - Miscalculated number of "
+				"TRBs, %d left\n", __func__,
+				urb->ep->desc.bEndpointAddress, num_trbs);
+	if (running_total != urb->transfer_buffer_length)
+		dev_err(&urb->dev->dev, "%s - ep %#x - Miscalculated tx length, "
+				"queued %#x (%d), asked for %#x (%d)\n",
+				__func__,
+				urb->ep->desc.bEndpointAddress,
+				running_total, running_total,
+				urb->transfer_buffer_length,
+				urb->transfer_buffer_length);
+}
+
+static void giveback_first_trb(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index, unsigned int stream_id, int start_cycle,
+		struct xhci_generic_trb *start_trb)
+{
+	/*
+	 * Pass all the TRBs to the hardware at once and make sure this write
+	 * isn't reordered.
+	 */
+	wmb();
+	if (start_cycle)
+		start_trb->field[3] |= cpu_to_le32(start_cycle);
+	else
+		start_trb->field[3] &= cpu_to_le32(~TRB_CYCLE);
+	xhci_ring_ep_doorbell(xhci, slot_id, ep_index, stream_id);
+}
+
+/*
+ * xHCI uses normal TRBs for both bulk and interrupt.  When the interrupt
+ * endpoint is to be serviced, the xHC will consume (at most) one TD.  A TD
+ * (comprised of sg list entries) can take several service intervals to
+ * transmit.
+ */
+int xhci_queue_intr_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index)
+{
+	struct xhci_ep_ctx *ep_ctx = xhci_get_ep_ctx(xhci,
+			xhci->devs[slot_id]->out_ctx, ep_index);
+	int xhci_interval;
+	int ep_interval;
+
+	xhci_interval = EP_INTERVAL_TO_UFRAMES(le32_to_cpu(ep_ctx->ep_info));
+	ep_interval = urb->interval;
+	/* Convert to microframes */
+	if (urb->dev->speed == USB_SPEED_LOW ||
+			urb->dev->speed == USB_SPEED_FULL)
+		ep_interval *= 8;
+	/* FIXME change this to a warning and a suggestion to use the new API
+	 * to set the polling interval (once the API is added).
+	 */
+	if (xhci_interval != ep_interval) {
+		if (printk_ratelimit())
+			dev_dbg(&urb->dev->dev, "Driver uses different interval"
+					" (%d microframe%s) than xHCI "
+					"(%d microframe%s)\n",
+					ep_interval,
+					ep_interval == 1 ? "" : "s",
+					xhci_interval,
+					xhci_interval == 1 ? "" : "s");
+		urb->interval = xhci_interval;
+		/* Convert back to frames for LS/FS devices */
+		if (urb->dev->speed == USB_SPEED_LOW ||
+				urb->dev->speed == USB_SPEED_FULL)
+			urb->interval /= 8;
+	}
+	return xhci_queue_bulk_tx(xhci, mem_flags, urb, slot_id, ep_index);
+}
+
+/*
+ * The TD size is the number of bytes remaining in the TD (including this TRB),
+ * right shifted by 10.
+ * It must fit in bits 21:17, so it can't be bigger than 31.
+ */
+static u32 xhci_td_remainder(unsigned int remainder)
+{
+	u32 max = (1 << (21 - 17 + 1)) - 1;
+
+	if ((remainder >> 10) >= max)
+		return max << 17;
+	else
+		return (remainder >> 10) << 17;
+}
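+
+/*
+ * Example with illustrative values: with 20480 bytes left in the TD the
+ * field is (20480 >> 10) << 17 = 20 << 17; with 100000 bytes left,
+ * 100000 >> 10 = 97 exceeds the 5-bit maximum, so the field saturates
+ * at 31 << 17.
+ */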
+
+/*
+ * For xHCI 1.0 host controllers, TD size is the number of max packet sized
+ * packets remaining in the TD (*not* including this TRB).
+ *
+ * Total TD packet count = total_packet_count =
+ *     DIV_ROUND_UP(TD size in bytes / wMaxPacketSize)
+ *
+ * Packets transferred up to and including this TRB = packets_transferred =
+ *     rounddown(total bytes transferred including this TRB / wMaxPacketSize)
+ *
+ * TD size = total_packet_count - packets_transferred
+ *
+ * It must fit in bits 21:17, so it can't be bigger than 31.
+ * The last TRB in a TD must have the TD size set to zero.
+ */
+static u32 xhci_v1_0_td_remainder(int running_total, int trb_buff_len,
+		unsigned int total_packet_count, struct urb *urb,
+		unsigned int num_trbs_left)
+{
+	int packets_transferred;
+
+	/* One TRB with a zero-length data packet. */
+	if (num_trbs_left == 0 || (running_total == 0 && trb_buff_len == 0))
+		return 0;
+
+	/* All the TRB queueing functions don't count the current TRB in
+	 * running_total.
+	 */
+	packets_transferred = (running_total + trb_buff_len) /
+		GET_MAX_PACKET(usb_endpoint_maxp(&urb->ep->desc));
+
+	if ((total_packet_count - packets_transferred) > 31)
+		return 31 << 17;
+	return (total_packet_count - packets_transferred) << 17;
+}
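+
+/*
+ * Example with illustrative values: a 3000-byte TD on an endpoint with
+ * wMaxPacketSize 1024 has total_packet_count = 3.  While queueing a first
+ * TRB of 2048 bytes (running_total still 0), packets_transferred is
+ * 2048 / 1024 = 2, so the TD size field is 3 - 2 = 1.  The last TRB of
+ * the TD (num_trbs_left == 0) always reports 0.
+ */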
+
+static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index)
+{
+	struct xhci_ring *ep_ring;
+	unsigned int num_trbs;
+	struct urb_priv *urb_priv;
+	struct xhci_td *td;
+	struct scatterlist *sg;
+	int num_sgs;
+	int trb_buff_len, this_sg_len, running_total;
+	unsigned int total_packet_count;
+	bool first_trb;
+	u64 addr;
+	bool more_trbs_coming;
+
+	struct xhci_generic_trb *start_trb;
+	int start_cycle;
+
+	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
+	if (!ep_ring)
+		return -EINVAL;
+
+	num_trbs = count_sg_trbs_needed(xhci, urb);
+	num_sgs = urb->num_mapped_sgs;
+	total_packet_count = DIV_ROUND_UP(urb->transfer_buffer_length,
+			usb_endpoint_maxp(&urb->ep->desc));
+
+	trb_buff_len = prepare_transfer(xhci, xhci->devs[slot_id],
+			ep_index, urb->stream_id,
+			num_trbs, urb, 0, mem_flags);
+	if (trb_buff_len < 0)
+		return trb_buff_len;
+
+	urb_priv = urb->hcpriv;
+	td = urb_priv->td[0];
+
+	/*
+	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
+	 * until we've finished creating all the other TRBs.  The ring's cycle
+	 * state may change as we enqueue the other TRBs, so save it too.
+	 */
+	start_trb = &ep_ring->enqueue->generic;
+	start_cycle = ep_ring->cycle_state;
+
+	running_total = 0;
+	/*
+	 * How much data is in the first TRB?
+	 *
+	 * There are three forces at work for TRB buffer pointers and lengths:
+	 * 1. We don't want to walk off the end of this sg-list entry buffer.
+	 * 2. The transfer length that the driver requested may be smaller than
+	 *    the amount of memory allocated for this scatter-gather list.
+	 * 3. TRBs buffers can't cross 64KB boundaries.
+	 */
+	sg = urb->sg;
+	addr = (u64) sg_dma_address(sg);
+	this_sg_len = sg_dma_len(sg);
+	trb_buff_len = TRB_MAX_BUFF_SIZE - (addr & (TRB_MAX_BUFF_SIZE - 1));
+	trb_buff_len = min_t(int, trb_buff_len, this_sg_len);
+	if (trb_buff_len > urb->transfer_buffer_length)
+		trb_buff_len = urb->transfer_buffer_length;
+
+	first_trb = true;
+	/* Queue the first TRB, even if it's zero-length */
+	do {
+		u32 field = 0;
+		u32 length_field = 0;
+		u32 remainder = 0;
+
+		/* Don't change the cycle bit of the first TRB until later */
+		if (first_trb) {
+			first_trb = false;
+			if (start_cycle == 0)
+				field |= 0x1;
+		} else
+			field |= ep_ring->cycle_state;
+
+		/* Chain all the TRBs together; clear the chain bit in the last
+		 * TRB to indicate it's the last TRB in the chain.
+		 */
+		if (num_trbs > 1) {
+			field |= TRB_CHAIN;
+		} else {
+			/* FIXME - add check for ZERO_PACKET flag before this */
+			td->last_trb = ep_ring->enqueue;
+			field |= TRB_IOC;
+		}
+
+		/* Only set interrupt on short packet for IN endpoints */
+		if (usb_urb_dir_in(urb))
+			field |= TRB_ISP;
+
+		if (TRB_MAX_BUFF_SIZE -
+				(addr & (TRB_MAX_BUFF_SIZE - 1)) < trb_buff_len) {
+			xhci_warn(xhci, "WARN: sg dma xfer crosses 64KB boundaries!\n");
+			xhci_dbg(xhci, "Next boundary@%#x, end dma = %#x\n",
+					(unsigned int) (addr + TRB_MAX_BUFF_SIZE) & ~(TRB_MAX_BUFF_SIZE - 1),
+					(unsigned int) addr + trb_buff_len);
+		}
+
+		/* Set the TRB length, TD size, and interrupter fields. */
+		if (xhci->hci_version < 0x100) {
+			remainder = xhci_td_remainder(
+					urb->transfer_buffer_length -
+					running_total);
+		} else {
+			remainder = xhci_v1_0_td_remainder(running_total,
+					trb_buff_len, total_packet_count, urb,
+					num_trbs - 1);
+		}
+		length_field = TRB_LEN(trb_buff_len) |
+			remainder |
+			TRB_INTR_TARGET(0);
+
+		if (num_trbs > 1)
+			more_trbs_coming = true;
+		else
+			more_trbs_coming = false;
+		queue_trb(xhci, ep_ring, more_trbs_coming,
+				lower_32_bits(addr),
+				upper_32_bits(addr),
+				length_field,
+				field | TRB_TYPE(TRB_NORMAL));
+		--num_trbs;
+		running_total += trb_buff_len;
+
+		/* Calculate length for next transfer --
+		 * Are we done queueing all the TRBs for this sg entry?
+		 */
+		this_sg_len -= trb_buff_len;
+		if (this_sg_len == 0) {
+			--num_sgs;
+			if (num_sgs == 0)
+				break;
+			sg = sg_next(sg);
+			addr = (u64) sg_dma_address(sg);
+			this_sg_len = sg_dma_len(sg);
+		} else {
+			addr += trb_buff_len;
+		}
+
+		trb_buff_len = TRB_MAX_BUFF_SIZE -
+			(addr & (TRB_MAX_BUFF_SIZE - 1));
+		trb_buff_len = min_t(int, trb_buff_len, this_sg_len);
+		if (running_total + trb_buff_len > urb->transfer_buffer_length)
+			trb_buff_len =
+				urb->transfer_buffer_length - running_total;
+	} while (running_total < urb->transfer_buffer_length);
+
+	check_trb_math(urb, num_trbs, running_total);
+	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+			start_cycle, start_trb);
+	return 0;
+}
+
+/* This is very similar to what ehci-q.c qtd_fill() does */
+int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index)
+{
+	struct xhci_ring *ep_ring;
+	struct urb_priv *urb_priv;
+	struct xhci_td *td;
+	int num_trbs;
+	struct xhci_generic_trb *start_trb;
+	bool first_trb;
+	bool more_trbs_coming;
+	int start_cycle;
+	u32 field, length_field;
+
+	int running_total, trb_buff_len, ret;
+	unsigned int total_packet_count;
+	u64 addr;
+
+	if (urb->num_sgs)
+		return queue_bulk_sg_tx(xhci, mem_flags, urb, slot_id, ep_index);
+
+	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
+	if (!ep_ring)
+		return -EINVAL;
+
+	num_trbs = 0;
+	/* How much data is (potentially) left before the 64KB boundary? */
+	running_total = TRB_MAX_BUFF_SIZE -
+		(urb->transfer_dma & (TRB_MAX_BUFF_SIZE - 1));
+	running_total &= TRB_MAX_BUFF_SIZE - 1;
+
+	/* If there's some data on this 64KB chunk, or we have to send a
+	 * zero-length transfer, we need at least one TRB
+	 */
+	if (running_total != 0 || urb->transfer_buffer_length == 0)
+		num_trbs++;
+	/* How many more 64KB chunks to transfer, how many more TRBs? */
+	while (running_total < urb->transfer_buffer_length) {
+		num_trbs++;
+		running_total += TRB_MAX_BUFF_SIZE;
+	}
+	/* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */
+
+	ret = prepare_transfer(xhci, xhci->devs[slot_id],
+			ep_index, urb->stream_id,
+			num_trbs, urb, 0, mem_flags);
+	if (ret < 0)
+		return ret;
+
+	urb_priv = urb->hcpriv;
+	td = urb_priv->td[0];
+
+	/*
+	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
+	 * until we've finished creating all the other TRBs.  The ring's cycle
+	 * state may change as we enqueue the other TRBs, so save it too.
+	 */
+	start_trb = &ep_ring->enqueue->generic;
+	start_cycle = ep_ring->cycle_state;
+
+	running_total = 0;
+	total_packet_count = DIV_ROUND_UP(urb->transfer_buffer_length,
+			usb_endpoint_maxp(&urb->ep->desc));
+	/* How much data is in the first TRB? */
+	addr = (u64) urb->transfer_dma;
+	trb_buff_len = TRB_MAX_BUFF_SIZE -
+		(urb->transfer_dma & (TRB_MAX_BUFF_SIZE - 1));
+	if (trb_buff_len > urb->transfer_buffer_length)
+		trb_buff_len = urb->transfer_buffer_length;
+
+	first_trb = true;
+
+	/* Queue the first TRB, even if it's zero-length */
+	do {
+		u32 remainder = 0;
+		field = 0;
+
+		/* Don't change the cycle bit of the first TRB until later */
+		if (first_trb) {
+			first_trb = false;
+			if (start_cycle == 0)
+				field |= 0x1;
+		} else
+			field |= ep_ring->cycle_state;
+
+		/* Chain all the TRBs together; clear the chain bit in the last
+		 * TRB to indicate it's the last TRB in the chain.
+		 */
+		if (num_trbs > 1) {
+			field |= TRB_CHAIN;
+		} else {
+			/* FIXME - add check for ZERO_PACKET flag before this */
+			td->last_trb = ep_ring->enqueue;
+			field |= TRB_IOC;
+		}
+
+		/* Only set interrupt on short packet for IN endpoints */
+		if (usb_urb_dir_in(urb))
+			field |= TRB_ISP;
+
+		/* Set the TRB length, TD size, and interrupter fields. */
+		if (xhci->hci_version < 0x100) {
+			remainder = xhci_td_remainder(
+					urb->transfer_buffer_length -
+					running_total);
+		} else {
+			remainder = xhci_v1_0_td_remainder(running_total,
+					trb_buff_len, total_packet_count, urb,
+					num_trbs - 1);
+		}
+		length_field = TRB_LEN(trb_buff_len) |
+			remainder |
+			TRB_INTR_TARGET(0);
+
+		if (num_trbs > 1)
+			more_trbs_coming = true;
+		else
+			more_trbs_coming = false;
+		queue_trb(xhci, ep_ring, more_trbs_coming,
+				lower_32_bits(addr),
+				upper_32_bits(addr),
+				length_field,
+				field | TRB_TYPE(TRB_NORMAL));
+		--num_trbs;
+		running_total += trb_buff_len;
+
+		/* Calculate length for next transfer */
+		addr += trb_buff_len;
+		trb_buff_len = urb->transfer_buffer_length - running_total;
+		if (trb_buff_len > TRB_MAX_BUFF_SIZE)
+			trb_buff_len = TRB_MAX_BUFF_SIZE;
+	} while (running_total < urb->transfer_buffer_length);
+
+	check_trb_math(urb, num_trbs, running_total);
+	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+			start_cycle, start_trb);
+	return 0;
+}
+
+/* Caller must have locked xhci->lock */
+int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index)
+{
+	struct xhci_ring *ep_ring;
+	int num_trbs;
+	int ret;
+	struct usb_ctrlrequest *setup;
+	struct xhci_generic_trb *start_trb;
+	int start_cycle;
+	u32 field, length_field;
+	struct urb_priv *urb_priv;
+	struct xhci_td *td;
+
+	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
+	if (!ep_ring)
+		return -EINVAL;
+
+	/*
+	 * Need to copy setup packet into setup TRB, so we can't use the setup
+	 * DMA address.
+	 */
+	if (!urb->setup_packet)
+		return -EINVAL;
+
+	/* 1 TRB for setup, 1 for status */
+	num_trbs = 2;
+	/*
+	 * Don't need to check if we need additional event data and normal TRBs,
+	 * since data in control transfers will never get bigger than 16MB
+	 * XXX: can we get a buffer that crosses 64KB boundaries?
+	 */
+	if (urb->transfer_buffer_length > 0)
+		num_trbs++;
+	ret = prepare_transfer(xhci, xhci->devs[slot_id],
+			ep_index, urb->stream_id,
+			num_trbs, urb, 0, mem_flags);
+	if (ret < 0)
+		return ret;
+
+	urb_priv = urb->hcpriv;
+	td = urb_priv->td[0];
+
+	/*
+	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
+	 * until we've finished creating all the other TRBs.  The ring's cycle
+	 * state may change as we enqueue the other TRBs, so save it too.
+	 */
+	start_trb = &ep_ring->enqueue->generic;
+	start_cycle = ep_ring->cycle_state;
+
+	/* Queue setup TRB - see section 6.4.1.2.1 */
+	/* FIXME better way to translate setup_packet into two u32 fields? */
+	setup = (struct usb_ctrlrequest *) urb->setup_packet;
+	field = 0;
+	field |= TRB_IDT | TRB_TYPE(TRB_SETUP);
+	if (start_cycle == 0)
+		field |= 0x1;
+
+	/* xHCI 1.0 6.4.1.2.1: Transfer Type field */
+	if (xhci->hci_version == 0x100) {
+		if (urb->transfer_buffer_length > 0) {
+			if (setup->bRequestType & USB_DIR_IN)
+				field |= TRB_TX_TYPE(TRB_DATA_IN);
+			else
+				field |= TRB_TX_TYPE(TRB_DATA_OUT);
+		}
+	}
+
+	queue_trb(xhci, ep_ring, true,
+		  setup->bRequestType | setup->bRequest << 8 | le16_to_cpu(setup->wValue) << 16,
+		  le16_to_cpu(setup->wIndex) | le16_to_cpu(setup->wLength) << 16,
+		  TRB_LEN(8) | TRB_INTR_TARGET(0),
+		  /* Immediate data in pointer */
+		  field);
+
+	/* If there's data, queue data TRBs */
+	/* Only set interrupt on short packet for IN endpoints */
+	if (usb_urb_dir_in(urb))
+		field = TRB_ISP | TRB_TYPE(TRB_DATA);
+	else
+		field = TRB_TYPE(TRB_DATA);
+
+	length_field = TRB_LEN(urb->transfer_buffer_length) |
+		xhci_td_remainder(urb->transfer_buffer_length) |
+		TRB_INTR_TARGET(0);
+	if (urb->transfer_buffer_length > 0) {
+		if (setup->bRequestType & USB_DIR_IN)
+			field |= TRB_DIR_IN;
+		queue_trb(xhci, ep_ring, true,
+				lower_32_bits(urb->transfer_dma),
+				upper_32_bits(urb->transfer_dma),
+				length_field,
+				field | ep_ring->cycle_state);
+	}
+
+	/* Save the DMA address of the last TRB in the TD */
+	td->last_trb = ep_ring->enqueue;
+
+	/* Queue status TRB - see Table 7 and sections 4.11.2.2 and 6.4.1.2.3 */
+	/* If the device sent data, the status stage is an OUT transfer */
+	if (urb->transfer_buffer_length > 0 && setup->bRequestType & USB_DIR_IN)
+		field = 0;
+	else
+		field = TRB_DIR_IN;
+	queue_trb(xhci, ep_ring, false,
+			0,
+			0,
+			TRB_INTR_TARGET(0),
+			/* Event on completion */
+			field | TRB_IOC | TRB_TYPE(TRB_STATUS) | ep_ring->cycle_state);
+
+	giveback_first_trb(xhci, slot_id, ep_index, 0,
+			start_cycle, start_trb);
+	return 0;
+}
+
+static int count_isoc_trbs_needed(struct xhci_hcd *xhci,
+		struct urb *urb, int i)
+{
+	int num_trbs = 0;
+	u64 addr, td_len;
+
+	addr = (u64) (urb->transfer_dma + urb->iso_frame_desc[i].offset);
+	td_len = urb->iso_frame_desc[i].length;
+
+	num_trbs = DIV_ROUND_UP(td_len + (addr & (TRB_MAX_BUFF_SIZE - 1)),
+			TRB_MAX_BUFF_SIZE);
+	if (num_trbs == 0)
+		num_trbs++;
+
+	return num_trbs;
+}
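+
+/*
+ * Example with illustrative values: a 70000-byte isoc TD whose buffer
+ * starts 0xf000 bytes into a 64KB region spans three 64KB chunks, so
+ * DIV_ROUND_UP(70000 + 0xf000, 65536) = 3 TRBs are needed.
+ */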
+
+/*
+ * The transfer burst count field of the isochronous TRB defines the number of
+ * bursts that are required to move all packets in this TD.  Only SuperSpeed
+ * devices can burst up to bMaxBurst number of packets per service interval.
+ * This field is zero based, meaning a value of zero in the field means one
+ * burst.  Basically, for everything but SuperSpeed devices, this field will be
+ * zero.  Only xHCI 1.0 host controllers support this field.
+ */
+static unsigned int xhci_get_burst_count(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct urb *urb, unsigned int total_packet_count)
+{
+	unsigned int max_burst;
+
+	if (xhci->hci_version < 0x100 || udev->speed != USB_SPEED_SUPER)
+		return 0;
+
+	max_burst = urb->ep->ss_ep_comp.bMaxBurst;
+	return DIV_ROUND_UP(total_packet_count, max_burst + 1) - 1;
+}
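+
+/*
+ * Example with illustrative values: a SuperSpeed TD of 10 packets on an
+ * endpoint with bMaxBurst = 3 (up to 4 packets per burst) needs
+ * DIV_ROUND_UP(10, 4) = 3 bursts, so the zero-based TBC field is 2.
+ */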
+
+/*
+ * Returns the number of packets in the last "burst" of packets.  This field is
+ * valid for all speeds of devices.  USB 2.0 devices can only do one "burst", so
+ * the last burst packet count is equal to the total number of packets in the
+ * TD.  SuperSpeed endpoints can have up to 3 bursts.  All but the last burst
+ * must contain (bMaxBurst + 1) number of packets, but the last burst can
+ * contain 1 to (bMaxBurst + 1) packets.
+ */
+static unsigned int xhci_get_last_burst_packet_count(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct urb *urb, unsigned int total_packet_count)
+{
+	unsigned int max_burst;
+	unsigned int residue;
+
+	if (xhci->hci_version < 0x100)
+		return 0;
+
+	switch (udev->speed) {
+	case USB_SPEED_SUPER:
+		/* bMaxBurst is zero based: 0 means 1 packet per burst */
+		max_burst = urb->ep->ss_ep_comp.bMaxBurst;
+		residue = total_packet_count % (max_burst + 1);
+		/* If residue is zero, the last burst contains (max_burst + 1)
+		 * number of packets, but the TLBPC field is zero-based.
+		 */
+		if (residue == 0)
+			return max_burst;
+		return residue - 1;
+	default:
+		if (total_packet_count == 0)
+			return 0;
+		return total_packet_count - 1;
+	}
+}
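+
+/*
+ * Example with illustrative values: for a SuperSpeed TD of 10 packets with
+ * bMaxBurst = 3, residue = 10 % 4 = 2 and the zero-based TLBPC field is 1;
+ * for 8 packets the residue is 0, meaning a full final burst, so the field
+ * is bMaxBurst = 3.
+ */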
+
+/* This is for isoc transfer */
+static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index)
+{
+	struct xhci_ring *ep_ring;
+	struct urb_priv *urb_priv;
+	struct xhci_td *td;
+	int num_tds, trbs_per_td;
+	struct xhci_generic_trb *start_trb;
+	bool first_trb;
+	int start_cycle;
+	u32 field, length_field;
+	int running_total, trb_buff_len, td_len, td_remain_len, ret;
+	u64 start_addr, addr;
+	int i, j;
+	bool more_trbs_coming;
+
+	ep_ring = xhci->devs[slot_id]->eps[ep_index].ring;
+
+	num_tds = urb->number_of_packets;
+	if (num_tds < 1) {
+		xhci_dbg(xhci, "Isoc URB with zero packets?\n");
+		return -EINVAL;
+	}
+
+	start_addr = (u64) urb->transfer_dma;
+	start_trb = &ep_ring->enqueue->generic;
+	start_cycle = ep_ring->cycle_state;
+
+	urb_priv = urb->hcpriv;
+	/* Queue the first TRB, even if it's zero-length */
+	for (i = 0; i < num_tds; i++) {
+		unsigned int total_packet_count;
+		unsigned int burst_count;
+		unsigned int residue;
+
+		first_trb = true;
+		running_total = 0;
+		addr = start_addr + urb->iso_frame_desc[i].offset;
+		td_len = urb->iso_frame_desc[i].length;
+		td_remain_len = td_len;
+		total_packet_count = DIV_ROUND_UP(td_len,
+				GET_MAX_PACKET(
+					usb_endpoint_maxp(&urb->ep->desc)));
+		/* A zero-length transfer still involves at least one packet. */
+		if (total_packet_count == 0)
+			total_packet_count++;
+		burst_count = xhci_get_burst_count(xhci, urb->dev, urb,
+				total_packet_count);
+		residue = xhci_get_last_burst_packet_count(xhci,
+				urb->dev, urb, total_packet_count);
+
+		trbs_per_td = count_isoc_trbs_needed(xhci, urb, i);
+
+		ret = prepare_transfer(xhci, xhci->devs[slot_id], ep_index,
+				urb->stream_id, trbs_per_td, urb, i, mem_flags);
+		if (ret < 0) {
+			if (i == 0)
+				return ret;
+			goto cleanup;
+		}
+
+		td = urb_priv->td[i];
+		for (j = 0; j < trbs_per_td; j++) {
+			u32 remainder = 0;
+			field = 0;
+
+			if (first_trb) {
+				field = TRB_TBC(burst_count) |
+					TRB_TLBPC(residue);
+				/* Queue the isoc TRB */
+				field |= TRB_TYPE(TRB_ISOC);
+				/* Assume URB_ISO_ASAP is set */
+				field |= TRB_SIA;
+				if (i == 0) {
+					if (start_cycle == 0)
+						field |= 0x1;
+				} else
+					field |= ep_ring->cycle_state;
+				first_trb = false;
+			} else {
+				/* Queue other normal TRBs */
+				field |= TRB_TYPE(TRB_NORMAL);
+				field |= ep_ring->cycle_state;
+			}
+
+			/* Only set interrupt on short packet for IN EPs */
+			if (usb_urb_dir_in(urb))
+				field |= TRB_ISP;
+
+			/* Chain all the TRBs together; clear the chain bit in
+			 * the last TRB to indicate it's the last TRB in the
+			 * chain.
+			 */
+			if (j < trbs_per_td - 1) {
+				field |= TRB_CHAIN;
+				more_trbs_coming = true;
+			} else {
+				td->last_trb = ep_ring->enqueue;
+				field |= TRB_IOC;
+				if (xhci->hci_version == 0x100 &&
+						!(xhci->quirks &
+							XHCI_AVOID_BEI)) {
+					/* Set BEI bit except for the last td */
+					if (i < num_tds - 1)
+						field |= TRB_BEI;
+				}
+				more_trbs_coming = false;
+			}
+
+			/* Calculate TRB length */
+			trb_buff_len = TRB_MAX_BUFF_SIZE -
+				(addr & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
+			if (trb_buff_len > td_remain_len)
+				trb_buff_len = td_remain_len;
+
+			/* Set the TRB length, TD size, & interrupter fields. */
+			if (xhci->hci_version < 0x100) {
+				remainder = xhci_td_remainder(
+						td_len - running_total);
+			} else {
+				remainder = xhci_v1_0_td_remainder(
+						running_total, trb_buff_len,
+						total_packet_count, urb,
+						(trbs_per_td - j - 1));
+			}
+			length_field = TRB_LEN(trb_buff_len) |
+				remainder |
+				TRB_INTR_TARGET(0);
+
+			queue_trb(xhci, ep_ring, more_trbs_coming,
+				lower_32_bits(addr),
+				upper_32_bits(addr),
+				length_field,
+				field);
+			running_total += trb_buff_len;
+
+			addr += trb_buff_len;
+			td_remain_len -= trb_buff_len;
+		}
+
+		/* Check TD length */
+		if (running_total != td_len) {
+			xhci_err(xhci, "ISOC TD length mismatch\n");
+			ret = -EINVAL;
+			goto cleanup;
+		}
+	}
+
+	if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) {
+		if (xhci->quirks & XHCI_AMD_PLL_FIX)
+			usb_amd_quirk_pll_disable();
+	}
+	xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs++;
+
+	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+			start_cycle, start_trb);
+	return 0;
+cleanup:
+	/* Clean up a partially enqueued isoc transfer. */
+
+	for (i--; i >= 0; i--)
+		list_del_init(&urb_priv->td[i]->td_list);
+
+	/* Use the first TD as a temporary variable to turn the TDs we've queued
+	 * into No-ops with a software-owned cycle bit. That way the hardware
+	 * won't accidentally start executing bogus TDs when we partially
+	 * overwrite them.  td->first_trb and td->start_seg are already set.
+	 */
+	urb_priv->td[0]->last_trb = ep_ring->enqueue;
+	/* Every TRB except the first & last will have its cycle bit flipped. */
+	td_to_noop(xhci, ep_ring, urb_priv->td[0], true);
+
+	/* Reset the ring enqueue back to the first TRB and its cycle bit. */
+	ep_ring->enqueue = urb_priv->td[0]->first_trb;
+	ep_ring->enq_seg = urb_priv->td[0]->start_seg;
+	ep_ring->cycle_state = start_cycle;
+	ep_ring->num_trbs_free = ep_ring->num_trbs_free_temp;
+	usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);
+	return ret;
+}
+
+/*
+ * Check transfer ring to guarantee there is enough room for the urb.
+ * Update ISO URB start_frame and interval.
+ * Update the interval as xhci_queue_intr_tx does. For now, just use the
+ * xHCI frame_index to update urb->start_frame.
+ * Always assume URB_ISO_ASAP is set, and NEVER use urb->start_frame as input.
+ */
+int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index)
+{
+	struct xhci_virt_device *xdev;
+	struct xhci_ring *ep_ring;
+	struct xhci_ep_ctx *ep_ctx;
+	int start_frame;
+	int xhci_interval;
+	int ep_interval;
+	int num_tds, num_trbs, i;
+	int ret;
+
+	xdev = xhci->devs[slot_id];
+	ep_ring = xdev->eps[ep_index].ring;
+	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);
+
+	num_trbs = 0;
+	num_tds = urb->number_of_packets;
+	for (i = 0; i < num_tds; i++)
+		num_trbs += count_isoc_trbs_needed(xhci, urb, i);
+
+	/* Check the ring to guarantee there is enough room for the whole urb.
+	 * Do not insert any td of the urb to the ring if the check failed.
+	 */
+	ret = prepare_ring(xhci, ep_ring, le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK,
+			   num_trbs, mem_flags);
+	if (ret)
+		return ret;
+
+	start_frame = xhci_readl(xhci, &xhci->run_regs->microframe_index);
+	start_frame &= 0x3fff;
+
+	urb->start_frame = start_frame;
+	if (urb->dev->speed == USB_SPEED_LOW ||
+			urb->dev->speed == USB_SPEED_FULL)
+		urb->start_frame >>= 3;
+
+	xhci_interval = EP_INTERVAL_TO_UFRAMES(le32_to_cpu(ep_ctx->ep_info));
+	ep_interval = urb->interval;
+	/* Convert to microframes */
+	if (urb->dev->speed == USB_SPEED_LOW ||
+			urb->dev->speed == USB_SPEED_FULL)
+		ep_interval *= 8;
+	/* FIXME change this to a warning and a suggestion to use the new API
+	 * to set the polling interval (once the API is added).
+	 */
+	if (xhci_interval != ep_interval) {
+		if (printk_ratelimit())
+			dev_dbg(&urb->dev->dev, "Driver uses different interval"
+					" (%d microframe%s) than xHCI "
+					"(%d microframe%s)\n",
+					ep_interval,
+					ep_interval == 1 ? "" : "s",
+					xhci_interval,
+					xhci_interval == 1 ? "" : "s");
+		urb->interval = xhci_interval;
+		/* Convert back to frames for LS/FS devices */
+		if (urb->dev->speed == USB_SPEED_LOW ||
+				urb->dev->speed == USB_SPEED_FULL)
+			urb->interval /= 8;
+	}
+	ep_ring->num_trbs_free_temp = ep_ring->num_trbs_free;
+
+	return xhci_queue_isoc_tx(xhci, mem_flags, urb, slot_id, ep_index);
+}
+
+/****		Command Ring Operations		****/
+
+/* Generic function for queueing a command TRB on the command ring.
+ * Check to make sure there's room on the command ring for one command TRB.
+ * Also check that there's room reserved for commands that must not fail.
+ * If this is a command that must not fail, meaning command_must_succeed = TRUE,
+ * then only check for the number of reserved spots.
+ * Don't decrement xhci->cmd_ring_reserved_trbs after we've queued the TRB
+ * because the command event handler may want to resubmit a failed command.
+ */
+static int queue_command(struct xhci_hcd *xhci, u32 field1, u32 field2,
+		u32 field3, u32 field4, bool command_must_succeed)
+{
+	int reserved_trbs = xhci->cmd_ring_reserved_trbs;
+	int ret;
+
+	if (!command_must_succeed)
+		reserved_trbs++;
+
+	ret = prepare_ring(xhci, xhci->cmd_ring, EP_STATE_RUNNING,
+			reserved_trbs, GFP_ATOMIC);
+	if (ret < 0) {
+		xhci_err(xhci, "ERR: No room for command on command ring\n");
+		if (command_must_succeed)
+			xhci_err(xhci, "ERR: Reserved TRB counting for "
+					"unfailable commands failed.\n");
+		return ret;
+	}
+	queue_trb(xhci, xhci->cmd_ring, false, field1, field2, field3,
+			field4 | xhci->cmd_ring->cycle_state);
+	return 0;
+}
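+
+/*
+ * Example with illustrative values: with cmd_ring_reserved_trbs = 2, an
+ * ordinary command is only queued when at least 3 TRBs are free, while a
+ * command_must_succeed command may dip into the 2 reserved slots.
+ */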
+
+/* Queue a slot enable or disable request on the command ring */
+int xhci_queue_slot_control(struct xhci_hcd *xhci, u32 trb_type, u32 slot_id)
+{
+	return queue_command(xhci, 0, 0, 0,
+			TRB_TYPE(trb_type) | SLOT_ID_FOR_TRB(slot_id), false);
+}
+
+/* Queue an address device command TRB */
+int xhci_queue_address_device(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
+		u32 slot_id)
+{
+	return queue_command(xhci, lower_32_bits(in_ctx_ptr),
+			upper_32_bits(in_ctx_ptr), 0,
+			TRB_TYPE(TRB_ADDR_DEV) | SLOT_ID_FOR_TRB(slot_id),
+			false);
+}
+
+int xhci_queue_vendor_command(struct xhci_hcd *xhci,
+		u32 field1, u32 field2, u32 field3, u32 field4)
+{
+	return queue_command(xhci, field1, field2, field3, field4, false);
+}
+
+/* Queue a reset device command TRB */
+int xhci_queue_reset_device(struct xhci_hcd *xhci, u32 slot_id)
+{
+	return queue_command(xhci, 0, 0, 0,
+			TRB_TYPE(TRB_RESET_DEV) | SLOT_ID_FOR_TRB(slot_id),
+			false);
+}
+
+/* Queue a configure endpoint command TRB */
+int xhci_queue_configure_endpoint(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
+		u32 slot_id, bool command_must_succeed)
+{
+	return queue_command(xhci, lower_32_bits(in_ctx_ptr),
+			upper_32_bits(in_ctx_ptr), 0,
+			TRB_TYPE(TRB_CONFIG_EP) | SLOT_ID_FOR_TRB(slot_id),
+			command_must_succeed);
+}
+
+/* Queue an evaluate context command TRB */
+int xhci_queue_evaluate_context(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
+		u32 slot_id, bool command_must_succeed)
+{
+	return queue_command(xhci, lower_32_bits(in_ctx_ptr),
+			upper_32_bits(in_ctx_ptr), 0,
+			TRB_TYPE(TRB_EVAL_CONTEXT) | SLOT_ID_FOR_TRB(slot_id),
+			command_must_succeed);
+}
+
+/*
+ * Suspend is set to indicate "Stop Endpoint Command" is being issued to stop
+ * activity on an endpoint that is about to be suspended.
+ */
+int xhci_queue_stop_endpoint(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index, int suspend)
+{
+	u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
+	u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
+	u32 type = TRB_TYPE(TRB_STOP_RING);
+	u32 trb_suspend = SUSPEND_PORT_FOR_TRB(suspend);
+
+	return queue_command(xhci, 0, 0, 0,
+			trb_slot_id | trb_ep_index | type | trb_suspend, false);
+}
+
+/* Set Transfer Ring Dequeue Pointer command.
+ * This should not be used for endpoints that have streams enabled.
+ */
+static int queue_set_tr_deq(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index, unsigned int stream_id,
+		struct xhci_segment *deq_seg,
+		union xhci_trb *deq_ptr, u32 cycle_state)
+{
+	dma_addr_t addr;
+	u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
+	u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
+	u32 trb_stream_id = STREAM_ID_FOR_TRB(stream_id);
+	u32 type = TRB_TYPE(TRB_SET_DEQ);
+	struct xhci_virt_ep *ep;
+
+	addr = xhci_trb_virt_to_dma(deq_seg, deq_ptr);
+	if (addr == 0) {
+		xhci_warn(xhci, "WARN Cannot submit Set TR Deq Ptr\n");
+		xhci_warn(xhci, "WARN deq seg = %p, deq pt = %p\n",
+				deq_seg, deq_ptr);
+		return 0;
+	}
+	ep = &xhci->devs[slot_id]->eps[ep_index];
+	if ((ep->ep_state & SET_DEQ_PENDING)) {
+		xhci_warn(xhci, "WARN Cannot submit Set TR Deq Ptr\n");
+		xhci_warn(xhci, "A Set TR Deq Ptr command is pending.\n");
+		return 0;
+	}
+	ep->queued_deq_seg = deq_seg;
+	ep->queued_deq_ptr = deq_ptr;
+	return queue_command(xhci, lower_32_bits(addr) | cycle_state,
+			upper_32_bits(addr), trb_stream_id,
+			trb_slot_id | trb_ep_index | type, false);
+}
+
+int xhci_queue_reset_ep(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index)
+{
+	u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
+	u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
+	u32 type = TRB_TYPE(TRB_RESET_EP);
+
+	return queue_command(xhci, 0, 0, 0, trb_slot_id | trb_ep_index | type,
+			false);
+}
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
new file mode 100644
index 0000000..b4aa79d
--- /dev/null
+++ b/drivers/usb/host/xhci.c
@@ -0,0 +1,4769 @@
+/*
+ * xHCI host controller driver
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/pci.h>
+#include <linux/irq.h>
+#include <linux/log2.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/dmi.h>
+
+#include "xhci.h"
+
+#define DRIVER_AUTHOR "Sarah Sharp"
+#define DRIVER_DESC "'eXtensible' Host Controller (xHC) Driver"
+
+/* Some 0.95 hardware can't handle the chain bit on a Link TRB being cleared */
+static int link_quirk;
+module_param(link_quirk, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(link_quirk, "Don't clear the chain bit on a link TRB");
+
+/* TODO: copied from ehci-hcd.c - can this be refactored? */
+/*
+ * xhci_handshake - spin reading hc until handshake completes or fails
+ * @ptr: address of hc register to be read
+ * @mask: bits to look at in result of read
+ * @done: value of those bits when handshake succeeds
+ * @usec: timeout in microseconds
+ *
+ * Returns negative errno, or zero on success
+ *
+ * Success happens when the "mask" bits have the specified value (hardware
+ * handshake done).  There are two failure modes: "usec" microseconds have
+ * passed (major hardware flakeout), or the register reads as all-ones
+ * (hardware removed).
+ */
+int xhci_handshake(struct xhci_hcd *xhci, void __iomem *ptr,
+		      u32 mask, u32 done, int usec)
+{
+	u32	result;
+
+	do {
+		result = xhci_readl(xhci, ptr);
+		if (result == ~(u32)0)		/* card removed */
+			return -ENODEV;
+		result &= mask;
+		if (result == done)
+			return 0;
+		udelay(1);
+		usec--;
+	} while (usec > 0);
+	return -ETIMEDOUT;
+}
+
+/*
+ * Disable interrupts and begin the xHCI halting process.
+ */
+void xhci_quiesce(struct xhci_hcd *xhci)
+{
+	u32 halted;
+	u32 cmd;
+	u32 mask;
+
+	mask = ~(XHCI_IRQS);
+	halted = xhci_readl(xhci, &xhci->op_regs->status) & STS_HALT;
+	if (!halted)
+		mask &= ~CMD_RUN;
+
+	cmd = xhci_readl(xhci, &xhci->op_regs->command);
+	cmd &= mask;
+	xhci_writel(xhci, cmd, &xhci->op_regs->command);
+}
+
+/*
+ * Force HC into halt state.
+ *
+ * Disable any IRQs and clear the run/stop bit.
+ * HC will complete any current and actively pipelined transactions, and
+ * should halt within 16 ms of the run/stop bit being cleared.
+ * Read HC Halted bit in the status register to see when the HC is finished.
+ */
+int xhci_halt(struct xhci_hcd *xhci)
+{
+	int ret;
+	xhci_dbg(xhci, "// Halt the HC\n");
+	xhci_quiesce(xhci);
+
+	ret = xhci_handshake(xhci, &xhci->op_regs->status,
+			STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
+	if (!ret) {
+		xhci->xhc_state |= XHCI_STATE_HALTED;
+		xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+	} else
+		xhci_warn(xhci, "Host not halted after %u microseconds.\n",
+				XHCI_MAX_HALT_USEC);
+	return ret;
+}
+
+/*
+ * Set the run bit and wait for the host to be running.
+ */
+static int xhci_start(struct xhci_hcd *xhci)
+{
+	u32 temp;
+	int ret;
+
+	temp = xhci_readl(xhci, &xhci->op_regs->command);
+	temp |= (CMD_RUN);
+	xhci_dbg(xhci, "// Turn on HC, cmd = 0x%x.\n",
+			temp);
+	xhci_writel(xhci, temp, &xhci->op_regs->command);
+
+	/*
+	 * Wait for the HCHalted Status bit to be 0 to indicate the host is
+	 * running.
+	 */
+	ret = xhci_handshake(xhci, &xhci->op_regs->status,
+			STS_HALT, 0, XHCI_MAX_HALT_USEC);
+	if (ret == -ETIMEDOUT)
+		xhci_err(xhci, "Host took too long to start, "
+				"waited %u microseconds.\n",
+				XHCI_MAX_HALT_USEC);
+	if (!ret)
+		xhci->xhc_state &= ~XHCI_STATE_HALTED;
+	return ret;
+}
+
+/*
+ * Reset a halted HC.
+ *
+ * This resets pipelines, timers, counters, state machines, etc.
+ * Transactions will be terminated immediately, and operational registers
+ * will be set to their defaults.
+ */
+int xhci_reset(struct xhci_hcd *xhci)
+{
+	u32 command;
+	u32 state;
+	int ret, i;
+
+	state = xhci_readl(xhci, &xhci->op_regs->status);
+	if ((state & STS_HALT) == 0) {
+		xhci_warn(xhci, "Host controller not halted, aborting reset.\n");
+		return 0;
+	}
+
+	xhci_dbg(xhci, "// Reset the HC\n");
+	command = xhci_readl(xhci, &xhci->op_regs->command);
+	command |= CMD_RESET;
+	xhci_writel(xhci, command, &xhci->op_regs->command);
+
+	ret = xhci_handshake(xhci, &xhci->op_regs->command,
+			CMD_RESET, 0, 10 * 1000 * 1000);
+	if (ret)
+		return ret;
+
+	xhci_dbg(xhci, "Wait for controller to be ready for doorbell rings\n");
+	/*
+	 * xHCI cannot write to any doorbells or operational registers other
+	 * than status until the "Controller Not Ready" flag is cleared.
+	 */
+	ret = xhci_handshake(xhci, &xhci->op_regs->status,
+			STS_CNR, 0, 10 * 1000 * 1000);
+
+	for (i = 0; i < 2; ++i) {
+		xhci->bus_state[i].port_c_suspend = 0;
+		xhci->bus_state[i].suspended_ports = 0;
+		xhci->bus_state[i].resuming_ports = 0;
+	}
+
+	return ret;
+}
+
+#ifdef CONFIG_PCI
+static int xhci_free_msi(struct xhci_hcd *xhci)
+{
+	int i;
+
+	if (!xhci->msix_entries)
+		return -EINVAL;
+
+	for (i = 0; i < xhci->msix_count; i++)
+		if (xhci->msix_entries[i].vector)
+			free_irq(xhci->msix_entries[i].vector,
+					xhci_to_hcd(xhci));
+	return 0;
+}
+
+/*
+ * Set up MSI
+ */
+static int xhci_setup_msi(struct xhci_hcd *xhci)
+{
+	int ret;
+	struct pci_dev  *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+
+	ret = pci_enable_msi(pdev);
+	if (ret) {
+		xhci_dbg(xhci, "failed to allocate MSI entry\n");
+		return ret;
+	}
+
+	ret = request_irq(pdev->irq, (irq_handler_t)xhci_msi_irq,
+				0, "xhci_hcd", xhci_to_hcd(xhci));
+	if (ret) {
+		xhci_dbg(xhci, "disable MSI interrupt\n");
+		pci_disable_msi(pdev);
+	}
+
+	return ret;
+}
+
+/*
+ * Free IRQs
+ * Free all requested IRQs.
+ */
+static void xhci_free_irq(struct xhci_hcd *xhci)
+{
+	struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+	int ret;
+
+	/* return if using legacy interrupt */
+	if (xhci_to_hcd(xhci)->irq > 0)
+		return;
+
+	ret = xhci_free_msi(xhci);
+	if (!ret)
+		return;
+	if (pdev->irq > 0)
+		free_irq(pdev->irq, xhci_to_hcd(xhci));
+
+	return;
+}
+
+/*
+ * Set up MSI-X
+ */
+static int xhci_setup_msix(struct xhci_hcd *xhci)
+{
+	int i, ret = 0;
+	struct usb_hcd *hcd = xhci_to_hcd(xhci);
+	struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
+
+	/*
+	 * Calculate the number of MSI-X vectors supported.
+	 * - HCS_MAX_INTRS: the max number of interrupts the host can handle,
+	 *   based on the number of interrupters in the xHCI HCSPARAMS1 field.
+	 * - num_online_cpus: one MSI-X vector per CPU core, plus one extra
+	 *   vector so an interrupt is always available.
+	 */
+	xhci->msix_count = min(num_online_cpus() + 1,
+				HCS_MAX_INTRS(xhci->hcs_params1));
+
+	xhci->msix_entries =
+		kmalloc((sizeof(struct msix_entry))*xhci->msix_count,
+				GFP_KERNEL);
+	if (!xhci->msix_entries) {
+		xhci_err(xhci, "Failed to allocate MSI-X entries\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < xhci->msix_count; i++) {
+		xhci->msix_entries[i].entry = i;
+		xhci->msix_entries[i].vector = 0;
+	}
+
+	ret = pci_enable_msix(pdev, xhci->msix_entries, xhci->msix_count);
+	if (ret) {
+		xhci_dbg(xhci, "Failed to enable MSI-X\n");
+		goto free_entries;
+	}
+
+	for (i = 0; i < xhci->msix_count; i++) {
+		ret = request_irq(xhci->msix_entries[i].vector,
+				(irq_handler_t)xhci_msi_irq,
+				0, "xhci_hcd", xhci_to_hcd(xhci));
+		if (ret)
+			goto disable_msix;
+	}
+
+	hcd->msix_enabled = 1;
+	return ret;
+
+disable_msix:
+	xhci_dbg(xhci, "disable MSI-X interrupt\n");
+	xhci_free_irq(xhci);
+	pci_disable_msix(pdev);
+free_entries:
+	kfree(xhci->msix_entries);
+	xhci->msix_entries = NULL;
+	return ret;
+}
+
+/* Free any IRQs and disable MSI-X */
+static void xhci_cleanup_msix(struct xhci_hcd *xhci)
+{
+	struct usb_hcd *hcd = xhci_to_hcd(xhci);
+	struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
+
+	xhci_free_irq(xhci);
+
+	if (xhci->msix_entries) {
+		pci_disable_msix(pdev);
+		kfree(xhci->msix_entries);
+		xhci->msix_entries = NULL;
+	} else {
+		pci_disable_msi(pdev);
+	}
+
+	hcd->msix_enabled = 0;
+	return;
+}
+
+static void xhci_msix_sync_irqs(struct xhci_hcd *xhci)
+{
+	int i;
+
+	if (xhci->msix_entries) {
+		for (i = 0; i < xhci->msix_count; i++)
+			synchronize_irq(xhci->msix_entries[i].vector);
+	}
+}
+
+static int xhci_try_enable_msi(struct usb_hcd *hcd)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct pci_dev  *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+	int ret;
+
+	/*
+	 * Some Fresco Logic host controllers advertise MSI, but fail to
+	 * generate interrupts.  Don't even try to enable MSI.
+	 */
+	if (xhci->quirks & XHCI_BROKEN_MSI)
+		goto legacy_irq;
+
+	/* unregister the legacy interrupt */
+	if (hcd->irq)
+		free_irq(hcd->irq, hcd);
+	hcd->irq = 0;
+
+	ret = xhci_setup_msix(xhci);
+	if (ret)
+		/* fall back to MSI */
+		ret = xhci_setup_msi(xhci);
+
+	if (!ret)
+		/* hcd->irq is 0, we have MSI */
+		return 0;
+
+	if (!pdev->irq) {
+		xhci_err(xhci, "No msi-x/msi found and no IRQ in BIOS\n");
+		return -EINVAL;
+	}
+
+ legacy_irq:
+	/* fall back to legacy interrupt */
+	ret = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED,
+			hcd->irq_descr, hcd);
+	if (ret) {
+		xhci_err(xhci, "request interrupt %d failed\n",
+				pdev->irq);
+		return ret;
+	}
+	hcd->irq = pdev->irq;
+	return 0;
+}
+
+#else
+
+static int xhci_try_enable_msi(struct usb_hcd *hcd)
+{
+	return 0;
+}
+
+static void xhci_cleanup_msix(struct xhci_hcd *xhci)
+{
+}
+
+static void xhci_msix_sync_irqs(struct xhci_hcd *xhci)
+{
+}
+
+#endif
+
+static void compliance_mode_recovery(unsigned long arg)
+{
+	struct xhci_hcd *xhci;
+	struct usb_hcd *hcd;
+	u32 temp;
+	int i;
+
+	xhci = (struct xhci_hcd *)arg;
+
+	for (i = 0; i < xhci->num_usb3_ports; i++) {
+		temp = xhci_readl(xhci, xhci->usb3_ports[i]);
+		if ((temp & PORT_PLS_MASK) == USB_SS_PORT_LS_COMP_MOD) {
+			/*
+			 * Compliance Mode Detected. Letting USB Core
+			 * handle the Warm Reset
+			 */
+			xhci_dbg(xhci, "Compliance mode detected->port %d\n",
+					i + 1);
+			xhci_dbg(xhci, "Attempting compliance mode recovery\n");
+			hcd = xhci->shared_hcd;
+
+			if (hcd->state == HC_STATE_SUSPENDED)
+				usb_hcd_resume_root_hub(hcd);
+
+			usb_hcd_poll_rh_status(hcd);
+		}
+	}
+
+	if (xhci->port_status_u0 != ((1 << xhci->num_usb3_ports)-1))
+		mod_timer(&xhci->comp_mode_recovery_timer,
+			jiffies + msecs_to_jiffies(COMP_MODE_RCVRY_MSECS));
+}
+
+/*
+ * Quirk to work around an issue with the SN65LVPE502CP USB3.0 re-driver
+ * that sometimes causes ports behind it to enter compliance mode.
+ * The quirk creates a timer that polls the link state of each host
+ * controller port every 2 seconds and recovers the port by issuing a warm
+ * reset if compliance mode is detected; otherwise the port becomes "dead"
+ * (no device connections or disconnections are detected anymore). Because
+ * no status event is generated when entering compliance mode (per the xHCI
+ * spec), this quirk is needed on systems that have the failing hardware
+ * installed.
+ */
+static void compliance_mode_recovery_timer_init(struct xhci_hcd *xhci)
+{
+	xhci->port_status_u0 = 0;
+	init_timer(&xhci->comp_mode_recovery_timer);
+
+	xhci->comp_mode_recovery_timer.data = (unsigned long) xhci;
+	xhci->comp_mode_recovery_timer.function = compliance_mode_recovery;
+	xhci->comp_mode_recovery_timer.expires = jiffies +
+			msecs_to_jiffies(COMP_MODE_RCVRY_MSECS);
+
+	set_timer_slack(&xhci->comp_mode_recovery_timer,
+			msecs_to_jiffies(COMP_MODE_RCVRY_MSECS));
+	add_timer(&xhci->comp_mode_recovery_timer);
+	xhci_dbg(xhci, "Compliance mode recovery timer initialized\n");
+}
+
+/*
+ * This function identifies the systems that have installed the SN65LVPE502CP
+ * USB3.0 re-driver and that need the Compliance Mode Quirk.
+ * Systems:
+ * Vendor: Hewlett-Packard -> System Models: Z420, Z620 and Z820
+ */
+static bool compliance_mode_recovery_timer_quirk_check(void)
+{
+	const char *dmi_product_name, *dmi_sys_vendor;
+
+	dmi_product_name = dmi_get_system_info(DMI_PRODUCT_NAME);
+	dmi_sys_vendor = dmi_get_system_info(DMI_SYS_VENDOR);
+	if (!dmi_product_name || !dmi_sys_vendor)
+		return false;
+
+	if (!(strstr(dmi_sys_vendor, "Hewlett-Packard")))
+		return false;
+
+	if (strstr(dmi_product_name, "Z420") ||
+			strstr(dmi_product_name, "Z620") ||
+			strstr(dmi_product_name, "Z820") ||
+			strstr(dmi_product_name, "Z1 Workstation"))
+		return true;
+
+	return false;
+}
+
+static int xhci_all_ports_seen_u0(struct xhci_hcd *xhci)
+{
+	return (xhci->port_status_u0 == ((1 << xhci->num_usb3_ports)-1));
+}
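+
+/*
+ * Example with illustrative values: with three USB3 ports the full mask is
+ * (1 << 3) - 1 = 0x7, so this returns true only once every port has set its
+ * bit in port_status_u0, i.e. has been observed in the U0 link state.
+ */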
+
+
+/*
+ * Initialize memory for HCD and xHC (one-time init).
+ *
+ * Program the PAGESIZE register, initialize the device context array, create
+ * device contexts (?), set up a command ring segment (or two?), create event
+ * ring (one for now).
+ */
+int xhci_init(struct usb_hcd *hcd)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	int retval = 0;
+
+	xhci_dbg(xhci, "xhci_init\n");
+	spin_lock_init(&xhci->lock);
+	if (xhci->hci_version == 0x95 && link_quirk) {
+		xhci_dbg(xhci, "QUIRK: Not clearing Link TRB chain bits.\n");
+		xhci->quirks |= XHCI_LINK_TRB_QUIRK;
+	} else {
+		xhci_dbg(xhci, "xHCI doesn't need link TRB QUIRK\n");
+	}
+	retval = xhci_mem_init(xhci, GFP_KERNEL);
+	xhci_dbg(xhci, "Finished xhci_init\n");
+
+	/* Initializing Compliance Mode Recovery Data If Needed */
+	if (compliance_mode_recovery_timer_quirk_check()) {
+		xhci->quirks |= XHCI_COMP_MODE_QUIRK;
+		compliance_mode_recovery_timer_init(xhci);
+	}
+
+	return retval;
+}
+
+/*-------------------------------------------------------------------------*/
+
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+static void xhci_event_ring_work(unsigned long arg)
+{
+	unsigned long flags;
+	int temp;
+	u64 temp_64;
+	struct xhci_hcd *xhci = (struct xhci_hcd *) arg;
+	int i, j;
+
+	xhci_dbg(xhci, "Poll event ring: %lu\n", jiffies);
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	temp = xhci_readl(xhci, &xhci->op_regs->status);
+	xhci_dbg(xhci, "op reg status = 0x%x\n", temp);
+	if (temp == 0xffffffff || (xhci->xhc_state & XHCI_STATE_DYING) ||
+			(xhci->xhc_state & XHCI_STATE_HALTED)) {
+		xhci_dbg(xhci, "HW died, polling stopped.\n");
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return;
+	}
+
+	temp = xhci_readl(xhci, &xhci->ir_set->irq_pending);
+	xhci_dbg(xhci, "ir_set 0 pending = 0x%x\n", temp);
+	xhci_dbg(xhci, "HC error bitmask = 0x%x\n", xhci->error_bitmask);
+	xhci->error_bitmask = 0;
+	xhci_dbg(xhci, "Event ring:\n");
+	xhci_debug_segment(xhci, xhci->event_ring->deq_seg);
+	xhci_dbg_ring_ptrs(xhci, xhci->event_ring);
+	temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+	temp_64 &= ~ERST_PTR_MASK;
+	xhci_dbg(xhci, "ERST deq = 64'h%0lx\n", (unsigned long) temp_64);
+	xhci_dbg(xhci, "Command ring:\n");
+	xhci_debug_segment(xhci, xhci->cmd_ring->deq_seg);
+	xhci_dbg_ring_ptrs(xhci, xhci->cmd_ring);
+	xhci_dbg_cmd_ptrs(xhci);
+	for (i = 0; i < MAX_HC_SLOTS; ++i) {
+		if (!xhci->devs[i])
+			continue;
+		for (j = 0; j < 31; ++j) {
+			xhci_dbg_ep_rings(xhci, i, j, &xhci->devs[i]->eps[j]);
+		}
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	if (!xhci->zombie)
+		mod_timer(&xhci->event_ring_timer, jiffies + POLL_TIMEOUT * HZ);
+	else
+		xhci_dbg(xhci, "Quit polling the event ring.\n");
+}
+#endif
+
+static int xhci_run_finished(struct xhci_hcd *xhci)
+{
+	if (xhci_start(xhci)) {
+		xhci_halt(xhci);
+		return -ENODEV;
+	}
+	xhci->shared_hcd->state = HC_STATE_RUNNING;
+	xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
+
+	if (xhci->quirks & XHCI_NEC_HOST)
+		xhci_ring_cmd_db(xhci);
+
+	xhci_dbg(xhci, "Finished xhci_run for USB3 roothub\n");
+	return 0;
+}
+
+/*
+ * Start the HC after it was halted.
+ *
+ * This function is called by the USB core when the HC driver is added.
+ * Its opposite is xhci_stop().
+ *
+ * xhci_init() must be called once before this function can be called.
+ * Reset the HC, enable device slot contexts, program DCBAAP, and
+ * set command ring pointer and event ring pointer.
+ *
+ * Setup MSI-X vectors and enable interrupts.
+ */
+int xhci_run(struct usb_hcd *hcd)
+{
+	u32 temp;
+	u64 temp_64;
+	int ret;
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+
+	/* Start the xHCI host controller running only after the USB 2.0 roothub
+	 * is setup.
+	 */
+
+	hcd->uses_new_polling = 1;
+	if (!usb_hcd_is_primary_hcd(hcd))
+		return xhci_run_finished(xhci);
+
+	xhci_dbg(xhci, "xhci_run\n");
+
+	ret = xhci_try_enable_msi(hcd);
+	if (ret)
+		return ret;
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+	init_timer(&xhci->event_ring_timer);
+	xhci->event_ring_timer.data = (unsigned long) xhci;
+	xhci->event_ring_timer.function = xhci_event_ring_work;
+	/* Poll the event ring */
+	xhci->event_ring_timer.expires = jiffies + POLL_TIMEOUT * HZ;
+	xhci->zombie = 0;
+	xhci_dbg(xhci, "Setting event ring polling timer\n");
+	add_timer(&xhci->event_ring_timer);
+#endif
+
+	xhci_dbg(xhci, "Command ring memory map follows:\n");
+	xhci_debug_ring(xhci, xhci->cmd_ring);
+	xhci_dbg_ring_ptrs(xhci, xhci->cmd_ring);
+	xhci_dbg_cmd_ptrs(xhci);
+
+	xhci_dbg(xhci, "ERST memory map follows:\n");
+	xhci_dbg_erst(xhci, &xhci->erst);
+	xhci_dbg(xhci, "Event ring:\n");
+	xhci_debug_ring(xhci, xhci->event_ring);
+	xhci_dbg_ring_ptrs(xhci, xhci->event_ring);
+	temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+	temp_64 &= ~ERST_PTR_MASK;
+	xhci_dbg(xhci, "ERST deq = 64'h%0lx\n", (unsigned long) temp_64);
+
+	xhci_dbg(xhci, "// Set the interrupt modulation register\n");
+	temp = xhci_readl(xhci, &xhci->ir_set->irq_control);
+	temp &= ~ER_IRQ_INTERVAL_MASK;
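+	/* The xHCI spec defines this field in 250 ns increments, so the 160
+	 * written below gives roughly a 40 us interrupt moderation interval.
+	 */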
+	temp |= (u32) 160;
+	xhci_writel(xhci, temp, &xhci->ir_set->irq_control);
+
+	/* Set the HCD state before we enable the irqs */
+	temp = xhci_readl(xhci, &xhci->op_regs->command);
+	temp |= (CMD_EIE);
+	xhci_dbg(xhci, "// Enable interrupts, cmd = 0x%x.\n",
+			temp);
+	xhci_writel(xhci, temp, &xhci->op_regs->command);
+
+	temp = xhci_readl(xhci, &xhci->ir_set->irq_pending);
+	xhci_dbg(xhci, "// Enabling event ring interrupter %p by writing 0x%x to irq_pending\n",
+			xhci->ir_set, (unsigned int) ER_IRQ_ENABLE(temp));
+	xhci_writel(xhci, ER_IRQ_ENABLE(temp),
+			&xhci->ir_set->irq_pending);
+	xhci_print_ir_set(xhci, 0);
+
+	if (xhci->quirks & XHCI_NEC_HOST)
+		xhci_queue_vendor_command(xhci, 0, 0, 0,
+				TRB_TYPE(TRB_NEC_GET_FW));
+
+	xhci_dbg(xhci, "Finished xhci_run for USB2 roothub\n");
+	return 0;
+}
+
+static void xhci_only_stop_hcd(struct usb_hcd *hcd)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+
+	spin_lock_irq(&xhci->lock);
+	xhci_halt(xhci);
+
+	/* The shared_hcd is going to be deallocated shortly (the USB core only
+	 * calls this function when allocation fails in usb_add_hcd(), or
+	 * usb_remove_hcd() is called).  So we need to unset xHCI's pointer.
+	 */
+	xhci->shared_hcd = NULL;
+	spin_unlock_irq(&xhci->lock);
+}
+
+/*
+ * Stop xHCI driver.
+ *
+ * This function is called by the USB core when the HC driver is removed.
+ * Its opposite is xhci_run().
+ *
+ * Disable device contexts, disable IRQs, and quiesce the HC.
+ * Reset the HC, finish any completed transactions, and cleanup memory.
+ */
+void xhci_stop(struct usb_hcd *hcd)
+{
+	u32 temp;
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+
+	if (!usb_hcd_is_primary_hcd(hcd)) {
+		xhci_only_stop_hcd(xhci->shared_hcd);
+		return;
+	}
+
+	spin_lock_irq(&xhci->lock);
+	/* Make sure the xHC is halted for a USB3 roothub
+	 * (xhci_stop() could be called as part of failed init).
+	 */
+	xhci_halt(xhci);
+	xhci_reset(xhci);
+	spin_unlock_irq(&xhci->lock);
+
+	xhci_cleanup_msix(xhci);
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+	/* Tell the event ring poll function not to reschedule */
+	xhci->zombie = 1;
+	del_timer_sync(&xhci->event_ring_timer);
+#endif
+
+	/* Deleting Compliance Mode Recovery Timer */
+	if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) &&
+			(!(xhci_all_ports_seen_u0(xhci)))) {
+		del_timer_sync(&xhci->comp_mode_recovery_timer);
+		xhci_dbg(xhci, "%s: compliance mode recovery timer deleted\n",
+				__func__);
+	}
+
+	if (xhci->quirks & XHCI_AMD_PLL_FIX)
+		usb_amd_dev_put();
+
+	xhci_dbg(xhci, "// Disabling event ring interrupts\n");
+	temp = xhci_readl(xhci, &xhci->op_regs->status);
+	xhci_writel(xhci, temp & ~STS_EINT, &xhci->op_regs->status);
+	temp = xhci_readl(xhci, &xhci->ir_set->irq_pending);
+	xhci_writel(xhci, ER_IRQ_DISABLE(temp),
+			&xhci->ir_set->irq_pending);
+	xhci_print_ir_set(xhci, 0);
+
+	xhci_dbg(xhci, "cleaning up memory\n");
+	xhci_mem_cleanup(xhci);
+	xhci_dbg(xhci, "xhci_stop completed - status = %x\n",
+		    xhci_readl(xhci, &xhci->op_regs->status));
+}
+
+/*
+ * Shutdown HC (not bus-specific)
+ *
+ * This is called when the machine is rebooting or halting.  We assume that the
+ * machine will be powered off, and the HC's internal state will be reset.
+ * Don't bother to free memory.
+ *
+ * This will only ever be called with the main usb_hcd (the USB3 roothub).
+ */
+void xhci_shutdown(struct usb_hcd *hcd)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+
+	if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
+		usb_disable_xhci_ports(to_pci_dev(hcd->self.controller));
+
+	spin_lock_irq(&xhci->lock);
+	xhci_halt(xhci);
+	spin_unlock_irq(&xhci->lock);
+
+	xhci_cleanup_msix(xhci);
+
+	xhci_dbg(xhci, "xhci_shutdown completed - status = %x\n",
+		    xhci_readl(xhci, &xhci->op_regs->status));
+}
+
+#ifdef CONFIG_PM
+static void xhci_save_registers(struct xhci_hcd *xhci)
+{
+	xhci->s3.command = xhci_readl(xhci, &xhci->op_regs->command);
+	xhci->s3.dev_nt = xhci_readl(xhci, &xhci->op_regs->dev_notification);
+	xhci->s3.dcbaa_ptr = xhci_read_64(xhci, &xhci->op_regs->dcbaa_ptr);
+	xhci->s3.config_reg = xhci_readl(xhci, &xhci->op_regs->config_reg);
+	xhci->s3.erst_size = xhci_readl(xhci, &xhci->ir_set->erst_size);
+	xhci->s3.erst_base = xhci_read_64(xhci, &xhci->ir_set->erst_base);
+	xhci->s3.erst_dequeue = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+	xhci->s3.irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending);
+	xhci->s3.irq_control = xhci_readl(xhci, &xhci->ir_set->irq_control);
+}
+
+static void xhci_restore_registers(struct xhci_hcd *xhci)
+{
+	xhci_writel(xhci, xhci->s3.command, &xhci->op_regs->command);
+	xhci_writel(xhci, xhci->s3.dev_nt, &xhci->op_regs->dev_notification);
+	xhci_write_64(xhci, xhci->s3.dcbaa_ptr, &xhci->op_regs->dcbaa_ptr);
+	xhci_writel(xhci, xhci->s3.config_reg, &xhci->op_regs->config_reg);
+	xhci_writel(xhci, xhci->s3.erst_size, &xhci->ir_set->erst_size);
+	xhci_write_64(xhci, xhci->s3.erst_base, &xhci->ir_set->erst_base);
+	xhci_write_64(xhci, xhci->s3.erst_dequeue, &xhci->ir_set->erst_dequeue);
+	xhci_writel(xhci, xhci->s3.irq_pending, &xhci->ir_set->irq_pending);
+	xhci_writel(xhci, xhci->s3.irq_control, &xhci->ir_set->irq_control);
+}
+
+static void xhci_set_cmd_ring_deq(struct xhci_hcd *xhci)
+{
+	u64	val_64;
+
+	/* step 2: initialize command ring buffer */
+	val_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+	val_64 = (val_64 & (u64) CMD_RING_RSVD_BITS) |
+		(xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg,
+				      xhci->cmd_ring->dequeue) &
+		 (u64) ~CMD_RING_RSVD_BITS) |
+		xhci->cmd_ring->cycle_state;
+	xhci_dbg(xhci, "// Setting command ring address to 0x%llx\n",
+			(unsigned long long) val_64);
+	xhci_write_64(xhci, val_64, &xhci->op_regs->cmd_ring);
+}
+
+/*
+ * The whole command ring must be cleared to zero when we suspend the host.
+ *
+ * The host doesn't save the command ring pointer in the suspend well, so we
+ * need to re-program it on resume.  Unfortunately, the pointer must be 64-byte
+ * aligned, because of the reserved bits in the command ring dequeue pointer
+ * register.  Therefore, we can't just set the dequeue pointer back in the
+ * middle of the ring (TRBs are 16-byte aligned).
+ */
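+/*
+ * In other words: the register stores only a 64-byte-aligned pointer while
+ * TRBs are 16 bytes each, so it can address only every fourth TRB.  That is
+ * why we zero the whole ring rather than restart mid-ring.
+ */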
+static void xhci_clear_command_ring(struct xhci_hcd *xhci)
+{
+	struct xhci_ring *ring;
+	struct xhci_segment *seg;
+
+	ring = xhci->cmd_ring;
+	seg = ring->deq_seg;
+	do {
+		memset(seg->trbs, 0,
+			sizeof(union xhci_trb) * (TRBS_PER_SEGMENT - 1));
+		seg->trbs[TRBS_PER_SEGMENT - 1].link.control &=
+			cpu_to_le32(~TRB_CYCLE);
+		seg = seg->next;
+	} while (seg != ring->deq_seg);
+
+	/* Reset the software enqueue and dequeue pointers */
+	ring->deq_seg = ring->first_seg;
+	ring->dequeue = ring->first_seg->trbs;
+	ring->enq_seg = ring->deq_seg;
+	ring->enqueue = ring->dequeue;
+
+	ring->num_trbs_free = ring->num_segs * (TRBS_PER_SEGMENT - 1) - 1;
+	/*
+	 * Ring is now zeroed, so the HW should look for change of ownership
+	 * when the cycle bit is set to 1.
+	 */
+	ring->cycle_state = 1;
+
+	/*
+	 * Reset the hardware dequeue pointer.
+	 * Yes, this will need to be re-written after resume, but we're paranoid
+	 * and want to make sure the hardware doesn't access bogus memory
+	 * because, say, the BIOS or an SMI started the host without changing
+	 * the command ring pointers.
+	 */
+	xhci_set_cmd_ring_deq(xhci);
+}
+
+/*
+ * Stop HC (not bus-specific)
+ *
+ * This is called when the machine transitions into S3/S4 mode.
+ *
+ */
+int xhci_suspend(struct xhci_hcd *xhci)
+{
+	int			rc = 0;
+	struct usb_hcd		*hcd = xhci_to_hcd(xhci);
+	u32			command;
+
+	if (hcd->state != HC_STATE_SUSPENDED ||
+			xhci->shared_hcd->state != HC_STATE_SUSPENDED)
+		return -EINVAL;
+
+	/* Don't poll the roothubs on bus suspend. */
+	xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);
+	clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+	del_timer_sync(&hcd->rh_timer);
+
+	spin_lock_irq(&xhci->lock);
+	clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+	clear_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
+	/* step 1: stop endpoint */
+	/* skipped assuming that port suspend has done */
+
+	/* step 2: clear Run/Stop bit */
+	command = xhci_readl(xhci, &xhci->op_regs->command);
+	command &= ~CMD_RUN;
+	xhci_writel(xhci, command, &xhci->op_regs->command);
+	if (xhci_handshake(xhci, &xhci->op_regs->status,
+		      STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) {
+		xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
+		spin_unlock_irq(&xhci->lock);
+		return -ETIMEDOUT;
+	}
+	xhci_clear_command_ring(xhci);
+
+	/* step 3: save registers */
+	xhci_save_registers(xhci);
+
+	/* step 4: set CSS flag */
+	command = xhci_readl(xhci, &xhci->op_regs->command);
+	command |= CMD_CSS;
+	xhci_writel(xhci, command, &xhci->op_regs->command);
+	if (xhci_handshake(xhci, &xhci->op_regs->status,
+				STS_SAVE, 0, 10 * 1000)) {
+		xhci_warn(xhci, "WARN: xHC save state timeout\n");
+		spin_unlock_irq(&xhci->lock);
+		return -ETIMEDOUT;
+	}
+	spin_unlock_irq(&xhci->lock);
+
+	/*
+	 * Deleting Compliance Mode Recovery Timer because the xHCI Host
+	 * is about to be suspended.
+	 */
+	if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) &&
+			(!(xhci_all_ports_seen_u0(xhci)))) {
+		del_timer_sync(&xhci->comp_mode_recovery_timer);
+		xhci_dbg(xhci, "%s: compliance mode recovery timer deleted\n",
+				__func__);
+	}
+
+	/* step 5: remove core well power */
+	/* synchronize irq when using MSI-X */
+	xhci_msix_sync_irqs(xhci);
+
+	return rc;
+}
+
+/*
+ * start xHC (not bus-specific)
+ *
+ * This is called when the machine transitions out of S3/S4 mode.
+ *
+ */
+int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+{
+	u32			command, temp = 0;
+	struct usb_hcd		*hcd = xhci_to_hcd(xhci);
+	struct usb_hcd		*secondary_hcd;
+	int			retval = 0;
+
+	/* Wait a bit if either of the roothubs need to settle from the
+	 * transition into bus suspend.
+	 */
+	if (time_before(jiffies, xhci->bus_state[0].next_statechange) ||
+			time_before(jiffies,
+				xhci->bus_state[1].next_statechange))
+		msleep(100);
+
+	set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+	set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
+
+	spin_lock_irq(&xhci->lock);
+	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+		hibernated = true;
+
+	if (!hibernated) {
+		/* step 1: restore register */
+		xhci_restore_registers(xhci);
+		/* step 2: initialize command ring buffer */
+		xhci_set_cmd_ring_deq(xhci);
+		/* step 3: restore state and start state*/
+		/* step 3: set CRS flag */
+		command = xhci_readl(xhci, &xhci->op_regs->command);
+		command |= CMD_CRS;
+		xhci_writel(xhci, command, &xhci->op_regs->command);
+		if (xhci_handshake(xhci, &xhci->op_regs->status,
+			      STS_RESTORE, 0, 10 * 1000)) {
+			xhci_warn(xhci, "WARN: xHC restore state timeout\n");
+			spin_unlock_irq(&xhci->lock);
+			return -ETIMEDOUT;
+		}
+		temp = xhci_readl(xhci, &xhci->op_regs->status);
+	}
+
+	/* If restore operation fails, re-initialize the HC during resume */
+	if ((temp & STS_SRE) || hibernated) {
+		/* Let the USB core know _both_ roothubs lost power. */
+		usb_root_hub_lost_power(xhci->main_hcd->self.root_hub);
+		usb_root_hub_lost_power(xhci->shared_hcd->self.root_hub);
+
+		xhci_dbg(xhci, "Stop HCD\n");
+		xhci_halt(xhci);
+		xhci_reset(xhci);
+		spin_unlock_irq(&xhci->lock);
+		xhci_cleanup_msix(xhci);
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+		/* Tell the event ring poll function not to reschedule */
+		xhci->zombie = 1;
+		del_timer_sync(&xhci->event_ring_timer);
+#endif
+
+		xhci_dbg(xhci, "// Disabling event ring interrupts\n");
+		temp = xhci_readl(xhci, &xhci->op_regs->status);
+		xhci_writel(xhci, temp & ~STS_EINT, &xhci->op_regs->status);
+		temp = xhci_readl(xhci, &xhci->ir_set->irq_pending);
+		xhci_writel(xhci, ER_IRQ_DISABLE(temp),
+				&xhci->ir_set->irq_pending);
+		xhci_print_ir_set(xhci, 0);
+
+		xhci_dbg(xhci, "cleaning up memory\n");
+		xhci_mem_cleanup(xhci);
+		xhci_dbg(xhci, "xhci_stop completed - status = %x\n",
+			    xhci_readl(xhci, &xhci->op_regs->status));
+
+		/* USB core calls the PCI reinit and start functions twice:
+		 * first with the primary HCD, and then with the secondary HCD.
+		 * If we don't do the same, the host will never be started.
+		 */
+		if (!usb_hcd_is_primary_hcd(hcd))
+			secondary_hcd = hcd;
+		else
+			secondary_hcd = xhci->shared_hcd;
+
+		xhci_dbg(xhci, "Initialize the xhci_hcd\n");
+		retval = xhci_init(hcd->primary_hcd);
+		if (retval)
+			return retval;
+		xhci_dbg(xhci, "Start the primary HCD\n");
+		retval = xhci_run(hcd->primary_hcd);
+		if (!retval) {
+			xhci_dbg(xhci, "Start the secondary HCD\n");
+			retval = xhci_run(secondary_hcd);
+		}
+		hcd->state = HC_STATE_SUSPENDED;
+		xhci->shared_hcd->state = HC_STATE_SUSPENDED;
+		goto done;
+	}
+
+	/* step 4: set Run/Stop bit */
+	command = xhci_readl(xhci, &xhci->op_regs->command);
+	command |= CMD_RUN;
+	xhci_writel(xhci, command, &xhci->op_regs->command);
+	xhci_handshake(xhci, &xhci->op_regs->status, STS_HALT,
+		  0, 250 * 1000);
+
+	/* step 5: walk topology and initialize portsc,
+	 * portpmsc and portli
+	 */
+	/* this is done in bus_resume */
+
+	/* step 6: restart each of the previously
+	 * Running endpoints by ringing their doorbells
+	 */
+
+	spin_unlock_irq(&xhci->lock);
+
+ done:
+	if (retval == 0) {
+		usb_hcd_resume_root_hub(hcd);
+		usb_hcd_resume_root_hub(xhci->shared_hcd);
+	}
+
+	/*
+	 * If the system is subject to the quirk, the compliance mode recovery
+	 * timer needs to be re-initialized after every system resume, since
+	 * the ports may suffer the compliance mode issue again regardless of
+	 * whether they entered U0 before the system was suspended.
+	 */
+	if (xhci->quirks & XHCI_COMP_MODE_QUIRK)
+		compliance_mode_recovery_timer_init(xhci);
+
+	/* Re-enable port polling. */
+	xhci_dbg(xhci, "%s: starting port polling.\n", __func__);
+	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+	usb_hcd_poll_rh_status(hcd);
+
+	return retval;
+}
+#endif	/* CONFIG_PM */
+
+/*-------------------------------------------------------------------------*/
+
+/**
+ * xhci_get_endpoint_index - Used for passing endpoint bitmasks between the core and
+ * HCDs.  Find the index for an endpoint given its descriptor.  Use the return
+ * value to right shift 1 for the bitmask.
+ *
+ * Index  = (epnum * 2) + direction - 1,
+ * where direction = 0 for OUT, 1 for IN.
+ * For control endpoints, the IN index is used (OUT index is unused), so
+ * index = (epnum * 2) + direction - 1 = (epnum * 2) + 1 - 1 = (epnum * 2)
+ */
+unsigned int xhci_get_endpoint_index(struct usb_endpoint_descriptor *desc)
+{
+	unsigned int index;
+	if (usb_endpoint_xfer_control(desc))
+		index = (unsigned int) (usb_endpoint_num(desc)*2);
+	else
+		index = (unsigned int) (usb_endpoint_num(desc)*2) +
+			(usb_endpoint_dir_in(desc) ? 1 : 0) - 1;
+	return index;
+}
+
+/* Find the flag for this endpoint (for use in the control context).  Use the
+ * endpoint index to create a bitmask.  The slot context is bit 0, endpoint 0 is
+ * bit 1, etc.
+ */
+unsigned int xhci_get_endpoint_flag(struct usb_endpoint_descriptor *desc)
+{
+	return 1 << (xhci_get_endpoint_index(desc) + 1);
+}
+
+/* Find the flag for this endpoint (for use in the control context).  Use the
+ * endpoint index to create a bitmask.  The slot context is bit 0, endpoint 0 is
+ * bit 1, etc.
+ */
+unsigned int xhci_get_endpoint_flag_from_index(unsigned int ep_index)
+{
+	return 1 << (ep_index + 1);
+}
+
+/* Compute the last valid endpoint context index.  Basically, this is the
+ * endpoint index plus one.  For slot contexts with more than one valid endpoint,
+ * we find the most significant bit set in the added contexts flags.
+ * e.g. ep 1 IN (with epnum 0x81) => added_ctxs = 0b1000
+ * fls(0b1000) = 4, but the endpoint context index is 3, so subtract one.
+ */
+unsigned int xhci_last_valid_endpoint(u32 added_ctxs)
+{
+	return fls(added_ctxs) - 1;
+}
+
+/* Returns 1 if the arguments are OK;
+ * returns 0 if this is a root hub; returns -EINVAL for NULL pointers.
+ */
+static int xhci_check_args(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint *ep, int check_ep, bool check_virt_dev,
+		const char *func) {
+	struct xhci_hcd	*xhci;
+	struct xhci_virt_device	*virt_dev;
+
+	if (!hcd || (check_ep && !ep) || !udev) {
+		printk(KERN_DEBUG "xHCI %s called with invalid args\n",
+				func);
+		return -EINVAL;
+	}
+	if (!udev->parent) {
+		printk(KERN_DEBUG "xHCI %s called for root hub\n",
+				func);
+		return 0;
+	}
+
+	xhci = hcd_to_xhci(hcd);
+	if (xhci->xhc_state & XHCI_STATE_HALTED)
+		return -ENODEV;
+
+	if (check_virt_dev) {
+		if (!udev->slot_id || !xhci->devs[udev->slot_id]) {
+			printk(KERN_DEBUG "xHCI %s called with unaddressed "
+						"device\n", func);
+			return -EINVAL;
+		}
+
+		virt_dev = xhci->devs[udev->slot_id];
+		if (virt_dev->udev != udev) {
+			printk(KERN_DEBUG "xHCI %s called with udev and "
+					  "virt_dev that do not match\n", func);
+			return -EINVAL;
+		}
+	}
+
+	return 1;
+}
+
+static int xhci_configure_endpoint(struct xhci_hcd *xhci,
+		struct usb_device *udev, struct xhci_command *command,
+		bool ctx_change, bool must_succeed);
+
+/*
+ * Full speed devices may have a max packet size greater than 8 bytes, but the
+ * USB core doesn't know that until it reads the first 8 bytes of the
+ * descriptor.  If the usb_device's max packet size changes after that point,
+ * we need to issue an evaluate context command and wait on it.
+ */
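+/* For example, enumeration starts by assuming an 8-byte max packet size, but
+ * the full device descriptor may report bMaxPacketSize0 of 16, 32, or 64 for
+ * a full speed device.
+ */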
+static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id,
+		unsigned int ep_index, struct urb *urb)
+{
+	struct xhci_container_ctx *in_ctx;
+	struct xhci_container_ctx *out_ctx;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_ep_ctx *ep_ctx;
+	int max_packet_size;
+	int hw_max_packet_size;
+	int ret = 0;
+
+	out_ctx = xhci->devs[slot_id]->out_ctx;
+	ep_ctx = xhci_get_ep_ctx(xhci, out_ctx, ep_index);
+	hw_max_packet_size = MAX_PACKET_DECODED(le32_to_cpu(ep_ctx->ep_info2));
+	max_packet_size = usb_endpoint_maxp(&urb->dev->ep0.desc);
+	if (hw_max_packet_size != max_packet_size) {
+		xhci_dbg(xhci, "Max Packet Size for ep 0 changed.\n");
+		xhci_dbg(xhci, "Max packet size in usb_device = %d\n",
+				max_packet_size);
+		xhci_dbg(xhci, "Max packet size in xHCI HW = %d\n",
+				hw_max_packet_size);
+		xhci_dbg(xhci, "Issuing evaluate context command.\n");
+
+		/* Set up the modified control endpoint 0 */
+		xhci_endpoint_copy(xhci, xhci->devs[slot_id]->in_ctx,
+				xhci->devs[slot_id]->out_ctx, ep_index);
+		in_ctx = xhci->devs[slot_id]->in_ctx;
+		ep_ctx = xhci_get_ep_ctx(xhci, in_ctx, ep_index);
+		ep_ctx->ep_info2 &= cpu_to_le32(~MAX_PACKET_MASK);
+		ep_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(max_packet_size));
+
+		/* Set up the input context flags for the command */
+		/* FIXME: This won't work if a non-default control endpoint
+		 * changes max packet sizes.
+		 */
+		ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+		ctrl_ctx->add_flags = cpu_to_le32(EP0_FLAG);
+		ctrl_ctx->drop_flags = 0;
+
+		xhci_dbg(xhci, "Slot %d input context\n", slot_id);
+		xhci_dbg_ctx(xhci, in_ctx, ep_index);
+		xhci_dbg(xhci, "Slot %d output context\n", slot_id);
+		xhci_dbg_ctx(xhci, out_ctx, ep_index);
+
+		ret = xhci_configure_endpoint(xhci, urb->dev, NULL,
+				true, false);
+
+		/* Clean up the input context for later use by bandwidth
+		 * functions.
+		 */
+		ctrl_ctx->add_flags = cpu_to_le32(SLOT_FLAG);
+	}
+	return ret;
+}
+
+/*
+ * non-error returns are a promise to giveback() the urb later
+ * we drop ownership so next owner (or urb unlink) can get it
+ */
+int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct xhci_td *buffer;
+	unsigned long flags;
+	int ret = 0;
+	unsigned int slot_id, ep_index;
+	struct urb_priv	*urb_priv;
+	int size, i;
+
+	if (!urb || xhci_check_args(hcd, urb->dev, urb->ep,
+					true, true, __func__) <= 0)
+		return -EINVAL;
+
+	slot_id = urb->dev->slot_id;
+	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
+
+	if (!HCD_HW_ACCESSIBLE(hcd)) {
+		if (!in_interrupt())
+			xhci_dbg(xhci, "urb submitted during PCI suspend\n");
+		ret = -ESHUTDOWN;
+		goto exit;
+	}
+
+	if (usb_endpoint_xfer_isoc(&urb->ep->desc))
+		size = urb->number_of_packets;
+	else
+		size = 1;
+
+	urb_priv = kzalloc(sizeof(struct urb_priv) +
+				  size * sizeof(struct xhci_td *), mem_flags);
+	if (!urb_priv)
+		return -ENOMEM;
+
+	buffer = kzalloc(size * sizeof(struct xhci_td), mem_flags);
+	if (!buffer) {
+		kfree(urb_priv);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < size; i++) {
+		urb_priv->td[i] = buffer;
+		buffer++;
+	}
+
+	urb_priv->length = size;
+	urb_priv->td_cnt = 0;
+	urb->hcpriv = urb_priv;
+
+	if (usb_endpoint_xfer_control(&urb->ep->desc)) {
+		/* Check to see if the max packet size for the default control
+		 * endpoint changed during FS device enumeration
+		 */
+		if (urb->dev->speed == USB_SPEED_FULL) {
+			ret = xhci_check_maxpacket(xhci, slot_id,
+					ep_index, urb);
+			if (ret < 0) {
+				xhci_urb_free_priv(xhci, urb_priv);
+				urb->hcpriv = NULL;
+				return ret;
+			}
+		}
+
+		/* We have a spinlock and interrupts disabled, so we must pass
+		 * atomic context to this function, which may allocate memory.
+		 */
+		spin_lock_irqsave(&xhci->lock, flags);
+		if (xhci->xhc_state & XHCI_STATE_DYING)
+			goto dying;
+		ret = xhci_queue_ctrl_tx(xhci, GFP_ATOMIC, urb,
+				slot_id, ep_index);
+		if (ret)
+			goto free_priv;
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	} else if (usb_endpoint_xfer_bulk(&urb->ep->desc)) {
+		spin_lock_irqsave(&xhci->lock, flags);
+		if (xhci->xhc_state & XHCI_STATE_DYING)
+			goto dying;
+		if (xhci->devs[slot_id]->eps[ep_index].ep_state &
+				EP_GETTING_STREAMS) {
+			xhci_warn(xhci, "WARN: Can't enqueue URB while bulk ep "
+					"is transitioning to using streams.\n");
+			ret = -EINVAL;
+		} else if (xhci->devs[slot_id]->eps[ep_index].ep_state &
+				EP_GETTING_NO_STREAMS) {
+			xhci_warn(xhci, "WARN: Can't enqueue URB while bulk ep "
+					"is transitioning to "
+					"not having streams.\n");
+			ret = -EINVAL;
+		} else {
+			ret = xhci_queue_bulk_tx(xhci, GFP_ATOMIC, urb,
+					slot_id, ep_index);
+		}
+		if (ret)
+			goto free_priv;
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	} else if (usb_endpoint_xfer_int(&urb->ep->desc)) {
+		spin_lock_irqsave(&xhci->lock, flags);
+		if (xhci->xhc_state & XHCI_STATE_DYING)
+			goto dying;
+		ret = xhci_queue_intr_tx(xhci, GFP_ATOMIC, urb,
+				slot_id, ep_index);
+		if (ret)
+			goto free_priv;
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	} else {
+		spin_lock_irqsave(&xhci->lock, flags);
+		if (xhci->xhc_state & XHCI_STATE_DYING)
+			goto dying;
+		ret = xhci_queue_isoc_tx_prepare(xhci, GFP_ATOMIC, urb,
+				slot_id, ep_index);
+		if (ret)
+			goto free_priv;
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	}
+exit:
+	return ret;
+dying:
+	xhci_dbg(xhci, "Ep 0x%x: URB %p submitted for "
+			"non-responsive xHCI host.\n",
+			urb->ep->desc.bEndpointAddress, urb);
+	ret = -ESHUTDOWN;
+free_priv:
+	xhci_urb_free_priv(xhci, urb_priv);
+	urb->hcpriv = NULL;
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return ret;
+}
+
+/* Get the right ring for the given URB.
+ * If the endpoint supports streams, boundary check the URB's stream ID.
+ * If the endpoint doesn't support streams, return the singular endpoint ring.
+ */
+static struct xhci_ring *xhci_urb_to_transfer_ring(struct xhci_hcd *xhci,
+		struct urb *urb)
+{
+	unsigned int slot_id;
+	unsigned int ep_index;
+	unsigned int stream_id;
+	struct xhci_virt_ep *ep;
+
+	slot_id = urb->dev->slot_id;
+	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
+	stream_id = urb->stream_id;
+	ep = &xhci->devs[slot_id]->eps[ep_index];
+	/* Common case: no streams */
+	if (!(ep->ep_state & EP_HAS_STREAMS))
+		return ep->ring;
+
+	if (stream_id == 0) {
+		xhci_warn(xhci,
+				"WARN: Slot ID %u, ep index %u has streams, "
+				"but URB has no stream ID.\n",
+				slot_id, ep_index);
+		return NULL;
+	}
+
+	if (stream_id < ep->stream_info->num_streams)
+		return ep->stream_info->stream_rings[stream_id];
+
+	xhci_warn(xhci,
+			"WARN: Slot ID %u, ep index %u has "
+			"stream IDs 1 to %u allocated, "
+			"but stream ID %u is requested.\n",
+			slot_id, ep_index,
+			ep->stream_info->num_streams - 1,
+			stream_id);
+	return NULL;
+}
+
+/*
+ * Remove the URB's TD from the endpoint ring.  This may cause the HC to stop
+ * USB transfers, potentially stopping in the middle of a TRB buffer.  The HC
+ * should pick up where it left off in the TD, unless a Set Transfer Ring
+ * Dequeue Pointer is issued.
+ *
+ * The TRBs that make up the buffers for the canceled URB will be "removed" from
+ * the ring.  Since the ring is a contiguous structure, they can't be physically
+ * removed.  Instead, there are three cases:
+ *
+ *  1) If the HC is in the middle of processing the URB to be canceled, we
+ *     simply move the ring's dequeue pointer past those TRBs using the Set
+ *     Transfer Ring Dequeue Pointer command.  This will be the common case,
+ *     when drivers timeout on the last submitted URB and attempt to cancel.
+ *
+ *  2) If the HC is in the middle of a different TD, we turn the TRBs into a
+ *     series of 1-TRB transfer no-op TDs.  (No-ops shouldn't be chained.)  The
+ *     HC will need to invalidate any TRBs it has cached after the stop
+ *     endpoint command, as noted in the xHCI 0.95 errata.
+ *
+ *  3) The TD may have completed by the time the Stop Endpoint Command
+ *     completes, so software needs to handle that case too.
+ *
+ * This function should protect against the TD enqueueing code ringing the
+ * doorbell while this code is waiting for a Stop Endpoint command to complete.
+ * It also needs to account for multiple cancellations happening at the same
+ * time for the same endpoint.
+ *
+ * Note that this function can be called in any context, or so says
+ * usb_hcd_unlink_urb()
+ */
+int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+{
+	unsigned long flags;
+	int ret, i;
+	u32 temp;
+	struct xhci_hcd *xhci;
+	struct urb_priv	*urb_priv;
+	struct xhci_td *td;
+	unsigned int ep_index;
+	struct xhci_ring *ep_ring;
+	struct xhci_virt_ep *ep;
+
+	xhci = hcd_to_xhci(hcd);
+	spin_lock_irqsave(&xhci->lock, flags);
+	/* Make sure the URB hasn't completed or been unlinked already */
+	ret = usb_hcd_check_unlink_urb(hcd, urb, status);
+	if (ret || !urb->hcpriv)
+		goto done;
+	temp = xhci_readl(xhci, &xhci->op_regs->status);
+	if (temp == 0xffffffff || (xhci->xhc_state & XHCI_STATE_HALTED)) {
+		xhci_dbg(xhci, "HW died, freeing TD.\n");
+		urb_priv = urb->hcpriv;
+		for (i = urb_priv->td_cnt; i < urb_priv->length; i++) {
+			td = urb_priv->td[i];
+			if (!list_empty(&td->td_list))
+				list_del_init(&td->td_list);
+			if (!list_empty(&td->cancelled_td_list))
+				list_del_init(&td->cancelled_td_list);
+		}
+
+		usb_hcd_unlink_urb_from_ep(hcd, urb);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		usb_hcd_giveback_urb(hcd, urb, -ESHUTDOWN);
+		xhci_urb_free_priv(xhci, urb_priv);
+		return ret;
+	}
+	if ((xhci->xhc_state & XHCI_STATE_DYING) ||
+			(xhci->xhc_state & XHCI_STATE_HALTED)) {
+		xhci_dbg(xhci, "Ep 0x%x: URB %p to be canceled on "
+				"non-responsive xHCI host.\n",
+				urb->ep->desc.bEndpointAddress, urb);
+		/* Let the stop endpoint command watchdog timer (which set this
+		 * state) finish cleaning up the endpoint TD lists.  We must
+		 * have caught it in the middle of dropping a lock and giving
+		 * back an URB.
+		 */
+		goto done;
+	}
+
+	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
+	ep = &xhci->devs[urb->dev->slot_id]->eps[ep_index];
+	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
+	if (!ep_ring) {
+		ret = -EINVAL;
+		goto done;
+	}
+
+	urb_priv = urb->hcpriv;
+	i = urb_priv->td_cnt;
+	if (i < urb_priv->length)
+		xhci_dbg(xhci, "Cancel URB %p, dev %s, ep 0x%x, "
+				"starting at offset 0x%llx\n",
+				urb, urb->dev->devpath,
+				urb->ep->desc.bEndpointAddress,
+				(unsigned long long) xhci_trb_virt_to_dma(
+					urb_priv->td[i]->start_seg,
+					urb_priv->td[i]->first_trb));
+
+	for (; i < urb_priv->length; i++) {
+		td = urb_priv->td[i];
+		list_add_tail(&td->cancelled_td_list, &ep->cancelled_td_list);
+	}
+
+	/* Queue a stop endpoint command, but only if this is
+	 * the first cancellation to be handled.
+	 */
+	if (!(ep->ep_state & EP_HALT_PENDING)) {
+		ep->ep_state |= EP_HALT_PENDING;
+		ep->stop_cmds_pending++;
+		ep->stop_cmd_timer.expires = jiffies +
+			XHCI_STOP_EP_CMD_TIMEOUT * HZ;
+		add_timer(&ep->stop_cmd_timer);
+		xhci_queue_stop_endpoint(xhci, urb->dev->slot_id, ep_index, 0);
+		xhci_ring_cmd_db(xhci);
+	}
+done:
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return ret;
+}
+
+/* Drop an endpoint from a new bandwidth configuration for this device.
+ * Only one call to this function is allowed per endpoint before
+ * check_bandwidth() or reset_bandwidth() must be called.
+ * A call to xhci_drop_endpoint() followed by a call to xhci_add_endpoint() will
+ * add the endpoint to the schedule with possibly new parameters denoted by a
+ * different endpoint descriptor in usb_host_endpoint.
+ * A call to xhci_add_endpoint() followed by a call to xhci_drop_endpoint() is
+ * not allowed.
+ *
+ * The USB core will not allow URBs to be queued to an endpoint that is being
+ * disabled, so there's no need for mutual exclusion to protect
+ * the xhci->devs[slot_id] structure.
+ */
+int xhci_drop_endpoint(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	struct xhci_hcd *xhci;
+	struct xhci_container_ctx *in_ctx, *out_ctx;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_slot_ctx *slot_ctx;
+	unsigned int last_ctx;
+	unsigned int ep_index;
+	struct xhci_ep_ctx *ep_ctx;
+	u32 drop_flag;
+	u32 new_add_flags, new_drop_flags, new_slot_info;
+	int ret;
+
+	ret = xhci_check_args(hcd, udev, ep, 1, true, __func__);
+	if (ret <= 0)
+		return ret;
+	xhci = hcd_to_xhci(hcd);
+	if (xhci->xhc_state & XHCI_STATE_DYING)
+		return -ENODEV;
+
+	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
+	drop_flag = xhci_get_endpoint_flag(&ep->desc);
+	if (drop_flag == SLOT_FLAG || drop_flag == EP0_FLAG) {
+		xhci_dbg(xhci, "xHCI %s - can't drop slot or ep 0 %#x\n",
+				__func__, drop_flag);
+		return 0;
+	}
+
+	in_ctx = xhci->devs[udev->slot_id]->in_ctx;
+	out_ctx = xhci->devs[udev->slot_id]->out_ctx;
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+	ep_index = xhci_get_endpoint_index(&ep->desc);
+	ep_ctx = xhci_get_ep_ctx(xhci, out_ctx, ep_index);
+	/* If the HC already knows the endpoint is disabled,
+	 * or the HCD has noted it is disabled, ignore this request
+	 */
+	if (((ep_ctx->ep_info & cpu_to_le32(EP_STATE_MASK)) ==
+	     cpu_to_le32(EP_STATE_DISABLED)) ||
+	    le32_to_cpu(ctrl_ctx->drop_flags) &
+	    xhci_get_endpoint_flag(&ep->desc)) {
+		xhci_warn(xhci, "xHCI %s called with disabled ep %p\n",
+				__func__, ep);
+		return 0;
+	}
+
+	ctrl_ctx->drop_flags |= cpu_to_le32(drop_flag);
+	new_drop_flags = le32_to_cpu(ctrl_ctx->drop_flags);
+
+	ctrl_ctx->add_flags &= cpu_to_le32(~drop_flag);
+	new_add_flags = le32_to_cpu(ctrl_ctx->add_flags);
+
+	last_ctx = xhci_last_valid_endpoint(le32_to_cpu(ctrl_ctx->add_flags));
+	slot_ctx = xhci_get_slot_ctx(xhci, in_ctx);
+	/* Update the last valid endpoint context, if we deleted the last one */
+	if ((le32_to_cpu(slot_ctx->dev_info) & LAST_CTX_MASK) >
+	    LAST_CTX(last_ctx)) {
+		slot_ctx->dev_info &= cpu_to_le32(~LAST_CTX_MASK);
+		slot_ctx->dev_info |= cpu_to_le32(LAST_CTX(last_ctx));
+	}
+	new_slot_info = le32_to_cpu(slot_ctx->dev_info);
+
+	xhci_endpoint_zero(xhci, xhci->devs[udev->slot_id], ep);
+
+	xhci_dbg(xhci, "drop ep 0x%x, slot id %d, new drop flags = %#x, new add flags = %#x, new slot info = %#x\n",
+			(unsigned int) ep->desc.bEndpointAddress,
+			udev->slot_id,
+			(unsigned int) new_drop_flags,
+			(unsigned int) new_add_flags,
+			(unsigned int) new_slot_info);
+	return 0;
+}
+
+/* Add an endpoint to a new possible bandwidth configuration for this device.
+ * Only one call to this function is allowed per endpoint before
+ * check_bandwidth() or reset_bandwidth() must be called.
+ * A call to xhci_drop_endpoint() followed by a call to xhci_add_endpoint() will
+ * add the endpoint to the schedule with possibly new parameters denoted by a
+ * different endpoint descriptor in usb_host_endpoint.
+ * A call to xhci_add_endpoint() followed by a call to xhci_drop_endpoint() is
+ * not allowed.
+ *
+ * The USB core will not allow URBs to be queued to an endpoint until the
+ * configuration or alt setting is installed in the device, so there's no need
+ * for mutual exclusion to protect the xhci->devs[slot_id] structure.
+ */
+int xhci_add_endpoint(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint *ep)
+{
+	struct xhci_hcd *xhci;
+	struct xhci_container_ctx *in_ctx, *out_ctx;
+	unsigned int ep_index;
+	struct xhci_slot_ctx *slot_ctx;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	u32 added_ctxs;
+	unsigned int last_ctx;
+	u32 new_add_flags, new_drop_flags, new_slot_info;
+	struct xhci_virt_device *virt_dev;
+	int ret = 0;
+
+	ret = xhci_check_args(hcd, udev, ep, 1, true, __func__);
+	if (ret <= 0) {
+		/* So we won't queue a reset ep command for a root hub */
+		ep->hcpriv = NULL;
+		return ret;
+	}
+	xhci = hcd_to_xhci(hcd);
+	if (xhci->xhc_state & XHCI_STATE_DYING)
+		return -ENODEV;
+
+	added_ctxs = xhci_get_endpoint_flag(&ep->desc);
+	last_ctx = xhci_last_valid_endpoint(added_ctxs);
+	if (added_ctxs == SLOT_FLAG || added_ctxs == EP0_FLAG) {
+		/* FIXME when we have to issue an evaluate endpoint command to
+		 * deal with ep0 max packet size changing once we get the
+		 * descriptors
+		 */
+		xhci_dbg(xhci, "xHCI %s - can't add slot or ep 0 %#x\n",
+				__func__, added_ctxs);
+		return 0;
+	}
+
+	virt_dev = xhci->devs[udev->slot_id];
+	in_ctx = virt_dev->in_ctx;
+	out_ctx = virt_dev->out_ctx;
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+	ep_index = xhci_get_endpoint_index(&ep->desc);
+
+	/* If this endpoint is already in use, and the upper layers are trying
+	 * to add it again without dropping it, reject the addition.
+	 */
+	if (virt_dev->eps[ep_index].ring &&
+			!(le32_to_cpu(ctrl_ctx->drop_flags) &
+				xhci_get_endpoint_flag(&ep->desc))) {
+		xhci_warn(xhci, "Trying to add endpoint 0x%x "
+				"without dropping it.\n",
+				(unsigned int) ep->desc.bEndpointAddress);
+		return -EINVAL;
+	}
+
+	/* If the HCD has already noted the endpoint is enabled,
+	 * ignore this request.
+	 */
+	if (le32_to_cpu(ctrl_ctx->add_flags) &
+	    xhci_get_endpoint_flag(&ep->desc)) {
+		xhci_warn(xhci, "xHCI %s called with enabled ep %p\n",
+				__func__, ep);
+		return 0;
+	}
+
+	/*
+	 * Configuration and alternate setting changes must be done in
+	 * process context, not interrupt context (or so the documentation
+	 * for usb_set_interface() and usb_set_configuration() claims).
+	 */
+	if (xhci_endpoint_init(xhci, virt_dev, udev, ep, GFP_NOIO) < 0) {
+		dev_dbg(&udev->dev, "%s - could not initialize ep %#x\n",
+				__func__, ep->desc.bEndpointAddress);
+		return -ENOMEM;
+	}
+
+	ctrl_ctx->add_flags |= cpu_to_le32(added_ctxs);
+	new_add_flags = le32_to_cpu(ctrl_ctx->add_flags);
+
+	/* If xhci_endpoint_disable() was called for this endpoint, but the
+	 * xHC hasn't been notified yet through the check_bandwidth() call,
+	 * this re-adds a new state for the endpoint from the new endpoint
+	 * descriptors.  We must drop and re-add this endpoint, so we leave the
+	 * drop flags alone.
+	 */
+	new_drop_flags = le32_to_cpu(ctrl_ctx->drop_flags);
+
+	slot_ctx = xhci_get_slot_ctx(xhci, in_ctx);
+	/* Update the last valid endpoint context, if we just added one past it */
+	if ((le32_to_cpu(slot_ctx->dev_info) & LAST_CTX_MASK) <
+	    LAST_CTX(last_ctx)) {
+		slot_ctx->dev_info &= cpu_to_le32(~LAST_CTX_MASK);
+		slot_ctx->dev_info |= cpu_to_le32(LAST_CTX(last_ctx));
+	}
+	new_slot_info = le32_to_cpu(slot_ctx->dev_info);
+
+	/* Store the usb_device pointer for later use */
+	ep->hcpriv = udev;
+
+	xhci_dbg(xhci, "add ep 0x%x, slot id %d, new drop flags = %#x, new add flags = %#x, new slot info = %#x\n",
+			(unsigned int) ep->desc.bEndpointAddress,
+			udev->slot_id,
+			(unsigned int) new_drop_flags,
+			(unsigned int) new_add_flags,
+			(unsigned int) new_slot_info);
+	return 0;
+}
+
+static void xhci_zero_in_ctx(struct xhci_hcd *xhci, struct xhci_virt_device *virt_dev)
+{
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_ep_ctx *ep_ctx;
+	struct xhci_slot_ctx *slot_ctx;
+	int i;
+
+	/* When a device's add flag and drop flag are zero, any subsequent
+	 * configure endpoint command will leave that endpoint's state
+	 * untouched.  Make sure we don't leave any old state in the input
+	 * endpoint contexts.
+	 */
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx);
+	ctrl_ctx->drop_flags = 0;
+	ctrl_ctx->add_flags = 0;
+	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
+	slot_ctx->dev_info &= cpu_to_le32(~LAST_CTX_MASK);
+	/* Endpoint 0 is always valid */
+	slot_ctx->dev_info |= cpu_to_le32(LAST_CTX(1));
+	for (i = 1; i < 31; ++i) {
+		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, i);
+		ep_ctx->ep_info = 0;
+		ep_ctx->ep_info2 = 0;
+		ep_ctx->deq = 0;
+		ep_ctx->tx_info = 0;
+	}
+}
+
+static int xhci_configure_endpoint_result(struct xhci_hcd *xhci,
+		struct usb_device *udev, u32 *cmd_status)
+{
+	int ret;
+
+	switch (*cmd_status) {
+	case COMP_ENOMEM:
+		dev_warn(&udev->dev, "Not enough host controller resources "
+				"for new device state.\n");
+		ret = -ENOMEM;
+		/* FIXME: can we allocate more resources for the HC? */
+		break;
+	case COMP_BW_ERR:
+	case COMP_2ND_BW_ERR:
+		dev_warn(&udev->dev, "Not enough bandwidth "
+				"for new device state.\n");
+		ret = -ENOSPC;
+		/* FIXME: can we go back to the old state? */
+		break;
+	case COMP_TRB_ERR:
+		/* the HCD set up something wrong */
+		dev_warn(&udev->dev, "ERROR: Endpoint drop flag = 0, "
+				"add flag = 1, "
+				"and endpoint is not disabled.\n");
+		ret = -EINVAL;
+		break;
+	case COMP_DEV_ERR:
+		dev_warn(&udev->dev, "ERROR: Incompatible device for endpoint "
+				"configure command.\n");
+		ret = -ENODEV;
+		break;
+	case COMP_SUCCESS:
+		dev_dbg(&udev->dev, "Successful Endpoint Configure command\n");
+		ret = 0;
+		break;
+	default:
+		xhci_err(xhci, "ERROR: unexpected command completion "
+				"code 0x%x.\n", *cmd_status);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+static int xhci_evaluate_context_result(struct xhci_hcd *xhci,
+		struct usb_device *udev, u32 *cmd_status)
+{
+	int ret;
+	struct xhci_virt_device *virt_dev = xhci->devs[udev->slot_id];
+
+	switch (*cmd_status) {
+	case COMP_EINVAL:
+		dev_warn(&udev->dev, "WARN: xHCI driver setup invalid evaluate "
+				"context command.\n");
+		ret = -EINVAL;
+		break;
+	case COMP_EBADSLT:
+		dev_warn(&udev->dev, "WARN: slot not enabled for "
+				"evaluate context command.\n");
+		ret = -EINVAL;
+		break;
+	case COMP_CTX_STATE:
+		dev_warn(&udev->dev, "WARN: invalid context state for "
+				"evaluate context command.\n");
+		xhci_dbg_ctx(xhci, virt_dev->out_ctx, 1);
+		ret = -EINVAL;
+		break;
+	case COMP_DEV_ERR:
+		dev_warn(&udev->dev, "ERROR: Incompatible device for evaluate "
+				"context command.\n");
+		ret = -ENODEV;
+		break;
+	case COMP_MEL_ERR:
+		/* Max Exit Latency too large error */
+		dev_warn(&udev->dev, "WARN: Max Exit Latency too large\n");
+		ret = -EINVAL;
+		break;
+	case COMP_SUCCESS:
+		dev_dbg(&udev->dev, "Successful evaluate context command\n");
+		ret = 0;
+		break;
+	default:
+		xhci_err(xhci, "ERROR: unexpected command completion "
+				"code 0x%x.\n", *cmd_status);
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+static u32 xhci_count_num_new_endpoints(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx)
+{
+	struct xhci_input_control_ctx *ctrl_ctx;
+	u32 valid_add_flags;
+	u32 valid_drop_flags;
+
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+	/* Ignore the slot flag (bit 0), and the default control endpoint flag
+	 * (bit 1).  The default control endpoint is added during the Address
+	 * Device command and is never removed until the slot is disabled.
+	 */
+	valid_add_flags = ctrl_ctx->add_flags >> 2;
+	valid_drop_flags = ctrl_ctx->drop_flags >> 2;
+
+	/* Use hweight32 to count the number of ones in the add flags, or
+	 * number of endpoints added.  Don't count endpoints that are changed
+	 * (both added and dropped).
+	 */
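+	/* Illustration: valid_add_flags = 0b0110 with valid_drop_flags =
+	 * 0b0100 yields hweight32(0b0110) - hweight32(0b0100) = 2 - 1, i.e.
+	 * one genuinely new endpoint (the other one was merely changed).
+	 */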
+	return hweight32(valid_add_flags) -
+		hweight32(valid_add_flags & valid_drop_flags);
+}
+
+static unsigned int xhci_count_num_dropped_endpoints(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx)
+{
+	struct xhci_input_control_ctx *ctrl_ctx;
+	u32 valid_add_flags;
+	u32 valid_drop_flags;
+
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+	valid_add_flags = ctrl_ctx->add_flags >> 2;
+	valid_drop_flags = ctrl_ctx->drop_flags >> 2;
+
+	return hweight32(valid_drop_flags) -
+		hweight32(valid_add_flags & valid_drop_flags);
+}
+
+/*
+ * We need to reserve the new number of endpoints before the configure endpoint
+ * command completes.  We can't subtract the dropped endpoints from the number
+ * of active endpoints until the command completes because we can oversubscribe
+ * the host in this case:
+ *
+ *  - the first configure endpoint command drops more endpoints than it adds
+ *  - a second configure endpoint command that adds more endpoints is queued
+ *  - the first configure endpoint command fails, so the config is unchanged
+ *  - the second command may succeed, even though there aren't enough resources
+ *
+ * Must be called with xhci->lock held.
+ */
+static int xhci_reserve_host_resources(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx)
+{
+	u32 added_eps;
+
+	added_eps = xhci_count_num_new_endpoints(xhci, in_ctx);
+	if (xhci->num_active_eps + added_eps > xhci->limit_active_eps) {
+		xhci_dbg(xhci, "Not enough ep ctxs: "
+				"%u active, need to add %u, limit is %u.\n",
+				xhci->num_active_eps, added_eps,
+				xhci->limit_active_eps);
+		return -ENOMEM;
+	}
+	xhci->num_active_eps += added_eps;
+	xhci_dbg(xhci, "Adding %u ep ctxs, %u now active.\n", added_eps,
+			xhci->num_active_eps);
+	return 0;
+}
+
+/*
+ * The configure endpoint command was failed by the xHC for some other reason,
+ * so we
+ * need to revert the resources that failed configuration would have used.
+ *
+ * Must be called with xhci->lock held.
+ */
+static void xhci_free_host_resources(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx)
+{
+	u32 num_failed_eps;
+
+	num_failed_eps = xhci_count_num_new_endpoints(xhci, in_ctx);
+	xhci->num_active_eps -= num_failed_eps;
+	xhci_dbg(xhci, "Removing %u failed ep ctxs, %u now active.\n",
+			num_failed_eps,
+			xhci->num_active_eps);
+}
+
+/*
+ * Now that the command has completed, clean up the active endpoint count by
+ * subtracting out the endpoints that were dropped (but not changed).
+ *
+ * Must be called with xhci->lock held.
+ */
+static void xhci_finish_resource_reservation(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx)
+{
+	u32 num_dropped_eps;
+
+	num_dropped_eps = xhci_count_num_dropped_endpoints(xhci, in_ctx);
+	xhci->num_active_eps -= num_dropped_eps;
+	if (num_dropped_eps)
+		xhci_dbg(xhci, "Removing %u dropped ep ctxs, %u now active.\n",
+				num_dropped_eps,
+				xhci->num_active_eps);
+}
+
+static unsigned int xhci_get_block_size(struct usb_device *udev)
+{
+	switch (udev->speed) {
+	case USB_SPEED_LOW:
+	case USB_SPEED_FULL:
+		return FS_BLOCK;
+	case USB_SPEED_HIGH:
+		return HS_BLOCK;
+	case USB_SPEED_SUPER:
+		return SS_BLOCK;
+	case USB_SPEED_UNKNOWN:
+	case USB_SPEED_WIRELESS:
+	default:
+		/* Should never happen */
+		return 1;
+	}
+}
+
+static unsigned int
+xhci_get_largest_overhead(struct xhci_interval_bw *interval_bw)
+{
+	if (interval_bw->overhead[LS_OVERHEAD_TYPE])
+		return LS_OVERHEAD;
+	if (interval_bw->overhead[FS_OVERHEAD_TYPE])
+		return FS_OVERHEAD;
+	return HS_OVERHEAD;
+}
+
+/* If we are changing a LS/FS device under a HS hub,
+ * make sure (if we are activating a new TT) that the HS bus has enough
+ * bandwidth for this new TT.
+ */
+static int xhci_check_tt_bw_table(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		int old_active_eps)
+{
+	struct xhci_interval_bw_table *bw_table;
+	struct xhci_tt_bw_info *tt_info;
+
+	/* Find the bandwidth table for the root port this TT is attached to. */
+	bw_table = &xhci->rh_bw[virt_dev->real_port - 1].bw_table;
+	tt_info = virt_dev->tt_info;
+	/* If this TT already had active endpoints, the bandwidth for this TT
+	 * has already been added.  Removing all periodic endpoints (and thus
+	 * making the TT inactive) will only decrease the bandwidth used.
+	 */
+	if (old_active_eps)
+		return 0;
+	if (old_active_eps == 0 && tt_info->active_eps != 0) {
+		if (bw_table->bw_used + TT_HS_OVERHEAD > HS_BW_LIMIT)
+			return -ENOMEM;
+		return 0;
+	}
+	/* Not sure why we would have no new active endpoints...
+	 *
+	 * Maybe because of an Evaluate Context change for a hub update or a
+	 * control endpoint 0 max packet size change?
+	 * FIXME: skip the bandwidth calculation in that case.
+	 */
+	return 0;
+}
+
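+/*
+ * Illustration with assumed numbers (not the real constants): a reserve of
+ * 10 percent against a 25000-block limit computes bw_reserved = 2500, leaving
+ * 22500 blocks available for periodic IN traffic before -ENOMEM is returned.
+ */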
+static int xhci_check_ss_bw(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev)
+{
+	unsigned int bw_reserved;
+
+	bw_reserved = DIV_ROUND_UP(SS_BW_RESERVED*SS_BW_LIMIT_IN, 100);
+	if (virt_dev->bw_table->ss_bw_in > (SS_BW_LIMIT_IN - bw_reserved))
+		return -ENOMEM;
+
+	bw_reserved = DIV_ROUND_UP(SS_BW_RESERVED*SS_BW_LIMIT_OUT, 100);
+	if (virt_dev->bw_table->ss_bw_out > (SS_BW_LIMIT_OUT - bw_reserved))
+		return -ENOMEM;
+
+	return 0;
+}
+
+/*
+ * This algorithm is a very conservative estimate of the worst-case scheduling
+ * scenario for any one interval.  The hardware dynamically schedules the
+ * packets, so we can't tell which microframe could be the limiting factor in
+ * the bandwidth scheduling.  This only takes into account periodic endpoints.
+ *
+ * Obviously, we can't solve an NP-complete problem to find the minimum worst
+ * case scenario.  Instead, we come up with an estimate that is no less than
+ * the worst case bandwidth used for any one microframe, but may be an
+ * over-estimate.
+ *
+ * We walk the requirements for each endpoint by interval, starting with the
+ * smallest interval, and place packets in the schedule where there is only one
+ * possible way to schedule packets for that interval.  In order to simplify
+ * this algorithm, we record the largest max packet size for each interval, and
+ * assume all packets will be that size.
+ *
+ * For interval 0, we obviously must schedule all packets for each interval.
+ * The bandwidth for interval 0 is just the amount of data to be transmitted
+ * (the sum of all max ESIT payload sizes, plus any overhead per packet times
+ * the number of packets).
+ *
+ * For interval 1, we have two possible microframes to schedule those packets
+ * in.  For this algorithm, if we can schedule the same number of packets for
+ * each possible scheduling opportunity (each microframe), we will do so.  The
+ * remaining number of packets will be saved to be transmitted in the gaps in
+ * the next interval's scheduling sequence.
+ *
+ * As we move those remaining packets to be scheduled with interval 2 packets,
+ * we have to double the number of remaining packets to transmit.  This is
+ * because the intervals are actually powers of 2, and we would be transmitting
+ * the previous interval's packets twice in this interval.  We also have to be
+ * sure that when we look at the largest max packet size for this interval, we
+ * also look at the largest max packet size for the remaining packets and take
+ * the greater of the two.
+ *
+ * The algorithm continues to evenly distribute packets in each scheduling
+ * opportunity, and push the remaining packets out, until we get to the last
+ * interval.  Then those packets and their associated overhead are just added
+ * to the bandwidth used.
+ */
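+/*
+ * Worked example of the carry logic: with 5 interval-1 packets and nothing
+ * left over, packets_transmitted = 5 >> 2 = 1 per scheduling opportunity,
+ * leaving 5 % 4 = 1 packet, which is doubled to 2 when carried into the
+ * interval-2 pass.
+ */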
+static int xhci_check_bw_table(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		int old_active_eps)
+{
+	unsigned int bw_reserved;
+	unsigned int max_bandwidth;
+	unsigned int bw_used;
+	unsigned int block_size;
+	struct xhci_interval_bw_table *bw_table;
+	unsigned int packet_size = 0;
+	unsigned int overhead = 0;
+	unsigned int packets_transmitted = 0;
+	unsigned int packets_remaining = 0;
+	unsigned int i;
+
+	if (virt_dev->udev->speed == USB_SPEED_SUPER)
+		return xhci_check_ss_bw(xhci, virt_dev);
+
+	if (virt_dev->udev->speed == USB_SPEED_HIGH) {
+		max_bandwidth = HS_BW_LIMIT;
+		/* Convert percent of bus BW reserved to blocks reserved */
+		bw_reserved = DIV_ROUND_UP(HS_BW_RESERVED * max_bandwidth, 100);
+	} else {
+		max_bandwidth = FS_BW_LIMIT;
+		bw_reserved = DIV_ROUND_UP(FS_BW_RESERVED * max_bandwidth, 100);
+	}
+
+	bw_table = virt_dev->bw_table;
+	/* We need to translate the max packet size and max ESIT payloads into
+	 * the units the hardware uses.
+	 */
+	block_size = xhci_get_block_size(virt_dev->udev);
+
+	/* If we are manipulating a LS/FS device under a HS hub, double check
+	 * that the HS bus has enough bandwidth if we are activating a new TT.
+	 */
+	if (virt_dev->tt_info) {
+		xhci_dbg(xhci, "Recalculating BW for rootport %u\n",
+				virt_dev->real_port);
+		if (xhci_check_tt_bw_table(xhci, virt_dev, old_active_eps)) {
+			xhci_warn(xhci, "Not enough bandwidth on HS bus for "
+					"newly activated TT.\n");
+			return -ENOMEM;
+		}
+		xhci_dbg(xhci, "Recalculating BW for TT slot %u port %u\n",
+				virt_dev->tt_info->slot_id,
+				virt_dev->tt_info->ttport);
+	} else {
+		xhci_dbg(xhci, "Recalculating BW for rootport %u\n",
+				virt_dev->real_port);
+	}
+
+	/* Add in how much bandwidth will be used for interval zero, or the
+	 * rounded max ESIT payload + number of packets * largest overhead.
+	 */
+	bw_used = DIV_ROUND_UP(bw_table->interval0_esit_payload, block_size) +
+		bw_table->interval_bw[0].num_packets *
+		xhci_get_largest_overhead(&bw_table->interval_bw[0]);
+
+	for (i = 1; i < XHCI_MAX_INTERVAL; i++) {
+		unsigned int bw_added;
+		unsigned int largest_mps;
+		unsigned int interval_overhead;
+
+		/*
+		 * How many packets could we transmit in this interval?
+		 * If packets didn't fit in the previous interval, we will need
+		 * to transmit that many packets twice within this interval.
+		 */
+		packets_remaining = 2 * packets_remaining +
+			bw_table->interval_bw[i].num_packets;
+
+		/* Find the largest max packet size of this or the previous
+		 * interval.
+		 */
+		if (list_empty(&bw_table->interval_bw[i].endpoints))
+			largest_mps = 0;
+		else {
+			struct xhci_virt_ep *virt_ep;
+			struct list_head *ep_entry;
+
+			ep_entry = bw_table->interval_bw[i].endpoints.next;
+			virt_ep = list_entry(ep_entry,
+					struct xhci_virt_ep, bw_endpoint_list);
+			/* Convert to blocks, rounding up */
+			largest_mps = DIV_ROUND_UP(
+					virt_ep->bw_info.max_packet_size,
+					block_size);
+		}
+		if (largest_mps > packet_size)
+			packet_size = largest_mps;
+
+		/* Use the larger overhead of this or the previous interval. */
+		interval_overhead = xhci_get_largest_overhead(
+				&bw_table->interval_bw[i]);
+		if (interval_overhead > overhead)
+			overhead = interval_overhead;
+
+		/* How many packets can we evenly distribute across
+		 * (1 << (i + 1)) possible scheduling opportunities?
+		 */
+		packets_transmitted = packets_remaining >> (i + 1);
+
+		/* Add in the bandwidth used for those scheduled packets */
+		bw_added = packets_transmitted * (overhead + packet_size);
+
+		/* How many packets do we have remaining to transmit? */
+		packets_remaining = packets_remaining % (1 << (i + 1));
+
+		/* What largest max packet size should those packets have? */
+		/* If we've transmitted all packets, don't carry over the
+		 * largest packet size.
+		 */
+		if (packets_remaining == 0) {
+			packet_size = 0;
+			overhead = 0;
+		} else if (packets_transmitted > 0) {
+			/* Otherwise if we do have remaining packets, and we've
+			 * scheduled some packets in this interval, take the
+			 * largest max packet size from endpoints with this
+			 * interval.
+			 */
+			packet_size = largest_mps;
+			overhead = interval_overhead;
+		}
+		/* Otherwise carry over packet_size and overhead from the last
+		 * time we had a remainder.
+		 */
+		bw_used += bw_added;
+		if (bw_used > max_bandwidth) {
+			xhci_warn(xhci, "Not enough bandwidth. "
+					"Proposed: %u, Max: %u\n",
+				bw_used, max_bandwidth);
+			return -ENOMEM;
+		}
+	}
+	/*
+	 * Ok, we know we have some packets left over after even-handedly
+	 * scheduling interval 15.  We don't know which microframes they will
+	 * fit into, so we over-schedule and say they will be scheduled every
+	 * microframe.
+	 */
+	if (packets_remaining > 0)
+		bw_used += overhead + packet_size;
+
+	if (!virt_dev->tt_info && virt_dev->udev->speed == USB_SPEED_HIGH) {
+		unsigned int port_index = virt_dev->real_port - 1;
+
+		/* OK, we're manipulating a HS device attached to a
+		 * root port bandwidth domain.  Include the number of active TTs
+		 * in the bandwidth used.
+		 */
+		bw_used += TT_HS_OVERHEAD *
+			xhci->rh_bw[port_index].num_active_tts;
+	}
+
+	xhci_dbg(xhci, "Final bandwidth: %u, Limit: %u, Reserved: %u, "
+		"Available: %u " "percent\n",
+		bw_used, max_bandwidth, bw_reserved,
+		(max_bandwidth - bw_used - bw_reserved) * 100 /
+		max_bandwidth);
+
+	bw_used += bw_reserved;
+	if (bw_used > max_bandwidth) {
+		xhci_warn(xhci, "Not enough bandwidth. Proposed: %u, Max: %u\n",
+				bw_used, max_bandwidth);
+		return -ENOMEM;
+	}
+
+	bw_table->bw_used = bw_used;
+	return 0;
+}
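+
+/* A worked example of the carry-over above (illustrative numbers only):
+ * if interval 1 holds 5 packets, there are 1 << 2 = 4 scheduling
+ * opportunities, so packets_transmitted = 5 >> 2 = 1 packet is charged
+ * against each opportunity's budget and 5 % 4 = 1 packet carries over;
+ * entering interval 2 that remainder doubles to 2 before interval 2's
+ * own packets are added on top.
+ */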
+
+static bool xhci_is_async_ep(unsigned int ep_type)
+{
+	return (ep_type != ISOC_OUT_EP && ep_type != INT_OUT_EP &&
+					ep_type != ISOC_IN_EP &&
+					ep_type != INT_IN_EP);
+}
+
+static bool xhci_is_sync_in_ep(unsigned int ep_type)
+{
+	return (ep_type == ISOC_IN_EP || ep_type == INT_IN_EP);
+}
+
+static unsigned int xhci_get_ss_bw_consumed(struct xhci_bw_info *ep_bw)
+{
+	unsigned int mps = DIV_ROUND_UP(ep_bw->max_packet_size, SS_BLOCK);
+
+	if (ep_bw->ep_interval == 0)
+		return SS_OVERHEAD_BURST +
+			(ep_bw->mult * ep_bw->num_packets *
+					(SS_OVERHEAD + mps));
+	return DIV_ROUND_UP(ep_bw->mult * ep_bw->num_packets *
+				(SS_OVERHEAD + mps + SS_OVERHEAD_BURST),
+				1 << ep_bw->ep_interval);
+
+}
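+
+/* Example with illustrative numbers: a SuperSpeed endpoint with
+ * max_packet_size = 1024 costs mps = DIV_ROUND_UP(1024, SS_BLOCK)
+ * blocks.  With ep_interval = 0 the full cost is charged every service
+ * opportunity; with ep_interval = 2 the same cost is averaged over
+ * 1 << 2 = 4 opportunities by the DIV_ROUND_UP() above.
+ */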
+
+void xhci_drop_ep_from_interval_table(struct xhci_hcd *xhci,
+		struct xhci_bw_info *ep_bw,
+		struct xhci_interval_bw_table *bw_table,
+		struct usb_device *udev,
+		struct xhci_virt_ep *virt_ep,
+		struct xhci_tt_bw_info *tt_info)
+{
+	struct xhci_interval_bw	*interval_bw;
+	int normalized_interval;
+
+	if (xhci_is_async_ep(ep_bw->type))
+		return;
+
+	if (udev->speed == USB_SPEED_SUPER) {
+		if (xhci_is_sync_in_ep(ep_bw->type))
+			xhci->devs[udev->slot_id]->bw_table->ss_bw_in -=
+				xhci_get_ss_bw_consumed(ep_bw);
+		else
+			xhci->devs[udev->slot_id]->bw_table->ss_bw_out -=
+				xhci_get_ss_bw_consumed(ep_bw);
+		return;
+	}
+
+	/* SuperSpeed endpoints never get added to intervals in the table, so
+	 * this check is only valid for HS/FS/LS devices.
+	 */
+	if (list_empty(&virt_ep->bw_endpoint_list))
+		return;
+	/* For LS/FS devices, we need to translate the interval expressed in
+	 * microframes to frames.
+	 */
+	if (udev->speed == USB_SPEED_HIGH)
+		normalized_interval = ep_bw->ep_interval;
+	else
+		normalized_interval = ep_bw->ep_interval - 3;
+
+	if (normalized_interval == 0)
+		bw_table->interval0_esit_payload -= ep_bw->max_esit_payload;
+	interval_bw = &bw_table->interval_bw[normalized_interval];
+	interval_bw->num_packets -= ep_bw->num_packets;
+	switch (udev->speed) {
+	case USB_SPEED_LOW:
+		interval_bw->overhead[LS_OVERHEAD_TYPE] -= 1;
+		break;
+	case USB_SPEED_FULL:
+		interval_bw->overhead[FS_OVERHEAD_TYPE] -= 1;
+		break;
+	case USB_SPEED_HIGH:
+		interval_bw->overhead[HS_OVERHEAD_TYPE] -= 1;
+		break;
+	case USB_SPEED_SUPER:
+	case USB_SPEED_UNKNOWN:
+	case USB_SPEED_WIRELESS:
+		/* Should never happen because only LS/FS/HS endpoints will get
+		 * added to the endpoint list.
+		 */
+		return;
+	}
+	if (tt_info)
+		tt_info->active_eps -= 1;
+	list_del_init(&virt_ep->bw_endpoint_list);
+}
+
+static void xhci_add_ep_to_interval_table(struct xhci_hcd *xhci,
+		struct xhci_bw_info *ep_bw,
+		struct xhci_interval_bw_table *bw_table,
+		struct usb_device *udev,
+		struct xhci_virt_ep *virt_ep,
+		struct xhci_tt_bw_info *tt_info)
+{
+	struct xhci_interval_bw	*interval_bw;
+	struct xhci_virt_ep *smaller_ep;
+	int normalized_interval;
+
+	if (xhci_is_async_ep(ep_bw->type))
+		return;
+
+	if (udev->speed == USB_SPEED_SUPER) {
+		if (xhci_is_sync_in_ep(ep_bw->type))
+			xhci->devs[udev->slot_id]->bw_table->ss_bw_in +=
+				xhci_get_ss_bw_consumed(ep_bw);
+		else
+			xhci->devs[udev->slot_id]->bw_table->ss_bw_out +=
+				xhci_get_ss_bw_consumed(ep_bw);
+		return;
+	}
+
+	/* For LS/FS devices, we need to translate the interval expressed in
+	 * microframes to frames.
+	 */
+	if (udev->speed == USB_SPEED_HIGH)
+		normalized_interval = ep_bw->ep_interval;
+	else
+		normalized_interval = ep_bw->ep_interval - 3;
+
+	if (normalized_interval == 0)
+		bw_table->interval0_esit_payload += ep_bw->max_esit_payload;
+	interval_bw = &bw_table->interval_bw[normalized_interval];
+	interval_bw->num_packets += ep_bw->num_packets;
+	switch (udev->speed) {
+	case USB_SPEED_LOW:
+		interval_bw->overhead[LS_OVERHEAD_TYPE] += 1;
+		break;
+	case USB_SPEED_FULL:
+		interval_bw->overhead[FS_OVERHEAD_TYPE] += 1;
+		break;
+	case USB_SPEED_HIGH:
+		interval_bw->overhead[HS_OVERHEAD_TYPE] += 1;
+		break;
+	case USB_SPEED_SUPER:
+	case USB_SPEED_UNKNOWN:
+	case USB_SPEED_WIRELESS:
+		/* Should never happen because only LS/FS/HS endpoints will get
+		 * added to the endpoint list.
+		 */
+		return;
+	}
+
+	if (tt_info)
+		tt_info->active_eps += 1;
+	/* Insert the endpoint into the list, largest max packet size first. */
+	list_for_each_entry(smaller_ep, &interval_bw->endpoints,
+			bw_endpoint_list) {
+		if (ep_bw->max_packet_size >=
+				smaller_ep->bw_info.max_packet_size) {
+			/* Add the new ep before the smaller endpoint */
+			list_add_tail(&virt_ep->bw_endpoint_list,
+					&smaller_ep->bw_endpoint_list);
+			return;
+		}
+	}
+	/* Add the new endpoint at the end of the list. */
+	list_add_tail(&virt_ep->bw_endpoint_list,
+			&interval_bw->endpoints);
+}
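+
+/* The interval list is kept sorted by descending max packet size, so
+ * e.g. adding endpoints with MPS 64, 512 and 192 yields the order
+ * 512 -> 192 -> 64; xhci_check_bw_table() can then read the worst-case
+ * MPS for an interval from the first list entry alone.
+ */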
+
+void xhci_update_tt_active_eps(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		int old_active_eps)
+{
+	struct xhci_root_port_bw_info *rh_bw_info;
+	if (!virt_dev->tt_info)
+		return;
+
+	rh_bw_info = &xhci->rh_bw[virt_dev->real_port - 1];
+	if (old_active_eps == 0 &&
+				virt_dev->tt_info->active_eps != 0) {
+		rh_bw_info->num_active_tts += 1;
+		rh_bw_info->bw_table.bw_used += TT_HS_OVERHEAD;
+	} else if (old_active_eps != 0 &&
+				virt_dev->tt_info->active_eps == 0) {
+		rh_bw_info->num_active_tts -= 1;
+		rh_bw_info->bw_table.bw_used -= TT_HS_OVERHEAD;
+	}
+}
+
+static int xhci_reserve_bandwidth(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct xhci_container_ctx *in_ctx)
+{
+	struct xhci_bw_info ep_bw_info[31];
+	int i;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	int old_active_eps = 0;
+
+	if (virt_dev->tt_info)
+		old_active_eps = virt_dev->tt_info->active_eps;
+
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+
+	for (i = 0; i < 31; i++) {
+		if (!EP_IS_ADDED(ctrl_ctx, i) && !EP_IS_DROPPED(ctrl_ctx, i))
+			continue;
+
+		/* Make a copy of the BW info in case we need to revert this */
+		memcpy(&ep_bw_info[i], &virt_dev->eps[i].bw_info,
+				sizeof(ep_bw_info[i]));
+		/* Drop the endpoint from the interval table if the endpoint is
+		 * being dropped or changed.
+		 */
+		if (EP_IS_DROPPED(ctrl_ctx, i))
+			xhci_drop_ep_from_interval_table(xhci,
+					&virt_dev->eps[i].bw_info,
+					virt_dev->bw_table,
+					virt_dev->udev,
+					&virt_dev->eps[i],
+					virt_dev->tt_info);
+	}
+	/* Overwrite the information stored in the endpoints' bw_info */
+	xhci_update_bw_info(xhci, virt_dev->in_ctx, ctrl_ctx, virt_dev);
+	for (i = 0; i < 31; i++) {
+		/* Add any changed or added endpoints to the interval table */
+		if (EP_IS_ADDED(ctrl_ctx, i))
+			xhci_add_ep_to_interval_table(xhci,
+					&virt_dev->eps[i].bw_info,
+					virt_dev->bw_table,
+					virt_dev->udev,
+					&virt_dev->eps[i],
+					virt_dev->tt_info);
+	}
+
+	if (!xhci_check_bw_table(xhci, virt_dev, old_active_eps)) {
+		/* Ok, this fits in the bandwidth we have.
+		 * Update the number of active TTs.
+		 */
+		xhci_update_tt_active_eps(xhci, virt_dev, old_active_eps);
+		return 0;
+	}
+
+	/* We don't have enough bandwidth for this, revert the stored info. */
+	for (i = 0; i < 31; i++) {
+		if (!EP_IS_ADDED(ctrl_ctx, i) && !EP_IS_DROPPED(ctrl_ctx, i))
+			continue;
+
+		/* Drop the new copies of any added or changed endpoints from
+		 * the interval table.
+		 */
+		if (EP_IS_ADDED(ctrl_ctx, i)) {
+			xhci_drop_ep_from_interval_table(xhci,
+					&virt_dev->eps[i].bw_info,
+					virt_dev->bw_table,
+					virt_dev->udev,
+					&virt_dev->eps[i],
+					virt_dev->tt_info);
+		}
+		/* Revert the endpoint back to its old information */
+		memcpy(&virt_dev->eps[i].bw_info, &ep_bw_info[i],
+				sizeof(ep_bw_info[i]));
+		/* Add any changed or dropped endpoints back into the table */
+		if (EP_IS_DROPPED(ctrl_ctx, i))
+			xhci_add_ep_to_interval_table(xhci,
+					&virt_dev->eps[i].bw_info,
+					virt_dev->bw_table,
+					virt_dev->udev,
+					&virt_dev->eps[i],
+					virt_dev->tt_info);
+	}
+	return -ENOMEM;
+}
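+
+/* Note on the flag bits used above: add_flags/drop_flags index the
+ * contexts with an offset of one (bit 0 is the slot context, bit 1 is
+ * ep0), so EP_IS_ADDED(ctrl_ctx, i) tests add_flags bit (i + 1).  A
+ * changed endpoint at index 2, for example, has bit 3 set in both
+ * masks and is first dropped from, then re-added to, the interval
+ * table.
+ */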
+
+
+/* Issue a configure endpoint command or evaluate context command
+ * and wait for it to finish.
+ */
+static int xhci_configure_endpoint(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct xhci_command *command,
+		bool ctx_change, bool must_succeed)
+{
+	int ret;
+	int timeleft;
+	unsigned long flags;
+	struct xhci_container_ctx *in_ctx;
+	struct completion *cmd_completion;
+	u32 *cmd_status;
+	struct xhci_virt_device *virt_dev;
+	union xhci_trb *cmd_trb;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	virt_dev = xhci->devs[udev->slot_id];
+
+	if (command)
+		in_ctx = command->in_ctx;
+	else
+		in_ctx = virt_dev->in_ctx;
+
+	if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK) &&
+			xhci_reserve_host_resources(xhci, in_ctx)) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		xhci_warn(xhci, "Not enough host resources, "
+				"active endpoint contexts = %u\n",
+				xhci->num_active_eps);
+		return -ENOMEM;
+	}
+	if ((xhci->quirks & XHCI_SW_BW_CHECKING) &&
+			xhci_reserve_bandwidth(xhci, virt_dev, in_ctx)) {
+		if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK))
+			xhci_free_host_resources(xhci, in_ctx);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		xhci_warn(xhci, "Not enough bandwidth\n");
+		return -ENOMEM;
+	}
+
+	if (command) {
+		cmd_completion = command->completion;
+		cmd_status = &command->status;
+		command->command_trb = xhci->cmd_ring->enqueue;
+
+		/* Enqueue pointer can be left pointing to the link TRB,
+		 * so we must handle that
+		 */
+		if (TRB_TYPE_LINK_LE32(command->command_trb->link.control))
+			command->command_trb =
+				xhci->cmd_ring->enq_seg->next->trbs;
+
+		list_add_tail(&command->cmd_list, &virt_dev->cmd_list);
+	} else {
+		cmd_completion = &virt_dev->cmd_completion;
+		cmd_status = &virt_dev->cmd_status;
+	}
+	init_completion(cmd_completion);
+
+	cmd_trb = xhci->cmd_ring->dequeue;
+	if (!ctx_change)
+		ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma,
+				udev->slot_id, must_succeed);
+	else
+		ret = xhci_queue_evaluate_context(xhci, in_ctx->dma,
+				udev->slot_id, must_succeed);
+	if (ret < 0) {
+		if (command)
+			list_del(&command->cmd_list);
+		if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK))
+			xhci_free_host_resources(xhci, in_ctx);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		xhci_dbg(xhci, "FIXME allocate a new ring segment\n");
+		return -ENOMEM;
+	}
+	xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* Wait for the configure endpoint command to complete */
+	timeleft = wait_for_completion_interruptible_timeout(
+			cmd_completion,
+			XHCI_CMD_DEFAULT_TIMEOUT);
+	if (timeleft <= 0) {
+		xhci_warn(xhci, "%s while waiting for %s command\n",
+				timeleft == 0 ? "Timeout" : "Signal",
+				ctx_change == 0 ?
+					"configure endpoint" :
+					"evaluate context");
+		/* cancel the configure endpoint command */
+		ret = xhci_cancel_cmd(xhci, command, cmd_trb);
+		if (ret < 0)
+			return ret;
+		return -ETIME;
+	}
+
+	if (!ctx_change)
+		ret = xhci_configure_endpoint_result(xhci, udev, cmd_status);
+	else
+		ret = xhci_evaluate_context_result(xhci, udev, cmd_status);
+
+	if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK)) {
+		spin_lock_irqsave(&xhci->lock, flags);
+		/* If the command failed, remove the reserved resources.
+		 * Otherwise, clean up the estimate to include dropped eps.
+		 */
+		if (ret)
+			xhci_free_host_resources(xhci, in_ctx);
+		else
+			xhci_finish_resource_reservation(xhci, in_ctx);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	}
+	return ret;
+}
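+
+/* ctx_change selects between the two commands: false issues Configure
+ * Endpoint (add/drop endpoint contexts), true issues Evaluate Context,
+ * which only tweaks fields of already-configured contexts (e.g. the
+ * ep0 max packet size).  Both share the same completion plumbing above.
+ */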
+
+/* Called after one or more calls to xhci_add_endpoint() or
+ * xhci_drop_endpoint().  If this call fails, the USB core is expected
+ * to call xhci_reset_bandwidth().
+ *
+ * Since we are in the middle of changing either configuration or
+ * installing a new alt setting, the USB core won't allow URBs to be
+ * enqueued for any endpoint on the old config or interface.  Nothing
+ * else should be touching the xhci->devs[slot_id] structure, so we
+ * don't need to take the xhci->lock for manipulating that.
+ */
+int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	int i;
+	int ret = 0;
+	struct xhci_hcd *xhci;
+	struct xhci_virt_device	*virt_dev;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_slot_ctx *slot_ctx;
+
+	ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__);
+	if (ret <= 0)
+		return ret;
+	xhci = hcd_to_xhci(hcd);
+	if (xhci->xhc_state & XHCI_STATE_DYING)
+		return -ENODEV;
+
+	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
+	virt_dev = xhci->devs[udev->slot_id];
+
+	/* See section 4.6.6 - A0 = 1; A1 = D0 = D1 = 0 */
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx);
+	ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);
+	ctrl_ctx->add_flags &= cpu_to_le32(~EP0_FLAG);
+	ctrl_ctx->drop_flags &= cpu_to_le32(~(SLOT_FLAG | EP0_FLAG));
+
+	/* Don't issue the command if there are no endpoints to update. */
+	if (ctrl_ctx->add_flags == cpu_to_le32(SLOT_FLAG) &&
+			ctrl_ctx->drop_flags == 0)
+		return 0;
+
+	xhci_dbg(xhci, "New Input Control Context:\n");
+	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
+	xhci_dbg_ctx(xhci, virt_dev->in_ctx,
+		     LAST_CTX_TO_EP_NUM(le32_to_cpu(slot_ctx->dev_info)));
+
+	ret = xhci_configure_endpoint(xhci, udev, NULL,
+			false, false);
+	if (ret) {
+		/* Callee should call reset_bandwidth() */
+		return ret;
+	}
+
+	xhci_dbg(xhci, "Output context after successful config ep cmd:\n");
+	xhci_dbg_ctx(xhci, virt_dev->out_ctx,
+		     LAST_CTX_TO_EP_NUM(le32_to_cpu(slot_ctx->dev_info)));
+
+	/* Free any rings that were dropped, but not changed. */
+	for (i = 1; i < 31; ++i) {
+		if ((le32_to_cpu(ctrl_ctx->drop_flags) & (1 << (i + 1))) &&
+		    !(le32_to_cpu(ctrl_ctx->add_flags) & (1 << (i + 1))))
+			xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
+	}
+	xhci_zero_in_ctx(xhci, virt_dev);
+	/*
+	 * Install any rings for completely new endpoints or changed endpoints,
+	 * and free or cache any old rings from changed endpoints.
+	 */
+	for (i = 1; i < 31; ++i) {
+		if (!virt_dev->eps[i].new_ring)
+			continue;
+		/* Only cache or free the old ring if it exists.
+		 * It may not if this is the first add of an endpoint.
+		 */
+		if (virt_dev->eps[i].ring)
+			xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
+		virt_dev->eps[i].ring = virt_dev->eps[i].new_ring;
+		virt_dev->eps[i].new_ring = NULL;
+	}
+
+	return ret;
+}
+
+void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd *xhci;
+	struct xhci_virt_device	*virt_dev;
+	int i, ret;
+
+	ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__);
+	if (ret <= 0)
+		return;
+	xhci = hcd_to_xhci(hcd);
+
+	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
+	virt_dev = xhci->devs[udev->slot_id];
+	/* Free any rings allocated for added endpoints */
+	for (i = 0; i < 31; ++i) {
+		if (virt_dev->eps[i].new_ring) {
+			xhci_ring_free(xhci, virt_dev->eps[i].new_ring);
+			virt_dev->eps[i].new_ring = NULL;
+		}
+	}
+	xhci_zero_in_ctx(xhci, virt_dev);
+}
+
+static void xhci_setup_input_ctx_for_config_ep(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_container_ctx *out_ctx,
+		u32 add_flags, u32 drop_flags)
+{
+	struct xhci_input_control_ctx *ctrl_ctx;
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx);
+	ctrl_ctx->add_flags = cpu_to_le32(add_flags);
+	ctrl_ctx->drop_flags = cpu_to_le32(drop_flags);
+	xhci_slot_copy(xhci, in_ctx, out_ctx);
+	ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);
+
+	xhci_dbg(xhci, "Input Context:\n");
+	xhci_dbg_ctx(xhci, in_ctx, xhci_last_valid_endpoint(add_flags));
+}
+
+static void xhci_setup_input_ctx_for_quirk(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		struct xhci_dequeue_state *deq_state)
+{
+	struct xhci_container_ctx *in_ctx;
+	struct xhci_ep_ctx *ep_ctx;
+	u32 added_ctxs;
+	dma_addr_t addr;
+
+	xhci_endpoint_copy(xhci, xhci->devs[slot_id]->in_ctx,
+			xhci->devs[slot_id]->out_ctx, ep_index);
+	in_ctx = xhci->devs[slot_id]->in_ctx;
+	ep_ctx = xhci_get_ep_ctx(xhci, in_ctx, ep_index);
+	addr = xhci_trb_virt_to_dma(deq_state->new_deq_seg,
+			deq_state->new_deq_ptr);
+	if (addr == 0) {
+		xhci_warn(xhci, "WARN Cannot submit config ep after "
+				"reset ep command\n");
+		xhci_warn(xhci, "WARN deq seg = %p, deq ptr = %p\n",
+				deq_state->new_deq_seg,
+				deq_state->new_deq_ptr);
+		return;
+	}
+	ep_ctx->deq = cpu_to_le64(addr | deq_state->new_cycle_state);
+
+	added_ctxs = xhci_get_endpoint_flag_from_index(ep_index);
+	xhci_setup_input_ctx_for_config_ep(xhci, xhci->devs[slot_id]->in_ctx,
+			xhci->devs[slot_id]->out_ctx, added_ctxs, added_ctxs);
+}
+
+void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci,
+		struct usb_device *udev, unsigned int ep_index)
+{
+	struct xhci_dequeue_state deq_state;
+	struct xhci_virt_ep *ep;
+
+	xhci_dbg(xhci, "Cleaning up stalled endpoint ring\n");
+	ep = &xhci->devs[udev->slot_id]->eps[ep_index];
+	/* We need to move the HW's dequeue pointer past this TD,
+	 * or it will attempt to resend it on the next doorbell ring.
+	 */
+	xhci_find_new_dequeue_state(xhci, udev->slot_id,
+			ep_index, ep->stopped_stream, ep->stopped_td,
+			&deq_state);
+
+	/* HW with the reset endpoint quirk will use the saved dequeue state to
+	 * issue a configure endpoint command later.
+	 */
+	if (!(xhci->quirks & XHCI_RESET_EP_QUIRK)) {
+		xhci_dbg(xhci, "Queueing new dequeue state\n");
+		xhci_queue_new_dequeue_state(xhci, udev->slot_id,
+				ep_index, ep->stopped_stream, &deq_state);
+	} else {
+		/* Better hope no one uses the input context between now and the
+		 * reset endpoint completion!
+		 * XXX: No idea how this hardware will react when stream rings
+		 * are enabled.
+		 */
+		xhci_dbg(xhci, "Setting up input context for "
+				"configure endpoint command\n");
+		xhci_setup_input_ctx_for_quirk(xhci, udev->slot_id,
+				ep_index, &deq_state);
+	}
+}
+
+/* Deal with stalled endpoints.  The core should have sent the control message
+ * to clear the halt condition.  However, we need to make the xHCI hardware
+ * reset its sequence number, since a device will expect a sequence number of
+ * zero after the halt condition is cleared.
+ * Context: in_interrupt
+ */
+void xhci_endpoint_reset(struct usb_hcd *hcd,
+		struct usb_host_endpoint *ep)
+{
+	struct xhci_hcd *xhci;
+	struct usb_device *udev;
+	unsigned int ep_index;
+	unsigned long flags;
+	int ret;
+	struct xhci_virt_ep *virt_ep;
+
+	xhci = hcd_to_xhci(hcd);
+	udev = (struct usb_device *) ep->hcpriv;
+	/* Called with a root hub endpoint (or an endpoint that wasn't added
+	 * with xhci_add_endpoint()).
+	 */
+	if (!ep->hcpriv)
+		return;
+	ep_index = xhci_get_endpoint_index(&ep->desc);
+	virt_ep = &xhci->devs[udev->slot_id]->eps[ep_index];
+	if (!virt_ep->stopped_td) {
+		xhci_dbg(xhci, "Endpoint 0x%x not halted, refusing to reset.\n",
+				ep->desc.bEndpointAddress);
+		return;
+	}
+	if (usb_endpoint_xfer_control(&ep->desc)) {
+		xhci_dbg(xhci, "Control endpoint stall already handled.\n");
+		return;
+	}
+
+	xhci_dbg(xhci, "Queueing reset endpoint command\n");
+	spin_lock_irqsave(&xhci->lock, flags);
+	ret = xhci_queue_reset_ep(xhci, udev->slot_id, ep_index);
+	/*
+	 * Can't change the ring dequeue pointer until it's transitioned to the
+	 * stopped state, which is only upon a successful reset endpoint
+	 * command.  Better hope that last command worked!
+	 */
+	if (!ret) {
+		xhci_cleanup_stalled_ring(xhci, udev, ep_index);
+		kfree(virt_ep->stopped_td);
+		xhci_ring_cmd_db(xhci);
+	}
+	virt_ep->stopped_td = NULL;
+	virt_ep->stopped_trb = NULL;
+	virt_ep->stopped_stream = 0;
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	if (ret)
+		xhci_warn(xhci, "FIXME allocate a new ring segment\n");
+}
+
+static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+		struct usb_device *udev, struct usb_host_endpoint *ep,
+		unsigned int slot_id)
+{
+	int ret;
+	unsigned int ep_index;
+	unsigned int ep_state;
+
+	if (!ep)
+		return -EINVAL;
+	ret = xhci_check_args(xhci_to_hcd(xhci), udev, ep, 1, true, __func__);
+	if (ret <= 0)
+		return -EINVAL;
+	if (ep->ss_ep_comp.bmAttributes == 0) {
+		xhci_warn(xhci, "WARN: SuperSpeed Endpoint Companion"
+				" descriptor for ep 0x%x does not support streams\n",
+				ep->desc.bEndpointAddress);
+		return -EINVAL;
+	}
+
+	ep_index = xhci_get_endpoint_index(&ep->desc);
+	ep_state = xhci->devs[slot_id]->eps[ep_index].ep_state;
+	if (ep_state & EP_HAS_STREAMS ||
+			ep_state & EP_GETTING_STREAMS) {
+		xhci_warn(xhci, "WARN: SuperSpeed bulk endpoint 0x%x "
+				"already has streams set up.\n",
+				ep->desc.bEndpointAddress);
+		xhci_warn(xhci, "Send email to xHCI maintainer and ask for "
+				"dynamic stream context array reallocation.\n");
+		return -EINVAL;
+	}
+	if (!list_empty(&xhci->devs[slot_id]->eps[ep_index].ring->td_list)) {
+		xhci_warn(xhci, "Cannot setup streams for SuperSpeed bulk "
+				"endpoint 0x%x; URBs are pending.\n",
+				ep->desc.bEndpointAddress);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static void xhci_calculate_streams_entries(struct xhci_hcd *xhci,
+		unsigned int *num_streams, unsigned int *num_stream_ctxs)
+{
+	unsigned int max_streams;
+
+	/* The stream context array size must be a power of two */
+	*num_stream_ctxs = roundup_pow_of_two(*num_streams);
+	/*
+	 * Find out how many primary stream array entries the host controller
+	 * supports.  Later we may use secondary stream arrays (similar to 2nd
+	 * level page entries), but that's an optional feature for xHCI host
+	 * controllers. xHCs must support at least 4 stream IDs.
+	 */
+	max_streams = HCC_MAX_PSA(xhci->hcc_params);
+	if (*num_stream_ctxs > max_streams) {
+		xhci_dbg(xhci, "xHCI HW only supports %u stream ctx entries.\n",
+				max_streams);
+		*num_stream_ctxs = max_streams;
+		*num_streams = max_streams;
+	}
+}
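+
+/* Example with illustrative numbers: a driver asking for 5 stream IDs
+ * needs a context array of roundup_pow_of_two(5) = 8 entries; if the
+ * host reports an HCC_MAX_PSA of only 4, both the array size and the
+ * usable stream count are clamped down to 4.
+ */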
+
+/* Returns an error code if one of the endpoints already has streams.
+ * This does not change any data structures; it only checks and gathers
+ * information.
+ */
+static int xhci_calculate_streams_and_bitmask(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		unsigned int *num_streams, u32 *changed_ep_bitmask)
+{
+	unsigned int max_streams;
+	unsigned int endpoint_flag;
+	int i;
+	int ret;
+
+	for (i = 0; i < num_eps; i++) {
+		ret = xhci_check_streams_endpoint(xhci, udev,
+				eps[i], udev->slot_id);
+		if (ret < 0)
+			return ret;
+
+		max_streams = usb_ss_max_streams(&eps[i]->ss_ep_comp);
+		if (max_streams < (*num_streams - 1)) {
+			xhci_dbg(xhci, "Ep 0x%x only supports %u stream IDs.\n",
+					eps[i]->desc.bEndpointAddress,
+					max_streams);
+			*num_streams = max_streams+1;
+		}
+
+		endpoint_flag = xhci_get_endpoint_flag(&eps[i]->desc);
+		if (*changed_ep_bitmask & endpoint_flag)
+			return -EINVAL;
+		*changed_ep_bitmask |= endpoint_flag;
+	}
+	return 0;
+}
+
+static u32 xhci_calculate_no_streams_bitmask(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps)
+{
+	u32 changed_ep_bitmask = 0;
+	unsigned int slot_id;
+	unsigned int ep_index;
+	unsigned int ep_state;
+	int i;
+
+	slot_id = udev->slot_id;
+	if (!xhci->devs[slot_id])
+		return 0;
+
+	for (i = 0; i < num_eps; i++) {
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		ep_state = xhci->devs[slot_id]->eps[ep_index].ep_state;
+		/* Are streams already being freed for the endpoint? */
+		if (ep_state & EP_GETTING_NO_STREAMS) {
+			xhci_warn(xhci, "WARN Can't disable streams for "
+					"endpoint 0x%x\n, "
+					"streams are being disabled already.",
+					eps[i]->desc.bEndpointAddress);
+			return 0;
+		}
+		/* Are there actually any streams to free? */
+		if (!(ep_state & EP_HAS_STREAMS) &&
+				!(ep_state & EP_GETTING_STREAMS)) {
+			xhci_warn(xhci, "WARN Can't disable streams for "
+					"endpoint 0x%x\n, "
+					"streams are already disabled!",
+					eps[i]->desc.bEndpointAddress);
+			xhci_warn(xhci, "WARN xhci_free_streams() called "
+					"with non-streams endpoint\n");
+			return 0;
+		}
+		changed_ep_bitmask |= xhci_get_endpoint_flag(&eps[i]->desc);
+	}
+	return changed_ep_bitmask;
+}
+
+/*
+ * USB device drivers use this function (through the HCD interface in the USB
+ * core) to prepare a set of bulk endpoints to use streams.  Streams are used to
+ * coordinate mass storage command queueing across multiple endpoints (basically
+ * a stream ID == a task ID).
+ *
+ * Setting up streams involves allocating the same size stream context array
+ * for each endpoint and issuing a configure endpoint command for all endpoints.
+ *
+ * Don't allow the call to succeed if one endpoint only supports one stream
+ * (which means it doesn't support streams at all).
+ *
+ * Drivers may get less stream IDs than they asked for, if the host controller
+ * hardware or endpoints claim they can't support the number of requested
+ * stream IDs.
+ */
+int xhci_alloc_streams(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		unsigned int num_streams, gfp_t mem_flags)
+{
+	int i, ret;
+	struct xhci_hcd *xhci;
+	struct xhci_virt_device *vdev;
+	struct xhci_command *config_cmd;
+	unsigned int ep_index;
+	unsigned int num_stream_ctxs;
+	unsigned long flags;
+	u32 changed_ep_bitmask = 0;
+
+	if (!eps)
+		return -EINVAL;
+
+	/* Add one to the number of streams requested to account for
+	 * stream 0 that is reserved for xHCI usage.
+	 */
+	num_streams += 1;
+	xhci = hcd_to_xhci(hcd);
+	xhci_dbg(xhci, "Driver wants %u stream IDs (including stream 0).\n",
+			num_streams);
+
+	config_cmd = xhci_alloc_command(xhci, true, true, mem_flags);
+	if (!config_cmd) {
+		xhci_dbg(xhci, "Could not allocate xHCI command structure.\n");
+		return -ENOMEM;
+	}
+
+	/* Check to make sure all endpoints are not already configured for
+	 * streams.  While we're at it, find the maximum number of streams that
+	 * all the endpoints will support and check for duplicate endpoints.
+	 */
+	spin_lock_irqsave(&xhci->lock, flags);
+	ret = xhci_calculate_streams_and_bitmask(xhci, udev, eps,
+			num_eps, &num_streams, &changed_ep_bitmask);
+	if (ret < 0) {
+		xhci_free_command(xhci, config_cmd);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return ret;
+	}
+	if (num_streams <= 1) {
+		xhci_warn(xhci, "WARN: endpoints can't handle "
+				"more than one stream.\n");
+		xhci_free_command(xhci, config_cmd);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return -EINVAL;
+	}
+	vdev = xhci->devs[udev->slot_id];
+	/* Mark each endpoint as being in transition, so
+	 * xhci_urb_enqueue() will reject all URBs.
+	 */
+	for (i = 0; i < num_eps; i++) {
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		vdev->eps[ep_index].ep_state |= EP_GETTING_STREAMS;
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* Setup internal data structures and allocate HW data structures for
+	 * streams (but don't install the HW structures in the input context
+	 * until we're sure all memory allocation succeeded).
+	 */
+	xhci_calculate_streams_entries(xhci, &num_streams, &num_stream_ctxs);
+	xhci_dbg(xhci, "Need %u stream ctx entries for %u stream IDs.\n",
+			num_stream_ctxs, num_streams);
+
+	for (i = 0; i < num_eps; i++) {
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		vdev->eps[ep_index].stream_info = xhci_alloc_stream_info(xhci,
+				num_stream_ctxs,
+				num_streams, mem_flags);
+		if (!vdev->eps[ep_index].stream_info)
+			goto cleanup;
+		/* Set maxPstreams in endpoint context and update deq ptr to
+		 * point to stream context array. FIXME
+		 */
+	}
+
+	/* Set up the input context for a configure endpoint command. */
+	for (i = 0; i < num_eps; i++) {
+		struct xhci_ep_ctx *ep_ctx;
+
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		ep_ctx = xhci_get_ep_ctx(xhci, config_cmd->in_ctx, ep_index);
+
+		xhci_endpoint_copy(xhci, config_cmd->in_ctx,
+				vdev->out_ctx, ep_index);
+		xhci_setup_streams_ep_input_ctx(xhci, ep_ctx,
+				vdev->eps[ep_index].stream_info);
+	}
+	/* Tell the HW to drop its old copy of the endpoint context info
+	 * and add the updated copy from the input context.
+	 */
+	xhci_setup_input_ctx_for_config_ep(xhci, config_cmd->in_ctx,
+			vdev->out_ctx, changed_ep_bitmask, changed_ep_bitmask);
+
+	/* Issue and wait for the configure endpoint command */
+	ret = xhci_configure_endpoint(xhci, udev, config_cmd,
+			false, false);
+
+	/* xHC rejected the configure endpoint command for some reason, so we
+	 * leave the old ring intact and free our internal streams data
+	 * structure.
+	 */
+	if (ret < 0)
+		goto cleanup;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	for (i = 0; i < num_eps; i++) {
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		vdev->eps[ep_index].ep_state &= ~EP_GETTING_STREAMS;
+		xhci_dbg(xhci, "Slot %u ep ctx %u now has streams.\n",
+			 udev->slot_id, ep_index);
+		vdev->eps[ep_index].ep_state |= EP_HAS_STREAMS;
+	}
+	xhci_free_command(xhci, config_cmd);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* Subtract 1 for stream 0, which drivers can't use */
+	return num_streams - 1;
+
+cleanup:
+	/* If it didn't work, free the streams! */
+	for (i = 0; i < num_eps; i++) {
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		xhci_free_stream_info(xhci, vdev->eps[ep_index].stream_info);
+		vdev->eps[ep_index].stream_info = NULL;
+		/* FIXME Unset maxPstreams in endpoint context and
+		 * update deq ptr to point to normal stream ring.
+		 */
+		vdev->eps[ep_index].ep_state &= ~EP_GETTING_STREAMS;
+		vdev->eps[ep_index].ep_state &= ~EP_HAS_STREAMS;
+		xhci_endpoint_zero(xhci, vdev, eps[i]);
+	}
+	xhci_free_command(xhci, config_cmd);
+	return -ENOMEM;
+}
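+
+/* Minimal usage sketch (hcd, udev, ep_in and ep_out are assumed to be
+ * set up already by the caller; the names are illustrative only):
+ *
+ *	struct usb_host_endpoint *eps[2] = { ep_in, ep_out };
+ *	int streams = xhci_alloc_streams(hcd, udev, eps, 2, 16, GFP_NOIO);
+ *
+ * On success the usable stream IDs are 1..streams (stream 0 is reserved
+ * for the xHC); a negative return means no streams were set up.
+ */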
+
+/* Transition the endpoint from using streams to being a "normal" endpoint
+ * without streams.
+ *
+ * Modify the endpoint context state, submit a configure endpoint command,
+ * and free all endpoint rings for streams if that completes successfully.
+ */
+int xhci_free_streams(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		gfp_t mem_flags)
+{
+	int i, ret;
+	struct xhci_hcd *xhci;
+	struct xhci_virt_device *vdev;
+	struct xhci_command *command;
+	unsigned int ep_index;
+	unsigned long flags;
+	u32 changed_ep_bitmask;
+
+	xhci = hcd_to_xhci(hcd);
+	vdev = xhci->devs[udev->slot_id];
+
+	/* Set up a configure endpoint command to remove the streams rings */
+	spin_lock_irqsave(&xhci->lock, flags);
+	changed_ep_bitmask = xhci_calculate_no_streams_bitmask(xhci,
+			udev, eps, num_eps);
+	if (changed_ep_bitmask == 0) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return -EINVAL;
+	}
+
+	/* Use the xhci_command structure from the first endpoint.  We may have
+	 * allocated too many, but the driver may call xhci_free_streams() for
+	 * each endpoint it grouped into one call to xhci_alloc_streams().
+	 */
+	ep_index = xhci_get_endpoint_index(&eps[0]->desc);
+	command = vdev->eps[ep_index].stream_info->free_streams_command;
+	for (i = 0; i < num_eps; i++) {
+		struct xhci_ep_ctx *ep_ctx;
+
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		ep_ctx = xhci_get_ep_ctx(xhci, command->in_ctx, ep_index);
+		xhci->devs[udev->slot_id]->eps[ep_index].ep_state |=
+			EP_GETTING_NO_STREAMS;
+
+		xhci_endpoint_copy(xhci, command->in_ctx,
+				vdev->out_ctx, ep_index);
+		xhci_setup_no_streams_ep_input_ctx(xhci, ep_ctx,
+				&vdev->eps[ep_index]);
+	}
+	xhci_setup_input_ctx_for_config_ep(xhci, command->in_ctx,
+			vdev->out_ctx, changed_ep_bitmask, changed_ep_bitmask);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* Issue and wait for the configure endpoint command,
+	 * which must succeed.
+	 */
+	ret = xhci_configure_endpoint(xhci, udev, command,
+			false, true);
+
+	/* xHC rejected the configure endpoint command for some reason, so we
+	 * leave the streams rings intact.
+	 */
+	if (ret < 0)
+		return ret;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	for (i = 0; i < num_eps; i++) {
+		ep_index = xhci_get_endpoint_index(&eps[i]->desc);
+		xhci_free_stream_info(xhci, vdev->eps[ep_index].stream_info);
+		vdev->eps[ep_index].stream_info = NULL;
+		/* FIXME Unset maxPstreams in endpoint context and
+		 * update deq ptr to point to normal stream ring.
+		 */
+		vdev->eps[ep_index].ep_state &= ~EP_GETTING_NO_STREAMS;
+		vdev->eps[ep_index].ep_state &= ~EP_HAS_STREAMS;
+	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	return 0;
+}
+
+/*
+ * Deletes endpoint resources for endpoints that were active before a Reset
+ * Device command, or a Disable Slot command.  The Reset Device command leaves
+ * the control endpoint intact, whereas the Disable Slot command deletes it.
+ *
+ * Must be called with xhci->lock held.
+ */
+void xhci_free_device_endpoint_resources(struct xhci_hcd *xhci,
+	struct xhci_virt_device *virt_dev, bool drop_control_ep)
+{
+	int i;
+	unsigned int num_dropped_eps = 0;
+	unsigned int drop_flags = 0;
+
+	for (i = (drop_control_ep ? 0 : 1); i < 31; i++) {
+		if (virt_dev->eps[i].ring) {
+			drop_flags |= 1 << i;
+			num_dropped_eps++;
+		}
+	}
+	xhci->num_active_eps -= num_dropped_eps;
+	if (num_dropped_eps)
+		xhci_dbg(xhci, "Dropped %u ep ctxs, flags = 0x%x, "
+				"%u now active.\n",
+				num_dropped_eps, drop_flags,
+				xhci->num_active_eps);
+}
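+
+/* Example: with rings left on endpoint indices 3 and 7, drop_flags
+ * becomes (1 << 3) | (1 << 7) = 0x88 and two endpoint contexts are
+ * returned to the host's pool.
+ */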
+
+/*
+ * This submits a Reset Device Command, which will set the device state to 0,
+ * set the device address to 0, and disable all the endpoints except the default
+ * control endpoint.  The USB core should come back and call
+ * xhci_address_device(), and then re-set up the configuration.  If this is
+ * called because of a usb_reset_and_verify_device(), then the old alternate
+ * settings will be re-installed through the normal bandwidth allocation
+ * functions.
+ *
+ * Wait for the Reset Device command to finish.  Remove all structures
+ * associated with the endpoints that were disabled.  Clear the input device
+ * structure?  Cache the rings?  Reset the control endpoint 0 max packet size?
+ *
+ * If the virt_dev to be reset does not exist or does not match the udev,
+ * it means the device is lost, possibly due to the xHC restore error and
+ * re-initialization during S3/S4. In this case, call xhci_alloc_dev() to
+ * re-allocate the device.
+ */
+int xhci_discover_or_reset_device(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	int ret, i;
+	unsigned long flags;
+	struct xhci_hcd *xhci;
+	unsigned int slot_id;
+	struct xhci_virt_device *virt_dev;
+	struct xhci_command *reset_device_cmd;
+	int timeleft;
+	int last_freed_endpoint;
+	struct xhci_slot_ctx *slot_ctx;
+	int old_active_eps = 0;
+
+	ret = xhci_check_args(hcd, udev, NULL, 0, false, __func__);
+	if (ret <= 0)
+		return ret;
+	xhci = hcd_to_xhci(hcd);
+	slot_id = udev->slot_id;
+	virt_dev = xhci->devs[slot_id];
+	if (!virt_dev) {
+		xhci_dbg(xhci, "The device to be reset with slot ID %u does "
+				"not exist. Re-allocate the device\n", slot_id);
+		ret = xhci_alloc_dev(hcd, udev);
+		if (ret == 1)
+			return 0;
+		else
+			return -EINVAL;
+	}
+
+	if (virt_dev->udev != udev) {
+		/* If the virt_dev and the udev do not match, this virt_dev
+		 * may belong to another udev.
+		 * Re-allocate the device.
+		 */
+		xhci_dbg(xhci, "The device to be reset with slot ID %u does "
+				"not match the udev. Re-allocate the device\n",
+				slot_id);
+		ret = xhci_alloc_dev(hcd, udev);
+		if (ret == 1)
+			return 0;
+		else
+			return -EINVAL;
+	}
+
+	/* If device is not setup, there is no point in resetting it */
+	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->out_ctx);
+	if (GET_SLOT_STATE(le32_to_cpu(slot_ctx->dev_state)) ==
+						SLOT_STATE_DISABLED)
+		return 0;
+
+	xhci_dbg(xhci, "Resetting device with slot ID %u\n", slot_id);
+	/* Allocate the command structure that holds the struct completion.
+	 * Assume we're in process context, since the normal device reset
+	 * process has to wait for the device anyway.  Storage devices are
+	 * reset as part of error handling, so use GFP_NOIO instead of
+	 * GFP_KERNEL.
+	 */
+	reset_device_cmd = xhci_alloc_command(xhci, false, true, GFP_NOIO);
+	if (!reset_device_cmd) {
+		xhci_dbg(xhci, "Couldn't allocate command structure.\n");
+		return -ENOMEM;
+	}
+
+	/* Attempt to submit the Reset Device command to the command ring */
+	spin_lock_irqsave(&xhci->lock, flags);
+	reset_device_cmd->command_trb = xhci->cmd_ring->enqueue;
+
+	/* Enqueue pointer can be left pointing to the link TRB,
+	 * so we must handle that
+	 */
+	if (TRB_TYPE_LINK_LE32(reset_device_cmd->command_trb->link.control))
+		reset_device_cmd->command_trb =
+			xhci->cmd_ring->enq_seg->next->trbs;
+
+	list_add_tail(&reset_device_cmd->cmd_list, &virt_dev->cmd_list);
+	ret = xhci_queue_reset_device(xhci, slot_id);
+	if (ret) {
+		xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
+		list_del(&reset_device_cmd->cmd_list);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		goto command_cleanup;
+	}
+	xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* Wait for the Reset Device command to finish */
+	timeleft = wait_for_completion_interruptible_timeout(
+			reset_device_cmd->completion,
+			USB_CTRL_SET_TIMEOUT);
+	if (timeleft <= 0) {
+		xhci_warn(xhci, "%s while waiting for reset device command\n",
+				timeleft == 0 ? "Timeout" : "Signal");
+		spin_lock_irqsave(&xhci->lock, flags);
+		/* The timeout might have raced with the event ring handler, so
+		 * only delete from the list if the item isn't poisoned.
+		 */
+		if (reset_device_cmd->cmd_list.next != LIST_POISON1)
+			list_del(&reset_device_cmd->cmd_list);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		ret = -ETIME;
+		goto command_cleanup;
+	}
+
+	/* The Reset Device command can't fail, according to the 0.95/0.96 spec,
+	 * unless we tried to reset a slot ID that wasn't enabled,
+	 * or the device wasn't in the addressed or configured state.
+	 */
+	ret = reset_device_cmd->status;
+	switch (ret) {
+	case COMP_EBADSLT: /* 0.95 completion code for bad slot ID */
+	case COMP_CTX_STATE: /* 0.96 completion code for same thing */
+		xhci_info(xhci, "Can't reset device (slot ID %u) in %s state\n",
+				slot_id,
+				xhci_get_slot_state(xhci, virt_dev->out_ctx));
+		xhci_info(xhci, "Not freeing device rings.\n");
+		/* Don't treat this as an error.  May change my mind later. */
+		ret = 0;
+		goto command_cleanup;
+	case COMP_SUCCESS:
+		xhci_dbg(xhci, "Successful reset device command.\n");
+		break;
+	default:
+		if (xhci_is_vendor_info_code(xhci, ret))
+			break;
+		xhci_warn(xhci, "Unknown completion code %u for "
+				"reset device command.\n", ret);
+		ret = -EINVAL;
+		goto command_cleanup;
+	}
+
+	/* Free up host controller endpoint resources */
+	if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK)) {
+		spin_lock_irqsave(&xhci->lock, flags);
+		/* Don't delete the default control endpoint resources */
+		xhci_free_device_endpoint_resources(xhci, virt_dev, false);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	}
+
+	/* Everything but endpoint 0 is disabled, so free or cache the rings. */
+	last_freed_endpoint = 1;
+	for (i = 1; i < 31; ++i) {
+		struct xhci_virt_ep *ep = &virt_dev->eps[i];
+
+		if (ep->ep_state & EP_HAS_STREAMS) {
+			xhci_free_stream_info(xhci, ep->stream_info);
+			ep->stream_info = NULL;
+			ep->ep_state &= ~EP_HAS_STREAMS;
+		}
+
+		if (ep->ring) {
+			xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
+			last_freed_endpoint = i;
+		}
+		if (!list_empty(&virt_dev->eps[i].bw_endpoint_list))
+			xhci_drop_ep_from_interval_table(xhci,
+					&virt_dev->eps[i].bw_info,
+					virt_dev->bw_table,
+					udev,
+					&virt_dev->eps[i],
+					virt_dev->tt_info);
+		xhci_clear_endpoint_bw_info(&virt_dev->eps[i].bw_info);
+	}
+	/* If necessary, update the number of active TTs on this root port */
+	xhci_update_tt_active_eps(xhci, virt_dev, old_active_eps);
+
+	xhci_dbg(xhci, "Output context after successful reset device cmd:\n");
+	xhci_dbg_ctx(xhci, virt_dev->out_ctx, last_freed_endpoint);
+	ret = 0;
+
+command_cleanup:
+	xhci_free_command(xhci, reset_device_cmd);
+	return ret;
+}
+
+/*
+ * At this point, the struct usb_device is about to go away, the device has
+ * disconnected, and all traffic has been stopped and the endpoints have been
+ * disabled.  Free any HC data structures associated with that device.
+ */
+void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct xhci_virt_device *virt_dev;
+	unsigned long flags;
+	u32 state;
+	int i, ret;
+
+	ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__);
+	/* If the host is halted due to driver unload, we still need to free the
+	 * device.
+	 */
+	if (ret <= 0 && ret != -ENODEV)
+		return;
+
+	virt_dev = xhci->devs[udev->slot_id];
+
+	/* Stop any wayward timer functions (which may grab the lock) */
+	for (i = 0; i < 31; ++i) {
+		virt_dev->eps[i].ep_state &= ~EP_HALT_PENDING;
+		del_timer_sync(&virt_dev->eps[i].stop_cmd_timer);
+	}
+
+	if (udev->usb2_hw_lpm_enabled) {
+		xhci_set_usb2_hardware_lpm(hcd, udev, 0);
+		udev->usb2_hw_lpm_enabled = 0;
+	}
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	/* Don't disable the slot if the host controller is dead. */
+	state = xhci_readl(xhci, &xhci->op_regs->status);
+	if (state == 0xffffffff || (xhci->xhc_state & XHCI_STATE_DYING) ||
+			(xhci->xhc_state & XHCI_STATE_HALTED)) {
+		xhci_free_virt_device(xhci, udev->slot_id);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return;
+	}
+
+	if (xhci_queue_slot_control(xhci, TRB_DISABLE_SLOT, udev->slot_id)) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
+		return;
+	}
+	xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	/*
+	 * Event command completion handler will free any data structures
+	 * associated with the slot.  XXX Can free sleep?
+	 */
+}
+
+/*
+ * Checks if we have enough host controller resources for the default control
+ * endpoint.
+ *
+ * Must be called with xhci->lock held.
+ */
+static int xhci_reserve_host_control_ep_resources(struct xhci_hcd *xhci)
+{
+	if (xhci->num_active_eps + 1 > xhci->limit_active_eps) {
+		xhci_dbg(xhci, "Not enough ep ctxs: "
+				"%u active, need to add 1, limit is %u.\n",
+				xhci->num_active_eps, xhci->limit_active_eps);
+		return -ENOMEM;
+	}
+	xhci->num_active_eps += 1;
+	xhci_dbg(xhci, "Adding 1 ep ctx, %u now active.\n",
+			xhci->num_active_eps);
+	return 0;
+}
+
+
+/*
+ * Returns 0 if the xHC ran out of device slots, the Enable Slot command
+ * timed out, or allocating memory failed.  Returns 1 on success.
+ */
+int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	unsigned long flags;
+	int timeleft;
+	int ret;
+	union xhci_trb *cmd_trb;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	cmd_trb = xhci->cmd_ring->dequeue;
+	ret = xhci_queue_slot_control(xhci, TRB_ENABLE_SLOT, 0);
+	if (ret) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
+		return 0;
+	}
+	xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* XXX: how much time for xHC slot assignment? */
+	timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
+			XHCI_CMD_DEFAULT_TIMEOUT);
+	if (timeleft <= 0) {
+		xhci_warn(xhci, "%s while waiting for a slot\n",
+				timeleft == 0 ? "Timeout" : "Signal");
+		/* cancel the enable slot request */
+		return xhci_cancel_cmd(xhci, NULL, cmd_trb);
+	}
+
+	if (!xhci->slot_id) {
+		xhci_err(xhci, "Error while assigning device slot ID\n");
+		return 0;
+	}
+
+	if ((xhci->quirks & XHCI_EP_LIMIT_QUIRK)) {
+		spin_lock_irqsave(&xhci->lock, flags);
+		ret = xhci_reserve_host_control_ep_resources(xhci);
+		if (ret) {
+			spin_unlock_irqrestore(&xhci->lock, flags);
+			xhci_warn(xhci, "Not enough host resources, "
+					"active endpoint contexts = %u\n",
+					xhci->num_active_eps);
+			goto disable_slot;
+		}
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	}
+	/* Use GFP_NOIO, since this function can be called from
+	 * xhci_discover_or_reset_device(), which may be called as part of
+	 * mass storage driver error handling.
+	 */
+	if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_NOIO)) {
+		xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n");
+		goto disable_slot;
+	}
+	udev->slot_id = xhci->slot_id;
+	/* Is this a LS or FS device under a HS hub? */
+	/* Hub or peripheral? */
+	return 1;
+
+disable_slot:
+	/* Disable slot, if we can do it without mem alloc */
+	spin_lock_irqsave(&xhci->lock, flags);
+	if (!xhci_queue_slot_control(xhci, TRB_DISABLE_SLOT, udev->slot_id))
+		xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return 0;
+}
+
+/*
+ * Issue an Address Device command (which will issue a SetAddress request to
+ * the device).
+ * We should be protected by the usb_address0_mutex in khubd's hub_port_init, so
+ * we should only issue and wait on one address command at the same time.
+ *
+ * We add one to the device address issued by the hardware because the USB core
+ * uses address 1 for the root hubs (even though they're not really devices).
+ */
+int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	unsigned long flags;
+	int timeleft;
+	struct xhci_virt_device *virt_dev;
+	int ret = 0;
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct xhci_slot_ctx *slot_ctx;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	u64 temp_64;
+	union xhci_trb *cmd_trb;
+
+	if (!udev->slot_id) {
+		xhci_dbg(xhci, "Bad Slot ID %d\n", udev->slot_id);
+		return -EINVAL;
+	}
+
+	virt_dev = xhci->devs[udev->slot_id];
+
+	if (WARN_ON(!virt_dev)) {
+		/*
+		 * In plug/unplug torture test with an NEC controller,
+		 * a zero-dereference was observed once due to virt_dev = 0.
+		 * Print useful debug rather than crash if it is observed again!
+		 */
+		xhci_warn(xhci, "Virt dev invalid for slot_id 0x%x!\n",
+			udev->slot_id);
+		return -EINVAL;
+	}
+
+	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
+	/*
+	 * If this is the first Set Address since device plug-in or
+	 * virt_device reallocation after a resume with an xHCI power loss,
+	 * then set up the slot context.
+	 */
+	if (!slot_ctx->dev_info)
+		xhci_setup_addressable_virt_dev(xhci, udev);
+	/* Otherwise, update the control endpoint ring enqueue pointer. */
+	else
+		xhci_copy_ep0_dequeue_into_input_ctx(xhci, udev);
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx);
+	ctrl_ctx->add_flags = cpu_to_le32(SLOT_FLAG | EP0_FLAG);
+	ctrl_ctx->drop_flags = 0;
+
+	xhci_dbg(xhci, "Slot ID %d Input Context:\n", udev->slot_id);
+	xhci_dbg_ctx(xhci, virt_dev->in_ctx, 2);
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	cmd_trb = xhci->cmd_ring->dequeue;
+	ret = xhci_queue_address_device(xhci, virt_dev->in_ctx->dma,
+					udev->slot_id);
+	if (ret) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		xhci_dbg(xhci, "FIXME: allocate a command ring segment\n");
+		return ret;
+	}
+	xhci_ring_cmd_db(xhci);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	/* ctrl tx can take up to 5 sec; XXX: need more time for xHC? */
+	timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
+			XHCI_CMD_DEFAULT_TIMEOUT);
+	/* FIXME: From section 4.3.4: "Software shall be responsible for timing
+	 * the SetAddress() "recovery interval" required by USB and aborting the
+	 * command on a timeout."
+	 */
+	if (timeleft <= 0) {
+		xhci_warn(xhci, "%s while waiting for address device command\n",
+				timeleft == 0 ? "Timeout" : "Signal");
+		/* cancel the address device command */
+		ret = xhci_cancel_cmd(xhci, NULL, cmd_trb);
+		if (ret < 0)
+			return ret;
+		return -ETIME;
+	}
+
+	switch (virt_dev->cmd_status) {
+	case COMP_CTX_STATE:
+	case COMP_EBADSLT:
+		xhci_err(xhci, "Setup ERROR: address device command for slot %d.\n",
+				udev->slot_id);
+		ret = -EINVAL;
+		break;
+	case COMP_TX_ERR:
+		dev_warn(&udev->dev, "Device not responding to set address.\n");
+		ret = -EPROTO;
+		break;
+	case COMP_DEV_ERR:
+		dev_warn(&udev->dev, "ERROR: Incompatible device for address "
+				"device command.\n");
+		ret = -ENODEV;
+		break;
+	case COMP_SUCCESS:
+		xhci_dbg(xhci, "Successful Address Device command\n");
+		break;
+	default:
+		xhci_err(xhci, "ERROR: unexpected command completion "
+				"code 0x%x.\n", virt_dev->cmd_status);
+		xhci_dbg(xhci, "Slot ID %d Output Context:\n", udev->slot_id);
+		xhci_dbg_ctx(xhci, virt_dev->out_ctx, 2);
+		ret = -EINVAL;
+		break;
+	}
+	if (ret)
+		return ret;
+	temp_64 = xhci_read_64(xhci, &xhci->op_regs->dcbaa_ptr);
+	xhci_dbg(xhci, "Op regs DCBAA ptr = %#016llx\n", temp_64);
+	xhci_dbg(xhci, "Slot ID %d dcbaa entry @%p = %#016llx\n",
+		 udev->slot_id,
+		 &xhci->dcbaa->dev_context_ptrs[udev->slot_id],
+		 (unsigned long long)
+		 le64_to_cpu(xhci->dcbaa->dev_context_ptrs[udev->slot_id]));
+	xhci_dbg(xhci, "Output Context DMA address = %#08llx\n",
+			(unsigned long long)virt_dev->out_ctx->dma);
+	xhci_dbg(xhci, "Slot ID %d Input Context:\n", udev->slot_id);
+	xhci_dbg_ctx(xhci, virt_dev->in_ctx, 2);
+	xhci_dbg(xhci, "Slot ID %d Output Context:\n", udev->slot_id);
+	xhci_dbg_ctx(xhci, virt_dev->out_ctx, 2);
+	/*
+	 * USB core uses address 1 for the roothubs, so we add one to the
+	 * address given back to us by the HC.
+	 */
+	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->out_ctx);
+	/* Use kernel assigned address for devices; store xHC assigned
+	 * address locally. */
+	virt_dev->address = (le32_to_cpu(slot_ctx->dev_state) & DEV_ADDR_MASK)
+		+ 1;
+	/* Zero the input context control for later use */
+	ctrl_ctx->add_flags = 0;
+	ctrl_ctx->drop_flags = 0;
+
+	xhci_dbg(xhci, "Internal device address = %d\n", virt_dev->address);
+
+	return 0;
+}
+
+/*
+ * Translate the port index into the real index in the HW port status
+ * registers.  Calculate the offset between the port's PORTSC register
+ * and the port status base, then divide by the number of per-port
+ * registers to get the real index.  Raw port numbers are 1-based.
+ */
+int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	__le32 __iomem *base_addr = &xhci->op_regs->port_status_base;
+	__le32 __iomem *addr;
+	int raw_port;
+
+	if (hcd->speed != HCD_USB3)
+		addr = xhci->usb2_ports[port1 - 1];
+	else
+		addr = xhci->usb3_ports[port1 - 1];
+
+	raw_port = (addr - base_addr)/NUM_PORT_REGS + 1;
+	return raw_port;
+}
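+
+/* Example: each port owns NUM_PORT_REGS 32-bit registers (PORTSC,
+ * PORTPMSC, PORTLI, ...), so a PORTSC address two ports past the base
+ * gives raw_port = (2 * NUM_PORT_REGS) / NUM_PORT_REGS + 1 = 3.
+ */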
+
+#ifdef CONFIG_PM_RUNTIME
+
+/* BESL to HIRD Encoding array for USB2 LPM */
+static int xhci_besl_encoding[16] = {125, 150, 200, 300, 400, 500, 1000, 2000,
+	3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000};
+
+/* Calculate HIRD/BESL for USB2 PORTPMSC */
+static int xhci_calculate_hird_besl(struct xhci_hcd *xhci,
+					struct usb_device *udev)
+{
+	int u2del, besl, besl_host;
+	int besl_device = 0;
+	u32 field;
+
+	u2del = HCS_U2_LATENCY(xhci->hcs_params3);
+	field = le32_to_cpu(udev->bos->ext_cap->bmAttributes);
+
+	if (field & USB_BESL_SUPPORT) {
+		for (besl_host = 0; besl_host < 16; besl_host++) {
+			if (xhci_besl_encoding[besl_host] >= u2del)
+				break;
+		}
+		/* Use baseline BESL value as default */
+		if (field & USB_BESL_BASELINE_VALID)
+			besl_device = USB_GET_BESL_BASELINE(field);
+		else if (field & USB_BESL_DEEP_VALID)
+			besl_device = USB_GET_BESL_DEEP(field);
+	} else {
+		if (u2del <= 50)
+			besl_host = 0;
+		else
+			besl_host = (u2del - 51) / 75 + 1;
+	}
+
+	besl = besl_host + besl_device;
+	if (besl > 15)
+		besl = 15;
+
+	return besl;
+}
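+
+/* Example: u2del = 300 with BESL support picks besl_host = 3, the
+ * first index where xhci_besl_encoding[] >= 300; without BESL support
+ * the same u2del gives besl_host = (300 - 51) / 75 + 1 = 4.
+ */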
+
+static int xhci_usb2_software_lpm_test(struct usb_hcd *hcd,
+					struct usb_device *udev)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	struct dev_info	*dev_info;
+	__le32 __iomem	**port_array;
+	__le32 __iomem	*addr, *pm_addr;
+	u32		temp, dev_id;
+	unsigned int	port_num;
+	unsigned long	flags;
+	int		hird;
+	int		ret;
+
+	if (hcd->speed == HCD_USB3 || !xhci->sw_lpm_support ||
+			!udev->lpm_capable)
+		return -EINVAL;
+
+	/* So far, LPM is only supported for non-hub devices on the root hub */
+	if (!udev->parent || udev->parent->parent ||
+			udev->descriptor.bDeviceClass == USB_CLASS_HUB)
+		return -EINVAL;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	/* Look for devices in lpm_failed_devs list */
+	dev_id = le16_to_cpu(udev->descriptor.idVendor) << 16 |
+			le16_to_cpu(udev->descriptor.idProduct);
+	list_for_each_entry(dev_info, &xhci->lpm_failed_devs, list) {
+		if (dev_info->dev_id == dev_id) {
+			ret = -EINVAL;
+			goto finish;
+		}
+	}
+
+	port_array = xhci->usb2_ports;
+	port_num = udev->portnum - 1;
+
+	if (port_num > HCS_MAX_PORTS(xhci->hcs_params1)) {
+		xhci_dbg(xhci, "invalid port number %d\n", udev->portnum);
+		ret = -EINVAL;
+		goto finish;
+	}
+
+	/*
+	 * Test USB 2.0 software LPM.
+	 * FIXME: some xHCI 1.0 hosts may implement a new register to set up
+	 * hardware-controlled USB 2.0 LPM. See section 5.4.11 and 4.23.5.1.1.1
+	 * in the June 2011 errata release.
+	 */
+	xhci_dbg(xhci, "test port %d software LPM\n", port_num);
+	/*
+	 * Set L1 Device Slot and HIRD/BESL.
+	 * Check device's USB 2.0 extension descriptor to determine whether
+	 * HIRD or BESL should be used. See USB 2.0 LPM errata.
+	 */
+	pm_addr = port_array[port_num] + 1;
+	hird = xhci_calculate_hird_besl(xhci, udev);
+	temp = PORT_L1DS(udev->slot_id) | PORT_HIRD(hird);
+	xhci_writel(xhci, temp, pm_addr);
+
+	/* Set port link state to U2(L1) */
+	addr = port_array[port_num];
+	xhci_set_link_state(xhci, port_array, port_num, XDEV_U2);
+
+	/* wait for ACK */
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	msleep(10);
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	/* Check L1 Status */
+	ret = xhci_handshake(xhci, pm_addr,
+			PORT_L1S_MASK, PORT_L1S_SUCCESS, 125);
+	if (ret != -ETIMEDOUT) {
+		/* enter L1 successfully */
+		temp = xhci_readl(xhci, addr);
+		xhci_dbg(xhci, "port %d entered L1 state, port status 0x%x\n",
+				port_num, temp);
+		ret = 0;
+	} else {
+		temp = xhci_readl(xhci, pm_addr);
+		xhci_dbg(xhci, "port %d software lpm failed, L1 status %d\n",
+				port_num, temp & PORT_L1S_MASK);
+		ret = -EINVAL;
+	}
+
+	/* Resume the port */
+	xhci_set_link_state(xhci, port_array, port_num, XDEV_U0);
+
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	msleep(10);
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	/* Clear PLC */
+	xhci_test_and_clear_bit(xhci, port_array, port_num, PORT_PLC);
+
+	/* Check PORTSC to make sure the device is in the right state */
+	if (!ret) {
+		temp = xhci_readl(xhci, addr);
+		xhci_dbg(xhci, "resumed port %d status 0x%x\n",	port_num, temp);
+		if (!(temp & PORT_CONNECT) || !(temp & PORT_PE) ||
+				(temp & PORT_PLS_MASK) != XDEV_U0) {
+			xhci_dbg(xhci, "port L1 resume fail\n");
+			ret = -EINVAL;
+		}
+	}
+
+	if (ret) {
+		/* Insert dev to lpm_failed_devs list */
+		xhci_warn(xhci, "device LPM test failed, may disconnect and "
+				"re-enumerate\n");
+		dev_info = kzalloc(sizeof(struct dev_info), GFP_ATOMIC);
+		if (!dev_info) {
+			ret = -ENOMEM;
+			goto finish;
+		}
+		dev_info->dev_id = dev_id;
+		INIT_LIST_HEAD(&dev_info->list);
+		list_add(&dev_info->list, &xhci->lpm_failed_devs);
+	} else {
+		xhci_ring_device(xhci, udev->slot_id);
+	}
+
+finish:
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return ret;
+}
+
+int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+			struct usb_device *udev, int enable)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	__le32 __iomem	**port_array;
+	__le32 __iomem	*pm_addr;
+	u32		temp;
+	unsigned int	port_num;
+	unsigned long	flags;
+	int		hird;
+
+	if (hcd->speed == HCD_USB3 || !xhci->hw_lpm_support ||
+			!udev->lpm_capable)
+		return -EPERM;
+
+	if (!udev->parent || udev->parent->parent ||
+			udev->descriptor.bDeviceClass == USB_CLASS_HUB)
+		return -EPERM;
+
+	if (udev->usb2_hw_lpm_capable != 1)
+		return -EPERM;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	port_array = xhci->usb2_ports;
+	port_num = udev->portnum - 1;
+	pm_addr = port_array[port_num] + 1;
+	temp = xhci_readl(xhci, pm_addr);
+
+	xhci_dbg(xhci, "%s port %d USB2 hardware LPM\n",
+			enable ? "enable" : "disable", port_num);
+
+	hird = xhci_calculate_hird_besl(xhci, udev);
+
+	if (enable) {
+		temp &= ~PORT_HIRD_MASK;
+		temp |= PORT_HIRD(hird) | PORT_RWE;
+		xhci_writel(xhci, temp, pm_addr);
+		temp = xhci_readl(xhci, pm_addr);
+		temp |= PORT_HLE;
+		xhci_writel(xhci, temp, pm_addr);
+	} else {
+		temp &= ~(PORT_HLE | PORT_RWE | PORT_HIRD_MASK);
+		xhci_writel(xhci, temp, pm_addr);
+	}
+
+	spin_unlock_irqrestore(&xhci->lock, flags);
+	return 0;
+}
+
+int xhci_update_device(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+	int		ret;
+
+	ret = xhci_usb2_software_lpm_test(hcd, udev);
+	if (!ret) {
+		xhci_dbg(xhci, "software LPM test succeed\n");
+		if (xhci->hw_lpm_support == 1) {
+			udev->usb2_hw_lpm_capable = 1;
+			ret = xhci_set_usb2_hardware_lpm(hcd, udev, 1);
+			if (!ret)
+				udev->usb2_hw_lpm_enabled = 1;
+		}
+	}
+
+	return 0;
+}
+
+#else
+
+int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+				struct usb_device *udev, int enable)
+{
+	return 0;
+}
+
+int xhci_update_device(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	return 0;
+}
+
+#endif /* CONFIG_PM_RUNTIME */
+
+/*---------------------- USB 3.0 Link PM functions ------------------------*/
+
+#ifdef CONFIG_PM
+/* Service interval in nanoseconds = 2^(bInterval - 1) * 125us * 1000ns / 1us */
+static unsigned long long xhci_service_interval_to_ns(
+		struct usb_endpoint_descriptor *desc)
+{
+	return (1ULL << (desc->bInterval - 1)) * 125 * 1000;
+}
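+
+/*
+ * Illustrative example: an endpoint with bInterval = 4 has a service
+ * interval of 2^(4-1) * 125us = 1ms, returned here as 1000000ns.
+ */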
+
+static u16 xhci_get_timeout_no_hub_lpm(struct usb_device *udev,
+		enum usb3_link_state state)
+{
+	unsigned long long sel;
+	unsigned long long pel;
+	unsigned int max_sel_pel;
+	char *state_name;
+
+	switch (state) {
+	case USB3_LPM_U1:
+		/* Convert SEL and PEL stored in nanoseconds to microseconds */
+		sel = DIV_ROUND_UP(udev->u1_params.sel, 1000);
+		pel = DIV_ROUND_UP(udev->u1_params.pel, 1000);
+		max_sel_pel = USB3_LPM_MAX_U1_SEL_PEL;
+		state_name = "U1";
+		break;
+	case USB3_LPM_U2:
+		sel = DIV_ROUND_UP(udev->u2_params.sel, 1000);
+		pel = DIV_ROUND_UP(udev->u2_params.pel, 1000);
+		max_sel_pel = USB3_LPM_MAX_U2_SEL_PEL;
+		state_name = "U2";
+		break;
+	default:
+		dev_warn(&udev->dev, "%s: Can't get timeout for non-U1 or U2 state.\n",
+				__func__);
+		return USB3_LPM_DISABLED;
+	}
+
+	if (sel <= max_sel_pel && pel <= max_sel_pel)
+		return USB3_LPM_DEVICE_INITIATED;
+
+	if (sel > max_sel_pel)
+		dev_dbg(&udev->dev, "Device-initiated %s disabled "
+				"due to long SEL %llu ms\n",
+				state_name, sel);
+	else
+		dev_dbg(&udev->dev, "Device-initiated %s disabled "
+				"due to long PEL %llu\n ms",
+				state_name, pel);
+	return USB3_LPM_DISABLED;
+}
+
+/* Returns the hub-encoded U1 timeout value.
+ * The U1 timeout should be the maximum of the following values:
+ *  - For control endpoints, U1 system exit latency (SEL) * 3
+ *  - For bulk endpoints, U1 SEL * 5
+ *  - For interrupt endpoints:
+ *    - Notification EPs, U1 SEL * 3
+ *    - Periodic EPs, max(105% of bInterval, U1 SEL * 2)
+ *  - For isochronous endpoints, max(105% of bInterval, U1 SEL * 2)
+ */
+static u16 xhci_calculate_intel_u1_timeout(struct usb_device *udev,
+		struct usb_endpoint_descriptor *desc)
+{
+	unsigned long long timeout_ns;
+	int ep_type;
+	int intr_type;
+
+	ep_type = usb_endpoint_type(desc);
+	switch (ep_type) {
+	case USB_ENDPOINT_XFER_CONTROL:
+		timeout_ns = udev->u1_params.sel * 3;
+		break;
+	case USB_ENDPOINT_XFER_BULK:
+		timeout_ns = udev->u1_params.sel * 5;
+		break;
+	case USB_ENDPOINT_XFER_INT:
+		intr_type = usb_endpoint_interrupt_type(desc);
+		if (intr_type == USB_ENDPOINT_INTR_NOTIFICATION) {
+			timeout_ns = udev->u1_params.sel * 3;
+			break;
+		}
+		/* Otherwise the calculation is the same as isoc eps */
+	case USB_ENDPOINT_XFER_ISOC:
+		timeout_ns = xhci_service_interval_to_ns(desc);
+		timeout_ns = DIV_ROUND_UP_ULL(timeout_ns * 105, 100);
+		if (timeout_ns < udev->u1_params.sel * 2)
+			timeout_ns = udev->u1_params.sel * 2;
+		break;
+	default:
+		return 0;
+	}
+
+	/* The U1 timeout is encoded in 1us intervals. */
+	timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 1000);
+	/* Don't return a timeout of zero, because that's USB3_LPM_DISABLED. */
+	if (timeout_ns == USB3_LPM_DISABLED)
+		timeout_ns++;
+
+	/* If the necessary timeout value is bigger than what we can set in the
+	 * USB 3.0 hub, we have to disable hub-initiated U1.
+	 */
+	if (timeout_ns <= USB3_LPM_U1_MAX_TIMEOUT)
+		return timeout_ns;
+	dev_dbg(&udev->dev, "Hub-initiated U1 disabled "
+			"due to long timeout %llu ms\n", timeout_ns);
+	return xhci_get_timeout_no_hub_lpm(udev, USB3_LPM_U1);
+}
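+
+/*
+ * Worked example (illustrative values): a bulk endpoint on a device with a
+ * U1 SEL of 400ns gets timeout_ns = 400 * 5 = 2000, which encodes to
+ * DIV_ROUND_UP(2000, 1000) = 2, i.e. a 2us hub-initiated U1 timeout.
+ */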
+
+/* Returns the hub-encoded U2 timeout value.
+ * The U2 timeout should be the maximum of:
+ *  - 10 ms (to avoid the bandwidth impact on the scheduler)
+ *  - largest bInterval of any active periodic endpoint (to avoid going
+ *    into lower power link states between intervals).
+ *  - the U2 Exit Latency of the device
+ */
+static u16 xhci_calculate_intel_u2_timeout(struct usb_device *udev,
+		struct usb_endpoint_descriptor *desc)
+{
+	unsigned long long timeout_ns;
+	unsigned long long u2_del_ns;
+
+	timeout_ns = 10 * 1000 * 1000;
+
+	if ((usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) &&
+			(xhci_service_interval_to_ns(desc) > timeout_ns))
+		timeout_ns = xhci_service_interval_to_ns(desc);
+
+	u2_del_ns = le16_to_cpu(udev->bos->ss_cap->bU2DevExitLat) * 1000ULL;
+	if (u2_del_ns > timeout_ns)
+		timeout_ns = u2_del_ns;
+
+	/* The U2 timeout is encoded in 256us intervals */
+	timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000);
+	/* If the necessary timeout value is bigger than what we can set in the
+	 * USB 3.0 hub, we have to disable hub-initiated U2.
+	 */
+	if (timeout_ns <= USB3_LPM_U2_MAX_TIMEOUT)
+		return timeout_ns;
+	dev_dbg(&udev->dev, "Hub-initiated U2 disabled "
+			"due to long timeout %llu ms\n", timeout_ns);
+	return xhci_get_timeout_no_hub_lpm(udev, USB3_LPM_U2);
+}
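+
+/*
+ * Worked example (illustrative): with no long-interval periodic endpoints
+ * and a small bU2DevExitLat, the 10ms floor dominates and encodes to
+ * DIV_ROUND_UP(10000000, 256000) = 40, i.e. 40 * 256us ~= 10.2ms.
+ */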
+
+static u16 xhci_call_host_update_timeout_for_endpoint(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct usb_endpoint_descriptor *desc,
+		enum usb3_link_state state,
+		u16 *timeout)
+{
+	if (state == USB3_LPM_U1) {
+		if (xhci->quirks & XHCI_INTEL_HOST)
+			return xhci_calculate_intel_u1_timeout(udev, desc);
+	} else {
+		if (xhci->quirks & XHCI_INTEL_HOST)
+			return xhci_calculate_intel_u2_timeout(udev, desc);
+	}
+
+	return USB3_LPM_DISABLED;
+}
+
+static int xhci_update_timeout_for_endpoint(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct usb_endpoint_descriptor *desc,
+		enum usb3_link_state state,
+		u16 *timeout)
+{
+	u16 alt_timeout;
+
+	alt_timeout = xhci_call_host_update_timeout_for_endpoint(xhci, udev,
+		desc, state, timeout);
+
+	/* If we found we can't enable hub-initiated LPM, or
+	 * the U1 or U2 exit latency was too high to allow
+	 * device-initiated LPM as well, just stop searching.
+	 */
+	if (alt_timeout == USB3_LPM_DISABLED ||
+			alt_timeout == USB3_LPM_DEVICE_INITIATED) {
+		*timeout = alt_timeout;
+		return -E2BIG;
+	}
+	if (alt_timeout > *timeout)
+		*timeout = alt_timeout;
+	return 0;
+}
+
+static int xhci_update_timeout_for_interface(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		struct usb_host_interface *alt,
+		enum usb3_link_state state,
+		u16 *timeout)
+{
+	int j;
+
+	for (j = 0; j < alt->desc.bNumEndpoints; j++) {
+		if (xhci_update_timeout_for_endpoint(xhci, udev,
+					&alt->endpoint[j].desc, state, timeout))
+			return -E2BIG;
+	}
+	return 0;
+}
+
+static int xhci_check_intel_tier_policy(struct usb_device *udev,
+		enum usb3_link_state state)
+{
+	struct usb_device *parent;
+	unsigned int num_hubs;
+
+	if (state == USB3_LPM_U2)
+		return 0;
+
+	/* Don't enable U1 if the device is on a 2nd tier hub or lower. */
+	for (parent = udev->parent, num_hubs = 0; parent->parent;
+			parent = parent->parent)
+		num_hubs++;
+
+	if (num_hubs < 2)
+		return 0;
+
+	dev_dbg(&udev->dev, "Disabling U1 link state for device"
+			" below second-tier hub.\n");
+	dev_dbg(&udev->dev, "Plug device into first-tier hub "
+			"to decrease power consumption.\n");
+	return -E2BIG;
+}
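+
+/*
+ * Example (illustrative): a device attached directly to a root port has a
+ * root-hub parent with no parent of its own, so num_hubs stays 0; one
+ * external hub in the path gives num_hubs = 1.  Both cases permit U1.
+ */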
+
+static int xhci_check_tier_policy(struct xhci_hcd *xhci,
+		struct usb_device *udev,
+		enum usb3_link_state state)
+{
+	if (xhci->quirks & XHCI_INTEL_HOST)
+		return xhci_check_intel_tier_policy(udev, state);
+	return -EINVAL;
+}
+
+/* Returns the U1 or U2 timeout that should be enabled.
+ * If the tier check or timeout setting functions return with a non-zero exit
+ * code, that means the timeout value has been finalized and we shouldn't look
+ * at any more endpoints.
+ */
+static u16 xhci_calculate_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct usb_host_config *config;
+	char *state_name;
+	int i;
+	u16 timeout = USB3_LPM_DISABLED;
+
+	if (state == USB3_LPM_U1)
+		state_name = "U1";
+	else if (state == USB3_LPM_U2)
+		state_name = "U2";
+	else {
+		dev_warn(&udev->dev, "Can't enable unknown link state %i\n",
+				state);
+		return timeout;
+	}
+
+	if (xhci_check_tier_policy(xhci, udev, state) < 0)
+		return timeout;
+
+	/* Gather some information about the currently installed configuration
+	 * and alternate interface settings.
+	 */
+	if (xhci_update_timeout_for_endpoint(xhci, udev, &udev->ep0.desc,
+			state, &timeout))
+		return timeout;
+
+	config = udev->actconfig;
+	if (!config)
+		return timeout;
+
+	for (i = 0; i < USB_MAXINTERFACES; i++) {
+		struct usb_driver *driver;
+		struct usb_interface *intf = config->interface[i];
+
+		if (!intf)
+			continue;
+
+		/* Check if any currently bound drivers want hub-initiated LPM
+		 * disabled.
+		 */
+		if (intf->dev.driver) {
+			driver = to_usb_driver(intf->dev.driver);
+			if (driver && driver->disable_hub_initiated_lpm) {
+				dev_dbg(&udev->dev, "Hub-initiated %s disabled "
+						"at request of driver %s\n",
+						state_name, driver->name);
+				return xhci_get_timeout_no_hub_lpm(udev, state);
+			}
+		}
+
+		/* Not sure how this could happen... */
+		if (!intf->cur_altsetting)
+			continue;
+
+		if (xhci_update_timeout_for_interface(xhci, udev,
+					intf->cur_altsetting,
+					state, &timeout))
+			return timeout;
+	}
+	return timeout;
+}
+
+/*
+ * Issue an Evaluate Context command to change the Maximum Exit Latency in the
+ * slot context.  If that succeeds, store the new MEL in the xhci_virt_device.
+ */
+static int xhci_change_max_exit_latency(struct xhci_hcd *xhci,
+			struct usb_device *udev, u16 max_exit_latency)
+{
+	struct xhci_virt_device *virt_dev;
+	struct xhci_command *command;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_slot_ctx *slot_ctx;
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	if (max_exit_latency == xhci->devs[udev->slot_id]->current_mel) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return 0;
+	}
+
+	/* Attempt to issue an Evaluate Context command to change the MEL. */
+	virt_dev = xhci->devs[udev->slot_id];
+	command = xhci->lpm_command;
+	xhci_slot_copy(xhci, command->in_ctx, virt_dev->out_ctx);
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, command->in_ctx);
+	ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);
+	slot_ctx = xhci_get_slot_ctx(xhci, command->in_ctx);
+	slot_ctx->dev_info2 &= cpu_to_le32(~((u32) MAX_EXIT));
+	slot_ctx->dev_info2 |= cpu_to_le32(max_exit_latency);
+
+	xhci_dbg(xhci, "Set up evaluate context for LPM MEL change.\n");
+	xhci_dbg(xhci, "Slot %u Input Context:\n", udev->slot_id);
+	xhci_dbg_ctx(xhci, command->in_ctx, 0);
+
+	/* Issue and wait for the evaluate context command. */
+	ret = xhci_configure_endpoint(xhci, udev, command,
+			true, true);
+	xhci_dbg(xhci, "Slot %u Output Context:\n", udev->slot_id);
+	xhci_dbg_ctx(xhci, virt_dev->out_ctx, 0);
+
+	if (!ret) {
+		spin_lock_irqsave(&xhci->lock, flags);
+		virt_dev->current_mel = max_exit_latency;
+		spin_unlock_irqrestore(&xhci->lock, flags);
+	}
+	return ret;
+}
+
+static int calculate_max_exit_latency(struct usb_device *udev,
+		enum usb3_link_state state_changed,
+		u16 hub_encoded_timeout)
+{
+	unsigned long long u1_mel_us = 0;
+	unsigned long long u2_mel_us = 0;
+	unsigned long long mel_us = 0;
+	bool disabling_u1;
+	bool disabling_u2;
+	bool enabling_u1;
+	bool enabling_u2;
+
+	disabling_u1 = (state_changed == USB3_LPM_U1 &&
+			hub_encoded_timeout == USB3_LPM_DISABLED);
+	disabling_u2 = (state_changed == USB3_LPM_U2 &&
+			hub_encoded_timeout == USB3_LPM_DISABLED);
+
+	enabling_u1 = (state_changed == USB3_LPM_U1 &&
+			hub_encoded_timeout != USB3_LPM_DISABLED);
+	enabling_u2 = (state_changed == USB3_LPM_U2 &&
+			hub_encoded_timeout != USB3_LPM_DISABLED);
+
+	/* If U1 was already enabled and we're not disabling it,
+	 * or we're going to enable U1, account for the U1 max exit latency.
+	 */
+	if ((udev->u1_params.timeout != USB3_LPM_DISABLED && !disabling_u1) ||
+			enabling_u1)
+		u1_mel_us = DIV_ROUND_UP(udev->u1_params.mel, 1000);
+	if ((udev->u2_params.timeout != USB3_LPM_DISABLED && !disabling_u2) ||
+			enabling_u2)
+		u2_mel_us = DIV_ROUND_UP(udev->u2_params.mel, 1000);
+
+	if (u1_mel_us > u2_mel_us)
+		mel_us = u1_mel_us;
+	else
+		mel_us = u2_mel_us;
+	/* xHCI host controller max exit latency field is only 16 bits wide. */
+	if (mel_us > MAX_EXIT) {
+		dev_warn(&udev->dev, "Link PM max exit latency of %lluus "
+				"is too big.\n", mel_us);
+		return -E2BIG;
+	}
+	return mel_us;
+}
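+
+/*
+ * Worked example (illustrative): enabling U1 with u1_params.mel = 3000ns
+ * while U2 stays disabled gives u1_mel_us = DIV_ROUND_UP(3000, 1000) = 3,
+ * so a Max Exit Latency of 3us is programmed into the slot context.
+ */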
+
+/* Returns the USB3 hub-encoded value for the U1/U2 timeout. */
+int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state)
+{
+	struct xhci_hcd	*xhci;
+	u16 hub_encoded_timeout;
+	int mel;
+	int ret;
+
+	xhci = hcd_to_xhci(hcd);
+	/* The LPM timeout values are pretty host-controller specific, so don't
+	 * enable hub-initiated timeouts unless the vendor has provided
+	 * information about their timeout algorithm.
+	 */
+	if (!xhci || !(xhci->quirks & XHCI_LPM_SUPPORT) ||
+			!xhci->devs[udev->slot_id])
+		return USB3_LPM_DISABLED;
+
+	hub_encoded_timeout = xhci_calculate_lpm_timeout(hcd, udev, state);
+	mel = calculate_max_exit_latency(udev, state, hub_encoded_timeout);
+	if (mel < 0) {
+		/* Max Exit Latency is too big, disable LPM. */
+		hub_encoded_timeout = USB3_LPM_DISABLED;
+		mel = 0;
+	}
+
+	ret = xhci_change_max_exit_latency(xhci, udev, mel);
+	if (ret)
+		return ret;
+	return hub_encoded_timeout;
+}
+
+int xhci_disable_usb3_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state)
+{
+	struct xhci_hcd	*xhci;
+	u16 mel;
+	int ret;
+
+	xhci = hcd_to_xhci(hcd);
+	if (!xhci || !(xhci->quirks & XHCI_LPM_SUPPORT) ||
+			!xhci->devs[udev->slot_id])
+		return 0;
+
+	mel = calculate_max_exit_latency(udev, state, USB3_LPM_DISABLED);
+	ret = xhci_change_max_exit_latency(xhci, udev, mel);
+	if (ret)
+		return ret;
+	return 0;
+}
+#else /* CONFIG_PM */
+
+int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state)
+{
+	return USB3_LPM_DISABLED;
+}
+
+int xhci_disable_usb3_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state)
+{
+	return 0;
+}
+#endif	/* CONFIG_PM */
+
+/*-------------------------------------------------------------------------*/
+
+/* Once a hub descriptor is fetched for a device, we need to update the xHC's
+ * internal data structures for the device.
+ */
+int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+			struct usb_tt *tt, gfp_t mem_flags)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct xhci_virt_device *vdev;
+	struct xhci_command *config_cmd;
+	struct xhci_input_control_ctx *ctrl_ctx;
+	struct xhci_slot_ctx *slot_ctx;
+	unsigned long flags;
+	unsigned think_time;
+	int ret;
+
+	/* Ignore root hubs */
+	if (!hdev->parent)
+		return 0;
+
+	vdev = xhci->devs[hdev->slot_id];
+	if (!vdev) {
+		xhci_warn(xhci, "Cannot update hub desc for unknown device.\n");
+		return -EINVAL;
+	}
+	config_cmd = xhci_alloc_command(xhci, true, true, mem_flags);
+	if (!config_cmd) {
+		xhci_dbg(xhci, "Could not allocate xHCI command structure.\n");
+		return -ENOMEM;
+	}
+
+	spin_lock_irqsave(&xhci->lock, flags);
+	if (hdev->speed == USB_SPEED_HIGH &&
+			xhci_alloc_tt_info(xhci, vdev, hdev, tt, GFP_ATOMIC)) {
+		xhci_dbg(xhci, "Could not allocate xHCI TT structure.\n");
+		xhci_free_command(xhci, config_cmd);
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		return -ENOMEM;
+	}
+
+	xhci_slot_copy(xhci, config_cmd->in_ctx, vdev->out_ctx);
+	ctrl_ctx = xhci_get_input_control_ctx(xhci, config_cmd->in_ctx);
+	ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);
+	slot_ctx = xhci_get_slot_ctx(xhci, config_cmd->in_ctx);
+	slot_ctx->dev_info |= cpu_to_le32(DEV_HUB);
+	if (tt->multi)
+		slot_ctx->dev_info |= cpu_to_le32(DEV_MTT);
+	if (xhci->hci_version > 0x95) {
+		xhci_dbg(xhci, "xHCI version %x needs hub "
+				"TT think time and number of ports\n",
+				(unsigned int) xhci->hci_version);
+		slot_ctx->dev_info2 |= cpu_to_le32(XHCI_MAX_PORTS(hdev->maxchild));
+		/* Set TT think time - convert from ns to FS bit times.
+		 * 0 = 8 FS bit times, 1 = 16 FS bit times,
+		 * 2 = 24 FS bit times, 3 = 32 FS bit times.
+		 *
+		 * xHCI 1.0: this field shall be 0 if the device is not a
+		 * High-speed hub.
+		 */
+		think_time = tt->think_time;
+		if (think_time != 0)
+			think_time = (think_time / 666) - 1;
+		if (xhci->hci_version < 0x100 || hdev->speed == USB_SPEED_HIGH)
+			slot_ctx->tt_info |=
+				cpu_to_le32(TT_THINK_TIME(think_time));
+	} else {
+		xhci_dbg(xhci, "xHCI version %x doesn't need hub "
+				"TT think time or number of ports\n",
+				(unsigned int) xhci->hci_version);
+	}
+	slot_ctx->dev_state = 0;
+	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	xhci_dbg(xhci, "Set up %s for hub device.\n",
+			(xhci->hci_version > 0x95) ?
+			"configure endpoint" : "evaluate context");
+	xhci_dbg(xhci, "Slot %u Input Context:\n", hdev->slot_id);
+	xhci_dbg_ctx(xhci, config_cmd->in_ctx, 0);
+
+	/* Issue and wait for the configure endpoint or
+	 * evaluate context command.
+	 */
+	if (xhci->hci_version > 0x95)
+		ret = xhci_configure_endpoint(xhci, hdev, config_cmd,
+				false, false);
+	else
+		ret = xhci_configure_endpoint(xhci, hdev, config_cmd,
+				true, false);
+
+	xhci_dbg(xhci, "Slot %u Output Context:\n", hdev->slot_id);
+	xhci_dbg_ctx(xhci, vdev->out_ctx, 0);
+
+	xhci_free_command(xhci, config_cmd);
+	return ret;
+}
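+
+/*
+ * Think-time encoding example (illustrative, assuming tt->think_time is in
+ * nanoseconds as in the Linux hub driver): 666ns (8 FS bit times) encodes
+ * as (666 / 666) - 1 = 0, and 2664ns (32 FS bit times) as 3.
+ */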
+
+int xhci_get_frame(struct usb_hcd *hcd)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	/* EHCI mods by the periodic size.  Why? */
+	return xhci_readl(xhci, &xhci->run_regs->microframe_index) >> 3;
+}
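+
+/*
+ * MFINDEX counts 125us microframes; the shift by 3 divides by 8 to give
+ * the 1ms frame number (e.g. microframe 800 is frame 100).
+ */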
+
+int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+{
+	struct xhci_hcd		*xhci;
+	struct device		*dev = hcd->self.controller;
+	int			retval;
+	u32			temp;
+
+	/* Accept arbitrarily long scatter-gather lists */
+	hcd->self.sg_tablesize = ~0;
+	/* XHCI controllers don't stop the ep queue on short packets :| */
+	hcd->self.no_stop_on_short = 1;
+
+	if (usb_hcd_is_primary_hcd(hcd)) {
+		xhci = kzalloc(sizeof(struct xhci_hcd), GFP_KERNEL);
+		if (!xhci)
+			return -ENOMEM;
+		*((struct xhci_hcd **) hcd->hcd_priv) = xhci;
+		xhci->main_hcd = hcd;
+		/* Mark the first roothub as being USB 2.0.
+		 * The xHCI driver will register the USB 3.0 roothub.
+		 */
+		hcd->speed = HCD_USB2;
+		hcd->self.root_hub->speed = USB_SPEED_HIGH;
+		/*
+		 * USB 2.0 roothub under xHCI has an integrated TT,
+		 * (rate matching hub) as opposed to having an OHCI/UHCI
+		 * companion controller.
+		 */
+		hcd->has_tt = 1;
+	} else {
+		/* xHCI private pointer was set in xhci_pci_probe for the second
+		 * registered roothub.
+		 */
+		xhci = hcd_to_xhci(hcd);
+		temp = xhci_readl(xhci, &xhci->cap_regs->hcc_params);
+		if (HCC_64BIT_ADDR(temp)) {
+			xhci_dbg(xhci, "Enabling 64-bit DMA addresses.\n");
+			dma_set_mask(hcd->self.controller, DMA_BIT_MASK(64));
+		} else {
+			dma_set_mask(hcd->self.controller, DMA_BIT_MASK(32));
+		}
+		return 0;
+	}
+
+	xhci->cap_regs = hcd->regs;
+	xhci->op_regs = hcd->regs +
+		HC_LENGTH(xhci_readl(xhci, &xhci->cap_regs->hc_capbase));
+	xhci->run_regs = hcd->regs +
+		(xhci_readl(xhci, &xhci->cap_regs->run_regs_off) & RTSOFF_MASK);
+	/* Cache read-only capability registers */
+	xhci->hcs_params1 = xhci_readl(xhci, &xhci->cap_regs->hcs_params1);
+	xhci->hcs_params2 = xhci_readl(xhci, &xhci->cap_regs->hcs_params2);
+	xhci->hcs_params3 = xhci_readl(xhci, &xhci->cap_regs->hcs_params3);
+	xhci->hcc_params = xhci_readl(xhci, &xhci->cap_regs->hc_capbase);
+	xhci->hci_version = HC_VERSION(xhci->hcc_params);
+	xhci->hcc_params = xhci_readl(xhci, &xhci->cap_regs->hcc_params);
+	xhci_print_registers(xhci);
+
+	get_quirks(dev, xhci);
+
+	/* Make sure the HC is halted. */
+	retval = xhci_halt(xhci);
+	if (retval)
+		goto error;
+
+	xhci_dbg(xhci, "Resetting HCD\n");
+	/* Reset the internal HC memory state and registers. */
+	retval = xhci_reset(xhci);
+	if (retval)
+		goto error;
+	xhci_dbg(xhci, "Reset complete\n");
+
+	temp = xhci_readl(xhci, &xhci->cap_regs->hcc_params);
+	if (HCC_64BIT_ADDR(temp)) {
+		xhci_dbg(xhci, "Enabling 64-bit DMA addresses.\n");
+		dma_set_mask(hcd->self.controller, DMA_BIT_MASK(64));
+	} else {
+		dma_set_mask(hcd->self.controller, DMA_BIT_MASK(32));
+	}
+
+	xhci_dbg(xhci, "Calling HCD init\n");
+	/* Initialize HCD and host controller data structures. */
+	retval = xhci_init(hcd);
+	if (retval)
+		goto error;
+	xhci_dbg(xhci, "Called HCD init\n");
+	return 0;
+error:
+	kfree(xhci);
+	return retval;
+}
+
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_LICENSE("GPL");
+
+static int __init xhci_hcd_init(void)
+{
+	int retval;
+
+	retval = xhci_register_pci();
+	if (retval < 0) {
+		printk(KERN_DEBUG "Problem registering PCI driver.");
+		return retval;
+	}
+	retval = xhci_register_plat();
+	if (retval < 0) {
+		printk(KERN_DEBUG "Problem registering platform driver.");
+		goto unreg_pci;
+	}
+	/*
+	 * Check the compiler generated sizes of structures that must be laid
+	 * out in specific ways for hardware access.
+	 */
+	BUILD_BUG_ON(sizeof(struct xhci_doorbell_array) != 256*32/8);
+	BUILD_BUG_ON(sizeof(struct xhci_slot_ctx) != 8*32/8);
+	BUILD_BUG_ON(sizeof(struct xhci_ep_ctx) != 8*32/8);
+	/* xhci_device_control has eight fields, and also
+	 * embeds one xhci_slot_ctx and 31 xhci_ep_ctx
+	 */
+	BUILD_BUG_ON(sizeof(struct xhci_stream_ctx) != 4*32/8);
+	BUILD_BUG_ON(sizeof(union xhci_trb) != 4*32/8);
+	BUILD_BUG_ON(sizeof(struct xhci_erst_entry) != 4*32/8);
+	BUILD_BUG_ON(sizeof(struct xhci_cap_regs) != 7*32/8);
+	BUILD_BUG_ON(sizeof(struct xhci_intr_reg) != 8*32/8);
+	/* xhci_run_regs has eight fields and embeds 128 xhci_intr_regs */
+	BUILD_BUG_ON(sizeof(struct xhci_run_regs) != (8+8*128)*32/8);
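+	/* e.g. the doorbell array check above works out to 256 * 32 / 8 =
+	 * 1024 bytes; each expression is (number of 32-bit fields * 32) / 8.
+	 */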
+	return 0;
+unreg_pci:
+	xhci_unregister_pci();
+	return retval;
+}
+module_init(xhci_hcd_init);
+
+static void __exit xhci_hcd_cleanup(void)
+{
+	xhci_unregister_pci();
+	xhci_unregister_plat();
+}
+module_exit(xhci_hcd_cleanup);
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
new file mode 100644
index 0000000..29c978e
--- /dev/null
+++ b/drivers/usb/host/xhci.h
@@ -0,0 +1,1856 @@
+/*
+ * xHCI host controller driver
+ *
+ * Copyright (C) 2008 Intel Corp.
+ *
+ * Author: Sarah Sharp
+ * Some code borrowed from the Linux EHCI driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef __LINUX_XHCI_HCD_H
+#define __LINUX_XHCI_HCD_H
+
+#include <linux/usb.h>
+#include <linux/timer.h>
+#include <linux/kernel.h>
+#include <linux/usb/hcd.h>
+
+/* Code sharing between pci-quirks and xhci hcd */
+#include	"xhci-ext-caps.h"
+#include "pci-quirks.h"
+
+/* xHCI PCI Configuration Registers */
+#define XHCI_SBRN_OFFSET	(0x60)
+
+/* Max number of USB devices for any host controller - limit in section 6.1 */
+#define MAX_HC_SLOTS		256
+/* Section 5.3.3 - MaxPorts */
+#define MAX_HC_PORTS		127
+
+/*
+ * xHCI register interface.
+ * This corresponds to the eXtensible Host Controller Interface (xHCI)
+ * Revision 0.95 specification
+ */
+
+/**
+ * struct xhci_cap_regs - xHCI Host Controller Capability Registers.
+ * @hc_capbase:		length of the capabilities register and HC version number
+ * @hcs_params1:	HCSPARAMS1 - Structural Parameters 1
+ * @hcs_params2:	HCSPARAMS2 - Structural Parameters 2
+ * @hcs_params3:	HCSPARAMS3 - Structural Parameters 3
+ * @hcc_params:		HCCPARAMS - Capability Parameters
+ * @db_off:		DBOFF - Doorbell array offset
+ * @run_regs_off:	RTSOFF - Runtime register space offset
+ */
+struct xhci_cap_regs {
+	__le32	hc_capbase;
+	__le32	hcs_params1;
+	__le32	hcs_params2;
+	__le32	hcs_params3;
+	__le32	hcc_params;
+	__le32	db_off;
+	__le32	run_regs_off;
+	/* Reserved up to (CAPLENGTH - 0x1C) */
+};
+
+/* hc_capbase bitmasks */
+/* bits 7:0 - how long is the Capabilities register */
+#define HC_LENGTH(p)		XHCI_HC_LENGTH(p)
+/* bits 31:16	*/
+#define HC_VERSION(p)		(((p) >> 16) & 0xffff)
+
+/* HCSPARAMS1 - hcs_params1 - bitmasks */
+/* bits 0:7, Max Device Slots */
+#define HCS_MAX_SLOTS(p)	(((p) >> 0) & 0xff)
+#define HCS_SLOTS_MASK		0xff
+/* bits 8:18, Max Interrupters */
+#define HCS_MAX_INTRS(p)	(((p) >> 8) & 0x7ff)
+/* bits 24:31, Max Ports - max value is 0x7F = 127 ports */
+#define HCS_MAX_PORTS(p)	(((p) >> 24) & 0x7f)
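+/*
+ * Illustrative decode: hcs_params1 = 0x08000820 would describe a controller
+ * with 32 device slots (HCS_MAX_SLOTS), 8 interrupters (HCS_MAX_INTRS) and
+ * 8 root hub ports (HCS_MAX_PORTS).
+ */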
+
+/* HCSPARAMS2 - hcs_params2 - bitmasks */
+/* bits 0:3, frames or uframes that SW needs to queue transactions
+ * ahead of the HW to meet periodic deadlines */
+#define HCS_IST(p)		(((p) >> 0) & 0xf)
+/* bits 4:7, max number of Event Ring segments */
+#define HCS_ERST_MAX(p)		(((p) >> 4) & 0xf)
+/* bit 26 Scratchpad restore - for save/restore HW state - not used yet */
+/* bits 27:31 number of Scratchpad buffers SW must allocate for the HW */
+#define HCS_MAX_SCRATCHPAD(p)   (((p) >> 27) & 0x1f)
+
+/* HCSPARAMS3 - hcs_params3 - bitmasks */
+/* bits 0:7, Max U1 to U0 latency for the roothub ports */
+#define HCS_U1_LATENCY(p)	(((p) >> 0) & 0xff)
+/* bits 16:31, Max U2 to U0 latency for the roothub ports */
+#define HCS_U2_LATENCY(p)	(((p) >> 16) & 0xffff)
+
+/* HCCPARAMS - hcc_params - bitmasks */
+/* true: HC can use 64-bit address pointers */
+#define HCC_64BIT_ADDR(p)	((p) & (1 << 0))
+/* true: HC can do bandwidth negotiation */
+#define HCC_BANDWIDTH_NEG(p)	((p) & (1 << 1))
+/* true: HC uses 64-byte Device Context structures
+ * FIXME 64-byte context structures aren't supported yet.
+ */
+#define HCC_64BYTE_CONTEXT(p)	((p) & (1 << 2))
+/* true: HC has port power switches */
+#define HCC_PPC(p)		((p) & (1 << 3))
+/* true: HC has port indicators */
+#define HCS_INDICATOR(p)	((p) & (1 << 4))
+/* true: HC has Light HC Reset Capability */
+#define HCC_LIGHT_RESET(p)	((p) & (1 << 5))
+/* true: HC supports latency tolerance messaging */
+#define HCC_LTC(p)		((p) & (1 << 6))
+/* true: no secondary Stream ID Support */
+#define HCC_NSS(p)		((p) & (1 << 7))
+/* Max size for Primary Stream Arrays - 2^(n+1), where n is bits 12:15 */
+#define HCC_MAX_PSA(p)		(1 << ((((p) >> 12) & 0xf) + 1))
+/* Extended Capabilities pointer from PCI base - section 5.3.6 */
+#define HCC_EXT_CAPS(p)		XHCI_HCC_EXT_CAPS(p)
+
+/* db_off bitmask - bits 0:1 reserved */
+#define	DBOFF_MASK	(~0x3)
+
+/* run_regs_off bitmask - bits 0:4 reserved */
+#define	RTSOFF_MASK	(~0x1f)
+
+
+/* Number of registers per port */
+#define	NUM_PORT_REGS	4
+
+/**
+ * struct xhci_op_regs - xHCI Host Controller Operational Registers.
+ * @command:		USBCMD - xHC command register
+ * @status:		USBSTS - xHC status register
+ * @page_size:		This indicates the page size that the host controller
+ * 			supports.  If bit n is set, the HC supports a page size
+ * 			of 2^(n+12), up to a 128MB page size.
+ * 			4K is the minimum page size.
+ * @cmd_ring:		CRP - 64-bit Command Ring Pointer
+ * @dcbaa_ptr:		DCBAAP - 64-bit Device Context Base Address Array Pointer
+ * @config_reg:		CONFIG - Configure Register
+ * @port_status_base:	PORTSCn - base address for Port Status and Control
+ * 			Each port has a Port Status and Control register,
+ * 			followed by a Port Power Management Status and Control
+ * 			register, a Port Link Info register, and a reserved
+ * 			register.
+ * @port_power_base:	PORTPMSCn - base address for
+ * 			Port Power Management Status and Control
+ * @port_link_base:	PORTLIn - base address for Port Link Info (current
+ * 			Link PM state and control) for USB 2.1 and USB 3.0
+ * 			devices.
+ */
+struct xhci_op_regs {
+	__le32	command;
+	__le32	status;
+	__le32	page_size;
+	__le32	reserved1;
+	__le32	reserved2;
+	__le32	dev_notification;
+	__le64	cmd_ring;
+	/* rsvd: offset 0x20-2F */
+	__le32	reserved3[4];
+	__le64	dcbaa_ptr;
+	__le32	config_reg;
+	/* rsvd: offset 0x3C-3FF */
+	__le32	reserved4[241];
+	/* port 1 registers, which serve as a base address for other ports */
+	__le32	port_status_base;
+	__le32	port_power_base;
+	__le32	port_link_base;
+	__le32	reserved5;
+	/* registers for ports 2-255 */
+	__le32	reserved6[NUM_PORT_REGS*254];
+};
+
+/* USBCMD - USB command - command bitmasks */
+/* start/stop HC execution - do not write unless HC is halted*/
+#define CMD_RUN		XHCI_CMD_RUN
+/* Reset HC - resets internal HC state machine and all registers (except
+ * PCI config regs).  HC does NOT drive a USB reset on the downstream ports.
+ * The xHCI driver must reinitialize the xHC after setting this bit.
+ */
+#define CMD_RESET	(1 << 1)
+/* Event Interrupt Enable - a '1' allows interrupts from the host controller */
+#define CMD_EIE		XHCI_CMD_EIE
+/* Host System Error Interrupt Enable - get out-of-band signal for HC errors */
+#define CMD_HSEIE	XHCI_CMD_HSEIE
+/* bits 4:6 are reserved (and should be preserved on writes). */
+/* light reset (port status stays unchanged) - reset completed when this is 0 */
+#define CMD_LRESET	(1 << 7)
+/* host controller save/restore state. */
+#define CMD_CSS		(1 << 8)
+#define CMD_CRS		(1 << 9)
+/* Enable Wrap Event - '1' means xHC generates an event when MFINDEX wraps. */
+#define CMD_EWE		XHCI_CMD_EWE
+/* MFINDEX power management - '1' means xHC can stop MFINDEX counter if all root
+ * hubs are in U3 (selective suspend), disconnect, disabled, or powered-off.
+ * '0' means the xHC can power it off if all ports are in the disconnect,
+ * disabled, or powered-off state.
+ */
+#define CMD_PM_INDEX	(1 << 11)
+/* bits 12:31 are reserved (and should be preserved on writes). */
+
+/* IMAN - Interrupt Management Register */
+#define IMAN_IE		(1 << 1)
+#define IMAN_IP		(1 << 0)
+
+/* USBSTS - USB status - status bitmasks */
+/* HC not running - set to 1 when run/stop bit is cleared. */
+#define STS_HALT	XHCI_STS_HALT
+/* serious error, e.g. PCI parity error.  The HC will clear the run/stop bit. */
+#define STS_FATAL	(1 << 2)
+/* event interrupt - clear this prior to clearing any IP flags in IR set*/
+#define STS_EINT	(1 << 3)
+/* port change detect */
+#define STS_PORT	(1 << 4)
+/* bits 5:7 reserved and zeroed */
+/* save state status - '1' means xHC is saving state */
+#define STS_SAVE	(1 << 8)
+/* restore state status - '1' means xHC is restoring state */
+#define STS_RESTORE	(1 << 9)
+/* true: save or restore error */
+#define STS_SRE		(1 << 10)
+/* true: Controller Not Ready to accept doorbell or op reg writes after reset */
+#define STS_CNR		XHCI_STS_CNR
+/* true: internal Host Controller Error - SW needs to reset and reinitialize */
+#define STS_HCE		(1 << 12)
+/* bits 13:31 reserved and should be preserved */
+
+/*
+ * DNCTRL - Device Notification Control Register - dev_notification bitmasks
+ * Generate a device notification event when the HC sees a transaction with a
+ * notification type that matches a bit set in this bit field.
+ */
+#define	DEV_NOTE_MASK		(0xffff)
+#define ENABLE_DEV_NOTE(x)	(1 << (x))
+/* Most of the device notification types should only be used for debug.
+ * SW does need to pay attention to function wake notifications.
+ */
+#define	DEV_NOTE_FWAKE		ENABLE_DEV_NOTE(1)
+
+/* CRCR - Command Ring Control Register - cmd_ring bitmasks */
+/* bit 0 is the command ring cycle state */
+/* stop ring operation after completion of the currently executing command */
+#define CMD_RING_PAUSE		(1 << 1)
+/* stop ring immediately - abort the currently executing command */
+#define CMD_RING_ABORT		(1 << 2)
+/* true: command ring is running */
+#define CMD_RING_RUNNING	(1 << 3)
+/* bits 4:5 reserved and should be preserved */
+/* Command Ring pointer - bit mask for the lower 32 bits. */
+#define CMD_RING_RSVD_BITS	(0x3f)
+
+/* CONFIG - Configure Register - config_reg bitmasks */
+/* bits 0:7 - maximum number of device slots enabled (NumSlotsEn) */
+#define MAX_DEVS(p)	((p) & 0xff)
+/* bits 8:31 - reserved and should be preserved */
+
+/* PORTSC - Port Status and Control Register - port_status_base bitmasks */
+/* true: device connected */
+#define PORT_CONNECT	(1 << 0)
+/* true: port enabled */
+#define PORT_PE		(1 << 1)
+/* bit 2 reserved and zeroed */
+/* true: port has an over-current condition */
+#define PORT_OC		(1 << 3)
+/* true: port reset signaling asserted */
+#define PORT_RESET	(1 << 4)
+/* Port Link State - bits 5:8
+ * A read gives the current link PM state of the port,
+ * a write with Link State Write Strobe set sets the link state.
+ */
+#define PORT_PLS_MASK	(0xf << 5)
+#define XDEV_U0		(0x0 << 5)
+#define XDEV_U2		(0x2 << 5)
+#define XDEV_U3		(0x3 << 5)
+#define XDEV_RESUME	(0xf << 5)
+/* true: port has power (see HCC_PPC) */
+#define PORT_POWER	(1 << 9)
+/* bits 10:13 indicate device speed:
+ * 0 - undefined speed - port hasn't been initialized by a reset yet
+ * 1 - full speed
+ * 2 - low speed
+ * 3 - high speed
+ * 4 - super speed
+ * 5-15 reserved
+ */
+#define DEV_SPEED_MASK		(0xf << 10)
+#define	XDEV_FS			(0x1 << 10)
+#define	XDEV_LS			(0x2 << 10)
+#define	XDEV_HS			(0x3 << 10)
+#define	XDEV_SS			(0x4 << 10)
+#define DEV_UNDEFSPEED(p)	(((p) & DEV_SPEED_MASK) == (0x0<<10))
+#define DEV_FULLSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_FS)
+#define DEV_LOWSPEED(p)		(((p) & DEV_SPEED_MASK) == XDEV_LS)
+#define DEV_HIGHSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_HS)
+#define DEV_SUPERSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_SS)
+/* Bits 20:23 in the Slot Context are the speed for the device */
+#define	SLOT_SPEED_FS		(XDEV_FS << 10)
+#define	SLOT_SPEED_LS		(XDEV_LS << 10)
+#define	SLOT_SPEED_HS		(XDEV_HS << 10)
+#define	SLOT_SPEED_SS		(XDEV_SS << 10)
+/* Port Indicator Control */
+#define PORT_LED_OFF	(0 << 14)
+#define PORT_LED_AMBER	(1 << 14)
+#define PORT_LED_GREEN	(2 << 14)
+#define PORT_LED_MASK	(3 << 14)
+/* Port Link State Write Strobe - set this when changing link state */
+#define PORT_LINK_STROBE	(1 << 16)
+/* true: connect status change */
+#define PORT_CSC	(1 << 17)
+/* true: port enable change */
+#define PORT_PEC	(1 << 18)
+/* true: warm reset for a USB 3.0 device is done.  A "hot" reset puts the port
+ * into an enabled state, and the device into the default state.  A "warm" reset
+ * also resets the link, forcing the device through the link training sequence.
+ * SW can also look at the Port Reset register to see when warm reset is done.
+ */
+#define PORT_WRC	(1 << 19)
+/* true: over-current change */
+#define PORT_OCC	(1 << 20)
+/* true: reset change - 1 to 0 transition of PORT_RESET */
+#define PORT_RC		(1 << 21)
+/* port link status change - set on some port link state transitions:
+ *  Transition				Reason
+ *  ------------------------------------------------------------------------------
+ *  - U3 to Resume			Wakeup signaling from a device
+ *  - Resume to Recovery to U0		USB 3.0 device resume
+ *  - Resume to U0			USB 2.0 device resume
+ *  - U3 to Recovery to U0		Software resume of USB 3.0 device complete
+ *  - U3 to U0				Software resume of USB 2.0 device complete
+ *  - U2 to U0				L1 resume of USB 2.1 device complete
+ *  - U0 to U0 (???)			L1 entry rejection by USB 2.1 device
+ *  - U0 to disabled			L1 entry error with USB 2.1 device
+ *  - Any state to inactive		Error on USB 3.0 port
+ */
+#define PORT_PLC	(1 << 22)
+/* port configure error change - port failed to configure its link partner */
+#define PORT_CEC	(1 << 23)
+/* Cold Attach Status - xHC can set this bit to report device attached during
+ * Sx state. Warm port reset should be performed to clear this bit and move port
+ * to connected state.
+ */
+#define PORT_CAS	(1 << 24)
+/* wake on connect (enable) */
+#define PORT_WKCONN_E	(1 << 25)
+/* wake on disconnect (enable) */
+#define PORT_WKDISC_E	(1 << 26)
+/* wake on over-current (enable) */
+#define PORT_WKOC_E	(1 << 27)
+/* bits 28:29 reserved */
+/* true: device is removable - for USB 3.0 roothub emulation */
+#define PORT_DEV_REMOVE	(1 << 30)
+/* Initiate a warm port reset - complete when PORT_WRC is '1' */
+#define PORT_WR		(1 << 31)
+
+/* We mark duplicate entries with -1 */
+#define DUPLICATE_ENTRY ((u8)(-1))
+
+/* Port Power Management Status and Control - port_power_base bitmasks */
+/* Inactivity timer value for transitions into U1, in microseconds.
+ * Timeout can be up to 127us.  0xFF means an infinite timeout.
+ */
+#define PORT_U1_TIMEOUT(p)	((p) & 0xff)
+#define PORT_U1_TIMEOUT_MASK	0xff
+/* Inactivity timer value for transitions into U2 */
+#define PORT_U2_TIMEOUT(p)	(((p) & 0xff) << 8)
+#define PORT_U2_TIMEOUT_MASK	(0xff << 8)
+/* Bits 24:31 for port testing */
+
+/* USB2 Protocol PORTSPMSC */
+#define	PORT_L1S_MASK		7
+#define	PORT_L1S_SUCCESS	1
+#define	PORT_RWE		(1 << 3)
+#define	PORT_HIRD(p)		(((p) & 0xf) << 4)
+#define	PORT_HIRD_MASK		(0xf << 4)
+#define	PORT_L1DS(p)		(((p) & 0xff) << 8)
+#define	PORT_HLE		(1 << 16)
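+/*
+ * Illustrative example: the software LPM test builds its PORTPMSC write as
+ * PORT_L1DS(slot_id) | PORT_HIRD(hird); for slot 1 and HIRD 4 that is
+ * (1 << 8) | (4 << 4) = 0x140.
+ */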
+
+/**
+ * struct xhci_intr_reg - Interrupt Register Set
+ * @irq_pending:	IMAN - Interrupt Management Register.  Used to enable
+ *			interrupts and check for pending interrupts.
+ * @irq_control:	IMOD - Interrupt Moderation Register.
+ * 			Used to throttle interrupts.
+ * @erst_size:		Number of segments in the Event Ring Segment Table (ERST).
+ * @erst_base:		ERST base address.
+ * @erst_dequeue:	Event ring dequeue pointer.
+ *
+ * Each interrupter (defined by a MSI-X vector) has an event ring and an Event
+ * Ring Segment Table (ERST) associated with it.  The event ring is comprised of
+ * multiple segments of the same size.  The HC places events on the ring and
+ * "updates the Cycle bit in the TRBs to indicate to software the current
+ * position of the Enqueue Pointer." The HCD (Linux) processes those events and
+ * updates the dequeue pointer.
+ */
+struct xhci_intr_reg {
+	__le32	irq_pending;
+	__le32	irq_control;
+	__le32	erst_size;
+	__le32	rsvd;
+	__le64	erst_base;
+	__le64	erst_dequeue;
+};
+
+/* irq_pending bitmasks */
+#define	ER_IRQ_PENDING(p)	((p) & 0x1)
+/* bits 2:31 need to be preserved */
+/* THIS IS BUGGY - FIXME - IP IS WRITE 1 TO CLEAR */
+#define	ER_IRQ_CLEAR(p)		((p) & 0xfffffffe)
+#define	ER_IRQ_ENABLE(p)	((ER_IRQ_CLEAR(p)) | 0x2)
+#define	ER_IRQ_DISABLE(p)	((ER_IRQ_CLEAR(p)) & ~(0x2))
+
+/* irq_control bitmasks */
+/* Minimum interval between interrupts (in 250ns intervals).  The interval
+ * between interrupts will be longer if there are no events on the event ring.
+ * Default is 4000 (1 ms).
+ */
+#define ER_IRQ_INTERVAL_MASK	(0xffff)
+/* Counter used to count down the time to the next interrupt - HW use only */
+#define ER_IRQ_COUNTER_MASK	(0xffff << 16)
+
+/* erst_size bitmasks */
+/* Preserve bits 16:31 of erst_size */
+#define	ERST_SIZE_MASK		(0xffff << 16)
+
+/* erst_dequeue bitmasks */
+/* Dequeue ERST Segment Index (DESI) - Segment number (or alias)
+ * where the current dequeue pointer lies.  This is an optional HW hint.
+ */
+#define ERST_DESI_MASK		(0x7)
+/* Event Handler Busy (EHB) - is the event ring scheduled to be serviced by
+ * a work queue (or delayed service routine)?
+ */
+#define ERST_EHB		(1 << 3)
+#define ERST_PTR_MASK		(0xf)
+
+/**
+ * struct xhci_run_regs
+ * @microframe_index:
+ * 		MFINDEX - current microframe number
+ *
+ * Section 5.5 Host Controller Runtime Registers:
+ * "Software should read and write these registers using only Dword (32 bit)
+ * or larger accesses"
+ */
+struct xhci_run_regs {
+	__le32			microframe_index;
+	__le32			rsvd[7];
+	struct xhci_intr_reg	ir_set[128];
+};
+
+/**
+ * struct xhci_doorbell_array
+ *
+ * Bits  0 -  7: Endpoint target
+ * Bits  8 - 15: RsvdZ
+ * Bits 16 - 31: Stream ID
+ *
+ * Section 5.6
+ */
+struct xhci_doorbell_array {
+	__le32	doorbell[256];
+};
+
+#define DB_VALUE(ep, stream)	((((ep) + 1) & 0xff) | ((stream) << 16))
+#define DB_VALUE_HOST		0x00000000
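+/*
+ * Example (illustrative): DB_VALUE(0, 0) = 1 targets the control endpoint
+ * (endpoint index 0, DCI 1, stream 0), while DB_VALUE_HOST written to
+ * doorbell[0] rings the command ring doorbell.
+ */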
+
+/**
+ * struct xhci_protocol_caps
+ * @revision:		major revision, minor revision, capability ID,
+ *			and next capability pointer.
+ * @name_string:	Four ASCII characters to say which spec this xHC
+ *			follows, typically "USB ".
+ * @port_info:		Port offset, count, and protocol-defined information.
+ */
+struct xhci_protocol_caps {
+	u32	revision;
+	u32	name_string;
+	u32	port_info;
+};
+
+#define	XHCI_EXT_PORT_MAJOR(x)	(((x) >> 24) & 0xff)
+#define	XHCI_EXT_PORT_OFF(x)	((x) & 0xff)
+#define	XHCI_EXT_PORT_COUNT(x)	(((x) >> 8) & 0xff)
+
+/**
+ * struct xhci_container_ctx
+ * @type: Type of context.  Used to calculate offsets to contained contexts.
+ * @size: Size of the context data
+ * @bytes: The raw context data given to HW
+ * @dma: dma address of the bytes
+ *
+ * Represents either a Device or Input context.  Holds a pointer to the raw
+ * memory used for the context (bytes) and dma address of it (dma).
+ */
+struct xhci_container_ctx {
+	unsigned type;
+#define XHCI_CTX_TYPE_DEVICE  0x1
+#define XHCI_CTX_TYPE_INPUT   0x2
+
+	int size;
+
+	u8 *bytes;
+	dma_addr_t dma;
+};
+
+/**
+ * struct xhci_slot_ctx
+ * @dev_info:	Route string, device speed, hub info, and last valid endpoint
+ * @dev_info2:	Max exit latency for device number, root hub port number
+ * @tt_info:	tt_info is used to construct split transaction tokens
+ * @dev_state:	slot state and device address
+ *
+ * Slot Context - section 6.2.1.1.  This assumes the HC uses 32-byte context
+ * structures.  If the HC uses 64-byte contexts, there is an additional 32 bytes
+ * reserved at the end of the slot context for HC internal use.
+ */
+struct xhci_slot_ctx {
+	__le32	dev_info;
+	__le32	dev_info2;
+	__le32	tt_info;
+	__le32	dev_state;
+	/* offset 0x10 to 0x1f reserved for HC internal use */
+	__le32	reserved[4];
+};
+
+/* dev_info bitmasks */
+/* Route String - 0:19 */
+#define ROUTE_STRING_MASK	(0xfffff)
+/* Device speed - values defined by PORTSC Device Speed field - 20:23 */
+#define DEV_SPEED	(0xf << 20)
+/* bit 24 reserved */
+/* Is this LS/FS device connected through a HS hub? - bit 25 */
+#define DEV_MTT		(0x1 << 25)
+/* Set if the device is a hub - bit 26 */
+#define DEV_HUB		(0x1 << 26)
+/* Index of the last valid endpoint context in this device context - 27:31 */
+#define LAST_CTX_MASK	(0x1f << 27)
+#define LAST_CTX(p)	((p) << 27)
+#define LAST_CTX_TO_EP_NUM(p)	(((p) >> 27) - 1)
+#define SLOT_FLAG	(1 << 0)
+#define EP0_FLAG	(1 << 1)
+
+/* dev_info2 bitmasks */
+/* Max Exit Latency (ms) - worst case time to wake up all links in dev path */
+#define MAX_EXIT	(0xffff)
+/* Root hub port number that is needed to access the USB device */
+#define ROOT_HUB_PORT(p)	(((p) & 0xff) << 16)
+#define DEVINFO_TO_ROOT_HUB_PORT(p)	(((p) >> 16) & 0xff)
+/* Maximum number of ports under a hub device */
+#define XHCI_MAX_PORTS(p)	(((p) & 0xff) << 24)
+
+/* tt_info bitmasks */
+/*
+ * TT Hub Slot ID - for low or full speed devices attached to a high-speed hub
+ * The Slot ID of the hub that isolates the high speed signaling from
+ * this low or full-speed device.  '0' if attached to root hub port.
+ */
+#define TT_SLOT		(0xff)
+/*
+ * The number of the downstream facing port of the high-speed hub
+ * '0' if the device is not low or full speed.
+ */
+#define TT_PORT		(0xff << 8)
+#define TT_THINK_TIME(p)	(((p) & 0x3) << 16)
+
+/* dev_state bitmasks */
+/* USB device address - assigned by the HC */
+#define DEV_ADDR_MASK	(0xff)
+/* bits 8:26 reserved */
+/* Slot state */
+#define SLOT_STATE	(0x1f << 27)
+#define GET_SLOT_STATE(p)	(((p) & (0x1f << 27)) >> 27)
+
+#define SLOT_STATE_DISABLED	0
+#define SLOT_STATE_ENABLED	SLOT_STATE_DISABLED
+#define SLOT_STATE_DEFAULT	1
+#define SLOT_STATE_ADDRESSED	2
+#define SLOT_STATE_CONFIGURED	3
+
+/**
+ * struct xhci_ep_ctx
+ * @ep_info:	endpoint state, streams, mult, and interval information.
+ * @ep_info2:	information on endpoint type, max packet size, max burst size,
+ * 		error count, and whether the HC will force an event for all
+ * 		transactions.
+ * @deq:	64-bit ring dequeue pointer address.  If the endpoint only
+ * 		defines one stream, this points to the endpoint transfer ring.
+ * 		Otherwise, it points to a stream context array, which has a
+ * 		ring pointer for each flow.
+ * @tx_info:
+ * 		Average TRB lengths for the endpoint ring and
+ * 		max payload within an Endpoint Service Interval Time (ESIT).
+ *
+ * Endpoint Context - section 6.2.1.2.  This assumes the HC uses 32-byte context
+ * structures.  If the HC uses 64-byte contexts, there is an additional 32 bytes
+ * reserved at the end of the endpoint context for HC internal use.
+ */
+struct xhci_ep_ctx {
+	__le32	ep_info;
+	__le32	ep_info2;
+	__le64	deq;
+	__le32	tx_info;
+	/* offset 0x14 - 0x1f reserved for HC internal use */
+	__le32	reserved[3];
+};
+
+/* ep_info bitmasks */
+/*
+ * Endpoint State - bits 0:2
+ * 0 - disabled
+ * 1 - running
+ * 2 - halted due to halt condition - ok to manipulate endpoint ring
+ * 3 - stopped
+ * 4 - TRB error
+ * 5-7 - reserved
+ */
+#define EP_STATE_MASK		(0xf)
+#define EP_STATE_DISABLED	0
+#define EP_STATE_RUNNING	1
+#define EP_STATE_HALTED		2
+#define EP_STATE_STOPPED	3
+#define EP_STATE_ERROR		4
+/* Mult - Max number of bursts within an interval, in EP companion desc. */
+#define EP_MULT(p)		(((p) & 0x3) << 8)
+#define CTX_TO_EP_MULT(p)	(((p) >> 8) & 0x3)
+/* bits 10:14 are Max Primary Streams */
+/* bit 15 is Linear Stream Array */
+/* Interval - period between requests to an endpoint - 125u increments. */
+#define EP_INTERVAL(p)		(((p) & 0xff) << 16)
+#define EP_INTERVAL_TO_UFRAMES(p)		(1 << (((p) >> 16) & 0xff))
+#define CTX_TO_EP_INTERVAL(p)	(((p) >> 16) & 0xff)
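+/* e.g. an encoded interval of 3 means 2^3 = 8 microframes, i.e. 1ms */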
+#define EP_MAXPSTREAMS_MASK	(0x1f << 10)
+#define EP_MAXPSTREAMS(p)	(((p) << 10) & EP_MAXPSTREAMS_MASK)
+/* Endpoint is set up with a Linear Stream Array (vs. Secondary Stream Array) */
+#define	EP_HAS_LSA		(1 << 15)
+
+/* ep_info2 bitmasks */
+/*
+ * Force Event - generate transfer events for all TRBs for this endpoint
+ * This will tell the HC to ignore the IOC and ISP flags (for debugging only).
+ */
+#define	FORCE_EVENT	(0x1)
+#define ERROR_COUNT(p)	(((p) & 0x3) << 1)
+#define CTX_TO_EP_TYPE(p)	(((p) >> 3) & 0x7)
+#define EP_TYPE(p)	((p) << 3)
+#define ISOC_OUT_EP	1
+#define BULK_OUT_EP	2
+#define INT_OUT_EP	3
+#define CTRL_EP		4
+#define ISOC_IN_EP	5
+#define BULK_IN_EP	6
+#define INT_IN_EP	7
+/* bit 6 reserved */
+/* bit 7 is Host Initiate Disable - for disabling stream selection */
+#define MAX_BURST(p)	(((p)&0xff) << 8)
+#define CTX_TO_MAX_BURST(p)	(((p) >> 8) & 0xff)
+#define MAX_PACKET(p)	(((p)&0xffff) << 16)
+#define MAX_PACKET_MASK		(0xffff << 16)
+#define MAX_PACKET_DECODED(p)	(((p) >> 16) & 0xffff)
+
+/* Get max packet size from ep desc. Bits 10:0 specify the max packet size.
+ * USB2.0 spec 9.6.6.
+ */
+#define GET_MAX_PACKET(p)	((p) & 0x7ff)
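+/* e.g. a 512-byte bulk endpoint: GET_MAX_PACKET(0x0200) = 512 */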
+
+/* tx_info bitmasks */
+#define AVG_TRB_LENGTH_FOR_EP(p)	((p) & 0xffff)
+#define MAX_ESIT_PAYLOAD_FOR_EP(p)	(((p) & 0xffff) << 16)
+#define CTX_TO_MAX_ESIT_PAYLOAD(p)	(((p) >> 16) & 0xffff)
+
+/* deq bitmasks */
+#define EP_CTX_CYCLE_MASK		(1 << 0)
+
+
+/**
+ * struct xhci_input_control_ctx
+ * Input control context; see section 6.2.5.
+ *
+ * @drop_flags:	set the bit of the endpoint context you want to disable
+ * @add_flags:	set the bit of the endpoint context you want to enable
+ */
+struct xhci_input_control_ctx {
+	__le32	drop_flags;
+	__le32	add_flags;
+	__le32	rsvd2[6];
+};
+
+#define	EP_IS_ADDED(ctrl_ctx, i) \
+	(le32_to_cpu(ctrl_ctx->add_flags) & (1 << (i + 1)))
+#define	EP_IS_DROPPED(ctrl_ctx, i)       \
+	(le32_to_cpu(ctrl_ctx->drop_flags) & (1 << (i + 1)))
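+/*
+ * Bit 0 of the input context flags covers the slot context, hence the +1:
+ * e.g. EP_IS_ADDED(ctrl_ctx, 0) tests add_flags bit 1, the EP0_FLAG bit.
+ */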
+
+/* Represents everything that is needed to issue a command on the command ring.
+ * It's useful to pre-allocate these for commands that cannot fail due to
+ * out-of-memory errors, like freeing streams.
+ */
+struct xhci_command {
+	/* Input context for changing device state */
+	struct xhci_container_ctx	*in_ctx;
+	u32				status;
+	/* If completion is null, no one is waiting on this command
+	 * and the structure can be freed after the command completes.
+	 */
+	struct completion		*completion;
+	union xhci_trb			*command_trb;
+	struct list_head		cmd_list;
+};
+
+/* drop context bitmasks */
+#define	DROP_EP(x)	(0x1 << x)
+/* add context bitmasks */
+#define	ADD_EP(x)	(0x1 << x)
+
+struct xhci_stream_ctx {
+	/* 64-bit stream ring address, cycle state, and stream type */
+	__le64	stream_ring;
+	/* offset 0x14 - 0x1f reserved for HC internal use */
+	__le32	reserved[2];
+};
+
+/* Stream Context Types (section 6.4.1) - bits 3:1 of stream ctx deq ptr */
+#define	SCT_FOR_CTX(p)		(((p) << 1) & 0x7)
+/* Secondary stream array type, dequeue pointer is to a transfer ring */
+#define	SCT_SEC_TR		0
+/* Primary stream array type, dequeue pointer is to a transfer ring */
+#define	SCT_PRI_TR		1
+/* Dequeue pointer is for a secondary stream array (SSA) with 8 entries */
+#define SCT_SSA_8		2
+#define SCT_SSA_16		3
+#define SCT_SSA_32		4
+#define SCT_SSA_64		5
+#define SCT_SSA_128		6
+#define SCT_SSA_256		7
+
+/* Assume no secondary streams for now */
+struct xhci_stream_info {
+	struct xhci_ring		**stream_rings;
+	/* Number of streams, including stream 0 (which drivers can't use) */
+	unsigned int			num_streams;
+	/* The stream context array may be bigger than
+	 * the number of streams the driver asked for
+	 */
+	struct xhci_stream_ctx		*stream_ctx_array;
+	unsigned int			num_stream_ctxs;
+	dma_addr_t			ctx_array_dma;
+	/* For mapping physical TRB addresses to segments in stream rings */
+	struct radix_tree_root		trb_address_map;
+	struct xhci_command		*free_streams_command;
+};
+
+#define	SMALL_STREAM_ARRAY_SIZE		256
+#define	MEDIUM_STREAM_ARRAY_SIZE	1024
+
+/* Some Intel xHCI host controllers need software to keep track of the bus
+ * bandwidth.  Keep track of endpoint info here.  Each root port is allocated
+ * the full bus bandwidth.  We must also treat TTs (including each port under a
+ * multi-TT hub) as a separate bandwidth domain.  The direct memory interface
+ * (DMI) also limits the total bandwidth (across all domains) that can be used.
+ */
+struct xhci_bw_info {
+	/* ep_interval is zero-based */
+	unsigned int		ep_interval;
+	/* mult and num_packets are one-based */
+	unsigned int		mult;
+	unsigned int		num_packets;
+	unsigned int		max_packet_size;
+	unsigned int		max_esit_payload;
+	unsigned int		type;
+};
+
+/* "Block" sizes in bytes the hardware uses for different device speeds.
+ * The logic in this part of the hardware limits the number of bits it can
+ * use, so software must represent bandwidth in a less precise manner to
+ * mimic what the scheduler hardware computes.
+ */
+#define	FS_BLOCK	1
+#define	HS_BLOCK	4
+#define	SS_BLOCK	16
+#define	DMI_BLOCK	32
+
+/* Each device speed has a protocol overhead (CRC, bit stuffing, etc) associated
+ * with each byte transferred.  SuperSpeed devices have an initial overhead to
+ * set up bursts.  These are in blocks, see above.  LS overhead has already been
+ * translated into FS blocks.
+ */
+#define DMI_OVERHEAD 8
+#define DMI_OVERHEAD_BURST 4
+#define SS_OVERHEAD 8
+#define SS_OVERHEAD_BURST 32
+#define HS_OVERHEAD 26
+#define FS_OVERHEAD 20
+#define LS_OVERHEAD 128
+/* The TTs need to claim roughly twice as much bandwidth (94 bytes per
+ * microframe ~= 24Mbps) of the HS bus as the devices can actually use because
+ * of overhead associated with split transfers crossing microframe boundaries.
+ * 31 blocks is pure protocol overhead.
+ */
+#define TT_HS_OVERHEAD (31 + 94)
+#define TT_DMI_OVERHEAD (25 + 12)
+
+/* Bandwidth limits in blocks */
+#define FS_BW_LIMIT		1285
+#define TT_BW_LIMIT		1320
+#define HS_BW_LIMIT		1607
+#define SS_BW_LIMIT_IN		3906
+#define DMI_BW_LIMIT_IN		3906
+#define SS_BW_LIMIT_OUT		3906
+#define DMI_BW_LIMIT_OUT	3906
+
+/* Percentage of bus bandwidth reserved for non-periodic transfers */
+#define FS_BW_RESERVED		10
+#define HS_BW_RESERVED		20
+#define SS_BW_RESERVED		10
+
+struct xhci_virt_ep {
+	struct xhci_ring		*ring;
+	/* Related to endpoints that are configured to use stream IDs only */
+	struct xhci_stream_info		*stream_info;
+	/* Temporary storage in case the configure endpoint command fails and we
+	 * have to restore the device state to the previous state
+	 */
+	struct xhci_ring		*new_ring;
+	unsigned int			ep_state;
+#define SET_DEQ_PENDING		(1 << 0)
+#define EP_HALTED		(1 << 1)	/* For stall handling */
+#define EP_HALT_PENDING		(1 << 2)	/* For URB cancellation */
+/* Transitioning the endpoint to using streams, don't enqueue URBs */
+#define EP_GETTING_STREAMS	(1 << 3)
+#define EP_HAS_STREAMS		(1 << 4)
+/* Transitioning the endpoint to not using streams, don't enqueue URBs */
+#define EP_GETTING_NO_STREAMS	(1 << 5)
+	/* ----  Related to URB cancellation ---- */
+	struct list_head	cancelled_td_list;
+	/* The TRB that was last reported in a stopped endpoint ring */
+	union xhci_trb		*stopped_trb;
+	struct xhci_td		*stopped_td;
+	unsigned int		stopped_stream;
+	/* Watchdog timer for stop endpoint command to cancel URBs */
+	struct timer_list	stop_cmd_timer;
+	int			stop_cmds_pending;
+	struct xhci_hcd		*xhci;
+	/* Dequeue pointer and dequeue segment for a submitted Set TR Dequeue
+	 * command.  We'll need to update the ring's dequeue segment and dequeue
+	 * pointer after the command completes.
+	 */
+	struct xhci_segment	*queued_deq_seg;
+	union xhci_trb		*queued_deq_ptr;
+	/*
+	 * Sometimes the xHC cannot process the isochronous endpoint ring
+	 * quickly enough; it will then miss some isoc TDs on the ring and
+	 * generate a Missed Service Error Event.
+	 * Set the skip flag when a Missed Service Error Event is received,
+	 * and process the missed TDs on the endpoint ring.
+	 */
+	bool			skip;
+	/* Bandwidth checking storage */
+	struct xhci_bw_info	bw_info;
+	struct list_head	bw_endpoint_list;
+};
+
+enum xhci_overhead_type {
+	LS_OVERHEAD_TYPE = 0,
+	FS_OVERHEAD_TYPE,
+	HS_OVERHEAD_TYPE,
+};
+
+struct xhci_interval_bw {
+	unsigned int		num_packets;
+	/* Sorted by max packet size.
+	 * Head of the list is the greatest max packet size.
+	 */
+	struct list_head	endpoints;
+	/* How many endpoints of each speed are present. */
+	unsigned int		overhead[3];
+};
+
+#define	XHCI_MAX_INTERVAL	16
+
+struct xhci_interval_bw_table {
+	unsigned int		interval0_esit_payload;
+	struct xhci_interval_bw	interval_bw[XHCI_MAX_INTERVAL];
+	/* Includes reserved bandwidth for async endpoints */
+	unsigned int		bw_used;
+	unsigned int		ss_bw_in;
+	unsigned int		ss_bw_out;
+};
+
+
+struct xhci_virt_device {
+	struct usb_device		*udev;
+	/*
+	 * Commands to the hardware are passed an "input context" that
+	 * tells the hardware what to change in its data structures.
+	 * The hardware will return changes in an "output context" that
+	 * software must allocate for the hardware.  We need to keep
+	 * track of input and output contexts separately because
+	 * these commands might fail and we don't trust the hardware.
+	 */
+	struct xhci_container_ctx       *out_ctx;
+	/* Used for addressing devices and configuration changes */
+	struct xhci_container_ctx       *in_ctx;
+	/* Rings saved to ensure old alt settings can be re-instated */
+	struct xhci_ring		**ring_cache;
+	int				num_rings_cached;
+	/* Store xHC assigned device address */
+	int				address;
+#define	XHCI_MAX_RINGS_CACHED	31
+	struct xhci_virt_ep		eps[31];
+	struct completion		cmd_completion;
+	/* Status of the last command issued for this device */
+	u32				cmd_status;
+	struct list_head		cmd_list;
+	u8				fake_port;
+	u8				real_port;
+	struct xhci_interval_bw_table	*bw_table;
+	struct xhci_tt_bw_info		*tt_info;
+	/* The current max exit latency for the enabled USB3 link states. */
+	u16				current_mel;
+};
+
+/*
+ * For each roothub, keep track of the bandwidth information for each periodic
+ * interval.
+ *
+ * If a high speed hub is attached to the roothub, each TT associated with that
+ * hub is a separate bandwidth domain.  The interval information for the
+ * endpoints on the devices under that TT will appear in the TT structure.
+ */
+struct xhci_root_port_bw_info {
+	struct list_head		tts;
+	unsigned int			num_active_tts;
+	struct xhci_interval_bw_table	bw_table;
+};
+
+struct xhci_tt_bw_info {
+	struct list_head		tt_list;
+	int				slot_id;
+	int				ttport;
+	struct xhci_interval_bw_table	bw_table;
+	int				active_eps;
+};
+
+
+/**
+ * struct xhci_device_context_array
+ * @dev_context_ptrs	array of 64-bit DMA addresses for device contexts
+ */
+struct xhci_device_context_array {
+	/* 64-bit device addresses; we only write 32-bit addresses */
+	__le64			dev_context_ptrs[MAX_HC_SLOTS];
+	/* private xHCD pointers */
+	dma_addr_t	dma;
+};
+/* TODO: write function to set the 64-bit device DMA address */
+/*
+ * TODO: change this to be dynamically sized at HC mem init time since the HC
+ * might not be able to handle the maximum number of devices possible.
+ */
+
+
+struct xhci_transfer_event {
+	/* 64-bit buffer address, or immediate data */
+	__le64	buffer;
+	__le32	transfer_len;
+	/* This field is interpreted differently based on the type of TRB */
+	__le32	flags;
+};
+
+/* Transfer event TRB length bit mask */
+/* bits 0:23 */
+#define	EVENT_TRB_LEN(p)		((p) & 0xffffff)
+
+/** Transfer Event bit fields **/
+#define	TRB_TO_EP_ID(p)	(((p) >> 16) & 0x1f)
+
+/* Completion Code - only applicable for some types of TRBs */
+#define	COMP_CODE_MASK		(0xff << 24)
+#define GET_COMP_CODE(p)	(((p) & COMP_CODE_MASK) >> 24)
+#define COMP_SUCCESS	1
+/* Data Buffer Error */
+#define COMP_DB_ERR	2
+/* Babble Detected Error */
+#define COMP_BABBLE	3
+/* USB Transaction Error */
+#define COMP_TX_ERR	4
+/* TRB Error - some TRB field is invalid */
+#define COMP_TRB_ERR	5
+/* Stall Error - USB device is stalled */
+#define COMP_STALL	6
+/* Resource Error - HC doesn't have memory for that device configuration */
+#define COMP_ENOMEM	7
+/* Bandwidth Error - not enough room in schedule for this dev config */
+#define COMP_BW_ERR	8
+/* No Slots Available Error - HC ran out of device slots */
+#define COMP_ENOSLOTS	9
+/* Invalid Stream Type Error */
+#define COMP_STREAM_ERR	10
+/* Slot Not Enabled Error - doorbell rung for disabled device slot */
+#define COMP_EBADSLT	11
+/* Endpoint Not Enabled Error */
+#define COMP_EBADEP	12
+/* Short Packet */
+#define COMP_SHORT_TX	13
+/* Ring Underrun - doorbell rung for an empty isoc OUT ep ring */
+#define COMP_UNDERRUN	14
+/* Ring Overrun - isoc IN ep ring is empty when ep is scheduled to RX */
+#define COMP_OVERRUN	15
+/* Virtual Function Event Ring Full Error */
+#define COMP_VF_FULL	16
+/* Parameter Error - Context parameter is invalid */
+#define COMP_EINVAL	17
+/* Bandwidth Overrun Error - isoc ep exceeded its allocated bandwidth */
+#define COMP_BW_OVER	18
+/* Context State Error - illegal context state transition requested */
+#define COMP_CTX_STATE	19
+/* No Ping Response Error - HC didn't get PING_RESPONSE in time to TX */
+#define COMP_PING_ERR	20
+/* Event Ring is full */
+#define COMP_ER_FULL	21
+/* Incompatible Device Error */
+#define COMP_DEV_ERR	22
+/* Missed Service Error - HC couldn't service an isoc ep within interval */
+#define COMP_MISSED_INT	23
+/* Successfully stopped command ring */
+#define COMP_CMD_STOP	24
+/* Successfully aborted current command and stopped command ring */
+#define COMP_CMD_ABORT	25
+/* Stopped - transfer was terminated by a stop endpoint command */
+#define COMP_STOP	26
+/* Same as COMP_STOP, but the transferred length in the event is invalid */
+#define COMP_STOP_INVAL	27
+/* Control Abort Error - Debug Capability - control pipe aborted */
+#define COMP_DBG_ABORT	28
+/* Max Exit Latency Too Large Error */
+#define COMP_MEL_ERR	29
+/* TRB type 30 reserved */
+/* Isoc Buffer Overrun - an isoc IN ep sent more data than could fit in TD */
+#define COMP_BUFF_OVER	31
+/* Event Lost Error - xHC has an "internal event overrun condition" */
+#define COMP_ISSUES	32
+/* Undefined Error - reported when other error codes don't apply */
+#define COMP_UNKNOWN	33
+/* Invalid Stream ID Error */
+#define COMP_STRID_ERR	34
+/* Secondary Bandwidth Error - may be returned by a Configure Endpoint cmd */
+#define COMP_2ND_BW_ERR	35
+/* Split Transaction Error */
+#define	COMP_SPLIT_ERR	36
+
+struct xhci_link_trb {
+	/* 64-bit segment pointer*/
+	__le64 segment_ptr;
+	__le32 intr_target;
+	__le32 control;
+};
+
+/* control bitfields */
+#define LINK_TOGGLE	(0x1<<1)
+
+/* Command completion event TRB */
+struct xhci_event_cmd {
+	/* Pointer to command TRB, or the value passed by the event data trb */
+	__le64 cmd_trb;
+	__le32 status;
+	__le32 flags;
+};
+
+/* flags bitmasks */
+/* bits 16:23 are the virtual function ID */
+/* bits 24:31 are the slot ID */
+#define TRB_TO_SLOT_ID(p)	(((p) & (0xff<<24)) >> 24)
+#define SLOT_ID_FOR_TRB(p)	(((p) & 0xff) << 24)
+
+/* Stop Endpoint TRB - ep_index to endpoint ID for this TRB */
+#define TRB_TO_EP_INDEX(p)		((((p) & (0x1f << 16)) >> 16) - 1)
+#define	EP_ID_FOR_TRB(p)		((((p) + 1) & 0x1f) << 16)
+
+#define SUSPEND_PORT_FOR_TRB(p)		(((p) & 1) << 23)
+#define TRB_TO_SUSPEND_PORT(p)		(((p) & (1 << 23)) >> 23)
+#define LAST_EP_INDEX			30
+
+/* Set TR Dequeue Pointer command TRB fields */
+#define TRB_TO_STREAM_ID(p)		((((p) & (0xffff << 16)) >> 16))
+#define STREAM_ID_FOR_TRB(p)		((((p)) & 0xffff) << 16)
+
+
+/* Port Status Change Event TRB fields */
+/* Port ID - bits 31:24 */
+#define GET_PORT_ID(p)		(((p) & (0xff << 24)) >> 24)
+
+/* Normal TRB fields */
+/* transfer_len bitmasks - bits 0:16 */
+#define	TRB_LEN(p)		((p) & 0x1ffff)
+/* Interrupter Target - which MSI-X vector to target the completion event at */
+#define TRB_INTR_TARGET(p)	(((p) & 0x3ff) << 22)
+#define GET_INTR_TARGET(p)	(((p) >> 22) & 0x3ff)
+#define TRB_TBC(p)		(((p) & 0x3) << 7)
+#define TRB_TLBPC(p)		(((p) & 0xf) << 16)
+
+/* Cycle bit - indicates TRB ownership by HC or HCD */
+#define TRB_CYCLE		(1<<0)
+/*
+ * Force next event data TRB to be evaluated before task switch.
+ * Used to pass OS data back after a TD completes.
+ */
+#define TRB_ENT			(1<<1)
+/* Interrupt on short packet */
+#define TRB_ISP			(1<<2)
+/* Set PCIe no snoop attribute */
+#define TRB_NO_SNOOP		(1<<3)
+/* Chain multiple TRBs into a TD */
+#define TRB_CHAIN		(1<<4)
+/* Interrupt on completion */
+#define TRB_IOC			(1<<5)
+/* The buffer pointer contains immediate data */
+#define TRB_IDT			(1<<6)
+
+/* Block Event Interrupt */
+#define	TRB_BEI			(1<<9)
+
+/* Control transfer TRB specific fields */
+#define TRB_DIR_IN		(1<<16)
+#define	TRB_TX_TYPE(p)		((p) << 16)
+#define	TRB_DATA_OUT		2
+#define	TRB_DATA_IN		3
+
+/* Isochronous TRB specific fields */
+#define TRB_SIA			(1<<31)
+
+struct xhci_generic_trb {
+	__le32 field[4];
+};
+
+union xhci_trb {
+	struct xhci_link_trb		link;
+	struct xhci_transfer_event	trans_event;
+	struct xhci_event_cmd		event_cmd;
+	struct xhci_generic_trb		generic;
+};
+
+/* TRB bit mask */
+#define	TRB_TYPE_BITMASK	(0xfc00)
+#define TRB_TYPE(p)		((p) << 10)
+#define TRB_FIELD_TO_TYPE(p)	(((p) & TRB_TYPE_BITMASK) >> 10)
+/* TRB type IDs */
+/* bulk, interrupt, isoc scatter/gather, and control data stage */
+#define TRB_NORMAL		1
+/* setup stage for control transfers */
+#define TRB_SETUP		2
+/* data stage for control transfers */
+#define TRB_DATA		3
+/* status stage for control transfers */
+#define TRB_STATUS		4
+/* isoc transfers */
+#define TRB_ISOC		5
+/* TRB for linking ring segments */
+#define TRB_LINK		6
+#define TRB_EVENT_DATA		7
+/* Transfer Ring No-op (not for the command ring) */
+#define TRB_TR_NOOP		8
+/* Command TRBs */
+/* Enable Slot Command */
+#define TRB_ENABLE_SLOT		9
+/* Disable Slot Command */
+#define TRB_DISABLE_SLOT	10
+/* Address Device Command */
+#define TRB_ADDR_DEV		11
+/* Configure Endpoint Command */
+#define TRB_CONFIG_EP		12
+/* Evaluate Context Command */
+#define TRB_EVAL_CONTEXT	13
+/* Reset Endpoint Command */
+#define TRB_RESET_EP		14
+/* Stop Transfer Ring Command */
+#define TRB_STOP_RING		15
+/* Set Transfer Ring Dequeue Pointer Command */
+#define TRB_SET_DEQ		16
+/* Reset Device Command */
+#define TRB_RESET_DEV		17
+/* Force Event Command (opt) */
+#define TRB_FORCE_EVENT		18
+/* Negotiate Bandwidth Command (opt) */
+#define TRB_NEG_BANDWIDTH	19
+/* Set Latency Tolerance Value Command (opt) */
+#define TRB_SET_LT		20
+/* Get port bandwidth Command */
+#define TRB_GET_BW		21
+/* Force Header Command - generate a transaction or link management packet */
+#define TRB_FORCE_HEADER	22
+/* No-op Command - not for transfer rings */
+#define TRB_CMD_NOOP		23
+/* TRB IDs 24-31 reserved */
+/* Event TRBS */
+/* Transfer Event */
+#define TRB_TRANSFER		32
+/* Command Completion Event */
+#define TRB_COMPLETION		33
+/* Port Status Change Event */
+#define TRB_PORT_STATUS		34
+/* Bandwidth Request Event (opt) */
+#define TRB_BANDWIDTH_EVENT	35
+/* Doorbell Event (opt) */
+#define TRB_DOORBELL		36
+/* Host Controller Event */
+#define TRB_HC_EVENT		37
+/* Device Notification Event - device sent function wake notification */
+#define TRB_DEV_NOTE		38
+/* MFINDEX Wrap Event - microframe counter wrapped */
+#define TRB_MFINDEX_WRAP	39
+/* TRB IDs 40-47 reserved, 48-63 is vendor-defined */
+
+/* Nec vendor-specific command completion event. */
+#define	TRB_NEC_CMD_COMP	48
+/* Get NEC firmware revision. */
+#define	TRB_NEC_GET_FW		49
+
+#define TRB_TYPE_LINK(x)	(((x) & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK))
+/* Above, but for __le32 types -- can avoid work by swapping constants: */
+#define TRB_TYPE_LINK_LE32(x)	(((x) & cpu_to_le32(TRB_TYPE_BITMASK)) == \
+				 cpu_to_le32(TRB_TYPE(TRB_LINK)))
+#define TRB_TYPE_NOOP_LE32(x)	(((x) & cpu_to_le32(TRB_TYPE_BITMASK)) == \
+				 cpu_to_le32(TRB_TYPE(TRB_TR_NOOP)))
+
+#define NEC_FW_MINOR(p)		(((p) >> 0) & 0xff)
+#define NEC_FW_MAJOR(p)		(((p) >> 8) & 0xff)
+
+/*
+ * TRBS_PER_SEGMENT must be a multiple of 4,
+ * since the command ring is 64-byte aligned.
+ * It must also be greater than 16.
+ */
+#define TRBS_PER_SEGMENT	64
+/* Allow two commands + a link TRB, along with any reserved command TRBs */
+#define MAX_RSVD_CMD_TRBS	(TRBS_PER_SEGMENT - 3)
+#define TRB_SEGMENT_SIZE	(TRBS_PER_SEGMENT*16)
+#define TRB_SEGMENT_SHIFT	(ilog2(TRB_SEGMENT_SIZE))
+/* TRB buffer pointers can't cross 64KB boundaries */
+#define TRB_MAX_BUFF_SHIFT		16
+#define TRB_MAX_BUFF_SIZE	(1 << TRB_MAX_BUFF_SHIFT)
+
+struct xhci_segment {
+	union xhci_trb		*trbs;
+	/* private to HCD */
+	struct xhci_segment	*next;
+	dma_addr_t		dma;
+};
+
+struct xhci_td {
+	struct list_head	td_list;
+	struct list_head	cancelled_td_list;
+	struct urb		*urb;
+	struct xhci_segment	*start_seg;
+	union xhci_trb		*first_trb;
+	union xhci_trb		*last_trb;
+};
+
+/* xHCI command default timeout value */
+#define XHCI_CMD_DEFAULT_TIMEOUT	(5 * HZ)
+
+/* command descriptor */
+struct xhci_cd {
+	struct list_head	cancel_cmd_list;
+	struct xhci_command	*command;
+	union xhci_trb		*cmd_trb;
+};
+
+struct xhci_dequeue_state {
+	struct xhci_segment *new_deq_seg;
+	union xhci_trb *new_deq_ptr;
+	int new_cycle_state;
+};
+
+enum xhci_ring_type {
+	TYPE_CTRL = 0,
+	TYPE_ISOC,
+	TYPE_BULK,
+	TYPE_INTR,
+	TYPE_STREAM,
+	TYPE_COMMAND,
+	TYPE_EVENT,
+};
+
+struct xhci_ring {
+	struct xhci_segment	*first_seg;
+	struct xhci_segment	*last_seg;
+	union  xhci_trb		*enqueue;
+	struct xhci_segment	*enq_seg;
+	unsigned int		enq_updates;
+	union  xhci_trb		*dequeue;
+	struct xhci_segment	*deq_seg;
+	unsigned int		deq_updates;
+	struct list_head	td_list;
+	/*
+	 * Write the cycle state into the TRB cycle field to give ownership of
+	 * the TRB to the host controller (if we are the producer), or to check
+	 * if we own the TRB (if we are the consumer).  See section 4.9.1.
+	 */
+	u32			cycle_state;
+	unsigned int		stream_id;
+	unsigned int		num_segs;
+	unsigned int		num_trbs_free;
+	unsigned int		num_trbs_free_temp;
+	enum xhci_ring_type	type;
+	bool			last_td_was_short;
+};
+
+struct xhci_erst_entry {
+	/* 64-bit event ring segment address */
+	__le64	seg_addr;
+	__le32	seg_size;
+	/* Set to zero */
+	__le32	rsvd;
+};
+
+struct xhci_erst {
+	struct xhci_erst_entry	*entries;
+	unsigned int		num_entries;
+	/* xhci->event_ring keeps track of segment dma addresses */
+	dma_addr_t		erst_dma_addr;
+	/* Num entries the ERST can contain */
+	unsigned int		erst_size;
+};
+
+struct xhci_scratchpad {
+	u64 *sp_array;
+	dma_addr_t sp_dma;
+	void **sp_buffers;
+	dma_addr_t *sp_dma_buffers;
+};
+
+struct urb_priv {
+	int	length;
+	int	td_cnt;
+	struct	xhci_td	*td[0];
+};
+
+/*
+ * Each segment table entry is 4*32 bits long.  1K seems like an ok size:
+ * (1K bytes * 8 bits/byte) / (4*32 bits) = 64 segment entries in the table,
+ * meaning 64 ring segments.
+ */
+/* Initial number of event ring segments allocated */
+#define	ERST_NUM_SEGS	1
+/* Allocated size of the ERST, in number of entries */
+#define	ERST_SIZE	64
+/* Initial number of entries in the event ring segment table */
+#define	ERST_ENTRIES	1
+/* Poll every 60 seconds */
+#define	POLL_TIMEOUT	60
+/* Stop endpoint command timeout (secs) for URB cancellation watchdog timer */
+#define XHCI_STOP_EP_CMD_TIMEOUT	5
+/* XXX: Make these module parameters */
+
+struct s3_save {
+	u32	command;
+	u32	dev_nt;
+	u64	dcbaa_ptr;
+	u32	config_reg;
+	u32	irq_pending;
+	u32	irq_control;
+	u32	erst_size;
+	u64	erst_base;
+	u64	erst_dequeue;
+};
+
+/* Use for lpm */
+struct dev_info {
+	u32			dev_id;
+	struct	list_head	list;
+};
+
+struct xhci_bus_state {
+	unsigned long		bus_suspended;
+	unsigned long		next_statechange;
+
+	/* Port suspend arrays are indexed by the portnum of the fake roothub */
+	/* ports suspend status arrays - max 31 ports for USB2, 15 for USB3 */
+	u32			port_c_suspend;
+	u32			suspended_ports;
+	u32			port_remote_wakeup;
+	unsigned long		resume_done[USB_MAXCHILDREN];
+	/* which ports have started to resume */
+	unsigned long		resuming_ports;
+};
+
+static inline unsigned int hcd_index(struct usb_hcd *hcd)
+{
+	if (hcd->speed == HCD_USB3)
+		return 0;
+	else
+		return 1;
+}
+
+/* There is one xhci_hcd structure per controller */
+struct xhci_hcd {
+	struct usb_hcd *main_hcd;
+	struct usb_hcd *shared_hcd;
+	/* glue to PCI and HCD framework */
+	struct xhci_cap_regs __iomem *cap_regs;
+	struct xhci_op_regs __iomem *op_regs;
+	struct xhci_run_regs __iomem *run_regs;
+	struct xhci_doorbell_array __iomem *dba;
+	/* Our HCD's current interrupter register set */
+	struct	xhci_intr_reg __iomem *ir_set;
+
+	/* Cached register copies of read-only HC data */
+	__u32		hcs_params1;
+	__u32		hcs_params2;
+	__u32		hcs_params3;
+	__u32		hcc_params;
+
+	spinlock_t	lock;
+
+	/* packed release number */
+	u8		sbrn;
+	u16		hci_version;
+	u8		max_slots;
+	u8		max_interrupters;
+	u8		max_ports;
+	u8		isoc_threshold;
+	int		event_ring_max;
+	int		addr_64;
+	/* 4KB min, 128MB max */
+	int		page_size;
+	/* Valid values are 12 to 20, inclusive */
+	int		page_shift;
+	/* msi-x vectors */
+	int		msix_count;
+	struct msix_entry	*msix_entries;
+	/* data structures */
+	struct xhci_device_context_array *dcbaa;
+	struct xhci_ring	*cmd_ring;
+	unsigned int            cmd_ring_state;
+#define CMD_RING_STATE_RUNNING         (1 << 0)
+#define CMD_RING_STATE_ABORTED         (1 << 1)
+#define CMD_RING_STATE_STOPPED         (1 << 2)
+	struct list_head        cancel_cmd_list;
+	unsigned int		cmd_ring_reserved_trbs;
+	struct xhci_ring	*event_ring;
+	struct xhci_erst	erst;
+	/* Scratchpad */
+	struct xhci_scratchpad  *scratchpad;
+	/* Store LPM test failed devices' information */
+	struct list_head	lpm_failed_devs;
+
+	/* slot enabling and address device helpers */
+	struct completion	addr_dev;
+	int slot_id;
+	/* For USB 3.0 LPM enable/disable. */
+	struct xhci_command		*lpm_command;
+	/* Internal mirror of the HW's dcbaa */
+	struct xhci_virt_device	*devs[MAX_HC_SLOTS];
+	/* For keeping track of bandwidth domains per roothub. */
+	struct xhci_root_port_bw_info	*rh_bw;
+
+	/* DMA pools */
+	struct dma_pool	*device_pool;
+	struct dma_pool	*segment_pool;
+	struct dma_pool	*small_streams_pool;
+	struct dma_pool	*medium_streams_pool;
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+	/* Poll the rings - for debugging */
+	struct timer_list	event_ring_timer;
+	int			zombie;
+#endif
+	/* Host controller watchdog timer structures */
+	unsigned int		xhc_state;
+
+	u32			command;
+	struct s3_save		s3;
+/* Host controller is dying - not responding to commands. "I'm not dead yet!"
+ *
+ * xHC interrupts have been disabled and a watchdog timer will halt (or has
+ * already halted) the xHCI host and complete all URBs with an -ESHUTDOWN
+ * code.  Any code
+ * that sees this status (other than the timer that set it) should stop touching
+ * hardware immediately.  Interrupt handlers should return immediately when
+ * they see this status (any time they drop and re-acquire xhci->lock).
+ * xhci_urb_dequeue() should call usb_hcd_check_unlink_urb() and return without
+ * putting the TD on the canceled list, etc.
+ *
+ * There are no reports of xHCI host controllers that display this issue.
+ */
+#define XHCI_STATE_DYING	(1 << 0)
+#define XHCI_STATE_HALTED	(1 << 1)
+	/* Statistics */
+	int			error_bitmask;
+	unsigned int		quirks;
+#define	XHCI_LINK_TRB_QUIRK	(1 << 0)
+#define XHCI_RESET_EP_QUIRK	(1 << 1)
+#define XHCI_NEC_HOST		(1 << 2)
+#define XHCI_AMD_PLL_FIX	(1 << 3)
+#define XHCI_SPURIOUS_SUCCESS	(1 << 4)
+/*
+ * Certain Intel host controllers have a limit to the number of endpoint
+ * contexts they can handle.  Ideally, they would signal that they can't handle
+ * anymore endpoint contexts by returning a Resource Error for the Configure
+ * Endpoint command, but they don't.  Instead they expect software to keep track
+ * of the number of active endpoints for them, across configure endpoint
+ * commands, reset device commands, disable slot commands, and address device
+ * commands.
+ */
+#define XHCI_EP_LIMIT_QUIRK	(1 << 5)
+#define XHCI_BROKEN_MSI		(1 << 6)
+#define XHCI_RESET_ON_RESUME	(1 << 7)
+#define	XHCI_SW_BW_CHECKING	(1 << 8)
+#define XHCI_AMD_0x96_HOST	(1 << 9)
+#define XHCI_TRUST_TX_LENGTH	(1 << 10)
+#define XHCI_LPM_SUPPORT	(1 << 11)
+#define XHCI_INTEL_HOST		(1 << 12)
+#define XHCI_SPURIOUS_REBOOT	(1 << 13)
+#define XHCI_COMP_MODE_QUIRK	(1 << 14)
+#define XHCI_AVOID_BEI		(1 << 15)
+	unsigned int		num_active_eps;
+	unsigned int		limit_active_eps;
+	/* There are two roothubs to keep track of bus suspend info for */
+	struct xhci_bus_state   bus_state[2];
+	/* Is each xHCI roothub port a USB 3.0, USB 2.0, or USB 1.1 port? */
+	u8			*port_array;
+	/* Array of pointers to USB 3.0 PORTSC registers */
+	__le32 __iomem		**usb3_ports;
+	unsigned int		num_usb3_ports;
+	/* Array of pointers to USB 2.0 PORTSC registers */
+	__le32 __iomem		**usb2_ports;
+	unsigned int		num_usb2_ports;
+	/* support xHCI 0.96 spec USB2 software LPM */
+	unsigned		sw_lpm_support:1;
+	/* support xHCI 1.0 spec USB2 hardware LPM */
+	unsigned		hw_lpm_support:1;
+	/* Compliance Mode Recovery Data */
+	struct timer_list	comp_mode_recovery_timer;
+	u32			port_status_u0;
+/* Compliance Mode Timer Triggered every 2 seconds */
+#define COMP_MODE_RCVRY_MSECS 2000
+};
+
+/* convert between an HCD pointer and the corresponding xhci_hcd */
+static inline struct xhci_hcd *hcd_to_xhci(struct usb_hcd *hcd)
+{
+	return *((struct xhci_hcd **) (hcd->hcd_priv));
+}
+
+static inline struct usb_hcd *xhci_to_hcd(struct xhci_hcd *xhci)
+{
+	return xhci->main_hcd;
+}
+
+#ifdef CONFIG_USB_XHCI_HCD_DEBUGGING
+#define XHCI_DEBUG	1
+#else
+#define XHCI_DEBUG	0
+#endif
+
+#define xhci_dbg(xhci, fmt, args...) \
+	do { if (XHCI_DEBUG) dev_dbg(xhci_to_hcd(xhci)->self.controller , fmt , ## args); } while (0)
+#define xhci_info(xhci, fmt, args...) \
+	do { if (XHCI_DEBUG) dev_info(xhci_to_hcd(xhci)->self.controller , fmt , ## args); } while (0)
+#define xhci_err(xhci, fmt, args...) \
+	dev_err(xhci_to_hcd(xhci)->self.controller , fmt , ## args)
+#define xhci_warn(xhci, fmt, args...) \
+	dev_warn(xhci_to_hcd(xhci)->self.controller , fmt , ## args)
+#define xhci_warn_ratelimited(xhci, fmt, args...) \
+	dev_warn_ratelimited(xhci_to_hcd(xhci)->self.controller , fmt , ## args)
+
+/* TODO: copied from ehci.h - can be refactored? */
+/* xHCI spec says all registers are little endian */
+static inline unsigned int xhci_readl(const struct xhci_hcd *xhci,
+		__le32 __iomem *regs)
+{
+	return readl(regs);
+}
+static inline void xhci_writel(struct xhci_hcd *xhci,
+		const unsigned int val, __le32 __iomem *regs)
+{
+	writel(val, regs);
+}
+
+/*
+ * Registers should always be accessed with double word or quad word accesses.
+ *
+ * Some xHCI implementations may support 64-bit address pointers.  Registers
+ * with 64-bit address pointers should be written to with dword accesses by
+ * writing the low dword first (ptr[0]), then the high dword (ptr[1]) second.
+ * xHCI implementations that do not support 64-bit address pointers will ignore
+ * the high dword, and write order is irrelevant.
+ */
+static inline u64 xhci_read_64(const struct xhci_hcd *xhci,
+		__le64 __iomem *regs)
+{
+	__u32 __iomem *ptr = (__u32 __iomem *) regs;
+	u64 val_lo = readl(ptr);
+	u64 val_hi = readl(ptr + 1);
+	return val_lo + (val_hi << 32);
+}
+static inline void xhci_write_64(struct xhci_hcd *xhci,
+				 const u64 val, __le64 __iomem *regs)
+{
+	__u32 __iomem *ptr = (__u32 __iomem *) regs;
+	u32 val_lo = lower_32_bits(val);
+	u32 val_hi = upper_32_bits(val);
+
+	writel(val_lo, ptr);
+	writel(val_hi, ptr + 1);
+}
+
+static inline int xhci_link_trb_quirk(struct xhci_hcd *xhci)
+{
+	return xhci->quirks & XHCI_LINK_TRB_QUIRK;
+}
+
+/* xHCI debugging */
+void xhci_print_ir_set(struct xhci_hcd *xhci, int set_num);
+void xhci_print_registers(struct xhci_hcd *xhci);
+void xhci_dbg_regs(struct xhci_hcd *xhci);
+void xhci_print_run_regs(struct xhci_hcd *xhci);
+void xhci_print_trb_offsets(struct xhci_hcd *xhci, union xhci_trb *trb);
+void xhci_debug_trb(struct xhci_hcd *xhci, union xhci_trb *trb);
+void xhci_debug_segment(struct xhci_hcd *xhci, struct xhci_segment *seg);
+void xhci_debug_ring(struct xhci_hcd *xhci, struct xhci_ring *ring);
+void xhci_dbg_erst(struct xhci_hcd *xhci, struct xhci_erst *erst);
+void xhci_dbg_cmd_ptrs(struct xhci_hcd *xhci);
+void xhci_dbg_ring_ptrs(struct xhci_hcd *xhci, struct xhci_ring *ring);
+void xhci_dbg_ctx(struct xhci_hcd *xhci, struct xhci_container_ctx *ctx, unsigned int last_ep);
+char *xhci_get_slot_state(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *ctx);
+void xhci_dbg_ep_rings(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		struct xhci_virt_ep *ep);
+
+/* xHCI memory management */
+void xhci_mem_cleanup(struct xhci_hcd *xhci);
+int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags);
+void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id);
+int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, struct usb_device *udev, gfp_t flags);
+int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev);
+void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci,
+		struct usb_device *udev);
+unsigned int xhci_get_endpoint_index(struct usb_endpoint_descriptor *desc);
+unsigned int xhci_get_endpoint_flag(struct usb_endpoint_descriptor *desc);
+unsigned int xhci_get_endpoint_flag_from_index(unsigned int ep_index);
+unsigned int xhci_last_valid_endpoint(u32 added_ctxs);
+void xhci_endpoint_zero(struct xhci_hcd *xhci, struct xhci_virt_device *virt_dev, struct usb_host_endpoint *ep);
+void xhci_drop_ep_from_interval_table(struct xhci_hcd *xhci,
+		struct xhci_bw_info *ep_bw,
+		struct xhci_interval_bw_table *bw_table,
+		struct usb_device *udev,
+		struct xhci_virt_ep *virt_ep,
+		struct xhci_tt_bw_info *tt_info);
+void xhci_update_tt_active_eps(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		int old_active_eps);
+void xhci_clear_endpoint_bw_info(struct xhci_bw_info *bw_info);
+void xhci_update_bw_info(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_input_control_ctx *ctrl_ctx,
+		struct xhci_virt_device *virt_dev);
+void xhci_endpoint_copy(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_container_ctx *out_ctx,
+		unsigned int ep_index);
+void xhci_slot_copy(struct xhci_hcd *xhci,
+		struct xhci_container_ctx *in_ctx,
+		struct xhci_container_ctx *out_ctx);
+int xhci_endpoint_init(struct xhci_hcd *xhci, struct xhci_virt_device *virt_dev,
+		struct usb_device *udev, struct usb_host_endpoint *ep,
+		gfp_t mem_flags);
+void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring);
+int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
+				unsigned int num_trbs, gfp_t flags);
+void xhci_free_or_cache_endpoint_ring(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		unsigned int ep_index);
+struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
+		unsigned int num_stream_ctxs,
+		unsigned int num_streams, gfp_t flags);
+void xhci_free_stream_info(struct xhci_hcd *xhci,
+		struct xhci_stream_info *stream_info);
+void xhci_setup_streams_ep_input_ctx(struct xhci_hcd *xhci,
+		struct xhci_ep_ctx *ep_ctx,
+		struct xhci_stream_info *stream_info);
+void xhci_setup_no_streams_ep_input_ctx(struct xhci_hcd *xhci,
+		struct xhci_ep_ctx *ep_ctx,
+		struct xhci_virt_ep *ep);
+void xhci_free_device_endpoint_resources(struct xhci_hcd *xhci,
+	struct xhci_virt_device *virt_dev, bool drop_control_ep);
+struct xhci_ring *xhci_dma_to_transfer_ring(
+		struct xhci_virt_ep *ep,
+		u64 address);
+struct xhci_ring *xhci_stream_id_to_ring(
+		struct xhci_virt_device *dev,
+		unsigned int ep_index,
+		unsigned int stream_id);
+struct xhci_command *xhci_alloc_command(struct xhci_hcd *xhci,
+		bool allocate_in_ctx, bool allocate_completion,
+		gfp_t mem_flags);
+void xhci_urb_free_priv(struct xhci_hcd *xhci, struct urb_priv *urb_priv);
+void xhci_free_command(struct xhci_hcd *xhci,
+		struct xhci_command *command);
+
+#ifdef CONFIG_PCI
+/* xHCI PCI glue */
+int xhci_register_pci(void);
+void xhci_unregister_pci(void);
+#else
+static inline int xhci_register_pci(void) { return 0; }
+static inline void xhci_unregister_pci(void) {}
+#endif
+
+#if defined(CONFIG_USB_XHCI_PLATFORM) \
+	|| defined(CONFIG_USB_XHCI_PLATFORM_MODULE)
+int xhci_register_plat(void);
+void xhci_unregister_plat(void);
+#else
+static inline int xhci_register_plat(void)
+{ return 0; }
+static inline void xhci_unregister_plat(void)
+{  }
+#endif
+
+/* xHCI host controller glue */
+typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+int xhci_handshake(struct xhci_hcd *xhci, void __iomem *ptr,
+		u32 mask, u32 done, int usec);
+void xhci_quiesce(struct xhci_hcd *xhci);
+int xhci_halt(struct xhci_hcd *xhci);
+int xhci_reset(struct xhci_hcd *xhci);
+int xhci_init(struct usb_hcd *hcd);
+int xhci_run(struct usb_hcd *hcd);
+void xhci_stop(struct usb_hcd *hcd);
+void xhci_shutdown(struct usb_hcd *hcd);
+int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
+
+#ifdef	CONFIG_PM
+int xhci_suspend(struct xhci_hcd *xhci);
+int xhci_resume(struct xhci_hcd *xhci, bool hibernated);
+#else
+#define	xhci_suspend	NULL
+#define	xhci_resume	NULL
+#endif
+
+int xhci_get_frame(struct usb_hcd *hcd);
+irqreturn_t xhci_irq(struct usb_hcd *hcd);
+irqreturn_t xhci_msi_irq(int irq, struct usb_hcd *hcd);
+int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev);
+void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev);
+int xhci_alloc_tt_info(struct xhci_hcd *xhci,
+		struct xhci_virt_device *virt_dev,
+		struct usb_device *hdev,
+		struct usb_tt *tt, gfp_t mem_flags);
+int xhci_alloc_streams(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		unsigned int num_streams, gfp_t mem_flags);
+int xhci_free_streams(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		gfp_t mem_flags);
+int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev);
+int xhci_update_device(struct usb_hcd *hcd, struct usb_device *udev);
+int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
+				struct usb_device *udev, int enable);
+int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+			struct usb_tt *tt, gfp_t mem_flags);
+int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags);
+int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status);
+int xhci_add_endpoint(struct usb_hcd *hcd, struct usb_device *udev, struct usb_host_endpoint *ep);
+int xhci_drop_endpoint(struct usb_hcd *hcd, struct usb_device *udev, struct usb_host_endpoint *ep);
+void xhci_endpoint_reset(struct usb_hcd *hcd, struct usb_host_endpoint *ep);
+int xhci_discover_or_reset_device(struct usb_hcd *hcd, struct usb_device *udev);
+int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+
+/* xHCI ring, segment, TRB, and TD functions */
+dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
+struct xhci_segment *trb_in_td(struct xhci_segment *start_seg,
+		union xhci_trb *start_trb, union xhci_trb *end_trb,
+		dma_addr_t suspect_dma);
+int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code);
+void xhci_ring_cmd_db(struct xhci_hcd *xhci);
+int xhci_queue_slot_control(struct xhci_hcd *xhci, u32 trb_type, u32 slot_id);
+int xhci_queue_address_device(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
+		u32 slot_id);
+int xhci_queue_vendor_command(struct xhci_hcd *xhci,
+		u32 field1, u32 field2, u32 field3, u32 field4);
+int xhci_queue_stop_endpoint(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index, int suspend);
+int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags, struct urb *urb,
+		int slot_id, unsigned int ep_index);
+int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags, struct urb *urb,
+		int slot_id, unsigned int ep_index);
+int xhci_queue_intr_tx(struct xhci_hcd *xhci, gfp_t mem_flags, struct urb *urb,
+		int slot_id, unsigned int ep_index);
+int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags,
+		struct urb *urb, int slot_id, unsigned int ep_index);
+int xhci_queue_configure_endpoint(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
+		u32 slot_id, bool command_must_succeed);
+int xhci_queue_evaluate_context(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
+		u32 slot_id, bool command_must_succeed);
+int xhci_queue_reset_ep(struct xhci_hcd *xhci, int slot_id,
+		unsigned int ep_index);
+int xhci_queue_reset_device(struct xhci_hcd *xhci, u32 slot_id);
+void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		unsigned int stream_id, struct xhci_td *cur_td,
+		struct xhci_dequeue_state *state);
+void xhci_queue_new_dequeue_state(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		unsigned int stream_id,
+		struct xhci_dequeue_state *deq_state);
+void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci,
+		struct usb_device *udev, unsigned int ep_index);
+void xhci_queue_config_ep_quirk(struct xhci_hcd *xhci,
+		unsigned int slot_id, unsigned int ep_index,
+		struct xhci_dequeue_state *deq_state);
+void xhci_stop_endpoint_command_watchdog(unsigned long arg);
+int xhci_cancel_cmd(struct xhci_hcd *xhci, struct xhci_command *command,
+		union xhci_trb *cmd_trb);
+void xhci_ring_ep_doorbell(struct xhci_hcd *xhci, unsigned int slot_id,
+		unsigned int ep_index, unsigned int stream_id);
+
+/* xHCI roothub code */
+void xhci_set_link_state(struct xhci_hcd *xhci, __le32 __iomem **port_array,
+				int port_id, u32 link_state);
+int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state);
+int xhci_disable_usb3_lpm_timeout(struct usb_hcd *hcd,
+			struct usb_device *udev, enum usb3_link_state state);
+void xhci_test_and_clear_bit(struct xhci_hcd *xhci, __le32 __iomem **port_array,
+				int port_id, u32 port_bit);
+int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex,
+		char *buf, u16 wLength);
+int xhci_hub_status_data(struct usb_hcd *hcd, char *buf);
+int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1);
+
+#ifdef CONFIG_PM
+int xhci_bus_suspend(struct usb_hcd *hcd);
+int xhci_bus_resume(struct usb_hcd *hcd);
+#else
+#define	xhci_bus_suspend	NULL
+#define	xhci_bus_resume		NULL
+#endif	/* CONFIG_PM */
+
+u32 xhci_port_state_to_neutral(u32 state);
+int xhci_find_slot_id_by_port(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+		u16 port);
+void xhci_ring_device(struct xhci_hcd *xhci, int slot_id);
+
+/* xHCI contexts */
+struct xhci_input_control_ctx *xhci_get_input_control_ctx(struct xhci_hcd *xhci, struct xhci_container_ctx *ctx);
+struct xhci_slot_ctx *xhci_get_slot_ctx(struct xhci_hcd *xhci, struct xhci_container_ctx *ctx);
+struct xhci_ep_ctx *xhci_get_ep_ctx(struct xhci_hcd *xhci, struct xhci_container_ctx *ctx, unsigned int ep_index);
+
+#endif /* __LINUX_XHCI_HCD_H */
diff --git a/include/linux/usb/ch11.h b/include/linux/usb/ch11.h
new file mode 100644
index 0000000..7692dc6
--- /dev/null
+++ b/include/linux/usb/ch11.h
@@ -0,0 +1,266 @@
+/*
+ * This file holds Hub protocol constants and data structures that are
+ * defined in chapter 11 (Hub Specification) of the USB 2.0 specification.
+ *
+ * It is used/shared between the USB core, the HCDs and a couple of other USB
+ * drivers.
+ */
+
+#ifndef __LINUX_CH11_H
+#define __LINUX_CH11_H
+
+#include <linux/types.h>	/* __u8 etc */
+
+/*
+ * Hub request types
+ */
+
+#define USB_RT_HUB	(USB_TYPE_CLASS | USB_RECIP_DEVICE)
+#define USB_RT_PORT	(USB_TYPE_CLASS | USB_RECIP_OTHER)
+
+/*
+ * Hub class requests
+ * See USB 2.0 spec Table 11-16
+ */
+#define HUB_CLEAR_TT_BUFFER	8
+#define HUB_RESET_TT		9
+#define HUB_GET_TT_STATE	10
+#define HUB_STOP_TT		11
+
+/*
+ * Hub class additional requests defined by USB 3.0 spec
+ * See USB 3.0 spec Table 10-6
+ */
+#define HUB_SET_DEPTH		12
+#define HUB_GET_PORT_ERR_COUNT	13
+
+/*
+ * Hub Class feature numbers
+ * See USB 2.0 spec Table 11-17
+ */
+#define C_HUB_LOCAL_POWER	0
+#define C_HUB_OVER_CURRENT	1
+
+/*
+ * Port feature numbers
+ * See USB 2.0 spec Table 11-17
+ */
+#define USB_PORT_FEAT_CONNECTION	0
+#define USB_PORT_FEAT_ENABLE		1
+#define USB_PORT_FEAT_SUSPEND		2	/* L2 suspend */
+#define USB_PORT_FEAT_OVER_CURRENT	3
+#define USB_PORT_FEAT_RESET		4
+#define USB_PORT_FEAT_L1		5	/* L1 suspend */
+#define USB_PORT_FEAT_POWER		8
+#define USB_PORT_FEAT_LOWSPEED		9	/* Should never be used */
+#define USB_PORT_FEAT_C_CONNECTION	16
+#define USB_PORT_FEAT_C_ENABLE		17
+#define USB_PORT_FEAT_C_SUSPEND		18
+#define USB_PORT_FEAT_C_OVER_CURRENT	19
+#define USB_PORT_FEAT_C_RESET		20
+#define USB_PORT_FEAT_TEST              21
+#define USB_PORT_FEAT_INDICATOR         22
+#define USB_PORT_FEAT_C_PORT_L1         23
+
+/*
+ * Port feature selectors added by USB 3.0 spec.
+ * See USB 3.0 spec Table 10-7
+ */
+#define USB_PORT_FEAT_LINK_STATE		5
+#define USB_PORT_FEAT_U1_TIMEOUT		23
+#define USB_PORT_FEAT_U2_TIMEOUT		24
+#define USB_PORT_FEAT_C_PORT_LINK_STATE		25
+#define USB_PORT_FEAT_C_PORT_CONFIG_ERROR	26
+#define USB_PORT_FEAT_REMOTE_WAKE_MASK		27
+#define USB_PORT_FEAT_BH_PORT_RESET		28
+#define USB_PORT_FEAT_C_BH_PORT_RESET		29
+#define USB_PORT_FEAT_FORCE_LINKPM_ACCEPT	30
+
+#define USB_PORT_LPM_TIMEOUT(p)			(((p) & 0xff) << 8)
+
+/* USB 3.0 hub remote wake mask bits, see table 10-14 */
+#define USB_PORT_FEAT_REMOTE_WAKE_CONNECT	(1 << 8)
+#define USB_PORT_FEAT_REMOTE_WAKE_DISCONNECT	(1 << 9)
+#define USB_PORT_FEAT_REMOTE_WAKE_OVER_CURRENT	(1 << 10)
+
+/*
+ * Hub Status and Hub Change results
+ * See USB 2.0 spec Table 11-19 and Table 11-20
+ */
+struct usb_port_status {
+	__le16 wPortStatus;
+	__le16 wPortChange;
+} __attribute__ ((packed));
+
+/*
+ * wPortStatus bit field
+ * See USB 2.0 spec Table 11-21
+ */
+#define USB_PORT_STAT_CONNECTION	0x0001
+#define USB_PORT_STAT_ENABLE		0x0002
+#define USB_PORT_STAT_SUSPEND		0x0004
+#define USB_PORT_STAT_OVERCURRENT	0x0008
+#define USB_PORT_STAT_RESET		0x0010
+#define USB_PORT_STAT_L1		0x0020
+/* bits 6 to 7 are reserved */
+#define USB_PORT_STAT_POWER		0x0100
+#define USB_PORT_STAT_LOW_SPEED		0x0200
+#define USB_PORT_STAT_HIGH_SPEED        0x0400
+#define USB_PORT_STAT_TEST              0x0800
+#define USB_PORT_STAT_INDICATOR         0x1000
+/* bits 13 to 15 are reserved */
+
+/*
+ * Additions to wPortStatus bit field from USB 3.0
+ * See USB 3.0 spec Table 10-10
+ */
+#define USB_PORT_STAT_LINK_STATE	0x01e0
+#define USB_SS_PORT_STAT_POWER		0x0200
+#define USB_SS_PORT_STAT_SPEED		0x1c00
+#define USB_PORT_STAT_SPEED_5GBPS	0x0000
+/* Valid only if port is enabled */
+/* Bits that are the same from USB 2.0 */
+#define USB_SS_PORT_STAT_MASK (USB_PORT_STAT_CONNECTION |	    \
+				USB_PORT_STAT_ENABLE |	    \
+				USB_PORT_STAT_OVERCURRENT | \
+				USB_PORT_STAT_RESET)
+
+/*
+ * Definitions for PORT_LINK_STATE values
+ * (bits 5-8) in wPortStatus
+ */
+#define USB_SS_PORT_LS_U0		0x0000
+#define USB_SS_PORT_LS_U1		0x0020
+#define USB_SS_PORT_LS_U2		0x0040
+#define USB_SS_PORT_LS_U3		0x0060
+#define USB_SS_PORT_LS_SS_DISABLED	0x0080
+#define USB_SS_PORT_LS_RX_DETECT	0x00a0
+#define USB_SS_PORT_LS_SS_INACTIVE	0x00c0
+#define USB_SS_PORT_LS_POLLING		0x00e0
+#define USB_SS_PORT_LS_RECOVERY		0x0100
+#define USB_SS_PORT_LS_HOT_RESET	0x0120
+#define USB_SS_PORT_LS_COMP_MOD		0x0140
+#define USB_SS_PORT_LS_LOOPBACK		0x0160
+
+/*
+ * wPortChange bit field
+ * See USB 2.0 spec Table 11-22 and USB 2.0 LPM ECN Table-4.10
+ * Bits 0 to 5 shown, bits 6 to 15 are reserved
+ */
+#define USB_PORT_STAT_C_CONNECTION	0x0001
+#define USB_PORT_STAT_C_ENABLE		0x0002
+#define USB_PORT_STAT_C_SUSPEND		0x0004
+#define USB_PORT_STAT_C_OVERCURRENT	0x0008
+#define USB_PORT_STAT_C_RESET		0x0010
+#define USB_PORT_STAT_C_L1		0x0020
+/*
+ * USB 3.0 wPortChange bit fields
+ * See USB 3.0 spec Table 10-11
+ */
+#define USB_PORT_STAT_C_BH_RESET	0x0020
+#define USB_PORT_STAT_C_LINK_STATE	0x0040
+#define USB_PORT_STAT_C_CONFIG_ERROR	0x0080
+
+/*
+ * wHubCharacteristics (masks)
+ * See USB 2.0 spec Table 11-13, offset 3
+ */
+#define HUB_CHAR_LPSM		0x0003 /* Logical Power Switching Mode mask */
+#define HUB_CHAR_COMMON_LPSM	0x0000 /* All ports power control at once */
+#define HUB_CHAR_INDV_PORT_LPSM	0x0001 /* per-port power control */
+#define HUB_CHAR_NO_LPSM	0x0002 /* no power switching */
+
+#define HUB_CHAR_COMPOUND	0x0004 /* hub is part of a compound device */
+
+#define HUB_CHAR_OCPM		0x0018 /* Over-Current Protection Mode mask */
+#define HUB_CHAR_COMMON_OCPM	0x0000 /* All ports Over-Current reporting */
+#define HUB_CHAR_INDV_PORT_OCPM	0x0008 /* per-port Over-current reporting */
+#define HUB_CHAR_NO_OCPM	0x0010 /* No Over-current Protection support */
+
+#define HUB_CHAR_TTTT		0x0060 /* TT Think Time mask */
+#define HUB_CHAR_PORTIND	0x0080 /* per-port indicators (LEDs) */
+
+struct usb_hub_status {
+	__le16 wHubStatus;
+	__le16 wHubChange;
+} __attribute__ ((packed));
+
+/*
+ * Hub Status & Hub Change bit masks
+ * See USB 2.0 spec Table 11-19 and Table 11-20
+ * Bits 0 and 1 for wHubStatus and wHubChange
+ * Bits 2 to 15 are reserved for both
+ */
+#define HUB_STATUS_LOCAL_POWER	0x0001
+#define HUB_STATUS_OVERCURRENT	0x0002
+#define HUB_CHANGE_LOCAL_POWER	0x0001
+#define HUB_CHANGE_OVERCURRENT	0x0002
+
+
+/*
+ * Hub descriptor
+ * See USB 2.0 spec Table 11-13
+ */
+
+#define USB_DT_HUB			(USB_TYPE_CLASS | 0x09)
+#define USB_DT_SS_HUB			(USB_TYPE_CLASS | 0x0a)
+#define USB_DT_HUB_NONVAR_SIZE		7
+#define USB_DT_SS_HUB_SIZE              12
+
+/*
+ * Hub Device descriptor
+ * USB Hub class device protocols
+ */
+
+#define USB_HUB_PR_FS		0 /* Full speed hub */
+#define USB_HUB_PR_HS_NO_TT	0 /* Hi-speed hub without TT */
+#define USB_HUB_PR_HS_SINGLE_TT	1 /* Hi-speed hub with single TT */
+#define USB_HUB_PR_HS_MULTI_TT	2 /* Hi-speed hub with multiple TT */
+#define USB_HUB_PR_SS		3 /* Super speed hub */
+
+struct usb_hub_descriptor {
+	__u8  bDescLength;
+	__u8  bDescriptorType;
+	__u8  bNbrPorts;
+	__le16 wHubCharacteristics;
+	__u8  bPwrOn2PwrGood;
+	__u8  bHubContrCurrent;
+
+	/* 2.0 and 3.0 hubs differ here */
+	union {
+		struct {
+			/* add 1 bit for hub status change; round to bytes */
+			__u8  DeviceRemovable[(USB_MAXCHILDREN + 1 + 7) / 8];
+			__u8  PortPwrCtrlMask[(USB_MAXCHILDREN + 1 + 7) / 8];
+		}  __attribute__ ((packed)) hs;
+
+		struct {
+			__u8 bHubHdrDecLat;
+			__le16 wHubDelay;
+			__le16 DeviceRemovable;
+		}  __attribute__ ((packed)) ss;
+	} u;
+} __attribute__ ((packed));
+
+/* port indicator status selectors, tables 11-7 and 11-25 */
+#define HUB_LED_AUTO	0
+#define HUB_LED_AMBER	1
+#define HUB_LED_GREEN	2
+#define HUB_LED_OFF	3
+
+enum hub_led_mode {
+	INDICATOR_AUTO = 0,
+	INDICATOR_CYCLE,
+	/* software blinks for attention:  software, hardware, reserved */
+	INDICATOR_GREEN_BLINK, INDICATOR_GREEN_BLINK_OFF,
+	INDICATOR_AMBER_BLINK, INDICATOR_AMBER_BLINK_OFF,
+	INDICATOR_ALT_BLINK, INDICATOR_ALT_BLINK_OFF
+} __attribute__ ((packed));
+
+/* Transaction Translator Think Times, in bits */
+#define HUB_TTTT_8_BITS		0x00
+#define HUB_TTTT_16_BITS	0x20
+#define HUB_TTTT_24_BITS	0x40
+#define HUB_TTTT_32_BITS	0x60
+
+#endif /* __LINUX_CH11_H */
diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
new file mode 100644
index 0000000..f5f5c7d
--- /dev/null
+++ b/include/linux/usb/hcd.h
@@ -0,0 +1,672 @@
+/*
+ * Copyright (c) 2001-2002 by David Brownell
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef __USB_CORE_HCD_H
+#define __USB_CORE_HCD_H
+
+#ifdef __KERNEL__
+
+#include <linux/rwsem.h>
+
+#define MAX_TOPO_LEVEL		6
+
+/* This file contains declarations of usbcore internals that are mostly
+ * used or exposed by Host Controller Drivers.
+ */
+
+/*
+ * USB Packet IDs (PIDs)
+ */
+#define USB_PID_EXT			0xf0	/* USB 2.0 LPM ECN */
+#define USB_PID_OUT			0xe1
+#define USB_PID_ACK			0xd2
+#define USB_PID_DATA0			0xc3
+#define USB_PID_PING			0xb4	/* USB 2.0 */
+#define USB_PID_SOF			0xa5
+#define USB_PID_NYET			0x96	/* USB 2.0 */
+#define USB_PID_DATA2			0x87	/* USB 2.0 */
+#define USB_PID_SPLIT			0x78	/* USB 2.0 */
+#define USB_PID_IN			0x69
+#define USB_PID_NAK			0x5a
+#define USB_PID_DATA1			0x4b
+#define USB_PID_PREAMBLE		0x3c	/* Token mode */
+#define USB_PID_ERR			0x3c	/* USB 2.0: handshake mode */
+#define USB_PID_SETUP			0x2d
+#define USB_PID_STALL			0x1e
+#define USB_PID_MDATA			0x0f	/* USB 2.0 */
+
+/*-------------------------------------------------------------------------*/
+
+/*
+ * USB Host Controller Driver (usb_hcd) framework
+ *
+ * Since "struct usb_bus" is so thin, you can't share much code in it.
+ * This framework is a layer over that, and should be more sharable.
+ *
+ * @authorized_default: Specifies if new devices are authorized to
+ *                      connect by default or they require explicit
+ *                      user space authorization; this bit is settable
+ *                      through /sys/class/usb_host/X/authorized_default.
+ *                      Otherwise it is RO, so we don't lock to r/w it.
+ */
+
+/*-------------------------------------------------------------------------*/
+
+struct usb_hcd {
+
+	/*
+	 * housekeeping
+	 */
+	struct usb_bus		self;		/* hcd is-a bus */
+	struct kref		kref;		/* reference counter */
+
+	const char		*product_desc;	/* product/vendor string */
+	int			speed;		/* Speed for this roothub.
+						 * May be different from
+						 * hcd->driver->flags & HCD_MASK
+						 */
+	char			irq_descr[24];	/* driver + bus # */
+
+	struct timer_list	rh_timer;	/* drives root-hub polling */
+	struct urb		*status_urb;	/* the current status urb */
+#ifdef CONFIG_PM_RUNTIME
+	struct work_struct	wakeup_work;	/* for remote wakeup */
+#endif
+
+	/*
+	 * hardware info/state
+	 */
+	const struct hc_driver	*driver;	/* hw-specific hooks */
+
+	/*
+	 * OTG and some Host controllers need software interaction with phys;
+	 * other external phys should be software-transparent
+	 */
+	struct usb_phy	*phy;
+
+	/* Flags that need to be manipulated atomically because they can
+	 * change while the host controller is running.  Always use
+	 * set_bit() or clear_bit() to change their values.
+	 */
+	unsigned long		flags;
+#define HCD_FLAG_HW_ACCESSIBLE		0	/* at full power */
+#define HCD_FLAG_POLL_RH		2	/* poll for rh status? */
+#define HCD_FLAG_POLL_PENDING		3	/* status has changed? */
+#define HCD_FLAG_WAKEUP_PENDING		4	/* root hub is resuming? */
+#define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+#define HCD_FLAG_DEAD			6	/* controller has died? */
+
+	/* The flags can be tested using these macros; they are likely to
+	 * be slightly faster than test_bit().
+	 */
+#define HCD_HW_ACCESSIBLE(hcd)	((hcd)->flags & (1U << HCD_FLAG_HW_ACCESSIBLE))
+#define HCD_POLL_RH(hcd)	((hcd)->flags & (1U << HCD_FLAG_POLL_RH))
+#define HCD_POLL_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_POLL_PENDING))
+#define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+#define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+#define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
+
+	/* Flags that get set only during HCD registration or removal. */
+	unsigned		rh_registered:1;/* is root hub registered? */
+	unsigned		rh_pollable:1;	/* may we poll the root hub? */
+	unsigned		msix_enabled:1;	/* driver has MSI-X enabled? */
+
+	/* The next flag is a stopgap, to be removed when all the HCDs
+	 * support the new root-hub polling mechanism. */
+	unsigned		uses_new_polling:1;
+	unsigned		wireless:1;	/* Wireless USB HCD */
+	unsigned		authorized_default:1;
+	unsigned		has_tt:1;	/* Integrated TT in root hub */
+
+	unsigned int		irq;		/* irq allocated */
+	void __iomem		*regs;		/* device memory/io */
+	resource_size_t		rsrc_start;	/* memory/io resource start */
+	resource_size_t		rsrc_len;	/* memory/io resource length */
+	unsigned		power_budget;	/* in mA, 0 = no limit */
+
+	/* bandwidth_mutex should be taken before adding or removing
+	 * any new bus bandwidth constraints:
+	 *   1. Before adding a configuration for a new device.
+	 *   2. Before removing the configuration to put the device into
+	 *      the addressed state.
+	 *   3. Before selecting a different configuration.
+	 *   4. Before selecting an alternate interface setting.
+	 *
+	 * bandwidth_mutex should be dropped after a successful control message
+	 * to the device, or resetting the bandwidth after a failed attempt.
+	 */
+	struct mutex		*bandwidth_mutex;
+	struct usb_hcd		*shared_hcd;
+	struct usb_hcd		*primary_hcd;
+
+
+#define HCD_BUFFER_POOLS	4
+	struct dma_pool		*pool[HCD_BUFFER_POOLS];
+
+	int			state;
+#	define	__ACTIVE		0x01
+#	define	__SUSPEND		0x04
+#	define	__TRANSIENT		0x80
+
+#	define	HC_STATE_HALT		0
+#	define	HC_STATE_RUNNING	(__ACTIVE)
+#	define	HC_STATE_QUIESCING	(__SUSPEND|__TRANSIENT|__ACTIVE)
+#	define	HC_STATE_RESUMING	(__SUSPEND|__TRANSIENT)
+#	define	HC_STATE_SUSPENDED	(__SUSPEND)
+
+#define	HC_IS_RUNNING(state) ((state) & __ACTIVE)
+#define	HC_IS_SUSPENDED(state) ((state) & __SUSPEND)
+
+	/* more shared queuing code would be good; it should support
+	 * smarter scheduling, handle transaction translators, etc;
+	 * input size of periodic table to an interrupt scheduler.
+	 * (ohci 32, uhci 1024, ehci 256/512/1024).
+	 */
+
+	/* The HC driver's private data is stored at the end of
+	 * this structure.
+	 */
+	unsigned long hcd_priv[0]
+			__attribute__ ((aligned(sizeof(s64))));
+};
+
+/* 2.4 does this a bit differently ... */
+static inline struct usb_bus *hcd_to_bus(struct usb_hcd *hcd)
+{
+	return &hcd->self;
+}
+
+static inline struct usb_hcd *bus_to_hcd(struct usb_bus *bus)
+{
+	return container_of(bus, struct usb_hcd, self);
+}
+
+struct hcd_timeout {	/* timeouts we allocate */
+	struct list_head	timeout_list;
+	struct timer_list	timer;
+};
+
+/*-------------------------------------------------------------------------*/
+
+
+struct hc_driver {
+	const char	*description;	/* "ehci-hcd" etc */
+	const char	*product_desc;	/* product/vendor string */
+	size_t		hcd_priv_size;	/* size of private data */
+
+	/* irq handler */
+	irqreturn_t	(*irq) (struct usb_hcd *hcd);
+
+	int	flags;
+#define	HCD_MEMORY	0x0001		/* HC regs use memory (else I/O) */
+#define	HCD_LOCAL_MEM	0x0002		/* HC needs local memory */
+#define	HCD_SHARED	0x0004		/* Two (or more) usb_hcds share HW */
+#define	HCD_USB11	0x0010		/* USB 1.1 */
+#define	HCD_USB2	0x0020		/* USB 2.0 */
+#define	HCD_USB3	0x0040		/* USB 3.0 */
+#define	HCD_MASK	0x0070
+
+	/* called to init HCD and root hub */
+	int	(*reset) (struct usb_hcd *hcd);
+	int	(*start) (struct usb_hcd *hcd);
+
+	/* NOTE:  these suspend/resume calls relate to the HC as
+	 * a whole, not just the root hub; they're for PCI bus glue.
+	 */
+	/* called after suspending the hub, before entering D3 etc */
+	int	(*pci_suspend)(struct usb_hcd *hcd, bool do_wakeup);
+
+	/* called after entering D0 (etc), before resuming the hub */
+	int	(*pci_resume)(struct usb_hcd *hcd, bool hibernated);
+
+	/* cleanly make HCD stop writing memory and doing I/O */
+	void	(*stop) (struct usb_hcd *hcd);
+
+	/* shutdown HCD */
+	void	(*shutdown) (struct usb_hcd *hcd);
+
+	/* return current frame number */
+	int	(*get_frame_number) (struct usb_hcd *hcd);
+
+	/* manage i/o requests, device state */
+	int	(*urb_enqueue)(struct usb_hcd *hcd,
+				struct urb *urb, gfp_t mem_flags);
+	int	(*urb_dequeue)(struct usb_hcd *hcd,
+				struct urb *urb, int status);
+
+	/*
+	 * (optional) these hooks allow an HCD to override the default DMA
+	 * mapping and unmapping routines.  In general, they shouldn't be
+	 * necessary unless the host controller has special DMA requirements,
+	 * such as alignment constraints.  If these are not specified, the
+	 * general usb_hcd_(un)?map_urb_for_dma functions will be used instead
+	 * (and it may be a good idea to call these functions in your HCD
+	 * implementation)
+	 */
+	int	(*map_urb_for_dma)(struct usb_hcd *hcd, struct urb *urb,
+				   gfp_t mem_flags);
+	void    (*unmap_urb_for_dma)(struct usb_hcd *hcd, struct urb *urb);
+
+	/* hw synch, freeing endpoint resources that urb_dequeue can't */
+	void	(*endpoint_disable)(struct usb_hcd *hcd,
+			struct usb_host_endpoint *ep);
+
+	/* (optional) reset any endpoint state such as sequence number
+	   and current window */
+	void	(*endpoint_reset)(struct usb_hcd *hcd,
+			struct usb_host_endpoint *ep);
+
+	/* root hub support */
+	int	(*hub_status_data) (struct usb_hcd *hcd, char *buf);
+	int	(*hub_control) (struct usb_hcd *hcd,
+				u16 typeReq, u16 wValue, u16 wIndex,
+				char *buf, u16 wLength);
+	int	(*bus_suspend)(struct usb_hcd *);
+	int	(*bus_resume)(struct usb_hcd *);
+	int	(*start_port_reset)(struct usb_hcd *, unsigned port_num);
+
+		/* force handover of high-speed port to full-speed companion */
+	void	(*relinquish_port)(struct usb_hcd *, int);
+		/* has a port been handed over to a companion? */
+	int	(*port_handed_over)(struct usb_hcd *, int);
+
+		/* CLEAR_TT_BUFFER completion callback */
+	void	(*clear_tt_buffer_complete)(struct usb_hcd *,
+				struct usb_host_endpoint *);
+
+	/* xHCI specific functions */
+		/* Called by usb_alloc_dev to alloc HC device structures */
+	int	(*alloc_dev)(struct usb_hcd *, struct usb_device *);
+		/* Called by usb_disconnect to free HC device structures */
+	void	(*free_dev)(struct usb_hcd *, struct usb_device *);
+	/* Change a group of bulk endpoints to support multiple stream IDs */
+	int	(*alloc_streams)(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		unsigned int num_streams, gfp_t mem_flags);
+	/* Reverts a group of bulk endpoints back to not using stream IDs.
+	 * Can fail if we run out of memory.
+	 */
+	int	(*free_streams)(struct usb_hcd *hcd, struct usb_device *udev,
+		struct usb_host_endpoint **eps, unsigned int num_eps,
+		gfp_t mem_flags);
+
+	/* Bandwidth computation functions */
+	/* Note that add_endpoint() can only be called once per endpoint before
+	 * check_bandwidth() or reset_bandwidth() must be called.
+	 * drop_endpoint() can only be called once per endpoint also.
+	 * A call to xhci_drop_endpoint() followed by a call to
+	 * xhci_add_endpoint() will add the endpoint to the schedule with
+	 * possibly new parameters denoted by a different endpoint descriptor
+	 * in usb_host_endpoint.  A call to xhci_add_endpoint() followed by a
+	 * call to xhci_drop_endpoint() is not allowed.
+	 */
+		/* Allocate endpoint resources and add them to a new schedule */
+	int	(*add_endpoint)(struct usb_hcd *, struct usb_device *,
+				struct usb_host_endpoint *);
+		/* Drop an endpoint from a new schedule */
+	int	(*drop_endpoint)(struct usb_hcd *, struct usb_device *,
+				 struct usb_host_endpoint *);
+		/* Check that a new hardware configuration, set using
+		 * endpoint_enable and endpoint_disable, does not exceed bus
+		 * bandwidth.  This must be called before any set configuration
+		 * or set interface requests are sent to the device.
+		 */
+	int	(*check_bandwidth)(struct usb_hcd *, struct usb_device *);
+		/* Reset the device schedule to the last known good schedule,
+		 * which was set from a previous successful call to
+		 * check_bandwidth().  This reverts any add_endpoint() and
+		 * drop_endpoint() calls since that last successful call.
+		 * Used for when a check_bandwidth() call fails due to resource
+		 * or bandwidth constraints.
+		 */
+	void	(*reset_bandwidth)(struct usb_hcd *, struct usb_device *);
+		/* Returns the hardware-chosen device address */
+	int	(*address_device)(struct usb_hcd *, struct usb_device *udev);
+		/* Notifies the HCD after a hub descriptor is fetched.
+		 * Will block.
+		 */
+	int	(*update_hub_device)(struct usb_hcd *, struct usb_device *hdev,
+			struct usb_tt *tt, gfp_t mem_flags);
+	int	(*reset_device)(struct usb_hcd *, struct usb_device *);
+		/* Notifies the HCD after a device is connected and its
+		 * address is set
+		 */
+	int	(*update_device)(struct usb_hcd *, struct usb_device *);
+	int	(*set_usb2_hw_lpm)(struct usb_hcd *, struct usb_device *, int);
+	/* USB 3.0 Link Power Management */
+		/* Returns the USB3 hub-encoded value for the U1/U2 timeout. */
+	int	(*enable_usb3_lpm_timeout)(struct usb_hcd *,
+			struct usb_device *, enum usb3_link_state state);
+		/* The xHCI host controller can still fail the command to
+		 * disable the LPM timeouts, so this can return an error code.
+		 */
+	int	(*disable_usb3_lpm_timeout)(struct usb_hcd *,
+			struct usb_device *, enum usb3_link_state state);
+	int	(*find_raw_port_number)(struct usb_hcd *, int);
+};
+
+extern int usb_hcd_link_urb_to_ep(struct usb_hcd *hcd, struct urb *urb);
+extern int usb_hcd_check_unlink_urb(struct usb_hcd *hcd, struct urb *urb,
+		int status);
+extern void usb_hcd_unlink_urb_from_ep(struct usb_hcd *hcd, struct urb *urb);
+
+extern int usb_hcd_submit_urb(struct urb *urb, gfp_t mem_flags);
+extern int usb_hcd_unlink_urb(struct urb *urb, int status);
+extern void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb,
+		int status);
+extern int usb_hcd_map_urb_for_dma(struct usb_hcd *hcd, struct urb *urb,
+		gfp_t mem_flags);
+extern void usb_hcd_unmap_urb_setup_for_dma(struct usb_hcd *, struct urb *);
+extern void usb_hcd_unmap_urb_for_dma(struct usb_hcd *, struct urb *);
+extern void usb_hcd_flush_endpoint(struct usb_device *udev,
+		struct usb_host_endpoint *ep);
+extern void usb_hcd_disable_endpoint(struct usb_device *udev,
+		struct usb_host_endpoint *ep);
+extern void usb_hcd_reset_endpoint(struct usb_device *udev,
+		struct usb_host_endpoint *ep);
+extern void usb_hcd_synchronize_unlinks(struct usb_device *udev);
+extern int usb_hcd_alloc_bandwidth(struct usb_device *udev,
+		struct usb_host_config *new_config,
+		struct usb_host_interface *old_alt,
+		struct usb_host_interface *new_alt);
+extern int usb_hcd_get_frame_number(struct usb_device *udev);
+
+extern struct usb_hcd *usb_create_hcd(const struct hc_driver *driver,
+		struct device *dev, const char *bus_name);
+extern struct usb_hcd *usb_create_shared_hcd(const struct hc_driver *driver,
+		struct device *dev, const char *bus_name,
+		struct usb_hcd *shared_hcd);
+extern struct usb_hcd *usb_get_hcd(struct usb_hcd *hcd);
+extern void usb_put_hcd(struct usb_hcd *hcd);
+extern int usb_hcd_is_primary_hcd(struct usb_hcd *hcd);
+extern int usb_add_hcd(struct usb_hcd *hcd,
+		unsigned int irqnum, unsigned long irqflags);
+extern void usb_remove_hcd(struct usb_hcd *hcd);
+extern int usb_hcd_find_raw_port_number(struct usb_hcd *hcd, int port1);
+
+struct platform_device;
+extern void usb_hcd_platform_shutdown(struct platform_device *dev);
+
+#ifdef CONFIG_PCI
+struct pci_dev;
+struct pci_device_id;
+extern int usb_hcd_pci_probe(struct pci_dev *dev,
+				const struct pci_device_id *id);
+extern void usb_hcd_pci_remove(struct pci_dev *dev);
+extern void usb_hcd_pci_shutdown(struct pci_dev *dev);
+
+#ifdef CONFIG_PM_SLEEP
+extern const struct dev_pm_ops usb_hcd_pci_pm_ops;
+#endif
+#endif /* CONFIG_PCI */
+
+/* pci-ish (pdev null is ok) buffer alloc/mapping support */
+int hcd_buffer_create(struct usb_hcd *hcd);
+void hcd_buffer_destroy(struct usb_hcd *hcd);
+
+void *hcd_buffer_alloc(struct usb_bus *bus, size_t size,
+	gfp_t mem_flags, dma_addr_t *dma);
+void hcd_buffer_free(struct usb_bus *bus, size_t size,
+	void *addr, dma_addr_t dma);
+
+/* generic bus glue, needed for host controllers that don't use PCI */
+extern irqreturn_t usb_hcd_irq(int irq, void *__hcd);
+
+extern void usb_hc_died(struct usb_hcd *hcd);
+extern void usb_hcd_poll_rh_status(struct usb_hcd *hcd);
+extern void usb_wakeup_notification(struct usb_device *hdev,
+		unsigned int portnum);
+
+extern void usb_hcd_start_port_resume(struct usb_bus *bus, int portnum);
+extern void usb_hcd_end_port_resume(struct usb_bus *bus, int portnum);
+
+/* The D0/D1 toggle bits ... USE WITH CAUTION (they're almost hcd-internal) */
+#define usb_gettoggle(dev, ep, out) (((dev)->toggle[out] >> (ep)) & 1)
+#define	usb_dotoggle(dev, ep, out)  ((dev)->toggle[out] ^= (1 << (ep)))
+#define usb_settoggle(dev, ep, out, bit) \
+		((dev)->toggle[out] = ((dev)->toggle[out] & ~(1 << (ep))) | \
+		 ((bit) << (ep)))
+
+/* -------------------------------------------------------------------------- */
+
+/* Enumeration is only for the hub driver, or HCD virtual root hubs */
+extern struct usb_device *usb_alloc_dev(struct usb_device *parent,
+					struct usb_bus *, unsigned port);
+extern int usb_new_device(struct usb_device *dev);
+extern void usb_disconnect(struct usb_device **);
+
+extern int usb_get_configuration(struct usb_device *dev);
+extern void usb_destroy_configuration(struct usb_device *dev);
+
+/*-------------------------------------------------------------------------*/
+
+/*
+ * HCD Root Hub support
+ */
+
+#include <linux/usb/ch11.h>
+
+/*
+ * As of USB 2.0, full/low speed devices are segregated into trees.
+ * One type grows from USB 1.1 host controllers (OHCI, UHCI etc).
+ * The other type grows from high speed hubs when they connect to
+ * full/low speed devices using "Transaction Translators" (TTs).
+ *
+ * TTs should only be known to the hub driver, and high speed bus
+ * drivers (only EHCI for now).  They affect periodic scheduling and
+ * sometimes control/bulk error recovery.
+ */
+
+struct usb_device;
+
+struct usb_tt {
+	struct usb_device	*hub;	/* upstream highspeed hub */
+	int			multi;	/* true means one TT per port */
+	unsigned		think_time;	/* think time in ns */
+
+	/* for control/bulk error recovery (CLEAR_TT_BUFFER) */
+	spinlock_t		lock;
+	struct list_head	clear_list;	/* of usb_tt_clear */
+	struct work_struct	clear_work;
+};
+
+struct usb_tt_clear {
+	struct list_head	clear_list;
+	unsigned		tt;
+	u16			devinfo;
+	struct usb_hcd		*hcd;
+	struct usb_host_endpoint	*ep;
+};
+
+extern int usb_hub_clear_tt_buffer(struct urb *urb);
+extern void usb_ep0_reinit(struct usb_device *);
+
+/* (shifted) direction/type/recipient from the USB 2.0 spec, table 9.2 */
+#define DeviceRequest \
+	((USB_DIR_IN|USB_TYPE_STANDARD|USB_RECIP_DEVICE)<<8)
+#define DeviceOutRequest \
+	((USB_DIR_OUT|USB_TYPE_STANDARD|USB_RECIP_DEVICE)<<8)
+
+#define InterfaceRequest \
+	((USB_DIR_IN|USB_TYPE_STANDARD|USB_RECIP_INTERFACE)<<8)
+
+#define EndpointRequest \
+	((USB_DIR_IN|USB_TYPE_STANDARD|USB_RECIP_INTERFACE)<<8)
+#define EndpointOutRequest \
+	((USB_DIR_OUT|USB_TYPE_STANDARD|USB_RECIP_INTERFACE)<<8)
+
+/* class requests from the USB 2.0 hub spec, table 11-15 */
+/* GetBusState and SetHubDescriptor are optional, omitted */
+#define ClearHubFeature		(0x2000 | USB_REQ_CLEAR_FEATURE)
+#define ClearPortFeature	(0x2300 | USB_REQ_CLEAR_FEATURE)
+#define GetHubDescriptor	(0xa000 | USB_REQ_GET_DESCRIPTOR)
+#define GetHubStatus		(0xa000 | USB_REQ_GET_STATUS)
+#define GetPortStatus		(0xa300 | USB_REQ_GET_STATUS)
+#define SetHubFeature		(0x2000 | USB_REQ_SET_FEATURE)
+#define SetPortFeature		(0x2300 | USB_REQ_SET_FEATURE)
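+
+/* Example (illustrative): root-hub emulation builds a GetPortStatus
+ * request for port 'p' as
+ *	typeReq = GetPortStatus; wValue = 0; wIndex = p; wLength = 4;
+ * and dispatches it through hcd->driver->hub_control().
+ */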
+
+
+/*-------------------------------------------------------------------------*/
+
+/* class requests from USB 3.0 hub spec, table 10-5 */
+#define SetHubDepth		(0x3000 | HUB_SET_DEPTH)
+#define GetPortErrorCount	(0x8000 | HUB_GET_PORT_ERR_COUNT)
+
+/*
+ * Generic bandwidth allocation constants/support
+ */
+#define FRAME_TIME_USECS	1000L
+#define BitTime(bytecount) (7 * 8 * bytecount / 6) /* with integer truncation */
+		/* Trying not to use worst-case bit-stuffing
+		 * of (7/6 * 8 * bytecount) = 9.33 * bytecount */
+		/* bytecount = data payload byte count */
+
+#define NS_TO_US(ns)	((ns + 500L) / 1000L)
+			/* convert & round nanoseconds to microseconds */
+
+
+/*
+ * Full/low speed bandwidth allocation constants/support.
+ */
+#define BW_HOST_DELAY	1000L		/* nanoseconds */
+#define BW_HUB_LS_SETUP	333L		/* nanoseconds */
+			/* 4 full-speed bit times (est.) */
+
+#define FRAME_TIME_BITS			12000L	/* frame = 1 millisecond */
+#define FRAME_TIME_MAX_BITS_ALLOC	(90L * FRAME_TIME_BITS / 100L)
+#define FRAME_TIME_MAX_USECS_ALLOC	(90L * FRAME_TIME_USECS / 100L)
+
+/*
+ * Ceiling [nano/micro]seconds (typical) for that many bytes at high speed
+ * ISO is a bit less, no ACK ... from USB 2.0 spec, 5.11.3 (and needed
+ * to preallocate bandwidth)
+ */
+#define USB2_HOST_DELAY	5	/* nsec, guess */
+#define HS_NSECS(bytes) (((55 * 8 * 2083) \
+	+ (2083UL * (3 + BitTime(bytes))))/1000 \
+	+ USB2_HOST_DELAY)
+#define HS_NSECS_ISO(bytes) (((38 * 8 * 2083) \
+	+ (2083UL * (3 + BitTime(bytes))))/1000 \
+	+ USB2_HOST_DELAY)
+#define HS_USECS(bytes)		NS_TO_US(HS_NSECS(bytes))
+#define HS_USECS_ISO(bytes)	NS_TO_US(HS_NSECS_ISO(bytes))
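+
+/* Worked example (illustrative): a 512-byte high speed packet gives
+ * BitTime(512) = 7*8*512/6 = 4778 bit times, so
+ * HS_NSECS(512) = ((55*8*2083) + 2083*(3 + 4778))/1000 + 5 = 10880 ns
+ * and HS_USECS(512) = NS_TO_US(10880) = 11 us of bus time.
+ */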
+
+extern long usb_calc_bus_time(int speed, int is_input,
+			int isoc, int bytecount);
+
+/*-------------------------------------------------------------------------*/
+
+extern void usb_set_device_state(struct usb_device *udev,
+		enum usb_device_state new_state);
+
+/*-------------------------------------------------------------------------*/
+
+/* exported only within usbcore */
+
+extern struct list_head usb_bus_list;
+extern struct mutex usb_bus_list_lock;
+extern wait_queue_head_t usb_kill_urb_queue;
+
+extern int usb_find_interface_driver(struct usb_device *dev,
+	struct usb_interface *interface);
+
+#define usb_endpoint_out(ep_dir)	(!((ep_dir) & USB_DIR_IN))
+
+#ifdef CONFIG_PM
+extern void usb_root_hub_lost_power(struct usb_device *rhdev);
+extern int hcd_bus_suspend(struct usb_device *rhdev, pm_message_t msg);
+extern int hcd_bus_resume(struct usb_device *rhdev, pm_message_t msg);
+#endif /* CONFIG_PM */
+
+#ifdef CONFIG_PM_RUNTIME
+extern void usb_hcd_resume_root_hub(struct usb_hcd *hcd);
+#else
+static inline void usb_hcd_resume_root_hub(struct usb_hcd *hcd)
+{
+	return;
+}
+#endif /* CONFIG_PM_RUNTIME */
+
+/*-------------------------------------------------------------------------*/
+
+#if defined(CONFIG_USB_MON) || defined(CONFIG_USB_MON_MODULE)
+
+struct usb_mon_operations {
+	void (*urb_submit)(struct usb_bus *bus, struct urb *urb);
+	void (*urb_submit_error)(struct usb_bus *bus, struct urb *urb, int err);
+	void (*urb_complete)(struct usb_bus *bus, struct urb *urb, int status);
+	/* void (*urb_unlink)(struct usb_bus *bus, struct urb *urb); */
+};
+
+extern struct usb_mon_operations *mon_ops;
+
+static inline void usbmon_urb_submit(struct usb_bus *bus, struct urb *urb)
+{
+	if (bus->monitored)
+		(*mon_ops->urb_submit)(bus, urb);
+}
+
+static inline void usbmon_urb_submit_error(struct usb_bus *bus, struct urb *urb,
+    int error)
+{
+	if (bus->monitored)
+		(*mon_ops->urb_submit_error)(bus, urb, error);
+}
+
+static inline void usbmon_urb_complete(struct usb_bus *bus, struct urb *urb,
+		int status)
+{
+	if (bus->monitored)
+		(*mon_ops->urb_complete)(bus, urb, status);
+}
+
+int usb_mon_register(struct usb_mon_operations *ops);
+void usb_mon_deregister(void);
+
+#else
+
+static inline void usbmon_urb_submit(struct usb_bus *bus, struct urb *urb) {}
+static inline void usbmon_urb_submit_error(struct usb_bus *bus, struct urb *urb,
+    int error) {}
+static inline void usbmon_urb_complete(struct usb_bus *bus, struct urb *urb,
+		int status) {}
+
+#endif /* CONFIG_USB_MON || CONFIG_USB_MON_MODULE */
+
+/*-------------------------------------------------------------------------*/
+
+/* random stuff */
+
+#define	RUN_CONTEXT (in_irq() ? "in_irq" \
+		: (in_interrupt() ? "in_interrupt" : "can sleep"))
+
+
+/* This rwsem is for use only by the hub driver and ehci-hcd.
+ * Nobody else should touch it.
+ */
+extern struct rw_semaphore ehci_cf_port_reset_rwsem;
+
+/* Keep track of which host controller drivers are loaded */
+#define USB_UHCI_LOADED		0
+#define USB_OHCI_LOADED		1
+#define USB_EHCI_LOADED		2
+extern unsigned long usb_hcds_loaded;
+
+#endif /* __KERNEL__ */
+
+#endif /* __USB_CORE_HCD_H */
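
For reference, the endpoint/bandwidth hooks declared earlier in this header
are meant to be driven in a fixed order.  A minimal sketch of that calling
convention (hypothetical caller, error handling trimmed):

	static int reconfigure_eps(struct usb_hcd *hcd, struct usb_device *udev,
				   struct usb_host_endpoint *old_ep,
				   struct usb_host_endpoint *new_ep)
	{
		int ret;

		/* stage the change in the new schedule */
		ret = hcd->driver->drop_endpoint(hcd, udev, old_ep);
		if (!ret)
			ret = hcd->driver->add_endpoint(hcd, udev, new_ep);

		/* commit before any Set-Configuration/Set-Interface request */
		if (!ret)
			ret = hcd->driver->check_bandwidth(hcd, udev);

		/* on failure, revert to the last known good schedule */
		if (ret)
			hcd->driver->reset_bandwidth(hcd, udev);

		return ret;
	}
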
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 7/7] USB: Modify the xHCI to adapt to the uBoot code base
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
                   ` (5 preceding siblings ...)
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 6/7] USB: Add xhci linux kernel host driver Dan Murphy
@ 2013-07-02 15:15 ` Dan Murphy
  2013-07-02 21:43 ` [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Wolfgang Denk
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:15 UTC (permalink / raw)
  To: u-boot

Modify the xHCI code backported from the Linux kernel with __UBOOT__
#ifdefs to adapt it to the uBoot code base.

DMA and radix-tree support still need investigating.

Signed-off-by: Dan Murphy <dmurphy@ti.com>
---
 common/usb.c                      |    1 +
 common/usb_hub.c                  |    1 +
 drivers/usb/host/Makefile         |    7 +
 drivers/usb/host/xhci-ext-caps.h  |   12 ++
 drivers/usb/host/xhci-hub.c       |   19 +-
 drivers/usb/host/xhci-mem.c       |  113 ++++++++++--
 drivers/usb/host/xhci-pci.c       |  356 -------------------------------------
 drivers/usb/host/xhci-plat.c      |    8 +-
 drivers/usb/host/xhci-ring.c      |   52 +++++-
 drivers/usb/host/xhci.c           |   82 +++++++--
 drivers/usb/host/xhci.h           |   20 ++-
 include/asm-generic/scatterlist.h |   34 ++++
 include/configs/omap5_common.h    |    1 +
 include/linux/usb/ch11.h          |   13 ++
 include/linux/usb/hcd.h           |   10 +-
 include/linux/usb/linux-compat.h  |   47 +++++
 include/linux/usb/usb-compat.h    |    4 +-
 include/usb.h                     |   34 +---
 18 files changed, 382 insertions(+), 432 deletions(-)
 delete mode 100644 drivers/usb/host/xhci-pci.c
 create mode 100644 include/asm-generic/scatterlist.h

diff --git a/common/usb.c b/common/usb.c
index 55fff5b..99aead8 100644
--- a/common/usb.c
+++ b/common/usb.c
@@ -53,6 +53,7 @@
 #include <asm/unaligned.h>
 
 #include <usb.h>
+#include <linux/usb/hcd.h>
 #ifdef CONFIG_4xx
 #include <asm/4xx_pci.h>
 #endif
diff --git a/common/usb_hub.c b/common/usb_hub.c
index 774ba63..e18e34d 100644
--- a/common/usb_hub.c
+++ b/common/usb_hub.c
@@ -49,6 +49,7 @@
 #include <asm/unaligned.h>
 
 #include <usb.h>
+#include <linux/usb/ch11.h> /* usb structure information */
 #ifdef CONFIG_4xx
 #include <asm/4xx_pci.h>
 #endif
diff --git a/drivers/usb/host/Makefile b/drivers/usb/host/Makefile
index 98f2a10..a483182 100644
--- a/drivers/usb/host/Makefile
+++ b/drivers/usb/host/Makefile
@@ -58,6 +58,12 @@ COBJS-$(CONFIG_USB_EHCI_SPEAR) += ehci-spear.o
 COBJS-$(CONFIG_USB_EHCI_TEGRA) += ehci-tegra.o
 COBJS-$(CONFIG_USB_EHCI_VCT) += ehci-vct.o
 
+COBJS-$(CONFIG_USB_XHCI) += xhci.o
+COBJS-$(CONFIG_USB_XHCI) += xhci-hub.o
+COBJS-$(CONFIG_USB_XHCI) += xhci-ring.o
+COBJS-$(CONFIG_USB_XHCI) += xhci-mem.o
+COBJS-$(CONFIG_USB_XHCI) += xhci-plat.o
+
 COBJS	:= $(COBJS-y)
 SRCS	:= $(COBJS:.o=.c)
 OBJS	:= $(addprefix $(obj),$(COBJS))
@@ -75,3 +81,4 @@ include $(SRCTREE)/rules.mk
 sinclude $(obj).depend
 
 #########################################################################
+
diff --git a/drivers/usb/host/xhci-ext-caps.h b/drivers/usb/host/xhci-ext-caps.h
index 377f424..f3cddb1 100644
--- a/drivers/usb/host/xhci-ext-caps.h
+++ b/drivers/usb/host/xhci-ext-caps.h
@@ -87,6 +87,8 @@
 /* true: Controller Not Ready to accept doorbell or op reg writes after reset */
 #define XHCI_STS_CNR		(1 << 11)
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/io.h>
 
 /**
@@ -153,3 +155,13 @@ static inline int xhci_find_ext_cap_by_id(void __iomem *base, int ext_offset, in
 		return ext_offset;
 	return 0;
 }
+#else
+static inline int xhci_find_next_cap_offset(void __iomem *base, int ext_offset)
+{
+	return 0;
+}
+static inline int xhci_find_ext_cap_by_id(void __iomem *base, int ext_offset, int id)
+{
+	return 0;
+}
+#endif
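
The stubs above compile, but they make extended-capability parsing always
fail (no protocol or legacy-support capabilities will ever be found).  Once
readl() is wired up for uBoot, a working version could look like the
following sketch, modelled on the kernel implementation that the #ifndef
removes (untested here):

	static inline int xhci_find_next_cap_offset(void __iomem *base,
						    int ext_offset)
	{
		u32 next = readl(base + ext_offset);

		if (ext_offset == XHCI_HCC_PARAMS_OFFSET) {
			/* xECP field: offset of the first capability */
			next = XHCI_HCC_EXT_CAPS(next);
			ext_offset = 0;
		} else {
			/* next-pointer field of the current capability */
			next = XHCI_EXT_CAPS_NEXT(next);
		}
		if (!next)
			return 0;
		/* both pointers count in 32-bit words */
		return ext_offset + (next << 2);
	}
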
diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
index 187a3ec..8f01cb3 100644
--- a/drivers/usb/host/xhci-hub.c
+++ b/drivers/usb/host/xhci-hub.c
@@ -19,10 +19,11 @@
  * along with this program; if not, write to the Free Software Foundation,
  * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
-
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/gfp.h>
 #include <asm/unaligned.h>
-
+#endif
 #include "xhci.h"
 
 #define	PORT_WAKE_BITS	(PORT_WKOC_E | PORT_WKDISC_E | PORT_WKCONN_E)
@@ -632,8 +633,13 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 				!DEV_SUPERSPEED(temp)) {
 			if ((temp & PORT_RESET) || !(temp & PORT_PE))
 				goto error;
+#ifndef __UBOOT__
 			if (time_after_eq(jiffies,
 					bus_state->resume_done[wIndex])) {
+#else
+			/* TODO: replace with a time_after_eq() equivalent */
+			if (bus_state->resume_done[wIndex]) {
+#endif
 				xhci_dbg(xhci, "Resume USB2 port %d\n",
 					wIndex + 1);
 				bus_state->resume_done[wIndex] = 0;
@@ -1009,10 +1015,19 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
 			retval = -ENODEV;
 			break;
 		}
+#ifndef __UBOOT__
 		if ((temp & mask) != 0 ||
 			(bus_state->port_c_suspend & 1 << i) ||
 			(bus_state->resume_done[i] && time_after_eq(
 			    jiffies, bus_state->resume_done[i]))) {
+#else
+		/* TODO: fix this to use a time_after_eq() equivalent */
+		if ((temp & mask) != 0 ||
+			(bus_state->port_c_suspend & 1 << i) ||
+			(bus_state->resume_done[i] &&
+			(bus_state->resume_done[i]))) {
+
+#endif
 			buf[(i + 1) / 8] |= 1 << (i + 1) % 8;
 			status = 1;
 		}
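
The two TODO hunks above both want a uBoot stand-in for the kernel's
time_after_eq().  If resume_done[] is repointed at get_timer() millisecond
stamps instead of jiffies, a wraparound-safe comparison is enough; a sketch
(the helper name is made up):

	/* true once timestamp 'a' is at or past 'b'; safe across wraparound,
	 * like the kernel's time_after_eq() */
	static inline int ubt_time_after_eq(ulong a, ulong b)
	{
		return (long)(a - b) >= 0;
	}

The first hunk would then test
ubt_time_after_eq(get_timer(0), bus_state->resume_done[wIndex]).
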
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 2cfc465..4b3f559 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -19,12 +19,20 @@
  * along with this program; if not, write to the Free Software Foundation,
  * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
-
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/usb.h>
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/dmapool.h>
 
+#else
+#include <common.h>
+#include <linux/list.h>		/* for struct list_head */
+#include <linux/usb/linux-compat.h>
+#include <linux/usb/usb-compat.h>
+#endif
+
 #include "xhci.h"
 
 /*
@@ -44,13 +52,14 @@ static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci,
 	seg = kzalloc(sizeof *seg, flags);
 	if (!seg)
 		return NULL;
-
+#ifndef __UBOOT__
+	/* DMA */
 	seg->trbs = dma_pool_alloc(xhci->segment_pool, flags, &dma);
 	if (!seg->trbs) {
 		kfree(seg);
 		return NULL;
 	}
-
+#endif
 	memset(seg->trbs, 0, TRB_SEGMENT_SIZE);
 	/* If the cycle state is 0, set the cycle bit to 1 for all the TRBs */
 	if (cycle_state == 0) {
@@ -66,7 +75,10 @@ static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci,
 static void xhci_segment_free(struct xhci_hcd *xhci, struct xhci_segment *seg)
 {
 	if (seg->trbs) {
+#ifndef __UBOOT__
+	/* DMA */
 		dma_pool_free(xhci->segment_pool, seg->trbs, seg->dma);
+#endif
 		seg->trbs = NULL;
 	}
 	kfree(seg);
@@ -367,8 +379,10 @@ static struct xhci_container_ctx *xhci_alloc_container_ctx(struct xhci_hcd *xhci
 	ctx->size = HCC_64BYTE_CONTEXT(xhci->hcc_params) ? 2048 : 1024;
 	if (type == XHCI_CTX_TYPE_INPUT)
 		ctx->size += CTX_SIZE(xhci->hcc_params);
-
+#ifndef __UBOOT__
+	/* DMA */
 	ctx->bytes = dma_pool_alloc(xhci->device_pool, flags, &ctx->dma);
+#endif
 	memset(ctx->bytes, 0, ctx->size);
 	return ctx;
 }
@@ -378,7 +392,10 @@ static void xhci_free_container_ctx(struct xhci_hcd *xhci,
 {
 	if (!ctx)
 		return;
+#ifndef __UBOOT__
+	/* DMA */
 	dma_pool_free(xhci->device_pool, ctx->bytes, ctx->dma);
+#endif
 	kfree(ctx);
 }
 
@@ -419,6 +436,8 @@ static void xhci_free_stream_ctx(struct xhci_hcd *xhci,
 		unsigned int num_stream_ctxs,
 		struct xhci_stream_ctx *stream_ctx, dma_addr_t dma)
 {
+#ifndef __UBOOT__
+	/* PCI */
 	struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
 
 	if (num_stream_ctxs > MEDIUM_STREAM_ARRAY_SIZE)
@@ -431,6 +450,9 @@ static void xhci_free_stream_ctx(struct xhci_hcd *xhci,
 	else
 		return dma_pool_free(xhci->medium_streams_pool,
 				stream_ctx, dma);
+#else
+	return;
+#endif
 }
 
 /*
@@ -447,6 +469,8 @@ static struct xhci_stream_ctx *xhci_alloc_stream_ctx(struct xhci_hcd *xhci,
 		unsigned int num_stream_ctxs, dma_addr_t *dma,
 		gfp_t mem_flags)
 {
+#ifndef __UBOOT__
+	/* PCI */
 	struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
 
 	if (num_stream_ctxs > MEDIUM_STREAM_ARRAY_SIZE)
@@ -459,15 +483,21 @@ static struct xhci_stream_ctx *xhci_alloc_stream_ctx(struct xhci_hcd *xhci,
 	else
 		return dma_pool_alloc(xhci->medium_streams_pool,
 				mem_flags, dma);
+#else
+	return NULL;
+#endif
 }
 
 struct xhci_ring *xhci_dma_to_transfer_ring(
 		struct xhci_virt_ep *ep,
 		u64 address)
 {
+#ifndef __UBOOT__
+	/* RADIX */
 	if (ep->ep_state & EP_HAS_STREAMS)
 		return radix_tree_lookup(&ep->stream_info->trb_address_map,
 				address >> TRB_SEGMENT_SHIFT);
+#endif
 	return ep->ring;
 }
 
@@ -597,7 +627,10 @@ struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
 	struct xhci_stream_info *stream_info;
 	u32 cur_stream;
 	struct xhci_ring *cur_ring;
+#ifndef __UBOOT__
+	/* ilog2 */
 	unsigned long key;
+#endif
 	u64 addr;
 	int ret;
 
@@ -638,8 +671,10 @@ struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
 		xhci_alloc_command(xhci, true, true, mem_flags);
 	if (!stream_info->free_streams_command)
 		goto cleanup_ctx;
-
+#ifndef __UBOOT__
+	/* RADIX */
 	INIT_RADIX_TREE(&stream_info->trb_address_map, GFP_ATOMIC);
+#endif
 
 	/* Allocate rings for all the streams that the driver will use,
 	 * and add their segment DMA addresses to the radix tree.
@@ -660,11 +695,16 @@ struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
 			cpu_to_le64(addr);
 		xhci_dbg(xhci, "Setting stream %d ring ptr to 0x%08llx\n",
 				cur_stream, (unsigned long long) addr);
-
+#ifndef __UBOOT__
+	/* ilog2 */
 		key = (unsigned long)
 			(cur_ring->first_seg->dma >> TRB_SEGMENT_SHIFT);
+#endif
+#ifndef __UBOOT__
+	/* RADIX */
 		ret = radix_tree_insert(&stream_info->trb_address_map,
 				key, cur_ring);
+#endif
 		if (ret) {
 			xhci_ring_free(xhci, cur_ring);
 			stream_info->stream_rings[cur_stream] = NULL;
@@ -692,8 +732,11 @@ cleanup_rings:
 		cur_ring = stream_info->stream_rings[cur_stream];
 		if (cur_ring) {
 			addr = cur_ring->first_seg->dma;
+#ifndef __UBOOT__
+	/* RADIX */
 			radix_tree_delete(&stream_info->trb_address_map,
 					addr >> TRB_SEGMENT_SHIFT);
+#endif
 			xhci_ring_free(xhci, cur_ring);
 			stream_info->stream_rings[cur_stream] = NULL;
 		}
@@ -763,8 +806,11 @@ void xhci_free_stream_info(struct xhci_hcd *xhci,
 		cur_ring = stream_info->stream_rings[cur_stream];
 		if (cur_ring) {
 			addr = cur_ring->first_seg->dma;
+#ifndef __UBOOT__
+	/* RADIX */
 			radix_tree_delete(&stream_info->trb_address_map,
 					addr >> TRB_SEGMENT_SHIFT);
+#endif
 			xhci_ring_free(xhci, cur_ring);
 			stream_info->stream_rings[cur_stream] = NULL;
 		}
@@ -788,9 +834,12 @@ void xhci_free_stream_info(struct xhci_hcd *xhci,
 static void xhci_init_endpoint_timer(struct xhci_hcd *xhci,
 		struct xhci_virt_ep *ep)
 {
+#ifndef __UBOOT__
+	/* Timer */
 	init_timer(&ep->stop_cmd_timer);
 	ep->stop_cmd_timer.data = (unsigned long) ep;
 	ep->stop_cmd_timer.function = xhci_stop_endpoint_command_watchdog;
+#endif
 	ep->xhci = xhci;
 }
 
@@ -973,8 +1022,9 @@ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
 	if (!dev->ring_cache)
 		goto fail;
 	dev->num_rings_cached = 0;
-
+#ifndef __UBOOT__
 	init_completion(&dev->cmd_completion);
+#endif
 	INIT_LIST_HEAD(&dev->cmd_list);
 	dev->udev = udev;
 
@@ -1680,6 +1730,8 @@ static int scratchpad_alloc(struct xhci_hcd *xhci, gfp_t flags)
 
 static void scratchpad_free(struct xhci_hcd *xhci)
 {
+#ifndef __UBOOT__
+	/* PCI */
 	int num_sp;
 	int i;
 	struct pci_dev	*pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
@@ -1699,6 +1751,7 @@ static void scratchpad_free(struct xhci_hcd *xhci)
 	dma_free_coherent(&pdev->dev, num_sp * sizeof(u64),
 			    xhci->scratchpad->sp_array,
 			    xhci->scratchpad->sp_dma);
+#endif
 	kfree(xhci->scratchpad);
 	xhci->scratchpad = NULL;
 }
@@ -1731,7 +1784,9 @@ struct xhci_command *xhci_alloc_command(struct xhci_hcd *xhci,
 			kfree(command);
 			return NULL;
 		}
+#ifndef __UBOOT__
 		init_completion(command->completion);
+#endif
 	}
 
 	command->status = 0;
@@ -1758,7 +1813,12 @@ void xhci_free_command(struct xhci_hcd *xhci,
 
 void xhci_mem_cleanup(struct xhci_hcd *xhci)
 {
+#ifndef __UBOOT__
+	/* PCI */
 	struct pci_dev	*pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
+#else
+	struct pci_dev	*pdev = NULL;
+#endif
 	struct dev_info	*dev_info, *next;
 	struct xhci_cd  *cur_cd, *next_cd;
 	unsigned long	flags;
@@ -1767,9 +1827,12 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
 
 	/* Free the Event Ring Segment Table and the actual Event Ring */
 	size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);
+#ifndef __UBOOT__
+	/* DMA */
 	if (xhci->erst.entries)
 		dma_free_coherent(&pdev->dev, size,
 				xhci->erst.entries, xhci->erst.erst_dma_addr);
+#endif
 	xhci->erst.entries = NULL;
 	xhci_dbg(xhci, "Freed ERST\n");
 	if (xhci->event_ring)
@@ -1792,30 +1855,40 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
 
 	for (i = 1; i < MAX_HC_SLOTS; ++i)
 		xhci_free_virt_device(xhci, i);
-
+#ifndef __UBOOT__
+	/* DMA */
 	if (xhci->segment_pool)
 		dma_pool_destroy(xhci->segment_pool);
+#endif
 	xhci->segment_pool = NULL;
 	xhci_dbg(xhci, "Freed segment pool\n");
-
+#ifndef __UBOOT__
+	/* DMA */
 	if (xhci->device_pool)
 		dma_pool_destroy(xhci->device_pool);
+#endif
 	xhci->device_pool = NULL;
 	xhci_dbg(xhci, "Freed device context pool\n");
-
+#ifndef __UBOOT__
+	/* DMA */
 	if (xhci->small_streams_pool)
 		dma_pool_destroy(xhci->small_streams_pool);
+#endif
 	xhci->small_streams_pool = NULL;
 	xhci_dbg(xhci, "Freed small stream array pool\n");
-
+#ifndef __UBOOT__
+	/* DMA */
 	if (xhci->medium_streams_pool)
 		dma_pool_destroy(xhci->medium_streams_pool);
+#endif
 	xhci->medium_streams_pool = NULL;
 	xhci_dbg(xhci, "Freed medium stream array pool\n");
-
+#ifndef __UBOOT__
+	/* DMA */
 	if (xhci->dcbaa)
 		dma_free_coherent(&pdev->dev, sizeof(*xhci->dcbaa),
 				xhci->dcbaa, xhci->dcbaa->dma);
+#endif
 	xhci->dcbaa = NULL;
 
 	scratchpad_free(xhci);
@@ -2022,7 +2095,11 @@ static void xhci_set_hc_event_deq(struct xhci_hcd *xhci)
 
 	deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
 			xhci->event_ring->dequeue);
+#ifndef __UBOOT__
 	if (deq == 0 && !in_interrupt())
+#else
+	if (deq == 0)
+#endif
 		xhci_warn(xhci, "WARN something wrong with SW event ring "
 				"dequeue ptr.\n");
 	/* Update HC event ring dequeue pointer */
@@ -2305,24 +2382,32 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 	 * however, the command ring segment needs 64-byte aligned segments,
 	 * so we pick the greater alignment need.
 	 */
+#ifndef __UBOOT__
+	/* DMA */
 	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
 			TRB_SEGMENT_SIZE, 64, xhci->page_size);
-
+#endif
 	/* See Table 46 and Note on Figure 55 */
+#ifndef __UBOOT__
+	/* DMA */
 	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
 			2112, 64, xhci->page_size);
+#endif
 	if (!xhci->segment_pool || !xhci->device_pool)
 		goto fail;
 
 	/* Linear stream context arrays don't have any boundary restrictions,
 	 * and only need to be 16-byte aligned.
 	 */
+#ifndef __UBOOT__
+	/* DMA */
 	xhci->small_streams_pool =
 		dma_pool_create("xHCI 256 byte stream ctx arrays",
 			dev, SMALL_STREAM_ARRAY_SIZE, 16, 0);
 	xhci->medium_streams_pool =
 		dma_pool_create("xHCI 1KB stream ctx arrays",
 			dev, MEDIUM_STREAM_ARRAY_SIZE, 16, 0);
+#endif
 	/* Any stream context array bigger than MEDIUM_STREAM_ARRAY_SIZE
 	 * will be allocated with dma_alloc_coherent()
 	 */
@@ -2432,7 +2517,9 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 	 * something other than the default (~1ms minimum between interrupts).
 	 * See section 5.5.1.2.
 	 */
+#ifndef __UBOOT__
 	init_completion(&xhci->addr_dev);
+#endif
 	for (i = 0; i < MAX_HC_SLOTS; ++i)
 		xhci->devs[i] = NULL;
 	for (i = 0; i < USB_MAXCHILDREN; ++i) {
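
A note on the stubbed-out dma_pool_* calls above: in the uBoot build they
currently leave seg->trbs and ctx->bytes unallocated, so the memset() calls
that follow them write through NULL.  Since uBoot runs with memory
identity-mapped, a plain aligned allocation is one plausible replacement; a
sketch (hypothetical helper, cache maintenance left aside):

	#include <malloc.h>	/* memalign() */

	/* identity-mapped stand-in for dma_pool_alloc(); the xHCI spec wants
	 * TRB segments 64-byte aligned and not crossing a 64KB boundary */
	static void *xhci_dma_alloc(size_t size, size_t align, dma_addr_t *dma)
	{
		void *virt = memalign(align, size);

		if (virt)
			*dma = (dma_addr_t)(unsigned long)virt;
		return virt;
	}
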
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
deleted file mode 100644
index 1a30c38..0000000
--- a/drivers/usb/host/xhci-pci.c
+++ /dev/null
@@ -1,356 +0,0 @@
-/*
- * xHCI host controller driver PCI Bus Glue.
- *
- * Copyright (C) 2008 Intel Corp.
- *
- * Author: Sarah Sharp
- * Some code borrowed from the Linux EHCI driver.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
- * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
- * for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software Foundation,
- * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- */
-
-#include <linux/pci.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-
-#include "xhci.h"
-
-/* Device for a quirk */
-#define PCI_VENDOR_ID_FRESCO_LOGIC	0x1b73
-#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK	0x1000
-#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400	0x1400
-
-#define PCI_VENDOR_ID_ETRON		0x1b6f
-#define PCI_DEVICE_ID_ASROCK_P67	0x7023
-
-static const char hcd_name[] = "xhci_hcd";
-
-/* called after powerup, by probe or system-pm "wakeup" */
-static int xhci_pci_reinit(struct xhci_hcd *xhci, struct pci_dev *pdev)
-{
-	/*
-	 * TODO: Implement finding debug ports later.
-	 * TODO: see if there are any quirks that need to be added to handle
-	 * new extended capabilities.
-	 */
-
-	/* PCI Memory-Write-Invalidate cycle support is optional (uncommon) */
-	if (!pci_set_mwi(pdev))
-		xhci_dbg(xhci, "MWI active\n");
-
-	xhci_dbg(xhci, "Finished xhci_pci_reinit\n");
-	return 0;
-}
-
-static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
-{
-	struct pci_dev		*pdev = to_pci_dev(dev);
-
-	/* Look for vendor-specific quirks */
-	if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
-			(pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
-			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
-		if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
-				pdev->revision == 0x0) {
-			xhci->quirks |= XHCI_RESET_EP_QUIRK;
-			xhci_dbg(xhci, "QUIRK: Fresco Logic xHC needs configure"
-					" endpoint cmd after reset endpoint\n");
-		}
-		/* Fresco Logic confirms: all revisions of this chip do not
-		 * support MSI, even though some of them claim to in their PCI
-		 * capabilities.
-		 */
-		xhci->quirks |= XHCI_BROKEN_MSI;
-		xhci_dbg(xhci, "QUIRK: Fresco Logic revision %u "
-				"has broken MSI implementation\n",
-				pdev->revision);
-		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
-	}
-
-	if (pdev->vendor == PCI_VENDOR_ID_NEC)
-		xhci->quirks |= XHCI_NEC_HOST;
-
-	if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version == 0x96)
-		xhci->quirks |= XHCI_AMD_0x96_HOST;
-
-	/* AMD PLL quirk */
-	if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
-		xhci->quirks |= XHCI_AMD_PLL_FIX;
-	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
-		xhci->quirks |= XHCI_LPM_SUPPORT;
-		xhci->quirks |= XHCI_INTEL_HOST;
-	}
-	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
-			pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) {
-		xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
-		xhci->quirks |= XHCI_EP_LIMIT_QUIRK;
-		xhci->limit_active_eps = 64;
-		xhci->quirks |= XHCI_SW_BW_CHECKING;
-		/*
-		 * PPT desktop boards DH77EB and DH77DF will power back on after
-		 * a few seconds of being shutdown.  The fix for this is to
-		 * switch the ports from xHCI to EHCI on shutdown.  We can't use
-		 * DMI information to find those particular boards (since each
-		 * vendor will change the board name), so we have to key off all
-		 * PPT chipsets.
-		 */
-		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
-		xhci->quirks |= XHCI_AVOID_BEI;
-	}
-	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
-			pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
-		xhci->quirks |= XHCI_RESET_ON_RESUME;
-		xhci_dbg(xhci, "QUIRK: Resetting on resume\n");
-		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
-	}
-	if (pdev->vendor == PCI_VENDOR_ID_VIA)
-		xhci->quirks |= XHCI_RESET_ON_RESUME;
-}
-
-/* called during probe() after chip reset completes */
-static int xhci_pci_setup(struct usb_hcd *hcd)
-{
-	struct xhci_hcd		*xhci;
-	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
-	int			retval;
-
-	retval = xhci_gen_setup(hcd, xhci_pci_quirks);
-	if (retval)
-		return retval;
-
-	xhci = hcd_to_xhci(hcd);
-	if (!usb_hcd_is_primary_hcd(hcd))
-		return 0;
-
-	pci_read_config_byte(pdev, XHCI_SBRN_OFFSET, &xhci->sbrn);
-	xhci_dbg(xhci, "Got SBRN %u\n", (unsigned int) xhci->sbrn);
-
-	/* Find any debug ports */
-	retval = xhci_pci_reinit(xhci, pdev);
-	if (!retval)
-		return retval;
-
-	kfree(xhci);
-	return retval;
-}
-
-/*
- * We need to register our own PCI probe function (instead of the USB core's
- * function) in order to create a second roothub under xHCI.
- */
-static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
-{
-	int retval;
-	struct xhci_hcd *xhci;
-	struct hc_driver *driver;
-	struct usb_hcd *hcd;
-
-	driver = (struct hc_driver *)id->driver_data;
-	/* Register the USB 2.0 roothub.
-	 * FIXME: USB core must know to register the USB 2.0 roothub first.
-	 * This is sort of silly, because we could just set the HCD driver flags
-	 * to say USB 2.0, but I'm not sure what the implications would be in
-	 * the other parts of the HCD code.
-	 */
-	retval = usb_hcd_pci_probe(dev, id);
-
-	if (retval)
-		return retval;
-
-	/* USB 2.0 roothub is stored in the PCI device now. */
-	hcd = dev_get_drvdata(&dev->dev);
-	xhci = hcd_to_xhci(hcd);
-	xhci->shared_hcd = usb_create_shared_hcd(driver, &dev->dev,
-				pci_name(dev), hcd);
-	if (!xhci->shared_hcd) {
-		retval = -ENOMEM;
-		goto dealloc_usb2_hcd;
-	}
-
-	/* Set the xHCI pointer before xhci_pci_setup() (aka hcd_driver.reset)
-	 * is called by usb_add_hcd().
-	 */
-	*((struct xhci_hcd **) xhci->shared_hcd->hcd_priv) = xhci;
-
-	retval = usb_add_hcd(xhci->shared_hcd, dev->irq,
-			IRQF_SHARED);
-	if (retval)
-		goto put_usb3_hcd;
-	/* Roothub already marked as USB 3.0 speed */
-
-	/* We know the LPM timeout algorithms for this host, let the USB core
-	 * enable and disable LPM for devices under the USB 3.0 roothub.
-	 */
-	if (xhci->quirks & XHCI_LPM_SUPPORT)
-		hcd_to_bus(xhci->shared_hcd)->root_hub->lpm_capable = 1;
-
-	return 0;
-
-put_usb3_hcd:
-	usb_put_hcd(xhci->shared_hcd);
-dealloc_usb2_hcd:
-	usb_hcd_pci_remove(dev);
-	return retval;
-}
-
-static void xhci_pci_remove(struct pci_dev *dev)
-{
-	struct xhci_hcd *xhci;
-
-	xhci = hcd_to_xhci(pci_get_drvdata(dev));
-	if (xhci->shared_hcd) {
-		usb_remove_hcd(xhci->shared_hcd);
-		usb_put_hcd(xhci->shared_hcd);
-	}
-	usb_hcd_pci_remove(dev);
-	kfree(xhci);
-}
-
-#ifdef CONFIG_PM
-static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
-{
-	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
-
-	return xhci_suspend(xhci);
-}
-
-static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
-{
-	struct xhci_hcd		*xhci = hcd_to_xhci(hcd);
-	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
-	int			retval = 0;
-
-	/* The BIOS on systems with the Intel Panther Point chipset may or may
-	 * not support xHCI natively.  That means that during system resume, it
-	 * may switch the ports back to EHCI so that users can use their
-	 * keyboard to select a kernel from GRUB after resume from hibernate.
-	 *
-	 * The BIOS is supposed to remember whether the OS had xHCI ports
-	 * enabled before resume, and switch the ports back to xHCI when the
-	 * BIOS/OS semaphore is written, but we all know we can't trust BIOS
-	 * writers.
-	 *
-	 * Unconditionally switch the ports back to xHCI after a system resume.
-	 * We can't tell whether the EHCI or xHCI controller will be resumed
-	 * first, so we have to do the port switchover in both drivers.  Writing
-	 * a '1' to the port switchover registers should have no effect if the
-	 * port was already switched over.
-	 */
-	if (usb_is_intel_switchable_xhci(pdev))
-		usb_enable_xhci_ports(pdev);
-
-	retval = xhci_resume(xhci, hibernated);
-	return retval;
-}
-#endif /* CONFIG_PM */
-
-static const struct hc_driver xhci_pci_hc_driver = {
-	.description =		hcd_name,
-	.product_desc =		"xHCI Host Controller",
-	.hcd_priv_size =	sizeof(struct xhci_hcd *),
-
-	/*
-	 * generic hardware linkage
-	 */
-	.irq =			xhci_irq,
-	.flags =		HCD_MEMORY | HCD_USB3 | HCD_SHARED,
-
-	/*
-	 * basic lifecycle operations
-	 */
-	.reset =		xhci_pci_setup,
-	.start =		xhci_run,
-#ifdef CONFIG_PM
-	.pci_suspend =          xhci_pci_suspend,
-	.pci_resume =           xhci_pci_resume,
-#endif
-	.stop =			xhci_stop,
-	.shutdown =		xhci_shutdown,
-
-	/*
-	 * managing i/o requests and associated device resources
-	 */
-	.urb_enqueue =		xhci_urb_enqueue,
-	.urb_dequeue =		xhci_urb_dequeue,
-	.alloc_dev =		xhci_alloc_dev,
-	.free_dev =		xhci_free_dev,
-	.alloc_streams =	xhci_alloc_streams,
-	.free_streams =		xhci_free_streams,
-	.add_endpoint =		xhci_add_endpoint,
-	.drop_endpoint =	xhci_drop_endpoint,
-	.endpoint_reset =	xhci_endpoint_reset,
-	.check_bandwidth =	xhci_check_bandwidth,
-	.reset_bandwidth =	xhci_reset_bandwidth,
-	.address_device =	xhci_address_device,
-	.update_hub_device =	xhci_update_hub_device,
-	.reset_device =		xhci_discover_or_reset_device,
-
-	/*
-	 * scheduling support
-	 */
-	.get_frame_number =	xhci_get_frame,
-
-	/* Root hub support */
-	.hub_control =		xhci_hub_control,
-	.hub_status_data =	xhci_hub_status_data,
-	.bus_suspend =		xhci_bus_suspend,
-	.bus_resume =		xhci_bus_resume,
-	/*
-	 * call back when device connected and addressed
-	 */
-	.update_device =        xhci_update_device,
-	.set_usb2_hw_lpm =	xhci_set_usb2_hardware_lpm,
-	.enable_usb3_lpm_timeout =	xhci_enable_usb3_lpm_timeout,
-	.disable_usb3_lpm_timeout =	xhci_disable_usb3_lpm_timeout,
-	.find_raw_port_number =	xhci_find_raw_port_number,
-};
-
-/*-------------------------------------------------------------------------*/
-
-/* PCI driver selection metadata; PCI hotplugging uses this */
-static const struct pci_device_id pci_ids[] = { {
-	/* handle any USB 3.0 xHCI controller */
-	PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_XHCI, ~0),
-	.driver_data =	(unsigned long) &xhci_pci_hc_driver,
-	},
-	{ /* end: all zeroes */ }
-};
-MODULE_DEVICE_TABLE(pci, pci_ids);
-
-/* pci driver glue; this is a "new style" PCI driver module */
-static struct pci_driver xhci_pci_driver = {
-	.name =		(char *) hcd_name,
-	.id_table =	pci_ids,
-
-	.probe =	xhci_pci_probe,
-	.remove =	xhci_pci_remove,
-	/* suspend and resume implemented later */
-
-	.shutdown = 	usb_hcd_pci_shutdown,
-#ifdef CONFIG_PM_SLEEP
-	.driver = {
-		.pm = &usb_hcd_pci_pm_ops
-	},
-#endif
-};
-
-int __init xhci_register_pci(void)
-{
-	return pci_register_driver(&xhci_pci_driver);
-}
-
-void xhci_unregister_pci(void)
-{
-	pci_unregister_driver(&xhci_pci_driver);
-}
diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
index df90fe5..3472816 100644
--- a/drivers/usb/host/xhci-plat.c
+++ b/drivers/usb/host/xhci-plat.c
@@ -10,11 +10,12 @@
  * modify it under the terms of the GNU General Public License
  * version 2 as published by the Free Software Foundation.
  */
-
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/platform_device.h>
 #include <linux/module.h>
 #include <linux/slab.h>
-
+#endif
 #include "xhci.h"
 
 static void xhci_plat_quirks(struct device *dev, struct xhci_hcd *xhci)
@@ -81,7 +82,7 @@ static const struct hc_driver xhci_plat_xhci_driver = {
 	.bus_suspend =		xhci_bus_suspend,
 	.bus_resume =		xhci_bus_resume,
 };
-
+#ifndef __UBOOT__
 static int xhci_plat_probe(struct platform_device *pdev)
 {
 	const struct hc_driver	*driver;
@@ -203,3 +204,4 @@ void xhci_unregister_plat(void)
 {
 	platform_driver_unregister(&usb_xhci_driver);
 }
+#endif
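
With xhci_plat_probe() compiled out, nothing in the uBoot build ever creates
the HCD.  A board-level entry point would need to do that step by hand;
roughly, as a sketch (the function name and the way the register base
arrives are placeholders, not a settled uBoot API):

	int board_xhci_init(void __iomem *regs)
	{
		struct usb_hcd *hcd;
		int ret;

		hcd = usb_create_hcd(&xhci_plat_xhci_driver, NULL, "xhci-uboot");
		if (!hcd)
			return -ENOMEM;

		hcd->regs = regs;
		ret = usb_add_hcd(hcd, 0 /* polled, no IRQ */, 0);
		if (ret)
			usb_put_hcd(hcd);
		return ret;
	}
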
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 1969c00..02fa41f 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -64,8 +64,18 @@
  *   endpoint rings; it generates events on the event ring for these.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
+#else
+
+#include <common.h>
+#include <linux/list.h>		/* for struct list_head */
+#include <linux/usb/linux-compat.h>
+#include <linux/usb/usb-compat.h>
+#endif
+
 #include "xhci.h"
 
 static int handle_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
@@ -699,7 +709,9 @@ static void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
 	 * timer is running on another CPU, we don't decrement stop_cmds_pending
 	 * (since we didn't successfully stop the watchdog timer).
 	 */
+#ifndef __UBOOT__
 	if (del_timer(&ep->stop_cmd_timer))
+#endif
 		ep->stop_cmds_pending--;
 }
 
@@ -720,10 +732,12 @@ static void xhci_giveback_urb_in_irq(struct xhci_hcd *xhci,
 	if (urb_priv->td_cnt == urb_priv->length) {
 		if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
 			xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+#ifndef __UBOOT__
 			if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs	== 0) {
 				if (xhci->quirks & XHCI_AMD_PLL_FIX)
 					usb_amd_quirk_pll_enable();
 			}
+#endif
 		}
 		usb_hcd_unlink_urb_from_ep(hcd, urb);
 
@@ -1178,8 +1192,10 @@ static void xhci_complete_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
 	command->status = status;
 	list_del(&command->cmd_list);
 	if (command->completion)
+#ifndef __UBOOT__
 		complete(command->completion);
 	else
+#endif
 		xhci_free_command(xhci, command);
 }
 
@@ -1354,6 +1370,7 @@ static int handle_stopped_cmd_ring(struct xhci_hcd *xhci,
 static void handle_cmd_completion(struct xhci_hcd *xhci,
 		struct xhci_event_cmd *event)
 {
+
 	int slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
 	u64 cmd_dma;
 	dma_addr_t cmd_dequeue_dma;
@@ -1399,7 +1416,9 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
 			xhci->slot_id = slot_id;
 		else
 			xhci->slot_id = 0;
+#ifndef __UBOOT__
 		complete(&xhci->addr_dev);
+#endif
 		break;
 	case TRB_TYPE(TRB_DISABLE_SLOT):
 		if (xhci->devs[slot_id]) {
@@ -1453,18 +1472,24 @@ bandwidth_change:
 		xhci_dbg(xhci, "Completed config ep cmd\n");
 		xhci->devs[slot_id]->cmd_status =
 			GET_COMP_CODE(le32_to_cpu(event->status));
+#ifndef __UBOOT__
 		complete(&xhci->devs[slot_id]->cmd_completion);
+#endif
 		break;
 	case TRB_TYPE(TRB_EVAL_CONTEXT):
 		virt_dev = xhci->devs[slot_id];
 		if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event))
 			break;
 		xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(le32_to_cpu(event->status));
+#ifndef __UBOOT__
 		complete(&xhci->devs[slot_id]->cmd_completion);
+#endif
 		break;
 	case TRB_TYPE(TRB_ADDR_DEV):
 		xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(le32_to_cpu(event->status));
+#ifndef __UBOOT__
 		complete(&xhci->addr_dev);
+#endif
 		break;
 	case TRB_TYPE(TRB_STOP_RING):
 		handle_stopped_endpoint(xhci, xhci->cmd_ring->dequeue, event);
@@ -1677,11 +1702,17 @@ static void handle_port_status(struct xhci_hcd *xhci,
 			goto cleanup;
 		} else {
 			xhci_dbg(xhci, "resume HS port %d\n", port_id);
+#ifndef __UBOOT__
 			bus_state->resume_done[faked_port_index] = jiffies +
 				msecs_to_jiffies(20);
+#else
+			bus_state->resume_done[faked_port_index] = 20;
+#endif
 			set_bit(faked_port_index, &bus_state->resuming_ports);
+#ifndef __UBOOT__
 			mod_timer(&hcd->rh_timer,
 				  bus_state->resume_done[faked_port_index]);
+#endif
 			/* Do the rest in GetPortStatus */
 		}
 	}
@@ -1960,11 +1991,13 @@ td_cleanup:
 			ret = 1;
 			if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
 				xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+#ifndef __UBOOT__
 				if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs
 					== 0) {
 					if (xhci->quirks & XHCI_AMD_PLL_FIX)
 						usb_amd_quirk_pll_enable();
 				}
+#endif
 			}
 		}
 	}
@@ -2374,9 +2407,14 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 			break;
 		if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
 			trb_comp_code = COMP_SHORT_TX;
+#ifndef __UBOOT__
 		else
 			xhci_warn_ratelimited(xhci,
 					"WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk?\n");
+#else
+		else
+			printf("WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk?\n");
+#endif
 	case COMP_SHORT_TX:
 		break;
 	case COMP_STOP:
@@ -2657,12 +2695,13 @@ static int xhci_handle_event(struct xhci_hcd *xhci)
 		xhci->error_bitmask |= 1 << 2;
 		return 0;
 	}
-
+#ifndef __UBOOT__
 	/*
 	 * Barrier between reading the TRB_CYCLE (valid) flag above and any
 	 * speculative reads of the event's flags/data below.
 	 */
 	rmb();
+#endif
 	/* FIXME: Handle more event types. */
 	switch ((le32_to_cpu(event->event_cmd.flags) & TRB_TYPE_BITMASK)) {
 	case TRB_TYPE(TRB_COMPLETION):
@@ -2977,6 +3016,7 @@ static unsigned int count_sg_trbs_needed(struct xhci_hcd *xhci, struct urb *urb)
 	temp = urb->transfer_buffer_length;
 
 	num_trbs = 0;
+	/* sg_next() is not defined yet; it needs a uBoot definition */
 	for_each_sg(urb->sg, sg, num_sgs, i) {
 		unsigned int len = sg_dma_len(sg);
 
@@ -3056,7 +3096,9 @@ int xhci_queue_intr_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
 	 * to set the polling interval (once the API is added).
 	 */
 	if (xhci_interval != ep_interval) {
+#ifndef __UBOOT__
 		if (printk_ratelimit())
+#endif
 			dev_dbg(&urb->dev->dev, "Driver uses different interval"
 					" (%d microframe%s) than xHCI "
 					"(%d microframe%s)\n",
@@ -3258,9 +3300,12 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
 			--num_sgs;
 			if (num_sgs == 0)
 				break;
+#ifndef __UBOOT__
+			/* TODO fix the scatterlist code here */
 			sg = sg_next(sg);
 			addr = (u64) sg_dma_address(sg);
 			this_sg_len = sg_dma_len(sg);
+#endif
 		} else {
 			addr += trb_buff_len;
 		}
@@ -3757,11 +3802,12 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
 			goto cleanup;
 		}
 	}
-
+#ifndef __UBOOT__
 	if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) {
 		if (xhci->quirks & XHCI_AMD_PLL_FIX)
 			usb_amd_quirk_pll_disable();
 	}
+#endif
 	xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs++;
 
 	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
@@ -3845,7 +3891,9 @@ int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags,
 	 * to set the polling interval (once the API is added).
 	 */
 	if (xhci_interval != ep_interval) {
+#ifndef __UBOOT__
 		if (printk_ratelimit())
+#endif
 			dev_dbg(&urb->dev->dev, "Driver uses different interval"
 					" (%d microframe%s) than xHCI "
 					"(%d microframe%s)\n",
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index b4aa79d..14dcf69 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -20,6 +20,8 @@
  * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/pci.h>
 #include <linux/irq.h>
 #include <linux/log2.h>
@@ -27,6 +29,16 @@
 #include <linux/moduleparam.h>
 #include <linux/slab.h>
 #include <linux/dmi.h>
+#else
+
+#include <common.h>
+#include <linux/err.h>
+#include <linux/usb/linux-compat.h>
+#include <linux/usb/ch9.h>
+#include <linux/usb/usb-compat.h>
+#include <usb/lin_gadget_compat.h>
+
+#endif
 
 #include "xhci.h"
 
@@ -428,10 +440,12 @@ static void compliance_mode_recovery(unsigned long arg)
 			usb_hcd_poll_rh_status(hcd);
 		}
 	}
-
+#ifndef __UBOOT__
+/* TODO: needs implementation */
 	if (xhci->port_status_u0 != ((1 << xhci->num_usb3_ports)-1))
 		mod_timer(&xhci->comp_mode_recovery_timer,
 			jiffies + msecs_to_jiffies(COMP_MODE_RCVRY_MSECS));
+#endif
 }
 
 /*
@@ -447,6 +461,8 @@ static void compliance_mode_recovery(unsigned long arg)
 static void compliance_mode_recovery_timer_init(struct xhci_hcd *xhci)
 {
 	xhci->port_status_u0 = 0;
+#ifndef __UBOOT__
+/* TODO: needs timer implementation */
 	init_timer(&xhci->comp_mode_recovery_timer);
 
 	xhci->comp_mode_recovery_timer.data = (unsigned long) xhci;
@@ -457,6 +473,7 @@ static void compliance_mode_recovery_timer_init(struct xhci_hcd *xhci)
 	set_timer_slack(&xhci->comp_mode_recovery_timer,
 			msecs_to_jiffies(COMP_MODE_RCVRY_MSECS));
 	add_timer(&xhci->comp_mode_recovery_timer);
+#endif
 	xhci_dbg(xhci, "Compliance mode recovery timer initialized\n");
 }
 
@@ -468,6 +485,7 @@ static void compliance_mode_recovery_timer_init(struct xhci_hcd *xhci)
  */
 static bool compliance_mode_recovery_timer_quirk_check(void)
 {
+#ifndef __UBOOT__
 	const char *dmi_product_name, *dmi_sys_vendor;
 
 	dmi_product_name = dmi_get_system_info(DMI_PRODUCT_NAME);
@@ -483,7 +501,7 @@ static bool compliance_mode_recovery_timer_quirk_check(void)
 			strstr(dmi_product_name, "Z820") ||
 			strstr(dmi_product_name, "Z1 Workstation"))
 		return true;
-
+#endif
 	return false;
 }
 
@@ -738,10 +756,10 @@ void xhci_stop(struct usb_hcd *hcd)
 		xhci_dbg(xhci, "%s: compliance mode recovery timer deleted\n",
 				__func__);
 	}
-
+#ifndef __UBOOT__
 	if (xhci->quirks & XHCI_AMD_PLL_FIX)
 		usb_amd_dev_put();
-
+#endif
 	xhci_dbg(xhci, "// Disabling event ring interrupts\n");
 	temp = xhci_readl(xhci, &xhci->op_regs->status);
 	xhci_writel(xhci, temp & ~STS_EINT, &xhci->op_regs->status);
@@ -768,10 +786,10 @@ void xhci_stop(struct usb_hcd *hcd)
 void xhci_shutdown(struct usb_hcd *hcd)
 {
 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
-
+#ifndef __UBOOT__
 	if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
 		usb_disable_xhci_ports(to_pci_dev(hcd->self.controller));
-
+#endif
 	spin_lock_irq(&xhci->lock);
 	xhci_halt(xhci);
 	spin_unlock_irq(&xhci->lock);
@@ -1269,8 +1287,10 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
 	ep_index = xhci_get_endpoint_index(&urb->ep->desc);
 
 	if (!HCD_HW_ACCESSIBLE(hcd)) {
+#ifndef __UBOOT__
 		if (!in_interrupt())
 			xhci_dbg(xhci, "urb submitted during PCI suspend\n");
+#endif
 		ret = -ESHUTDOWN;
 		goto exit;
 	}
@@ -1531,9 +1551,11 @@ int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
 	if (!(ep->ep_state & EP_HALT_PENDING)) {
 		ep->ep_state |= EP_HALT_PENDING;
 		ep->stop_cmds_pending++;
+#ifndef __UBOOT__
 		ep->stop_cmd_timer.expires = jiffies +
 			XHCI_STOP_EP_CMD_TIMEOUT * HZ;
 		add_timer(&ep->stop_cmd_timer);
+#endif
 		xhci_queue_stop_endpoint(xhci, urb->dev->slot_id, ep_index, 0);
 		xhci_ring_cmd_db(xhci);
 	}
@@ -2541,10 +2563,14 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
 		bool ctx_change, bool must_succeed)
 {
 	int ret;
+#ifndef __UBOOT__
 	int timeleft;
+#endif
 	unsigned long flags;
 	struct xhci_container_ctx *in_ctx;
+#ifndef __UBOOT__
 	struct completion *cmd_completion;
+#endif
 	u32 *cmd_status;
 	struct xhci_virt_device *virt_dev;
 	union xhci_trb *cmd_trb;
@@ -2573,7 +2599,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
 		xhci_warn(xhci, "Not enough bandwidth\n");
 		return -ENOMEM;
 	}
-
+#ifndef __UBOOT__
 	if (command) {
 		cmd_completion = command->completion;
 		cmd_status = &command->status;
@@ -2593,6 +2619,11 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
 	}
 	init_completion(cmd_completion);
 
+#endif
+#ifndef __UBOOT__
+	init_completion(cmd_completion);
+#endif
+
 	cmd_trb = xhci->cmd_ring->dequeue;
 	if (!ctx_change)
 		ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma,
@@ -2611,7 +2642,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
 	}
 	xhci_ring_cmd_db(xhci);
 	spin_unlock_irqrestore(&xhci->lock, flags);
-
+#ifndef __UBOOT__
 	/* Wait for the configure endpoint command to complete */
 	timeleft = wait_for_completion_interruptible_timeout(
 			cmd_completion,
@@ -2628,7 +2659,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
 			return ret;
 		return -ETIME;
 	}
-
+#endif
 	if (!ctx_change)
 		ret = xhci_configure_endpoint_result(xhci, udev, cmd_status);
 	else
@@ -2936,9 +2967,11 @@ static void xhci_calculate_streams_entries(struct xhci_hcd *xhci,
 		unsigned int *num_streams, unsigned int *num_stream_ctxs)
 {
 	unsigned int max_streams;
-
-	/* The stream context array size must be a power of two */
+#ifndef __UBOOT__
+	/* TODO: redefine this; the stream context array size
+	 * must be a power of two */
 	*num_stream_ctxs = roundup_pow_of_two(*num_streams);
+#endif
 	/*
 	 * Find out how many primary stream array entries the host controller
 	 * supports.  Later we may use secondary stream arrays (similar to 2nd
@@ -3321,7 +3354,9 @@ int xhci_discover_or_reset_device(struct usb_hcd *hcd, struct usb_device *udev)
 	unsigned int slot_id;
 	struct xhci_virt_device *virt_dev;
 	struct xhci_command *reset_device_cmd;
+#ifndef __UBOOT__
 	int timeleft;
+#endif
 	int last_freed_endpoint;
 	struct xhci_slot_ctx *slot_ctx;
 	int old_active_eps = 0;
@@ -3397,7 +3432,8 @@ int xhci_discover_or_reset_device(struct usb_hcd *hcd, struct usb_device *udev)
 	}
 	xhci_ring_cmd_db(xhci);
 	spin_unlock_irqrestore(&xhci->lock, flags);
-
+#ifndef __UBOOT__
+/* TODO */
 	/* Wait for the Reset Device command to finish */
 	timeleft = wait_for_completion_interruptible_timeout(
 			reset_device_cmd->completion,
@@ -3415,7 +3451,7 @@ int xhci_discover_or_reset_device(struct usb_hcd *hcd, struct usb_device *udev)
 		ret = -ETIME;
 		goto command_cleanup;
 	}
-
+#endif
 	/* The Reset Device command can't fail, according to the 0.95/0.96 spec,
 	 * unless we tried to reset a slot ID that wasn't enabled,
 	 * or the device wasn't in the addressed or configured state.
@@ -3572,7 +3608,9 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
 {
 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
 	unsigned long flags;
+#ifndef __UBOOT__
 	int timeleft;
+#endif
 	int ret;
 	union xhci_trb *cmd_trb;
 
@@ -3586,7 +3624,7 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
 	}
 	xhci_ring_cmd_db(xhci);
 	spin_unlock_irqrestore(&xhci->lock, flags);
-
+#ifndef __UBOOT__
 	/* XXX: how much time for xHC slot assignment? */
 	timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
 			XHCI_CMD_DEFAULT_TIMEOUT);
@@ -3596,7 +3634,7 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
 		/* cancel the enable slot request */
 		return xhci_cancel_cmd(xhci, NULL, cmd_trb);
 	}
-
+#endif
 	if (!xhci->slot_id) {
 		xhci_err(xhci, "Error while assigning device slot ID\n");
 		return 0;
@@ -3648,7 +3686,10 @@ disable_slot:
 int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
 {
 	unsigned long flags;
+#ifndef __UBOOT__
 	int timeleft;
+
+#endif
 	struct xhci_virt_device *virt_dev;
 	int ret = 0;
 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
@@ -3663,8 +3704,11 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
 	}
 
 	virt_dev = xhci->devs[udev->slot_id];
-
+#ifndef __UBOOT__
 	if (WARN_ON(!virt_dev)) {
+#else
+	if (!virt_dev) {
+#endif
 		/*
 		 * In plug/unplug torture test with an NEC controller,
 		 * a zero-dereference was observed once due to virt_dev = 0.
@@ -3704,7 +3748,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
 	}
 	xhci_ring_cmd_db(xhci);
 	spin_unlock_irqrestore(&xhci->lock, flags);
-
+#ifndef __UBOOT__
 	/* ctrl tx can take up to 5 sec; XXX: need more time for xHC? */
 	timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
 			XHCI_CMD_DEFAULT_TIMEOUT);
@@ -3721,7 +3765,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
 			return ret;
 		return -ETIME;
 	}
-
+#endif
 	switch (virt_dev->cmd_status) {
 	case COMP_CTX_STATE:
 	case COMP_EBADSLT:
@@ -4723,6 +4767,7 @@ MODULE_DESCRIPTION(DRIVER_DESC);
 MODULE_AUTHOR(DRIVER_AUTHOR);
 MODULE_LICENSE("GPL");
 
+#ifndef __UBOOT__
 static int __init xhci_hcd_init(void)
 {
 	int retval;
@@ -4767,3 +4812,4 @@ static void __exit xhci_hcd_cleanup(void)
 	xhci_unregister_plat();
 }
 module_exit(xhci_hcd_cleanup);
+#endif
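
Every wait_for_completion_interruptible_timeout() in this file is now
compiled out, so commands are issued but never waited on.  In uBoot the
natural replacement is to poll the event ring; a sketch (assumes the event
handler in xhci-ring.c is made non-static, and treats the timeout as
milliseconds):

	static int xhci_poll_cmd_status(struct xhci_hcd *xhci, u32 *cmd_status,
					ulong timeout_ms)
	{
		ulong start = get_timer(0);

		/* completion codes are non-zero, so 0 means still pending */
		while (*cmd_status == 0) {
			xhci_handle_event(xhci);
			if (get_timer(start) > timeout_ms)
				return -ETIMEDOUT;
		}
		return 0;
	}
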
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 29c978e..330562a 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -23,14 +23,26 @@
 #ifndef __LINUX_XHCI_HCD_H
 #define __LINUX_XHCI_HCD_H
 
+#define __UBOOT__
+#ifndef __UBOOT__
 #include <linux/usb.h>
 #include <linux/timer.h>
 #include <linux/kernel.h>
 #include <linux/usb/hcd.h>
+#else
+
+#include <asm/io.h>
+#include <usb.h>
+#include <linux/usb/hcd.h>
+
+#endif
 
 /* Code sharing between pci-quirks and xhci hcd */
-#include	"xhci-ext-caps.h"
+#include "xhci-ext-caps.h"
+
+#ifndef __UBOOT__
 #include "pci-quirks.h"
+#endif
 
 /* xHCI PCI Configuration Registers */
 #define XHCI_SBRN_OFFSET	(0x60)
@@ -749,8 +761,10 @@ struct xhci_stream_info {
 	struct xhci_stream_ctx		*stream_ctx_array;
 	unsigned int			num_stream_ctxs;
 	dma_addr_t			ctx_array_dma;
+#ifndef __UBOOT__
 	/* For mapping physical TRB addresses to segments in stream rings */
 	struct radix_tree_root		trb_address_map;
+#endif
 	struct xhci_command		*free_streams_command;
 };
 
@@ -913,6 +927,7 @@ struct xhci_virt_device {
 #define	XHCI_MAX_RINGS_CACHED	31
 	struct xhci_virt_ep		eps[31];
 	struct completion		cmd_completion;
+
 	/* Status of the last command issued for this device */
 	u32				cmd_status;
 	struct list_head		cmd_list;
@@ -1447,9 +1462,10 @@ struct xhci_hcd {
 	struct xhci_scratchpad  *scratchpad;
 	/* Store LPM test failed devices' information */
 	struct list_head	lpm_failed_devs;
-
+#ifndef __UBOOT__
 	/* slot enabling and address device helpers */
 	struct completion	addr_dev;
+#endif
 	int slot_id;
 	/* For USB 3.0 LPM enable/disable. */
 	struct xhci_command		*lpm_command;
diff --git a/include/asm-generic/scatterlist.h b/include/asm-generic/scatterlist.h
new file mode 100644
index 0000000..5de0735
--- /dev/null
+++ b/include/asm-generic/scatterlist.h
@@ -0,0 +1,34 @@
+#ifndef __ASM_GENERIC_SCATTERLIST_H
+#define __ASM_GENERIC_SCATTERLIST_H
+
+#include <linux/types.h>
+
+struct scatterlist {
+#ifdef CONFIG_DEBUG_SG
+	unsigned long	sg_magic;
+#endif
+	unsigned long	page_link;
+	unsigned int	offset;
+	unsigned int	length;
+	dma_addr_t	dma_address;
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+	unsigned int	dma_length;
+#endif
+};
+
+/*
+ * These macros should be used after a dma_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+ * You should only work with the number of sg entries pci_map_sg
+ * returns, or alternatively stop on the first sg_dma_len(sg) which
+ * is 0.
+ */
+#define sg_dma_address(sg)	((sg)->dma_address)
+
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+#define sg_dma_len(sg)		((sg)->dma_length)
+#else
+#define sg_dma_len(sg)		((sg)->length)
+#endif
+
+#endif /* __ASM_GENERIC_SCATTERLIST_H */
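
As the header comment says, the accessors are only meaningful after a
mapping call; a short consumer in the style of the xHCI ring code
(illustrative only):

	static u64 total_mapped_len(struct scatterlist *sg, int num_mapped)
	{
		u64 total = 0;
		int i;

		for (i = 0; i < num_mapped; i++, sg++) {
			/* stop on the first zero length, per the comment above */
			if (sg_dma_len(sg) == 0)
				break;
			/* sg_dma_address(sg) would be programmed into a TRB */
			total += sg_dma_len(sg);
		}
		return total;
	}
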
diff --git a/include/configs/omap5_common.h b/include/configs/omap5_common.h
index 1df553e..b7c3e9f 100644
--- a/include/configs/omap5_common.h
+++ b/include/configs/omap5_common.h
@@ -104,6 +104,7 @@
 #define CONFIG_USB_DWC3_DUAL_ROLE
 #define CONFIG_USB_DWC3_OMAP
 #define CONFIG_USB_DWC3_HOST
+#define CONFIG_USB_XHCI
 
 /* Flash */
 #define CONFIG_SYS_NO_FLASH
diff --git a/include/linux/usb/ch11.h b/include/linux/usb/ch11.h
index 7692dc6..2eac74c 100644
--- a/include/linux/usb/ch11.h
+++ b/include/linux/usb/ch11.h
@@ -11,6 +11,11 @@
 
 #include <linux/types.h>	/* __u8 etc */
 
+#define __UBOOT__
+#ifdef __UBOOT__
+#include <linux/usb/usb-compat.h>
+#endif
+
 /*
  * Hub request types
  */
@@ -240,6 +245,14 @@ struct usb_hub_descriptor {
 			__le16 DeviceRemovable;
 		}  __attribute__ ((packed)) ss;
 	} u;
+#ifdef __UBOOT__
+	/* For uBoot backwards compatibility */
+	unsigned char  bLength;
+	unsigned char  DeviceRemovable[(USB_MAXCHILDREN+1+7)/8];
+	unsigned char  PortPowerCtrlMask[(USB_MAXCHILDREN+1+7)/8];
+	/* DeviceRemovable and PortPwrCtrlMask want to be variable-length
+	   bitmaps that hold max 255 entries. (bit0 is ignored) */
+#endif
 } __attribute__ ((packed));
 
 /* port indicator status selectors, tables 11-7 and 11-25 */
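
The compatibility fields appended above keep the legacy
one-bit-per-port layout (bit 0 reserved, ports numbered from 1), so
existing uBoot hub code can keep indexing them directly.  A hedged
sketch of the usual lookup; the helper name is illustrative, not from
this series:

	/* a set bit in DeviceRemovable marks the port's device as
	 * permanently attached (non-removable) */
	static int hub_port_is_removable(struct usb_hub_descriptor *desc,
					 int port)
	{
		return !(desc->DeviceRemovable[port / 8] &
			 (1 << (port % 8)));
	}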
diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
index f5f5c7d..df68955 100644
--- a/include/linux/usb/hcd.h
+++ b/include/linux/usb/hcd.h
@@ -19,9 +19,14 @@
 #ifndef __USB_CORE_HCD_H
 #define __USB_CORE_HCD_H
 
+#define __UBOOT__
 #ifdef __KERNEL__
 
+#ifndef __UBOOT__
 #include <linux/rwsem.h>
+#else
+#include <usb.h>
+#endif
 
 #define MAX_TOPO_LEVEL		6
 
@@ -73,8 +78,9 @@ struct usb_hcd {
 	 * housekeeping
 	 */
 	struct usb_bus		self;		/* hcd is-a bus */
+#ifndef __UBOOT__
 	struct kref		kref;		/* reference counter */
-
+#endif
 	const char		*product_desc;	/* product/vendor string */
 	int			speed;		/* Speed for this roothub.
 						 * May be different from
@@ -135,8 +141,10 @@ struct usb_hcd {
 
 	unsigned int		irq;		/* irq allocated */
 	void __iomem		*regs;		/* device memory/io */
+#ifndef __UBOOT__
 	resource_size_t		rsrc_start;	/* memory/io resource start */
 	resource_size_t		rsrc_len;	/* memory/io resource length */
+#endif
 	unsigned		power_budget;	/* in mA, 0 = no limit */
 
 	/* bandwidth_mutex should be taken before adding or removing
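
With the kref and the resource_size_t fields compiled out, an hcd
instance in uBoot can simply be a statically allocated singleton with
no get/put pairing.  A minimal sketch under that assumption; the names
below are illustrative, not taken from the series:

	static struct usb_hcd xhci_hcd;

	static struct usb_hcd *xhci_hcd_get(void)
	{
		xhci_hcd.product_desc = "xHCI Host Controller";
		return &xhci_hcd;	/* never refcounted, never freed */
	}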
diff --git a/include/linux/usb/linux-compat.h b/include/linux/usb/linux-compat.h
index 9850f44..81a099b 100644
--- a/include/linux/usb/linux-compat.h
+++ b/include/linux/usb/linux-compat.h
@@ -27,6 +27,8 @@
 #include <linux/list.h>
 #include <linux/compat.h>
 #include <asm-generic/errno.h>
+#include <asm-generic/scatterlist.h>
+
 
 #define __init
 #define __devinit
@@ -44,6 +46,7 @@ struct work_struct {};
 
 struct timer_list {};
 struct notifier_block {};
+typedef int	wait_queue_head_t;
 
 typedef unsigned long dmaaddr_t;
 #define BUS_ID_SIZE		20
@@ -59,6 +62,24 @@ struct device {
 };
 
 /**
+ * clamp_val - return a value clamped to a given range using val's type
+ * @val: current value
+ * @min: minimum allowable value
+ * @max: maximum allowable value
+ *
+ * This macro does no typechecking and uses temporary variables of whatever
+ * type the input argument 'val' is.  This is useful when val is an unsigned
+ * type and min and max are literals that will otherwise be assigned a signed
+ * integer type.
+ */
+#define clamp_val(val, min, max) ({		\
+	typeof(val) __val = (val);		\
+	typeof(val) __min = (min);		\
+	typeof(val) __max = (max);		\
+	__val = __val < __min ? __min: __val;	\
+	__val > __max ? __max: __val; })
+
+/**
  * struct device_driver - The basic device driver structure
  * @name:	Name of the device driver.
  * @bus:	The bus which the device of this driver belongs to.
@@ -79,6 +100,26 @@ struct device_driver {
 
 };
 
+
+/*
+ * struct completion - structure used to maintain state for a "completion"
+ *
+ * This is the opaque structure used to maintain the state for a "completion".
+ * Completions currently use a FIFO to queue threads that have to wait for
+ * the "completion" event.
+ *
+ * See also:  complete(), wait_for_completion() (and friends _timeout,
+ * _interruptible, _interruptible_timeout, and _killable), init_completion(),
+ * and macros DECLARE_COMPLETION(), DECLARE_COMPLETION_ONSTACK(), and
+ * INIT_COMPLETION().
+ */
+struct completion {
+	unsigned int done;
+#ifndef __UBOOT__
+	wait_queue_head_t wait;
+#endif
+};
+
 /*
  * Loop over each sg element, following the pointer to a new list if necessary
  */
@@ -103,6 +144,8 @@ struct device_driver {
 	printf(fmt, ##args)
 #define dev_err(dev, fmt, args...)		\
 	printf(fmt, ##args)
+#define dev_warn(dev, fmt, args...)		\
+	printf(fmt, ##args)
 #define printk printf
 
 #define WARN(condition, fmt, args...) ({	\
@@ -123,6 +166,8 @@ struct device_driver {
 /* common */
 #define spin_lock_init(...)
 #define spin_lock(...)
+#define spin_lock_irq(...)
+#define spin_unlock_irq(...)
 #define spin_lock_irqsave(lock, flags) do { debug("%lu\n", flags); } while (0)
 #define spin_unlock(...)
 #define spin_unlock_irqrestore(lock, flags) do {flags = 0; } while (0)
@@ -134,6 +179,8 @@ struct device_driver {
 #define mutex_unlock(...)
 
 #define GFP_KERNEL	0
+#define GFP_ATOMIC 0x20u
+#define GFP_NOIO 0x10u
 
 #define IRQ_HANDLED	1
 
diff --git a/include/linux/usb/usb-compat.h b/include/linux/usb/usb-compat.h
index a46d1e6..e99f83b 100644
--- a/include/linux/usb/usb-compat.h
+++ b/include/linux/usb/usb-compat.h
@@ -1529,9 +1529,9 @@ struct urb {
 	unsigned int transfer_flags;	/* (in) URB_SHORT_NOT_OK | ...*/
 	void *transfer_buffer;		/* (in) associated data buffer */
 	dma_addr_t transfer_dma;	/* (in) dma addr for transfer_buffer */
-#ifndef __UBOOT__
+
 	struct scatterlist *sg;		/* (in) scatter gather buffer list */
-#endif
+
 	int num_mapped_sgs;		/* (internal) mapped sg entries */
 	int num_sgs;			/* (in) number of entries in the sg list */
 	u32 transfer_buffer_length;	/* (in) data buffer length */
diff --git a/include/usb.h b/include/usb.h
index 0598972..6a64c5a 100644
--- a/include/usb.h
+++ b/include/usb.h
@@ -28,6 +28,7 @@
 
 #include <usb_defs.h>
 #include <linux/usb/ch9.h>
+#include <linux/usb/ch11.h>
 #include <linux/usb/usb-compat.h>
 
 /*
@@ -214,15 +215,7 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate);
 					 default_pipe(dev) | \
 					 USB_DIR_IN)
 
-/* The D0/D1 toggle bits */
-#define usb_gettoggle(dev, ep, out) (((dev)->toggle[out] >> ep) & 1)
-#define usb_dotoggle(dev, ep, out)  ((dev)->toggle[out] ^= (1 << ep))
-#define usb_settoggle(dev, ep, out, bit) ((dev)->toggle[out] = \
-						((dev)->toggle[out] & \
-						 ~(1 << ep)) | ((bit) << ep))
-
 /* Endpoint halt control/status */
-#define usb_endpoint_out(ep_dir)	(((ep_dir >> 7) & 1) ^ 1)
 #define usb_endpoint_halt(dev, ep, out) ((dev)->halted[out] |= (1 << (ep)))
 #define usb_endpoint_running(dev, ep, out) ((dev)->halted[out] &= ~(1 << (ep)))
 #define usb_endpoint_halted(dev, ep, out) ((dev)->halted[out] & (1 << (ep)))
@@ -244,31 +237,6 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate);
 /*************************************************************************
  * Hub Stuff
  */
-struct usb_port_status {
-	unsigned short wPortStatus;
-	unsigned short wPortChange;
-} __attribute__ ((packed));
-
-struct usb_hub_status {
-	unsigned short wHubStatus;
-	unsigned short wHubChange;
-} __attribute__ ((packed));
-
-
-/* Hub descriptor */
-struct usb_hub_descriptor {
-	unsigned char  bLength;
-	unsigned char  bDescriptorType;
-	unsigned char  bNbrPorts;
-	unsigned short wHubCharacteristics;
-	unsigned char  bPwrOn2PwrGood;
-	unsigned char  bHubContrCurrent;
-	unsigned char  DeviceRemovable[(USB_MAXCHILDREN+1+7)/8];
-	unsigned char  PortPowerCtrlMask[(USB_MAXCHILDREN+1+7)/8];
-	/* DeviceRemovable and PortPwrCtrlMask want to be variable-length
-	   bitmaps that hold max 255 entries. (bit0 is ignored) */
-} __attribute__ ((packed));
-
 
 struct usb_hub_device {
 	struct usb_device *pusb_dev;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file Dan Murphy
@ 2013-07-02 15:18   ` Nishanth Menon
  2013-07-02 15:23     ` Dan Murphy
  0 siblings, 1 reply; 12+ messages in thread
From: Nishanth Menon @ 2013-07-02 15:18 UTC (permalink / raw)
  To: u-boot

On 07/02/2013 10:15 AM, Dan Murphy wrote:
> Backport the kernel USB header file include/linux/usb.h
> that contains the structures and constants for the linux kernel drivers.
> Rename the usb.h to usb-compat.h so that it is not confused with the
> uBoot include usb.h file.
>
> Kernel base commit ID:aa4f608478acb7ed69dfcff4f3c404100b78ac49
v3.10-rc4-21-gaa4f608 - isn't it better to take a tagged kernel like 
v3.10 instead of a random intermediate commit?

<snip>
-- 
Regards,
Nishanth Menon

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file
  2013-07-02 15:18   ` Nishanth Menon
@ 2013-07-02 15:23     ` Dan Murphy
  0 siblings, 0 replies; 12+ messages in thread
From: Dan Murphy @ 2013-07-02 15:23 UTC (permalink / raw)
  To: u-boot

On 07/02/2013 10:18 AM, Nishanth Menon wrote:
> On 07/02/2013 10:15 AM, Dan Murphy wrote:
>> Backport the kernel USB header file include/linux/usb.h
>> that contains the structures and constants for the linux kernel drivers.
>> Rename the usb.h to usb-compat.h so that it is not confused with the
>> uBoot include usb.h file.
>>
>> Kernel base commit ID:aa4f608478acb7ed69dfcff4f3c404100b78ac49
> v3.10-rc4-21-gaa4f608 - isn't it better to take a tagged kernel like v3.10 instead of a random intermediate commit?
>
> <snip>
At the time this patch was created this was the commit ID I had.

This is only an RFC.  I will advance the kernel code and move to a released tag once I have some feedback on the overall approach.

Dan

-- 
------------------
Dan Murphy

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot
  2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
                   ` (6 preceding siblings ...)
  2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 7/7] USB: Modify the xHCI to adapt to the uBoot code base Dan Murphy
@ 2013-07-02 21:43 ` Wolfgang Denk
  2013-07-02 21:55   ` Marek Vasut
  7 siblings, 1 reply; 12+ messages in thread
From: Wolfgang Denk @ 2013-07-02 21:43 UTC (permalink / raw)
  To: u-boot

Dear Dan Murphy,

In message <1372778113-26053-1-git-send-email-dmurphy@ti.com> you wrote:
> This patch series has been generated in an effort to get comments on 
> the implementation of the dwc and xHCI code within the uBoot.

This patch series generates a number of checkpatch errors / warnings.
OK, some of these (like the "Avoid CamelCase") relate to already
existing code so there is little chance to fix them here and now, bot
others (like "ERROR: trailing statements should be on next line",
"WARNING: line over 80 characters", "CHECK: if this code is redundant
consider removing it", "WARNING: Single statement macros should not
use a do {} while (0) loop", "CHECK: memory barrier without comment",
"WARNING: __packed is preferred over __attribute__((packed))", ...)
should indeed be fixed.
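
Most of these are mechanical; the __packed one, for instance, amounts
to a rewrite like this (illustrative struct, not a specific hunk from
the series):

	/* before */
	struct usb_port_status {
		__le16 wPortStatus;
		__le16 wPortChange;
	} __attribute__ ((packed));

	/* after: the checkpatch-clean spelling */
	struct usb_port_status {
		__le16 wPortStatus;
		__le16 wPortChange;
	} __packed;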

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH,     MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd at denx.de
"I find this a nice feature but it is not according to  the  documen-
tation. Or is it a BUG?"   "Let's call it an accidental feature. :-)"
                       - Larry Wall in <6909@jpl-devvax.JPL.NASA.GOV>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot
  2013-07-02 21:43 ` [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Wolfgang Denk
@ 2013-07-02 21:55   ` Marek Vasut
  0 siblings, 0 replies; 12+ messages in thread
From: Marek Vasut @ 2013-07-02 21:55 UTC (permalink / raw)
  To: u-boot

Hello Wolfgang,

> Dear Dan Murphy,
> 
> In message <1372778113-26053-1-git-send-email-dmurphy@ti.com> you wrote:
> > This patch series has been generated in an effort to get comments on
> > the implementation of the dwc and xHCI code within the uBoot.
> 
> This patch series generates a number of checkpatch errors / warnings.

This is to be expected.

> OK, some of these (like the "Avoid CamelCase") relate to already
> existing code so there is little chance to fix them here and now, bot
> others (like "ERROR: trailing statements should be on next line",
> "WARNING: line over 80 characters", "CHECK: if this code is redundant
> consider removing it", "WARNING: Single statement macros should not
> use a do {} while (0) loop", "CHECK: memory barrier without comment",
> "WARNING: __packed is preferred over __attribute__((packed))", ...)
> should indeed be fixed.

Agreed. It'd be nice to fix the existing code in mainline Linux too btw.

Best regards,
Marek Vasut

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread

Thread overview: 12+ messages
2013-07-02 15:15 [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 1/7] USB: Backport kernel usb header file Dan Murphy
2013-07-02 15:18   ` Nishanth Menon
2013-07-02 15:23     ` Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 2/7] USB: Adapt the usb-compat.h to uboot and fix compiler errors Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 3/7] USB: Initial kernel back port of the dwc3 kernel code Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 4/7] USB: dwc3: dwc3 code adaption for uBoot Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 5/7] omap5: usb: Add usb otg clocks and enable Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 6/7] USB: Add xhci linux kernel host driver Dan Murphy
2013-07-02 15:15 ` [U-Boot] [RFC] [UBOOT] [PATCH v3 7/7] USB: Modify the xHCI to adapt to the uBoot code base Dan Murphy
2013-07-02 21:43 ` [U-Boot] Backport of DWC3 and xHCI stack from Linux kernel to uBoot Wolfgang Denk
2013-07-02 21:55   ` Marek Vasut
