* [PATCH 1/2] gpu: host1x: Flesh out kerneldoc
From: Thierry Reding @ 2017-04-10 10:27 UTC (permalink / raw)
  To: Thierry Reding; +Cc: dri-devel

From: Thierry Reding <treding@nvidia.com>

Improve kerneldoc for the public parts of the host1x infrastructure in
preparation for adding the driver-specific parts to the GPU documentation.

Signed-off-by: Thierry Reding <treding@nvidia.com>
---
 drivers/gpu/host1x/bus.c    | 75 +++++++++++++++++++++++++++++++++++++++++
 drivers/gpu/host1x/syncpt.c | 81 +++++++++++++++++++++++++++++++++++++++------
 include/linux/host1x.h      | 25 ++++++++++++++
 3 files changed, 170 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c
index 0c7ed03c235b..5e7ff8bec53d 100644
--- a/drivers/gpu/host1x/bus.c
+++ b/drivers/gpu/host1x/bus.c
@@ -40,6 +40,9 @@ struct host1x_subdev {
 
 /**
  * host1x_subdev_add() - add a new subdevice with an associated device node
+ * @device: host1x device to add the subdevice to
+ * @driver: host1x driver
+ * @np: device node
  */
 static int host1x_subdev_add(struct host1x_device *device,
 			     struct host1x_driver *driver,
@@ -80,6 +83,7 @@ static int host1x_subdev_add(struct host1x_device *device,
 
 /**
  * host1x_subdev_del() - remove subdevice
+ * @subdev: subdevice to remove
  */
 static void host1x_subdev_del(struct host1x_subdev *subdev)
 {
@@ -90,6 +94,8 @@ static void host1x_subdev_del(struct host1x_subdev *subdev)
 
 /**
  * host1x_device_parse_dt() - scan device tree and add matching subdevices
+ * @device: host1x logical device
+ * @driver: host1x driver
  */
 static int host1x_device_parse_dt(struct host1x_device *device,
 				  struct host1x_driver *driver)
@@ -190,6 +196,16 @@ static void host1x_subdev_unregister(struct host1x_device *device,
 	mutex_unlock(&device->subdevs_lock);
 }
 
+/**
+ * host1x_device_init() - initialize a host1x logical device
+ * @device: host1x logical device
+ *
+ * The driver for the host1x logical device can call this during execution of
+ * its &host1x_driver.probe implementation to initialize each of its clients.
+ * The client drivers access the subsystem-specific driver data using the
+ * &host1x_client.parent field and the driver data associated with it
+ * (usually by calling dev_get_drvdata()).
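+ *
+ * As an illustrative sketch (the "foo" names below are hypothetical), a
+ * &host1x_driver.probe implementation might store its subsystem-specific
+ * data and then initialize the clients like this::
+ *
+ *	static int foo_probe(struct host1x_device *device)
+ *	{
+ *		struct foo *foo;
+ *
+ *		foo = devm_kzalloc(&device->dev, sizeof(*foo), GFP_KERNEL);
+ *		if (!foo)
+ *			return -ENOMEM;
+ *
+ *		dev_set_drvdata(&device->dev, foo);
+ *
+ *		return host1x_device_init(device);
+ *	}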
+ */
 int host1x_device_init(struct host1x_device *device)
 {
 	struct host1x_client *client;
@@ -216,6 +232,15 @@ int host1x_device_init(struct host1x_device *device)
 }
 EXPORT_SYMBOL(host1x_device_init);
 
+/**
+ * host1x_device_exit() - uninitialize host1x logical device
+ * @device: host1x logical device
+ *
+ * When the driver for a host1x logical device is unloaded, it can call this
+ * function to tear down each of its clients. Typically this is done after a
+ * subsystem-specific data structure is removed and the functionality can no
+ * longer be used.
+ */
 int host1x_device_exit(struct host1x_device *device)
 {
 	struct host1x_client *client;
@@ -470,6 +495,14 @@ static void host1x_detach_driver(struct host1x *host1x,
 	mutex_unlock(&host1x->devices_lock);
 }
 
+/**
+ * host1x_register() - register a host1x controller
+ * @host1x: host1x controller
+ *
+ * The host1x controller driver uses this to register a host1x controller with
+ * the infrastructure. Note that all Tegra SoC generations have only ever come
+ * with a single host1x instance, so this function is somewhat academic.
+ */
 int host1x_register(struct host1x *host1x)
 {
 	struct host1x_driver *driver;
@@ -488,6 +521,13 @@ int host1x_register(struct host1x *host1x)
 	return 0;
 }
 
+/**
+ * host1x_unregister() - unregister a host1x controller
+ * @host1x: host1x controller
+ *
+ * The host1x controller driver uses this to remove a host1x controller from
+ * the infrastructure.
+ */
 int host1x_unregister(struct host1x *host1x)
 {
 	struct host1x_driver *driver;
@@ -537,6 +577,16 @@ static void host1x_device_shutdown(struct device *dev)
 		driver->shutdown(device);
 }
 
+/**
+ * host1x_driver_register_full() - register a host1x driver
+ * @driver: host1x driver
+ * @owner: owner module
+ *
+ * Drivers for host1x logical devices call this function to register a driver
+ * with the infrastructure. Note that since these drivers drive logical
+ * devices, registering a driver actually triggers the creation of the logical
+ * devices. One logical device will be created for each host1x instance.
+ */
 int host1x_driver_register_full(struct host1x_driver *driver,
 				struct module *owner)
 {
@@ -565,6 +615,13 @@ int host1x_driver_register_full(struct host1x_driver *driver,
 }
 EXPORT_SYMBOL(host1x_driver_register_full);
 
+/**
+ * host1x_driver_unregister() - unregister a host1x driver
+ * @driver: host1x driver
+ *
+ * Unbinds the driver from each of the host1x logical devices that it is
+ * bound to, effectively removing the subsystem devices that they represent.
+ */
 void host1x_driver_unregister(struct host1x_driver *driver)
 {
 	driver_unregister(&driver->driver);
@@ -575,6 +632,17 @@ void host1x_driver_unregister(struct host1x_driver *driver)
 }
 EXPORT_SYMBOL(host1x_driver_unregister);
 
+/**
+ * host1x_client_register() - register a host1x client
+ * @client: host1x client
+ *
+ * Registers a host1x client with each host1x controller instance. Note that
+ * each client will only match its parent host1x controller and will only be
+ * associated with that instance. Once all clients have been registered with
+ * their parent host1x controller, the infrastructure will set up the logical
+ * device and call host1x_device_init(), which will in turn call each client's
+ * &host1x_client_ops.init implementation.
+ */
 int host1x_client_register(struct host1x_client *client)
 {
 	struct host1x *host1x;
@@ -600,6 +668,13 @@ int host1x_client_register(struct host1x_client *client)
 }
 EXPORT_SYMBOL(host1x_client_register);
 
+/**
+ * host1x_client_unregister() - unregister a host1x client
+ * @client: host1x client
+ *
+ * Removes a host1x client from its host1x controller instance. If a logical
+ * device has already been initialized, it will be torn down.
+ */
 int host1x_client_unregister(struct host1x_client *client)
 {
 	struct host1x_client *c;
diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
index 0ac026cdc30c..048ac9e344ce 100644
--- a/drivers/gpu/host1x/syncpt.c
+++ b/drivers/gpu/host1x/syncpt.c
@@ -99,14 +99,24 @@ static struct host1x_syncpt *host1x_syncpt_alloc(struct host1x *host,
 	return NULL;
 }
 
+/**
+ * host1x_syncpt_id() - retrieve syncpoint ID
+ * @sp: host1x syncpoint
+ *
+ * Given a pointer to a struct host1x_syncpt, retrieves its ID. This ID is
+ * often used as a value to program into registers that control how hardware
+ * blocks interact with syncpoints.
+ */
 u32 host1x_syncpt_id(struct host1x_syncpt *sp)
 {
 	return sp->id;
 }
 EXPORT_SYMBOL(host1x_syncpt_id);
 
-/*
- * Updates the value sent to hardware.
+/**
+ * host1x_syncpt_incr_max() - update the value sent to hardware
+ * @sp: host1x syncpoint
+ * @incrs: number of increments
  */
 u32 host1x_syncpt_incr_max(struct host1x_syncpt *sp, u32 incrs)
 {
@@ -175,8 +185,9 @@ u32 host1x_syncpt_load_wait_base(struct host1x_syncpt *sp)
 	return sp->base_val;
 }
 
-/*
- * Increment syncpoint value from cpu, updating cache
+/**
+ * host1x_syncpt_incr() - increment syncpoint value from CPU, updating cache
+ * @sp: host1x syncpoint
  */
 int host1x_syncpt_incr(struct host1x_syncpt *sp)
 {
@@ -195,8 +206,12 @@ static bool syncpt_load_min_is_expired(struct host1x_syncpt *sp, u32 thresh)
 	return host1x_syncpt_is_expired(sp, thresh);
 }
 
-/*
- * Main entrypoint for syncpoint value waits.
+/**
+ * host1x_syncpt_wait() - wait for a syncpoint to reach a given value
+ * @sp: host1x syncpoint
+ * @thresh: threshold
+ * @timeout: maximum time to wait for the syncpoint to reach the given value
+ * @value: return location for the syncpoint value
  */
 int host1x_syncpt_wait(struct host1x_syncpt *sp, u32 thresh, long timeout,
 		       u32 *value)
@@ -402,6 +417,16 @@ int host1x_syncpt_init(struct host1x *host)
 	return 0;
 }
 
+/**
+ * host1x_syncpt_request() - request a syncpoint
+ * @dev: device requesting the syncpoint
+ * @flags: flags
+ *
+ * host1x client drivers can use this function to allocate a syncpoint for
+ * subsequent use. A syncpoint returned by this function will be reserved for
+ * use by the client exclusively. When no longer using a syncpoint, a host1x
+ * client driver needs to release it using host1x_syncpt_free().
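+ *
+ * A minimal usage sketch, with error handling abbreviated::
+ *
+ *	struct host1x_syncpt *sp;
+ *
+ *	sp = host1x_syncpt_request(dev, 0);
+ *	if (!sp)
+ *		return -ENOMEM;
+ *
+ *	... program host1x_syncpt_id(sp) into hardware, use the syncpoint ...
+ *
+ *	host1x_syncpt_free(sp);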
+ */
 struct host1x_syncpt *host1x_syncpt_request(struct device *dev,
 					    unsigned long flags)
 {
@@ -411,6 +436,16 @@ struct host1x_syncpt *host1x_syncpt_request(struct device *dev,
 }
 EXPORT_SYMBOL(host1x_syncpt_request);
 
+/**
+ * host1x_syncpt_free() - free a requested syncpoint
+ * @sp: host1x syncpoint
+ *
+ * Release a syncpoint previously allocated using host1x_syncpt_request(). A
+ * host1x client driver should call this when the syncpoint is no longer in
+ * use. Note that client drivers must ensure that the syncpoint doesn't remain
+ * under the control of hardware after calling this function, otherwise two
+ * clients may end up trying to access the same syncpoint concurrently.
+ */
 void host1x_syncpt_free(struct host1x_syncpt *sp)
 {
 	if (!sp)
@@ -438,9 +473,12 @@ void host1x_syncpt_deinit(struct host1x *host)
 		kfree(sp->name);
 }
 
-/*
- * Read max. It indicates how many operations there are in queue, either in
- * channel or in a software thread.
+/**
+ * host1x_syncpt_read_max() - read maximum syncpoint value
+ * @sp: host1x syncpoint
+ *
+ * The maximum syncpoint value indicates how many operations are queued,
+ * either in a channel or in a software thread.
  */
 u32 host1x_syncpt_read_max(struct host1x_syncpt *sp)
 {
@@ -450,8 +488,12 @@ u32 host1x_syncpt_read_max(struct host1x_syncpt *sp)
 }
 EXPORT_SYMBOL(host1x_syncpt_read_max);
 
-/*
- * Read min, which is a shadow of the current sync point value in hardware.
+/**
+ * host1x_syncpt_read_min() - read minimum syncpoint value
+ * @sp: host1x syncpoint
+ *
+ * The minimum syncpoint value is a shadow of the current syncpoint value in
+ * hardware.
  */
 u32 host1x_syncpt_read_min(struct host1x_syncpt *sp)
 {
@@ -461,6 +503,10 @@ u32 host1x_syncpt_read_min(struct host1x_syncpt *sp)
 }
 EXPORT_SYMBOL(host1x_syncpt_read_min);
 
+/**
+ * host1x_syncpt_read() - read the current syncpoint value
+ * @sp: host1x syncpoint
+ */
 u32 host1x_syncpt_read(struct host1x_syncpt *sp)
 {
 	return host1x_syncpt_load(sp);
@@ -482,6 +528,11 @@ unsigned int host1x_syncpt_nb_mlocks(struct host1x *host)
 	return host->info->nb_mlocks;
 }
 
+/**
+ * host1x_syncpt_get() - obtain a syncpoint by ID
+ * @host: host1x controller
+ * @id: syncpoint ID
+ */
 struct host1x_syncpt *host1x_syncpt_get(struct host1x *host, unsigned int id)
 {
 	if (id >= host->info->nb_pts)
@@ -491,12 +542,20 @@ struct host1x_syncpt *host1x_syncpt_get(struct host1x *host, unsigned int id)
 }
 EXPORT_SYMBOL(host1x_syncpt_get);
 
+/**
+ * host1x_syncpt_get_base() - obtain the wait base associated with a syncpoint
+ * @sp: host1x syncpoint
+ */
 struct host1x_syncpt_base *host1x_syncpt_get_base(struct host1x_syncpt *sp)
 {
 	return sp ? sp->base : NULL;
 }
 EXPORT_SYMBOL(host1x_syncpt_get_base);
 
+/**
+ * host1x_syncpt_base_id() - retrieve the ID of a syncpoint wait base
+ * @base: host1x syncpoint wait base
+ */
 u32 host1x_syncpt_base_id(struct host1x_syncpt_base *base)
 {
 	return base->id;
diff --git a/include/linux/host1x.h b/include/linux/host1x.h
index 3d04aa1dc83e..840a8ad627b2 100644
--- a/include/linux/host1x.h
+++ b/include/linux/host1x.h
@@ -32,11 +32,27 @@ enum host1x_class {
 
 struct host1x_client;
 
+/**
+ * struct host1x_client_ops - host1x client operations
+ * @init: host1x client initialization code
+ * @exit: host1x client tear down code
+ */
 struct host1x_client_ops {
 	int (*init)(struct host1x_client *client);
 	int (*exit)(struct host1x_client *client);
 };
 
+/**
+ * struct host1x_client - host1x client structure
+ * @list: list node for the host1x client
+ * @parent: pointer to struct device representing the host1x controller
+ * @dev: pointer to struct device backing this host1x client
+ * @ops: host1x client operations
+ * @class: host1x class represented by this client
+ * @channel: host1x channel associated with this client
+ * @syncpts: array of syncpoints requested for this client
+ * @num_syncpts: number of syncpoints requested for this client
+ */
 struct host1x_client {
 	struct list_head list;
 	struct device *parent;
@@ -251,6 +267,15 @@ void host1x_job_unpin(struct host1x_job *job);
 
 struct host1x_device;
 
+/**
+ * struct host1x_driver - host1x logical device driver
+ * @driver: core driver
+ * @subdevs: table of OF device IDs matching subdevices for this driver
+ * @list: list node for the driver
+ * @probe: called when the host1x logical device is probed
+ * @remove: called when the host1x logical device is removed
+ * @shutdown: called when the host1x logical device is shut down
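+ *
+ * An illustrative sketch of a driver definition, with hypothetical "foo"
+ * names::
+ *
+ *	static const struct of_device_id foo_subdevs[] = {
+ *		{ .compatible = "vendor,foo" },
+ *		{ }
+ *	};
+ *
+ *	static struct host1x_driver foo_driver = {
+ *		.driver = {
+ *			.name = "foo",
+ *		},
+ *		.subdevs = foo_subdevs,
+ *		.probe = foo_probe,
+ *		.remove = foo_remove,
+ *	};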
+ */
 struct host1x_driver {
 	struct device_driver driver;
 
-- 
2.12.0

* [PATCH 2/2] drm/tegra: Add driver documentation
From: Thierry Reding @ 2017-04-10 10:27 UTC (permalink / raw)
  To: Thierry Reding; +Cc: dri-devel

From: Thierry Reding <treding@nvidia.com>

Adds some driver documentation for Tegra. It provides a short overview
of the hardware and software architectures.

Signed-off-by: Thierry Reding <treding@nvidia.com>
---
 Documentation/gpu/index.rst |   1 +
 Documentation/gpu/tegra.rst | 178 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 179 insertions(+)
 create mode 100644 Documentation/gpu/tegra.rst

diff --git a/Documentation/gpu/index.rst b/Documentation/gpu/index.rst
index c572f092739e..feae37fa7ca3 100644
--- a/Documentation/gpu/index.rst
+++ b/Documentation/gpu/index.rst
@@ -12,6 +12,7 @@ Linux GPU Driver Developer's Guide
    drm-uapi
    i915
    meson
+   tegra
    tinydrm
    vc4
    vga-switcheroo
diff --git a/Documentation/gpu/tegra.rst b/Documentation/gpu/tegra.rst
new file mode 100644
index 000000000000..d2ed8938ca43
--- /dev/null
+++ b/Documentation/gpu/tegra.rst
@@ -0,0 +1,178 @@
+===============================================
+ drm/tegra NVIDIA Tegra GPU and display driver
+===============================================
+
+NVIDIA Tegra SoCs support a set of display, graphics and video functions via
+the host1x controller. host1x supplies command streams, gathered from a push
+buffer provided directly by the CPU, to its clients via channels. Syncpoints
+are used for synchronization, both between software and the hardware blocks
+and amongst the blocks themselves.
+
+Up until, but not including, Tegra124 (aka Tegra K1), the drm/tegra driver
+supports the built-in GPU, comprising the gr2d and gr3d engines. Starting
+with Tegra124, the GPU is based on the NVIDIA desktop GPU architecture and
+supported by the drm/nouveau driver.
+
+The drm/tegra driver supports NVIDIA Tegra SoC generations since Tegra20. It
+has three parts:
+
+  - A host1x driver that provides infrastructure and access to the host1x
+    services.
+
+  - A KMS driver that supports the display controllers as well as a number of
+    outputs, such as RGB, HDMI, DSI, and DisplayPort.
+
+  - A set of custom userspace IOCTLs that can be used to submit jobs to the
+    GPU and video engines via host1x.
+
+Driver Infrastructure
+=====================
+
+The various host1x clients need to be bound together into a logical device in
+order to expose their functionality to users. The infrastructure that supports
+this is implemented in the host1x driver. When a driver is registered with the
+infrastructure it provides a list of compatible strings specifying the devices
+that it needs. The infrastructure creates a logical device and scans the device
+tree for matching device nodes, adding the required clients to a list. Drivers
+for individual clients register with the infrastructure as well and are added
+to the logical host1x device.
+
+Once all clients are available, the infrastructure will initialize the logical
+device using a driver-provided function which will set up the bits specific to
+the subsystem and in turn initialize each of its clients.
+
+Similarly, when one of the clients is unregistered, the infrastructure will
+destroy the logical device by calling back into the driver, which ensures that
+the subsystem-specific bits are torn down and the clients destroyed in turn.
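+
+As an illustration, a client driver (a regular platform driver) might hook
+into this infrastructure roughly as follows. This is a minimal sketch rather
+than actual driver code: the "foo" names are hypothetical, struct foo is
+assumed to embed a struct host1x_client named client, and error handling is
+abbreviated.
+
+.. code-block:: c
+
+    static int foo_client_init(struct host1x_client *client)
+    {
+        /* set up the subsystem-specific (e.g. DRM) side of this client */
+        return 0;
+    }
+
+    static int foo_client_exit(struct host1x_client *client)
+    {
+        /* undo whatever foo_client_init() set up */
+        return 0;
+    }
+
+    static const struct host1x_client_ops foo_client_ops = {
+        .init = foo_client_init,
+        .exit = foo_client_exit,
+    };
+
+    static int foo_probe(struct platform_device *pdev)
+    {
+        struct foo *foo;
+
+        foo = devm_kzalloc(&pdev->dev, sizeof(*foo), GFP_KERNEL);
+        if (!foo)
+            return -ENOMEM;
+
+        INIT_LIST_HEAD(&foo->client.list);
+        foo->client.ops = &foo_client_ops;
+        foo->client.dev = &pdev->dev;
+
+        return host1x_client_register(&foo->client);
+    }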
+
+Host1x Infrastructure Reference
+-------------------------------
+
+.. kernel-doc:: include/linux/host1x.h
+
+.. kernel-doc:: drivers/gpu/host1x/bus.c
+   :export:
+
+Host1x Syncpoint Reference
+--------------------------
+
+.. kernel-doc:: drivers/gpu/host1x/syncpt.c
+   :export:
+
+KMS driver
+==========
+
+The display hardware has remained mostly backwards compatible over the various
+Tegra SoC generations, up until Tegra186, which introduces several changes
+that make it difficult to support with a parameterized driver.
+
+Display Controllers
+-------------------
+
+Tegra SoCs have two display controllers, each of which can be associated with
+zero or more outputs. Outputs can also share a single display controller, but
+only if they run with compatible display timings. Two display controllers can
+also share a single framebuffer, allowing cloned configurations even if modes
+on two outputs don't match. A display controller is modelled as a CRTC in KMS
+terms.
+
+On Tegra186, the number of display controllers has been increased to three. A
+display controller can no longer drive all of the outputs. While two of these
+controllers can drive both DSI outputs and both SOR outputs, the third cannot
+drive any DSI output.
+
+Windows
+~~~~~~~
+
+A display controller controls a set of windows that can be used to composite
+multiple buffers onto the screen. While it is possible to assign arbitrary Z
+ordering to individual windows (by programming the corresponding blending
+registers), this is currently not supported by the driver. Instead, it will
+assume a fixed Z ordering of the windows (window A is the root window, that
+is, the lowest, while windows B and C are overlaid on top of window A). The
+overlay windows support multiple pixel formats and can automatically convert
+from YUV to RGB at scanout time. This makes them useful for displaying video
+content. In KMS, each window is modelled as a plane. Each display controller
+has a hardware cursor that is exposed as a cursor plane.
+
+Outputs
+-------
+
+The type and number of supported outputs vary between Tegra SoC generations.
+All generations support at least HDMI. While earlier generations supported the
+very simple RGB interfaces (one per display controller), recent generations no
+longer do and instead provide standard interfaces such as DSI and eDP/DP.
+
+Outputs are modelled as a composite encoder/connector pair.
+
+RGB/LVDS
+~~~~~~~~
+
+Starting with Tegra124, this interface is no longer available. It has been
+replaced by the more standard DSI and eDP interfaces.
+
+HDMI
+~~~~
+
+HDMI is supported on all Tegra SoCs. Starting with Tegra210, HDMI is provided
+by the versatile SOR output, which supports eDP, DP and HDMI. The SOR is able
+to support HDMI 2.0, though support for this is currently not merged.
+
+DSI
+~~~
+
+Although Tegra has supported DSI since Tegra30, the controller has changed in
+several ways in Tegra114. Since none of the publicly available development
+boards prior to Dalmore (Tegra114) have made use of DSI, only Tegra114 and
+later are supported by the drm/tegra driver.
+
+eDP/DP
+~~~~~~
+
+eDP was first introduced in Tegra124, where it was used to drive the display
+panel for notebook form factors. Tegra210 added full DisplayPort support,
+though this is currently not implemented in the drm/tegra driver.
+
+Userspace Interface
+===================
+
+The userspace interface provided by drm/tegra allows applications to create
+GEM buffers, access and control syncpoints as well as submit command streams
+to host1x.
+
+GEM Buffers
+-----------
+
+The ``DRM_IOCTL_TEGRA_GEM_CREATE`` IOCTL is used to create a GEM buffer object
+with Tegra-specific flags. This is useful for buffers that should be tiled, or
+that are to be scanned out upside down (as is common for 3D-rendered content).
+
+After a GEM buffer object has been created, its memory can be mapped by an
+application using the mmap offset returned by the ``DRM_IOCTL_TEGRA_GEM_MMAP``
+IOCTL.
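+
+For illustration, creating and mapping a buffer might look like the sketch
+below. Struct layouts follow ``include/uapi/drm/tegra_drm.h`` and may differ
+between kernel versions; ``fd`` is an open DRM device file descriptor and
+error handling is elided.
+
+.. code-block:: c
+
+    #include <sys/ioctl.h>
+    #include <sys/mman.h>
+    #include <drm/tegra_drm.h>
+
+    struct drm_tegra_gem_create create = {
+        .size = 4096,
+        .flags = 0, /* e.g. DRM_TEGRA_GEM_CREATE_TILED */
+    };
+    struct drm_tegra_gem_mmap map = { 0 };
+    void *ptr;
+
+    ioctl(fd, DRM_IOCTL_TEGRA_GEM_CREATE, &create);
+
+    map.handle = create.handle;
+    ioctl(fd, DRM_IOCTL_TEGRA_GEM_MMAP, &map);
+
+    ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
+               fd, map.offset);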
+
+Syncpoints
+----------
+
+The current value of a syncpoint can be obtained by executing the
+``DRM_IOCTL_TEGRA_SYNCPT_READ`` IOCTL. Incrementing the syncpoint is achieved
+using the ``DRM_IOCTL_TEGRA_SYNCPT_INCR`` IOCTL.
+
+Userspace can also request blocking on a syncpoint. To do so, it needs to
+execute the ``DRM_IOCTL_TEGRA_SYNCPT_WAIT`` IOCTL, specifying the value of
+the syncpoint to wait for. The kernel will release the application when the
+syncpoint reaches that value or after a specified timeout.
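+
+Below is a short sketch of these IOCTLs in use. Struct layouts follow
+``include/uapi/drm/tegra_drm.h``; ``fd`` is an open DRM device, ``id`` a
+syncpoint ID, the timeout is assumed to be in milliseconds, and error
+handling is elided.
+
+.. code-block:: c
+
+    struct drm_tegra_syncpt_read read = { .id = id };
+    struct drm_tegra_syncpt_wait wait = { .id = id };
+
+    /* read the current value, then wait for one more increment */
+    ioctl(fd, DRM_IOCTL_TEGRA_SYNCPT_READ, &read);
+
+    wait.thresh = read.value + 1;
+    wait.timeout = 1000;
+    ioctl(fd, DRM_IOCTL_TEGRA_SYNCPT_WAIT, &wait);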
+
+Command Stream Submission
+-------------------------
+
+Before an application can submit command streams to host1x it needs to open a
+channel to an engine using the ``DRM_IOCTL_TEGRA_OPEN_CHANNEL`` IOCTL. Client
+IDs are used to identify the target of the channel. When a channel is no
+longer needed, it can be closed using the ``DRM_IOCTL_TEGRA_CLOSE_CHANNEL``
+IOCTL. To retrieve the syncpoint associated with a channel, an application
+can use the ``DRM_IOCTL_TEGRA_GET_SYNCPT`` IOCTL.
+
+Once a channel has been opened, submitting command streams is straightforward.
+The application writes commands into the memory backing a GEM buffer object
+and passes these to the ``DRM_IOCTL_TEGRA_SUBMIT`` IOCTL along with various
+other parameters, such as the syncpoints or relocations used in the job
+submission.
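+
+A rough sketch of the channel lifecycle is shown below. Struct layouts follow
+``include/uapi/drm/tegra_drm.h``; the submit arguments are elaborate and only
+hinted at here, and error handling is elided.
+
+.. code-block:: c
+
+    struct drm_tegra_open_channel open_args = {
+        .client = 0x51, /* host1x class of the target engine, here gr2d */
+    };
+    struct drm_tegra_get_syncpt syncpt_args = { .index = 0 };
+    struct drm_tegra_close_channel close_args;
+
+    ioctl(fd, DRM_IOCTL_TEGRA_OPEN_CHANNEL, &open_args);
+
+    syncpt_args.context = open_args.context;
+    ioctl(fd, DRM_IOCTL_TEGRA_GET_SYNCPT, &syncpt_args);
+
+    /*
+     * Fill a struct drm_tegra_submit with command buffers, relocations
+     * and syncpoint increments, then call DRM_IOCTL_TEGRA_SUBMIT.
+     */
+
+    close_args.context = open_args.context;
+    ioctl(fd, DRM_IOCTL_TEGRA_CLOSE_CHANNEL, &close_args);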
-- 
2.12.0

* Re: [PATCH 2/2] drm/tegra: Add driver documentation
From: Daniel Vetter @ 2017-04-10 12:05 UTC (permalink / raw)
  To: Thierry Reding; +Cc: dri-devel

On Mon, Apr 10, 2017 at 12:27:02PM +0200, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
> 
> Adds some driver documentation for Tegra. It provides a short overview
> of the hardware and software architectures.
> 
> Signed-off-by: Thierry Reding <treding@nvidia.com>

Yay for docs! On both patches:

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> [...]

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
