From: Andrzej Hajda <a.hajda@samsung.com>
To: Tomi Valkeinen <tomi.valkeinen@ti.com>,
	Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Cc: linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org,
	Jesse Barnes <jesse.barnes@intel.com>,
	Benjamin Gaignard <benjamin.gaignard@linaro.org>,
	Tom Gall <tom.gall@linaro.org>,
	Kyungmin Park <kyungmin.park@samsung.com>,
	linux-media@vger.kernel.org,
	Stephen Warren <swarren@wwwdotorg.org>,
	Mark Zhang <markz@nvidia.com>,
	Alexandre Courbot <acourbot@nvidia.com>,
	Ragesh Radhakrishnan <Ragesh.R@linaro.org>,
	Thomas Petazzoni <thomas.petazzoni@free-electrons.com>,
	Sunil Joshi <joshi@samsung.com>,
	Maxime Ripard <maxime.ripard@free-electrons.com>,
	Vikas Sajjan <vikas.sajjan@linaro.org>,
	Marcus Lorentzon <marcus.lorentzon@huawei.com>
Subject: Re: [PATCH/RFC v3 00/19] Common Display Framework
Date: Fri, 11 Oct 2013 13:19:17 +0200	[thread overview]
Message-ID: <5257DEB5.6000708@samsung.com> (raw)
In-Reply-To: <52579CB2.8050601@ti.com>

On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
> On 09/10/13 17:08, Andrzej Hajda wrote:
>
>> As I have adopted an existing internal driver for the MIPI-DSI bus, I did
>> not take too much care for DT. You are right, 'bta-timeout' is a
>> configuration parameter (however its minimal value is determined by the
>> characteristics of the DSI-slave). On the other side, currently there is
>> no good place for such configuration parameters AFAIK.
> The minimum bta-timeout should be deducible from the DSI bus speed,
> shouldn't it? Thus there's no need to define it anywhere.
Hmm, the specification says "This specified period shall be longer than
the maximum possible turnaround delay for the unit to which the
turnaround request was sent".
>
>>> - enable_hs and enable_te, used to enable/disable HS mode and
>>> tearing-elimination
>> It seems there should be a way to synchronize TE signal with panel,
>> in case signal is provided only to dsi-master. Some callback I suppose?
>> Or transfer synchronization should be done by dsi-master.
> Hmm, can you explain a bit what you mean?
>
> Do you mean that the panel driver should get a callback when DSI TE
> trigger happens?
>
> On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
> panel driver just calls update() on the dsi-master, and then the
> dsi-master will wait for TE, and then start the transfer. There's also a
> callback to the panel driver when the transfer has completed.
Yes, I thought about a callback, but the approach with the DSI-master
taking care of synchronization in fact fits exynos-dsi better, and I
suspect omap as well.
>
>>> - set_max_rx_packet_size, used to configure the max rx packet size.
>> Similar callbacks should be added to mipi-dsi-bus ops as well, to
>> make it complete/generic.
> Do you mean the same calls should exist both in the mipi-dbi-bus ops and
> on the video ops? If they are called with different values, which one
> "wins"?
No, I meant that if mipi-dbi-bus wants to be complete it should have
similar ops.
I did not think about a scenario with two overlapping APIs.
>
>>> http://article.gmane.org/gmane.comp.video.dri.devel/90651
>>> http://article.gmane.org/gmane.comp.video.dri.devel/91269
>>> http://article.gmane.org/gmane.comp.video.dri.devel/91272
>>>
>>> I still think that it's best to consider DSI and DBI as a video bus (not
>>> as a separate video bus and a control bus), and provide the packet
>>> transfer methods as part of the video ops.
>> I have read all posts regarding this issue and currently I tend
>> to solution where CDF is used to model only video streams,
>> with control bus implemented in different framework.
>> The only concerns I have if we should use Linux bus for that.
> Ok. I have many other concerns, as I've expressed in the mails =). I
> still don't see how it could work. So I'd very much like to see a more
> detailed explanation how the separate control & video bus approach would
> deal with different scenarios.
>
> Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
> controlled via DSI, version B is controlled via i2c. As the output of
> the chip goes to HDMI connector, the DSI bus speed needs to be set
> according to the resolution of the HDMI monitor.
>
> So, with version A, the encoder driver would have some kind of pointers
> to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
> video_bus instance), right? The ctrl_ops would need to have ops like
> set_bus_speed, enable_hs, etc, to configure the DSI bus.
>
> When the encoder driver is started, it'd probably set some safe bus
> speed, configure the encoder a bit, read the EDID, enable HS,
> re-configure the bus speed to match the monitor's video mode, configure
> the encoder, and at last enable the video stream.
>
> Version B would have i2c_client and video_ops. When the driver starts,
> it'd  probably do the same things as above, except the control messages
> would go through i2c. That means that setting the bus speed, enabling
> HS, etc, would happen through video_ops, as the i2c side has no
> knowledge of the DSI side, right? Would there be identical ops on both
> DSI ctrl and video ops?
>
> That sounds very bad. What am I missing here? How would it work?
If I understand correctly you think about a CDF topology like below:

DispContr(SoC) ---> DSI-master(SoC) ---> encoder(DSI or I2C)

But I think with mipi-dsi-bus the topology could look like:

DispContr(SoC) ---> encoder(DSI or I2C)

The DSI-master will not have its own entity; in the graph it could be
represented by the link (--->), as it does not really process the video,
only transports it.

In case of version A I think everything is clear.
In case of version B it does not seem so nice at first sight, but it
still seems quite straightforward to me: a special phandle link in the
encoder's node pointing to the DSI-master; the driver will find the
device at runtime and use its ops as needed (additional ops/helpers
required).
This is also the way to support devices which can be controlled by DSI
and I2C at the same time. Anyway, I suspect such a scenario will be
quite rare.

>
> And, if we want to separate the video and control, I see no reason to
> explicitly require the video side to be present. I.e. we could as well
> have a DSI peripheral that has only the control bus used. How would that
> reflect to, say, the DT presentation? Say, if we have a version A of the
> encoder, we could have DT data like this (just a rough example):
>
> soc-dsi {
> 	encoder {
> 		input: endpoint {
> 			remote-endpoint = <&soc-dsi-ep>;
Here I would replace &soc-dsi-ep with a phandle to the display controller/crtc/...

> 			/* configuration for the DSI lanes */
> 			dsi-lanes = <0 1 2 3 4 5>;
Wow, quite advanced DSI.
> 		};
> 	};
> };
>
> So the encoder would be placed inside the SoC's DSI node, similar to how
> an i2c device would be placed inside SoC's i2c node. DSI configuration
> would be inside the video endpoint data.
>
> Version B would be almost the same:
>
> &i2c0 {
> 	encoder {
> 		input: endpoint {
> 			remote-endpoint = <&soc-dsi-ep>;
&soc-dsi-ep => &disp-ctrl-ep
> 			/* configuration for the DSI lanes */
> 			dsi-lanes = <0 1 2 3 4 5>;
> 		};
> 	};
> };
>
> Now, how would the video-bus-less device be defined?
> It'd be inside the
> soc-dsi node, that's clear. Where would the DSI lane configuration be?
> Not inside 'endpoint' node, as that's for video and wouldn't exist in
> this case. Would we have the same lane configuration in two places, once
> for video and once for control?
I think it is a control setting, so it should be put outside the
endpoint node. Probably it could be placed in the encoder node.
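So a control-only peripheral could look roughly like this (a sketch only; node and property names follow the example above and are not a defined binding):

```dts
soc-dsi {
	encoder {
		/* control-only peripheral: no video endpoint node;
		 * the DSI lane configuration sits in the device node */
		dsi-lanes = <0 1 2 3 4 5>;
	};
};
```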
>
> I agree that having DSI/DBI control and video separated would be
> elegant. But I'd like to hear what is the technical benefit of that? At
> least to me it's clearly more complex to separate them than to keep them
> together (to the extent that I don't yet see how it is even possible),
> so there must be a good reason for the separation. I don't understand
> that reason. What is it?
Roughly speaking, it is a question of which is the more convenient place
to put the bunch of ops; technically both solutions can be implemented
somehow.

Pros of mipi bus:
- no fake entity in CDF with fake ops; I have to use similar entities in
MIPI-CSI camera pipelines and it complicates life without any benefit
(at least from the user's side),
- CDF models only video buses; the control bus is a domain of Linux buses,
- less platform_bus abuse,
- better device tree topology (at least for common cases),
- quite simple in case of typical devices.

Regards
Andrzej
>
>  Tomi
>
>

