From: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
To: Tomasz Figa <tfiga@chromium.org>
Cc: "Hans Verkuil" <hverkuil@xs4all.nl>,
	"Jerry-ch Chen" <Jerry-Ch.chen@mediatek.com>,
	"Laurent Pinchart" <laurent.pinchart+renesas@ideasonboard.com>,
	"Matthias Brugger" <matthias.bgg@gmail.com>,
	"Mauro Carvalho Chehab" <mchehab@kernel.org>,
	"Pi-Hsun Shih" <pihsun@chromium.org>,
	yuzhao@chromium.org, zwisler@chromium.org,
	"moderated list:ARM/Mediatek SoC support"
	<linux-mediatek@lists.infradead.org>,
	"list@263.net:IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>,"
	<linux-arm-kernel@lists.infradead.org>,
	"Sean Cheng (鄭昇弘)" <Sean.Cheng@mediatek.com>,
	"Sj Huang" <sj.huang@mediatek.com>,
	"Christie Yu (游雅惠)" <christie.yu@mediatek.com>,
	"Frederic Chen (陳俊元)" <frederic.chen@mediatek.com>,
	"Jungo Lin (林明俊)" <jungo.lin@mediatek.com>,
	"Rynn Wu (吳育恩)" <Rynn.Wu@mediatek.com>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	srv_heupstream <srv_heupstream@mediatek.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	"Jerry-ch Chen" <jerry-ch.chen@mediatek.corp-partner.google.com>
Subject: Re: [RFC PATCH V4 1/4] media: v4l2-mem2mem: add v4l2_m2m_suspend, v4l2_m2m_resume
Date: Sun, 14 Jun 2020 19:43:18 -0300
Message-ID: <CAAEAJfCyqHKtwV_mVFOmtAiaCWc-mLTt=TNh8XgRag32WeWEgw@mail.gmail.com>
In-Reply-To: <CAAFQd5BH3xLNGAtbdH0Wy9KqQo30SFL9q9rcjZz8M3eviyY=hA@mail.gmail.com>

On Wed, 10 Jun 2020 at 16:26, Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Wed, Jun 10, 2020 at 9:14 PM Ezequiel Garcia
> <ezequiel@vanguardiasur.com.ar> wrote:
> >
> > On Wed, 10 Jun 2020 at 16:03, Tomasz Figa <tfiga@chromium.org> wrote:
> > >
> > > On Wed, Jun 10, 2020 at 03:52:39PM -0300, Ezequiel Garcia wrote:
> > > > Hi everyone,
> > > >
> > > > Thanks for the patch.
> > > >
> > > > On Wed, 10 Jun 2020 at 07:33, Tomasz Figa <tfiga@chromium.org> wrote:
> > > > >
> > > > > On Wed, Jun 10, 2020 at 12:29 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
> > > > > >
> > > > > > On 21/05/2020 19:11, Tomasz Figa wrote:
> > > > > > > Hi Jerry,
> > > > > > >
> > > > > > > On Wed, Dec 04, 2019 at 08:47:29PM +0800, Jerry-ch Chen wrote:
> > > > > > >> From: Pi-Hsun Shih <pihsun@chromium.org>
> > > > > > >>
> > > > > > >> Add two functions that can be used to stop new jobs from being queued
> > > > > > >> and to continue running queued jobs. A driver using the m2m helpers
> > > > > > >> can call these when it is about to suspend or has just resumed,
> > > > > > >> ensuring that no job is running while the system is suspended.
> > > [snip]
> > > > > >
> > > > > > I assume this will be part of a future patch series that calls these new functions?
> > > > >
> > > > > The mtk-jpeg encoder series depends on this patch as well, so I guess
> > > > > it would go together with whichever is ready first.
> > > > >
> > > > > I would also envision someone changing the other existing drivers to
> > > > > use the helpers, as I'm pretty much sure some of them don't handle
> > > > > suspend/resume correctly.
> > > > >
> > > >
> > > > This indeed looks very good. If I understood the issue properly,
> > > > the change would be useful for both stateless (e.g. hantro, et al)
> > > > and stateful (e.g. coda) codecs.
> > > >
> > > > Hantro uses pm_runtime_force_suspend, and I believe that could be
> > > > enough for proper suspend/resume operation.
> > >
> > > Unfortunately, no. :(
> > >
> > > If the decoder is already decoding a frame, that would forcefully power
> > > off the hardware, and could even cause a system lockup if we are
> > > unlucky enough to gate a clock in the middle of a bus transaction.
> > >
> >
> > pm_runtime_force_suspend calls pm_runtime_disable, which
> > says:
> >
> > """
> >  Increment power.disable_depth for the device and if it was zero previously,
> >  cancel all pending runtime PM requests for the device and wait for all
> >  operations in progress to complete.
> > """
> >
> > Doesn't this mean it waits for the current job (if there is one) and
> > prevents any new jobs from being issued?
> >
>
> I'd love it if the runtime PM subsystem handled job management for all
> the driver subsystems automatically, but at the moment it's not aware of
> any jobs. :) The description says only what it says - it cancels the
> internal work of the runtime PM subsystem itself, i.e. asynchronous
> suspend/resume requests. It doesn't have any awareness of V4L2 M2M jobs.
>

Doh, of course. I saw "pending requests" and
somehow imagined it would wait for the runtime_put.

I see now that these patches are the way to go.
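
Just to check my understanding of how a driver would consume these, here
is a rough sketch of wiring the helpers into the system sleep callbacks,
assuming they keep the signatures proposed in this series (taking a
struct v4l2_m2m_dev *). The foo_* names are placeholders, not a real
driver:

#include <linux/device.h>
#include <linux/pm.h>
#include <media/v4l2-mem2mem.h>

/* Hypothetical driver data, for illustration only. */
struct foo_dev {
        struct device *dev;
        struct v4l2_m2m_dev *m2m_dev;
};

static int __maybe_unused foo_suspend(struct device *dev)
{
        struct foo_dev *foo = dev_get_drvdata(dev);

        /* Stop queueing new jobs and wait for the running one to finish. */
        v4l2_m2m_suspend(foo->m2m_dev);
        return 0;
}

static int __maybe_unused foo_resume(struct device *dev)
{
        struct foo_dev *foo = dev_get_drvdata(dev);

        /* Let queued jobs run again. */
        v4l2_m2m_resume(foo->m2m_dev);
        return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
        SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
};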

> > > I just inspected the code now and actually found one more bug in its
> > > power management handling. device_run() calls clk_bulk_enable() before
> > > pm_runtime_get_sync(), but only the latter is guaranteed to actually
> > > power on the relevant power domains, so we end up clocking unpowered
> > > hardware.
> > >
> >
> > How about we just move clk_enable/disable to runtime PM?
> >
> > Since we use an autosuspend delay, the clocks would then stay enabled
> > during that window, which theoretically has some power impact; that's
> > why I was refraining from doing so.
> >
> > I can't decide if this impact would be marginal or significant.
> >
>
> I'd also refrain from doing this. Clock gating accounts for the bigger
> part of the power savings from runtime power management, since it stops
> the dynamic power consumption and only leaves the static leakage. That
> said, the Hantro IP blocks have some internal clock gating as well, so
> the difference might not be as pronounced, depending on the custom
> vendor integration logic surrounding the Hantro hardware.
>

OK, I agree. We need to fix this issue then,
changing the order of the calls.
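
Something along these lines is what I have in mind for device_run() - a
sketch with made-up foo_* names and simplified error handling, not the
actual hantro code:

#include <linux/clk.h>
#include <linux/pm_runtime.h>

/* Hypothetical driver data, for illustration only. */
struct foo_dev {
        struct device *dev;
        struct clk_bulk_data *clocks;
        int num_clocks;
};

static int foo_power_on(struct foo_dev *foo)
{
        int ret;

        /* Power up the device (and its power domain) first... */
        ret = pm_runtime_get_sync(foo->dev);
        if (ret < 0) {
                pm_runtime_put_noidle(foo->dev);
                return ret;
        }

        /* ...and only then ungate the clocks. */
        ret = clk_bulk_enable(foo->num_clocks, foo->clocks);
        if (ret)
                pm_runtime_put_autosuspend(foo->dev);

        return ret;
}

device_run() would call foo_power_on() before touching any registers, and
the job-finished path would do the reverse: clk_bulk_disable() followed by
pm_runtime_put_autosuspend().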

> Actually even if autosuspend is not used, the runtime PM subsystem has
> some internal back-off mechanism based on measured power on and power
> off latencies. The driver should call pm_runtime_get_sync() first and
> then enable any necessary clocks. I can see that currently inside the
> resume callback we have some hardware accesses. If those really need
> to be there, they should be surrounded with appropriate clock enable
> and clock disable calls.
>

Currently, that resume callback is only used by imx8mq, and the hardware
accesses there are enclosed by clk_bulk_prepare_enable/disable_unprepare.

I am quite sure the prepare/unprepare usage is an oversight
on our side, but it doesn't do any harm either.
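
For reference, that resume-callback pattern looks roughly like this (again
a sketch with placeholder foo_* names, not the actual imx8mq code):

#include <linux/clk.h>
#include <linux/device.h>

/* Same hypothetical driver data as in the sketches above. */
struct foo_dev {
        struct device *dev;
        struct clk_bulk_data *clocks;
        int num_clocks;
};

static int foo_runtime_resume(struct device *dev)
{
        struct foo_dev *foo = dev_get_drvdata(dev);
        int ret;

        /* The hardware accesses are bracketed by clock enable/disable. */
        ret = clk_bulk_prepare_enable(foo->num_clocks, foo->clocks);
        if (ret)
                return ret;

        /* one-time hardware setup that has to run on every power-up */

        clk_bulk_disable_unprepare(foo->num_clocks, foo->clocks);
        return 0;
}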

Moving the clock handling to hantro_runtime_resume is possible,
but that looks like just low-hanging fruit.

> > > >
> > > > I'm not seeing any code in CODA to handle this, so I'm not sure
> > > > how it handles suspend/resume.
> > > >
> > > > Maybe we can have CODA as the first user, given it's a well-maintained
> > > > driver and should be fairly easy to test.
> > >
> > > I remember randomly checking a number of drivers that use the m2m
> > > helpers, and none of them implemented suspend/resume correctly. I
> > > suppose that was never noticed because normally userspace itself
> > > stops the operation before the system is suspended, although that's
> > > not an API guarantee.
> > >
> >
> > Indeed. Do you have any recommendations for how we could
> > test this case to make sure we are handling it correctly?
>
> I'd say that a simple off-screen command-line gstreamer/ffmpeg decode,
> with a suspend/resume loop running in another session, should be able
> to trigger some issues.
>

I can try to fix the above clk/pm issue and pick up this helper
in the same series, if that's useful.

Thanks,
Ezequiel
