From: Daniel Stone <daniel@fooishbar.org>
To: Maxime Ripard <mripard@kernel.org>
Cc: "Michel Dänzer" <michel.daenzer@mailbox.org>,
	emma@anholt.net, linux-doc@vger.kernel.org,
	vignesh.raman@collabora.com, dri-devel@lists.freedesktop.org,
	alyssa@rosenzweig.io, jbrunet@baylibre.com, robdclark@google.com,
	corbet@lwn.net, khilman@baylibre.com,
	sergi.blanch.torne@collabora.com, david.heidelberg@collabora.com,
	linux-rockchip@lists.infradead.org,
	"Daniel Stone" <daniels@collabora.com>,
	martin.blumenstingl@googlemail.com, robclark@freedesktop.org,
	"Helen Koike" <helen.koike@collabora.com>,
	anholt@google.com, linux-mediatek@lists.infradead.org,
	matthias.bgg@gmail.com, linux-amlogic@lists.infradead.org,
	gustavo.padovan@collabora.com,
	linux-arm-kernel@lists.infradead.org,
	angelogioacchino.delregno@collabora.com,
	neil.armstrong@linaro.org, guilherme.gallo@collabora.com,
	linux-kernel@vger.kernel.org, tzimmermann@suse.de
Subject: Re: [PATCH v11] drm: Add initial ci/ subdirectory
Date: Fri, 15 Sep 2023 16:08:42 +0100	[thread overview]
Message-ID: <CAPj87rPbJ1V1-R7WMTHkDat2A4nwSd61Df9mdGH2PR=ZzxaU=Q@mail.gmail.com> (raw)
In-Reply-To: <krhnfc4os5nqwi3iixgxwads3jbcgtdcvn7ygxpvd62uaejvsk@mx6tmser7jti>

Hey,

On Thu, 14 Sept 2023 at 10:54, Maxime Ripard <mripard@kernel.org> wrote:
> On Tue, Sep 12, 2023 at 02:16:41PM +0100, Daniel Stone wrote:
> > Hopefully less mangled formatting this time: turns out Thunderbird +
> > plain text is utterly unreadable, so that's one less MUA that is
> > actually usable to send email to kernel lists without getting shouted
> > at.
>
> Sorry if it felt that way, it definitely wasn't my intention to shout at
> you. Email is indeed kind of a pain to deal with, and I wanted to keep
> the discussion going.

My bad - I didn't mean you _at all_. I was thinking of other, much
less pleasant, kernel maintainers, and the ongoing struggle of
attempting to actually send well-formatted email to kernel lists in
2023.

> > I don't quite see the same picture from your side though. For example,
> > my reading of what you've said is that flaky tests are utterly
> > unacceptable, as are partial runs, and we shouldn't pretend otherwise.
> > With your concrete example (which is really helpful, so thanks), what
> > happens to the MT8173 hdmi-inject test? Do we skip all MT8173 testing
> > until it's perfect, or does MT8173 testing always fail because that
> > test does?
>
> It's not clear to me why that test is even running in the first place?
> There's been some confusion on my side here about what we're going to
> test with this. You've mentioned Mesa and GPUs before, but that's a KMS
> test so there must be more to it.
>
> Either way, it's a relevant test so I guess why not. It turns out that
> the test is indeed flaky, I guess we could add it to the flaky tests
> list.
>
> BUT
>
> I want to have every opportunity to fix whatever that failure is.

Agreed so far!

> So:
>
>   - Is the test broken? If so, we should report it to IGT dev and remove
>     it from the test suite.
>   - If not, has that test failure been reported to the driver author?
>   - If no answer/fix, we can add it to the flaky tests list, but do we
>     have some way to reproduce the test failure?
>
> The last part is especially critical. Looking at the list itself, I have
> no idea what board, kernel version, configuration, or what the failure
> rate was. Assuming I spend some time looking at the infra to find the
> board and configuration, how many times do I have to run the tests to
> expect to reproduce the failure (and thus consider it fixed if it
> doesn't occur anymore)?
>
> Like, with that board and test, if my first 100 runs of the test work
> fine, is it reasonable for me to consider it fixed, or is it only
> supposed to happen once every 1000 runs?
>
> So, ideally, having some (mandatory) metadata in the test lists with a
> link to the bug report, the board (DT name?) it happened with, the
> version and configuration it was first seen with, and an approximation
> of the failure rate for every flaky test list.
>
> I understand that it's probably difficult to get that after the fact on
> the tests that were already merged, but I'd really like to get that
> enforced for every new test going forward.
>
> That should hopefully get us in a much better position to fix some of
> those test issues. And failing that, I can't see how that's
> sustainable.

OK yeah, and we're still agreed here. That is definitely the standard
we should be aiming for. It is there for some - see
drivers/gpu/drm/ci/xfails/rockchip-rk3288-skips.txt - but it should
be there for the rest, it's true. (The specific board/DT it was
observed on can easily be retconned, because we only run on one
specific board type per driver, again to make things more
predictable; we could go back and retrospectively add those in a
header comment?)
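
For concreteness, a header along those lines could look something like
this - every value here is made up purely for illustration, it's not a
format anyone has agreed on yet:

  # Bug report: <link to the tracking issue>
  # Board: mt8173-elm-hana (or whichever DT name the job ran on)
  # Kernel: first seen on v6.6-rc1, standard CI config
  # IGT version: 1.28
  # Approximate failure rate: 3/100 runs
  kms_hdmi_inject@inject-4k

That's cheap to write at the point a test gets added to the list, and
it would answer most of the questions you listed above.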

For flakes, it can be hard to pin them down, because, well, they're
flaky. Usually when we add things in Mesa (sorry to keep coming back
to Mesa - it's not to say that it's the objective best thing that
everything should follow, only that it's the thing we have the most
experience with that we know works well), we do a manual bisect and
try to pin the blame on a specific merge request which looks like the
most likely culprit. If nothing obvious jumps out, we just note when
it was first observed and provide some sample job logs. But yeah, it
should be more verbose.

FWIW, the reason it wasn't done here - not to say that it shouldn't
have been done better, but here we are - is that we just hammered a
load of test runs, vacuumed up the results with a script, and that's
what generated those files. Given the number of tests and devices, it
was hard to narrow each down individually, but yeah, it is something
which really wants further analysis and drilling into. It's a good
to-do, and I agree it should be the standard going forward.

> And Mesa does show what I'm talking about:
>
> $ find -name *-flakes.txt | xargs git diff --stat  e58a10af640ba58b6001f5c5ad750b782547da76
> [...]
>
> In the history of Mesa, there's never been a single test removed from a
> flaky test list.

As Rob says, that's definitely wrong. But there is a good point in
there: how do you know a test isn't flaky anymore? 100 runs is a
reasonable benchmark, but 1000 is ideal. At a 1% failure rate, with 20
devices, that's just too many spurious false-fails to have a usable
workflow.
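
To put rough numbers on that (assuming the flake really does hit ~1%
of runs and failures are independent): 0.99^100 ≈ 0.37, so even with
the flake completely unfixed there's roughly a 37% chance that 100
runs come back clean; 0.99^1000 ≈ 0.00004, which is why 1000 runs is
a much stronger signal. And on the reporting side, if each of the 20
device jobs in a pipeline carries just one 1%-flaky test, then
1 - 0.99^20 ≈ 0.18, i.e. roughly one pipeline in five fails for no
real reason.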

We do have some tools to make stress testing easier, but those need to
be better documented. We'll fix that. The tools which pull out the
metadata etc. also need documenting - right now they aren't, because
they're under _extremely_ heavy development, but they can be further
enhanced to e.g. pull out the IGT results automatically and point
very clearly to the cause. Also on the to-do.
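
In the meantime the brute-force version is easy enough to knock up by
hand. Purely as a sketch - the binary path and subtest name below are
illustrative, not a blessed tool:

  # Rerun a single IGT test many times to estimate its failure rate.
  import subprocess

  RUNS = 1000
  TEST = ["./build/tests/kms_hdmi_inject", "--run-subtest", "inject-4k"]

  fails = 0
  for _ in range(RUNS):
      result = subprocess.run(TEST, capture_output=True)
      if result.returncode != 0:
          fails += 1

  print(f"{fails}/{RUNS} failures (~{100.0 * fails / RUNS:.1f}%)")

Counting failures over a fixed number of runs like that is also
exactly the number that would back an "approximate failure rate"
field in the lists.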

> > Only maintainers can actually fix the drivers (or the tests tbf). But
> > doing the testing does let us be really clear to everyone what the
> > actual state is, and that way people can make informed decisions too.
> > And the only way we're going to drive the test rate down is by the
> > subsystem maintainers enforcing it.
>
> Just FYI, I'm not on the other side of the fence there, I'd really like
> to have some kind of validation. I talked about it at XDC some years
> ago, and discussed it with several people at length over the years. So I'm
> definitely not in the CI-is-bad camp.
>
> > Does that make sense on where I'm (and I think a lot of others are)
> > coming from?
>
> That makes sense from your perspective, but it's not clear to me how you
> can expect maintainers to own the tests if they were never involved in
> the process.
>
> They are not in Cc of the flaky tests patches, they are not told
> that the test is failing, so how can they own that process if we
> never reached out and involved them?
>
> We're all overworked, you can't expect them to just look at the flaky
> test list every now and then and figure it out.

Absolutely. We got acks (or at least not-nacks) from the driver
developers, but yeah, they should absolutely be part of the loop for
those updates. I don't think we can necessarily block on them though.
Say we add vc4 KMS tests, then after a backmerge we start to see a
bunch of flakes on it, but you're sitting on a beach for a couple of
weeks. If we wait for you to get back, see it, and merge it, then
that's two weeks of people submitting Rockchip driver changes and
getting told that their changes failed CI. That's exactly what we want
to avoid, because it erodes confidence and usefulness of CI when
people expect failures and ignore them by default.

So I would say that it's reasonable for expectations to be updated
according to what actually happens in practice, but also to make sure
that the maintainers are explicitly informed and kept in the loop, and
not just surprised when they look at the lists and see that a bunch
of stuff has happened without their knowledge.

Again there's much more to be done on the tooling here. Part of it is
CLI tools and automation, part of it is dashboards and
easily-digestible reporting, and then there's integration with things
like KernelCI. KCI(DB) is actually quite high up on the list, but
we're mostly waiting until a lot of the KCI rework happens so we can
actually properly integrate with the new system.

Right now a lot of the tooling we have is pretty involved - for
example, we do have ci-collate as a Python library which can inspect a
number of pipelines, pull out detailed status and logs, etc, but it
mostly needs to be used as a library with bespoke scripts, rather than
a ready-made tool. Work on that is ongoing to make it way more clear
and accessible though.
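
To give a flavour of what "a library with bespoke scripts" means in
practice, this is roughly the shape of such a script. It deliberately
uses plain python-gitlab rather than ci-collate itself (whose API is
still moving too fast to be worth quoting here), and the project
path, token and filters are placeholders:

  # Illustrative only: list failing jobs in recent pipelines.
  import gitlab

  gl = gitlab.Gitlab("https://gitlab.freedesktop.org",
                     private_token="<token>")
  project = gl.projects.get("mesa/mesa")  # placeholder project path

  for pipeline in project.pipelines.list(per_page=10, get_all=False):
      for job in pipeline.jobs.list(get_all=True):
          if job.status == "failed":
              print(f"pipeline {pipeline.id}: {job.name} failed")
              # Full log, for digging further:
              # trace = project.jobs.get(job.id, lazy=True).trace()

The point is just that today you need to write that kind of glue
yourself; turning it into a ready-made, documented tool is the work
in progress mentioned above.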

So I think it sounds like we're on the same page and going exactly in
the same direction, just that this is a starting point rather than the
desired end state. And the main point is that having a set of
xfails/flakes parachuted in with little to no context is trying to get
an MVP bootstrapped, rather than how we expect things to go in future.
Does that sound about right?

Cheers,
Daniel
