On Fri, May 21, 2021 at 01:52:13PM +0100, Daniel P. Berrangé wrote:
> On Fri, May 21, 2021 at 02:27:26PM +0200, Philippe Mathieu-Daudé wrote:
> > On 5/21/21 1:53 PM, Daniel P. Berrangé wrote:
> > > On Fri, May 21, 2021 at 01:02:51PM +0200, Thomas Huth wrote:
> > >> On 21/05/2021 12.50, Daniel P. Berrangé wrote:
> > >>> On Fri, May 21, 2021 at 12:48:21PM +0200, Thomas Huth wrote:
> > >>>> On 20/05/2021 13.27, Philippe Mathieu-Daudé wrote:
> > >>>>> +Stefan/Daniel
> > >>>>>
> > >>>>> On 5/20/21 10:02 AM, Thomas Huth wrote:
> > >>>>>> On 19/05/2021 20.45, Philippe Mathieu-Daudé wrote:
> > >>>>>>> If a runner has ccache installed, use it and display statistics
> > >>>>>>> at the end of the build.
> > >>>>>>>
> > >>>>>>> Signed-off-by: Philippe Mathieu-Daudé
> > >>>>>>> ---
> > >>>>>>>  .gitlab-ci.d/buildtest-template.yml | 5 +++++
> > >>>>>>>  1 file changed, 5 insertions(+)
> > >>>>>>>
> > >>>>>>> diff --git a/.gitlab-ci.d/buildtest-template.yml b/.gitlab-ci.d/buildtest-template.yml
> > >>>>>>> index f284d7a0eec..a625c697d3b 100644
> > >>>>>>> --- a/.gitlab-ci.d/buildtest-template.yml
> > >>>>>>> +++ b/.gitlab-ci.d/buildtest-template.yml
> > >>>>>>> @@ -6,13 +6,18 @@
> > >>>>>>>        then
> > >>>>>>>          JOBS=$(sysctl -n hw.ncpu)
> > >>>>>>>          MAKE=gmake
> > >>>>>>> +        PATH=/usr/local/libexec/ccache:$PATH
> > >>>>>>>          ;
> > >>>>>>>        else
> > >>>>>>>          JOBS=$(expr $(nproc) + 1)
> > >>>>>>>          MAKE=make
> > >>>>>>> +        PATH=/usr/lib/ccache:/usr/lib64/ccache:$PATH
> > >>>>>>
> > >>>>>> That does not make sense for the shared runners yet. We first need
> > >>>>>> something to enable the caching there - see my series "Use ccache in
> > >>>>>> the gitlab-CI" from April (which is currently stalled, unfortunately).
> > >>>>>
> > >>>>> TL;DR: I don't think we should restrict our templates to shared runners.
> > >>>>
> > >>>> I'm certainly not voting for restricting ourselves to only using
> > >>>> shared runners here - but my concern is that this actually *slows
> > >>>> down* the shared runners even more! (Sorry, I should have elaborated
> > >>>> on that in my previous mail already.)
> > >>>>
> > >>>> When I was experimenting with ccache in the shared runners, I saw
> > >>>> that the jobs run even slower with ccache enabled as long as the
> > >>>> cache is not populated yet. You only get a speedup afterwards. So if
> > >>>> you add this now without also adding the possibility to store the
> > >>>> cache persistently, the shared runners will try to populate the cache
> > >>>> each time, just to throw the results away again afterwards. Thus the
> > >>>> shared runners only get slower, without any real benefit here.
> > >>>>
> > >>>> So we either need to get ccache working properly for the shared
> > >>>> runners first, or you have to think of a different way of enabling
> > >>>> ccache for the non-shared runners, so that it does not affect the
> > >>>> shared runners negatively.
> > >>>
> > >>> Is there anything functional holding up your previous full ccache
> > >>> support series from last month? Or is it just a lack of reviews?
> > >>
> > >> It's basically the problems mentioned in the cover letter and Stefan's
> > >> comment here:
> > >>
> > >> https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg02219.html
> > >
> > > I'm not sure I understand why Stefan thinks GitLab's caching doesn't
> > > benefit ccache. We added ccache for libvirt's GitLab CI, and AFAIR it
> > > sped up our builds significantly.
> >
> > I think Stefan is referring to a comment I made: when using both
> > shared runners and dedicated runners (what I'm currently testing),
> > various jobs get stuck transferring artifacts/cache {FROM, TO}
> > {shared, dedicated} runners at the same time, which is sub-optimal
> > because it saturates the dedicated runner's network link.
>
> I think we're overthinking this a bit, worrying about scenarios that
> we're not actually hitting that frequently today, and delaying the
> benefit for everyone.

Thomas' original email indicated that using ccache with QEMU isn't
necessarily a win:

  Additionally, the jobs sometimes run even slower, e.g. if the cache
  has not been populated yet or if there are a lot of cache misses.

Let's measure the time taken both on a first run and on a subsequent
run. This information can be included in the patch series cover letter,
so we know ccache performance has been measured and that it works well.

Stefan
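[For context, the cache-persistence approach discussed above - persisting the ccache directory between pipeline runs via GitLab's own cache mechanism - could look roughly like the sketch below. This is an illustrative sketch only, not the content of Thomas's actual series; the template name, cache key, directory layout, and size limit are all assumptions to be adapted:]

```yaml
# Sketch: let a GitLab CI job keep its ccache directory between runs.
.native_build_job_template:
  cache:
    key: "$CI_JOB_NAME"        # one cache per job name, to keep hit rates high
    paths:
      - ccache/                # must live inside $CI_PROJECT_DIR to be cached
  before_script:
    - export CCACHE_DIR="$CI_PROJECT_DIR/ccache"
    - export CCACHE_MAXSIZE=500M
    - export PATH="/usr/lib/ccache:/usr/lib64/ccache:$PATH"
    - ccache --zero-stats      # reset counters so stats reflect this job only
  after_script:
    - ccache --show-stats      # print hit/miss counts for the cover letter
```

[Running `ccache --zero-stats` before and `ccache --show-stats` after the build gives the per-job hit/miss numbers that would let us compare first-run vs. warm-cache timings, as requested above.]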