From: Matheus Tavares Bernardino <matheus.bernardino@usp.br>
To: Duy Nguyen <pclouds@gmail.com>
Cc: git <git@vger.kernel.org>,
	"Christian Couder" <christian.couder@gmail.com>,
	"Оля Тележная" <olyatelezhnaya@gmail.com>,
	"Jeff King" <peff@peff.net>
Subject: Re: [GSoC] How to protect cached_objects
Date: Sat, 25 May 2019 13:04:47 -0300
Message-ID: <CAHd-oW4_u6SMPropxR0tWb2b_Q31n2rda3FKPb9qsnCKwZ=b8Q@mail.gmail.com>
In-Reply-To: <CACsJy8DFw1Y_bhE=k2ZEMTk+vFvwwmx4GDnRXEQB9cp58M3vLg@mail.gmail.com>

On Fri, May 24, 2019 at 6:55 AM Duy Nguyen <pclouds@gmail.com> wrote:
>
> On Thu, May 23, 2019 at 11:51 PM Matheus Tavares Bernardino
> <matheus.bernardino@usp.br> wrote:
> >
>
> > Hi, everyone
> >
> > As one of my first GSoC tasks, I'm looking to protect the global
> > state in sha1-file.c for future parallelizations. Currently, I'm
> > analyzing how to deal with the cached_objects array, which is a small
> > set of in-memory objects that read_object_file() is able to return
> > even though they don't really exist on disk. The only current user of
> > this set is git-blame, which adds a fake commit containing the
> > uncommitted changes.
> >
> > As it is now, if we start parallelizing blame, cached_objects won't be
> > a problem, since it is written only once, at the beginning, and read
> > a couple of times later, with no possible race conditions.
> >
> > But should we make these operations thread safe for future uses that
> > could involve potential parallel writes and reads too?
> >
> > If so, we have two options:
> > - Make the array thread-local, which would oblige us to replicate data, or
> > - Protect it with locks, which could impact sequential performance. We
> > could have a macro here to skip locking in single-threaded use cases.
> > But we don't know, a priori, how many threads will want to use the
> > pack access code.
> >
> > Any thoughts on this?
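
To be concrete about the macro idea above: I was thinking of something
similar in spirit to the conditional locking in builtin/index-pack.c,
i.e. only take the mutex when other threads may actually be running. A
rough sketch, with made-up names (not actual Git code):

#include <pthread.h>

static int threads_active;
static pthread_mutex_t cached_objects_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Take the lock only when we know other threads may be running. */
static void cached_objects_lock(void)
{
	if (threads_active)
		pthread_mutex_lock(&cached_objects_mutex);
}

static void cached_objects_unlock(void)
{
	if (threads_active)
		pthread_mutex_unlock(&cached_objects_mutex);
}

That keeps the single-threaded path essentially free, but, as said
above, we don't know in advance how many threads will enter the pack
access code.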
>
> I would go with "that's a problem for future me" and use a simple
> global (I mean per-object store) mutex.

Thanks for the help, Duy. By "per-object store mutex", do you mean
having a lock for every "struct raw_object_store" in the "struct
repository"? Maybe I haven't quite understood what the "object store"
is yet.
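
Just so I can check my understanding, here is a minimal sketch of what
I imagine that would look like. The struct fields and names below are
simplified stand-ins, not the real definitions in object-store.h /
sha1-file.c:

#include <pthread.h>
#include <string.h>

struct object_id {
	unsigned char hash[20];
};

struct cached_object {
	struct object_id oid;
	void *buf;
	unsigned long size;
};

struct raw_object_store {
	struct cached_object *cached_objects;
	int cached_object_nr;
	pthread_mutex_t cached_objects_lock;	/* hypothetical new field */
};

/*
 * Every reader and writer of cached_objects would take the same
 * per-object-store lock.
 */
static struct cached_object *find_cached_object(struct raw_object_store *o,
						const struct object_id *oid)
{
	struct cached_object *ret = NULL;
	int i;

	pthread_mutex_lock(&o->cached_objects_lock);
	for (i = 0; i < o->cached_object_nr; i++) {
		if (!memcmp(o->cached_objects[i].oid.hash, oid->hash,
			    sizeof(oid->hash))) {
			ret = &o->cached_objects[i];
			break;
		}
	}
	pthread_mutex_unlock(&o->cached_objects_lock);

	/*
	 * Returning the pointer after unlocking is only OK because,
	 * today, the array is filled before any reader threads start.
	 */
	return ret;
}

Is that the kind of thing you had in mind?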

> After we have a
> complete picture of how many locks we need, and can run some tests to
> see the amount of lock contention we get (or even cache misses, if we
> have that many locks), then we can start thinking of an optimal
> strategy.

Please correct me if I've misunderstood your suggestion: the idea is
to protect the pack access code at a higher level first, measure
contention, and then start refining the locks if needed? I'm asking
because I was going directly for the lower-level protections (or
thread-safe conversions) and planning to build up from there. For
example, this week I was working on eliminating static variables
inside the pack access functions. Do you think this approach is OK, or
should I start with a broader thread-safe conversion (like a couple of
wide mutexes) and refine it down?
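
Just to illustrate the kind of lower-level conversion I meant, here is
a contrived example (not actual Git code) of turning hidden static
state into caller-provided storage:

#include <stddef.h>

/* Before: the static buffer is shared by all callers and threads. */
static const char *hash_to_hex_unsafe(const unsigned char *hash, size_t len)
{
	static char hex[2 * 20 + 1];	/* the problematic shared state */
	static const char digits[] = "0123456789abcdef";
	size_t i;

	for (i = 0; i < len; i++) {
		hex[2 * i] = digits[hash[i] >> 4];
		hex[2 * i + 1] = digits[hash[i] & 0xf];
	}
	hex[2 * len] = '\0';
	return hex;
}

/* After: the caller owns the buffer, so no shared state and no lock. */
static const char *hash_to_hex_threadsafe(char *hex, const unsigned char *hash,
					  size_t len)
{
	static const char digits[] = "0123456789abcdef";	/* read-only, safe */
	size_t i;

	for (i = 0; i < len; i++) {
		hex[2 * i] = digits[hash[i] >> 4];
		hex[2 * i + 1] = digits[hash[i] & 0xf];
	}
	hex[2 * len] = '\0';
	return hex;
}

The hope is that, if these conversions go far enough, many call paths
may end up needing no locking at all.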

> I mean, this is an implementation detail and can't affect the object
> access API, right? That gives us some breathing room to change stuff
> without preparing for something that we don't need right now (like
> multiple cached_objects writers).

Indeed, it makes sense to leave multiple-writer support for the
future, if it's ever needed. Thanks.
