On Fri, 2005-04-22 at 15:19 -0700, atani wrote:
<snip>
> Martin Schlemmer, I ran "emerge sync" today and found git has been
> added to portage, version 0.5. Also note that there are now two "git"
> entries within portage: app-misc/git and dev-util/git. app-misc/git is
> GNU Interactive Tools.

Yeah, I know - that is actually why I complained to r3pek. Most of the
guys interested in doing patches, etc. will probably pull and build
themselves, but the user who just wants to get the latest kernel will
rather want cogito (or git-pasky).

So basically the git I mentioned that I wanted added (or maybe
replacing the current one in the tree, depending on what r3pek does)
was Petr's stuff ...

Thanks,
--
Martin Schlemmer
Jason Riedy wrote:
> I guess our home directories recently were changed from symlinks
> to automounts. Solaris 8's mkdir(2) returns ENOSYS when applied
> to these, breaking safe_create_leading_directories. I don't
> know if ENOSYS is available everywhere, or if this odd behavior
> is appropriate everywhere.
>
> This works for me, but should I wrap mkdir for bizarre behavior
> by adding a compat/gitmkdir.c?
Wow, Solaris really can be braindamaged sometimes...
-hpa
David Aguilar, 30.03.2009:
> This is based on top of Junio's "pu" branch and is a
> continuation of the recent difftool series.
For everyone who wants to apply the patch series: Patch 5/8 depends on
this:
[PATCH v2] difftool: add support for a difftool.prompt config variable
sent about 8 minutes before this series.
Markus
Markus Heidelberg <markus.heidelberg@web.de> writes:
> David Aguilar, 30.03.2009:
>> This is based on top of Junio's "pu" branch and is a
>> continuation of the recent difftool series.
>
> For everyone who wants to apply the patch series: Patch 5/8 depends on
> this:
> [PATCH v2] difftool: add support for a difftool.prompt config variable
> sent about 8 minutes before this series.
Thanks for keeping an eye on this series, as a recent contributor to
mergetool. I do not use mergetool myself, but this refactoring seems to
be a good idea in general, and help in reviewing the series is very much
appreciated.
I am trying to create a working tree for people to read from and have it
update from a bare repository regularly. Right now I am using git-pull
to fetch the changes, but it's running slow due to the size of my repo
and the speed of the hardware, as it seems to be checking the working
tree for any changes.

Is there a way to make the pull ignore the local working tree and only
look at files that are changed in the change sets being pulled?

Bevan
2009/5/7 Bevan Watkiss <bevan.watkiss@cloakware.com>:
> I am trying to create a working tree for people to read from and have it
> update from a bare repository regularly. Right now I am using git-pull to
> fetch the changes, but it’s running slow due to the size of my repo and the
> speed of the hardware as it seems to be checking the working tree for any
> changes.
>
> Is there a way to make the pull ignore the local working tree and only look
> at files that are changed in the change sets being pulled?
Assuming you didn't modify that directory you pull into,
git pull will do almost exactly what you described. Almost,
because the operation (the merge) will involve looking for local
changes (committed and not).
It should be faster to do something like this:
git fetch && git reset --hard origin/master
Again, assuming the directory is supposed to be read-only.
Otherwise, you have to merge (i.e. git pull).
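The fetch-and-reset approach Alex describes can be sketched end-to-end on throwaway repositories (a minimal demo; the temp paths, demo committer identity, and the `master` branch name are assumptions for illustration, not anything from the thread):

```shell
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# "Upstream" repository with one commit (demo identity, not real).
git -c init.defaultBranch=master init -q "$tmp/upstream"
(cd "$tmp/upstream" && echo v1 >file && git add file &&
 git -c user.email=demo@example.com -c user.name=demo commit -qm v1)

# The read-only working tree is a plain clone of it.
git clone -q "$tmp/upstream" "$tmp/mirror"

# Upstream moves forward.
(cd "$tmp/upstream" && echo v2 >file &&
 git -c user.email=demo@example.com -c user.name=demo commit -qam v2)

# Mirror update: fetch, then hard-reset to the remote branch.
# No merge machinery runs, so no local-change resolution is attempted.
(cd "$tmp/mirror" && git fetch -q origin && git reset -q --hard origin/master)

cat "$tmp/mirror/file"   # shows the updated content
```

Note the reset still stats every tracked file, which is exactly the cost discussed in the rest of the thread.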
It's the looking for local changes I'm trying to avoid. Doing a reset
still goes over the tree, which isn't helpful.

Basically I have a copy of my tree where only git can write to it, so I
know the files are right. The NAS box I have the tree on is slow, so
reading the tree adds about 10 minutes to the process when I only want
to update a few files.

Bevan
2009/5/7 Bevan Watkiss <bevan.watkiss@cloakware.com>:
> It's the looking for local changes I'm trying to avoid. Doing a reset still
> goes over the tree, which isn't helpful.

The stat(2) is slow? Then try setting core.ignoreStat (see the git
config manpage) to true:

    git config core.ignorestat true

and read below.

> Basically I have a copy of my tree where only git can write to it, so I know
> the files are right. The NAS box I have the tree on is slow, so reading the
> tree adds about 10 minutes to the process when I only want to update a few
> files.

Try "git checkout origin/master". It uses the index and shouldn't check
out files which are up to date with the index. And actually, git merge
should fast-forward in your case and will update just the changed
files...

Of course, you can always compare HEAD and origin/master and resolve
the changes yourself (see git diff -z --name-status), but it is
unlikely to be any faster.
Still took 11 minutes.

The idea I've come up with today is something along the lines of

    git fetch origin/master
    git log --name-only ..<hash> | xargs git checkout -f --

This should work to quickly keep my files up to date, and I can then
periodically pull properly to move the HEAD.

Thanks for the info,
Bevan
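As written, the sketch above would also feed commit-message lines from `git log` into `xargs`. The same "only touch changed paths" idea can be driven by `git diff --name-only` instead; a throwaway demo (temp paths, demo identity, and the `master` branch name are assumptions):

```shell
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

git -c init.defaultBranch=master init -q "$tmp/up"
(cd "$tmp/up" && echo one >a && echo one >b && git add . &&
 git -c user.email=demo@example.com -c user.name=demo commit -qm c1)
git clone -q "$tmp/up" "$tmp/wt"

# Upstream changes only file "a".
(cd "$tmp/up" && echo two >a &&
 git -c user.email=demo@example.com -c user.name=demo commit -qam c2)

(cd "$tmp/wt" &&
 git fetch -q origin &&
 # Paths that differ between our HEAD and the new tip, NUL-separated,
 # then check out just those paths from the fetched tip.
 git diff --name-only -z HEAD origin/master |
   xargs -0 git checkout -q origin/master --)

cat "$tmp/wt/a"   # updated from origin/master
cat "$tmp/wt/b"   # never touched
```

HEAD itself is not moved by the pathspec checkout, which matches the plan of doing a real pull periodically to advance it.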
On Thu, 7 May 2009, Bevan Watkiss wrote:
>
> Basically I have a copy of my tree where only git can write to it, so I know
> the files are right. The NAS box I have the tree on is slow, so reading the
> tree adds about 10 minutes to the process when I only want to update a few
> files.
Ouch.
You could try doing
[core]
preloadindex = true
and see if that helps some of your loads. It does limit even the parallel
tree stat to 20 or so, but if most of your cost is in just doing the
lstat() over the files to see that they haven't changed, you might be
getting a factor-of-20 speedup for at least _some_ of what you do.
If you can, it might also be interesting to see system call trace patterns
(with times!) to see if there is something obviously horribly bad going
on. If you're running under Linux, and don't think the data contains
anything very private, send me the output of "strace -f -T" of the most
problematic operations, and maybe I can see if I can come up with anything
interesting.
I have long refused to use networked filesystems because I used to find
them -so- painful when working with CVS, so none of my performance work
has ever really directly concentrated on long-latency filesystems. Even
the index preload was all done "blind" with other people reporting issues
(and happily I could see some of the effects with local filesystems and
multiple CPU's ;).
Linus
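The `[core]` snippet Linus suggests is equivalent to a one-shot `git config` call; a quick check on a scratch repository (the temp path is a demo assumption):

```shell
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
git init -q "$tmp/r"
cd "$tmp/r"

# Same effect as adding "preloadindex = true" under [core] by hand:
git config core.preloadindex true
git config core.preloadindex   # prints: true
```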
Looking at the trace it does appear that most of this is the lstat.
It's the problem of having many tiny files on a network drive, and
trying to use git for something it's not meant for.

The log has 265430 lines of lstat and 10887 other lines. If you still
want the log file I'll strip out the directory names and send it off.

It would be nice to have an option where you can pull only the files
that changed in the changesets you are updating and ignore the state
of the other files.

Bevan
[Please don't top-post...]
On 2009.05.07 14:48:20 -0400, Bevan Watkiss wrote:
> From: Alex Riesen [mailto:raa.lkml@gmail.com]
> > 2009/5/7 Bevan Watkiss <bevan.watkiss@cloakware.com>:
> > > It's the looking for local changes I'm trying to avoid. Doing a
> > > reset still goes over the tree, which isn't helpful.
> >
> > The stat(2) is slow? Then try setting core.ignoreStat (see manpage
> > of git config) to true: git config core.ignorestat true and read
> > below.
>
> Still took 11 minutes.
IIRC, to see the effects of core.ignorestat, you need to have updated
all files once. So you might need, for example, "git checkout -f HEAD"
(not sure if a plain checkout is enough) once first, and then the future
"git checkout $something" should be faster.
Björn
On Thu, 7 May 2009, Bevan Watkiss wrote:
>
> Looking at the trace it does appear that most of this is the lstat. It's
> the problem of having many tiny files on a network drive, and trying to use
> git for something it's not meant.
>
> The log has 265430 lines of lstat and 10887 other lines. If you still want
> the log file I'll strip out the directory names and send it off.

Actually, if it's just the lstat's, then it's not all that interesting
any more; it's a known problem with at least a known _partial_ solution.

However, I think it turns out that we've only enabled the index
preloading with "git diff" and "git commit". Not on "git checkout".

So start off doing that

	[core]
		preloadindex = true

AND apply the following patch to git, and see how much (if any) that
helps. It sounds like you have a pretty damn large repository, together
with a slow filesystem. It really could be a big improvement.

The patch is TOTALLY UNTESTED. It also worries me that 'git checkout'
seems to do _two_ 'lstat()' calls per file. I didn't look any more
closely, but there may be other issues here.

		Linus

---
 builtin-checkout.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/builtin-checkout.c b/builtin-checkout.c
index 15f0c32..3100ccd 100644
--- a/builtin-checkout.c
+++ b/builtin-checkout.c
@@ -216,7 +216,7 @@ static int checkout_paths(struct tree *source_tree, const char **pathspec,
 	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
 
 	newfd = hold_locked_index(lock_file, 1);
-	if (read_cache() < 0)
+	if (read_cache_preload(pathspec) < 0)
 		return error("corrupt index file");
 
 	if (source_tree)
@@ -367,7 +367,7 @@ static int merge_working_tree(struct checkout_opts *opts,
 	int newfd = hold_locked_index(lock_file, 1);
 	int reprime_cache_tree = 0;
 
-	if (read_cache() < 0)
+	if (read_cache_preload(NULL) < 0)
 		return error("corrupt index file");
 
 	cache_tree_free(&active_cache_tree);
On Thu, 7 May 2009, Linus Torvalds wrote:
>
> The patch is TOTALLY UNTESTED. It also worries me that 'git checkout'
> seems to do _two_ 'lstat()' calls per file. I didn't look any more
> closely, but there may be other issues here.
Hmm. The second pass comes from
show_local_changes(&new->commit->object);
(this is the "git checkout" without actual filenames), and is suppressed
if we ask for a quiet checkout. But it's sad how it re-loads the index. I
wonder where the CE_VALID bit got dropped.
Linus
Linus Torvalds <torvalds@linux-foundation.org> writes:
> On Thu, 7 May 2009, Linus Torvalds wrote:
>>
>> The patch is TOTALLY UNTESTED. It also worries me that 'git checkout'
>> seems to do _two_ 'lstat()' calls per file. I didn't look any more
>> closely, but there may be other issues here.
>
> Hmm. The second pass comes from
>
> show_local_changes(&new->commit->object);
>
> (this is the "git checkout" without actual filenames), and is suppressed
> if we ask for a quiet checkout. But it's sad how it re-loads the index. I
> wonder where the CE_VALID bit got dropped.
I do not think you mean CE_VALID; CE_UPTODATE isn't it?
On Thu, 7 May 2009, Junio C Hamano wrote:
>
> I do not think you mean CE_VALID; CE_UPTODATE isn't it?
Yes, sorry.
Linus
On Thu, 7 May 2009, Linus Torvalds wrote:
>
> Hmm. The second pass comes from
>
>	show_local_changes(&new->commit->object);
>
> (this is the "git checkout" without actual filenames), and is suppressed
> if we ask for a quiet checkout. But it's sad how it re-loads the index. I
> wonder where the CE_VALID bit got dropped.

Ahh. It's not actually dropped, it's still there.

It's just that 'get_stat_data()' doesn't check it, when asking for
non-cached data.

The logic of 'get_stat_data()' is that it will return the stat data
from the filesystem (unless we explicitly ask for just the cached case,
in which case it will take it from the cache entry directly).

However, the code doesn't realize that if ce_uptodate() is true, then
we already know the stat data, so there is no need to do the lstat()
again, and we can take it all from the cache entry regardless of
whether we asked for filesystem data or cached data.

So here's a better patch. It should cut down the 'lstat()' calls from
"git checkout" a lot.

It looks obvious enough, and it passes testing (and now "git checkout"
only does about as many lstat's as there are files in the repository,
and they seem to all be properly asynchronous if 'core.preloadindex'
is set).

Somebody should check. It would be interesting to hear about whether
this makes a performance impact, especially with slow filesystems
and/or other operating systems that have a relatively higher cost for
'lstat()'.

		Linus

---
 builtin-checkout.c |    4 ++--
 diff-lib.c         |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/builtin-checkout.c b/builtin-checkout.c
index 15f0c32..3100ccd 100644
--- a/builtin-checkout.c
+++ b/builtin-checkout.c
@@ -216,7 +216,7 @@ static int checkout_paths(struct tree *source_tree, const char **pathspec,
 	struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
 
 	newfd = hold_locked_index(lock_file, 1);
-	if (read_cache() < 0)
+	if (read_cache_preload(pathspec) < 0)
 		return error("corrupt index file");
 
 	if (source_tree)
@@ -367,7 +367,7 @@ static int merge_working_tree(struct checkout_opts *opts,
 	int newfd = hold_locked_index(lock_file, 1);
 	int reprime_cache_tree = 0;
 
-	if (read_cache() < 0)
+	if (read_cache_preload(NULL) < 0)
 		return error("corrupt index file");
 
 	cache_tree_free(&active_cache_tree);
diff --git a/diff-lib.c b/diff-lib.c
index a310fb2..0aba6cd 100644
--- a/diff-lib.c
+++ b/diff-lib.c
@@ -214,7 +214,7 @@ static int get_stat_data(struct cache_entry *ce,
 	const unsigned char *sha1 = ce->sha1;
 	unsigned int mode = ce->ce_mode;
 
-	if (!cached) {
+	if (!cached && !ce_uptodate(ce)) {
 		int changed;
 		struct stat st;
 		changed = check_removed(ce, &st);
On Thu, 7 May 2009, Linus Torvalds wrote:
this patch is worthwhile in itself, but the use case that is presented
here is slightly different, and I wonder if it's common enough to be worth
having a config option for.
his use case (as I understand it) is that the working tree is never
updated by anything other than git. it never receives patches or manual
edits.
as such _any_ lstats of the tree are a waste of time. if git knows what
was checked out before and what is being checked out now, it can find what
files need to be changed.
this situation is not common for most developers, but it would be
reasonable for build farms, so it's not just a one-person issue.
David Lang
> On Thu, 7 May 2009, Linus Torvalds wrote:
>>
>> Hmm. The second pass comes from
>>
>> show_local_changes(&new->commit->object);
>>
>> (this is the "git checkout" without actual filenames), and is suppressed
>> if we ask for a quiet checkout. But it's sad how it re-loads the index. I
>> wonder where the CE_VALID bit got dropped.
>
> Ahh. It's not actually dropped, it's still there.
>
> It's just that 'get_stat_data()' doesn't check it, when asking for
> noncached data.
>
> The logic of 'get_stat_data()' is that it will return the stat data from
> the filesystem (unless we explicitly ask for just the cached case, in
> which case it will take it from the cache entry directly).
>
> However, the code doesn't realize that if ce_uptodate() is true, then we
> already know the stat data, so no need to do the lstat() again, and we
> can take it all from the cache entry regardless of whether we asked for
> filesystem data or cached data.
>
> So here's a better patch. It should cut down the 'lstat()' calls from "git
> checkout" a lot.
>
> It looks obvious enough, and it passes testing (and now "git checkout"
> only does about as many lstat's as there are files in the repository, and
> they seem to all be properly asynchronous if 'core.preloadindex' is set.
>
> Somebody should check. It would be interesting to hear about whether this
> makes a performance impact, especially with slow filesystems and/or other
> operating systems that have a relatively higher cost for 'lstat()'.
>
> Linus
>
> ---
> builtin-checkout.c | 4 ++--
> diff-lib.c | 2 +-
> 2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/builtin-checkout.c b/builtin-checkout.c
> index 15f0c32..3100ccd 100644
> --- a/builtin-checkout.c
> +++ b/builtin-checkout.c
> @@ -216,7 +216,7 @@ static int checkout_paths(struct tree *source_tree, const char **pathspec,
> struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file));
>
> newfd = hold_locked_index(lock_file, 1);
> - if (read_cache() < 0)
> + if (read_cache_preload(pathspec) < 0)
> return error("corrupt index file");
>
> if (source_tree)
> @@ -367,7 +367,7 @@ static int merge_working_tree(struct checkout_opts *opts,
> int newfd = hold_locked_index(lock_file, 1);
> int reprime_cache_tree = 0;
>
> - if (read_cache() < 0)
> + if (read_cache_preload(NULL) < 0)
> return error("corrupt index file");
>
> cache_tree_free(&active_cache_tree);
> diff --git a/diff-lib.c b/diff-lib.c
> index a310fb2..0aba6cd 100644
> --- a/diff-lib.c
> +++ b/diff-lib.c
> @@ -214,7 +214,7 @@ static int get_stat_data(struct cache_entry *ce,
> const unsigned char *sha1 = ce->sha1;
> unsigned int mode = ce->ce_mode;
>
> - if (!cached) {
> + if (!cached && !ce_uptodate(ce)) {
> int changed;
> struct stat st;
> changed = check_removed(ce, &st);
> --
> To unsubscribe from this list: send the line "unsubscribe git" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
On Thu, 7 May 2009, david@lang.hm wrote:
>
> his use case (as I understand it) is that the working tree is never updated by
> anything other than git. it never receives patches or manual edits.
Well, you can certainly just use the CE_VALID bit in the index too (and
this time I really mean CE_VALID). But it won't help anybody else, so it
wouldn't be nearly as interesting. And I wonder how badly that code has
rotted, thanks to not getting used.
But yes, one thing to do would be
git update-index --assume-unchanged --refresh
which should hopefully set the bit, and then after that setting
'core.ignoreStat' should hopefully keep it set.
Of course, you had then better _never_ make any mistakes and touch the
files with non-git commands.
And hope that the code still works ;)
Linus
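The assume-unchanged bit Linus mentions can be demonstrated on a scratch repository (demo repo and identity are assumptions; this uses the per-file form rather than the `--refresh` variant from the mail): after marking a file, an out-of-band edit no longer shows up.

```shell
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
git -c init.defaultBranch=master init -q "$tmp/r"
cd "$tmp/r"

echo hi >f
git add f
git -c user.email=demo@example.com -c user.name=demo commit -qm c1

# Set the "assume unchanged" (CE_VALID) bit on the file.
git update-index --assume-unchanged f

echo changed >f          # the kind of out-of-band edit Linus warns about
git status --porcelain   # prints nothing: git skips stat'ing the file
```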
On Thu, 7 May 2009, Linus Torvalds wrote:
> On Thu, 7 May 2009, david@lang.hm wrote:
>>
>> his use case (as I understand it) is that the working tree is never updated by
>> anything other than git. it never receives patches or manual edits.
>
> Well, you can certainly just use the CE_VALID bit in the index too (and
> this time I really mean CE_VALID). But it won't help anybody else, so it
> wouldn't be nearly as interesting. And I wonder how badly that code has
> rotted, thanks to not getting used.
>
> But yes, one thing to do would be
>
>	git update-index --assume-unchanged --refresh
>
> which should hopefully set the bit, and then after that setting
> 'core.ignoreStat' should hopefully keep it set.
>
> Of course, you had then better _never_ make any mistakes and touch the
> files with non-git commands.

even with this a git checkout -f should replace all files, correct?

David Lang

> And hope that the code still works ;)
>
> Linus
On Thu, 7 May 2009, david@lang.hm wrote:
>
> even with this a git checkout -f should replace all files, correct?
Hmm. I don't think so.
As far as I recall, "-f" only overrides certain errors (like unmerged
files or not up-to-date content), it doesn't change behavior wrt files
that git thinks are already up-to-date.
But I didn't check.
Linus
On Thu, 7 May 2009, Linus Torvalds wrote:
> On Thu, 7 May 2009, david@lang.hm wrote:
>>
>> even with this a git checkout -f should replace all files, correct?
>
> Hmm. I don't think so.
>
> As far as I recall, "-f" only overrides certain errors (like unmerged
> files or not up-to-date content), it doesn't change behavior wrt files
> that git thinks are already up-to-date.
what about a reset --hard? (is there any command that would force the
files to be re-written, no matter what git thinks is already there)
David Lang
On Thu, 7 May 2009, david@lang.hm wrote:
>
> what about a reset --hard? (is there any command that would force the files to
> be re-written, no matter what git thinks is already there)
No, not "git reset --hard" either, I think. Git very much tries to avoid
rewriting files, and if you've told it that file contents are stable, it
will believe you.
In fact, I think people used CE_VALID explicitly for the missing parts of
"partial checkouts", so if we'd suddenly start writing files despite them
being marked as ok in the tree, I think we'd have broken that part.
(Although again - I'm not sure who would use CE_VALID and friends).
If you want to force everything to be rewritten, you should just remove
the index (or remove the specific entries in it if you want to do it just
to a particular file) and then do a "git checkout" to re-read and
re-populate the tree.
But I'm not really seeing why you want to do this. If you told git that it
shouldn't care about the working tree, why do you now want it to care?
Linus
On Thu, 7 May 2009, Linus Torvalds wrote:
> On Thu, 7 May 2009, david@lang.hm wrote:
>>
>> what about a reset --hard? (is there any command that would force the files to
>> be re-written, no matter what git thinks is already there)
>
> No, not "git reset --hard" either, I think. Git very much tries to avoid
> rewriting files, and if you've told it that file contents are stable, it
> will believe you.
>
> In fact, I think people used CE_VALID explicitly for the missing parts of
> "partial checkouts", so if we'd suddenly start writing files despite them
> being marked as ok in the tree, I think we'd have broken that part.
>
> (Although again - I'm not sure who would use CE_VALID and friends).
>
> If you want to force everything to be rewritten, you should just remove
> the index (or remove the specific entries in it if you want to do it just
> to a particular file) and then do a "git checkout" to re-read and
> re-populate the tree.
>
> But I'm not really seeing why you want to do this. If you told git that it
> shouldn't care about the working tree, why do you now want it to care?
to be able to manually recover from the case where someone did things
that they weren't supposed to.
removing the index and doing a checkout would be a reasonable thing to do
(at least conceptually), I will admit that I don't remember ever seeing a
command (or discussion of one) that would let me do that.
On Friday 08 May 2009, david@lang.hm wrote:
> removing the index and doing a checkout would be a reasonable thing to do
> (at least conceptually), I will admit that I don't remember ever seeing a
> command (or discussion of one) that would let me do that.
What about:
rm .git/index
git checkout -f
or maybe:
git update-index --no-assume-unchanged --refresh
git checkout -f
Hm?
....Johan
--
Johan Herland, <johan@herland.net>
www.herland.net
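Johan's first recipe can be checked on a scratch repository (all names below are demo assumptions): even with the assume-unchanged bit set and an out-of-band edit in place, removing the index and forcing a checkout restores the committed content.

```shell
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
git -c init.defaultBranch=master init -q "$tmp/r"
cd "$tmp/r"

echo good >f
git add f
git -c user.email=demo@example.com -c user.name=demo commit -qm c1

git update-index --assume-unchanged f
echo clobbered >f        # damage git has been told not to look for

# The recovery recipe: throw away the index, force a full re-checkout.
rm .git/index
git checkout -qf

cat f   # back to the committed content
```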
2009/5/7 Linus Torvalds <torvalds@linux-foundation.org>:
>
> Somebody should check. It would be interesting to hear about whether this
> makes a performance impact, especially with slow filesystems and/or other
> operating systems that have a relatively higher cost for 'lstat()'.
>
I did (cygwin). My guess, the improvement is completely dwarfed by the
other overheads (like starting git and writing files).
# Without the patch
real 11m22.338s
user 0m54.629s
sys 8m33.638s
# With checkout index preload
real 11m14.361s
user 0m46.609s
sys 7m56.300s
The script:
#!/bin/sh
if [ "$1" = setup ]; then
	for i in 1 2 3 4
	do
		n=$(date)
		for f in `seq 1 10000`
		do
			echo "$n" >file$f
		done
		git add .
		printf "Commit $i:"
		git commit -m"$n"
	done
	exit
fi

export GIT_EXEC_PATH=/d/git-win
time for f in `seq 1 10`
do
	$GIT_EXEC_PATH/git checkout master~3 &&
	$GIT_EXEC_PATH/git checkout master~2 &&
	$GIT_EXEC_PATH/git checkout master~1 &&
	$GIT_EXEC_PATH/git checkout master
done
exit
On Fri, 8 May 2009, Alex Riesen wrote:
>
> I did (cygwin). My guess, the improvement is completely dwarfed by the
> other overheads (like starting git and writing files).
Oh, I meant "git checkout" as in not even switching branches, or perhaps
switching branches but just changing a single file (among thousands).
If you actually end up re-writing all files, then yes, it will obviously
be totally dominated by other things.
For example, in the kernel, switching between two branches that only
differ in one file (Makefile) went from 0.18 seconds down to 0.14 seconds
for me just because of the fewer lstat() calls.
Noticeable? No. But it might be more noticeable on some other OS, or with
some networked filesystem.
Linus
Linus Torvalds wrote:
>
> On Fri, 8 May 2009, Alex Riesen wrote:
>> I did (cygwin). My guess, the improvement is completely dwarfed by the
>> other overheads (like starting git and writing files).
>
> Oh, I meant "git checkout" as in not even switching branches, or perhaps
> switching branches but just changing a single file (among thousands).
>
> If you actually end up re-writing all files, then yes, it will obviously
> be totally dominated by other things.
>
> For example, in the kernel, switching between two branches that only
> differ in one file (Makefile) went from 0.18 seconds down to 0.14 seconds
> for me just because of the fewer lstat() calls.
>
> Noticeable? No. But it might be more noticeable on some other OS, or with
> some networked filesystem.
plain 'git checkout' on linux kernel over NFS.
Best time without patch: 1.20 seconds
0.45user 0.71system 0:01.20elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+15467minor)pagefaults 0swaps
Best time with patch (core.preloadindex = true): 1.10 seconds
0.43user 4.00system 0:01.10elapsed 402%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+13999minor)pagefaults 0swaps
Best time with patch (core.preloadindex = false): 0.84 seconds
0.42user 0.39system 0:00.84elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+13965minor)pagefaults 0swaps
Best time with read_cache_preload patch only: 1.38 seconds
0.45user 4.42system 0:01.38elapsed 352%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+13990minor)pagefaults 0swaps
The read_cache_preload() changes actually slow things down for me for this
case.
Reduction in lstat's gives a nice 30% improvement.
-brandon
> -----Original Message-----
> From: david@lang.hm [mailto:david@lang.hm]
> Sent: May 7, 2009 7:31 PM
> To: Linus Torvalds
> Cc: Bevan Watkiss; 'Alex Riesen'; Git Mailing List
> Subject: RE:
>
> On Thu, 7 May 2009, Linus Torvalds wrote:
>
> > On Thu, 7 May 2009, david@lang.hm wrote:
> >>
> >> what about a reset --hard? (is there any command that would force the
> files to
> >> be re-written, no matter what git thinks is already there)
> >
> > No, not "git reset --hard" either, I think. Git very much tries to avoid
> > rewriting files, and if you've told it that file contents are stable, it
> > will believe you.
> >
> > In fact, I think people used CE_VALID explicitly for the missing parts
> of
> > "partial checkouts", so if we'd suddenly start writing files despite
> them
> > being marked as ok in the tree, I think we'd have broken that part.
> >
> > (Although again - I'm not sure who would use CE_VALID and friends).
> >
> > If you want to force everything to be rewritten, you should just remove
> > the index (or remove the specific entries in it if you want to do it
> just
> > to a particular file) and then do a "git checkout" to re-read and
> > re-populate the tree.
> >
> > But I'm not really seeing why you want to do this. If you told git that
> it
> > shouldn't care about the working tree, why do you now want it do care?
>
> to be able to manually recover from the case where someone did things that
> they weren't supposed to
>
> removing the index and doing a checkout would be a reasonable thing to do
> (at least conceptually), I will admit that I don't remember ever seeing a
> command (or discussion of one) that would let me do that.
Added the patch and now the time is down to 4 1/2 minutes. Still a little
slow for my needs though.
Since I'm looking for a more instantaneous update I'll probably use
something more along the lines of
git fetch origin/master
git log --name-only ..HEAD
to get the list of files that have changed and copy them from a local
repository. Nightly doing a real pull to confirm the files are correct and
up to date.
Bevan
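A sketch of how that update scheme could be spelled out (the remote name is an assumption; FETCH_HEAD stands in for the just-fetched tip):

```shell
# Fetch from the remote, then list every file touched by commits that
# are in the fetched tip but not yet in our HEAD.
git fetch origin
git log --name-only --pretty=format: HEAD..FETCH_HEAD | sort -u
```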
On Fri, 8 May 2009, Brandon Casey wrote:
> 
> plain 'git checkout' on linux kernel over NFS.

Thanks.

> Best time without patch: 1.20 seconds
> 
> 0.45user 0.71system 0:01.20elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+15467minor)pagefaults 0swaps
> 
> Best time with patch (core.preloadindex = true): 1.10 seconds
> 
> 0.43user 4.00system 0:01.10elapsed 402%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+13999minor)pagefaults 0swaps
> 
> Best time with patch (core.preloadindex = false): 0.84 seconds
> 
> 0.42user 0.39system 0:00.84elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+13965minor)pagefaults 0swaps

Ok, that is _disgusting_. The parallelism clearly works (402%CPU), but the
system time overhead is horrible. Going from 0.39s system time to 4s of
system time is really quite nasty.

Is there any possibility you could oprofile this (run it in a loop to get
better profiles)? It very much sounds like some serious lock contention,
and I'd love to hear more about exactly which lock it's hitting.

Also, you're already almost totally CPU-bound, with 96% CPU for the
single-threaded case. So you may be running over NFS, but your NFS server
is likely pretty good and/or the client just captures everything in the
caches anyway.

I don't recall what the Linux NFS stat cache timeout is, but it's less
than a minute. I suspect that you ran things in a tight loop, which is why
you then got effectively the local caching behavior for the best times.

Can you do a "best time" check but with a 60-second pause between runs
(and before), to see what happens when the client doesn't do caching?
> Best time with read_cache_preload patch only: 1.38 seconds
> 
> 0.45user 4.42system 0:01.38elapsed 352%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+13990minor)pagefaults 0swaps

Yeah, here you're not getting any advantage of fewer lstats, and you show
the same "almost entirely CPU-bound on four cores" behavior, and the same
(probable) lock contention that has pushed the system time way up.

> The read_cache_preload() changes actually slow things down for me for this
> case.
> 
> Reduction in lstat's gives a nice 30% improvement.

Yes, I think the one-liner lstat avoidance is a real fix regardless. And
the preloading sounds like it hits serialization overhead in the kernel,
which I'm not at all surprised at, but not being surprised doesn't mean
that I'm not interested to hear where it is.

The Linux VFS dcache itself should scale better than that (but who knows -
cacheline ping-pong due to lock contention can easily cause a 10x slowdown
even without being _totally_ contended all the time). So I would _suspect_
that it's some NFS lock that you're seeing, but I'd love to know more.

Btw, those system times are pretty high to begin with, so I'd love to know
kernel version and see a profile even without the parallel case and
presumably lock contention. Because while I probably have a faster
machine anyway, what I see is:

[torvalds@nehalem linux]$ /usr/bin/time git checkout
0.13user 0.05system 0:00.19elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+13334minor)pagefaults 0swaps

ie my "system" time is _much_ lower than yours (and lower than your system
time). This is the 'without patch' time, btw, so this has extra lstat's.
And my system time is still lower than my user time, so I wonder where all
_your_ system time comes from. Your system time is much more comparable to
user time even in the good case, and I wonder why?

Could be just that kernel code tends to have more cache misses, and my 8MB
cache captures it all, and yours doesn't. Regardless, a profile would be
very interesting.

		Linus
* Linus Torvalds <torvalds@linux-foundation.org> writes:
| So here's a better patch. It should cut down the 'lstat()' calls from
| "git checkout" a lot.
|
| It looks obvious enough, and it passes testing (and now "git checkout"
| only does about as many lstat's as there are files in the repository,
| and they seem to all be properly asynchronous if 'core.preloadindex'
| is set.

I did a test by switching from v2.6.27 to v2.6.25, and now the only
"lstat()-difference" between with and without the -q option is 2 lstat()
calls extra done without the -q option. And, compared to over 41 000
lstat() calls, that is not noticeable. Very good!

| Somebody should check. It would be interesting to hear about whether
| this makes a performance impact, especially with slow filesystems
| and/or other operating systems that have a relatively higher cost for
| 'lstat()'.

Below is a table which is output from

  strace -o result -T git checkout my-v2.6.25  /* from my-v2.6.27 */

where the "result" file is run through a perl script to pretty print it:

  TOTAL      113988 100.000%  OK:107252  NOT: 6736  6.263578 sec   55 usec/call
  lstat64     41114  36.069%  OK: 35829  NOT: 5285  0.710936 sec   17 usec/call
  open        15027  13.183%  OK: 13872  NOT: 1155  0.559302 sec   37 usec/call
  unlink      14379  12.614%  OK: 14374  NOT:    5  3.720167 sec  259 usec/call
  write       14207  12.464%  OK: 14207  NOT:    0  0.754196 sec   53 usec/call
  close       13872  12.170%  OK: 13872  NOT:    0  0.185572 sec   13 usec/call
  fstat64     13862  12.161%  OK: 13862  NOT:    0  0.169952 sec   12 usec/call
  rmdir         551   0.483%  OK:   269  NOT:  282  0.035534 sec   64 usec/call
  brk           510   0.447%  OK:   510  NOT:    0  0.014804 sec   29 usec/call
  mkdir         174   0.153%  OK:   174  NOT:    0  0.102625 sec  590 usec/call
  mmap2         102   0.089%  OK:   102  NOT:    0  0.001725 sec   17 usec/call
  read           68   0.060%  OK:    68  NOT:    0  0.000999 sec   15 usec/call
  munmap         61   0.054%  OK:    61  NOT:    0  0.005037 sec   83 usec/call
  access         20   0.018%  OK:    12  NOT:    8  0.000348 sec   17 usec/call
  mprotect       13   0.011%  OK:    13  NOT:    0  0.000193 sec   15 usec/call
  stat64          7   0.006%  OK:     7  NOT:    0  0.000109 sec   16 usec/call
  getcwd          3   0.003%  OK:     3  NOT:    0  0.000053 sec   18 usec/call
  chdir           3   0.003%  OK:     3  NOT:    0  0.000048 sec   16 usec/call
  fcntl64         3   0.003%  OK:     3  NOT:    0  0.000036 sec   12 usec/call
  rename          2   0.002%  OK:     2  NOT:    0  0.001553 sec  776 usec/call
  setitimer       2   0.002%  OK:     2  NOT:    0  0.000028 sec   14 usec/call
  getdents64      2   0.002%  OK:     2  NOT:    0  0.000039 sec   20 usec/call
  uname           1   0.001%  OK:     1  NOT:    0  0.000013 sec   13 usec/call
  time            1   0.001%  OK:     1  NOT:    0  0.000011 sec   11 usec/call
  futex           1   0.001%  OK:     1  NOT:    0  0.000013 sec   13 usec/call
  readlink        1   0.001%  OK:     0  NOT:    1  0.000018 sec   18 usec/call
  execve          1   0.001%  OK:     1  NOT:    0  0.000256 sec  256 usec/call
  getrlimit       1   0.001%  OK:     1  NOT:    0  0.000011 sec   11 usec/call

So, if the numbers from strace are trustable, 0.71 seconds is used on
41 114 calls to lstat64(). But, look at the unlink line, where each call
took 259 microseconds (= 0.259 milliseconds), and all 14 379 calls took
3.72 seconds.

It should be noted that when switching branch the other way (from .25 to
.27), the unlink() calls used less time (below 160 microseconds each).

Also note that the above was tested by only 3 runs. Warm cache. ext4 disk
partition with git compiled with the USE_NSEC=1 option.

Most (all?) of the unlink() calls seem to be from the following lines
from the checkout_entry() function in entry.c:

	/*
	 * We unlink the old file, to get the new one with the
	 * right permissions (including umask, which is nasty
	 * to emulate by hand - much easier to let the system
	 * just do the right thing)
	 */
	if (S_ISDIR(st.st_mode)) {
		/* If it is a gitlink, leave it alone! */
		if (S_ISGITLINK(ce->ce_mode))
			return 0;
		if (!state->force)
			return error("%s is a directory", path);
		remove_subtree(path);
	} else if (unlink(path))
		return error("unable to unlink old '%s' (%s)",
			     path, strerror(errno));

-- 
kjetil
Linus Torvalds wrote:
> 
> On Fri, 8 May 2009, Brandon Casey wrote:
>> plain 'git checkout' on linux kernel over NFS.
> 
> Thanks.
> 
>> Best time without patch: 1.20 seconds
>>
>> 0.45user 0.71system 0:01.20elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
>> 0inputs+0outputs (0major+15467minor)pagefaults 0swaps
>>
>> Best time with patch (core.preloadindex = true): 1.10 seconds
>>
>> 0.43user 4.00system 0:01.10elapsed 402%CPU (0avgtext+0avgdata 0maxresident)k
>> 0inputs+0outputs (0major+13999minor)pagefaults 0swaps
>>
>> Best time with patch (core.preloadindex = false): 0.84 seconds
>>
>> 0.42user 0.39system 0:00.84elapsed 96%CPU (0avgtext+0avgdata 0maxresident)k
>> 0inputs+0outputs (0major+13965minor)pagefaults 0swaps
> 
> Ok, that is _disgusting_. The parallelism clearly works (402%CPU), but the
> system time overhead is horrible. Going from 0.39s system time to 4s of
> system time is really quite nasty.
> 
> Is there any possibility you could oprofile this (run it in a loop to get
> better profiles)? It very much sounds like some serious lock contention,
> and I'd love to hear more about exactly which lock it's hitting.

Possibly, I'll see if our sysadmin has time to "play".

> Also, you're already almost totally CPU-bound, with 96% CPU for the
> single-threaded case. So you may be running over NFS, but your NFS server
> is likely pretty good and/or the client just captures everything in the
> caches anyway.
> 
> I don't recall what the Linux NFS stat cache timeout is, but it's less
> than a minute. I suspect that you ran things in a tight loop, which is why
> you then got effectively the local caching behavior for the best times.

Yeah, that's what I did.

> Can you do a "best time" check but with a 60-second pause between runs
> (and before), to see what happens when the client doesn't do caching?

No problem.

>> Best time with read_cache_preload patch only: 1.38 seconds
>>
>> 0.45user 4.42system 0:01.38elapsed 352%CPU (0avgtext+0avgdata 0maxresident)k
>> 0inputs+0outputs (0major+13990minor)pagefaults 0swaps
> 
> Yeah, here you're not getting any advantage of fewer lstats, and you
> show the same "almost entirely CPU-bound on four cores" behavior, and the
> same (probable) lock contention that has pushed the system time way up.
> 
>> The read_cache_preload() changes actually slow things down for me for this
>> case.
>>
>> Reduction in lstat's gives a nice 30% improvement.
> 
> Yes, I think the one-liner lstat avoidance is a real fix regardless. And
> the preloading sounds like it hits serialization overhead in the kernel,
> which I'm not at all surprised at, but not being surprised doesn't mean
> that I'm not interested to hear where it is.
> 
> The Linux VFS dcache itself should scale better than that (but who knows -
> cacheline ping-pong due to lock contention can easily cause a 10x slowdown
> even without being _totally_ contended all the time). So I would _suspect_
> that it's some NFS lock that you're seeing, but I'd love to know more.
> 
> Btw, those system times are pretty high to begin with, so I'd love to know
> kernel version and see a profile even without the parallel case and
> presumably lock contention. Because while I probably have a faster
> machine anyway, what I see is:
> 
> [torvalds@nehalem linux]$ /usr/bin/time git checkout
> 0.13user 0.05system 0:00.19elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+13334minor)pagefaults 0swaps
> 
> ie my "system" time is _much_ lower than yours (and lower than your system
> time). This is the 'without patch' time, btw, so this has extra lstat's.
> And my system time is still lower than my user time, so I wonder where all
> _your_ system time comes from. Your system time is much more comparable to
> user time even in the good case, and I wonder why?
> 
> Could be just that kernel code tends to have more cache misses, and my 8MB
> cache captures it all, and yours doesn't. Regardless, a profile would be
> very interesting.

Something is definitely up.

I provided timing results for your original preload_cache implementation
which affected status and diff, which was part of the justification for
merging it in.

http://article.gmane.org/gmane.comp.version-control.git/100998

You can see that cold cache system time for 'git status' went from 0.36 to
0.52 seconds. Fine. I just ran it again, and now I'm getting system time
of 10 seconds! This is the same machine.

Similarly for the cold cache 'git checkout' reruns:

Best without patch: 6.02 (systime 1.57)

0.43user 1.57system 0:06.02elapsed 33%CPU (0avgtext+0avgdata 0maxresident)k
5336inputs+0outputs (12major+15472minor)pagefaults 0swaps

Best with patch (preload_cache,lstat reduction): 2.69 (systime 10.47)

0.45user 10.47system 0:02.69elapsed 405%CPU (0avgtext+0avgdata 0maxresident)k
5336inputs+0outputs (12major+13985minor)pagefaults 0swaps

OS: Centos4.7

$ cat /proc/version
Linux version 2.6.9-78.0.17.ELsmp (mockbuild@builder16.centos.org) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-9)) #1 SMP Thu Mar 12 20:05:15 EDT 2009

-brandon
Brandon Casey wrote:
> Linus Torvalds wrote:
>> And
>> the preloading sounds like it hits serialization overhead in the kernel,
>> which I'm not at all surprised at, but not being surprised doesn't mean
>> that I'm not interested to hear where it is.
>>
>> The Linux VFS dcache itself should scale better than that (but who knows -
>> cacheline ping-pong due to lock contention can easily cause a 10x slowdown
>> even without being _totally_ contended all the time). So I would _suspect_
>> that it's some NFS lock that you're seeing, but I'd love to know more.
>>
>> Btw, those system times are pretty high to begin with, so I'd love to know
>> kernel version and see a profile even without the parallel case and
>> presumably lock contention.
Here's an strace of 'git checkout':
Before (cold cache):
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
98.60 6.365501 111 57432 lstat64
0.50 0.031984 359 89 2 close
0.25 0.015818 115 137 77 open
0.12 0.007670 23 339 write
0.09 0.005631 110 51 munmap
0.08 0.004873 49 99 69 stat64
0.07 0.004771 140 34 15 access
0.05 0.003083 280 11 5 waitpid
0.05 0.002973 10 284 brk
0.04 0.002816 469 6 execve
<snip>
After (cold cache, no lstat fix, just cache_preload):
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
90.90 23.717981 413 57432 lstat64
8.72 2.273917 162423 14 2 futex
0.12 0.032241 948 34 close
0.04 0.011507 202 57 munmap
0.04 0.009648 132 73 mmap2
0.03 0.008508 149 57 20 open
0.03 0.007771 311 25 mprotect
0.03 0.007758 388 20 clone
0.03 0.007548 23 334 write
0.02 0.005247 262 20 10 access
-brandon
On Fri, 8 May 2009, Brandon Casey wrote:
> 
> Something is definitely up.
> 
> I provided timing results for your original preload_cache implementation
> which affected status and diff, which was part of the justification for
> merging it in.
> 
> http://article.gmane.org/gmane.comp.version-control.git/100998
> 
> You can see that cold cache system time for 'git status' went from 0.36 to
> 0.52 seconds. Fine. I just ran it again, and now I'm getting system time
> of 10 seconds! This is the same machine.

Grr.

> OS: Centos4.7
> 
> $ cat /proc/version
> Linux version 2.6.9-78.0.17.ELsmp (mockbuild@builder16.centos.org) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-9)) #1 SMP Thu Mar 12 20:05:15 EDT 2009

Ok, if that's really the true kernel version (2.6.9), then that's some
ancient kernel there. At the same time it's obviously been recompiled
recently, so it got updated. At a guess, something got screwed up. But I
have absolutely _no_ way to even guess what kernel patches centos puts in
their ancient kernel builds.

Perhaps a centos bugzilla entry might be appropriate? Somebody there
might know what changed.

Of course, it _could_ be an external change too, where the NFS server or
timing changed just enough to trigger a pre-existing issue. But that
would be pretty unlikely.

		Linus
On Fri, 8 May 2009, Kjetil Barvik wrote:
> 
> So, if the numbers from strace is trustable, 0.71 seconds is used on
> 41 114 calls to lstat64(). But, look at the unlink line, where each
> call took 259 microseconds (= 0.259 milliseconds), and all 14 379
> calls took 3.72 seconds.

The system call times from strace are not really trustworthy. The
overhead of tracing and in particular all the context switching back and
forth between the tracer and the tracee means that the numbers should be
taken with a large grain of salt.

That said, they definitely aren't totally made up, and they tend to show
real issues. In this particular case, what is going on is that 'lstat()'
does no IO at all, while 'unlink()' generally at the very least will add
things to some journal etc, and when the journal fills up, it will force
IO.

So doing 15k unlink() calls really is _much_ more expensive than doing
41k lstat() calls, since the latter will never force any IO at all (ok,
so even doing just an lstat() may add atime updates etc to directories,
but even if atime is enabled, that tends to only trigger one IO per
second at most, and we never have to do any sync IO).

> It should be noted that when switching branch the other way (from .25
> to .27), the unlink() calls used less time (below 160 microseconds
> each).

I don't think they are really "260 us each" or "160 us each". It's
rather more likely that there are a few that are big due to forced IO,
and most are in the couple of us case.

		Linus
On Fri, 8 May 2009, Brandon Casey wrote:
> 
> Before (cold cache):
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  98.60    6.365501         111     57432           lstat64
> 
> After (cold cache, no lstat fix, just cache_preload):
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  90.90   23.717981         413     57432           lstat64

Yes, interesting. It really smells like it's all fixed performance and
there is a single lock around it. That 111us -> 413us increase is very
consistent with four cores all serializing on the same lock. So it
parallelizes to all four cores, but then will take exactly as long in
total.

Quite frankly, 2.6.9 is so old that I have absolutely _no_ memory of what
we used to do back then. Not that I follow NFS all that much even now - I
did some of the original page cache and dentry work on the Linux NFS
client way back when, but that was when I actually used NFS (and we were
converting everything to the page cache).

I've long since forgotten everything I knew, and I'm just as happy about
that. But clearly something is bad, and equally clearly it worked much
better for you a couple of months ago. Which does imply that there's
probably some centos issues.

Can you ask your MIS people if it would be possible to at least _test_ a
new kernel? In 2.6.9, I'm quite frankly inclined to just say "it will
likely never get fixed unless centos knows what it is", but if you test a
more modern kernel and see similar issues, then I'll be intrigued.

It's kind of sad, but at the same time, NFS was using the BKL up into
2.6.26 or something like that (about a year ago). And your kernel is
based on something _much_ older.

That said, even with the BKL, NFS should allow all the actual IO to be
done in parallel (since the BKL is dropped on scheduling). But it's
really wasting a _lot_ of CPU time, and that hurts you enormously, even
though the cold-cache case still seems to win, judging by your other
email:

> Best without patch: 6.02 (systime 1.57)
> 
> 0.43user 1.57system 0:06.02elapsed 33%CPU (0avgtext+0avgdata 0maxresident)k
> 5336inputs+0outputs (12major+15472minor)pagefaults 0swaps
> 
> Best with patch (preload_cache,lstat reduction): 2.69 (systime 10.47)
> 
> 0.45user 10.47system 0:02.69elapsed 405%CPU (0avgtext+0avgdata 0maxresident)k
> 5336inputs+0outputs (12major+13985minor)pagefaults 0swaps

so there's a _huge_ increase in system time (again), but the change from
33% CPU -> 405% CPU makes up for it and you get lower elapsed times.

But that 7x increase in system time really is sad. I do suspect it's
likely due to spinning on the BKL. And if so, then a modern kernel should
fix it.

		Linus
Linus Torvalds wrote:
> 
> On Fri, 8 May 2009, Brandon Casey wrote:
>> Before (cold cache):
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>  98.60    6.365501         111     57432           lstat64
>>
>> After (cold cache, no lstat fix, just cache_preload):
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>  90.90   23.717981         413     57432           lstat64
> 
> Yes, interesting. It really smells like it's all fixed performance and
> there is a single lock around it. That 111us -> 413us increase is very
> consistent with four cores all serializing on the same lock. So it
> parallelizes to all four cores, but then will take exactly as long in
> total.

Makes sense to me.

> Quite frankly, 2.6.9 is so old that I have absolutely _no_ memory of what
> we used to do back then. Not that I follow NFS all that much even now - I
> did some of the original page cache and dentry work on the Linux NFS
> client way back when, but that was when I actually used NFS (and we were
> converting everything to the page cache).
> 
> I've long since forgotten everything I knew, and I'm just as happy about
> that. But clearly something is bad, and equally clearly it worked much
> better for you a couple of months ago. Which does imply that there's
> probably some centos issues.

In case you're not aware, CentOS is just repacked RHEL. I'm not sure if
centos has the resources for investigating problems. We also have RHEL
licenses, so hopefully I'll be able to come up with something to submit
to them.

> Can you ask your MIS people if it would be possible to at least _test_ a
> new kernel? In 2.6.9, I'm quite frankly inclined to just say "it will
> likely never get fixed unless centos knows what it is", but if you test a
> more modern kernel and see similar issues, then I'll be intrigued.

I think it's possible. Just not on this specific machine. Not sure what
we have lying around multi-processor wise. Also, it won't happen until
next week since it's late Friday afternoon here.

btw, I've since done some more testing on some centos5.3 boxes we have.
I get similar results (less ancient kernel 2.6.18).

I've also scanned through the errata announcements that RedHat has
released for their kernel updates. A few of them involve NFS. Possibly,
whatever RedHat modified in the 5.X kernel was also backported to the
4.X kernel.

> It's kind of sad, but at the same time, NFS was using the BKL up into
> 2.6.26 or something like that (about a year ago). And your kernel is
> based on something _much_ older.
> 
> That said, even with the BKL, NFS should allow all the actual IO to be
> done in parallel (since the BKL is dropped on scheduling). But it's really
> wasting a _lot_ of CPU time, and that hurts you enormously, even though
> the cold-cache case still seems to win, judging by your other email:
> 
>> Best without patch: 6.02 (systime 1.57)
>>
>> 0.43user 1.57system 0:06.02elapsed 33%CPU (0avgtext+0avgdata 0maxresident)k
>> 5336inputs+0outputs (12major+15472minor)pagefaults 0swaps
>>
>> Best with patch (preload_cache,lstat reduction): 2.69 (systime 10.47)
>>
>> 0.45user 10.47system 0:02.69elapsed 405%CPU (0avgtext+0avgdata 0maxresident)k
>> 5336inputs+0outputs (12major+13985minor)pagefaults 0swaps
> 
> so there's a _huge_ increase in system time (again), but the change from
> 33% CPU -> 405% CPU makes up for it and you get lower elapsed times.
> 
> But that 7x increase in system time really is sad. I do suspect it's
> likely due to spinning on the BKL. And if so, then a modern kernel should
> fix it.

Thanks, I'll try to test next week.

-brandon
On Fri, 8 May 2009, Brandon Casey wrote:
> 
> btw, I've since done some more testing on some centos5.3 boxes we have.
> I get similar results (less ancient kernel 2.6.18).

Yes, 2.6.18 is still much too old to matter from a locking standpoint.

When people initially worried about scalability, the issues were more
about server side stuff and the cached cases. NFS (as a client) is
certainly used on the server side too, but it tends to be a somewhat
secondary worry where only specific parts really matter. So people
worked a lot more on the core kernel, and on local high-performance
filesystem scaling.

Only lately have we been pretty aggressive about finally really getting
rid of the old "single big lock" (BKL) model entirely, or moving
outwards from the core. And while we removed the BKL from the normal NFS
read/write paths long long ago, all the name lookup and directory
handling code still had it until a year ago.

That, btw, is directly explained by perceived scalability issues: NFS is
fairly often used as the backing store for a database and scaling thus
matters there. But databases tend to keep their few big files open and
use pread/pwrite - so pathname lookup is not nearly as significant for
server ops as plain read/write. (Pathname lookup is important for things
like web servers etc, but they rely heavily on caching for that, and the
cached case scales fine).

> I've also scanned through the errata announcements that RedHat has
> released for their kernel updates. A few of them involve NFS.
> Possibly, whatever RedHat modified in the 5.X kernel was also backported
> to the 4.X kernel.

That is very possibly the case. Expanding the BKL usage in some case
could easily trigger the lock getting contention - and the way lock
contention works, once you get just even a small _hint_ of contention,
things often fall off a cliff. The contention slows locking down, which
in turn causes more CPU usage, which in turn causes _more_ contention.

So even a small amount of extra locking - or even just slowing down some
code that was inside the lock - can have catastrophic behavioural
changes when the lock is close to being a problem. You do not get a nice
gradual slowdown at all - you just hit a hard wall.

I guess I should really try to set up some fileserver here at home to
improve my test coverage. And to do better backups (of the little
private data I have that I can't just mirror out to the world by turning
it into an open-source project ;^)

		Linus
Hi, is this the new fashion, to send mails without a subject, all of a sudden being okay only because Linus responded to one? Ciao, Dscho
Nope. It was a mistake. The local try worked, but the send to the
maillist did not. Sorry about this.
-Don
-------- Original Message --------
Subject: Re:
From: Johannes Schindelin <Johannes.Schindelin@gmx.de>
To: Don Slutz <slutz@krl.com>
CC: git@vger.kernel.org
Date: 5/11/2009 4:48 PM
> Hi,
>
> is this the new fashion, to send mails without a subject, all of a sudden
> being okay only because Linus responded to one?
>
> Ciao,
> Dscho
>
>
>
This message should probably go to the msysGit mailing list. Included in
CC.

On Tue, Jun 12, 2012 at 11:12 PM, rohit sood <rohit.s@lycos.com> wrote:
>
> Hi,
> When trying a remote install of the git client using winrm on a Windows
> 2003 box, I get the following error :
>
> 2012-06-12 14:59:05.476 Line 852: Creating symbolic link "E:\apps\prod\Git\libexec/git-core/git-whatchanged.exe" failed, will try a hard link.
> 2012-06-12 14:59:05.523 Line 852: Creating symbolic link "E:\apps\prod\Git\libexec/git-core/git-write-tree.exe" failed, will try a hard link.
> 2012-06-12 14:59:05.570 Line 852: Creating symbolic link "E:\apps\prod\Git\libexec/git-core/git.exe" failed, will try a hard link.
> 2012-06-12 14:59:05.679 Message box (OK):
> Unable to configure the line ending conversion: core.autocrlf true
>
> I use the Git-1.7.10-preview20120409.exe executable.
> I am attempting to script an unattended silent install of the executable
> with the following options using Opscode Chef:
>
> options "/DIR=\"#{node['GIT']['HOME']}\" /VERYSILENT /SUPPRESSMSGBOXES /LOG=\"#{ENV['TEMP']}\\GIT_INSTALL.LOG\""
>
> Please advise
>
> thanks,
> Rohit
> --
> To unsubscribe from this list: send the line "unsubscribe git" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

-- 
*** Please reply-to-all at all times ***
*** (do not pretend to know who is subscribed and who is not) ***
*** Please avoid top-posting. ***

The msysGit Wiki is here: https://github.com/msysgit/msysgit/wiki -
Github accounts are free.

You received this message because you are subscribed to the Google
Groups "msysGit" group.
To post to this group, send email to msysgit@googlegroups.com
To unsubscribe from this group, send email to
msysgit+unsubscribe@googlegroups.com
For more options, and view previous threads, visit this group at
http://groups.google.com/group/msysgit?hl=en_US?hl=en
Johannes Sixt <j.sixt <at> viscovery.net> writes:
>
> Am 2/6/2014 12:54, schrieb konstunn <at> ngs.ru:
> > However I typed the checkout directory in file
> > ..git/info/sparse-checkout by using different formats with
> > and without the leading and the trailing slashes, with and
> > without asterisk after trailing slash, having tried all
> > the possible combinations, but, all the same,
> > nevertheless, the error occurred.
>
> Make sure that you do not use CRLF line terminators in the sparse-checkout
> file.
>
This is it. Right you are. I've just tried to edit "manually" with notepad
.git\info\sparse-checkout and found out that there really was a CRLF line
terminator. After I removed it I managed to succeed in my sparse checkout.
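For anyone hitting the same problem, one way to spot and strip a stray CRLF (a generic shell sketch, not something from the thread):

```shell
# A carriage return shows up as "^M" before the "$" end-of-line marker.
cat -A .git/info/sparse-checkout

# Strip all carriage returns, writing through a temporary file.
tr -d '\r' < .git/info/sparse-checkout > sparse.tmp &&
mv sparse.tmp .git/info/sparse-checkout
```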
Ok great! That indeed fixed the issue.
Although I still don't understand why it didn't work without -solo..
since it didn't work when no instance of Beyond Compare was running as
well.
There must be something not quite right in either Git or Beyond Compare.
On Mon, Sep 8, 2014 at 3:37 PM, Jim Naslund <jnaslund@gmail.com> wrote:
>
> On Sep 8, 2014 7:39 AM, "R. Klomp" <r.klomp@students.uu.nl> wrote:
>>
>> It seems like there's a bug involving git difftool's -d flag and Beyond
>> Compare.
>>
>> When using the difftool Beyond Compare, git difftool <..> <..> -d
>> immediately shuts down once the diff tree has been created. Beyond
>> Compare successfully shows the files that differ.
>> However, since git difftool doesn't wait for Beyond Compare to shut
>> down, all temporary files are gone. Due to this it's impossible to
>> view changes made inside files using the -d flag.
>>
>> I haven't tested if this issue also happens with other difftools.
>>
>> I'm using the latest versions of both Beyond Compare 3 (3.3.12, Pro
>> Edition for Windows) and Git (1.9.4 for Windows).
>>
>>
>> Thanks in advance for your help!
>
> I see the same behavior. For me it had something to do with the diff opening
> in a new tab in an existing window. Adding -solo to difftool.cmd will make
> beyond compare use a new window which fixes the issue for me.
>
>> --
>> To unsubscribe from this list: send the line "unsubscribe git" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
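For reference, the -solo workaround described above can be wired into a custom difftool entry; this is a hedged sketch (the tool name "bc3solo" and the install path are illustrative assumptions; difftool.<tool>.cmd itself is standard git configuration):

```ini
[diff]
	tool = bc3solo
[difftool "bc3solo"]
	# -solo makes Beyond Compare open the comparison in its own window
	# instead of a tab, so difftool blocks until it is closed.
	cmd = \"C:/Program Files/Beyond Compare 3/BCompare.exe\" -solo \"$LOCAL\" \"$REMOTE\"
[difftool]
	prompt = false
```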
On Mon, Sep 08, 2014 at 04:36:49PM +0200, R. Klomp wrote:
> Ok great! That indeed fixed the issue.
> Although I still don't understand why it didn't work without -solo..
> since it didn't work when no instance of Beyond Compare was running as
> well.
>
> There must be something not quite right in either Git or Beyond Compare.
>
> On Mon, Sep 8, 2014 at 3:37 PM, Jim Naslund <jnaslund@gmail.com> wrote:
> >
> > On Sep 8, 2014 7:39 AM, "R. Klomp" <r.klomp@students.uu.nl> wrote:
> >>
> >> It seems like there's a bug involving git difftool's -d flag and Beyond
> >> Compare.
> >>
> >> When using the difftool Beyond Compare, git difftool <..> <..> -d
> >> immediately shuts down once the diff tree has been created. Beyond
> >> Compare successfully shows the files that differ.
> >> However, since git difftool doesn't wait for Beyond Compare to shut
> >> down, all temporary files are gone. Due to this it's impossible to
> >> view changes made inside files using the -d flag.
> >>
> >> I haven't tested if this issue also happens with other difftools.
> >>
> >> I'm using the latest versions of both Beyond Compare 3 (3.3.12, Pro
> >> Edition for Windows) and Git (1.9.4 for Windows).
> >>
> >>
> >> Thanks in advance for your help!
> >
> > I see the same behavior. For me it had something to do with the diff opening
> > in a new tab in an existing window. Adding -solo to difftool.cmd will make
> > beyond compare use a new window which fixes the issue for me.
Interesting. Would it be worth changing difftool to use -solo by default, or
are there any downsides to doing so?
Is -solo a new feature that only exists in new versions of Beyond Compare?
I would be okay saying that the user should use a fairly new version.
Can we rely on -solo being available on all platforms?
If so, I'd be okay with changing the default if there are no other downsides.
The --dir-diff feature is not the only one that needs this blocking behavior.
Does this issue also happen in the normal difftool mode without -d?
--
David
I couldn't find information about whether the -solo feature is
available in all Beyond Compare versions.
At the least I can say that it is available in version 3 for Windows,
since that is the version that we're using.
This issue does not occur when using the normal difftool (command: git
difftool), which is odd and indicates that something must be wrong in
either Git or Beyond Compare.
On Wed, Sep 10, 2014 at 2:00 AM, David Aguilar <davvid@gmail.com> wrote:
> On Mon, Sep 08, 2014 at 04:36:49PM +0200, R. Klomp wrote:
>> Ok great! That indeed fixed the issue.
>> Although I still don't understand why it didn't work without -solo..
>> since it didn't work when no instance of Beyond Compare was running as
>> well.
>>
>> There must be something not quite right in either Git or Beyond Compare.
>>
>> On Mon, Sep 8, 2014 at 3:37 PM, Jim Naslund <jnaslund@gmail.com> wrote:
>> >
>> > On Sep 8, 2014 7:39 AM, "R. Klomp" <r.klomp@students.uu.nl> wrote:
>> >>
>> >> It seems like there's a bug involving git difftool's -d flag and Beyond
>> >> Compare.
>> >>
>> >> When using the difftool Beyond Compare, git difftool <..> <..> -d
>> >> immediately shuts down once the diff tree has been created. Beyond
>> >> Compare successfully shows the files that differ.
>> >> However, since git difftool doesn't wait for Beyond Compare to shut
>> >> down, all temporary files are gone. Due to this it's impossible to
>> >> view changes made inside files using the -d flag.
>> >>
>> >> I haven't tested if this issue also happens with other difftools.
>> >>
>> >> I'm using the latest versions of both Beyond Compare 3 (3.3.12, Pro
>> >> Edition for Windows) and Git (1.9.4 for Windows).
>> >>
>> >>
>> >> Thanks in advance for your help!
>> >
>> > I see the same behavior. For me it had something to do with the diff opening
>> > in a new tab in an existing window. Adding -solo to difftool.cmd will make
>> > beyond compare use a new window which fixes the issue for me.
>
> Interesting. Would it be worth changing difftool to use -solo by default, or
> are there any downsides to doing so?
>
> Is -solo a new feature that only exists in new versions of beyond compare?
> I would be okay saying that the user should use a fairly new version.
>
> Can we rely on -solo being available on all platforms?
> If so, I'd be okay with changing the default if there are no other downsides.
>
> The --dir-diff feature is not the only one that needs this blocking behavior.
> Does this issue also happen in the normal difftool mode without -d?
> --
> David
On Fri, Mar 13, 2015 at 8:34 AM, <cody.taylor@maternityneighborhood.com> wrote:
> From 3e4e22e93bf07355b40ba0abcb3a15c4941cfee7 Mon Sep 17 00:00:00 2001
> From: Cody A Taylor <codemister99@yahoo.com>
> Date: Thu, 12 Mar 2015 20:36:44 -0400
> Subject: [PATCH] git prompt: Use toplevel to find untracked files.
>
> The __git_ps1() prompt function would not show an untracked
> state when the current working directory was not a parent of
> the untracked file.
>
> Signed-off-by: Cody A Taylor <codemister99@yahoo.com>
> ---
> contrib/completion/git-prompt.sh | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/contrib/completion/git-prompt.sh b/contrib/completion/git-prompt.sh
> index 214e859f99e7..f0d8a2669236 100644
> --- a/contrib/completion/git-prompt.sh
> +++ b/contrib/completion/git-prompt.sh
> @@ -487,7 +487,8 @@ __git_ps1 ()
>
> if [ -n "${GIT_PS1_SHOWUNTRACKEDFILES-}" ] &&
> [ "$(git config --bool bash.showUntrackedFiles)" != "false" ] &&
> - git ls-files --others --exclude-standard --error-unmatch -- '*' >/dev/null 2>/dev/null
> + git ls-files --others --exclude-standard --error-unmatch -- \
> + "$(git rev-parse --show-toplevel)/*" >/dev/null 2>/dev/null
Or make it a bit simpler, just replace '*' with ':/*'.
--
Duy
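To illustrate Duy's suggestion, here is a sketch of what the ':/*' pathspec buys you (a throwaway repository is created just for the demo): the ":/" magic anchors the pathspec at the top of the working tree, so untracked files are found even when the prompt code runs in a subdirectory.

```shell
# Throwaway repo with an untracked file at the top level:
cd "$(mktemp -d)"
git init -q
touch untracked-file
mkdir sub && cd sub

# From inside sub/, a plain '*' only matches under the current directory,
# but ':/*' matches from the repository root and reports the untracked file:
git ls-files --others --exclude-standard -- ':/*'
```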
On Wed, Mar 18, 2015 at 1:33 PM, Alessandro Zanardi
<pensierinmusica@gmail.com> wrote:
> Here are other sources describing the issue:
>
> http://stackoverflow.com/questions/21109672/git-ignoring-icon-files-because-of-icon-rule
>
> http://blog.bitfluent.com/post/173740409/ignoring-icon-in-gitignore
>
> Sorry to bother the Git development team with such a minor issue, I just
> wanted to know if it's already been fixed.
I do not ship your ~/.gitignore_global file as part of my software,
so the problem is not mine to fix in the first place ;-)
Where did you get that file from? We need to find whoever is
responsible and notify them so that these users who are having
the issue will be helped.
Thanks.
On Wed, Mar 18, 2015 at 1:45 PM, Junio C Hamano <gitster@pobox.com> wrote:
> On Wed, Mar 18, 2015 at 1:33 PM, Alessandro Zanardi
> <pensierinmusica@gmail.com> wrote:
>> Here are other sources describing the issue:
>>
>> http://stackoverflow.com/questions/21109672/git-ignoring-icon-files-because-of-icon-rule
>>
>> http://blog.bitfluent.com/post/173740409/ignoring-icon-in-gitignore
>>
>> Sorry to bother the Git development team with such a minor issue, I just
>> wanted to know if it's already been fixed.
>
> I do not ship your ~/.gitignore_global file as part of my software,
> so the problem is not mine to fix in the first place ;-)

Maybe this can be understood as a critique of the .gitignore format
specifier for paths. (Maybe not, I dunno.)

So the `gitignore` script/executable which generated your .gitignore
file for you introduced a bug that also ignores files in "Icons/...."
when all you wanted was to ignore the file "Icon\r\r". (Mind that \r
is an escape sequence used here to explain the meaning; gitignore
cannot understand it. It sometimes also shows up as ^M^M depending on
the operating system/editor used.)

But as you can see, there have been several attempts at fixing it
right, and https://github.com/github/gitignore/pull/334 eventually got
the right fix. (It was merged in 2012, which has been a while now.)
Maybe you'd want to use a new version of this gitignore script to
regenerate your .gitignore?

> Where did you get that file from? We need to find whoever is
> responsible and notify them so that these users who are having
> the issue will be helped.

Given that this is part of https://github.com/github/gitignore, which
is the official collection of .gitignore files from GitHub, the
company, we could ask Jeff or Michael if it is urgent. The actual fix
being merged 3 years ago makes me believe it is not urgent, though.

Thanks,
Stefan
On Wed, Mar 18, 2015 at 02:06:22PM -0700, Stefan Beller wrote:
> > Where did you get that file from? We need to find whoever is
> > responsible and notify them so that these users who are having
> > the issue will be helped.
>
> Given that this is part of https://github.com/github/gitignore
> which is the official collection of .gitignore files from Github,
> the company, we could ask Jeff or Michael if it is urgent.
> The actual fix being merged 3 years ago makes me believe
> it is not urgent though.
It looks like the fix they have in that repo does the right thing[1],
but for reference, you are much more likely to get results by creating
an issue or PR on that repository, rather than asking me.
-Peff
[1] The double-CR fix works because we strip a single CR from the end of
the line (as a convenience for CRLF systems), and then the remaining
CR is syntactically significant. But I am surprised that quoting
like:
printf '"Icon\r"' >.gitignore
does not seem to work.
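To make the footnote concrete, here is a byte-level sketch (a throwaway file stands in for .gitignore): the double-CR entry survives because Git chomps exactly one CR as CRLF line-ending tolerance, leaving one CR to match the literal "Icon\r" file name.

```shell
# "Icon" + CR + CR + LF: after Git strips one CR as part of the line
# terminator, the pattern that remains is "Icon" followed by a literal CR.
printf 'Icon\r\r\n' > gitignore-demo
od -c gitignore-demo
```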
On Wed, Mar 18, 2015 at 05:17:16PM -0400, Jeff King wrote:
> [1] The double-CR fix works because we strip a single CR from the end of
> the line (as a convenience for CRLF systems), and then the remaining
> CR is syntactically significant. But I am surprised that quoting
> like:
>
> printf '"Icon\r"' >.gitignore
>
> does not seem to work.
Answering myself: we don't do quoting like this in .gitignore. We allow
backslashing to escape particular characters, like trailing whitespace.
So in theory:
Icon\\r
(where "\r" is a literal CR) would work. But it doesn't, because the
CRLF chomping happens separately, and CR is therefore a special case. I
suspect you could not .gitignore a file with a literal LF in it at all
(and I equally suspect that nobody cares in practice).
-Peff
On Wed, Mar 18, 2015 at 2:28 PM, Jeff King <peff@peff.net> wrote:
> On Wed, Mar 18, 2015 at 05:17:16PM -0400, Jeff King wrote:
>
>> [1] The double-CR fix works because we strip a single CR from the end of
>> the line (as a convenience for CRLF systems), and then the remaining
>> CR is syntactically significant. But I am surprised that quoting
>> like:
>>
>> printf '"Icon\r"' >.gitignore
>>
>> does not seem to work.
>
> Answering myself: we don't do quoting like this in .gitignore. We allow
> backslashing to escape particular characters, like trailing whitespace.
> So in theory:
>
> Icon\\r
>
> (where "\r" is a literal CR) would work. But it doesn't, because the
> CRLF chomping happens separately, and CR is therefore a special case. I
> suspect you could not .gitignore a file with a literal LF in it at all
> (and I equally suspect that nobody cares in practice).
What does the Icon^M try to catch, exactly? Is it a file? Is it a directory?
Is it "anything that begins with Icon^M"?
I am wondering if we need an opposite of '/' prefix in the .gitignore file
to say "the pattern does not match a directory, only a file".
On Wed, Mar 18, 2015 at 2:33 PM, Junio C Hamano <gitster@pobox.com> wrote:
> What does the Icon^M try to catch, exactly? Is it a file? Is it a directory?
> Is it "anything that begins with Icon^M"?

It seems to be a special hidden file on Macs for UI convenience.

> On Apr 25, 2005, at 6:21 AM, Peter N. Lundblad wrote:
>
> The Icon^M file in a directory gives that directory a custom icon in
> the Finder. They are a holdover from MacOS 9 but there are still a lot
> of them out there. The "new" OS X format for icons are .icns files but
> I'm not sure if you can do custom file directory icons with them (you
> probably can, I just haven't found the docs yet).
On 08.04.2015 at 22:44, Mamta Upadhyay wrote:
> Hi git team,

(CC'ing msysgit as this is the git for windows list)

Hi Mamta,

> I tried to research everywhere on a issue I am facing and emailing you
> as the last resource. This is critical for me and I needed your help.
>
> I am trying to run the latest git 1.9.5 installer on windows. When I
> run strings on libneon-25.dll it shows this:
>
> ./libneon-25.dll: OpenSSL 1.0.1h 5 Jun 2014
>
> But when I load this dll in dependency walker, it picks up
> msys-openssl 1.0.1m and has no trace of openssl-1.0.1h. My questions
> to you:
>
> 1. Is libneon-25.dll statically linked with openssl-1.0.1h?
> 2. If not, where is the reference to 1.0.1h coming from?

I would be surprised if we link openssl statically into libneon. I
guess libneon just reports against which openssl version it was
*built*.

> I am asked to rebuild git with libneon-25.dll linked against
> openssl-1.0.1m. But I am having a feeling that this is not needed,
> since libneon is already picking the latest openssl version. Can you
> please confirm?

You can download the development environment for git for windows here
[1]. After installation, check out the msys branch and then you can
try to recompile libneon using /src/subversion/release.sh.

[1]: https://github.com/msysgit/msysgit/releases/download/Git-1.9.5-preview20150319/msysGit-netinstall-1.9.5-preview20150319.exe

Hope that helps
Thomas

--
*** Please reply-to-all at all times ***
*** (do not pretend to know who is subscribed and who is not) ***
*** Please avoid top-posting. ***
The msysGit Wiki is here: https://github.com/msysgit/msysgit/wiki - Github accounts are free.

You received this message because you are subscribed to the Google Groups "msysGit" group.
To post to this group, send email to msysgit@googlegroups.com
To unsubscribe from this group, send email to msysgit+unsubscribe@googlegroups.com
For more options, and view previous threads, visit this group at
http://groups.google.com/group/msysgit?hl=en_US?hl=en
---
You received this message because you are subscribed to the Google Groups "Git for Windows" group.
To unsubscribe from this group and stop receiving emails from it, send an email to msysgit+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
On Wed, 08 Apr 2015 23:58:58 +0200
Thomas Braun <thomas.braun@virtuell-zuhause.de> wrote:

[...]

> > I am trying to run the latest git 1.9.5 installer on windows. When I
> > run strings on libneon-25.dll it shows this:
> >
> > ./libneon-25.dll: OpenSSL 1.0.1h 5 Jun 2014
> >
> > But when I load this dll in dependency walker, it picks up
> > msys-openssl 1.0.1m and has no trace of openssl-1.0.1h. My questions
> > to you:
> >
> > 1. Is libneon-25.dll statically linked with openssl-1.0.1h?
> > 2. If not, where is the reference to 1.0.1h coming from?
>
> I would be surprised if we link openssl statically into libneon. I
> guess libneon just reports against which openssl version it was
> *built*.
>
> > I am asked to rebuild git with libneon-25.dll linked against
> > openssl-1.0.1m. But I am having a feeling that this is not needed,
> > since libneon is already picking the latest openssl version. Can you
> > please confirm?
>
> You can download the development environment for git for windows here
> [1]. After installation, check out the msys branch and then you can
> try to recompile libneon using /src/subversion/release.sh.
>
> [1]:
> https://github.com/msysgit/msysgit/releases/download/Git-1.9.5-preview20150319/msysGit-netinstall-1.9.5-preview20150319.exe

[...]

JFTR, the discussion about the same issue has been brought up on
git-users as well [2]. (People should really somehow use the basics of
netiquette and mention in their posts where they cross-post things.)

2. https://groups.google.com/d/topic/git-users/WXyWE5_JfNc/discussion
On Wed, Aug 5, 2015 at 7:47 PM, Ivan Chernyavsky <camposer@yandex.ru> wrote:
> Dear community,
>
> For some time I'm wondering why there's no "--grep" option to the "git
> branch" command, which would request to print only branches having
> specified string/regexp in their history.

Probably because nobody is interested and steps up to do it. The lack
of response to your mail is a sign. Maybe you can try to make a patch?
I imagine it would not be so different from the current --contains
code, but this time we need to look into commits, not just commit id.

> So for example:
>
> $ git branch -r --grep=BUG12345
>
> should be roughly equivalent to following expression I'm using now for the same task:
>
> $ for r in `git rev-list --grep=BUG12345 --remotes=origin`; do git branch -r --list --contains=$r 'origin/*'; done | sort -u
>
> Am I missing something, is there some smarter/simpler way to do this?
>
> Thanks a lot in advance!
>
> --
> Ivan

--
Duy
Duy Nguyen <pclouds@gmail.com> writes:

> On Wed, Aug 5, 2015 at 7:47 PM, Ivan Chernyavsky <camposer@yandex.ru> wrote:
>> Dear community,
>>
>> For some time I'm wondering why there's no "--grep" option to the
>> "git branch" command, which would request to print only branches
>> having specified string/regexp in their history.
>
> Probably because nobody is interested and steps up to do it. The lack
> of response to your mail is a sign. Maybe you can try to make a patch? I
> imagine it would not be so different from current --contains code, but
> this time we need to look into commits, not just commit id.

That is a dangerous thought. I'd understand if it were internally a
two-step process, i.e. (1) the first pass finds commits that hit the
--grep criteria and then (2) the second pass does "--contains" for all
the hits found in the first pass using existing code, but still, this
operation is bound to dig all the way through to the root of the
history when asked to find something that does not exist.

>> So for example:
>>
>> $ git branch -r --grep=BUG12345
>>
>> should be roughly equivalent to following expression I'm using now for the same task:
>>
>> $ for r in `git rev-list --grep=BUG12345 --remotes=origin`; do git branch -r --list --contains=$r 'origin/*'; done | sort -u

You should at least feed all --contains to a single invocation of
"git branch". They are designed to be OR'ed together.
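Junio's last point can be sketched as a single invocation (demonstrated in a throwaway repository; BUG12345 and the branch name are hypothetical): each hit from rev-list becomes its own --contains option, and one "git branch" call ORs them together, replacing the per-commit loop plus "sort -u".

```shell
# Throwaway repo: one commit mentioning BUG12345, reachable from two branches.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m 'BUG12345: fix the bug'
git branch bugfix-branch
git commit -q --allow-empty -m 'unrelated follow-up'

# One git-branch call; every matching commit becomes a --contains option:
git branch --list \
    $(git rev-list --grep=BUG12345 --all | sed 's/^/--contains=/')
```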
Good day,hoping you read this email and respond to me in good time.I do not intend to solicit for funds but your time and energy in using my own resources to assist the less privileged.I am medically confined at the moment hence I request your indulgence. I will give you a comprehensive brief once I hear from you. Please forward your response to my private email address: gudworks104@yahoo.com Thanks and reply. Robert Grondahl
On Mon, Apr 11, 2016 at 12:04 PM, <miwilliams@google.com> wrote:
> From 7201fe08ede76e502211a781250c9a0b702a78b2 Mon Sep 17 00:00:00 2001
> From: Mike Williams <miwilliams@google.com>
> Date: Mon, 11 Apr 2016 14:18:39 -0400
> Subject: [PATCH 1/1] wt-status: Remove '!!' from
> wt_status_collect_changed_cb
>
> The wt_status_collect_changed_cb function uses an extraneous double negation
> (!!)

How is a !! erroneous? It serves the purpose of mapping an integer
value (-1,0,1,2,3,4) to a boolean (0, 1, or a real bit in a bit field).

> when determining whether or not a submodule has new commits.
>
> Signed-off-by: Mike Williams <miwilliams@google.com>
> ---
> wt-status.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/wt-status.c b/wt-status.c
> index ef74864..b955179 100644
> --- a/wt-status.c
> +++ b/wt-status.c
> @@ -431,7 +431,7 @@ static void wt_status_collect_changed_cb(struct
> diff_queue_struct *q,
> d->worktree_status = p->status;
> d->dirty_submodule = p->two->dirty_submodule;
> if (S_ISGITLINK(p->two->mode))
> - d->new_submodule_commits = !!hashcmp(p->one->sha1,
> p->two->sha1);
> + d->new_submodule_commits = hashcmp(p->one->sha1,
> p->two->sha1);
> }
> }
>
> --
> 2.8.0
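Stefan's point about !! can be sketched with shell arithmetic, which shares C's semantics for the ! operator here: a single ! maps any nonzero value to 0 (and 0 to 1), so doubling it collapses an arbitrary integer such as a hashcmp() result into a clean 0/1.

```shell
# "!!" normalizes any integer to 0 or 1, exactly what you want before
# assigning into a boolean-like field or a one-bit bitfield member:
echo $(( !!5 ))     # nonzero  -> 1
echo $(( !!0 ))     # zero     -> 0
echo $(( !!-1 ))    # negative -> 1
```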
On 01/25/2017 01:21 AM, Stefan Beller wrote:
>>
>>> Do not PGP sign your patch, at least *for now*. (...)
>>
>
> And maybe these 2 small words are the bug in the documentation?
> Shall we drop the "at least for now" part, like so:
>
> ---8<---
> From 2c4fe0e67451892186ff6257b20c53e088c9ec67 Mon Sep 17 00:00:00 2001
> From: Stefan Beller <sbeller@google.com>
> Date: Tue, 24 Jan 2017 16:19:13 -0800
> Subject: [PATCH] SubmittingPatches: drop temporal reference for PGP signing
>
> Signed-off-by: Stefan Beller <sbeller@google.com>
> ---
> Documentation/SubmittingPatches | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches
> index 08352deaae..28da4ad2d4 100644
> --- a/Documentation/SubmittingPatches
> +++ b/Documentation/SubmittingPatches
> @@ -216,12 +216,12 @@ that it will be postponed.
> Exception: If your mailer is mangling patches then someone may ask
> you to re-send them using MIME, that is OK.
>
> -Do not PGP sign your patch, at least for now. Most likely, your
> -maintainer or other people on the list would not have your PGP
> -key and would not bother obtaining it anyway. Your patch is not
> -judged by who you are; a good patch from an unknown origin has a
> -far better chance of being accepted than a patch from a known,
> -respected origin that is done poorly or does incorrect things.
> +Do not PGP sign your patch. Most likely, your maintainer or other
> +people on the list would not have your PGP key and would not bother
> +obtaining it anyway. Your patch is not judged by who you are; a good
> +patch from an unknown origin has a far better chance of being accepted
> +than a patch from a known, respected origin that is done poorly or
> +does incorrect things.
>
> If you really really really really want to do a PGP signed
> patch, format it as "multipart/signed", not a text/plain message

It definitely is an improvement.
Though it would still leave me puzzled when finding a section about
signing just below. Is changing heading (5) too big a change? Like so:

diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches
index 08352de..71898dc 100644
--- a/Documentation/SubmittingPatches
+++ b/Documentation/SubmittingPatches
@@ -246,7 +246,7 @@ patch.
 *2* The mailing list: git@vger.kernel.org

-(5) Sign your work
+(5) Certify your work by signing off.

 To improve tracking of who did what, we've borrowed the "sign-off"
 procedure from the Linux kernel project on patches
On Tue, Jan 24, 2017 at 4:43 PM, Cornelius Weig
<cornelius.weig@tngtech.com> wrote:
> -(5) Sign your work
> +(5) Certify your work by signing off.
That sounds better than what I proposed.
Thanks,
Stefan
On Tue, Jan 24, 2017 at 4:21 PM, Stefan Beller <sbeller@google.com> wrote:
>
> +Do not PGP sign your patch. Most likely, your maintainer or other
> +people on the list would not have your PGP key and would not bother
> +obtaining it anyway.
I think even that could be further simplified - by just removing all
comments about pgp email
Because it's not that the PGP keys would be hard to get, it's that
PGP-signed email is an abject failure, and nobody sane does it.
Google for "phil zimmerman doesn't use pgp email".
It's dead. So I'm not sure it's worth mentioning at all.
You might as well talk about how you shouldn't use EBCDIC encoding for
your patches, or about why git assumes that an email address has an
'@' sign in it, instead of being an UUCP bang path address.
Linus
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Tue, Jan 24, 2017 at 4:21 PM, Stefan Beller <sbeller@google.com> wrote:
> >
> > +Do not PGP sign your patch. Most likely, your maintainer or other
> > +people on the list would not have your PGP key and would not bother
> > +obtaining it anyway.
>
> I think even that could be further simplified - by just removing all
> comments about pgp email
>
> Because it's not that the PGP keys would be hard to get, it's that
> PGP-signed email is an abject failure, and nobody sane does it.
>
> Google for "phil zimmerman doesn't use pgp email".
>
> It's dead. So I'm not sure it's worth mentioning at all.

I disagree, we still see it, and Debian still advocates it.
In fact, we may also want to mention S/MIME in the same breath:
https://public-inbox.org/git/20170110004031.57985-2-hansenr@google.com/

Richard's more recent mails seem to indicate he's reformed :)
To subscribe to the git mailing list, send the email to
majordomo@vger.kernel.org, not the mailing list itself.
On Sat, Nov 11, 2017 at 6:21 PM, <hsed@unimetic.com> wrote:
> subscribe git
did you mean majordomo@kernel.org instead?
On Mon, Nov 20, 2017 at 7:10 AM, Viet Nguyen <ntviet18@gmail.com> wrote:
> unsubscribe git
On 27.02.2018 at 02:18, Alan Gage wrote:
> Hello, I recently noticed a bug involving GitBash and Python. I was
> running a function that would post the system time once every second
> using a while loop but the text was only sent after the while loop
> ended due to a timer I had set. Essentially, instead of it being
> entered every second into the terminal, it was entered all at once,
> when the loop ended. I tried this with the Command Line, among other
> things, and it worked as intended, with the text being entered every
> second. This is on Windows 10 Pro with the Fall Creators Update and
> the most recent version of GitBash.

Python buffers its output by default. On terminals it enables line
buffering, i.e. the accumulated output is flushed when a newline
character is reached. Otherwise it uses a system-dependent buffer size
in the range of a few kilobytes.

You can check if your output is a terminal e.g. with:

    python -c "import sys; print(sys.stdout.isatty())"

You can disable buffering by running your script with "python -u".
This discussion mentions more options:

    https://stackoverflow.com/questions/107705/disable-output-buffering

You can also start bash on the command line. I do wonder why Git CMD
seems to be started in what passes as a terminal, while Git BASH is
not, though. You may want to check out https://gitforwindows.org/ and
report your findings using their issue tracker.

(This mailing list here, git@vger.kernel.org, is mostly used for
discussing Git itself, not so much about extra tools like bash or
Python that are packaged with Git for Windows.)

René
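René's description can be sketched from the shell (assuming a python3 on PATH): with -u, each print reaches a pipe immediately instead of sitting in the block buffer until the interpreter exits.

```shell
# Without -u (and with stdout being a pipe, not a terminal), these three
# lines would be flushed in one burst at exit; -u forces unbuffered stdout,
# so each line is written as soon as print() runs:
python3 -u -c '
import time
for i in range(3):
    print("tick", i)
    time.sleep(0.1)
'
```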
On 4/27/2018 2:19 PM, Elijah Newren wrote:
> From: Elijah Newren <newren@gmail.com>
>
> On Thu, Apr 26, 2018 at 5:54 PM, Ben Peart <peartben@gmail.com> wrote:
>
>> Can you write the documentation that clearly explains the exact behavior you
>> want? That would kill two birds with one stone... :)
>
> Sure, something like the following is what I envision, and I've tried to
> include the suggestion from Junio to document the copy behavior in the
> merge-recursive documentation.
>
> -- 8< --
> Subject: [PATCH] fixup! merge: Add merge.renames config setting
>
> ---
> Documentation/merge-config.txt | 3 +--
> Documentation/merge-strategies.txt | 5 +++--
> merge-recursive.c | 8 ++++++++
> 3 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/merge-config.txt b/Documentation/merge-config.txt
> index 59848e5634..662c2713ca 100644
> --- a/Documentation/merge-config.txt
> +++ b/Documentation/merge-config.txt
> @@ -41,8 +41,7 @@ merge.renameLimit::
> merge.renames::
> Whether and how Git detects renames. If set to "false",
> rename detection is disabled. If set to "true", basic rename
> - detection is enabled. If set to "copies" or "copy", Git will
> - detect copies, as well. Defaults to the value of diff.renames.
> + detection is enabled. Defaults to the value of diff.renames.
>
> merge.renormalize::
> Tell Git that canonical representation of files in the
> diff --git a/Documentation/merge-strategies.txt b/Documentation/merge-strategies.txt
> index 1e0728aa12..aa66cbe41e 100644
> --- a/Documentation/merge-strategies.txt
> +++ b/Documentation/merge-strategies.txt
> @@ -23,8 +23,9 @@ recursive::
> causing mismerges by tests done on actual merge commits
> taken from Linux 2.6 kernel development history.
> Additionally this can detect and handle merges involving
> - renames. This is the default merge strategy when
> - pulling or merging one branch.
> + renames, but currently cannot make use of detected
> + copies. This is the default merge strategy when pulling
> + or merging one branch.
> +
> The 'recursive' strategy can take the following options:
>
> diff --git a/merge-recursive.c b/merge-recursive.c
> index 6cc4404144..b618f134d2 100644
> --- a/merge-recursive.c
> +++ b/merge-recursive.c
> @@ -564,6 +564,14 @@ static struct string_list *get_renames(struct merge_options *o,
> opts.flags.recursive = 1;
> opts.flags.rename_empty = 0;
> opts.detect_rename = merge_detect_rename(o);
> + /*
> + * We do not have logic to handle the detection of copies. In
> + * fact, it may not even make sense to add such logic: would we
> + * really want a change to a base file to be propagated through
> + * multiple other files by a merge?
> + */
> + if (opts.detect_rename > DIFF_DETECT_RENAME)
> + opts.detect_rename = DIFF_DETECT_RENAME;
> opts.rename_limit = o->merge_rename_limit >= 0 ? o->merge_rename_limit :
> o->diff_rename_limit >= 0 ? o->diff_rename_limit :
> 1000;
>
Thanks Elijah. I've applied this patch and reviewed and tested it. It
works and addresses the concerns around the settings inheritance from
diff.renames. I still _prefer_ the simpler model that doesn't do the
partial inheritance but I can use this model as well.
I'm unsure on the protocol here. Should I incorporate this patch and
submit a reroll or can it just be applied as is?
On Mon, Apr 30, 2018 at 6:11 AM, Ben Peart <peartben@gmail.com> wrote:
> On 4/27/2018 2:19 PM, Elijah Newren wrote:
>>
>> From: Elijah Newren <newren@gmail.com>
>>
>> On Thu, Apr 26, 2018 at 5:54 PM, Ben Peart <peartben@gmail.com> wrote:
>>
>>> Can you write the documentation that clearly explains the exact behavior
>>> you
>>> want? That would kill two birds with one stone... :)
>>
>>
>> Sure, something like the following is what I envision, and I've tried to
>> include the suggestion from Junio to document the copy behavior in the
>> merge-recursive documentation.
>>
<snip>
>
> Thanks Elijah. I've applied this patch and reviewed and tested it. It works
> and addresses the concerns around the settings inheritance from
> diff.renames. I still _prefer_ the simpler model that doesn't do the
> partial inheritance but I can use this model as well.
>
> I'm unsure on the protocol here. Should I incorporate this patch and submit
> a reroll or can it just be applied as is?

I suspect you'll want to re-roll anyway, to base your series on
en/rename-directory-detection-reboot instead of on master. (Junio
plans to merge it down to next, and your series has four different
merge conflicts with it.)

There are two other loose ends with this series that Junio will need
to weigh in on:

- I'm obviously a strong proponent of the inherited setting, but Junio
may change his mind after reading Dscho's arguments against it (or
after reading my arguments for it).

- I like the setting as-is, and think we could allow a "copy" setting
for merge.renames to specify that the post-merge diffstat should
detect copies (not part of your series, but a useful addition I'd like
to tackle afterwards). However, Junio had comments in
xmqqwox19ohw.fsf@gitster-ct.c.googlers.com about merge.renames
handling the scoring as well, like -Xfind-renames. Those sound
incompatible to me for a single setting, and I'm unsure if Junio would
resolve them the way I do or still feels strongly about the scoring.
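For readers following along, the knob under discussion would be used roughly like this. This is a sketch based on the merge-config.txt hunk earlier in the thread, not settled behavior; whether a given Git honors merge.renames depends on this series being merged. A throwaway repository is used so the commands are runnable as-is.

```shell
# Throwaway repo just to show the configuration surface:
cd "$(mktemp -d)"
git init -q

# Disable rename detection for merges in this repository:
git config merge.renames false

# Switch to basic rename detection (left unset, the proposed default
# falls back to diff.renames):
git config merge.renames true
git config merge.renames

# A one-shot override for a single merge would look like this
# ("topic" is a hypothetical branch, shown as a comment to keep
# the sketch runnable):
#   git -c merge.renames=false merge topic
```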
On 4/30/2018 12:12 PM, Elijah Newren wrote:
> On Mon, Apr 30, 2018 at 6:11 AM, Ben Peart <peartben@gmail.com> wrote:
>> On 4/27/2018 2:19 PM, Elijah Newren wrote:
>>>
>>> From: Elijah Newren <newren@gmail.com>
>>>
>>> On Thu, Apr 26, 2018 at 5:54 PM, Ben Peart <peartben@gmail.com> wrote:
>>>
>>>> Can you write the documentation that clearly explains the exact behavior
>>>> you
>>>> want? That would kill two birds with one stone... :)
>>>
>>>
>>> Sure, something like the following is what I envision, and I've tried to
>>> include the suggestion from Junio to document the copy behavior in the
>>> merge-recursive documentation.
>>>
> <snip>
>>
>> Thanks Elijah. I've applied this patch and reviewed and tested it. It works
>> and addresses the concerns around the settings inheritance from
>> diff.renames. I still _prefer_ the simpler model that doesn't do the
>> partial inheritance but I can use this model as well.
>>
>> I'm unsure on the protocol here. Should I incorporate this patch and submit
>> a reroll or can it just be applied as is?
>
> I suspect you'll want to re-roll anyway, to base your series on
> en/rename-directory-detection-reboot instead of on master. (Junio
> plans to merge it down to next, and your series has four different
> merge conflicts with it.)
>
> There are two other loose ends with this series that Junio will need
> to weigh in on:
>
> - I'm obviously a strong proponent of the inherited setting, but Junio
> may change his mind after reading Dscho's arguments against it (or
> after reading my arguments for it).
>
> - I like the setting as-is, and think we could allow a "copy" setting
> for merge.renames to specify that the post-merge diffstat should
> detect copies (not part of your series, but a useful addition I'd like
> to tackle afterwards). However, Junio had comments in
> xmqqwox19ohw.fsf@gitster-ct.c.googlers.com about merge.renames
> handling the scoring as well, like -Xfind-renames. Those sound
> incompatible to me for a single setting, and I'm unsure if Junio would
> resolve them the way I do or still feels strongly about the scoring.
>
I think this patch series (including Elijah's fixup!) improves the
situation from where we were and it provides the necessary functionality
to solve the problem I started out to solve. While there are other
changes that could be made, I think they should be done in separate
follow up patches.
I'm happy to reroll this incorporating the fixup! so that we can make
progress. Junio, would you prefer I reroll this based on
en/rename-directory-detection-reboot or master?
unsubscribe git
I've sent this same email 3 times. I don't think it works. I'm
researching this morning how to unsubscribe from this git group.
CODY KRATZER WEB DEVELOPMENT MANAGER
866-344-3875 x145
CODY@LIGHTINGNEWYORK.COM
M - F 9 - 5:30
On Wed, Jan 23, 2019 at 5:51 AM Christopher Hagler
<haglerchristopher@gmail.com> wrote:
>
> Unsubscribe git
>
> Sent from my iPhone
On 23.01.2019 at 15:16, Cody Kratzer wrote:
> I've sent this same email 3 times. I don't think it works. I'm
> researching this morning how to unsubscribe from this git group.

Hi Cody,

https://git-scm.com/community says to subscribe you should send an email
with body content

    subscribe git

to majordomo@vger.kernel.org, so maybe sending

    unsubscribe git

to *that* address does unsubscribe you.
Send the email to this address
Majordomo@vger.kernel.org and it will work
Sent from my iPhone
> On Jan 23, 2019, at 8:16 AM, Cody Kratzer <cody@lightingnewyork.com> wrote:
>
> I've sent this same email 3 times. I don't think it works. I'm
> researching this morning how to unsubscribe from this git group.
>
> CODY KRATZER WEB DEVELOPMENT MANAGER
> 866-344-3875 x145
> CODY@LIGHTINGNEWYORK.COM
> M - F 9 - 5:30
>
>
> On Wed, Jan 23, 2019 at 5:51 AM Christopher Hagler
> <haglerchristopher@gmail.com> wrote:
>>
>> Unsubscribe git
>>
>> Sent from my iPhone
On January 23, 2019 11:00, Christopher Hagler wrote:
> Send the email to this address
> Majordomo@vger.kernel.org and it will work
>
> On Jan 23, 2019, at 8:16 AM, Cody Kratzer <cody@lightingnewyork.com> wrote:
> > I've sent this same email 3 times. I don't think it works. I'm
> > researching this morning how to unsubscribe from this git group.
> >
> > CODY KRATZER WEB DEVELOPMENT MANAGER
> > 866-344-3875 x145
> > CODY@LIGHTINGNEWYORK.COM
> > M - F 9 - 5:30
> >
> > On Wed, Jan 23, 2019 at 5:51 AM Christopher Hagler
> > <haglerchristopher@gmail.com> wrote:
> >>
> >> Unsubscribe git

Reference information for the mailing lists is available here:
http://vger.kernel.org/vger-lists.html#git
Hi,
don't send this to git@vger.kernel.org. Send it to
majordomo@vger.kernel.org instead.
Thanks,
Johannes
On Wed, 23 Jan 2019, Cody Kratzer wrote:
> I've sent this same email 3 times. I don't think it works. I'm
> researching this morning how to unsubscribe from this git group.
>
> CODY KRATZER WEB DEVELOPMENT MANAGER
> 866-344-3875 x145
> CODY@LIGHTINGNEWYORK.COM
> M - F 9 - 5:30
>
>
> On Wed, Jan 23, 2019 at 5:51 AM Christopher Hagler
> <haglerchristopher@gmail.com> wrote:
> >
> > Unsubscribe git
> >
> > Sent from my iPhone
>
Hey Eric

On Tue, 5 Mar 2019 09:57:40 -0500 Eric Sunshine <sunshine@sunshineco.com> wrote:

> This patch, due to its length and repetitive nature, falls under the
> category of being tedious to review, which makes it all the more
> likely that a reviewer will overlook a problem.

Yes, I clearly understand that this patch has become too big to review.
It will require time to carefully review, and reviewers are doing their
best to maintain the utmost quality of code.

> And, it's not always obvious at a glance that a change is correct. For
> instance, taking a look at the final patch band:
>
> -	! test -d submod &&
> -	! test -d submod/subsubmod/.git &&
> +	test_path_is_missing submod &&
> +	test_path_is_missing submod/subsubmod/.git &&

Duy actually confirms that this transformation is correct in this[1]
email. (I know that it was given as an example, but I'll leave the link
anyway.)

Thanks
Rohit

[1]: https://public-inbox.org/git/CACsJy8BYeLvB7BSM_Jt4vwfGsEBuhaCZfzGPOHe=B=7cvnRwrg@mail.gmail.com/
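[Editorial note: the quoted conversion above is not a pure 1:1 rewrite, and the difference can be sketched in shell. The `path_is_missing` helper below is a minimal, hypothetical stand-in for git's `test_path_is_missing` (from the test suite's test-lib-functions.sh): `! test -d` passes when a plain file sits at the path, while the missing-path check passes only when nothing exists there.]

```shell
# Minimal stand-in (assumption) for git's test_path_is_missing helper:
# succeed only when nothing at all exists at the given path.
path_is_missing() {
	! test -e "$1"
}

dir=$(mktemp -d)
touch "$dir/submod"   # a regular file, not a directory

# "! test -d" is lenient: it passes even though something exists here.
if ! test -d "$dir/submod"; then
	echo "! test -d: passes on a plain file"
fi

# The missing-path check is stricter: it fails on the existing file.
if ! path_is_missing "$dir/submod"; then
	echo "path_is_missing: fails on a plain file"
fi
```

So replacing `! test -d` with `test_path_is_missing` tightens the assertion, which is why a reviewer still has to confirm each site really expects the path to be absent.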
No, not like that. See here:
https://git.wiki.kernel.org/index.php/GitCommunity

The email address you send the "subscribe" message to is NOT the mailing
list itself. What you just did is send the words "subscribe git" to
everyone already on the mailing list :)

-----Original Message-----
From: git-owner@vger.kernel.org [mailto:git-owner@vger.kernel.org] On Behalf Of William Baker
Sent: Tuesday, August 20, 2019 1:23 PM
To: git@vger.kernel.org
Subject: subscribe git
Hi Martin

On Fri, 15 Nov 2019 at 17:17, Martin Nicolay <martin@wsmn.osm-gmbh.de> wrote:

> While working with complex scripts invoking git multiple times my
> editor detects the changes and calls "git status". This leads to
> aborts in "git-stash". With this patch and an appropriate value
> core.fileslocktimeout this problem goes away.

Are you able to patch your editor to call

    git --no-optional-locks status

instead? See the bottom of git-status(1) ("BACKGROUND REFRESH") for more
on this.

> +long get_files_lock_timeout_ms(void)
> +{
> +	static int configured = 0;
> +
> +	/* The default timeout is 100 ms: */
> +	static int timeout_ms = 100;
> +
> +	if (!configured) {
> +		git_config_get_int("core.fileslocktimeout", &timeout_ms);
> +		configured = 1;
> +	}
> +
> +	return timeout_ms;
> +}
> +

> @@ -172,7 +174,7 @@ static inline int hold_lock_file_for_update(
> 	struct lock_file *lk, const char *path,
> 	int flags)
> {
> -	return hold_lock_file_for_update_timeout(lk, path, flags, 0);
> +	return hold_lock_file_for_update_timeout(lk, path, flags, get_files_lock_timeout_ms() );
> }

This looks like it changes the default from 0 ("try exactly once") to
100ms. Maybe we should stick with 0 for those who don't jump onto this
new config knob?

Martin
[Trying with another e-mail address for Martin Nicolay. Maybe the one
from the in-body From header works better. wsmn.osm-gmbh.de couldn't be
found.]
On Fri, 15 Nov 2019 at 17:29, Martin Ågren <martin.agren@gmail.com> wrote:
>
> Hi Martin
>
> On Fri, 15 Nov 2019 at 17:17, Martin Nicolay <martin@wsmn.osm-gmbh.de> wrote:
>
> > While working with complex scripts invoking git multiple times my
> > editor detects the changes and calls "git status". This leads to
> > aborts in "git-stash". With this patch and an appropriate value
> > core.fileslocktimeout this problem goes away.
>
> Are you able to patch your editor to call
> git --no-optional-locks status
> instead? See the bottom of git-status(1) ("BACKGROUND REFRESH") for more
> on this.
>
> > +long get_files_lock_timeout_ms(void)
> > +{
> > + static int configured = 0;
> > +
> > + /* The default timeout is 100 ms: */
> > + static int timeout_ms = 100;
> > +
> > + if (!configured) {
> > + git_config_get_int("core.fileslocktimeout", &timeout_ms);
> > + configured = 1;
> > + }
> > +
> > + return timeout_ms;
> > +}
> > +
>
> > @@ -172,7 +174,7 @@ static inline int hold_lock_file_for_update(
> > struct lock_file *lk, const char *path,
> > int flags)
> > {
> > - return hold_lock_file_for_update_timeout(lk, path, flags, 0);
> > + return hold_lock_file_for_update_timeout(lk, path, flags, get_files_lock_timeout_ms() );
> > }
>
> This looks like it changes the default from 0 ("try exactly once") to
> 100ms. Maybe we should stick with 0 for those who don't jump onto this
> new config knob?
>
> Martin
On Sat, Aug 21, 2021 at 08:10:59PM +0530, TECOB270_Ganesh Pawar wrote:
> To reproduce:
> 1. Set the contents of .git/hooks/prepare-commit-msg to this:
> ```
> #!/bin/sh
>
> COMMIT_MSG_FILE=$1
>
> echo "Initial Commit." > "$COMMIT_MSG_FILE"
> echo "" >> "$COMMIT_MSG_FILE"
> echo "# Some random comment." >> "$COMMIT_MSG_FILE"
> ```
> Notice the comment being added to the file.
>
> 2. Amend a commit with the --no-edit flag:
> `git commit --amend --no-edit`
>
> The comment ("Some random comment" in this case) is included in the
> final commit message, but it shouldn't be, right?
>
> If I don't pass the flag and just save the commit without changing
> anything, the comment isn't included. Shouldn't this be the case with
> the --no-edit flag too?
No, the behavior you're seeing is expected. Try this:
git commit --cleanup=strip --amend --no-edit
The default for "--cleanup" is "strip" when the editor is run, and
"whitespace" otherwise. I.e., if Git did not insert comments, then it
doesn't remove them either.
If you have a hook which is inserting comments which may need to be
stripped, you may want to set the commit.cleanup config to tell Git to
always remove them (but beware that invocations like "git commit -F"
will also start stripping comments).
See "--cleanup" in "git help commit" for the possible values.
-Peff
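[Editorial note: Peff's explanation can be checked end-to-end in a throwaway repository. The sketch below is illustrative (repo layout, file names, and messages are made up for the demo); it assumes git is on PATH.]

```shell
# Demo: a "#" comment injected by a prepare-commit-msg hook survives the
# default "whitespace" cleanup (no editor ran), but is removed by
# --cleanup=strip, as Peff describes.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.name "Example"
git config user.email "example@example.com"

# Hook that always injects a comment line into the commit message:
cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
COMMIT_MSG_FILE=$1
echo "Initial Commit." > "$COMMIT_MSG_FILE"
echo "" >> "$COMMIT_MSG_FILE"
echo "# Some random comment." >> "$COMMIT_MSG_FILE"
EOF
chmod +x .git/hooks/prepare-commit-msg

echo a > file
git add file
git commit -q -m "Initial Commit."

# No editor ran, so cleanup defaulted to "whitespace": the "#" line stays.
before=$(git log -1 --format=%B | grep -c '^#' || true)

# With an explicit strip, the comment is removed even with --no-edit:
git commit -q --cleanup=strip --amend --no-edit
after=$(git log -1 --format=%B | grep -c '^#' || true)

echo "comment lines with default cleanup: $before"
echo "comment lines with --cleanup=strip: $after"
```

Setting `git config commit.cleanup strip` makes the second behavior the default, with the caveat Peff notes about `git commit -F` also starting to strip comments.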
Please ignore this patch. I think I made some mistake when copy-pasting
the In-reply-to code. Sorry for the trouble. I have sent this same patch
on the appropriate thread.

Thanks,
Jaydeep.
Hello,

I'm James, an Entrepreneur, Venture Capitalist & Private Lender. I
represent a group of Ultra High Net Worth Donors worldwide. Kindly let
me know if you can be trusted to distribute charitable items which
include Cash, Food Items and Clothing in your region.

Thank you
James.
What is the question?

On Wed, 9 Aug 2023 at 03:31, <5598162950@mms.cricketwireless.net> wrote: