* [PATCH 00/18] make test "linting" more comprehensive
@ 2022-09-01 0:29 Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 01/18] t: add skeleton chainlint.pl Eric Sunshine via GitGitGadget
` (18 more replies)
0 siblings, 19 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine
A while back, Peff successfully nerd-sniped[1] me into tackling a
long-brewing idea I had about (possibly) improving "chainlint" performance
by linting all tests in all scripts with a single command invocation instead
of running "sed" 26800+ times (once for each test). The new linter
introduced by this series can check all test definitions in the entire
project in a single invocation, and each test definition is checked only
once no matter how many times the test is actually run (unlike chainlint.sed
which will check a test repeatedly if, for instance, the test is run in a
loop). Moreover, all test definitions in the project are "linted" even if
some of those tests would not run on a particular platform or under a
certain configuration (unlike chainlint.sed which only lints tests which
actually run).
The new linter is a good deal smarter than chainlint.sed and understands not
just shell syntax but also some semantics of test construction, unlike
chainlint.sed which is merely heuristics-based. For instance, the new linter
recognizes cases in which a broken &&-chain is legitimate, such as when "$?" is
handled explicitly or when a failure is signaled directly with "false" (in
which case the &&-chain leading up to the "false" is immaterial), among other
cases. Unlike chainlint.sed, it also recognizes that a semicolon after the last
command in a compound statement is harmless, and thus does not interpret the
semicolon as breaking the &&-chain.
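To illustrate, here is a hypothetical snippet (not taken from the Git test
suite) showing both tolerated patterns:

```shell
# Hypothetical illustration: a broken &&-chain is legitimate when "$?"
# is captured and handled explicitly.
may_fail () {
	return 1
}

may_fail
status=$?	# &&-chain intentionally "broken"; $? is handled explicitly
if test $status -ne 0
then
	echo "handled failure: $status"
fi

# A semicolon after the last command in a compound statement is
# harmless; the outer &&-chain remains intact.
{
	echo one;
} &&
echo two
```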
The new linter also provides considerably better coverage for broken
&&-chains. The "magic exit code 117" &&-chain checker built into test-lib.sh
only works for top-level command invocations; it doesn't work within "{...}"
groups, "(...)" subshells, "$(...)" substitutions, or within bodies of
compound statements, such as "if", "for", "while", "case", etc.
chainlint.sed partly fills the gap by catching broken &&-chains in "(...)"
subshells one level deep, but bugs can still lurk behind broken &&-chains in
the other cases. The new linter catches broken &&-chains within all those
constructs to any depth.
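A minimal hypothetical sketch of such a lurking bug:

```shell
# Hypothetical sketch: the failure of "false" is silently dropped
# because the &&-chain is broken inside the subshell, so the
# exit-code-117 check at the top level never sees it.
check () {
	(
		false	# failure lost: no "&&" follows this command
		echo "reached despite failure"
	) &&
	echo "subshell exited with success"
}
check
```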
Another important improvement is that the new linter understands that shell
loops do not terminate automatically when a command in the loop body fails,
and that the condition needs to be handled explicitly by the test author by
using "|| return 1" (or "|| exit 1" in a subshell) to signal failure.
Consequently, the new linter will complain when a loop is lacking "|| return
1" (or "|| exit 1").
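A hypothetical sketch of the problem and its fix:

```shell
# Hypothetical sketch: a loop's exit status is that of its final
# iteration, so an earlier failure in the body is silently swallowed
# unless the author propagates it with "|| return 1".
buggy () {
	for i in 1 2 3
	do
		test "$i" != 2	# fails when i=2, but the loop carries on
	done
}

fixed () {
	for i in 1 2 3
	do
		test "$i" != 2 || return 1	# propagate the failure
	done
}

buggy && echo "buggy loop reports success"
fixed || echo "fixed loop reports failure"
```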
Finally, unlike chainlint.sed which (not surprisingly) is implemented in
"sed", the new linter is written in Perl, thus should be more accessible to
a wider audience, and is structured as a traditional top-down parser which
makes it much easier to reason about.
The new linter could eventually subsume other linting tasks such as
check-nonportable-shell.pl (which itself takes a couple seconds to run on my
machine), though it probably should be renamed to something other than
"chainlint" since it is no longer targeted only at spotting &&-chain breaks,
but that can wait for another day.
Ævar offered some sensible comments[2,3] about optimizing the Makefile rules
related to chainlint, but those optimizations are not tackled here for a few
reasons: (1) this series is already quite long, (2) I'd like to keep the
series focused on its primary goal of installing a new and improved linter,
(3) these patches do not make the Makefile situation any worse[4], and (4)
those optimizations can easily be done atop this series[5].
Junio: This series is nominally atop es/t4301-sed-portability-fix which is
in "next", and es/fix-chained-tests, es/test-chain-lint, and es/chainlint,
all of which are already in "master".
Dscho: This series conflicts with some patches carried only by the Git for
Windows project; the resolutions are obvious and simple. The new linter also
identifies some problems in tests carried only by the Git for Windows
project.
[1] https://lore.kernel.org/git/YJzGcZpZ+E9R0gYd@coredump.intra.peff.net/
[2] https://lore.kernel.org/git/RFC-patch-1.1-bb3f1577829-20211213T095456Z-avarab@gmail.com/
[3] https://lore.kernel.org/git/211213.86tufc8oop.gmgdl@evledraar.gmail.com/
[4] https://lore.kernel.org/git/CAPig+cSFtpt6ExbVDbcx3tZodrKFuM-r2GMW4TQ2tJmLvHBFtQ@mail.gmail.com/
[5] https://lore.kernel.org/git/211214.86tufbbbu3.gmgdl@evledraar.gmail.com/
Eric Sunshine (18):
t: add skeleton chainlint.pl
chainlint.pl: add POSIX shell lexical analyzer
chainlint.pl: add POSIX shell parser
chainlint.pl: add parser to validate tests
chainlint.pl: add parser to identify test definitions
chainlint.pl: validate test scripts in parallel
chainlint.pl: don't require `return|exit|continue` to end with `&&`
t/Makefile: apply chainlint.pl to existing self-tests
chainlint.pl: don't require `&` background command to end with `&&`
chainlint.pl: don't flag broken &&-chain if `$?` handled explicitly
chainlint.pl: don't flag broken &&-chain if failure indicated
explicitly
chainlint.pl: complain about loops lacking explicit failure handling
chainlint.pl: allow `|| echo` to signal failure upstream of a pipe
t/chainlint: add more chainlint.pl self-tests
test-lib: retire "lint harder" optimization hack
test-lib: replace chainlint.sed with chainlint.pl
t/Makefile: teach `make test` and `make prove` to run chainlint.pl
t: retire unused chainlint.sed
contrib/buildsystems/CMakeLists.txt | 2 +-
t/Makefile | 49 +-
t/README | 5 -
t/chainlint.pl | 730 ++++++++++++++++++
t/chainlint.sed | 399 ----------
t/chainlint/blank-line-before-esac.expect | 18 +
t/chainlint/blank-line-before-esac.test | 19 +
t/chainlint/block.expect | 15 +-
t/chainlint/block.test | 15 +-
t/chainlint/chain-break-background.expect | 9 +
t/chainlint/chain-break-background.test | 10 +
t/chainlint/chain-break-continue.expect | 12 +
t/chainlint/chain-break-continue.test | 13 +
t/chainlint/chain-break-false.expect | 9 +
t/chainlint/chain-break-false.test | 10 +
t/chainlint/chain-break-return-exit.expect | 19 +
t/chainlint/chain-break-return-exit.test | 23 +
t/chainlint/chain-break-status.expect | 9 +
t/chainlint/chain-break-status.test | 11 +
t/chainlint/chained-block.expect | 9 +
t/chainlint/chained-block.test | 11 +
t/chainlint/chained-subshell.expect | 10 +
t/chainlint/chained-subshell.test | 13 +
.../command-substitution-subsubshell.expect | 2 +
.../command-substitution-subsubshell.test | 3 +
t/chainlint/complex-if-in-cuddled-loop.expect | 2 +-
t/chainlint/double-here-doc.expect | 2 +
t/chainlint/double-here-doc.test | 12 +
t/chainlint/dqstring-line-splice.expect | 3 +
t/chainlint/dqstring-line-splice.test | 7 +
t/chainlint/dqstring-no-interpolate.expect | 11 +
t/chainlint/dqstring-no-interpolate.test | 15 +
t/chainlint/empty-here-doc.expect | 3 +
t/chainlint/empty-here-doc.test | 5 +
t/chainlint/exclamation.expect | 4 +
t/chainlint/exclamation.test | 8 +
t/chainlint/for-loop-abbreviated.expect | 5 +
t/chainlint/for-loop-abbreviated.test | 6 +
t/chainlint/for-loop.expect | 4 +-
t/chainlint/function.expect | 11 +
t/chainlint/function.test | 13 +
t/chainlint/here-doc-indent-operator.expect | 5 +
t/chainlint/here-doc-indent-operator.test | 13 +
t/chainlint/here-doc-multi-line-string.expect | 3 +-
t/chainlint/if-condition-split.expect | 7 +
t/chainlint/if-condition-split.test | 8 +
t/chainlint/if-in-loop.expect | 2 +-
t/chainlint/if-in-loop.test | 2 +-
t/chainlint/loop-detect-failure.expect | 15 +
t/chainlint/loop-detect-failure.test | 17 +
t/chainlint/loop-detect-status.expect | 18 +
t/chainlint/loop-detect-status.test | 19 +
t/chainlint/loop-in-if.expect | 2 +-
t/chainlint/loop-upstream-pipe.expect | 10 +
t/chainlint/loop-upstream-pipe.test | 11 +
t/chainlint/multi-line-string.expect | 11 +-
t/chainlint/nested-loop-detect-failure.expect | 31 +
t/chainlint/nested-loop-detect-failure.test | 35 +
t/chainlint/nested-subshell.expect | 2 +-
t/chainlint/one-liner-for-loop.expect | 9 +
t/chainlint/one-liner-for-loop.test | 10 +
t/chainlint/return-loop.expect | 5 +
t/chainlint/return-loop.test | 6 +
t/chainlint/semicolon.expect | 2 +-
t/chainlint/sqstring-in-sqstring.expect | 4 +
t/chainlint/sqstring-in-sqstring.test | 5 +
t/chainlint/t7900-subtree.expect | 13 +-
t/chainlint/token-pasting.expect | 27 +
t/chainlint/token-pasting.test | 32 +
t/chainlint/while-loop.expect | 4 +-
t/t0027-auto-crlf.sh | 7 +-
t/t3070-wildmatch.sh | 5 -
t/test-lib.sh | 12 +-
73 files changed, 1439 insertions(+), 449 deletions(-)
create mode 100755 t/chainlint.pl
delete mode 100644 t/chainlint.sed
create mode 100644 t/chainlint/blank-line-before-esac.expect
create mode 100644 t/chainlint/blank-line-before-esac.test
create mode 100644 t/chainlint/chain-break-background.expect
create mode 100644 t/chainlint/chain-break-background.test
create mode 100644 t/chainlint/chain-break-continue.expect
create mode 100644 t/chainlint/chain-break-continue.test
create mode 100644 t/chainlint/chain-break-false.expect
create mode 100644 t/chainlint/chain-break-false.test
create mode 100644 t/chainlint/chain-break-return-exit.expect
create mode 100644 t/chainlint/chain-break-return-exit.test
create mode 100644 t/chainlint/chain-break-status.expect
create mode 100644 t/chainlint/chain-break-status.test
create mode 100644 t/chainlint/chained-block.expect
create mode 100644 t/chainlint/chained-block.test
create mode 100644 t/chainlint/chained-subshell.expect
create mode 100644 t/chainlint/chained-subshell.test
create mode 100644 t/chainlint/command-substitution-subsubshell.expect
create mode 100644 t/chainlint/command-substitution-subsubshell.test
create mode 100644 t/chainlint/double-here-doc.expect
create mode 100644 t/chainlint/double-here-doc.test
create mode 100644 t/chainlint/dqstring-line-splice.expect
create mode 100644 t/chainlint/dqstring-line-splice.test
create mode 100644 t/chainlint/dqstring-no-interpolate.expect
create mode 100644 t/chainlint/dqstring-no-interpolate.test
create mode 100644 t/chainlint/empty-here-doc.expect
create mode 100644 t/chainlint/empty-here-doc.test
create mode 100644 t/chainlint/exclamation.expect
create mode 100644 t/chainlint/exclamation.test
create mode 100644 t/chainlint/for-loop-abbreviated.expect
create mode 100644 t/chainlint/for-loop-abbreviated.test
create mode 100644 t/chainlint/function.expect
create mode 100644 t/chainlint/function.test
create mode 100644 t/chainlint/here-doc-indent-operator.expect
create mode 100644 t/chainlint/here-doc-indent-operator.test
create mode 100644 t/chainlint/if-condition-split.expect
create mode 100644 t/chainlint/if-condition-split.test
create mode 100644 t/chainlint/loop-detect-failure.expect
create mode 100644 t/chainlint/loop-detect-failure.test
create mode 100644 t/chainlint/loop-detect-status.expect
create mode 100644 t/chainlint/loop-detect-status.test
create mode 100644 t/chainlint/loop-upstream-pipe.expect
create mode 100644 t/chainlint/loop-upstream-pipe.test
create mode 100644 t/chainlint/nested-loop-detect-failure.expect
create mode 100644 t/chainlint/nested-loop-detect-failure.test
create mode 100644 t/chainlint/one-liner-for-loop.expect
create mode 100644 t/chainlint/one-liner-for-loop.test
create mode 100644 t/chainlint/return-loop.expect
create mode 100644 t/chainlint/return-loop.test
create mode 100644 t/chainlint/sqstring-in-sqstring.expect
create mode 100644 t/chainlint/sqstring-in-sqstring.test
create mode 100644 t/chainlint/token-pasting.expect
create mode 100644 t/chainlint/token-pasting.test
base-commit: d42b38dfb5edf1a7fddd9542d722f91038407819
Published-As: https://github.com/gitgitgadget/git/releases/tag/pr-git-1322%2Fsunshineco%2Fchainlintperl-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git pr-git-1322/sunshineco/chainlintperl-v1
Pull-Request: https://github.com/git/git/pull/1322
--
gitgitgadget
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 01/18] t: add skeleton chainlint.pl
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 12:27 ` Ævar Arnfjörð Bjarmason
2022-09-01 0:29 ` [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer Eric Sunshine via GitGitGadget
` (17 subsequent siblings)
18 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Although chainlint.sed usefully identifies broken &&-chains in tests, it
has several shortcomings which include:
* only detects &&-chain breakage in subshells (one-level deep)
* does not check for broken top-level &&-chains; that task is left to
the "magic exit code 117" checker built into test-lib.sh, however,
that detection does not extend to `{...}` blocks, `$(...)`
expressions, or compound statements such as `if...fi`,
`while...done`, `case...esac`
* uses heuristics, which makes it (potentially) fallible and difficult
to tweak to handle additional real-world cases
* written in `sed` and employs advanced `sed` operators which are
probably not well-known to many programmers, thus the pool of people
who can maintain it is likely small
* manually simulates recursion into subshells which makes it much more
difficult to reason about than, say, a traditional top-down parser
* checks each test as the test is run, which can get expensive for
tests which are run repeatedly by functions or loops since their
bodies will be checked over and over (tens or hundreds of times)
unnecessarily
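A hypothetical sketch of that last point, with test_expect_success stubbed
out merely to count how often the same body would be handed to a per-run
linter:

```shell
# Hypothetical sketch: under per-run checking, a test body defined once
# but invoked from a loop is linted anew on every iteration. The stub
# below stands in for test-lib.sh and merely counts invocations.
lint_count=0
test_expect_success () {
	lint_count=$((lint_count + 1))	# real test-lib.sh would lint here
}

for i in 1 2 3 4 5
do
	test_expect_success "iteration $i" 'echo body && true'
done
echo "body linted $lint_count times"
```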
To address these shortcomings, begin implementing a more functional and
precise test linter which understands shell syntax and semantics rather
than employing heuristics, thus is able to recognize structural problems
with tests beyond broken &&-chains.
The new linter is written in Perl, thus should be more accessible to a
wider audience, and is structured as a traditional top-down parser which
makes it much easier to reason about, and allows it to inspect compound
statements within test bodies to any depth.
Furthermore, it can check all test definitions in the entire project in
a single invocation rather than having to be invoked once per test, and
each test definition is checked only once no matter how many times the
test is actually run.
At this stage, the new linter is just a skeleton containing boilerplate
which handles command-line options, collects and reports statistics, and
feeds its arguments -- paths of test scripts -- to a (presently)
do-nothing script parser for validation. Subsequent changes will flesh
out the functionality.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 115 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 115 insertions(+)
create mode 100755 t/chainlint.pl
diff --git a/t/chainlint.pl b/t/chainlint.pl
new file mode 100755
index 00000000000..e8ab95c7858
--- /dev/null
+++ b/t/chainlint.pl
@@ -0,0 +1,115 @@
+#!/usr/bin/env perl
+#
+# Copyright (c) 2021-2022 Eric Sunshine <sunshine@sunshineco.com>
+#
+# This tool scans shell scripts for test definitions and checks those tests for
+# problems, such as broken &&-chains, which might hide bugs in the tests
+# themselves or in behaviors being exercised by the tests.
+#
+# Input arguments are pathnames of shell scripts containing test definitions,
+# or globs referencing a collection of scripts. For each problem discovered,
+# the pathname of the script containing the test is printed along with the test
+# name and the test body with a `?!FOO?!` annotation at the location of each
+# detected problem, where "FOO" is a tag such as "AMP" which indicates a broken
+# &&-chain. Returns zero if no problems are discovered, otherwise non-zero.
+
+use warnings;
+use strict;
+use File::Glob;
+use Getopt::Long;
+
+my $show_stats;
+my $emit_all;
+
+package ScriptParser;
+
+sub new {
+ my $class = shift @_;
+ my $self = bless {} => $class;
+ $self->{output} = [];
+ $self->{ntests} = 0;
+ return $self;
+}
+
+sub parse_cmd {
+ return undef;
+}
+
+# main contains high-level functionality for processing command-line switches,
+# feeding input test scripts to ScriptParser, and reporting results.
+package main;
+
+my $getnow = sub { return time(); };
+my $interval = sub { return time() - shift; };
+if (eval {require Time::HiRes; Time::HiRes->import(); 1;}) {
+ $getnow = sub { return [Time::HiRes::gettimeofday()]; };
+ $interval = sub { return Time::HiRes::tv_interval(shift); };
+}
+
+sub show_stats {
+ my ($start_time, $stats) = @_;
+ my $walltime = $interval->($start_time);
+ my ($usertime) = times();
+ my ($total_workers, $total_scripts, $total_tests, $total_errs) = (0, 0, 0, 0);
+ for (@$stats) {
+ my ($worker, $nscripts, $ntests, $nerrs) = @$_;
+ print(STDERR "worker $worker: $nscripts scripts, $ntests tests, $nerrs errors\n");
+ $total_workers++;
+ $total_scripts += $nscripts;
+ $total_tests += $ntests;
+ $total_errs += $nerrs;
+ }
+ printf(STDERR "total: %d workers, %d scripts, %d tests, %d errors, %.2fs/%.2fs (wall/user)\n", $total_workers, $total_scripts, $total_tests, $total_errs, $walltime, $usertime);
+}
+
+sub check_script {
+ my ($id, $next_script, $emit) = @_;
+ my ($nscripts, $ntests, $nerrs) = (0, 0, 0);
+ while (my $path = $next_script->()) {
+ $nscripts++;
+ my $fh;
+ unless (open($fh, "<", $path)) {
+ $emit->("?!ERR?! $path: $!\n");
+ next;
+ }
+ my $s = do { local $/; <$fh> };
+ close($fh);
+ my $parser = ScriptParser->new(\$s);
+ 1 while $parser->parse_cmd();
+ if (@{$parser->{output}}) {
+ my $s = join('', @{$parser->{output}});
+ $emit->("# chainlint: $path\n" . $s);
+ $nerrs += () = $s =~ /\?![^?]+\?!/g;
+ }
+ $ntests += $parser->{ntests};
+ }
+ return [$id, $nscripts, $ntests, $nerrs];
+}
+
+sub exit_code {
+ my $stats = shift @_;
+ for (@$stats) {
+ my ($worker, $nscripts, $ntests, $nerrs) = @$_;
+ return 1 if $nerrs;
+ }
+ return 0;
+}
+
+Getopt::Long::Configure(qw{bundling});
+GetOptions(
+ "emit-all!" => \$emit_all,
+ "stats|show-stats!" => \$show_stats) or die("option error\n");
+
+my $start_time = $getnow->();
+my @stats;
+
+my @scripts;
+push(@scripts, File::Glob::bsd_glob($_)) for (@ARGV);
+unless (@scripts) {
+ show_stats($start_time, \@stats) if $show_stats;
+ exit;
+}
+
+push(@stats, check_script(1, sub { shift(@scripts); }, sub { print(@_); }));
+show_stats($start_time, \@stats) if $show_stats;
+exit(exit_code(\@stats));
--
gitgitgadget
* Re: [PATCH 01/18] t: add skeleton chainlint.pl
2022-09-01 0:29 ` [PATCH 01/18] t: add skeleton chainlint.pl Eric Sunshine via GitGitGadget
@ 2022-09-01 12:27 ` Ævar Arnfjörð Bjarmason
2022-09-02 18:53 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2022-09-01 12:27 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget
Cc: git, Jeff King, Elijah Newren, Fabian Stelzer,
Johannes Schindelin, Eric Sunshine
On Thu, Sep 01 2022, Eric Sunshine via GitGitGadget wrote:
> From: Eric Sunshine <sunshine@sunshineco.com>
> [...]
> diff --git a/t/chainlint.pl b/t/chainlint.pl
I really like this overall direction...
> +use warnings;
> +use strict;
I think that in general we're way overdue for at least a:
use v5.10.1;
Or even something more aggressive, I think we can definitely depend on a
newer version for this bit of dev tooling.
That makes a lot of things in this series more pleasing to look
at. E.g. you could use named $+{} variables for regexes.
> +package ScriptParser;
I really wish this could be changed to just put this in
t/chainlint/ScriptParser.pm early on, we could set @INC appropriately
and "use" these, which...
> +my $getnow = sub { return time(); };
> +my $interval = sub { return time() - shift; };
Would eliminate any scoping concerns about this sort of thing.
> +if (eval {require Time::HiRes; Time::HiRes->import(); 1;}) {
> + $getnow = sub { return [Time::HiRes::gettimeofday()]; };
> + $interval = sub { return Time::HiRes::tv_interval(shift); };
> +}
Is this "require" even needed, Time::HiRes is there since 5.7.* says
"corelist -l Time::HiRes".
> [...]
> +sub check_script {
> + my ($id, $next_script, $emit) = @_;
> + my ($nscripts, $ntests, $nerrs) = (0, 0, 0);
> + while (my $path = $next_script->()) {
> + $nscripts++;
> + my $fh;
> + unless (open($fh, "<", $path)) {
> + $emit->("?!ERR?! $path: $!\n");
If we can depend on v5.10.1 this can surely become:
use autodie qw(open close);
No?
> + $nerrs += () = $s =~ /\?![^?]+\?!/g;
y'know if we add some whitespace there we can conform to
https://metacpan.org/dist/perlsecret/view/lib/perlsecret.pod >:) (not
serious...)
* Re: [PATCH 01/18] t: add skeleton chainlint.pl
2022-09-01 12:27 ` Ævar Arnfjörð Bjarmason
@ 2022-09-02 18:53 ` Eric Sunshine
0 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-09-02 18:53 UTC (permalink / raw)
To: Ævar Arnfjörð Bjarmason
Cc: Eric Sunshine via GitGitGadget, Git List, Jeff King,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Thu, Sep 1, 2022 at 8:32 AM Ævar Arnfjörð Bjarmason <avarab@gmail.com> wrote:
> On Thu, Sep 01 2022, Eric Sunshine via GitGitGadget wrote:
> > From: Eric Sunshine <sunshine@sunshineco.com>
> > [...]
> > diff --git a/t/chainlint.pl b/t/chainlint.pl
>
> I really like this overall direction...
Thanks for running an eye over the patches.
> > +use warnings;
> > +use strict;
>
> I think that in general we're way overdue for at least a :
>
> use v5.10.1;
>
> Or even something more aggressive, I think we can definitely depend on a
> newer version for this bit of dev tooling.
Being stuck with an 11+ year-old primary development machine which
can't be upgraded to a newer OS due to vendor end-of-life declaration,
and with old tools installed, I have little or no interest in bumping
the minimum version, especially since older Perl versions are
perfectly adequate for this task. Undertaking such a version bump
would also be outside the scope of this patch series (and I simply
don't have the free time or desire to pursue it).
> That makes a lot of things in this series more pleasing to look
> at. E.g. you could use named $+{} variables for regexes.
Perhaps, but (1) that would not be very relevant for this script which
typically only extracts "$1", and (2) I've rarely found cases when
named variables help significantly with clarity, but then most of my
real-life regexes generally only extract one or two bits of
information, periodically three, and those bits ("$1", "$2", etc.) are
immediately assigned to variables with meaningful names.
> > +package ScriptParser;
>
> I really wish this could be changed to just put this in
> t/chainlint/ScriptParser.pm early on, we could set @INC appropriately
> and "use" these, which...
I intentionally avoided splitting this into multiple modules because I
wanted it to be easy to drop into or adapt to other projects (e.g.
sharness[1]). Of course, it is effectively a shell parser written in
Perl, and it's conceivable that the parser part of it could have uses
outside of Git, so modularizing it might be a good idea, but that's a
task for some future date if such a need arises.
[1]: https://github.com/chriscool/sharness
> > +my $getnow = sub { return time(); };
> > +my $interval = sub { return time() - shift; };
>
> Would eliminate any scoping concerns about this sort of thing.
As above, this is easily addressed if/when someone ever wants to reuse
the code outside of Git for some other purpose. I doubt it's worth
worrying about now.
> > +if (eval {require Time::HiRes; Time::HiRes->import(); 1;}) {
> > + $getnow = sub { return [Time::HiRes::gettimeofday()]; };
> > + $interval = sub { return Time::HiRes::tv_interval(shift); };
> > +}
>
> Is this "require" even needed, Time::HiRes is there since 5.7.* says
> "corelist -l Time::HiRes".
Unfortunately, this is needed. The Windows CI instances the Git
project uses don't have Time::HiRes installed (and it's outside the
scope of this series to address shortcomings in the CI
infrastructure).
> > +sub check_script {
> > + my ($id, $next_script, $emit) = @_;
> > + my ($nscripts, $ntests, $nerrs) = (0, 0, 0);
> > + while (my $path = $next_script->()) {
> > + $nscripts++;
> > + my $fh;
> > + unless (open($fh, "<", $path)) {
> > + $emit->("?!ERR?! $path: $!\n");
>
> If we can depend on v5.10.1 this can surely become:
>
> use autodie qw(open close);
>
> No?
No. It's clipped in your response, but the full snippet looks like this:
unless (open($fh, "<", $path)) {
$emit->("?!ERR?! $path: $!\n");
next;
}
The important point is that I _don't_ want the program to "die" if it
can't open an input file; instead, it should continue processing all
the other input files, and the open-failure should be reported as just
another error/problem it encountered along the way.
* [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 01/18] t: add skeleton chainlint.pl Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 12:32 ` Ævar Arnfjörð Bjarmason
2022-09-01 0:29 ` [PATCH 03/18] chainlint.pl: add POSIX shell parser Eric Sunshine via GitGitGadget
` (16 subsequent siblings)
18 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Begin fleshing out chainlint.pl by adding a lexical analyzer for the
POSIX shell command language. The sole entry point Lexer::scan_token()
returns the next token from the input. It will be called by the upcoming
shell language parser.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 177 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 177 insertions(+)
diff --git a/t/chainlint.pl b/t/chainlint.pl
index e8ab95c7858..81ffbf28bf3 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -21,6 +21,183 @@ use Getopt::Long;
my $show_stats;
my $emit_all;
+# Lexer tokenizes POSIX shell scripts. It is roughly modeled after section 2.3
+# "Token Recognition" of POSIX chapter 2 "Shell Command Language". Although
+# similar to lexical analyzers for other languages, this one differs in a few
+# substantial ways due to quirks of the shell command language.
+#
+# For instance, in many languages, newline is just whitespace like space or
+# TAB, but in shell a newline is a command separator, thus a distinct lexical
+# token. A newline is significant and returned as a distinct token even at the
+# end of a shell comment.
+#
+# In other languages, `1+2` would typically be scanned as three tokens
+# (`1`, `+`, and `2`), but in shell it is a single token. However, the similar
+# `1 + 2`, which embeds whitespace, is scanned as three tokens in shell.
+# In shell, several characters with special meaning lose that meaning when not
+# surrounded by whitespace. For instance, the negation operator `!` is special
+# when standing alone surrounded by whitespace; whereas in `foo!uucp` it is
+# just a plain character in the longer token "foo!uucp". In many other
+# languages, `"string"/foo:'string'` might be scanned as five tokens ("string",
+# `/`, `foo`, `:`, and 'string'), but in shell, it is just a single token.
+#
+# The lexical analyzer for the shell command language is also somewhat unusual
+# in that it recursively invokes the parser to handle the body of `$(...)`
+# expressions which can contain arbitrary shell code. Such expressions may be
+# encountered both inside and outside of double-quoted strings.
+#
+# The lexical analyzer is responsible for consuming shell here-doc bodies which
+# extend from the line following a `<<TAG` operator until a line consisting
+# solely of `TAG`. Here-doc consumption begins when a newline is encountered.
+# It is legal for multiple here-doc `<<TAG` operators to be present on a single
+# line, in which case their bodies must be present one following the next, and
+# are consumed in the (left-to-right) order the `<<TAG` operators appear on the
+# line. A special complication is that the bodies of all here-docs must be
+# consumed when the newline is encountered even if the parse context depth has
+# changed. For instance, in `cat <<A && x=$(cat <<B &&\n`, bodies of here-docs
+# "A" and "B" must be consumed even though "A" was introduced outside the
+# recursive parse context in which "B" was introduced and in which the newline
+# is encountered.
+package Lexer;
+
+sub new {
+ my ($class, $parser, $s) = @_;
+ bless {
+ parser => $parser,
+ buff => $s,
+ heretags => []
+ } => $class;
+}
+
+sub scan_heredoc_tag {
+ my $self = shift @_;
+ ${$self->{buff}} =~ /\G(-?)/gc;
+ my $indented = $1;
+ my $tag = $self->scan_token();
+ $tag =~ s/['"\\]//g;
+ push(@{$self->{heretags}}, $indented ? "\t$tag" : "$tag");
+ return "<<$indented$tag";
+}
+
+sub scan_op {
+ my ($self, $c) = @_;
+ my $b = $self->{buff};
+ return $c unless $$b =~ /\G(.)/sgc;
+ my $cc = $c . $1;
+ return scan_heredoc_tag($self) if $cc eq '<<';
+ return $cc if $cc =~ /^(?:&&|\|\||>>|;;|<&|>&|<>|>\|)$/;
+ pos($$b)--;
+ return $c;
+}
+
+sub scan_sqstring {
+ my $self = shift @_;
+ ${$self->{buff}} =~ /\G([^']*'|.*\z)/sgc;
+ return "'" . $1;
+}
+
+sub scan_dqstring {
+ my $self = shift @_;
+ my $b = $self->{buff};
+ my $s = '"';
+ while (1) {
+ # slurp up non-special characters
+ $s .= $1 if $$b =~ /\G([^"\$\\]+)/gc;
+ # handle special characters
+ last unless $$b =~ /\G(.)/sgc;
+ my $c = $1;
+ $s .= '"', last if $c eq '"';
+ $s .= '$' . $self->scan_dollar(), next if $c eq '$';
+ if ($c eq '\\') {
+ $s .= '\\', last unless $$b =~ /\G(.)/sgc;
+ $c = $1;
+ next if $c eq "\n"; # line splice
+ # backslash escapes only $, `, ", \ in dq-string
+ $s .= '\\' unless $c =~ /^[\$`"\\]$/;
+ $s .= $c;
+ next;
+ }
+ die("internal error scanning dq-string '$c'\n");
+ }
+ return $s;
+}
+
+sub scan_balanced {
+ my ($self, $c1, $c2) = @_;
+ my $b = $self->{buff};
+ my $depth = 1;
+ my $s = $c1;
+ while ($$b =~ /\G([^\Q$c1$c2\E]*(?:[\Q$c1$c2\E]|\z))/gc) {
+ $s .= $1;
+ $depth++, next if $s =~ /\Q$c1\E$/;
+ $depth--;
+ last if $depth == 0;
+ }
+ return $s;
+}
+
+sub scan_subst {
+ my $self = shift @_;
+ my @tokens = $self->{parser}->parse(qr/^\)$/);
+ $self->{parser}->next_token(); # closing ")"
+ return @tokens;
+}
+
+sub scan_dollar {
+ my $self = shift @_;
+ my $b = $self->{buff};
+ return $self->scan_balanced('(', ')') if $$b =~ /\G\((?=\()/gc; # $((...))
+ return '(' . join(' ', $self->scan_subst()) . ')' if $$b =~ /\G\(/gc; # $(...)
+ return $self->scan_balanced('{', '}') if $$b =~ /\G\{/gc; # ${...}
+ return $1 if $$b =~ /\G(\w+)/gc; # $var
+ return $1 if $$b =~ /\G([@*#?$!0-9-])/gc; # $*, $1, $$, etc.
+ return '';
+}
+
+sub swallow_heredocs {
+ my $self = shift @_;
+ my $b = $self->{buff};
+ my $tags = $self->{heretags};
+ while (my $tag = shift @$tags) {
+ my $indent = $tag =~ s/^\t// ? '\\s*' : '';
+ $$b =~ /(?:\G|\n)$indent\Q$tag\E(?:\n|\z)/gc;
+ }
+}
+
+sub scan_token {
+ my $self = shift @_;
+ my $b = $self->{buff};
+ my $token = '';
+RESTART:
+ $$b =~ /\G[ \t]+/gc; # skip whitespace (but not newline)
+ return "\n" if $$b =~ /\G#[^\n]*(?:\n|\z)/gc; # comment
+ while (1) {
+ # slurp up non-special characters
+ $token .= $1 if $$b =~ /\G([^\\;&|<>(){}'"\$\s]+)/gc;
+ # handle special characters
+ last unless $$b =~ /\G(.)/sgc;
+ my $c = $1;
+ last if $c =~ /^[ \t]$/; # whitespace ends token
+ pos($$b)--, last if length($token) && $c =~ /^[;&|<>(){}\n]$/;
+ $token .= $self->scan_sqstring(), next if $c eq "'";
+ $token .= $self->scan_dqstring(), next if $c eq '"';
+ $token .= $c . $self->scan_dollar(), next if $c eq '$';
+ $self->swallow_heredocs(), $token = $c, last if $c eq "\n";
+ $token = $self->scan_op($c), last if $c =~ /^[;&|<>]$/;
+ $token = $c, last if $c =~ /^[(){}]$/;
+ if ($c eq '\\') {
+ $token .= '\\', last unless $$b =~ /\G(.)/sgc;
+ $c = $1;
+ next if $c eq "\n" && length($token); # line splice
+ goto RESTART if $c eq "\n"; # line splice
+ $token .= '\\' . $c;
+ next;
+ }
+ die("internal error scanning character '$c'\n");
+ }
+ return length($token) ? $token : undef;
+}
+
package ScriptParser;
sub new {
--
gitgitgadget
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer
2022-09-01 0:29 ` [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer Eric Sunshine via GitGitGadget
@ 2022-09-01 12:32 ` Ævar Arnfjörð Bjarmason
2022-09-03 6:00 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2022-09-01 12:32 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget
Cc: git, Jeff King, Elijah Newren, Fabian Stelzer,
Johannes Schindelin, Eric Sunshine
On Thu, Sep 01 2022, Eric Sunshine via GitGitGadget wrote:
> From: Eric Sunshine <sunshine@sunshineco.com>
Just generally on this series:
> + $tag =~ s/['"\\]//g;
I think this would be a *lot* easier to read if all of these little
regex decls could be split out into some "grammar" class, or other
helper module/namespace. So e.g.:
my $SCRIPT_QUOTE_RX = qr/['"\\]/;
Then:
> + return $cc if $cc =~ /^(?:&&|\|\||>>|;;|<&|>&|<>|>\|)$/;
my $SCRIPT_WHATEVER_RX = qr/
^(?:
&&
|
\|\|
[...]
/x;
etc., i.e. we could then make use of /x to add inline comments to these.
* Re: [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer
2022-09-01 12:32 ` Ævar Arnfjörð Bjarmason
@ 2022-09-03 6:00 ` Eric Sunshine
0 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-09-03 6:00 UTC (permalink / raw)
To: Ævar Arnfjörð Bjarmason
Cc: Eric Sunshine via GitGitGadget, Git List, Jeff King,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Thu, Sep 1, 2022 at 8:35 AM Ævar Arnfjörð Bjarmason <avarab@gmail.com> wrote:
> On Thu, Sep 01 2022, Eric Sunshine via GitGitGadget wrote:
> Just generally on this series:
>
> > + $tag =~ s/['"\\]//g;
>
> I think this would be a *lot* easier to read if all of these little
> regex decls could be split out into some "grammar" class, or other
> helper module/namespace. So e.g.:
>
> my $SCRIPT_QUOTE_RX = qr/['"\\]/;
Taken out of context (as in the quoted snippet), it may indeed be
difficult to understand what that line is doing; however, in context,
with a meaningful function name:
sub scan_heredoc_tag {
...
my $tag = $self->scan_token();
$tag =~ s/['"\\]//g;
push(@{$self->{heretags}}, $indented ? "\t$tag" : "$tag");
...
}
for someone who is familiar with common heredoc tag quoting/escaping
(i.e. <<'EOF', <<"EOF", <<\EOF), I find the inline character class
`['"\\]` much easier to understand than some opaque name such as
$SCRIPT_QUOTE_RX, doubly so because the definition of the named regex
might be far removed from the actual code which uses it, which would
require going and studying that definition before being able to
understand what this code is doing.
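For readers less versed in heredoc quoting: all three forms Eric lists
(<<'EOF', <<"EOF", <<\EOF) are terminated by the bare, unquoted tag,
which is exactly why the lexer strips the quote characters before
recording the tag. A minimal shell illustration (not part of the patch):

```shell
# Quoting the heredoc tag only controls expansion inside the body;
# the terminating line is always the bare tag, so the lexer must
# strip the quotes to know what terminator to look for.
var=expanded
cat <<EOF
$var
EOF
cat <<'EOF'
$var
EOF
cat <<\EOF
$var
EOF
```

Running it prints `expanded` once and the literal `$var` twice.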
I grasp that you made that name up on the fly as an example, but that does
highlight another reason why I'd be hesitant to try to pluck out and
name these regexes. Specifically, naming is hard and I don't trust
that I could come up with succinct meaningful names which would convey
what a regex does as well as the actual regex itself conveys what it
does. In context within the well-named function, `s/['"\\]//g` is
obviously stripping quoting/escaping from the tag name; trying to come
up with a succinct yet accurate name to convey that intention is
difficult. And this is just one example. The script is littered with
little regexes like this, and they are almost all unique, thus making
the task of inventing succinct meaningful names extra difficult. And,
as noted above, I'm not at all convinced that plucking the regex out
of its natural context -- thus making the reader go elsewhere to find
the definition of the regex -- would help improve comprehension.
> Then:
>
> > + return $cc if $cc =~ /^(?:&&|\|\||>>|;;|<&|>&|<>|>\|)$/;
>
> my $SCRIPT_WHATEVER_RX = qr/
> ^(?:
> &&
> |
> \|\|
> [...]
> /x;
>
> etc., i.e. we could then make use of /x to add inline comments to these.
`/x` does make this slightly easier to grok, and this is an example
of a regex which might be easy to name (i.e. $TWO_CHAR_OPERATOR), but
-- extra mandatory escaping aside -- it's not hard to understand this
one as-is; it's pretty obvious that it's looking for operators `&&`,
`||`, `>>`, `;;`, `<&`, `>&`, `<>`, and `>|`.
* [PATCH 03/18] chainlint.pl: add POSIX shell parser
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 01/18] t: add skeleton chainlint.pl Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 04/18] chainlint.pl: add parser to validate tests Eric Sunshine via GitGitGadget
` (15 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Continue fleshing out chainlint.pl by adding a general purpose recursive
descent parser for the POSIX shell command language. Although never
invoked directly, upcoming parser subclasses will extend its
functionality for specific purposes, such as plucking test definitions
from input scripts and applying domain-specific knowledge to perform
test validation.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 243 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 243 insertions(+)
diff --git a/t/chainlint.pl b/t/chainlint.pl
index 81ffbf28bf3..cdf136896be 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -198,6 +198,249 @@ RESTART:
return length($token) ? $token : undef;
}
+# ShellParser parses POSIX shell scripts (with minor extensions for Bash). It
+# is a recursive descent parser very roughly modeled after section 2.10 "Shell
+# Grammar" of POSIX chapter 2 "Shell Command Language".
+package ShellParser;
+
+sub new {
+ my ($class, $s) = @_;
+ my $self = bless {
+ buff => [],
+ stop => [],
+ output => []
+ } => $class;
+ $self->{lexer} = Lexer->new($self, $s);
+ return $self;
+}
+
+sub next_token {
+ my $self = shift @_;
+ return pop(@{$self->{buff}}) if @{$self->{buff}};
+ return $self->{lexer}->scan_token();
+}
+
+sub untoken {
+ my $self = shift @_;
+ push(@{$self->{buff}}, @_);
+}
+
+sub peek {
+ my $self = shift @_;
+ my $token = $self->next_token();
+ return undef unless defined($token);
+ $self->untoken($token);
+ return $token;
+}
+
+sub stop_at {
+ my ($self, $token) = @_;
+ return 1 unless defined($token);
+ my $stop = ${$self->{stop}}[-1] if @{$self->{stop}};
+ return defined($stop) && $token =~ $stop;
+}
+
+sub expect {
+ my ($self, $expect) = @_;
+ my $token = $self->next_token();
+ return $token if defined($token) && $token eq $expect;
+ push(@{$self->{output}}, "?!ERR?! expected '$expect' but found '" . (defined($token) ? $token : "<end-of-input>") . "'\n");
+ $self->untoken($token) if defined($token);
+ return ();
+}
+
+sub optional_newlines {
+ my $self = shift @_;
+ my @tokens;
+ while (my $token = $self->peek()) {
+ last unless $token eq "\n";
+ push(@tokens, $self->next_token());
+ }
+ return @tokens;
+}
+
+sub parse_group {
+ my $self = shift @_;
+ return ($self->parse(qr/^}$/),
+ $self->expect('}'));
+}
+
+sub parse_subshell {
+ my $self = shift @_;
+ return ($self->parse(qr/^\)$/),
+ $self->expect(')'));
+}
+
+sub parse_case_pattern {
+ my $self = shift @_;
+ my @tokens;
+ while (defined(my $token = $self->next_token())) {
+ push(@tokens, $token);
+ last if $token eq ')';
+ }
+ return @tokens;
+}
+
+sub parse_case {
+ my $self = shift @_;
+ my @tokens;
+ push(@tokens,
+ $self->next_token(), # subject
+ $self->optional_newlines(),
+ $self->expect('in'),
+ $self->optional_newlines());
+ while (1) {
+ my $token = $self->peek();
+ last unless defined($token) && $token ne 'esac';
+ push(@tokens,
+ $self->parse_case_pattern(),
+ $self->optional_newlines(),
+ $self->parse(qr/^(?:;;|esac)$/)); # item body
+ $token = $self->peek();
+ last unless defined($token) && $token ne 'esac';
+ push(@tokens,
+ $self->expect(';;'),
+ $self->optional_newlines());
+ }
+ push(@tokens, $self->expect('esac'));
+ return @tokens;
+}
+
+sub parse_for {
+ my $self = shift @_;
+ my @tokens;
+ push(@tokens,
+ $self->next_token(), # variable
+ $self->optional_newlines());
+ my $token = $self->peek();
+ if (defined($token) && $token eq 'in') {
+ push(@tokens,
+ $self->expect('in'),
+ $self->optional_newlines());
+ }
+ push(@tokens,
+ $self->parse(qr/^do$/), # items
+ $self->expect('do'),
+ $self->optional_newlines(),
+ $self->parse_loop_body(),
+ $self->expect('done'));
+ return @tokens;
+}
+
+sub parse_if {
+ my $self = shift @_;
+ my @tokens;
+ while (1) {
+ push(@tokens,
+ $self->parse(qr/^then$/), # if/elif condition
+ $self->expect('then'),
+ $self->optional_newlines(),
+ $self->parse(qr/^(?:elif|else|fi)$/)); # if/elif body
+ my $token = $self->peek();
+ last unless defined($token) && $token eq 'elif';
+ push(@tokens, $self->expect('elif'));
+ }
+ my $token = $self->peek();
+ if (defined($token) && $token eq 'else') {
+ push(@tokens,
+ $self->expect('else'),
+ $self->optional_newlines(),
+ $self->parse(qr/^fi$/)); # else body
+ }
+ push(@tokens, $self->expect('fi'));
+ return @tokens;
+}
+
+sub parse_loop_body {
+ my $self = shift @_;
+ return $self->parse(qr/^done$/);
+}
+
+sub parse_loop {
+ my $self = shift @_;
+ return ($self->parse(qr/^do$/), # condition
+ $self->expect('do'),
+ $self->optional_newlines(),
+ $self->parse_loop_body(),
+ $self->expect('done'));
+}
+
+sub parse_func {
+ my $self = shift @_;
+ return ($self->expect('('),
+ $self->expect(')'),
+ $self->optional_newlines(),
+ $self->parse_cmd()); # body
+}
+
+sub parse_bash_array_assignment {
+ my $self = shift @_;
+ my @tokens = $self->expect('(');
+ while (defined(my $token = $self->next_token())) {
+ push(@tokens, $token);
+ last if $token eq ')';
+ }
+ return @tokens;
+}
+
+my %compound = (
+ '{' => \&parse_group,
+ '(' => \&parse_subshell,
+ 'case' => \&parse_case,
+ 'for' => \&parse_for,
+ 'if' => \&parse_if,
+ 'until' => \&parse_loop,
+ 'while' => \&parse_loop);
+
+sub parse_cmd {
+ my $self = shift @_;
+ my $cmd = $self->next_token();
+ return () unless defined($cmd);
+ return $cmd if $cmd eq "\n";
+
+ my $token;
+ my @tokens = $cmd;
+ if ($cmd eq '!') {
+ push(@tokens, $self->parse_cmd());
+ return @tokens;
+ } elsif (my $f = $compound{$cmd}) {
+ push(@tokens, $self->$f());
+ } elsif (defined($token = $self->peek()) && $token eq '(') {
+ if ($cmd !~ /\w=$/) {
+ push(@tokens, $self->parse_func());
+ return @tokens;
+ }
+ $tokens[-1] .= join(' ', $self->parse_bash_array_assignment());
+ }
+
+ while (defined(my $token = $self->next_token())) {
+ $self->untoken($token), last if $self->stop_at($token);
+ push(@tokens, $token);
+ last if $token =~ /^(?:[;&\n|]|&&|\|\|)$/;
+ }
+ push(@tokens, $self->next_token()) if $tokens[-1] ne "\n" && defined($token = $self->peek()) && $token eq "\n";
+ return @tokens;
+}
+
+sub accumulate {
+ my ($self, $tokens, $cmd) = @_;
+ push(@$tokens, @$cmd);
+}
+
+sub parse {
+ my ($self, $stop) = @_;
+ push(@{$self->{stop}}, $stop);
+ goto DONE if $self->stop_at($self->peek());
+ my @tokens;
+ while (my @cmd = $self->parse_cmd()) {
+ $self->accumulate(\@tokens, \@cmd);
+ last if $self->stop_at($self->peek());
+ }
+DONE:
+ pop(@{$self->{stop}});
+ return @tokens;
+}
+
package ScriptParser;
sub new {
--
gitgitgadget
* [PATCH 04/18] chainlint.pl: add parser to validate tests
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (2 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 03/18] chainlint.pl: add POSIX shell parser Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 05/18] chainlint.pl: add parser to identify test definitions Eric Sunshine via GitGitGadget
` (14 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Continue fleshing out chainlint.pl by adding TestParser, a parser with
special knowledge about how Git tests should be written; for instance,
it knows that commands within a test body should be chained together
with `&&`. An upcoming parser which plucks test definitions from test
scripts will invoke TestParser for each test body it encounters.
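As a concrete example of the problem TestParser hunts for, here is a
deliberately broken chain of the sort that might appear in a test body
(illustrative only, not taken from the patch); chainlint.pl would insert
a `?!AMP?!` annotation after the first command:

```shell
# the missing "&&" after "echo one" breaks the chain: if that
# command failed, the test would blithely continue and could
# still report success
echo one
echo two &&
echo three
```

With the chain intact (`echo one &&`), a failure of any command would
propagate and fail the test, which is the behavior the linter enforces.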
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 46 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/t/chainlint.pl b/t/chainlint.pl
index cdf136896be..ad257106e56 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -441,6 +441,52 @@ DONE:
return @tokens;
}
+# TestParser is a subclass of ShellParser which, beyond parsing shell script
+# code, is also imbued with semantic knowledge of test construction, and checks
+# tests for common problems (such as broken &&-chains) which might hide bugs in
+# the tests themselves or in behaviors being exercised by the tests. As such,
+# TestParser is only called upon to parse test bodies, not the top-level
+# scripts in which the tests are defined.
+package TestParser;
+
+use base 'ShellParser';
+
+sub find_non_nl {
+ my $tokens = shift @_;
+ my $n = shift @_;
+ $n = $#$tokens if !defined($n);
+ $n-- while $n >= 0 && $$tokens[$n] eq "\n";
+ return $n;
+}
+
+sub ends_with {
+ my ($tokens, $needles) = @_;
+ my $n = find_non_nl($tokens);
+ for my $needle (reverse(@$needles)) {
+ return undef if $n < 0;
+ $n = find_non_nl($tokens, $n), next if $needle eq "\n";
+ return undef if $$tokens[$n] !~ $needle;
+ $n--;
+ }
+ return 1;
+}
+
+sub accumulate {
+ my ($self, $tokens, $cmd) = @_;
+ goto DONE unless @$tokens;
+ goto DONE if @$cmd == 1 && $$cmd[0] eq "\n";
+
+ # did previous command end with "&&", "||", "|"?
+ goto DONE if ends_with($tokens, [qr/^(?:&&|\|\||\|)$/]);
+
+ # flag missing "&&" at end of previous command
+ my $n = find_non_nl($tokens);
+ splice(@$tokens, $n + 1, 0, '?!AMP?!') unless $n < 0;
+
+DONE:
+ $self->SUPER::accumulate($tokens, $cmd);
+}
+
package ScriptParser;
sub new {
--
gitgitgadget
* [PATCH 05/18] chainlint.pl: add parser to identify test definitions
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (3 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 04/18] chainlint.pl: add parser to validate tests Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 06/18] chainlint.pl: validate test scripts in parallel Eric Sunshine via GitGitGadget
` (13 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Finish fleshing out chainlint.pl by adding ScriptParser, a parser which
scans shell scripts for tests defined by test_expect_success() and
test_expect_failure(), plucks the test body from each definition, and
passes it to TestParser for validation. It recognizes test definitions
not only at the top level of test scripts but also tests synthesized
within compound commands such as loops and functions.
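To illustrate the "tests synthesized within compound commands" case,
consider a loop that defines one test per iteration; `test_expect_success`
below is a stub standing in for git's real helper in t/test-lib.sh (an
assumption for this sketch). The point is only that such definitions never
appear at the top level, yet ScriptParser must still pluck out each body:

```shell
# stub for git's test_expect_success (assumption; the real
# helper lives in t/test-lib.sh) -- merely records the title
test_expect_success () {
    echo "would lint body of: $1"
}

for item in alpha beta
do
    test_expect_success "handle $item" '
        echo "$item" &&
        test -n "$item"
    '
done
```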
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 63 +++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 60 insertions(+), 3 deletions(-)
diff --git a/t/chainlint.pl b/t/chainlint.pl
index ad257106e56..d526723ac00 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -487,18 +487,75 @@ DONE:
$self->SUPER::accumulate($tokens, $cmd);
}
+# ScriptParser is a subclass of ShellParser which identifies individual test
+# definitions within test scripts, and passes each test body through TestParser
+# to identify possible problems. ShellParser detects test definitions not only
+# at the top-level of test scripts but also within compound commands such as
+# loops and function definitions.
package ScriptParser;
+use base 'ShellParser';
+
sub new {
my $class = shift @_;
- my $self = bless {} => $class;
- $self->{output} = [];
+ my $self = $class->SUPER::new(@_);
$self->{ntests} = 0;
return $self;
}
+# extract the raw content of a token, which may be a single string or a
+# composition of multiple strings and non-string character runs; for instance,
+# `"test body"` unwraps to `test body`; `word"a b"42'c d'` to `worda b42c d`
+sub unwrap {
+ my $token = @_ ? shift @_ : $_;
+ # simple case: 'sqstring' or "dqstring"
+ return $token if $token =~ s/^'([^']*)'$/$1/;
+ return $token if $token =~ s/^"([^"]*)"$/$1/;
+
+ # composite case
+ my ($s, $q, $escaped);
+ while (1) {
+ # slurp up non-special characters
+ $s .= $1 if $token =~ /\G([^\\'"]*)/gc;
+ # handle special characters
+ last unless $token =~ /\G(.)/sgc;
+ my $c = $1;
+ $q = undef, next if defined($q) && $c eq $q;
+ $q = $c, next if !defined($q) && $c =~ /^['"]$/;
+ if ($c eq '\\') {
+ last unless $token =~ /\G(.)/sgc;
+ $c = $1;
+ $s .= '\\' if $c eq "\n"; # preserve line splice
+ }
+ $s .= $c;
+ }
+ return $s
+}
+
+sub check_test {
+ my $self = shift @_;
+ my ($title, $body) = map(unwrap, @_);
+ $self->{ntests}++;
+ my $parser = TestParser->new(\$body);
+ my @tokens = $parser->parse();
+ return unless $emit_all || grep(/\?![^?]+\?!/, @tokens);
+ my $checked = join(' ', @tokens);
+ $checked =~ s/^\n//;
+ $checked =~ s/^ //mg;
+ $checked =~ s/ $//mg;
+ $checked .= "\n" unless $checked =~ /\n$/;
+ push(@{$self->{output}}, "# chainlint: $title\n$checked");
+}
+
sub parse_cmd {
- return undef;
+ my $self = shift @_;
+ my @tokens = $self->SUPER::parse_cmd();
+ return @tokens unless @tokens && $tokens[0] =~ /^test_expect_(?:success|failure)$/;
+ my $n = $#tokens;
+ $n-- while $n >= 0 && $tokens[$n] =~ /^(?:[;&\n|]|&&|\|\|)$/;
+ $self->check_test($tokens[1], $tokens[2]) if $n == 2; # title body
+ $self->check_test($tokens[2], $tokens[3]) if $n > 2; # prereq title body
+ return @tokens;
}
# main contains high-level functionality for processing command-line switches,
--
gitgitgadget
* [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (4 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 05/18] chainlint.pl: add parser to identify test definitions Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 12:36 ` Ævar Arnfjörð Bjarmason
2022-09-06 22:35 ` Eric Wong
2022-09-01 0:29 ` [PATCH 07/18] chainlint.pl: don't require `return|exit|continue` to end with `&&` Eric Sunshine via GitGitGadget
` (12 subsequent siblings)
18 siblings, 2 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Although chainlint.pl has undergone a good deal of optimization during
its development -- increasing in speed significantly -- parsing and
validating 1050+ scripts and 16500+ tests via Perl is not exactly
instantaneous. However, perceived performance can be improved by taking
advantage of the fact that there is no interdependence between test
scripts or test definitions, thus parsing and validating can be done in
parallel. The number of available cores is determined automatically but
can be overridden via the --jobs option.
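The core-count probe added by the patch has a straightforward shell
analogue; the sketch below mirrors its fallback order (NUMBER_OF_PROCESSORS
on Windows, /proc/cpuinfo on Linux-like systems, then a default of 1) but
omits the macOS/BSD sysctl probe for brevity -- an approximation, not a
translation of the Perl:

```shell
ncores () {
    if test -n "${NUMBER_OF_PROCESSORS:-}"
    then
        # Windows sets this in the environment
        echo "$NUMBER_OF_PROCESSORS"
    elif test -r /proc/cpuinfo
    then
        # Linux / MSYS2 / Cygwin / WSL
        grep -c '^processor[[:space:]]*:' /proc/cpuinfo
    else
        # conservative fallback (the patch additionally
        # probes "sysctl -n hw.ncpu" on macOS/BSD)
        echo 1
    fi
}
ncores
```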
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 50 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 49 insertions(+), 1 deletion(-)
diff --git a/t/chainlint.pl b/t/chainlint.pl
index d526723ac00..898573a9100 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -15,9 +15,11 @@
use warnings;
use strict;
+use Config;
use File::Glob;
use Getopt::Long;
+my $jobs = -1;
my $show_stats;
my $emit_all;
@@ -569,6 +571,16 @@ if (eval {require Time::HiRes; Time::HiRes->import(); 1;}) {
$interval = sub { return Time::HiRes::tv_interval(shift); };
}
+sub ncores {
+ # Windows
+ return $ENV{NUMBER_OF_PROCESSORS} if exists($ENV{NUMBER_OF_PROCESSORS});
+ # Linux / MSYS2 / Cygwin / WSL
+ do { local @ARGV='/proc/cpuinfo'; return scalar(grep(/^processor\s*:/, <>)); } if -r '/proc/cpuinfo';
+ # macOS & BSD
+ return qx/sysctl -n hw.ncpu/ if $^O =~ /(?:^darwin$|bsd)/;
+ return 1;
+}
+
sub show_stats {
my ($start_time, $stats) = @_;
my $walltime = $interval->($start_time);
@@ -621,7 +633,9 @@ sub exit_code {
Getopt::Long::Configure(qw{bundling});
GetOptions(
"emit-all!" => \$emit_all,
+ "jobs|j=i" => \$jobs,
"stats|show-stats!" => \$show_stats) or die("option error\n");
+$jobs = ncores() if $jobs < 1;
my $start_time = $getnow->();
my @stats;
@@ -633,6 +647,40 @@ unless (@scripts) {
exit;
}
-push(@stats, check_script(1, sub { shift(@scripts); }, sub { print(@_); }));
+unless ($Config{useithreads} && eval {
+ require threads; threads->import();
+ require Thread::Queue; Thread::Queue->import();
+ 1;
+ }) {
+ push(@stats, check_script(1, sub { shift(@scripts); }, sub { print(@_); }));
+ show_stats($start_time, \@stats) if $show_stats;
+ exit(exit_code(\@stats));
+}
+
+my $script_queue = Thread::Queue->new();
+my $output_queue = Thread::Queue->new();
+
+sub next_script { return $script_queue->dequeue(); }
+sub emit { $output_queue->enqueue(@_); }
+
+sub monitor {
+ while (my $s = $output_queue->dequeue()) {
+ print($s);
+ }
+}
+
+my $mon = threads->create({'context' => 'void'}, \&monitor);
+threads->create({'context' => 'list'}, \&check_script, $_, \&next_script, \&emit) for 1..$jobs;
+
+$script_queue->enqueue(@scripts);
+$script_queue->end();
+
+for (threads->list()) {
+ push(@stats, $_->join()) unless $_ == $mon;
+}
+
+$output_queue->end();
+$mon->join();
+
show_stats($start_time, \@stats) if $show_stats;
exit(exit_code(\@stats));
--
gitgitgadget
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-01 0:29 ` [PATCH 06/18] chainlint.pl: validate test scripts in parallel Eric Sunshine via GitGitGadget
@ 2022-09-01 12:36 ` Ævar Arnfjörð Bjarmason
2022-09-03 7:51 ` Eric Sunshine
2022-09-06 22:35 ` Eric Wong
1 sibling, 1 reply; 131+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2022-09-01 12:36 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget
Cc: git, Jeff King, Elijah Newren, Fabian Stelzer,
Johannes Schindelin, Eric Sunshine
On Thu, Sep 01 2022, Eric Sunshine via GitGitGadget wrote:
> From: Eric Sunshine <sunshine@sunshineco.com>
>
> Although chainlint.pl has undergone a good deal of optimization during
> its development -- increasing in speed significantly -- parsing and
> validating 1050+ scripts and 16500+ tests via Perl is not exactly
> instantaneous. However, perceived performance can be improved by taking
> advantage of the fact that there is no interdependence between test
> scripts or test definitions, thus parsing and validating can be done in
> parallel. The number of available cores is determined automatically but
> can be overridden via the --jobs option.
Per your CL:
Ævar offered some sensible comments[2,3] about optimizing the Makefile rules
related to chainlint, but those optimizations are not tackled here for a few
reasons: (1) this series is already quite long, (2) I'd like to keep the
series focused on its primary goal of installing a new and improved linter,
(3) these patches do not make the Makefile situation any worse[4], and (4)
those optimizations can easily be done atop this series[5].
I have been running with those t/Makefile changesg locally, but didn't
submit them. FWIW that's here:
https://github.com/git/git/compare/master...avar:git:avar/t-Makefile-use-dependency-graph-for-check-chainlint
Which I'm not entirely sure I'm happy about, and it's just about the
chainlint tests, but...
> +sub ncores {
> + # Windows
> + return $ENV{NUMBER_OF_PROCESSORS} if exists($ENV{NUMBER_OF_PROCESSORS});
> + # Linux / MSYS2 / Cygwin / WSL
> + do { local @ARGV='/proc/cpuinfo'; return scalar(grep(/^processor\s*:/, <>)); } if -r '/proc/cpuinfo';
> + # macOS & BSD
> + return qx/sysctl -n hw.ncpu/ if $^O =~ /(?:^darwin$|bsd)/;
> + return 1;
> +}
> +
> sub show_stats {
> my ($start_time, $stats) = @_;
> my $walltime = $interval->($start_time);
> @@ -621,7 +633,9 @@ sub exit_code {
> Getopt::Long::Configure(qw{bundling});
> GetOptions(
> "emit-all!" => \$emit_all,
> + "jobs|j=i" => \$jobs,
> "stats|show-stats!" => \$show_stats) or die("option error\n");
> +$jobs = ncores() if $jobs < 1;
>
> my $start_time = $getnow->();
> my @stats;
> @@ -633,6 +647,40 @@ unless (@scripts) {
> exit;
> }
>
> -push(@stats, check_script(1, sub { shift(@scripts); }, sub { print(@_); }));
> +unless ($Config{useithreads} && eval {
> + require threads; threads->import();
> + require Thread::Queue; Thread::Queue->import();
> + 1;
> + }) {
> + push(@stats, check_script(1, sub { shift(@scripts); }, sub { print(@_); }));
> + show_stats($start_time, \@stats) if $show_stats;
> + exit(exit_code(\@stats));
> +}
> +
> +my $script_queue = Thread::Queue->new();
> +my $output_queue = Thread::Queue->new();
> +
> +sub next_script { return $script_queue->dequeue(); }
> +sub emit { $output_queue->enqueue(@_); }
> +
> +sub monitor {
> + while (my $s = $output_queue->dequeue()) {
> + print($s);
> + }
> +}
> +
> +my $mon = threads->create({'context' => 'void'}, \&monitor);
> +threads->create({'context' => 'list'}, \&check_script, $_, \&next_script, \&emit) for 1..$jobs;
> +
> +$script_queue->enqueue(@scripts);
> +$script_queue->end();
> +
> +for (threads->list()) {
> + push(@stats, $_->join()) unless $_ == $mon;
> +}
> +
> +$output_queue->end();
> +$mon->join();
Maybe I'm misunderstanding this whole thing, but this really seems like
the wrong direction in an otherwise fantastic series.
I.e. it's *great* that we can do chain-lint without needing to actually
execute the *.sh file, this series adds a lint parser that can parse
those *.sh "at rest".
But in your 16/18 you then do:
+if test "${GIT_TEST_CHAIN_LINT:-1}" != 0
+then
+ "$PERL_PATH" "$TEST_DIRECTORY/chainlint.pl" "$0" ||
+ BUG "lint error (see '?!...!? annotations above)"
+fi
I may just be missing something here, but why not instead just borrow
what I did for "lint-docs" in 8650c6298c1 (doc lint: make "lint-docs"
non-.PHONY, 2021-10-15)?
I.e. if we can run against t0001-init.sh or whatever *once* to see if it
chain-lints OK then surely we could have a rule like:
t0001-init.sh.chainlint-ok: t0001-init.sh
perl chainlint.pl $< >$@
Then whenever you change t0001-init.sh we refresh that
t0001-init.sh.chainlint-ok, if the chainlint.pl exits non-zero we'll
fail to make it, and will unlink that t0001-init.sh.chainlint-ok.
That way you wouldn't need any parallelism in the Perl script, because
you'd have "make" take care of it, and the common case of re-testing
where the speed matters would be that we wouldn't need to run this at
all, or would only re-run it for the test scripts that changed.
(Obviously a "real" implementation would want to create that ".ok" file
in "t/.build/chainlint" or whatever)
A drawback is that you'd probably be slower on the initial run, as you'd
spawn N chainlint.pl. You could use $? instead of $< to get around that,
but that requires some re-structuring, and I've found it to generally
not be worth it.
It would also have the drawback that a:
./t0001-init.sh
wouldn't run the chain-lint, but this would:
make T=t0001-init.sh
But if we want the former to work we could carry some
"GIT_TEST_VIA_MAKEFILE" variable or whatever, and only run the
test-via-test-lib.sh if it isn't set.
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-01 12:36 ` Ævar Arnfjörð Bjarmason
@ 2022-09-03 7:51 ` Eric Sunshine
0 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-09-03 7:51 UTC (permalink / raw)
To: Ævar Arnfjörð Bjarmason
Cc: Eric Sunshine via GitGitGadget, Git List, Jeff King,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Thu, Sep 1, 2022 at 8:47 AM Ævar Arnfjörð Bjarmason <avarab@gmail.com> wrote:
> On Thu, Sep 01 2022, Eric Sunshine via GitGitGadget wrote:
> > Although chainlint.pl has undergone a good deal of optimization during
> > its development -- increasing in speed significantly -- parsing and
> > validating 1050+ scripts and 16500+ tests via Perl is not exactly
> > instantaneous. However, perceived performance can be improved by taking
> > advantage of the fact that there is no interdependence between test
> > scripts or test definitions, thus parsing and validating can be done in
> > parallel. The number of available cores is determined automatically but
> > can be overridden via the --jobs option.
>
> Per your CL:
>
> Ævar offered some sensible comments[2,3] about optimizing the Makefile rules
> related to chainlint, but those optimizations are not tackled here for a few
> reasons: (1) this series is already quite long, (2) I'd like to keep the
> series focused on its primary goal of installing a new and improved linter,
> (3) these patches do not make the Makefile situation any worse[4], and (4)
> those optimizations can easily be done atop this series[5].
>
I have been running with those t/Makefile changes locally, but didn't
> submit them. FWIW that's here:
>
> https://github.com/git/git/compare/master...avar:git:avar/t-Makefile-use-dependency-graph-for-check-chainlint
Thanks for the link. It's nice to see an actual implementation. I
think most of what you wrote in the commit message and the patch
itself are still meaningful following this series.
> > +my $script_queue = Thread::Queue->new();
> > +my $output_queue = Thread::Queue->new();
> > +
> > +my $mon = threads->create({'context' => 'void'}, \&monitor);
> > +threads->create({'context' => 'list'}, \&check_script, $_, \&next_script, \&emit) for 1..$jobs;
>
> Maybe I'm misunderstanding this whole thing, but this really seems like
> the wrong direction in an otherwise fantastic series.
>
> I.e. it's *great* that we can do chain-lint without needing to actually
> execute the *.sh file, this series adds a lint parser that can parse
> those *.sh "at rest".
>
> But in your 16/18 you then do:
>
> +if test "${GIT_TEST_CHAIN_LINT:-1}" != 0
> +then
> + "$PERL_PATH" "$TEST_DIRECTORY/chainlint.pl" "$0" ||
> + BUG "lint error (see '?!...!? annotations above)"
> +fi
>
> I may just be missing something here, but why not instead just borrow
> what I did for "lint-docs" in 8650c6298c1 (doc lint: make "lint-docs"
> non-.PHONY, 2021-10-15)?
I may be misunderstanding, but regarding patch [16/18], I think you
answered your own question at the end of your response when you
pointed out the drawback that you wouldn't get linting when running
the test script manually (i.e. `./t1234-test-stuff.sh`). Ensuring that
the linter is invoked when running a test script manually is important
(at least to me) since it's a frequent step when developing a new test
or modifying an existing test. [16/18] is present to ensure that we
still get that behavior.
> I.e. if we can run against t0001-init.sh or whatever *once* to see if it
> chain-lints OK then surely we could have a rule like:
>
> t0001-init.sh.chainlint-ok: t0001-init.sh
> perl chainlint.pl $< >$@
>
> Then whenever you change t0001-init.sh we refresh that
> t0001-init.sh.chainlint-ok, if the chainlint.pl exits non-zero we'll
> fail to make it, and will unlink that t0001-init.sh.chainlint-ok.
>
> That way you wouldn't need any parallelism in the Perl script, because
> you'd have "make" take care of it, and the common case of re-testing
> where the speed matters would be that we wouldn't need to run this at
> all, or would only re-run it for the test scripts that changed.
A couple comments regarding parallelism: (1) as mentioned in another
response, when developing the script, I had in mind that it might be
useful for other projects (e.g. `sharness`), so it should be able to
stand on its own without advanced Makefile support, and (2) process
creation on Microsoft Windows is _very_ expensive and slow, so on that
platform, being able to lint all tests in all scripts with a single
invocation is a big win over running the linter 1050+ times, once for
each test script.
That's not to discredit any of your points... I'm just conveying some
of my thought process.
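For what it's worth, the stamp-file caching described above could be
sketched in plain shell (an illustrative stand-in, not code from this
series; the `lint_if_stale` helper and the ".build/chainlint" stamp
directory are invented names):

```shell
#!/bin/sh
# Re-lint a test script only when it is newer than its ".ok" stamp,
# mimicking what a t0001-init.sh.chainlint-ok make rule would do.
lint_if_stale() {
	script=$1
	stamp=".build/chainlint/$script.ok"
	mkdir -p "$(dirname "$stamp")"
	if test "$stamp" -nt "$script"
	then
		echo "up-to-date: $script"
	else
		# stand-in for: perl chainlint.pl "$script" || exit 1
		echo "linting: $script" &&
		touch "$stamp"
	fi
}
```

Under `make`, both the freshness check and the parallelism (`make -j`)
come for free; the sketch only shows the skip-if-unchanged behavior.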
> (Obviously a "real" implementation would want to create that ".ok" file
> in t/.build/chainlint" or whatever)
>
> A drawback is that you'd probably be slower on the initial run, as you'd
> spawn N chainlint.pl. You could use $? instead of $< to get around that,
> but that requires some re-structuring, and I've found it to generally
> not be worth it.
The $? trick might be something Windows folk would appreciate, and
even those of us in macOS land (at least those of us with old hardware
and OS).
> It would also have the drawback that a:
>
> ./t0001-init.sh
>
> wouldn't run the chain-lint, but this would:
>
> make T=t0001-init.sh
>
> But if want the former to work we could carry some
> "GIT_TEST_VIA_MAKEFILE" variable or whatever, and only run the
> test-via-test-lib.sh if it isn't set.
I may be misunderstanding, but isn't the GIT_TEST_CHAIN_LINT variable
useful for this already, as in [16/18]?
Regarding your observations as a whole, I think the extract from the
cover letter which you cited above is relevant to my response. I don't
disagree with your points about using the Makefile to optimize away
unnecessary invocations of the linter, or that doing so can be a
useful future direction. As mentioned in the cover letter, though, I
think that such optimizations are outside the scope of this series
which -- aside from installing an improved linter -- aims to maintain
the status quo; in particular, this series ensures that (1) tests get
linted as they are being written/modified when the developer runs the
script manually `./t1234-test-stuff.sh`, and (2) all tests get linted
upon `make test`.
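The default-on gating quoted from [16/18] earlier relies on the
shell's `${VAR:-default}` expansion; a minimal sketch of that
mechanism (with an `echo` standing in for the real chainlint.pl
invocation):

```shell
#!/bin/sh
# Linting runs unless GIT_TEST_CHAIN_LINT is explicitly set to 0,
# matching the `${GIT_TEST_CHAIN_LINT:-1}` check in test-lib.sh.
maybe_lint() {
	if test "${GIT_TEST_CHAIN_LINT:-1}" != 0
	then
		echo "linting $1"	# stand-in for: "$PERL_PATH" chainlint.pl "$1"
	else
		echo "lint skipped for $1"
	fi
}
```

So `./t1234-test-stuff.sh` lints by default, while
`GIT_TEST_CHAIN_LINT=0 ./t1234-test-stuff.sh` opts out.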
(The other reason why I'd prefer to see such optimizations applied
atop this series is that I simply don't have the time these days to
devote to major changes of direction in this series, which I think
meets its stated goals without making the situation any worse or
making it any more difficult to apply the optimizations you describe.
And the new linter has been languishing on my computer for far too
long; the implementation has been complete for well over a year, but
it took me this long to finish polishing the patch series. I'd like to
see the new linter make it into the toolchest of other developers
since it can be beneficial; it has already found scores or hundreds[1]
of possible hiding places for bugs due to broken &&-chains or missing
`|| return`, and has sniffed out some actual broken tests[2,3].)
[1]: https://lore.kernel.org/git/20211209051115.52629-1-sunshine@sunshineco.com/
[2]: https://lore.kernel.org/git/20211209051115.52629-3-sunshine@sunshineco.com/
[3]: https://lore.kernel.org/git/7b0784056f3cc0c96e9543ae44d0f5a7b0bf85fa.1661192802.git.gitgitgadget@gmail.com/
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-01 0:29 ` [PATCH 06/18] chainlint.pl: validate test scripts in parallel Eric Sunshine via GitGitGadget
2022-09-01 12:36 ` Ævar Arnfjörð Bjarmason
@ 2022-09-06 22:35 ` Eric Wong
2022-09-06 22:52 ` Eric Sunshine
1 sibling, 1 reply; 131+ messages in thread
From: Eric Wong @ 2022-09-06 22:35 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget
Cc: git, Jeff King, Elijah Newren,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin, Eric Sunshine
Eric Sunshine via GitGitGadget <gitgitgadget@gmail.com> wrote:
> +unless ($Config{useithreads} && eval {
> + require threads; threads->import();
Fwiw, the threads(3perl) manpage has this since 2014:
The use of interpreter-based threads in perl is officially discouraged.
I was bummed, too :< but I've decided it wasn't worth the
effort to deal with the problems threads could cause down the
line in future Perl versions. For example, common libraries
like File::Temp will chdir behind-the-scenes which is
thread-unsafe.
(of course I only care about *BSD and Linux on MMU hardware,
so I use SOCK_SEQPACKET and fork() freely :>)
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-06 22:35 ` Eric Wong
@ 2022-09-06 22:52 ` Eric Sunshine
2022-09-06 23:26 ` Jeff King
0 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine @ 2022-09-06 22:52 UTC (permalink / raw)
To: Eric Wong
Cc: Eric Sunshine via GitGitGadget, Git List, Jeff King,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Tue, Sep 6, 2022 at 6:35 PM Eric Wong <e@80x24.org> wrote:
> Eric Sunshine via GitGitGadget <gitgitgadget@gmail.com> wrote:
> > +unless ($Config{useithreads} && eval {
> > + require threads; threads->import();
>
> Fwiw, the threads(3perl) manpage has this since 2014:
>
> The use of interpreter-based threads in perl is officially discouraged.
Thanks for pointing this out. I did see that, but as no better
alternative was offered, and since I did want this to work on Windows,
I went with it.
> I was bummed, too :< but I've decided it wasn't worth the
> effort to deal with the problems threads could cause down the
> line in future Perl versions. For example, common libraries
> like File::Temp will chdir behind-the-scenes which is
> thread-unsafe.
>
> (of course I only care about *BSD and Linux on MMU hardware,
> so I use SOCK_SEQPACKET and fork() freely :>)
I'm not overly worried about the deprecation at the moment since (1)
chainlint.pl isn't a widely used script -- its audience is very
narrow; (2) the `$Config{useithreads}` conditional can be seen as an
automatic escape hatch, and (if need be) I can even make `--jobs=1` be
an explicit escape hatch, and there's already --no-chain-lint for an
extreme escape hatch; (3) the script is pretty much standalone -- it
doesn't rely upon any libraries like File::Temp or others; (4) Ævar
has ideas for using the Makefile for parallelism instead; (5) we can
cross the deprecation-bridge when/if it actually does become a
problem, either by dropping parallelism from chainlint.pl or by
dropping chainlint.pl itself.
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-06 22:52 ` Eric Sunshine
@ 2022-09-06 23:26 ` Jeff King
2022-11-21 4:02 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Jeff King @ 2022-09-06 23:26 UTC (permalink / raw)
To: Eric Sunshine
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Tue, Sep 06, 2022 at 06:52:26PM -0400, Eric Sunshine wrote:
> On Tue, Sep 6, 2022 at 6:35 PM Eric Wong <e@80x24.org> wrote:
> > Eric Sunshine via GitGitGadget <gitgitgadget@gmail.com> wrote:
> > > +unless ($Config{useithreads} && eval {
> > > + require threads; threads->import();
> >
> > Fwiw, the threads(3perl) manpage has this since 2014:
> >
> > The use of interpreter-based threads in perl is officially discouraged.
>
> Thanks for pointing this out. I did see that, but as no better
> alternative was offered, and since I did want this to work on Windows,
> I went with it.
I did some timings the other night, and I found something quite curious
with the thread stuff.
Here's a hyperfine run of "make" in the t/ directory before any of your
patches. It uses "prove" to do parallelism under the hood:
Benchmark 1: make
Time (mean ± σ): 68.895 s ± 0.840 s [User: 620.914 s, System: 428.498 s]
Range (min … max): 67.943 s … 69.531 s 3 runs
So that gives us a baseline. Now the first thing I wondered is how bad
it would be to just run chainlint.pl once per script. So I applied up to
that patch:
Benchmark 1: make
Time (mean ± σ): 71.289 s ± 1.302 s [User: 673.300 s, System: 417.912 s]
Range (min … max): 69.788 s … 72.120 s 3 runs
I was quite surprised that it made things slower! It's nice that we're
only calling it once per script instead of once per test, but it seems
the startup overhead of the script is really high.
And since in this mode we're only feeding it one script at a time, I
tried reverting the "chainlint.pl: validate test scripts in parallel"
commit. And indeed, now things are much faster:
Benchmark 1: make
Time (mean ± σ): 61.544 s ± 3.364 s [User: 556.486 s, System: 384.001 s]
Range (min … max): 57.660 s … 63.490 s 3 runs
And you can see the same thing just running chainlint by itself:
$ time perl chainlint.pl /dev/null
real 0m0.069s
user 0m0.042s
sys 0m0.020s
$ git revert HEAD^{/validate.test.scripts.in.parallel}
$ time perl chainlint.pl /dev/null
real 0m0.014s
user 0m0.010s
sys 0m0.004s
I didn't track down the source of the slowness. Maybe it's loading extra
modules, or maybe it's opening /proc/cpuinfo, or maybe it's the thread
setup. But it's a surprising slowdown.
Now of course your intent is to do a single repo-wide invocation. And
that is indeed a bit faster. Here it is without the parallel code:
Benchmark 1: make
Time (mean ± σ): 61.727 s ± 2.140 s [User: 507.712 s, System: 377.753 s]
Range (min … max): 59.259 s … 63.074 s 3 runs
The wall-clock time didn't improve much, but the CPU time did. Restoring
the parallel code does improve the wall-clock time a bit, but at the
cost of some extra CPU:
Benchmark 1: make
Time (mean ± σ): 59.029 s ± 2.851 s [User: 515.690 s, System: 380.369 s]
Range (min … max): 55.736 s … 60.693 s 3 runs
which makes sense. If I do a with/without of just "make test-chainlint",
the parallelism is buying a few seconds of wall-clock:
Benchmark 1: make test-chainlint
Time (mean ± σ): 900.1 ms ± 102.9 ms [User: 12049.8 ms, System: 79.7 ms]
Range (min … max): 704.2 ms … 994.4 ms 10 runs
Benchmark 1: make test-chainlint
Time (mean ± σ): 3.778 s ± 0.042 s [User: 3.756 s, System: 0.023 s]
Range (min … max): 3.706 s … 3.833 s 10 runs
I'm not sure what it all means. For Linux, I think I'd be just as happy
with a single non-parallelized test-chainlint run for each file. But
maybe on Windows the startup overhead is worse? OTOH, the whole test run
is so much worse there. One process per script is not going to be that
much in relative terms either way.
And if we did cache the results and avoid extra invocations via "make",
then we'd want all the parallelism to move to there anyway.
Maybe that gives you more food for thought about whether perl's "use
threads" is worth having.
-Peff
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-09-06 23:26 ` Jeff King
@ 2022-11-21 4:02 ` Eric Sunshine
2022-11-21 13:28 ` Ævar Arnfjörð Bjarmason
2022-11-21 18:04 ` Jeff King
0 siblings, 2 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-11-21 4:02 UTC (permalink / raw)
To: Jeff King
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Tue, Sep 6, 2022 at 7:27 PM Jeff King <peff@peff.net> wrote:
> I did some timings the other night, and I found something quite curious
> with the thread stuff.
>
> I was quite surprised that it made things slower! It's nice that we're
> only calling it once per script instead of once per test, but it seems
> the startup overhead of the script is really high.
>
> And since in this mode we're only feeding it one script at a time, I
> tried reverting the "chainlint.pl: validate test scripts in parallel"
> commit. And indeed, now things are much faster:
>
> Benchmark 1: make
> Time (mean ± σ): 61.544 s ± 3.364 s [User: 556.486 s, System: 384.001 s]
> Range (min … max): 57.660 s … 63.490 s 3 runs
>
> And you can see the same thing just running chainlint by itself:
>
> $ time perl chainlint.pl /dev/null
> real 0m0.069s
> user 0m0.042s
> sys 0m0.020s
>
> $ git revert HEAD^{/validate.test.scripts.in.parallel}
> $ time perl chainlint.pl /dev/null
> real 0m0.014s
> user 0m0.010s
> sys 0m0.004s
>
> I didn't track down the source of the slowness. Maybe it's loading extra
> modules, or maybe it's opening /proc/cpuinfo, or maybe it's the thread
> setup. But it's a surprising slowdown.
It is surprising, and unfortunate. Ditching "ithreads" would probably
be a good idea. (more on that below)
> Now of course your intent is to do a single repo-wide invocation. And
> that is indeed a bit faster. Here it is without the parallel code:
>
> Benchmark 1: make
> Time (mean ± σ): 61.727 s ± 2.140 s [User: 507.712 s, System: 377.753 s]
> Range (min … max): 59.259 s … 63.074 s 3 runs
>
> The wall-clock time didn't improve much, but the CPU time did. Restoring
> the parallel code does improve the wall-clock time a bit, but at the
> cost of some extra CPU:
>
> Benchmark 1: make
> Time (mean ± σ): 59.029 s ± 2.851 s [User: 515.690 s, System: 380.369 s]
> Range (min … max): 55.736 s … 60.693 s 3 runs
>
> which makes sense. If I do a with/without of just "make test-chainlint",
> the parallelism is buying a few seconds of wall-clock:
>
> Benchmark 1: make test-chainlint
> Time (mean ± σ): 900.1 ms ± 102.9 ms [User: 12049.8 ms, System: 79.7 ms]
> Range (min … max): 704.2 ms … 994.4 ms 10 runs
>
> Benchmark 1: make test-chainlint
> Time (mean ± σ): 3.778 s ± 0.042 s [User: 3.756 s, System: 0.023 s]
> Range (min … max): 3.706 s … 3.833 s 10 runs
>
> I'm not sure what it all means. For Linux, I think I'd be just as happy
> with a single non-parallelized test-chainlint run for each file. But
> maybe on Windows the startup overhead is worse? OTOH, the whole test run
> is so much worse there. One process per script is not going to be that
> much in relative terms either way.
Somehow Windows manages to be unbelievably slow no matter what. I
mentioned elsewhere (after you sent this) that I tested on a five or
six year old 8-core dual-boot machine. Booted to Linux, running a
single chainlint.pl invocation using all 8 cores to check all scripts
in the project took under 1 second walltime. The same machine booted
to Windows using all 8 cores took just under two minutes(!) walltime
for the single Perl invocation to check all scripts in the project.
So, at this point, I have no hope for making linting fast on Windows;
it seems to be a lost cause.
> And if we did cache the results and avoid extra invocations via "make",
> then we'd want all the parallelism to move to there anyway.
>
> Maybe that gives you more food for thought about whether perl's "use
> threads" is worth having.
I'm not especially happy about the significant overhead of "ithreads";
on my (old) machine, although it does improve perceived time
significantly, it eats up quite a bit of additional user-time. As
such, I would not be unhappy to see "ithreads" go away, especially
since fast linting on Windows seems unattainable (at least with Perl).
Overall, I think Ævar's plan to parallelize linting via "make" is
probably the way to go.
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 4:02 ` Eric Sunshine
@ 2022-11-21 13:28 ` Ævar Arnfjörð Bjarmason
2022-11-21 14:07 ` Eric Sunshine
2022-11-21 18:04 ` Jeff King
1 sibling, 1 reply; 131+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2022-11-21 13:28 UTC (permalink / raw)
To: Eric Sunshine
Cc: Jeff King, Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Sun, Nov 20 2022, Eric Sunshine wrote:
> On Tue, Sep 6, 2022 at 7:27 PM Jeff King <peff@peff.net> wrote:
>> I did some timings the other night, and I found something quite curious
>> with the thread stuff.
>>
>> I was quite surprised that it made things slower! It's nice that we're
>> only calling it once per script instead of once per test, but it seems
>> the startup overhead of the script is really high.
>>
>> And since in this mode we're only feeding it one script at a time, I
>> tried reverting the "chainlint.pl: validate test scripts in parallel"
>> commit. And indeed, now things are much faster:
>>
>> Benchmark 1: make
>> Time (mean ± σ): 61.544 s ± 3.364 s [User: 556.486 s, System: 384.001 s]
>> Range (min … max): 57.660 s … 63.490 s 3 runs
>>
>> And you can see the same thing just running chainlint by itself:
>>
>> $ time perl chainlint.pl /dev/null
>> real 0m0.069s
>> user 0m0.042s
>> sys 0m0.020s
>>
>> $ git revert HEAD^{/validate.test.scripts.in.parallel}
>> $ time perl chainlint.pl /dev/null
>> real 0m0.014s
>> user 0m0.010s
>> sys 0m0.004s
>>
>> I didn't track down the source of the slowness. Maybe it's loading extra
>> modules, or maybe it's opening /proc/cpuinfo, or maybe it's the thread
>> setup. But it's a surprising slowdown.
>
> It is surprising, and unfortunate. Ditching "ithreads" would probably
> be a good idea. (more on that below)
>
>> Now of course your intent is to do a single repo-wide invocation. And
>> that is indeed a bit faster. Here it is without the parallel code:
>>
>> Benchmark 1: make
>> Time (mean ± σ): 61.727 s ± 2.140 s [User: 507.712 s, System: 377.753 s]
>> Range (min … max): 59.259 s … 63.074 s 3 runs
>>
>> The wall-clock time didn't improve much, but the CPU time did. Restoring
>> the parallel code does improve the wall-clock time a bit, but at the
>> cost of some extra CPU:
>>
>> Benchmark 1: make
>> Time (mean ± σ): 59.029 s ± 2.851 s [User: 515.690 s, System: 380.369 s]
>> Range (min … max): 55.736 s … 60.693 s 3 runs
>>
>> which makes sense. If I do a with/without of just "make test-chainlint",
>> the parallelism is buying a few seconds of wall-clock:
>>
>> Benchmark 1: make test-chainlint
>> Time (mean ± σ): 900.1 ms ± 102.9 ms [User: 12049.8 ms, System: 79.7 ms]
>> Range (min … max): 704.2 ms … 994.4 ms 10 runs
>>
>> Benchmark 1: make test-chainlint
>> Time (mean ± σ): 3.778 s ± 0.042 s [User: 3.756 s, System: 0.023 s]
>> Range (min … max): 3.706 s … 3.833 s 10 runs
>>
>> I'm not sure what it all means. For Linux, I think I'd be just as happy
>> with a single non-parallelized test-chainlint run for each file. But
>> maybe on Windows the startup overhead is worse? OTOH, the whole test run
>> is so much worse there. One process per script is not going to be that
>> much in relative terms either way.
>
> Somehow Windows manages to be unbelievably slow no matter what. I
> mentioned elsewhere (after you sent this) that I tested on a five or
> six year old 8-core dual-boot machine. Booted to Linux, running a
> single chainlint.pl invocation using all 8 cores to check all scripts
> in the project took under 1 second walltime. The same machine booted
> to Windows using all 8 cores took just under two minutes(!) walltime
> for the single Perl invocation to check all scripts in the project.
>
> So, at this point, I have no hope for making linting fast on Windows;
> it seems to be a lost cause.
I'd be really interested in seeing e.g. the NYTProf output for that run,
compared with that on *nix (if you could upload the HTML versions of
both somewhere, even better).
Maybe "chainlint.pl" is doing something odd, but this goes against the
usual wisdom about what is and isn't slow in Perl on windows, as I
understand it.
> I.e. process start-up etc. is slow there, and I/O's a bit slower, but
once you're started up and e.g. slurping up all of those files & parsing
them you're just running "perl-native" code.
Which shouldn't be much slower at all. A perl compiled with ithreads is
(last I checked) around 10-20% slower, and the Windows version is always
compiled with that (it's needed for "fork" emulation).
But most *nix versions are compiled with that too, and certainly the one
you're using with "threads", so that's not the difference.
So I suspect something odd's going on...
>> And if we did cache the results and avoid extra invocations via "make",
>> then we'd want all the parallelism to move to there anyway.
>>
>> Maybe that gives you more food for thought about whether perl's "use
>> threads" is worth having.
>
> I'm not especially happy about the significant overhead of "ithreads";
> on my (old) machine, although it does improve perceived time
> significantly, it eats up quite a bit of additional user-time. As
> such, I would not be unhappy to see "ithreads" go away, especially
> since fast linting on Windows seems unattainable (at least with Perl).
>
> Overall, I think Ævar's plan to parallelize linting via "make" is
> probably the way to go.
Yeah, but that seems to me to be orthogonal to why it's this slow on
Windows, and if it is, that wouldn't help much, except for incremental
re-runs.
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 13:28 ` Ævar Arnfjörð Bjarmason
@ 2022-11-21 14:07 ` Eric Sunshine
2022-11-21 14:18 ` Ævar Arnfjörð Bjarmason
0 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine @ 2022-11-21 14:07 UTC (permalink / raw)
To: Ævar Arnfjörð Bjarmason
Cc: Jeff King, Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 8:32 AM Ævar Arnfjörð Bjarmason
<avarab@gmail.com> wrote:
> On Sun, Nov 20 2022, Eric Sunshine wrote:
> > Somehow Windows manages to be unbelievably slow no matter what. I
> > mentioned elsewhere (after you sent this) that I tested on a five or
> > six year old 8-core dual-boot machine. Booted to Linux, running a
> > single chainlint.pl invocation using all 8 cores to check all scripts
> > in the project took under 1 second walltime. The same machine booted
> > to Windows using all 8 cores took just under two minutes(!) walltime
> > for the single Perl invocation to check all scripts in the project.
>
> I'd be really interested in seeing e.g. the NYTProf output for that run,
> compared with that on *nix (if you could upload the HTML versions of
> both somewhere, even better).
Unfortunately, I no longer have access to that machine, or usable
Windows in general. Of course, someone else with access to a dual-boot
machine could generate such a report, but whether anyone will offer to
do so is a different matter.
> Maybe "chainlint.pl" is doing something odd, but this goes against the
> usual wisdom about what is and isn't slow in Perl on windows, as I
> understand it.
>
> I.e. process start-up etc. is slow there, and I/O's a bit slower, but
> once you're started up and e.g. slurping up all of those files & parsing
> them you're just running "perl-native" code.
>
> Which shouldn't be much slower at all. A perl compiled with ithreads is
> (last I checked) around 10-20% slower, and the Windows version is always
> compiled with that (it's needed for "fork" emulation).
>
> But most *nix versions are compiled with that too, and certainly the one
> you're using with "threads", so that's not the difference.
>
> So I suspect something odd's going on...
This is all my understanding, as well, which is why I was so surprised
by the difference in speed. Aside from suspecting Windows I/O as the
culprit, another obvious possible culprit would be whatever
mechanism/primitives "ithreads" is using on Windows for
locking/synchronizing and passing messages between threads. I wouldn't
be surprised to learn that those mechanisms/primitives have very high
overhead on that platform.
> > Overall, I think Ævar's plan to parallelize linting via "make" is
> > probably the way to go.
>
> Yeah, but that seems to me to be orthogonal to why it's this slow on
> Windows, and if it is, that wouldn't help much, except for incremental
> re-runs.
Oh, I didn't at all mean that `make` parallelism would be helpful on
Windows; I can't imagine that it ever would be (though I could once
again be wrong). What I meant was that `make` parallelism would be a
nice improvement and simplification (of sorts), in general,
considering that I've given up hope of ever seeing linting be speedy
on Windows.
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 14:07 ` Eric Sunshine
@ 2022-11-21 14:18 ` Ævar Arnfjörð Bjarmason
2022-11-21 14:48 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2022-11-21 14:18 UTC (permalink / raw)
To: Eric Sunshine
Cc: Jeff King, Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21 2022, Eric Sunshine wrote:
> On Mon, Nov 21, 2022 at 8:32 AM Ævar Arnfjörð Bjarmason
> <avarab@gmail.com> wrote:
>> On Sun, Nov 20 2022, Eric Sunshine wrote:
>> > Somehow Windows manages to be unbelievably slow no matter what. I
>> > mentioned elsewhere (after you sent this) that I tested on a five or
>> > six year old 8-core dual-boot machine. Booted to Linux, running a
>> > single chainlint.pl invocation using all 8 cores to check all scripts
>> > in the project took under 1 second walltime. The same machine booted
>> > to Windows using all 8 cores took just under two minutes(!) walltime
>> > for the single Perl invocation to check all scripts in the project.
>>
>> I'd be really interested in seeing e.g. the NYTProf output for that run,
>> compared with that on *nix (if you could upload the HTML versions of
>> both somewhere, even better).
>
> Unfortunately, I no longer have access to that machine, or usable
> Windows in general. Of course, someone else with access to a dual-boot
> machine could generate such a report, but whether anyone will offer to
> do so is a different matter.
:(
>> Maybe "chainlint.pl" is doing something odd, but this goes against the
>> usual wisdom about what is and isn't slow in Perl on windows, as I
>> understand it.
>>
>> I.e. process start-up etc. is slow there, and I/O's a bit slower, but
>> once you're started up and e.g. slurping up all of those files & parsing
>> them you're just running "perl-native" code.
>>
>> Which shouldn't be much slower at all. A perl compiled with ithreads is
>> (last I checked) around 10-20% slower, and the Windows version is always
>> compiled with that (it's needed for "fork" emulation).
>>
>> But most *nix versions are compiled with that too, and certainly the one
>> you're using with "threads", so that's not the difference.
>>
>> So I suspect something odd's going on...
>
> This is all my understanding, as well, which is why I was so surprised
> by the difference in speed. Aside from suspecting Windows I/O as the
> culprit, another obvious possible culprit would be whatever
> mechanism/primitives "ithreads" is using on Windows for
> locking/synchronizing and passing messages between threads. I wouldn't
> be surprised to learn that those mechanisms/primitives have very high
> overhead on that platform.
Yeah, that could be, but then...
>> > Overall, I think Ævar's plan to parallelize linting via "make" is
>> > probably the way to go.
>>
>> Yeah, but that seems to me to be orthogonal to why it's this slow on
>> Windows, and if it is, that wouldn't help much, except for incremental
>> re-runs.
>
> Oh, I didn't at all mean that `make` parallelism would be helpful on
> Windows; I can't imagine that it ever would be (though I could once
> again be wrong). What I meant was that `make` parallelism would be a
> nice improvement and simplification (of sorts), in general,
> considering that I've given up hope of ever seeing linting be speedy
> on Windows.
...that parallelism probably wouldn't be helpful, as it'll run into
another thing that's slow.
But just ditching the "ithreads" commit from chainlint.pl should make it
much faster: sequentially parsing all the files isn't that slow, and
since that avoids threads entirely, it should be much faster than the
threaded version.
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 14:18 ` Ævar Arnfjörð Bjarmason
@ 2022-11-21 14:48 ` Eric Sunshine
0 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-11-21 14:48 UTC (permalink / raw)
To: Ævar Arnfjörð Bjarmason
Cc: Jeff King, Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 9:20 AM Ævar Arnfjörð Bjarmason
<avarab@gmail.com> wrote:
> On Mon, Nov 21 2022, Eric Sunshine wrote:
> > Oh, I didn't at all mean that `make` parallelism would be helpful on
> > Windows; I can't imagine that it ever would be (though I could once
> > again be wrong). What I meant was that `make` parallelism would be a
> > nice improvement and simplification (of sorts), in general,
> > considering that I've given up hope of ever seeing linting be speedy
> > on Windows.
>
> But just ditching the "ithreads" commit from chainlint.pl should make it
> much faster, as sequentially parsing all the files isn't that slow, and
> as that won't use threads should be much faster then.
On my (old) machine (with spinning hard drive), `make test-chainlint`
with "ithreads" and warm filesystem cache takes about 3.8 seconds
walltime. Without "ithreads", it takes about 11.3 seconds. So, the
improvement in perceived time is significant. As such, I'm somewhat
hesitant to see "ithreads" dropped from chainlint.pl before `make`
parallelism is implemented. (I can easily see "drop ithreads" as the
final patch of a series which adds `make` parallelism.)
But perhaps I'm focusing too much on my own experience with my old
machine. Maybe linting without "ithreads" and without `make`
parallelism would be "fast enough" for developers using beefier modern
machines... (genuine question/thought since I don't have access to any
beefy modern hardware).
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 4:02 ` Eric Sunshine
2022-11-21 13:28 ` Ævar Arnfjörð Bjarmason
@ 2022-11-21 18:04 ` Jeff King
2022-11-21 18:47 ` Eric Sunshine
1 sibling, 1 reply; 131+ messages in thread
From: Jeff King @ 2022-11-21 18:04 UTC (permalink / raw)
To: Eric Sunshine
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Sun, Nov 20, 2022 at 11:02:54PM -0500, Eric Sunshine wrote:
> > And if we did cache the results and avoid extra invocations via "make",
> > then we'd want all the parallelism to move to there anyway.
> >
> > Maybe that gives you more food for thought about whether perl's "use
> > threads" is worth having.
>
> I'm not especially happy about the significant overhead of "ithreads";
> on my (old) machine, although it does improve perceived time
> significantly, it eats up quite a bit of additional user-time. As
> such, I would not be unhappy to see "ithreads" go away, especially
> since fast linting on Windows seems unattainable (at least with Perl).
>
> Overall, I think Ævar's plan to parallelize linting via "make" is
> probably the way to go.
TBH, I think just running the linter once per test script when the
script is run would be sufficient. That is one extra process per script,
but they are already shell scripts running a bunch of processes. You get
parallelism for free because you're already running the tests in
parallel. You lose out on "don't bother linting because the file hasn't
changed", but I'm not sure that's really worth the extra complexity
overall.
-Peff
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 18:04 ` Jeff King
@ 2022-11-21 18:47 ` Eric Sunshine
2022-11-21 18:50 ` Eric Sunshine
2022-11-21 18:52 ` Jeff King
0 siblings, 2 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-11-21 18:47 UTC (permalink / raw)
To: Jeff King
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 1:04 PM Jeff King <peff@peff.net> wrote:
> On Sun, Nov 20, 2022 at 11:02:54PM -0500, Eric Sunshine wrote:
> > Overall, I think Ævar's plan to parallelize linting via "make" is
> > probably the way to go.
>
> TBH, I think just running the linter once per test script when the
> script is run would be sufficient. That is one extra process per script,
> but they are already shell scripts running a bunch of processes. You get
> parallelism for free because you're already running the tests in
> parallel. You lose out on "don't bother linting because the file hasn't
> changed", but I'm not sure that's really worth the extra complexity
> overall.
Hmm, yes, that's appealing (especially since I've essentially given up
on making linting fast on Windows), and it wouldn't be hard to
implement. In fact, it's already implemented by 23a14f3016 (test-lib:
replace chainlint.sed with chainlint.pl, 2022-09-01); making it work
the way you describe would just involve dropping 69b9924b87
(t/Makefile: teach `make test` and `make prove` to run chainlint.pl,
2022-09-01) and 29fb2ec384 (chainlint.pl: validate test scripts in
parallel, 2022-09-01).
I think Ævar's use-case for `make` parallelization was to speed up
git-bisect runs. But thinking about it now, the likelihood of "lint"
problems cropping up during a git-bisect run is effectively nil, in
which case setting GIT_TEST_CHAIN_LINT=1 should be a perfectly
appropriate way to take linting out of the equation when bisecting.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 18:47 ` Eric Sunshine
@ 2022-11-21 18:50 ` Eric Sunshine
2022-11-21 18:52 ` Jeff King
1 sibling, 0 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-11-21 18:50 UTC (permalink / raw)
To: Jeff King
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 1:47 PM Eric Sunshine <sunshine@sunshineco.com> wrote:
> I think Ævar's use-case for `make` parallelization was to speed up
> git-bisect runs. But thinking about it now, the likelihood of "lint"
> problems cropping up during a git-bisect run is effectively nil, in
> which case setting GIT_TEST_CHAIN_LINT=1 should be a perfectly
> appropriate way to take linting out of the equation when bisecting.
I mean "GIT_TEST_CHAIN_LINT=0", of course.
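As a toy illustration of the kind of switch being discussed (the function
and its defaulting behavior here are invented for this sketch; test-lib.sh's
actual plumbing differs):

```shell
# Pretend lint step honoring a GIT_TEST_CHAIN_LINT-style toggle;
# linting defaults to "on" when the variable is unset, as the
# discussion implies. Purely illustrative, not test-lib.sh code.
maybe_lint() {
	if test "${GIT_TEST_CHAIN_LINT:-1}" = 0
	then
		echo "lint skipped"
	else
		echo "lint run"
	fi
}

GIT_TEST_CHAIN_LINT=0
maybe_lint
```

With the variable set to 0, the lint step is skipped entirely, which is the
point of disabling it while bisecting.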
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 18:47 ` Eric Sunshine
2022-11-21 18:50 ` Eric Sunshine
@ 2022-11-21 18:52 ` Jeff King
2022-11-21 19:00 ` Eric Sunshine
1 sibling, 1 reply; 131+ messages in thread
From: Jeff King @ 2022-11-21 18:52 UTC (permalink / raw)
To: Eric Sunshine
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 01:47:42PM -0500, Eric Sunshine wrote:
> On Mon, Nov 21, 2022 at 1:04 PM Jeff King <peff@peff.net> wrote:
> > On Sun, Nov 20, 2022 at 11:02:54PM -0500, Eric Sunshine wrote:
> > > Overall, I think Ævar's plan to parallelize linting via "make" is
> > > probably the way to go.
> >
> > TBH, I think just running the linter once per test script when the
> > script is run would be sufficient. That is one extra process per script,
> > but they are already shell scripts running a bunch of processes. You get
> > parallelism for free because you're already running the tests in
> > parallel. You lose out on "don't bother linting because the file hasn't
> > changed", but I'm not sure that's really worth the extra complexity
> > overall.
>
> Hmm, yes, that's appealing (especially since I've essentially given up
> on making linting fast on Windows), and it wouldn't be hard to
> implement. In fact, it's already implemented by 23a14f3016 (test-lib:
> replace chainlint.sed with chainlint.pl, 2022-09-01); making it work
> the way you describe would just involve dropping 69b9924b87
> (t/Makefile: teach `make test` and `make prove` to run chainlint.pl,
> 2022-09-01) and 29fb2ec384 (chainlint.pl: validate test scripts in
> parallel, 2022-09-01).
Yes, that was one of the modes I timed in my original email. :)
> I think Ævar's use-case for `make` parallelization was to speed up
> git-bisect runs. But thinking about it now, the likelihood of "lint"
> problems cropping up during a git-bisect run is effectively nil, in
> which case setting GIT_TEST_CHAIN_LINT=1 should be a perfectly
> appropriate way to take linting out of the equation when bisecting.
Yes. It's also dumb to run a straight "make test" while bisecting in the
first place, because you are going to run a zillion tests that aren't
relevant to your bisection. Bisecting on "cd t && ./test-that-fails" is
faster, at which point you're only running the one lint process (and if
it really bothers you, you can disable chain lint as you suggest).
-Peff
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 18:52 ` Jeff King
@ 2022-11-21 19:00 ` Eric Sunshine
2022-11-21 19:28 ` Jeff King
2022-11-22 0:11 ` Ævar Arnfjörð Bjarmason
0 siblings, 2 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-11-21 19:00 UTC (permalink / raw)
To: Jeff King
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 1:52 PM Jeff King <peff@peff.net> wrote:
> On Mon, Nov 21, 2022 at 01:47:42PM -0500, Eric Sunshine wrote:
> > I think Ævar's use-case for `make` parallelization was to speed up
> > git-bisect runs. But thinking about it now, the likelihood of "lint"
> > problems cropping up during a git-bisect run is effectively nil, in
> > which case setting GIT_TEST_CHAIN_LINT=1 should be a perfectly
> > appropriate way to take linting out of the equation when bisecting.
>
> Yes. It's also dumb to run a straight "make test" while bisecting in the
> first place, because you are going to run a zillion tests that aren't
> relevant to your bisection. Bisecting on "cd t && ./test-that-fails" is
> faster, at which point you're only running the one lint process (and if
> it really bothers you, you can disable chain lint as you suggest).
I think I misspoke. Dredging up old memories, I think Ævar's use-case
is that he now runs:
git rebase -i --exec 'make test' ...
in order to ensure that the entire test suite passes for _every_ patch
in a series. (This is due to him having missed a runtime breakage by
only running "make test" after the final patch in a series was
applied, when the breakage was only temporary -- added by one patch,
but resolved by some other later patch.)
Even so, GIT_TEST_CHAIN_LINT=0 should be appropriate here too.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 19:00 ` Eric Sunshine
@ 2022-11-21 19:28 ` Jeff King
2022-11-22 0:11 ` Ævar Arnfjörð Bjarmason
1 sibling, 0 replies; 131+ messages in thread
From: Jeff King @ 2022-11-21 19:28 UTC (permalink / raw)
To: Eric Sunshine
Cc: Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21, 2022 at 02:00:41PM -0500, Eric Sunshine wrote:
> On Mon, Nov 21, 2022 at 1:52 PM Jeff King <peff@peff.net> wrote:
> > On Mon, Nov 21, 2022 at 01:47:42PM -0500, Eric Sunshine wrote:
> > > I think Ævar's use-case for `make` parallelization was to speed up
> > > git-bisect runs. But thinking about it now, the likelihood of "lint"
> > > problems cropping up during a git-bisect run is effectively nil, in
> > > which case setting GIT_TEST_CHAIN_LINT=1 should be a perfectly
> > > appropriate way to take linting out of the equation when bisecting.
> >
> > Yes. It's also dumb to run a straight "make test" while bisecting in the
> > first place, because you are going to run a zillion tests that aren't
> > relevant to your bisection. Bisecting on "cd t && ./test-that-fails" is
> > faster, at which point you're only running the one lint process (and if
> > it really bothers you, you can disable chain lint as you suggest).
>
> I think I misspoke. Dredging up old memories, I think Ævar's use-case
> is that he now runs:
>
> git rebase -i --exec 'make test' ...
>
> in order to ensure that the entire test suite passes for _every_ patch
> in a series. (This is due to him having missed a runtime breakage by
> only running "make test" after the final patch in a series was
> applied, when the breakage was only temporary -- added by one patch,
> but resolved by some other later patch.)
Yeah, I do that sometimes, too, especially when heavy refactoring is
involved.
> Even so, GIT_TEST_CHAIN_LINT=0 should be appropriate here too.
Agreed. But also, my original point stands. If you are running 10 CPU
minutes of tests, then a few CPU seconds of linting is not really that
important.
-Peff
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 06/18] chainlint.pl: validate test scripts in parallel
2022-11-21 19:00 ` Eric Sunshine
2022-11-21 19:28 ` Jeff King
@ 2022-11-22 0:11 ` Ævar Arnfjörð Bjarmason
1 sibling, 0 replies; 131+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2022-11-22 0:11 UTC (permalink / raw)
To: Eric Sunshine
Cc: Jeff King, Eric Wong, Eric Sunshine via GitGitGadget, Git List,
Elijah Newren, Fabian Stelzer, Johannes Schindelin
On Mon, Nov 21 2022, Eric Sunshine wrote:
> On Mon, Nov 21, 2022 at 1:52 PM Jeff King <peff@peff.net> wrote:
>> On Mon, Nov 21, 2022 at 01:47:42PM -0500, Eric Sunshine wrote:
>> > I think Ævar's use-case for `make` parallelization was to speed up
>> > git-bisect runs. But thinking about it now, the likelihood of "lint"
>> > problems cropping up during a git-bisect run is effectively nil, in
>> > which case setting GIT_TEST_CHAIN_LINT=1 should be a perfectly
>> > appropriate way to take linting out of the equation when bisecting.
>>
>> Yes. It's also dumb to run a straight "make test" while bisecting in the
>> first place, because you are going to run a zillion tests that aren't
>> relevant to your bisection. Bisecting on "cd t && ./test-that-fails" is
>> faster, at which point you're only running the one lint process (and if
>> it really bothers you, you can disable chain lint as you suggest).
>
> I think I misspoke. Dredging up old memories, I think Ævar's use-case
> is that he now runs:
>
> git rebase -i --exec 'make test' ...
>
> in order to ensure that the entire test suite passes for _every_ patch
> in a series. (This is due to him having missed a runtime breakage by
> only running "make test" after the final patch in a series was
> applied, when the breakage was only temporary -- added by one patch,
> but resolved by some other later patch.)
>
> Even so, GIT_TEST_CHAIN_LINT=0 should be appropriate here too.
I'd like to make "make" fast in terms of avoiding its own overhead
before it gets to actual work mainly because of that use-case, but it
helps in general. E.g. just as switching branches doesn't recompile files
that don't need it, we shouldn't re-run test checks we don't need either.
For t/ this is:
- Running chainlint.pl on the file, even if it didn't change
- Ditto check-non-portable-shell.pl
- Ditto "non-portable file name(s)" check
- Ditto "test -x" on all test files
I have a branch where these are all checked using dependencies instead,
e.g. we run a "test -x" on t0071-sort.sh and create a
".build/check-executable/t0071-sort.sh.ok" if that passed, so we don't
need to shell out in the common case.
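The stamp-file idea can be sketched in plain shell (the directory layout
mirrors the ".build/check-executable" path mentioned above, but the setup
and loop here are purely illustrative, not the branch's actual Makefile
rules):

```shell
# Illustrative stand-in for a test script; the real files live in t/.
mkdir -p t .build/check-executable
printf '#!/bin/sh\n' >t/t0071-sort.sh
chmod +x t/t0071-sort.sh

# Re-run the "test -x" check only when no ".ok" marker is newer than the
# script, mimicking what a make dependency on the stamp file would buy us.
for script in t/t0071-sort.sh
do
	ok=.build/check-executable/${script##*/}.ok
	if ! test "$ok" -nt "$script"
	then
		test -x "$script" && touch "$ok"
	fi
done
```

On a second run the "-nt" test finds an up-to-date stamp and the check is
skipped, which is the incremental behavior being described.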
The results of that are as follows (and this is a best case, picking a
test which is itself cheap):
$ git hyperfine -L rev @{u},HEAD~,HEAD -s 'make CFLAGS=-O3' 'make test T=t0071-sort.sh' -w 1
Benchmark 1: make test T=t0071-sort.sh' in '@{u}
Time (mean ± σ): 1.168 s ± 0.074 s [User: 1.534 s, System: 0.082 s]
Range (min … max): 1.096 s … 1.316 s 10 runs
Benchmark 2: make test T=t0071-sort.sh' in 'HEAD~
Time (mean ± σ): 719.1 ms ± 46.1 ms [User: 910.6 ms, System: 79.7 ms]
Range (min … max): 682.0 ms … 828.2 ms 10 runs
Benchmark 3: make test T=t0071-sort.sh' in 'HEAD
Time (mean ± σ): 685.0 ms ± 34.2 ms [User: 645.0 ms, System: 56.8 ms]
Range (min … max): 657.6 ms … 773.6 ms 10 runs
Summary
'make test T=t0071-sort.sh' in 'HEAD' ran
1.05 ± 0.09 times faster than 'make test T=t0071-sort.sh' in 'HEAD~'
1.71 ± 0.14 times faster than 'make test T=t0071-sort.sh' in '@{u}'
The @{u} is "master", HEAD~ is "incremental, but without chainlint.pl",
and "HEAD" is where it's all incremental.
It's very WIP-quality, but I pushed the chainlint.pl part of it as a POC
just now; I did the others a while ago:
https://github.com/avar/git/tree/avar/t-Makefile-break-T-to-file-association
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 07/18] chainlint.pl: don't require `return|exit|continue` to end with `&&`
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (5 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 06/18] chainlint.pl: validate test scripts in parallel Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 08/18] t/Makefile: apply chainlint.pl to existing self-tests Eric Sunshine via GitGitGadget
` (11 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
In order to check for &&-chain breakage, each time TestParser encounters
a new command, it checks whether the previous command ends with `&&`,
and -- with a couple of exceptions -- signals breakage if it does not. The
first exception is that a command may validly end with `||`, which is
commonly employed as `command || return 1` at the very end of a loop
body to terminate the loop early. The second is that piping one
command's output with `|` to another command does not constitute a
&&-chain break (the exit status of the pipe is the exit status of the
final command in the pipe).
However, it turns out that there are a few additional cases found in the
wild in which it is likely safe for `&&` to be missing even when other
commands follow. For instance:
while {condition-1}
do
test {condition-2} || return 1 # or `exit 1` within a subshell
more-commands
done
while {condition-1}
do
test {condition-2} || continue
more-commands
done
Such cases indicate deliberate thought about failure modes by the test
author, thus flagging them as breaking the &&-chain is not helpful.
Therefore, take these special cases into consideration when checking for
&&-chain breakage.
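A runnable toy version of the second pattern above (the path names are
invented for illustration, loosely echoing the self-test that accompanies
this patch):

```shell
# Loop body whose "&& continue" lines deliberately sidestep the &&-chain;
# each one is an intentional early escape, not an oversight.
filter_paths() {
	for path in "foobar/non-note.txt" "deadbeef" "real-note.txt"
	do
		test "$path" = "foobar/non-note.txt" && continue
		test "$path" = "deadbeef" && continue
		echo "kept $path"
	done
}
filter_paths
```

Only "real-note.txt" survives the filters; flagging the `&& continue`
lines as broken chains would penalize exactly this deliberate structure.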
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 20 ++++++++++++++++++--
t/chainlint/chain-break-continue.expect | 12 ++++++++++++
t/chainlint/chain-break-continue.test | 13 +++++++++++++
t/chainlint/chain-break-return-exit.expect | 4 ++++
t/chainlint/chain-break-return-exit.test | 5 +++++
t/chainlint/return-loop.expect | 5 +++++
t/chainlint/return-loop.test | 6 ++++++
7 files changed, 63 insertions(+), 2 deletions(-)
create mode 100644 t/chainlint/chain-break-continue.expect
create mode 100644 t/chainlint/chain-break-continue.test
create mode 100644 t/chainlint/chain-break-return-exit.expect
create mode 100644 t/chainlint/chain-break-return-exit.test
create mode 100644 t/chainlint/return-loop.expect
create mode 100644 t/chainlint/return-loop.test
diff --git a/t/chainlint.pl b/t/chainlint.pl
index 898573a9100..31c444067ce 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -473,13 +473,29 @@ sub ends_with {
return 1;
}
+sub match_ending {
+ my ($tokens, $endings) = @_;
+ for my $needles (@$endings) {
+ next if @$tokens < scalar(grep {$_ ne "\n"} @$needles);
+ return 1 if ends_with($tokens, $needles);
+ }
+ return undef;
+}
+
+my @safe_endings = (
+ [qr/^(?:&&|\|\||\|)$/],
+ [qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/],
+ [qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/, qr/^;$/],
+ [qr/^(?:exit|return|continue)$/],
+ [qr/^(?:exit|return|continue)$/, qr/^;$/]);
+
sub accumulate {
my ($self, $tokens, $cmd) = @_;
goto DONE unless @$tokens;
goto DONE if @$cmd == 1 && $$cmd[0] eq "\n";
- # did previous command end with "&&", "||", "|"?
- goto DONE if ends_with($tokens, [qr/^(?:&&|\|\||\|)$/]);
+ # did previous command end with "&&", "|", "|| return" or similar?
+ goto DONE if match_ending($tokens, \@safe_endings);
# flag missing "&&" at end of previous command
my $n = find_non_nl($tokens);
diff --git a/t/chainlint/chain-break-continue.expect b/t/chainlint/chain-break-continue.expect
new file mode 100644
index 00000000000..47a34577100
--- /dev/null
+++ b/t/chainlint/chain-break-continue.expect
@@ -0,0 +1,12 @@
+git ls-tree --name-only -r refs/notes/many_notes |
+while read path
+do
+ test "$path" = "foobar/non-note.txt" && continue
+ test "$path" = "deadbeef" && continue
+ test "$path" = "de/adbeef" && continue
+
+ if test $(expr length "$path") -ne $hexsz
+ then
+ return 1
+ fi
+done
diff --git a/t/chainlint/chain-break-continue.test b/t/chainlint/chain-break-continue.test
new file mode 100644
index 00000000000..f0af71d8bd9
--- /dev/null
+++ b/t/chainlint/chain-break-continue.test
@@ -0,0 +1,13 @@
+git ls-tree --name-only -r refs/notes/many_notes |
+while read path
+do
+# LINT: broken &&-chain okay if explicit "continue"
+ test "$path" = "foobar/non-note.txt" && continue
+ test "$path" = "deadbeef" && continue
+ test "$path" = "de/adbeef" && continue
+
+ if test $(expr length "$path") -ne $hexsz
+ then
+ return 1
+ fi
+done
diff --git a/t/chainlint/chain-break-return-exit.expect b/t/chainlint/chain-break-return-exit.expect
new file mode 100644
index 00000000000..dba292ee89b
--- /dev/null
+++ b/t/chainlint/chain-break-return-exit.expect
@@ -0,0 +1,4 @@
+for i in 1 2 3 4 ; do
+ git checkout main -b $i || return $?
+ test_commit $i $i $i tag$i || return $?
+done
diff --git a/t/chainlint/chain-break-return-exit.test b/t/chainlint/chain-break-return-exit.test
new file mode 100644
index 00000000000..e2b059933aa
--- /dev/null
+++ b/t/chainlint/chain-break-return-exit.test
@@ -0,0 +1,5 @@
+for i in 1 2 3 4 ; do
+# LINT: broken &&-chain okay if explicit "return $?" signals failure
+ git checkout main -b $i || return $?
+ test_commit $i $i $i tag$i || return $?
+done
diff --git a/t/chainlint/return-loop.expect b/t/chainlint/return-loop.expect
new file mode 100644
index 00000000000..cfc0549befe
--- /dev/null
+++ b/t/chainlint/return-loop.expect
@@ -0,0 +1,5 @@
+while test $i -lt $((num - 5))
+do
+ git notes add -m "notes for commit$i" HEAD~$i || return 1
+ i=$((i + 1))
+done
diff --git a/t/chainlint/return-loop.test b/t/chainlint/return-loop.test
new file mode 100644
index 00000000000..f90b1713005
--- /dev/null
+++ b/t/chainlint/return-loop.test
@@ -0,0 +1,6 @@
+while test $i -lt $((num - 5))
+do
+# LINT: "|| return {n}" valid loop escape outside subshell; no "&&" needed
+ git notes add -m "notes for commit$i" HEAD~$i || return 1
+ i=$((i + 1))
+done
--
gitgitgadget
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCH 08/18] t/Makefile: apply chainlint.pl to existing self-tests
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (6 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 07/18] chainlint.pl: don't require `return|exit|continue` to end with `&&` Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 09/18] chainlint.pl: don't require `&` background command to end with `&&` Eric Sunshine via GitGitGadget
` (10 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Now that chainlint.pl is functional, take advantage of the existing
chainlint self-tests to validate its operation. (While at it, stop
validating chainlint.sed against the self-tests since it will soon be
retired.)
Due to chainlint.sed implementation limitations leaking into the
self-test "expect" files, a few of them require minor adjustment to make
them compatible with chainlint.pl which does not share those
limitations.
First, because `sed` does not provide any sort of real recursion,
chainlint.sed only emulates recursion into subshells, and each level of
recursion leads to a multiplicative increase in complexity of the `sed`
rules. To avoid substantial complexity, chainlint.sed, therefore, only
emulates subshell recursion one level deep. Any subshell deeper than
that is passed through as-is, which means that &&-chains are not checked
in deeper subshells. chainlint.pl, on the other hand, employs a proper
recursive descent parser, thus checks subshells to any depth and
correctly flags broken &&-chains in deep subshells.
Second, due to sed's line-oriented nature, chainlint.sed, by necessity,
folds multi-line quoted strings into a single line. chainlint.pl, on the
other hand, employs a proper lexical analyzer which preserves quoted
strings as-is, including embedded newlines.
Furthermore, the outputs of chainlint.sed and chainlint.pl do not match
precisely in terms of whitespace. However, since the purpose of the
self-checks is to verify that the ?!AMP?! annotations are being
correctly added, minor whitespace differences are immaterial. For this
reason, rather than adjusting whitespace in all existing self-test
"expect" files to match the new linter's output, the `check-chainlint`
target ignores whitespace differences. Since `diff -w` is not POSIX,
`check-chainlint` attempts to employ `git diff -w`, and only falls back
to non-POSIX `diff -w` (and `-u`) if `git diff` is not available.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/Makefile | 29 +++++++++++++++----
t/chainlint/block.expect | 2 +-
t/chainlint/here-doc-multi-line-string.expect | 3 +-
t/chainlint/multi-line-string.expect | 11 +++++--
t/chainlint/nested-subshell.expect | 2 +-
t/chainlint/t7900-subtree.expect | 13 +++++++--
6 files changed, 46 insertions(+), 14 deletions(-)
diff --git a/t/Makefile b/t/Makefile
index 1c80c0c79a0..11f276774ea 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -38,7 +38,7 @@ T = $(sort $(wildcard t[0-9][0-9][0-9][0-9]-*.sh))
THELPERS = $(sort $(filter-out $(T),$(wildcard *.sh)))
TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
-CHAINLINT = sed -f chainlint.sed
+CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
all: $(DEFAULT_TEST_TARGET)
@@ -73,10 +73,29 @@ clean-chainlint:
check-chainlint:
@mkdir -p '$(CHAINLINTTMP_SQ)' && \
- sed -e '/^# LINT: /d' $(patsubst %,chainlint/%.test,$(CHAINLINTTESTS)) >'$(CHAINLINTTMP_SQ)'/tests && \
- sed -e '/^[ ]*$$/d' $(patsubst %,chainlint/%.expect,$(CHAINLINTTESTS)) >'$(CHAINLINTTMP_SQ)'/expect && \
- $(CHAINLINT) '$(CHAINLINTTMP_SQ)'/tests | grep -v '^[ ]*$$' >'$(CHAINLINTTMP_SQ)'/actual && \
- diff -u '$(CHAINLINTTMP_SQ)'/expect '$(CHAINLINTTMP_SQ)'/actual
+ for i in $(CHAINLINTTESTS); do \
+ echo "test_expect_success '$$i' '" && \
+ sed -e '/^# LINT: /d' chainlint/$$i.test && \
+ echo "'"; \
+ done >'$(CHAINLINTTMP_SQ)'/tests && \
+ { \
+ echo "# chainlint: $(CHAINLINTTMP_SQ)/tests" && \
+ for i in $(CHAINLINTTESTS); do \
+ echo "# chainlint: $$i" && \
+ sed -e '/^[ ]*$$/d' chainlint/$$i.expect; \
+ done \
+ } >'$(CHAINLINTTMP_SQ)'/expect && \
+ $(CHAINLINT) --emit-all '$(CHAINLINTTMP_SQ)'/tests | \
+ grep -v '^[ ]*$$' >'$(CHAINLINTTMP_SQ)'/actual && \
+ if test -f ../GIT-BUILD-OPTIONS; then \
+ . ../GIT-BUILD-OPTIONS; \
+ fi && \
+ if test -x ../git$$X; then \
+ DIFFW="../git$$X --no-pager diff -w --no-index"; \
+ else \
+ DIFFW="diff -w -u"; \
+ fi && \
+ $$DIFFW '$(CHAINLINTTMP_SQ)'/expect '$(CHAINLINTTMP_SQ)'/actual
test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax \
test-lint-filenames
diff --git a/t/chainlint/block.expect b/t/chainlint/block.expect
index da60257ebc4..37dbf7d95fa 100644
--- a/t/chainlint/block.expect
+++ b/t/chainlint/block.expect
@@ -1,7 +1,7 @@
(
foo &&
{
- echo a
+ echo a ?!AMP?!
echo b
} &&
bar &&
diff --git a/t/chainlint/here-doc-multi-line-string.expect b/t/chainlint/here-doc-multi-line-string.expect
index 2578191ca8a..be64b26869a 100644
--- a/t/chainlint/here-doc-multi-line-string.expect
+++ b/t/chainlint/here-doc-multi-line-string.expect
@@ -1,4 +1,5 @@
(
- cat <<-TXT && echo "multi-line string" ?!AMP?!
+ cat <<-TXT && echo "multi-line
+ string" ?!AMP?!
bap
)
diff --git a/t/chainlint/multi-line-string.expect b/t/chainlint/multi-line-string.expect
index ab0dadf748e..27ff95218e7 100644
--- a/t/chainlint/multi-line-string.expect
+++ b/t/chainlint/multi-line-string.expect
@@ -1,9 +1,14 @@
(
- x="line 1 line 2 line 3" &&
- y="line 1 line2" ?!AMP?!
+ x="line 1
+ line 2
+ line 3" &&
+ y="line 1
+ line2" ?!AMP?!
foobar
) &&
(
- echo "xyz" "abc def ghi" &&
+ echo "xyz" "abc
+ def
+ ghi" &&
barfoo
)
diff --git a/t/chainlint/nested-subshell.expect b/t/chainlint/nested-subshell.expect
index 41a48adaa2b..02e0a9f1bb5 100644
--- a/t/chainlint/nested-subshell.expect
+++ b/t/chainlint/nested-subshell.expect
@@ -6,7 +6,7 @@
) >file &&
cd foo &&
(
- echo a
+ echo a ?!AMP?!
echo b
) >file
)
diff --git a/t/chainlint/t7900-subtree.expect b/t/chainlint/t7900-subtree.expect
index 1cccc7bf7e1..69167da2f27 100644
--- a/t/chainlint/t7900-subtree.expect
+++ b/t/chainlint/t7900-subtree.expect
@@ -1,10 +1,17 @@
(
- chks="sub1sub2sub3sub4" &&
+ chks="sub1
+sub2
+sub3
+sub4" &&
chks_sub=$(cat <<TXT | sed "s,^,sub dir/,"
) &&
- chkms="main-sub1main-sub2main-sub3main-sub4" &&
+ chkms="main-sub1
+main-sub2
+main-sub3
+main-sub4" &&
chkms_sub=$(cat <<TXT | sed "s,^,sub dir/,"
) &&
subfiles=$(git ls-files) &&
- check_equal "$subfiles" "$chkms$chks"
+ check_equal "$subfiles" "$chkms
+$chks"
)
--
gitgitgadget
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCH 09/18] chainlint.pl: don't require `&` background command to end with `&&`
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (7 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 08/18] t/Makefile: apply chainlint.pl to existing self-tests Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 10/18] chainlint.pl: don't flag broken &&-chain if `$?` handled explicitly Eric Sunshine via GitGitGadget
` (9 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
The exit status of the `&` asynchronous operator which starts a command
in the background is unconditionally zero, and the few places in the
test scripts which launch commands asynchronously are not interested in
the exit status of the `&` operator (though they often capture the
background command's PID). As such, there is little value in complaining
about broken &&-chain for a command launched in the background, and
doing so would only make busy-work for test authors. Therefore, take
this special case into account when checking for &&-chain breakage.
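The zero exit status of "&" is easy to demonstrate (a minimal sketch;
"false" stands in for any background command):

```shell
# Even though the command itself fails, launching it with "&" reports
# success, so requiring "&&" after it would add nothing.
false &
echo "status after '&': $?"
wait
```

POSIX specifies that the exit status of an asynchronous list is zero, so
`$?` here is 0 regardless of what the background command eventually does.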
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 2 +-
t/chainlint/chain-break-background.expect | 9 +++++++++
t/chainlint/chain-break-background.test | 10 ++++++++++
3 files changed, 20 insertions(+), 1 deletion(-)
create mode 100644 t/chainlint/chain-break-background.expect
create mode 100644 t/chainlint/chain-break-background.test
diff --git a/t/chainlint.pl b/t/chainlint.pl
index 31c444067ce..ba3fcb0c8e6 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -483,7 +483,7 @@ sub match_ending {
}
my @safe_endings = (
- [qr/^(?:&&|\|\||\|)$/],
+ [qr/^(?:&&|\|\||\||&)$/],
[qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/],
[qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/, qr/^;$/],
[qr/^(?:exit|return|continue)$/],
diff --git a/t/chainlint/chain-break-background.expect b/t/chainlint/chain-break-background.expect
new file mode 100644
index 00000000000..28f9114f42d
--- /dev/null
+++ b/t/chainlint/chain-break-background.expect
@@ -0,0 +1,9 @@
+JGIT_DAEMON_PID= &&
+git init --bare empty.git &&
+> empty.git/git-daemon-export-ok &&
+mkfifo jgit_daemon_output &&
+{
+ jgit daemon --port="$JGIT_DAEMON_PORT" . > jgit_daemon_output &
+ JGIT_DAEMON_PID=$!
+} &&
+test_expect_code 2 git ls-remote --exit-code git://localhost:$JGIT_DAEMON_PORT/empty.git
diff --git a/t/chainlint/chain-break-background.test b/t/chainlint/chain-break-background.test
new file mode 100644
index 00000000000..e10f656b055
--- /dev/null
+++ b/t/chainlint/chain-break-background.test
@@ -0,0 +1,10 @@
+JGIT_DAEMON_PID= &&
+git init --bare empty.git &&
+>empty.git/git-daemon-export-ok &&
+mkfifo jgit_daemon_output &&
+{
+# LINT: exit status of "&" is always 0 so &&-chaining immaterial
+ jgit daemon --port="$JGIT_DAEMON_PORT" . >jgit_daemon_output &
+ JGIT_DAEMON_PID=$!
+} &&
+test_expect_code 2 git ls-remote --exit-code git://localhost:$JGIT_DAEMON_PORT/empty.git
--
gitgitgadget
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCH 10/18] chainlint.pl: don't flag broken &&-chain if `$?` handled explicitly
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (8 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 09/18] chainlint.pl: don't require `&` background command to end with `&&` Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 11/18] chainlint.pl: don't flag broken &&-chain if failure indicated explicitly Eric Sunshine via GitGitGadget
` (8 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
There are cases in which tests capture and check a command's exit code
explicitly without employing test_expect_code(). They do so by
intentionally breaking the &&-chain since it would be impossible to
capture "$?" in the failing case if the `status=$?` assignment were part
of the &&-chain. Since such constructs check the exit code manually,
their &&-chain breakage is legitimate and safe, and thus should not be
flagged. Therefore, stop flagging &&-chain breakage in such cases.
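The exempted construct can be sketched as follows (a minimal standalone illustration, not code taken from the patch):

```shell
# The assignment deliberately breaks the &&-chain: if "&&" linked the
# two commands, a failure would skip the "status=$?" capture entirely.
{ sh -c 'exit 3'; status=$?; }
echo "captured status: $status"
```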
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 6 ++++++
t/chainlint/chain-break-status.expect | 9 +++++++++
t/chainlint/chain-break-status.test | 11 +++++++++++
3 files changed, 26 insertions(+)
create mode 100644 t/chainlint/chain-break-status.expect
create mode 100644 t/chainlint/chain-break-status.test
diff --git a/t/chainlint.pl b/t/chainlint.pl
index ba3fcb0c8e6..14e1db3519a 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -497,6 +497,12 @@ sub accumulate {
# did previous command end with "&&", "|", "|| return" or similar?
goto DONE if match_ending($tokens, \@safe_endings);
+ # if this command handles "$?" specially, then okay for previous
+ # command to be missing "&&"
+ for my $token (@$cmd) {
+ goto DONE if $token =~ /\$\?/;
+ }
+
# flag missing "&&" at end of previous command
my $n = find_non_nl($tokens);
splice(@$tokens, $n + 1, 0, '?!AMP?!') unless $n < 0;
diff --git a/t/chainlint/chain-break-status.expect b/t/chainlint/chain-break-status.expect
new file mode 100644
index 00000000000..f4bada94632
--- /dev/null
+++ b/t/chainlint/chain-break-status.expect
@@ -0,0 +1,9 @@
+OUT=$(( ( large_git ; echo $? 1 >& 3 ) | : ) 3 >& 1) &&
+test_match_signal 13 "$OUT" &&
+
+{ test-tool sigchain > actual ; ret=$? ; } &&
+{
+ test_match_signal 15 "$ret" ||
+ test "$ret" = 3
+} &&
+test_cmp expect actual
diff --git a/t/chainlint/chain-break-status.test b/t/chainlint/chain-break-status.test
new file mode 100644
index 00000000000..a6602a7b99c
--- /dev/null
+++ b/t/chainlint/chain-break-status.test
@@ -0,0 +1,11 @@
+# LINT: broken &&-chain okay if next command handles "$?" explicitly
+OUT=$( ((large_git; echo $? 1>&3) | :) 3>&1 ) &&
+test_match_signal 13 "$OUT" &&
+
+# LINT: broken &&-chain okay if next command handles "$?" explicitly
+{ test-tool sigchain >actual; ret=$?; } &&
+{
+ test_match_signal 15 "$ret" ||
+ test "$ret" = 3
+} &&
+test_cmp expect actual
--
gitgitgadget
* [PATCH 11/18] chainlint.pl: don't flag broken &&-chain if failure indicated explicitly
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (9 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 10/18] chainlint.pl: don't flag broken &&-chain if `$?` handled explicitly Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 12/18] chainlint.pl: complain about loops lacking explicit failure handling Eric Sunshine via GitGitGadget
` (7 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
There are quite a few tests which print an error message and then
explicitly signal failure with `false`, `return 1`, or `exit 1` as the
final command in an `if` branch. In these cases, the tests don't bother
maintaining the &&-chain between `echo` and the explicit "test failed"
indicator. Since such constructs are manually signaling failure, their
&&-chain breakage is legitimate and safe -- both for the command
immediately preceding `false`, `return`, or `exit`, as well as for all
preceding commands in the `if` branch. Therefore, stop flagging &&-chain
breakage in these sorts of cases.
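The exempted shape can be sketched like this (hypothetical `check` helper, mirroring the chain-break-false fixture below):

```shell
# Inside the "if" branch the commands are not &&-chained, but the
# trailing "false" signals failure explicitly, so the breakage is safe.
check () {
	if test "$1" != ok
	then
		echo "it did not work"
		echo "failed"
		false
	fi
}
check bad
echo "check exited with: $?"
```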
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 8 ++++++++
t/chainlint/chain-break-false.expect | 9 +++++++++
t/chainlint/chain-break-false.test | 10 ++++++++++
t/chainlint/chain-break-return-exit.expect | 15 +++++++++++++++
t/chainlint/chain-break-return-exit.test | 18 ++++++++++++++++++
t/chainlint/if-in-loop.expect | 2 +-
t/chainlint/if-in-loop.test | 2 +-
7 files changed, 62 insertions(+), 2 deletions(-)
create mode 100644 t/chainlint/chain-break-false.expect
create mode 100644 t/chainlint/chain-break-false.test
diff --git a/t/chainlint.pl b/t/chainlint.pl
index 14e1db3519a..a76a09ecf5e 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -503,6 +503,14 @@ sub accumulate {
goto DONE if $token =~ /\$\?/;
}
+ # if this command is "false", "return 1", or "exit 1" (which signal
+ # failure explicitly), then okay for all preceding commands to be
+ # missing "&&"
+ if ($$cmd[0] =~ /^(?:false|return|exit)$/) {
+ @$tokens = grep(!/^\?!AMP\?!$/, @$tokens);
+ goto DONE;
+ }
+
# flag missing "&&" at end of previous command
my $n = find_non_nl($tokens);
splice(@$tokens, $n + 1, 0, '?!AMP?!') unless $n < 0;
diff --git a/t/chainlint/chain-break-false.expect b/t/chainlint/chain-break-false.expect
new file mode 100644
index 00000000000..989766fb856
--- /dev/null
+++ b/t/chainlint/chain-break-false.expect
@@ -0,0 +1,9 @@
+if condition not satisfied
+then
+ echo it did not work...
+ echo failed!
+ false
+else
+ echo it went okay ?!AMP?!
+ congratulate user
+fi
diff --git a/t/chainlint/chain-break-false.test b/t/chainlint/chain-break-false.test
new file mode 100644
index 00000000000..a5aaff8c8a4
--- /dev/null
+++ b/t/chainlint/chain-break-false.test
@@ -0,0 +1,10 @@
+# LINT: broken &&-chain okay if explicit "false" signals failure
+if condition not satisfied
+then
+ echo it did not work...
+ echo failed!
+ false
+else
+ echo it went okay
+ congratulate user
+fi
diff --git a/t/chainlint/chain-break-return-exit.expect b/t/chainlint/chain-break-return-exit.expect
index dba292ee89b..1732d221c32 100644
--- a/t/chainlint/chain-break-return-exit.expect
+++ b/t/chainlint/chain-break-return-exit.expect
@@ -1,3 +1,18 @@
+case "$(git ls-files)" in
+one ) echo pass one ;;
+* ) echo bad one ; return 1 ;;
+esac &&
+(
+ case "$(git ls-files)" in
+ two ) echo pass two ;;
+ * ) echo bad two ; exit 1 ;;
+esac
+) &&
+case "$(git ls-files)" in
+dir/two"$LF"one ) echo pass both ;;
+* ) echo bad ; return 1 ;;
+esac &&
+
for i in 1 2 3 4 ; do
git checkout main -b $i || return $?
test_commit $i $i $i tag$i || return $?
diff --git a/t/chainlint/chain-break-return-exit.test b/t/chainlint/chain-break-return-exit.test
index e2b059933aa..46542edf881 100644
--- a/t/chainlint/chain-break-return-exit.test
+++ b/t/chainlint/chain-break-return-exit.test
@@ -1,3 +1,21 @@
+case "$(git ls-files)" in
+one) echo pass one ;;
+# LINT: broken &&-chain okay if explicit "return 1" signals failure
+*) echo bad one; return 1 ;;
+esac &&
+(
+ case "$(git ls-files)" in
+ two) echo pass two ;;
+# LINT: broken &&-chain okay if explicit "exit 1" signals failure
+ *) echo bad two; exit 1 ;;
+ esac
+) &&
+case "$(git ls-files)" in
+dir/two"$LF"one) echo pass both ;;
+# LINT: broken &&-chain okay if explicit "return 1" signals failure
+*) echo bad; return 1 ;;
+esac &&
+
for i in 1 2 3 4 ; do
# LINT: broken &&-chain okay if explicit "return $?" signals failure
git checkout main -b $i || return $?
diff --git a/t/chainlint/if-in-loop.expect b/t/chainlint/if-in-loop.expect
index 03b82a3e58c..d6514ae7492 100644
--- a/t/chainlint/if-in-loop.expect
+++ b/t/chainlint/if-in-loop.expect
@@ -3,7 +3,7 @@
do
if false
then
- echo "err" ?!AMP?!
+ echo "err"
exit 1
fi ?!AMP?!
foo
diff --git a/t/chainlint/if-in-loop.test b/t/chainlint/if-in-loop.test
index f0cf19cfada..90c23976fec 100644
--- a/t/chainlint/if-in-loop.test
+++ b/t/chainlint/if-in-loop.test
@@ -3,7 +3,7 @@
do
if false
then
-# LINT: missing "&&" on "echo"
+# LINT: missing "&&" on "echo" okay since "exit 1" signals error explicitly
echo "err"
exit 1
# LINT: missing "&&" on "fi"
--
gitgitgadget
* [PATCH 12/18] chainlint.pl: complain about loops lacking explicit failure handling
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (10 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 11/18] chainlint.pl: don't flag broken &&-chain if failure indicated explicitly Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 13/18] chainlint.pl: allow `|| echo` to signal failure upstream of a pipe Eric Sunshine via GitGitGadget
` (6 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Shell `for` and `while` loops do not terminate automatically just
because a command fails within the loop body. Instead, the loop
continues to iterate and eventually returns the exit status of the final
command of the final iteration, which may not be the command that
failed, so it is possible for failures to go undetected. Consequently,
it is important for test authors to explicitly handle failure within the
loop body by terminating the loop manually upon failure. This can be
done by returning a non-zero exit code from within the loop body
(e.g. `|| return 1`), by exiting (e.g. `|| exit 1`) if the loop is
within a subshell, or by manually checking `$?` and taking appropriate
action. Therefore, add logic to detect and complain about loops which
lack an explicit `return`, `exit`, or `$?` check.
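The hazard and its remedy are easy to demonstrate standalone (an illustrative sketch, not code from the patch):

```shell
# Unguarded: the middle iteration fails, yet the loop runs to
# completion and reports the status of the *final* iteration (0).
for i in 1 2 3
do
	test "$i" -ne 2
done
echo "unguarded loop status: $?"

# Guarded: "|| return 1" terminates the loop (here, the enclosing
# function) at the first failure and propagates it.
loop () {
	for i in 1 2 3
	do
		test "$i" -ne 2 || return 1
	done
}
loop
echo "guarded loop status: $?"
```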
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 11 ++++++
t/chainlint/complex-if-in-cuddled-loop.expect | 2 +-
t/chainlint/for-loop.expect | 4 +--
t/chainlint/loop-detect-failure.expect | 15 ++++++++
t/chainlint/loop-detect-failure.test | 17 +++++++++
t/chainlint/loop-detect-status.expect | 18 ++++++++++
t/chainlint/loop-detect-status.test | 19 ++++++++++
t/chainlint/loop-in-if.expect | 2 +-
t/chainlint/nested-loop-detect-failure.expect | 31 ++++++++++++++++
t/chainlint/nested-loop-detect-failure.test | 35 +++++++++++++++++++
t/chainlint/semicolon.expect | 2 +-
t/chainlint/while-loop.expect | 4 +--
12 files changed, 153 insertions(+), 7 deletions(-)
create mode 100644 t/chainlint/loop-detect-failure.expect
create mode 100644 t/chainlint/loop-detect-failure.test
create mode 100644 t/chainlint/loop-detect-status.expect
create mode 100644 t/chainlint/loop-detect-status.test
create mode 100644 t/chainlint/nested-loop-detect-failure.expect
create mode 100644 t/chainlint/nested-loop-detect-failure.test
diff --git a/t/chainlint.pl b/t/chainlint.pl
index a76a09ecf5e..674b3ddf696 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -482,6 +482,17 @@ sub match_ending {
return undef;
}
+sub parse_loop_body {
+ my $self = shift @_;
+ my @tokens = $self->SUPER::parse_loop_body(@_);
+ # did loop signal failure via "|| return" or "|| exit"?
+ return @tokens if !@tokens || grep(/^(?:return|exit|\$\?)$/, @tokens);
+ # flag missing "return/exit" handling explicit failure in loop body
+ my $n = find_non_nl(\@tokens);
+ splice(@tokens, $n + 1, 0, '?!LOOP?!');
+ return @tokens;
+}
+
my @safe_endings = (
[qr/^(?:&&|\|\||\||&)$/],
[qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/],
diff --git a/t/chainlint/complex-if-in-cuddled-loop.expect b/t/chainlint/complex-if-in-cuddled-loop.expect
index 2fca1834095..dac2d0fd1d9 100644
--- a/t/chainlint/complex-if-in-cuddled-loop.expect
+++ b/t/chainlint/complex-if-in-cuddled-loop.expect
@@ -4,6 +4,6 @@
:
else
echo >file
- fi
+ fi ?!LOOP?!
done) &&
test ! -f file
diff --git a/t/chainlint/for-loop.expect b/t/chainlint/for-loop.expect
index 6671b8cd842..a5810c9bddd 100644
--- a/t/chainlint/for-loop.expect
+++ b/t/chainlint/for-loop.expect
@@ -2,10 +2,10 @@
for i in a b c
do
echo $i ?!AMP?!
- cat <<-EOF
+ cat <<-EOF ?!LOOP?!
done ?!AMP?!
for i in a b c; do
echo $i &&
- cat $i
+ cat $i ?!LOOP?!
done
)
diff --git a/t/chainlint/loop-detect-failure.expect b/t/chainlint/loop-detect-failure.expect
new file mode 100644
index 00000000000..a66025c39d4
--- /dev/null
+++ b/t/chainlint/loop-detect-failure.expect
@@ -0,0 +1,15 @@
+git init r1 &&
+for n in 1 2 3 4 5
+do
+ echo "This is file: $n" > r1/file.$n &&
+ git -C r1 add file.$n &&
+ git -C r1 commit -m "$n" || return 1
+done &&
+
+git init r2 &&
+for n in 1000 10000
+do
+ printf "%"$n"s" X > r2/large.$n &&
+ git -C r2 add large.$n &&
+ git -C r2 commit -m "$n" ?!LOOP?!
+done
diff --git a/t/chainlint/loop-detect-failure.test b/t/chainlint/loop-detect-failure.test
new file mode 100644
index 00000000000..b9791cc802e
--- /dev/null
+++ b/t/chainlint/loop-detect-failure.test
@@ -0,0 +1,17 @@
+git init r1 &&
+# LINT: loop handles failure explicitly with "|| return 1"
+for n in 1 2 3 4 5
+do
+ echo "This is file: $n" > r1/file.$n &&
+ git -C r1 add file.$n &&
+ git -C r1 commit -m "$n" || return 1
+done &&
+
+git init r2 &&
+# LINT: loop fails to handle failure explicitly with "|| return 1"
+for n in 1000 10000
+do
+ printf "%"$n"s" X > r2/large.$n &&
+ git -C r2 add large.$n &&
+ git -C r2 commit -m "$n"
+done
diff --git a/t/chainlint/loop-detect-status.expect b/t/chainlint/loop-detect-status.expect
new file mode 100644
index 00000000000..0ad23bb35e4
--- /dev/null
+++ b/t/chainlint/loop-detect-status.expect
@@ -0,0 +1,18 @@
+( while test $i -le $blobcount
+do
+ printf "Generating blob $i/$blobcount\r" >& 2 &&
+ printf "blob\nmark :$i\ndata $blobsize\n" &&
+
+ printf "%-${blobsize}s" $i &&
+ echo "M 100644 :$i $i" >> commit &&
+ i=$(($i+1)) ||
+ echo $? > exit-status
+done &&
+echo "commit refs/heads/main" &&
+echo "author A U Thor <author@email.com> 123456789 +0000" &&
+echo "committer C O Mitter <committer@email.com> 123456789 +0000" &&
+echo "data 5" &&
+echo ">2gb" &&
+cat commit ) |
+git fast-import --big-file-threshold=2 &&
+test ! -f exit-status
diff --git a/t/chainlint/loop-detect-status.test b/t/chainlint/loop-detect-status.test
new file mode 100644
index 00000000000..1c6c23cfc9e
--- /dev/null
+++ b/t/chainlint/loop-detect-status.test
@@ -0,0 +1,19 @@
+# LINT: "$?" handled explicitly within loop body
+(while test $i -le $blobcount
+ do
+ printf "Generating blob $i/$blobcount\r" >&2 &&
+ printf "blob\nmark :$i\ndata $blobsize\n" &&
+ #test-tool genrandom $i $blobsize &&
+ printf "%-${blobsize}s" $i &&
+ echo "M 100644 :$i $i" >> commit &&
+ i=$(($i+1)) ||
+ echo $? > exit-status
+ done &&
+ echo "commit refs/heads/main" &&
+ echo "author A U Thor <author@email.com> 123456789 +0000" &&
+ echo "committer C O Mitter <committer@email.com> 123456789 +0000" &&
+ echo "data 5" &&
+ echo ">2gb" &&
+ cat commit) |
+git fast-import --big-file-threshold=2 &&
+test ! -f exit-status
diff --git a/t/chainlint/loop-in-if.expect b/t/chainlint/loop-in-if.expect
index e1be42376c5..6c5d6e5b243 100644
--- a/t/chainlint/loop-in-if.expect
+++ b/t/chainlint/loop-in-if.expect
@@ -4,7 +4,7 @@
while true
do
echo "pop" ?!AMP?!
- echo "glup"
+ echo "glup" ?!LOOP?!
done ?!AMP?!
foo
fi ?!AMP?!
diff --git a/t/chainlint/nested-loop-detect-failure.expect b/t/chainlint/nested-loop-detect-failure.expect
new file mode 100644
index 00000000000..4793a0e8e12
--- /dev/null
+++ b/t/chainlint/nested-loop-detect-failure.expect
@@ -0,0 +1,31 @@
+for i in 0 1 2 3 4 5 6 7 8 9 ;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9 ;
+ do
+ echo "$i$j" > "path$i$j" ?!LOOP?!
+ done ?!LOOP?!
+done &&
+
+for i in 0 1 2 3 4 5 6 7 8 9 ;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9 ;
+ do
+ echo "$i$j" > "path$i$j" || return 1
+ done
+done &&
+
+for i in 0 1 2 3 4 5 6 7 8 9 ;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9 ;
+ do
+ echo "$i$j" > "path$i$j" ?!LOOP?!
+ done || return 1
+done &&
+
+for i in 0 1 2 3 4 5 6 7 8 9 ;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9 ;
+ do
+ echo "$i$j" > "path$i$j" || return 1
+ done || return 1
+done
diff --git a/t/chainlint/nested-loop-detect-failure.test b/t/chainlint/nested-loop-detect-failure.test
new file mode 100644
index 00000000000..e6f0c1acfb8
--- /dev/null
+++ b/t/chainlint/nested-loop-detect-failure.test
@@ -0,0 +1,35 @@
+# LINT: neither loop handles failure explicitly with "|| return 1"
+for i in 0 1 2 3 4 5 6 7 8 9;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9;
+ do
+ echo "$i$j" >"path$i$j"
+ done
+done &&
+
+# LINT: inner loop handles failure explicitly with "|| return 1"
+for i in 0 1 2 3 4 5 6 7 8 9;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9;
+ do
+ echo "$i$j" >"path$i$j" || return 1
+ done
+done &&
+
+# LINT: outer loop handles failure explicitly with "|| return 1"
+for i in 0 1 2 3 4 5 6 7 8 9;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9;
+ do
+ echo "$i$j" >"path$i$j"
+ done || return 1
+done &&
+
+# LINT: inner & outer loops handle failure explicitly with "|| return 1"
+for i in 0 1 2 3 4 5 6 7 8 9;
+do
+ for j in 0 1 2 3 4 5 6 7 8 9;
+ do
+ echo "$i$j" >"path$i$j" || return 1
+ done || return 1
+done
diff --git a/t/chainlint/semicolon.expect b/t/chainlint/semicolon.expect
index ed0b3707ae9..3aa2259f36c 100644
--- a/t/chainlint/semicolon.expect
+++ b/t/chainlint/semicolon.expect
@@ -15,5 +15,5 @@
) &&
(cd foo &&
for i in a b c; do
- echo;
+ echo; ?!LOOP?!
done)
diff --git a/t/chainlint/while-loop.expect b/t/chainlint/while-loop.expect
index 0d3a9b3d128..f272aa21fee 100644
--- a/t/chainlint/while-loop.expect
+++ b/t/chainlint/while-loop.expect
@@ -2,10 +2,10 @@
while true
do
echo foo ?!AMP?!
- cat <<-EOF
+ cat <<-EOF ?!LOOP?!
done ?!AMP?!
while true; do
echo foo &&
- cat bar
+ cat bar ?!LOOP?!
done
)
--
gitgitgadget
* [PATCH 13/18] chainlint.pl: allow `|| echo` to signal failure upstream of a pipe
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (11 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 12/18] chainlint.pl: complain about loops lacking explicit failure handling Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 14/18] t/chainlint: add more chainlint.pl self-tests Eric Sunshine via GitGitGadget
` (5 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
The use of `|| return` (or `|| exit`) to signal failure within a loop
isn't effective when the loop is upstream of a pipe since the pipe
swallows all upstream exit codes and returns only the exit code of the
final command in the pipeline.
To work around this limitation, tests may adopt an alternative strategy
of signaling failure by emitting text which would never be emitted in
the non-failing case. For instance:
while condition
do
command1 &&
command2 ||
echo "impossible text"
done |
sort >actual &&
Such usage indicates deliberate thought about failure cases by the test
author, so flagging such loops as missing `|| return` (or `|| exit`) is
not helpful. Therefore, take this case into consideration when checking
for explicit loop termination.
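Both halves of the argument are easy to see in a standalone sketch (hypothetical commands, not from the patch):

```shell
# A pipe swallows upstream exit codes: the pipeline's status is that
# of "sort", not of the failing subshell.
(exit 1) | sort >/dev/null
echo "pipeline status: $?"

# Workaround: emit text that can never appear on success, then check
# the pipe's output for it downstream.
for i in 1 2 3
do
	test "$i" -ne 2 ||
	echo "impossible text"
done | sort >out
grep -q "impossible text" out && echo "failure detected"
```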
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.pl | 3 +++
t/chainlint/loop-upstream-pipe.expect | 10 ++++++++++
t/chainlint/loop-upstream-pipe.test | 11 +++++++++++
3 files changed, 24 insertions(+)
create mode 100644 t/chainlint/loop-upstream-pipe.expect
create mode 100644 t/chainlint/loop-upstream-pipe.test
diff --git a/t/chainlint.pl b/t/chainlint.pl
index 674b3ddf696..386999ce65d 100755
--- a/t/chainlint.pl
+++ b/t/chainlint.pl
@@ -487,6 +487,9 @@ sub parse_loop_body {
my @tokens = $self->SUPER::parse_loop_body(@_);
# did loop signal failure via "|| return" or "|| exit"?
return @tokens if !@tokens || grep(/^(?:return|exit|\$\?)$/, @tokens);
+ # did loop upstream of a pipe signal failure via "|| echo 'impossible
+ # text'" as the final command in the loop body?
+ return @tokens if ends_with(\@tokens, [qr/^\|\|$/, "\n", qr/^echo$/, qr/^.+$/]);
# flag missing "return/exit" handling explicit failure in loop body
my $n = find_non_nl(\@tokens);
splice(@tokens, $n + 1, 0, '?!LOOP?!');
diff --git a/t/chainlint/loop-upstream-pipe.expect b/t/chainlint/loop-upstream-pipe.expect
new file mode 100644
index 00000000000..0b82ecc4b96
--- /dev/null
+++ b/t/chainlint/loop-upstream-pipe.expect
@@ -0,0 +1,10 @@
+(
+ git rev-list --objects --no-object-names base..loose |
+ while read oid
+ do
+ path="$objdir/$(test_oid_to_path "$oid")" &&
+ printf "%s %d\n" "$oid" "$(test-tool chmtime --get "$path")" ||
+ echo "object list generation failed for $oid"
+ done |
+ sort -k1
+) >expect &&
diff --git a/t/chainlint/loop-upstream-pipe.test b/t/chainlint/loop-upstream-pipe.test
new file mode 100644
index 00000000000..efb77da897c
--- /dev/null
+++ b/t/chainlint/loop-upstream-pipe.test
@@ -0,0 +1,11 @@
+(
+ git rev-list --objects --no-object-names base..loose |
+ while read oid
+ do
+# LINT: "|| echo" signals failure in loop upstream of a pipe
+ path="$objdir/$(test_oid_to_path "$oid")" &&
+ printf "%s %d\n" "$oid" "$(test-tool chmtime --get "$path")" ||
+ echo "object list generation failed for $oid"
+ done |
+ sort -k1
+) >expect &&
--
gitgitgadget
* [PATCH 14/18] t/chainlint: add more chainlint.pl self-tests
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (12 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 13/18] chainlint.pl: allow `|| echo` to signal failure upstream of a pipe Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 15/18] test-lib: retire "lint harder" optimization hack Eric Sunshine via GitGitGadget
` (4 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
During the development of chainlint.pl, numerous new self-tests were
created to verify correct functioning beyond the checks already
represented by the existing self-tests. The new checks fall into several
categories:
* behavior of the lexical analyzer for complex cases, such as line
splicing, token pasting, entering and exiting string contexts inside
and outside of test script bodies; for instance:
test_expect_success 'title' '
x=$(echo "something" |
sed -e '\''s/\\/\\\\/g'\'' -e '\''s/[[/.*^$]/\\&/g'\''
'
* behavior of the parser for all compound grammatical constructs, such
as `if...fi`, `case...esac`, `while...done`, `{...}`, etc., and for
other legal shell grammatical constructs not covered by existing
chainlint.sed self-tests, as well as complex cases, such as:
OUT=$( ((large_git 1>&3) | :) 3>&1 ) &&
* detection of problems, such as &&-chain breakage, from top-level to
any depth since the existing self-tests do not cover any top-level
context and only cover subshells one level deep due to limitations of
chainlint.sed
* address blind spots in chainlint.sed (such as not detecting a broken
&&-chain on a one-line for-loop in a subshell[1]) which chainlint.pl
correctly detects
* real-world cases which tripped up chainlint.pl during its development
[1]: https://lore.kernel.org/git/dce35a47012fecc6edc11c68e91dbb485c5bc36f.1661663880.git.gitgitgadget@gmail.com/
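For readers unfamiliar with the quoting idiom shown in the first bullet above: `'\''` closes the single-quoted string, emits a backslash-escaped literal quote, and reopens the string (illustrative sketch):

```shell
# Each '\'' sequence is: close quote, escaped literal quote, reopen.
msg='it'\''s a quote'
echo "$msg"
```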
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint/blank-line-before-esac.expect | 18 +++++++++++
t/chainlint/blank-line-before-esac.test | 19 +++++++++++
t/chainlint/block.expect | 13 +++++++-
t/chainlint/block.test | 15 ++++++++-
t/chainlint/chained-block.expect | 9 ++++++
t/chainlint/chained-block.test | 11 +++++++
t/chainlint/chained-subshell.expect | 10 ++++++
t/chainlint/chained-subshell.test | 13 ++++++++
.../command-substitution-subsubshell.expect | 2 ++
.../command-substitution-subsubshell.test | 3 ++
t/chainlint/double-here-doc.expect | 2 ++
t/chainlint/double-here-doc.test | 12 +++++++
t/chainlint/dqstring-line-splice.expect | 3 ++
t/chainlint/dqstring-line-splice.test | 7 ++++
t/chainlint/dqstring-no-interpolate.expect | 11 +++++++
t/chainlint/dqstring-no-interpolate.test | 15 +++++++++
t/chainlint/empty-here-doc.expect | 3 ++
t/chainlint/empty-here-doc.test | 5 +++
t/chainlint/exclamation.expect | 4 +++
t/chainlint/exclamation.test | 8 +++++
t/chainlint/for-loop-abbreviated.expect | 5 +++
t/chainlint/for-loop-abbreviated.test | 6 ++++
t/chainlint/function.expect | 11 +++++++
t/chainlint/function.test | 13 ++++++++
t/chainlint/here-doc-indent-operator.expect | 5 +++
t/chainlint/here-doc-indent-operator.test | 13 ++++++++
t/chainlint/if-condition-split.expect | 7 ++++
t/chainlint/if-condition-split.test | 8 +++++
t/chainlint/one-liner-for-loop.expect | 9 ++++++
t/chainlint/one-liner-for-loop.test | 10 ++++++
t/chainlint/sqstring-in-sqstring.expect | 4 +++
t/chainlint/sqstring-in-sqstring.test | 5 +++
t/chainlint/token-pasting.expect | 27 ++++++++++++++++
t/chainlint/token-pasting.test | 32 +++++++++++++++++++
34 files changed, 336 insertions(+), 2 deletions(-)
create mode 100644 t/chainlint/blank-line-before-esac.expect
create mode 100644 t/chainlint/blank-line-before-esac.test
create mode 100644 t/chainlint/chained-block.expect
create mode 100644 t/chainlint/chained-block.test
create mode 100644 t/chainlint/chained-subshell.expect
create mode 100644 t/chainlint/chained-subshell.test
create mode 100644 t/chainlint/command-substitution-subsubshell.expect
create mode 100644 t/chainlint/command-substitution-subsubshell.test
create mode 100644 t/chainlint/double-here-doc.expect
create mode 100644 t/chainlint/double-here-doc.test
create mode 100644 t/chainlint/dqstring-line-splice.expect
create mode 100644 t/chainlint/dqstring-line-splice.test
create mode 100644 t/chainlint/dqstring-no-interpolate.expect
create mode 100644 t/chainlint/dqstring-no-interpolate.test
create mode 100644 t/chainlint/empty-here-doc.expect
create mode 100644 t/chainlint/empty-here-doc.test
create mode 100644 t/chainlint/exclamation.expect
create mode 100644 t/chainlint/exclamation.test
create mode 100644 t/chainlint/for-loop-abbreviated.expect
create mode 100644 t/chainlint/for-loop-abbreviated.test
create mode 100644 t/chainlint/function.expect
create mode 100644 t/chainlint/function.test
create mode 100644 t/chainlint/here-doc-indent-operator.expect
create mode 100644 t/chainlint/here-doc-indent-operator.test
create mode 100644 t/chainlint/if-condition-split.expect
create mode 100644 t/chainlint/if-condition-split.test
create mode 100644 t/chainlint/one-liner-for-loop.expect
create mode 100644 t/chainlint/one-liner-for-loop.test
create mode 100644 t/chainlint/sqstring-in-sqstring.expect
create mode 100644 t/chainlint/sqstring-in-sqstring.test
create mode 100644 t/chainlint/token-pasting.expect
create mode 100644 t/chainlint/token-pasting.test
diff --git a/t/chainlint/blank-line-before-esac.expect b/t/chainlint/blank-line-before-esac.expect
new file mode 100644
index 00000000000..48ed4eb1246
--- /dev/null
+++ b/t/chainlint/blank-line-before-esac.expect
@@ -0,0 +1,18 @@
+test_done ( ) {
+ case "$test_failure" in
+ 0 )
+ test_at_end_hook_
+
+ exit 0 ;;
+
+ * )
+ if test $test_external_has_tap -eq 0
+ then
+ say_color error "# failed $test_failure among $msg"
+ say "1..$test_count"
+ fi
+
+ exit 1 ;;
+
+ esac
+}
diff --git a/t/chainlint/blank-line-before-esac.test b/t/chainlint/blank-line-before-esac.test
new file mode 100644
index 00000000000..cecccad19f5
--- /dev/null
+++ b/t/chainlint/blank-line-before-esac.test
@@ -0,0 +1,19 @@
+# LINT: blank line before "esac"
+test_done () {
+ case "$test_failure" in
+ 0)
+ test_at_end_hook_
+
+ exit 0 ;;
+
+ *)
+ if test $test_external_has_tap -eq 0
+ then
+ say_color error "# failed $test_failure among $msg"
+ say "1..$test_count"
+ fi
+
+ exit 1 ;;
+
+ esac
+}
diff --git a/t/chainlint/block.expect b/t/chainlint/block.expect
index 37dbf7d95fa..a3bcea492a9 100644
--- a/t/chainlint/block.expect
+++ b/t/chainlint/block.expect
@@ -9,4 +9,15 @@
echo c
} ?!AMP?!
baz
-)
+) &&
+
+{
+ echo a ; ?!AMP?! echo b
+} &&
+{ echo a ; ?!AMP?! echo b ; } &&
+
+{
+ echo "${var}9" &&
+ echo "done"
+} &&
+finis
diff --git a/t/chainlint/block.test b/t/chainlint/block.test
index 0a82fd579f6..4ab69a4afc4 100644
--- a/t/chainlint/block.test
+++ b/t/chainlint/block.test
@@ -11,4 +11,17 @@
echo c
}
baz
-)
+) &&
+
+# LINT: ";" not allowed in place of "&&"
+{
+ echo a; echo b
+} &&
+{ echo a; echo b; } &&
+
+# LINT: "}" inside string not mistaken as end of block
+{
+ echo "${var}9" &&
+ echo "done"
+} &&
+finis
diff --git a/t/chainlint/chained-block.expect b/t/chainlint/chained-block.expect
new file mode 100644
index 00000000000..574cdceb071
--- /dev/null
+++ b/t/chainlint/chained-block.expect
@@ -0,0 +1,9 @@
+echo nobody home && {
+ test the doohicky ?!AMP?!
+ right now
+} &&
+
+GIT_EXTERNAL_DIFF=echo git diff | {
+ read path oldfile oldhex oldmode newfile newhex newmode &&
+ test "z$oh" = "z$oldhex"
+}
diff --git a/t/chainlint/chained-block.test b/t/chainlint/chained-block.test
new file mode 100644
index 00000000000..86f81ece639
--- /dev/null
+++ b/t/chainlint/chained-block.test
@@ -0,0 +1,11 @@
+# LINT: start of block chained to preceding command
+echo nobody home && {
+ test the doohicky
+ right now
+} &&
+
+# LINT: preceding command pipes to block on same line
+GIT_EXTERNAL_DIFF=echo git diff | {
+ read path oldfile oldhex oldmode newfile newhex newmode &&
+ test "z$oh" = "z$oldhex"
+}
diff --git a/t/chainlint/chained-subshell.expect b/t/chainlint/chained-subshell.expect
new file mode 100644
index 00000000000..af0369d3285
--- /dev/null
+++ b/t/chainlint/chained-subshell.expect
@@ -0,0 +1,10 @@
+mkdir sub && (
+ cd sub &&
+ foo the bar ?!AMP?!
+ nuff said
+) &&
+
+cut "-d " -f actual | ( read s1 s2 s3 &&
+test -f $s1 ?!AMP?!
+test $(cat $s2) = tree2path1 &&
+test $(cat $s3) = tree3path1 )
diff --git a/t/chainlint/chained-subshell.test b/t/chainlint/chained-subshell.test
new file mode 100644
index 00000000000..4ff6ddd8cbd
--- /dev/null
+++ b/t/chainlint/chained-subshell.test
@@ -0,0 +1,13 @@
+# LINT: start of subshell chained to preceding command
+mkdir sub && (
+ cd sub &&
+ foo the bar
+ nuff said
+) &&
+
+# LINT: preceding command pipes to subshell on same line
+cut "-d " -f actual | (read s1 s2 s3 &&
+test -f $s1
+test $(cat $s2) = tree2path1 &&
+# LINT: closing subshell ")" correctly detected on same line as "$(...)"
+test $(cat $s3) = tree3path1)
diff --git a/t/chainlint/command-substitution-subsubshell.expect b/t/chainlint/command-substitution-subsubshell.expect
new file mode 100644
index 00000000000..ab2f79e8457
--- /dev/null
+++ b/t/chainlint/command-substitution-subsubshell.expect
@@ -0,0 +1,2 @@
+OUT=$(( ( large_git 1 >& 3 ) | : ) 3 >& 1) &&
+test_match_signal 13 "$OUT"
diff --git a/t/chainlint/command-substitution-subsubshell.test b/t/chainlint/command-substitution-subsubshell.test
new file mode 100644
index 00000000000..321de2951ce
--- /dev/null
+++ b/t/chainlint/command-substitution-subsubshell.test
@@ -0,0 +1,3 @@
+# LINT: subshell nested in subshell nested in command substitution
+OUT=$( ((large_git 1>&3) | :) 3>&1 ) &&
+test_match_signal 13 "$OUT"
diff --git a/t/chainlint/double-here-doc.expect b/t/chainlint/double-here-doc.expect
new file mode 100644
index 00000000000..75477bb1add
--- /dev/null
+++ b/t/chainlint/double-here-doc.expect
@@ -0,0 +1,2 @@
+run_sub_test_lib_test_err run-inv-range-start "--run invalid range start" --run="a-5" <<-EOF &&
+check_sub_test_lib_test_err run-inv-range-start <<-EOF_OUT 3 <<-EOF_ERR
diff --git a/t/chainlint/double-here-doc.test b/t/chainlint/double-here-doc.test
new file mode 100644
index 00000000000..cd584a43573
--- /dev/null
+++ b/t/chainlint/double-here-doc.test
@@ -0,0 +1,12 @@
+run_sub_test_lib_test_err run-inv-range-start \
+ "--run invalid range start" \
+ --run="a-5" <<-\EOF &&
+test_expect_success "passing test #1" "true"
+test_done
+EOF
+check_sub_test_lib_test_err run-inv-range-start \
+ <<-\EOF_OUT 3<<-EOF_ERR
+> FATAL: Unexpected exit with code 1
+EOF_OUT
+> error: --run: invalid non-numeric in range start: ${SQ}a-5${SQ}
+EOF_ERR
diff --git a/t/chainlint/dqstring-line-splice.expect b/t/chainlint/dqstring-line-splice.expect
new file mode 100644
index 00000000000..bf9ced60d4c
--- /dev/null
+++ b/t/chainlint/dqstring-line-splice.expect
@@ -0,0 +1,3 @@
+echo 'fatal: reword option of --fixup is mutually exclusive with' '--patch/--interactive/--all/--include/--only' > expect &&
+test_must_fail git commit --fixup=reword:HEAD~ $1 2 > actual &&
+test_cmp expect actual
diff --git a/t/chainlint/dqstring-line-splice.test b/t/chainlint/dqstring-line-splice.test
new file mode 100644
index 00000000000..b40714439f6
--- /dev/null
+++ b/t/chainlint/dqstring-line-splice.test
@@ -0,0 +1,7 @@
+# LINT: line-splice within DQ-string
+'"
+echo 'fatal: reword option of --fixup is mutually exclusive with'\
+ '--patch/--interactive/--all/--include/--only' >expect &&
+test_must_fail git commit --fixup=reword:HEAD~ $1 2>actual &&
+test_cmp expect actual
+"'
diff --git a/t/chainlint/dqstring-no-interpolate.expect b/t/chainlint/dqstring-no-interpolate.expect
new file mode 100644
index 00000000000..10724987a5f
--- /dev/null
+++ b/t/chainlint/dqstring-no-interpolate.expect
@@ -0,0 +1,11 @@
+grep "^ ! [rejected][ ]*$BRANCH -> $BRANCH (non-fast-forward)$" out &&
+
+grep "^\.git$" output.txt &&
+
+
+(
+ cd client$version &&
+ GIT_TEST_PROTOCOL_VERSION=$version git fetch-pack --no-progress .. $(cat ../input)
+) > output &&
+ cut -d ' ' -f 2 < output | sort > actual &&
+ test_cmp expect actual
diff --git a/t/chainlint/dqstring-no-interpolate.test b/t/chainlint/dqstring-no-interpolate.test
new file mode 100644
index 00000000000..d2f4219cbbb
--- /dev/null
+++ b/t/chainlint/dqstring-no-interpolate.test
@@ -0,0 +1,15 @@
+# LINT: regex dollar-sign eol anchor in double-quoted string not special
+grep "^ ! \[rejected\][ ]*$BRANCH -> $BRANCH (non-fast-forward)$" out &&
+
+# LINT: escaped "$" not mistaken for variable expansion
+grep "^\\.git\$" output.txt &&
+
+'"
+(
+ cd client$version &&
+# LINT: escaped dollar-sign in double-quoted test body
+ GIT_TEST_PROTOCOL_VERSION=$version git fetch-pack --no-progress .. \$(cat ../input)
+) >output &&
+ cut -d ' ' -f 2 <output | sort >actual &&
+ test_cmp expect actual
+"'
diff --git a/t/chainlint/empty-here-doc.expect b/t/chainlint/empty-here-doc.expect
new file mode 100644
index 00000000000..f42f2d41ba8
--- /dev/null
+++ b/t/chainlint/empty-here-doc.expect
@@ -0,0 +1,3 @@
+git ls-tree $tree path > current &&
+cat > expected <<EOF &&
+test_output
diff --git a/t/chainlint/empty-here-doc.test b/t/chainlint/empty-here-doc.test
new file mode 100644
index 00000000000..24fc165de3f
--- /dev/null
+++ b/t/chainlint/empty-here-doc.test
@@ -0,0 +1,5 @@
+git ls-tree $tree path >current &&
+# LINT: empty here-doc
+cat >expected <<\EOF &&
+EOF
+test_output
diff --git a/t/chainlint/exclamation.expect b/t/chainlint/exclamation.expect
new file mode 100644
index 00000000000..2d961a58c66
--- /dev/null
+++ b/t/chainlint/exclamation.expect
@@ -0,0 +1,4 @@
+if ! condition ; then echo nope ; else yep ; fi &&
+test_prerequisite !MINGW &&
+mail uucp!address &&
+echo !whatever!
diff --git a/t/chainlint/exclamation.test b/t/chainlint/exclamation.test
new file mode 100644
index 00000000000..323595b5bd8
--- /dev/null
+++ b/t/chainlint/exclamation.test
@@ -0,0 +1,8 @@
+# LINT: "! word" is two tokens
+if ! condition; then echo nope; else yep; fi &&
+# LINT: "!word" is single token, not two tokens "!" and "word"
+test_prerequisite !MINGW &&
+# LINT: "word!word" is single token, not three tokens "word", "!", and "word"
+mail uucp!address &&
+# LINT: "!word!" is single token, not three tokens "!", "word", and "!"
+echo !whatever!
diff --git a/t/chainlint/for-loop-abbreviated.expect b/t/chainlint/for-loop-abbreviated.expect
new file mode 100644
index 00000000000..a21007a63f1
--- /dev/null
+++ b/t/chainlint/for-loop-abbreviated.expect
@@ -0,0 +1,5 @@
+for it
+do
+ path=$(expr "$it" : ( [^:]*) ) &&
+ git update-index --add "$path" || exit
+done
diff --git a/t/chainlint/for-loop-abbreviated.test b/t/chainlint/for-loop-abbreviated.test
new file mode 100644
index 00000000000..1084eccb89c
--- /dev/null
+++ b/t/chainlint/for-loop-abbreviated.test
@@ -0,0 +1,6 @@
+# LINT: for-loop lacking optional "in [word...]" before "do"
+for it
+do
+ path=$(expr "$it" : '\([^:]*\)') &&
+ git update-index --add "$path" || exit
+done
diff --git a/t/chainlint/function.expect b/t/chainlint/function.expect
new file mode 100644
index 00000000000..a14388e6b9f
--- /dev/null
+++ b/t/chainlint/function.expect
@@ -0,0 +1,11 @@
+sha1_file ( ) {
+ echo "$*" | sed "s#..#.git/objects/&/#"
+} &&
+
+remove_object ( ) {
+ file=$(sha1_file "$*") &&
+ test -e "$file" ?!AMP?!
+ rm -f "$file"
+} ?!AMP?!
+
+sha1_file arg && remove_object arg
diff --git a/t/chainlint/function.test b/t/chainlint/function.test
new file mode 100644
index 00000000000..5ee59562c93
--- /dev/null
+++ b/t/chainlint/function.test
@@ -0,0 +1,13 @@
+# LINT: "()" in function definition not mistaken for subshell
+sha1_file() {
+ echo "$*" | sed "s#..#.git/objects/&/#"
+} &&
+
+# LINT: broken &&-chain in function and after function
+remove_object() {
+ file=$(sha1_file "$*") &&
+ test -e "$file"
+ rm -f "$file"
+}
+
+sha1_file arg && remove_object arg
diff --git a/t/chainlint/here-doc-indent-operator.expect b/t/chainlint/here-doc-indent-operator.expect
new file mode 100644
index 00000000000..fb6cf7285d0
--- /dev/null
+++ b/t/chainlint/here-doc-indent-operator.expect
@@ -0,0 +1,5 @@
+cat > expect <<-EOF &&
+
+cat > expect <<-EOF ?!AMP?!
+
+cleanup
diff --git a/t/chainlint/here-doc-indent-operator.test b/t/chainlint/here-doc-indent-operator.test
new file mode 100644
index 00000000000..c8a6f18eb45
--- /dev/null
+++ b/t/chainlint/here-doc-indent-operator.test
@@ -0,0 +1,13 @@
+# LINT: whitespace between operator "<<-" and tag legal
+cat >expect <<- EOF &&
+header: 43475048 1 $(test_oid oid_version) $NUM_CHUNKS 0
+num_commits: $1
+chunks: oid_fanout oid_lookup commit_metadata generation_data bloom_indexes bloom_data
+EOF
+
+# LINT: not an indented here-doc; just a plain here-doc with tag named "-EOF"
+cat >expect << -EOF
+this is not indented
+-EOF
+
+cleanup
diff --git a/t/chainlint/if-condition-split.expect b/t/chainlint/if-condition-split.expect
new file mode 100644
index 00000000000..ee745ef8d7f
--- /dev/null
+++ b/t/chainlint/if-condition-split.expect
@@ -0,0 +1,7 @@
+if bob &&
+ marcia ||
+ kevin
+then
+ echo "nomads" ?!AMP?!
+ echo "for sure"
+fi
diff --git a/t/chainlint/if-condition-split.test b/t/chainlint/if-condition-split.test
new file mode 100644
index 00000000000..240daa9fd5d
--- /dev/null
+++ b/t/chainlint/if-condition-split.test
@@ -0,0 +1,8 @@
+# LINT: "if" condition split across multiple lines at "&&" or "||"
+if bob &&
+ marcia ||
+ kevin
+then
+ echo "nomads"
+ echo "for sure"
+fi
diff --git a/t/chainlint/one-liner-for-loop.expect b/t/chainlint/one-liner-for-loop.expect
new file mode 100644
index 00000000000..51a3dc7c544
--- /dev/null
+++ b/t/chainlint/one-liner-for-loop.expect
@@ -0,0 +1,9 @@
+git init dir-rename-and-content &&
+(
+ cd dir-rename-and-content &&
+ test_write_lines 1 2 3 4 5 >foo &&
+ mkdir olddir &&
+ for i in a b c; do echo $i >olddir/$i; ?!LOOP?! done ?!AMP?!
+ git add foo olddir &&
+ git commit -m "original" &&
+)
diff --git a/t/chainlint/one-liner-for-loop.test b/t/chainlint/one-liner-for-loop.test
new file mode 100644
index 00000000000..4bd8c066c79
--- /dev/null
+++ b/t/chainlint/one-liner-for-loop.test
@@ -0,0 +1,10 @@
+git init dir-rename-and-content &&
+(
+ cd dir-rename-and-content &&
+ test_write_lines 1 2 3 4 5 >foo &&
+ mkdir olddir &&
+# LINT: one-liner for-loop missing "|| exit"; also broken &&-chain
+ for i in a b c; do echo $i >olddir/$i; done
+ git add foo olddir &&
+ git commit -m "original" &&
+)
diff --git a/t/chainlint/sqstring-in-sqstring.expect b/t/chainlint/sqstring-in-sqstring.expect
new file mode 100644
index 00000000000..cf0b591cf7d
--- /dev/null
+++ b/t/chainlint/sqstring-in-sqstring.expect
@@ -0,0 +1,4 @@
+perl -e '
+ defined($_ = -s $_) or die for @ARGV;
+ exit 1 if $ARGV[0] <= $ARGV[1];
+' test-2-$packname_2.pack test-3-$packname_3.pack
diff --git a/t/chainlint/sqstring-in-sqstring.test b/t/chainlint/sqstring-in-sqstring.test
new file mode 100644
index 00000000000..77a425e0c79
--- /dev/null
+++ b/t/chainlint/sqstring-in-sqstring.test
@@ -0,0 +1,5 @@
+# LINT: SQ-string Perl code fragment within SQ-string
+perl -e '\''
+ defined($_ = -s $_) or die for @ARGV;
+ exit 1 if $ARGV[0] <= $ARGV[1];
+'\'' test-2-$packname_2.pack test-3-$packname_3.pack
diff --git a/t/chainlint/token-pasting.expect b/t/chainlint/token-pasting.expect
new file mode 100644
index 00000000000..342360bcd05
--- /dev/null
+++ b/t/chainlint/token-pasting.expect
@@ -0,0 +1,27 @@
+git config filter.rot13.smudge ./rot13.sh &&
+git config filter.rot13.clean ./rot13.sh &&
+
+{
+ echo "*.t filter=rot13" ?!AMP?!
+ echo "*.i ident"
+} > .gitattributes &&
+
+{
+ echo a b c d e f g h i j k l m ?!AMP?!
+ echo n o p q r s t u v w x y z ?!AMP?!
+ echo '$Id$'
+} > test &&
+cat test > test.t &&
+cat test > test.o &&
+cat test > test.i &&
+git add test test.t test.i &&
+rm -f test test.t test.i &&
+git checkout -- test test.t test.i &&
+
+echo "content-test2" > test2.o &&
+echo "content-test3 - filename with special characters" > "test3 'sq',$x=.o" ?!AMP?!
+
+downstream_url_for_sed=$(
+ printf "%sn" "$downstream_url" |
+ sed -e 's/\/\\/g' -e 's/[[/.*^$]/\&/g'
+)
diff --git a/t/chainlint/token-pasting.test b/t/chainlint/token-pasting.test
new file mode 100644
index 00000000000..b4610ce815a
--- /dev/null
+++ b/t/chainlint/token-pasting.test
@@ -0,0 +1,32 @@
+# LINT: single token; composite of multiple strings
+git config filter.rot13.smudge ./rot13.sh &&
+git config filter.rot13.clean ./rot13.sh &&
+
+{
+ echo "*.t filter=rot13"
+ echo "*.i ident"
+} >.gitattributes &&
+
+{
+ echo a b c d e f g h i j k l m
+ echo n o p q r s t u v w x y z
+# LINT: exit/enter string context and escaped-quote outside of string
+ echo '\''$Id$'\''
+} >test &&
+cat test >test.t &&
+cat test >test.o &&
+cat test >test.i &&
+git add test test.t test.i &&
+rm -f test test.t test.i &&
+git checkout -- test test.t test.i &&
+
+echo "content-test2" >test2.o &&
+# LINT: exit/enter string context and escaped-quote outside of string
+echo "content-test3 - filename with special characters" >"test3 '\''sq'\'',\$x=.o"
+
+# LINT: single token; composite of multiple strings
+downstream_url_for_sed=$(
+ printf "%s\n" "$downstream_url" |
+# LINT: exit/enter string context; "&" inside string not command terminator
+ sed -e '\''s/\\/\\\\/g'\'' -e '\''s/[[/.*^$]/\\&/g'\''
+)
--
gitgitgadget
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCH 15/18] test-lib: retire "lint harder" optimization hack
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (13 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 14/18] t/chainlint: add more chainlint.pl self-tests Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl Eric Sunshine via GitGitGadget
` (3 subsequent siblings)
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
`test_run_` in test-lib.sh "lints" the body of a test by sending it down
a `sed chainlint.sed | grep` pipeline; this happens once for each test
run by a test script. Although this pipeline may seem relatively cheap
in isolation, it can become expensive when invoked 26800+ times by `make
test`, once for each test run, despite the existence of only 16500+ test
definitions across all test scripts.
This difference in the number of tests defined in the scripts (16500+)
and the number of tests actually run by `make test` (26800+) is
explained by the fact that some test scripts run a very large number of
small tests, all driven by a series of functions/loops which fill in the
test bodies. This means that certain test definitions are being linted
repeatedly (tens or hundreds of times) unnecessarily. To avoid such
unnecessary work, 2d86a96220 (t: avoid sed-based chain-linting in some
expensive cases, 2021-05-13) added an optimization hack which allows
individual scripts to manually suppress the unnecessary repeated linting
of the same test definition.
However, unlike chainlint.sed which checks a test body as the test is
run, chainlint.pl checks each test definition just once, no matter how
many times the test is run; thus the sort of optimization hack
introduced by 2d86a96220 is no longer needed and can be retired.
Therefore, revert 2d86a96220.
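To make the per-test cost concrete, the check being discussed amounts to a small pipeline along these lines. This is a simplified, hypothetical sketch — `lint_body` and its one-rule sed script are illustrative stand-ins for the real chainlint.sed, not code from test-lib.sh:

```shell
# Illustrative stand-in for the per-test lint: annotate any non-final
# line that does not end with "&&" (the way chainlint.sed marks ?!AMP?!),
# then grep for annotations. Running this for every *executed* test means
# a body driven by a loop gets re-linted on every iteration.
lint_body () {
	printf '%s\n' "$1" |
	sed '$!{/&& *$/!s/$/ ?!AMP?!/;}' |
	grep -q '?!AMP?!'
}

lint_body 'cmd1 &&
cmd2' || echo "chain intact"          # no annotation; grep finds nothing
lint_body 'cmd1
cmd2' && echo "broken chain detected" # missing && is flagged
```

chainlint.pl avoids this repeated work by linting each definition once, regardless of how many times the test body is evaluated.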
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/README | 5 -----
t/t0027-auto-crlf.sh | 7 +------
t/t3070-wildmatch.sh | 5 -----
t/test-lib.sh | 7 ++-----
4 files changed, 3 insertions(+), 21 deletions(-)
diff --git a/t/README b/t/README
index 2f439f96589..979b2d4833d 100644
--- a/t/README
+++ b/t/README
@@ -196,11 +196,6 @@ appropriately before running "make". Short options can be bundled, i.e.
this feature by setting the GIT_TEST_CHAIN_LINT environment
variable to "1" or "0", respectively.
- A few test scripts disable some of the more advanced
- chain-linting detection in the name of efficiency. You can
- override this by setting the GIT_TEST_CHAIN_LINT_HARDER
- environment variable to "1".
-
--stress::
Run the test script repeatedly in multiple parallel jobs until
one of them fails. Useful for reproducing rare failures in
diff --git a/t/t0027-auto-crlf.sh b/t/t0027-auto-crlf.sh
index a22e0e1382c..a94ac1eae37 100755
--- a/t/t0027-auto-crlf.sh
+++ b/t/t0027-auto-crlf.sh
@@ -387,9 +387,7 @@ test_expect_success 'setup main' '
test_tick
'
-# Disable extra chain-linting for the next set of tests. There are many
-# auto-generated ones that are not worth checking over and over.
-GIT_TEST_CHAIN_LINT_HARDER_DEFAULT=0
+
warn_LF_CRLF="LF will be replaced by CRLF"
warn_CRLF_LF="CRLF will be replaced by LF"
@@ -606,9 +604,6 @@ do
checkout_files "" "$id" "crlf" true "" CRLF CRLF CRLF CRLF_mix_CR CRLF_nul
done
-# The rest of the tests are unique; do the usual linting.
-unset GIT_TEST_CHAIN_LINT_HARDER_DEFAULT
-
# Should be the last test case: remove some files from the worktree
test_expect_success 'ls-files --eol -d -z' '
rm crlf_false_attr__CRLF.txt crlf_false_attr__CRLF_mix_LF.txt crlf_false_attr__LF.txt .gitattributes &&
diff --git a/t/t3070-wildmatch.sh b/t/t3070-wildmatch.sh
index f9539968e4c..5d871fde960 100755
--- a/t/t3070-wildmatch.sh
+++ b/t/t3070-wildmatch.sh
@@ -5,11 +5,6 @@ test_description='wildmatch tests'
TEST_PASSES_SANITIZE_LEAK=true
. ./test-lib.sh
-# Disable expensive chain-lint tests; all of the tests in this script
-# are variants of a few trivial test-tool invocations, and there are a lot of
-# them.
-GIT_TEST_CHAIN_LINT_HARDER_DEFAULT=0
-
should_create_test_file() {
file=$1
diff --git a/t/test-lib.sh b/t/test-lib.sh
index 377cc1c1203..dc0d0591095 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -1091,11 +1091,8 @@ test_run_ () {
trace=
# 117 is magic because it is unlikely to match the exit
# code of other programs
- if test "OK-117" != "$(test_eval_ "(exit 117) && $1${LF}${LF}echo OK-\$?" 3>&1)" ||
- {
- test "${GIT_TEST_CHAIN_LINT_HARDER:-${GIT_TEST_CHAIN_LINT_HARDER_DEFAULT:-1}}" != 0 &&
- $(printf '%s\n' "$1" | sed -f "$GIT_BUILD_DIR/t/chainlint.sed" | grep -q '?![A-Z][A-Z]*?!')
- }
+ if $(printf '%s\n' "$1" | sed -f "$GIT_BUILD_DIR/t/chainlint.sed" | grep -q '?![A-Z][A-Z]*?!') ||
+ test "OK-117" != "$(test_eval_ "(exit 117) && $1${LF}${LF}echo OK-\$?" 3>&1)"
then
BUG "broken &&-chain or run-away HERE-DOC: $1"
fi
--
gitgitgadget
* [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (14 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 15/18] test-lib: retire "lint harder" optimization hack Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-03 5:07 ` Elijah Newren
2022-09-01 0:29 ` [PATCH 17/18] t/Makefile: teach `make test` and `make prove` to run chainlint.pl Eric Sunshine via GitGitGadget
` (2 subsequent siblings)
18 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
By automatically invoking chainlint.sed upon each test it runs,
`test_run_` in test-lib.sh ensures that broken &&-chains will be
detected early as tests are modified or new are tests created since it
is typical to run a test script manually (i.e. `./t1234-test-script.sh`)
during test development. Now that the implementation of chainlint.pl is
complete, modify test-lib.sh to invoke it automatically instead of
chainlint.sed each time a test script is run.
This change reduces the number of "linter" invocations from 26800+ (once
per test run) down to 1050+ (once per test script); however, a
subsequent change will drop the number of invocations to 1 per `make
test`, thus fully realizing the benefit of the new linter.
Note that the "magic exit code 117" &&-chain checker added by bb79af9d09
(t/test-lib: introduce --chain-lint option, 2015-03-20) which is built
into t/test-lib.sh is retained since it has near-zero cost and
(theoretically) may catch a broken &&-chain not caught by chainlint.pl.
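As background, the retained magic-117 check works roughly like this. This is a minimal sketch of the idea only — `check_chain` and the sample bodies are hypothetical, not the exact test-lib.sh code:

```shell
# Prefix the test body with a command that fails with status 117. In a
# proper &&-chain the failure short-circuits every later command, so 117
# propagates to the end; any other final status betrays a broken chain.
check_chain () {
	out=$( (eval "(exit 117) && $1"); echo "OK-$?" )
	test "OK-117" = "$out"
}

check_chain 'true && true' && echo "chain intact"
check_chain 'true ; true' || echo "broken &&-chain"
```

The second call reports a break because the `;` lets the command after it run despite the injected failure, so the final status is no longer 117.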
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
contrib/buildsystems/CMakeLists.txt | 2 +-
t/test-lib.sh | 9 +++++++--
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 2237109b57f..ca358a21a5f 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -1076,7 +1076,7 @@ if(NOT ${CMAKE_BINARY_DIR}/CMakeCache.txt STREQUAL ${CACHE_PATH})
"string(REPLACE \"\${GIT_BUILD_DIR_REPL}\" \"GIT_BUILD_DIR=\\\"$TEST_DIRECTORY/../${BUILD_DIR_RELATIVE}\\\"\" content \"\${content}\")\n"
"file(WRITE ${CMAKE_SOURCE_DIR}/t/test-lib.sh \${content})")
#misc copies
- file(COPY ${CMAKE_SOURCE_DIR}/t/chainlint.sed DESTINATION ${CMAKE_BINARY_DIR}/t/)
+ file(COPY ${CMAKE_SOURCE_DIR}/t/chainlint.pl DESTINATION ${CMAKE_BINARY_DIR}/t/)
file(COPY ${CMAKE_SOURCE_DIR}/po/is.po DESTINATION ${CMAKE_BINARY_DIR}/po/)
file(COPY ${CMAKE_SOURCE_DIR}/mergetools/tkdiff DESTINATION ${CMAKE_BINARY_DIR}/mergetools/)
file(COPY ${CMAKE_SOURCE_DIR}/contrib/completion/git-prompt.sh DESTINATION ${CMAKE_BINARY_DIR}/contrib/completion/)
diff --git a/t/test-lib.sh b/t/test-lib.sh
index dc0d0591095..a65df2fd220 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -1091,8 +1091,7 @@ test_run_ () {
trace=
# 117 is magic because it is unlikely to match the exit
# code of other programs
- if $(printf '%s\n' "$1" | sed -f "$GIT_BUILD_DIR/t/chainlint.sed" | grep -q '?![A-Z][A-Z]*?!') ||
- test "OK-117" != "$(test_eval_ "(exit 117) && $1${LF}${LF}echo OK-\$?" 3>&1)"
+ if test "OK-117" != "$(test_eval_ "(exit 117) && $1${LF}${LF}echo OK-\$?" 3>&1)"
then
BUG "broken &&-chain or run-away HERE-DOC: $1"
fi
@@ -1588,6 +1587,12 @@ then
BAIL_OUT_ENV_NEEDS_SANITIZE_LEAK "GIT_TEST_SANITIZE_LEAK_LOG=true"
fi
+if test "${GIT_TEST_CHAIN_LINT:-1}" != 0
+then
+ "$PERL_PATH" "$TEST_DIRECTORY/chainlint.pl" "$0" ||
+ BUG "lint error (see '?!...!?' annotations above)"
+fi
+
# Last-minute variable setup
USER_HOME="$HOME"
HOME="$TRASH_DIRECTORY"
--
gitgitgadget
* Re: [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl
2022-09-01 0:29 ` [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl Eric Sunshine via GitGitGadget
@ 2022-09-03 5:07 ` Elijah Newren
2022-09-03 5:24 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Elijah Newren @ 2022-09-03 5:07 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget
Cc: Git Mailing List, Jeff King,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin, Eric Sunshine
On Wed, Aug 31, 2022 at 5:30 PM Eric Sunshine via GitGitGadget
<gitgitgadget@gmail.com> wrote:
>
> From: Eric Sunshine <sunshine@sunshineco.com>
>
> By automatically invoking chainlint.sed upon each test it runs,
> `test_run_` in test-lib.sh ensures that broken &&-chains will be
> detected early as tests are modified or new are tests created since it
s/new are tests created/new tests are created/ ?
> is typical to run a test script manually (i.e. `./t1234-test-script.sh`)
> during test development. Now that the implementation of chainlint.pl is
> complete, modify test-lib.sh to invoke it automatically instead of
> chainlint.sed each time a test script is run.
>
> This change reduces the number of "linter" invocations from 26800+ (once
> per test run) down to 1050+ (once per test script); however, a
> subsequent change will drop the number of invocations to 1 per `make
> test`, thus fully realizing the benefit of the new linter.
>
> Note that the "magic exit code 117" &&-chain checker added by bb79af9d09
> (t/test-lib: introduce --chain-lint option, 2015-03-20) which is built
> into t/test-lib.sh is retained since it has near-zero cost and
> (theoretically) may catch a broken &&-chain not caught by chainlint.pl.
>
> Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
> ---
> contrib/buildsystems/CMakeLists.txt | 2 +-
> t/test-lib.sh | 9 +++++++--
> 2 files changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
> index 2237109b57f..ca358a21a5f 100644
> --- a/contrib/buildsystems/CMakeLists.txt
> +++ b/contrib/buildsystems/CMakeLists.txt
> @@ -1076,7 +1076,7 @@ if(NOT ${CMAKE_BINARY_DIR}/CMakeCache.txt STREQUAL ${CACHE_PATH})
> "string(REPLACE \"\${GIT_BUILD_DIR_REPL}\" \"GIT_BUILD_DIR=\\\"$TEST_DIRECTORY/../${BUILD_DIR_RELATIVE}\\\"\" content \"\${content}\")\n"
> "file(WRITE ${CMAKE_SOURCE_DIR}/t/test-lib.sh \${content})")
> #misc copies
> - file(COPY ${CMAKE_SOURCE_DIR}/t/chainlint.sed DESTINATION ${CMAKE_BINARY_DIR}/t/)
> + file(COPY ${CMAKE_SOURCE_DIR}/t/chainlint.pl DESTINATION ${CMAKE_BINARY_DIR}/t/)
> file(COPY ${CMAKE_SOURCE_DIR}/po/is.po DESTINATION ${CMAKE_BINARY_DIR}/po/)
> file(COPY ${CMAKE_SOURCE_DIR}/mergetools/tkdiff DESTINATION ${CMAKE_BINARY_DIR}/mergetools/)
> file(COPY ${CMAKE_SOURCE_DIR}/contrib/completion/git-prompt.sh DESTINATION ${CMAKE_BINARY_DIR}/contrib/completion/)
> diff --git a/t/test-lib.sh b/t/test-lib.sh
> index dc0d0591095..a65df2fd220 100644
> --- a/t/test-lib.sh
> +++ b/t/test-lib.sh
> @@ -1091,8 +1091,7 @@ test_run_ () {
> trace=
> # 117 is magic because it is unlikely to match the exit
> # code of other programs
> - if $(printf '%s\n' "$1" | sed -f "$GIT_BUILD_DIR/t/chainlint.sed" | grep -q '?![A-Z][A-Z]*?!') ||
> - test "OK-117" != "$(test_eval_ "(exit 117) && $1${LF}${LF}echo OK-\$?" 3>&1)"
> + if test "OK-117" != "$(test_eval_ "(exit 117) && $1${LF}${LF}echo OK-\$?" 3>&1)"
> then
> BUG "broken &&-chain or run-away HERE-DOC: $1"
> fi
> @@ -1588,6 +1587,12 @@ then
> BAIL_OUT_ENV_NEEDS_SANITIZE_LEAK "GIT_TEST_SANITIZE_LEAK_LOG=true"
> fi
>
> +if test "${GIT_TEST_CHAIN_LINT:-1}" != 0
> +then
> + "$PERL_PATH" "$TEST_DIRECTORY/chainlint.pl" "$0" ||
> + BUG "lint error (see '?!...!?' annotations above)"
> +fi
> +
> # Last-minute variable setup
> USER_HOME="$HOME"
> HOME="$TRASH_DIRECTORY"
> --
> gitgitgadget
>
* Re: [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl
2022-09-03 5:07 ` Elijah Newren
@ 2022-09-03 5:24 ` Eric Sunshine
0 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine @ 2022-09-03 5:24 UTC (permalink / raw)
To: Elijah Newren
Cc: Eric Sunshine via GitGitGadget, Git Mailing List, Jeff King,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin
On Sat, Sep 3, 2022 at 1:07 AM Elijah Newren <newren@gmail.com> wrote:
> On Wed, Aug 31, 2022 at 5:30 PM Eric Sunshine via GitGitGadget
> > By automatically invoking chainlint.sed upon each test it runs,
> > `test_run_` in test-lib.sh ensures that broken &&-chains will be
> > detected early as tests are modified or new are tests created since it
>
> s/new are tests created/new tests are created/ ?
That does sound better (except perhaps to Yoda).
* [PATCH 17/18] t/Makefile: teach `make test` and `make prove` to run chainlint.pl
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (15 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 18/18] t: retire unused chainlint.sed Eric Sunshine via GitGitGadget
2022-09-11 5:28 ` [PATCH 00/18] make test "linting" more comprehensive Jeff King
18 siblings, 0 replies; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Unlike chainlint.sed, which "lints" a single test body at a time and thus
is invoked once per test, chainlint.pl can check all test bodies in all
test scripts with a single invocation. As such, it is akin to other bulk
"linters" run by the Makefile, such as `test-lint-shell-syntax`,
`test-lint-duplicates`, etc.
Therefore, teach `make test` and `make prove` to invoke chainlint.pl
along with the other bulk linters. Also, since the single chainlint.pl
invocation by `make test` or `make prove` has already checked all tests
in all scripts, instruct the individual test scripts not to run
chainlint.pl on themselves unnecessarily.
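The suppression convention can be illustrated with a tiny sketch. The `maybe_lint` helper and file name below are hypothetical; only the `GIT_TEST_CHAIN_LINT` knob and its default-on semantics come from the patch:

```shell
# Honor the knob the Makefile exports: a script skips self-linting when
# GIT_TEST_CHAIN_LINT=0 is already present in its environment.
maybe_lint () {
	if test "${GIT_TEST_CHAIN_LINT:-1}" != 0
	then
		echo "linting $1"
	else
		echo "lint of $1 already done in bulk; skipping"
	fi
}

maybe_lint t1234-example.sh          # default (unset): lints
( GIT_TEST_CHAIN_LINT=0
  export GIT_TEST_CHAIN_LINT
  maybe_lint t1234-example.sh )      # under "make test": skips
```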
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/Makefile | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/t/Makefile b/t/Makefile
index 11f276774ea..3db48c0cb64 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -36,14 +36,21 @@ CHAINLINTTMP_SQ = $(subst ','\'',$(CHAINLINTTMP))
T = $(sort $(wildcard t[0-9][0-9][0-9][0-9]-*.sh))
THELPERS = $(sort $(filter-out $(T),$(wildcard *.sh)))
+TLIBS = $(sort $(wildcard lib-*.sh)) annotate-tests.sh
TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
+TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+# `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
+# checks all tests in all scripts via a single invocation, so tell individual
+# scripts not to "chainlint" themselves
+CHAINLINTSUPPRESS = GIT_TEST_CHAIN_LINT=0 && export GIT_TEST_CHAIN_LINT &&
+
all: $(DEFAULT_TEST_TARGET)
test: pre-clean check-chainlint $(TEST_LINT)
- $(MAKE) aggregate-results-and-cleanup
+ $(CHAINLINTSUPPRESS) $(MAKE) aggregate-results-and-cleanup
failed:
@failed=$$(cd '$(TEST_RESULTS_DIRECTORY_SQ)' && \
@@ -52,7 +59,7 @@ failed:
test -z "$$failed" || $(MAKE) $$failed
prove: pre-clean check-chainlint $(TEST_LINT)
- @echo "*** prove ***"; $(PROVE) --exec '$(TEST_SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
+ @echo "*** prove ***"; $(CHAINLINTSUPPRESS) $(PROVE) --exec '$(TEST_SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
$(MAKE) clean-except-prove-cache
$(T):
@@ -99,6 +106,9 @@ check-chainlint:
test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax \
test-lint-filenames
+ifneq ($(GIT_TEST_CHAIN_LINT),0)
+test-lint: test-chainlint
+endif
test-lint-duplicates:
@dups=`echo $(T) $(TPERF) | tr ' ' '\n' | sed 's/-.*//' | sort | uniq -d` && \
@@ -121,6 +131,9 @@ test-lint-filenames:
test -z "$$bad" || { \
echo >&2 "non-portable file name(s): $$bad"; exit 1; }
+test-chainlint:
+ @$(CHAINLINT) $(T) $(TLIBS) $(TPERF) $(TINTEROP)
+
aggregate-results-and-cleanup: $(T)
$(MAKE) aggregate-results
$(MAKE) clean
@@ -136,4 +149,5 @@ valgrind:
perf:
$(MAKE) -C perf/ all
-.PHONY: pre-clean $(T) aggregate-results clean valgrind perf check-chainlint clean-chainlint
+.PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
+ check-chainlint clean-chainlint test-chainlint
--
gitgitgadget
* [PATCH 18/18] t: retire unused chainlint.sed
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (16 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 17/18] t/Makefile: teach `make test` and `make prove` to run chainlint.pl Eric Sunshine via GitGitGadget
@ 2022-09-01 0:29 ` Eric Sunshine via GitGitGadget
2022-09-02 12:42 ` several messages Johannes Schindelin
2022-09-11 5:28 ` [PATCH 00/18] make test "linting" more comprehensive Jeff King
18 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine via GitGitGadget @ 2022-09-01 0:29 UTC (permalink / raw)
To: git
Cc: Jeff King, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine,
Eric Sunshine
From: Eric Sunshine <sunshine@sunshineco.com>
Retire chainlint.sed since it has been replaced by a more accurate and
functional &&-chain "linter" and is thus no longer used.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
---
t/chainlint.sed | 399 ------------------------------------------------
1 file changed, 399 deletions(-)
delete mode 100644 t/chainlint.sed
diff --git a/t/chainlint.sed b/t/chainlint.sed
deleted file mode 100644
index dc4ce37cb51..00000000000
--- a/t/chainlint.sed
+++ /dev/null
@@ -1,399 +0,0 @@
-#------------------------------------------------------------------------------
-# Detect broken &&-chains in tests.
-#
-# At present, only &&-chains in subshells are examined by this linter;
-# top-level &&-chains are instead checked directly by the test framework. Like
-# the top-level &&-chain linter, the subshell linter (intentionally) does not
-# check &&-chains within {...} blocks.
-#
-# Checking for &&-chain breakage is done line-by-line by pure textual
-# inspection.
-#
-# Incomplete lines (those ending with "\") are stitched together with following
-# lines to simplify processing, particularly of "one-liner" statements.
-# Top-level here-docs are swallowed to avoid false positives within the
-# here-doc body, although the statement to which the here-doc is attached is
-# retained.
-#
-# Heuristics are used to detect end-of-subshell when the closing ")" is cuddled
-# with the final subshell statement on the same line:
-#
-# (cd foo &&
-# bar)
-#
-# in order to avoid misinterpreting the ")" in constructs such as "x=$(...)"
-# and "case $x in *)" as ending the subshell.
-#
-# Lines missing a final "&&" are flagged with "?!AMP?!", as are lines which
-# chain commands with ";" internally rather than "&&". A line may be flagged
-# for both violations.
-#
-# Detection of a missing &&-link in a multi-line subshell is complicated by the
-# fact that the last statement before the closing ")" must not end with "&&".
-# Since processing is line-by-line, it is not known whether a missing "&&" is
-# legitimate or not until the _next_ line is seen. To accommodate this, within
-# multi-line subshells, each line is stored in sed's "hold" area until after
-# the next line is seen and processed. If the next line is a stand-alone ")",
-# then a missing "&&" on the previous line is legitimate; otherwise a missing
-# "&&" is a break in the &&-chain.
-#
-# (
-# cd foo &&
-# bar
-# )
-#
-# In practical terms, when "bar" is encountered, it is flagged with "?!AMP?!",
-# but when the stand-alone ")" line is seen which closes the subshell, the
-# "?!AMP?!" violation is removed from the "bar" line (retrieved from the "hold"
-# area) since the final statement of a subshell must not end with "&&". The
-# final line of a subshell may still break the &&-chain by using ";" internally
-# to chain commands together rather than "&&", but an internal "?!AMP?!" is
-# never removed from a line even though a line-ending "?!AMP?!" might be.
-#
-# Care is taken to recognize the last _statement_ of a multi-line subshell, not
-# necessarily the last textual _line_ within the subshell, since &&-chaining
-# applies to statements, not to lines. Consequently, blank lines, comment
-# lines, and here-docs are swallowed (but not the command to which the here-doc
-# is attached), leaving the last statement in the "hold" area, not the last
-# line, thus simplifying &&-link checking.
-#
-# The final statement before "done" in for- and while-loops, and before "elif",
-# "else", and "fi" in if-then-else likewise must not end with "&&", thus
-# receives similar treatment.
-#
-# Swallowing here-docs with arbitrary tags requires a bit of finesse. When a
-# line such as "cat <<EOF" is seen, the here-doc tag is copied to the front of
-# the line enclosed in angle brackets as a sentinel, giving "<EOF>cat <<EOF".
-# As each subsequent line is read, it is appended to the target line and a
-# (whitespace-loose) back-reference match /^<(.*)>\n\1$/ is attempted to see if
-# the content inside "<...>" matches the entirety of the newly-read line. For
-# instance, if the next line read is "some data", when concatenated with the
-# target line, it becomes "<EOF>cat <<EOF\nsome data", and a match is attempted
-# to see if "EOF" matches "some data". Since it doesn't, the next line is
-# attempted. When a line consisting of only "EOF" (and possible whitespace) is
-# encountered, it is appended to the target line giving "<EOF>cat <<EOF\nEOF",
-# in which case the "EOF" inside "<...>" does match the text following the
-# newline, thus the closing here-doc tag has been found. The closing tag line
-# and the "<...>" prefix on the target line are then discarded, leaving just
-# the target line "cat <<EOF".
-#------------------------------------------------------------------------------
-
-# incomplete line -- slurp up next line
-:squash
-/\\$/ {
- N
- s/\\\n//
- bsquash
-}
-
-# here-doc -- swallow it to avoid false hits within its body (but keep the
-# command to which it was attached)
-/<<-*[ ]*[\\'"]*[A-Za-z0-9_]/ {
- /"[^"]*<<[^"]*"/bnotdoc
- s/^\(.*<<-*[ ]*\)[\\'"]*\([A-Za-z0-9_][A-Za-z0-9_]*\)['"]*/<\2>\1\2/
- :hered
- N
- /^<\([^>]*\)>.*\n[ ]*\1[ ]*$/!{
- s/\n.*$//
- bhered
- }
- s/^<[^>]*>//
- s/\n.*$//
-}
-:notdoc
-
-# one-liner "(...) &&"
-/^[ ]*!*[ ]*(..*)[ ]*&&[ ]*$/boneline
-
-# same as above but without trailing "&&"
-/^[ ]*!*[ ]*(..*)[ ]*$/boneline
-
-# one-liner "(...) >x" (or "2>x" or "<x" or "|x" or "&"
-/^[ ]*!*[ ]*(..*)[ ]*[0-9]*[<>|&]/boneline
-
-# multi-line "(...\n...)"
-/^[ ]*(/bsubsh
-
-# innocuous line -- print it and advance to next line
-b
-
-# found one-liner "(...)" -- mark suspect if it uses ";" internally rather than
-# "&&" (but not ";" in a string)
-:oneline
-/;/{
- /"[^"]*;[^"]*"/!s/;/; ?!AMP?!/
-}
-b
-
-:subsh
-# bare "(" line? -- stash for later printing
-/^[ ]*([ ]*$/ {
- h
- bnextln
-}
-# "(..." line -- "(" opening subshell cuddled with command; temporarily replace
-# "(" with sentinel "^" and process the line as if "(" had been seen solo on
-# the preceding line; this temporary replacement prevents several rules from
-# accidentally thinking "(" introduces a nested subshell; "^" is changed back
-# to "(" at output time
-x
-s/.*//
-x
-s/(/^/
-bslurp
-
-:nextln
-N
-s/.*\n//
-
-:slurp
-# incomplete line "...\"
-/\\$/bicmplte
-# multi-line quoted string "...\n..."?
-/"/bdqstr
-# multi-line quoted string '...\n...'? (but not contraction in string "it's")
-/'/{
- /"[^'"]*'[^'"]*"/!bsqstr
-}
-:folded
-# here-doc -- swallow it (but not "<<" in a string)
-/<<-*[ ]*[\\'"]*[A-Za-z0-9_]/{
- /"[^"]*<<[^"]*"/!bheredoc
-}
-# comment or empty line -- discard since final non-comment, non-empty line
-# before closing ")", "done", "elsif", "else", or "fi" will need to be
-# re-visited to drop "suspect" marking since final line of those constructs
-# legitimately lacks "&&", so "suspect" mark must be removed
-/^[ ]*#/bnextln
-/^[ ]*$/bnextln
-# in-line comment -- strip it (but not "#" in a string, Bash ${#...} array
-# length, or Perforce "//depot/path#42" revision in filespec)
-/[ ]#/{
- /"[^"]*#[^"]*"/!s/[ ]#.*$//
-}
-# one-liner "case ... esac"
-/^[ ^]*case[ ]*..*esac/bchkchn
-# multi-line "case ... esac"
-/^[ ^]*case[ ]..*[ ]in/bcase
-# multi-line "for ... done" or "while ... done"
-/^[ ^]*for[ ]..*[ ]in/bcont
-/^[ ^]*while[ ]/bcont
-/^[ ]*do[ ]/bcont
-/^[ ]*do[ ]*$/bcont
-/;[ ]*do/bcont
-/^[ ]*done[ ]*&&[ ]*$/bdone
-/^[ ]*done[ ]*$/bdone
-/^[ ]*done[ ]*[<>|]/bdone
-/^[ ]*done[ ]*)/bdone
-/||[ ]*exit[ ]/bcont
-/||[ ]*exit[ ]*$/bcont
-# multi-line "if...elsif...else...fi"
-/^[ ^]*if[ ]/bcont
-/^[ ]*then[ ]/bcont
-/^[ ]*then[ ]*$/bcont
-/;[ ]*then/bcont
-/^[ ]*elif[ ]/belse
-/^[ ]*elif[ ]*$/belse
-/^[ ]*else[ ]/belse
-/^[ ]*else[ ]*$/belse
-/^[ ]*fi[ ]*&&[ ]*$/bdone
-/^[ ]*fi[ ]*$/bdone
-/^[ ]*fi[ ]*[<>|]/bdone
-/^[ ]*fi[ ]*)/bdone
-# nested one-liner "(...) &&"
-/^[ ^]*(.*)[ ]*&&[ ]*$/bchkchn
-# nested one-liner "(...)"
-/^[ ^]*(.*)[ ]*$/bchkchn
-# nested one-liner "(...) >x" (or "2>x" or "<x" or "|x")
-/^[ ^]*(.*)[ ]*[0-9]*[<>|]/bchkchn
-# nested multi-line "(...\n...)"
-/^[ ^]*(/bnest
-# multi-line "{...\n...}"
-/^[ ^]*{/bblock
-# closing ")" on own line -- exit subshell
-/^[ ]*)/bclssolo
-# "$((...))" -- arithmetic expansion; not closing ")"
-/\$(([^)][^)]*))[^)]*$/bchkchn
-# "$(...)" -- command substitution; not closing ")"
-/\$([^)][^)]*)[^)]*$/bchkchn
-# multi-line "$(...\n...)" -- command substitution; treat as nested subshell
-/\$([^)]*$/bnest
-# "=(...)" -- Bash array assignment; not closing ")"
-/=(/bchkchn
-# closing "...) &&"
-/)[ ]*&&[ ]*$/bclose
-# closing "...)"
-/)[ ]*$/bclose
-# closing "...) >x" (or "2>x" or "<x" or "|x")
-/)[ ]*[<>|]/bclose
-:chkchn
-# mark suspect if line uses ";" internally rather than "&&" (but not ";" in a
-# string and not ";;" in one-liner "case...esac")
-/;/{
- /;;/!{
- /"[^"]*;[^"]*"/!s/;/; ?!AMP?!/
- }
-}
-# line ends with pipe "...|" -- valid; not missing "&&"
-/|[ ]*$/bcont
-# missing end-of-line "&&" -- mark suspect
-/&&[ ]*$/!s/$/ ?!AMP?!/
-:cont
-# retrieve and print previous line
-x
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-n
-bslurp
-
-# found incomplete line "...\" -- slurp up next line
-:icmplte
-N
-s/\\\n//
-bslurp
-
-# check for multi-line double-quoted string "...\n..." -- fold to one line
-:dqstr
-# remove all quote pairs
-s/"\([^"]*\)"/@!\1@!/g
-# done if no dangling quote
-/"/!bdqdone
-# otherwise, slurp next line and try again
-N
-s/\n//
-bdqstr
-:dqdone
-s/@!/"/g
-bfolded
-
-# check for multi-line single-quoted string '...\n...' -- fold to one line
-:sqstr
-# remove all quote pairs
-s/'\([^']*\)'/@!\1@!/g
-# done if no dangling quote
-/'/!bsqdone
-# otherwise, slurp next line and try again
-N
-s/\n//
-bsqstr
-:sqdone
-s/@!/'/g
-bfolded
-
-# found here-doc -- swallow it to avoid false hits within its body (but keep
-# the command to which it was attached)
-:heredoc
-s/^\(.*\)<<\(-*[ ]*\)[\\'"]*\([A-Za-z0-9_][A-Za-z0-9_]*\)['"]*/<\3>\1?!HERE?!\2\3/
-:hdocsub
-N
-/^<\([^>]*\)>.*\n[ ]*\1[ ]*$/!{
- s/\n.*$//
- bhdocsub
-}
-s/^<[^>]*>//
-s/\n.*$//
-bfolded
-
-# found "case ... in" -- pass through untouched
-:case
-x
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-n
-:cascom
-/^[ ]*#/{
- N
- s/.*\n//
- bcascom
-}
-/^[ ]*esac/bslurp
-bcase
-
-# found "else" or "elif" -- drop "suspect" from final line before "else" since
-# that line legitimately lacks "&&"
-:else
-x
-s/\( ?!AMP?!\)* ?!AMP?!$//
-x
-bcont
-
-# found "done" closing for-loop or while-loop, or "fi" closing if-then -- drop
-# "suspect" from final contained line since that line legitimately lacks "&&"
-:done
-x
-s/\( ?!AMP?!\)* ?!AMP?!$//
-x
-# is 'done' or 'fi' cuddled with ")" to close subshell?
-/done.*)/bclose
-/fi.*)/bclose
-bchkchn
-
-# found nested multi-line "(...\n...)" -- pass through untouched
-:nest
-x
-:nstslrp
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-n
-:nstcom
-# comment -- not closing ")" if in comment
-/^[ ]*#/{
- N
- s/.*\n//
- bnstcom
-}
-# closing ")" on own line -- stop nested slurp
-/^[ ]*)/bnstcl
-# "$((...))" -- arithmetic expansion; not closing ")"
-/\$(([^)][^)]*))[^)]*$/bnstcnt
-# "$(...)" -- command substitution; not closing ")"
-/\$([^)][^)]*)[^)]*$/bnstcnt
-# closing "...)" -- stop nested slurp
-/)/bnstcl
-:nstcnt
-x
-bnstslrp
-:nstcl
-# is it "))" which closes nested and parent subshells?
-/)[ ]*)/bslurp
-bchkchn
-
-# found multi-line "{...\n...}" block -- pass through untouched
-:block
-x
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-n
-:blkcom
-/^[ ]*#/{
- N
- s/.*\n//
- bblkcom
-}
-# closing "}" -- stop block slurp
-/}/bchkchn
-bblock
-
-# found closing ")" on own line -- drop "suspect" from final line of subshell
-# since that line legitimately lacks "&&" and exit subshell loop
-:clssolo
-x
-s/\( ?!AMP?!\)* ?!AMP?!$//
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-p
-x
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-b
-
-# found closing "...)" -- exit subshell loop
-:close
-x
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-p
-x
-s/^\([ ]*\)^/\1(/
-s/?!HERE?!/<</g
-b
--
gitgitgadget
^ permalink raw reply related [flat|nested] 131+ messages in thread
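The retired sed linter above hunts for one core hazard: inside a subshell, a command that is not linked to the next with "&&" can fail without affecting the subshell's exit status. This is an illustrative sketch (not taken from the patch) of the breakage class and its fix:

```shell
#!/bin/sh
# Sketch of the &&-chain breakage chainlint flags with "?!AMP?!":
# inside a subshell, a command not linked with "&&" can fail silently.
broken_chain() {
	(
		false		# failure is swallowed...
		echo ran	# ...because this line is not &&-chained
	)
}
fixed_chain() {
	(
		false &&
		echo ran	# skipped, and the failure propagates
	)
}
broken_chain && echo "broken chain reported success despite the failure"
fixed_chain || echo "fixed chain correctly reported failure"
```

The broken variant exits 0 because only the last command's status matters; the fixed variant propagates the failure, which is exactly the property the linter tries to guarantee.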
* Re: several messages
2022-09-01 0:29 ` [PATCH 18/18] t: retire unused chainlint.sed Eric Sunshine via GitGitGadget
@ 2022-09-02 12:42 ` Johannes Schindelin
2022-09-02 18:16 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Johannes Schindelin @ 2022-09-02 12:42 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget, Ævar Arnfjörð Bjarmason
Cc: git, Jeff King, Elijah Newren, Fabian Stelzer, Eric Sunshine
Hi Eric,
On Thu, 1 Sep 2022, Eric Sunshine via GitGitGadget wrote:
> contrib/buildsystems/CMakeLists.txt | 2 +-
> t/Makefile | 49 +-
> t/README | 5 -
> t/chainlint.pl | 730 ++++++++++++++++++
> t/chainlint.sed | 399 ----------
> t/chainlint/blank-line-before-esac.expect | 18 +
> t/chainlint/blank-line-before-esac.test | 19 +
> t/chainlint/block.expect | 15 +-
> t/chainlint/block.test | 15 +-
> t/chainlint/chain-break-background.expect | 9 +
> t/chainlint/chain-break-background.test | 10 +
> t/chainlint/chain-break-continue.expect | 12 +
> t/chainlint/chain-break-continue.test | 13 +
> t/chainlint/chain-break-false.expect | 9 +
> t/chainlint/chain-break-false.test | 10 +
> t/chainlint/chain-break-return-exit.expect | 19 +
> t/chainlint/chain-break-return-exit.test | 23 +
> t/chainlint/chain-break-status.expect | 9 +
> t/chainlint/chain-break-status.test | 11 +
> t/chainlint/chained-block.expect | 9 +
> t/chainlint/chained-block.test | 11 +
> t/chainlint/chained-subshell.expect | 10 +
> t/chainlint/chained-subshell.test | 13 +
> .../command-substitution-subsubshell.expect | 2 +
> .../command-substitution-subsubshell.test | 3 +
> t/chainlint/complex-if-in-cuddled-loop.expect | 2 +-
> t/chainlint/double-here-doc.expect | 2 +
> t/chainlint/double-here-doc.test | 12 +
> t/chainlint/dqstring-line-splice.expect | 3 +
> t/chainlint/dqstring-line-splice.test | 7 +
> t/chainlint/dqstring-no-interpolate.expect | 11 +
> t/chainlint/dqstring-no-interpolate.test | 15 +
> t/chainlint/empty-here-doc.expect | 3 +
> t/chainlint/empty-here-doc.test | 5 +
> t/chainlint/exclamation.expect | 4 +
> t/chainlint/exclamation.test | 8 +
> t/chainlint/for-loop-abbreviated.expect | 5 +
> t/chainlint/for-loop-abbreviated.test | 6 +
> t/chainlint/for-loop.expect | 4 +-
> t/chainlint/function.expect | 11 +
> t/chainlint/function.test | 13 +
> t/chainlint/here-doc-indent-operator.expect | 5 +
> t/chainlint/here-doc-indent-operator.test | 13 +
> t/chainlint/here-doc-multi-line-string.expect | 3 +-
> t/chainlint/if-condition-split.expect | 7 +
> t/chainlint/if-condition-split.test | 8 +
> t/chainlint/if-in-loop.expect | 2 +-
> t/chainlint/if-in-loop.test | 2 +-
> t/chainlint/loop-detect-failure.expect | 15 +
> t/chainlint/loop-detect-failure.test | 17 +
> t/chainlint/loop-detect-status.expect | 18 +
> t/chainlint/loop-detect-status.test | 19 +
> t/chainlint/loop-in-if.expect | 2 +-
> t/chainlint/loop-upstream-pipe.expect | 10 +
> t/chainlint/loop-upstream-pipe.test | 11 +
> t/chainlint/multi-line-string.expect | 11 +-
> t/chainlint/nested-loop-detect-failure.expect | 31 +
> t/chainlint/nested-loop-detect-failure.test | 35 +
> t/chainlint/nested-subshell.expect | 2 +-
> t/chainlint/one-liner-for-loop.expect | 9 +
> t/chainlint/one-liner-for-loop.test | 10 +
> t/chainlint/return-loop.expect | 5 +
> t/chainlint/return-loop.test | 6 +
> t/chainlint/semicolon.expect | 2 +-
> t/chainlint/sqstring-in-sqstring.expect | 4 +
> t/chainlint/sqstring-in-sqstring.test | 5 +
> t/chainlint/t7900-subtree.expect | 13 +-
> t/chainlint/token-pasting.expect | 27 +
> t/chainlint/token-pasting.test | 32 +
> t/chainlint/while-loop.expect | 4 +-
> t/t0027-auto-crlf.sh | 7 +-
> t/t3070-wildmatch.sh | 5 -
> t/test-lib.sh | 12 +-
> 73 files changed, 1439 insertions(+), 449 deletions(-)
> create mode 100755 t/chainlint.pl
> delete mode 100644 t/chainlint.sed
> create mode 100644 t/chainlint/blank-line-before-esac.expect
> create mode 100644 t/chainlint/blank-line-before-esac.test
> create mode 100644 t/chainlint/chain-break-background.expect
> create mode 100644 t/chainlint/chain-break-background.test
> create mode 100644 t/chainlint/chain-break-continue.expect
> create mode 100644 t/chainlint/chain-break-continue.test
> create mode 100644 t/chainlint/chain-break-false.expect
> create mode 100644 t/chainlint/chain-break-false.test
> create mode 100644 t/chainlint/chain-break-return-exit.expect
> create mode 100644 t/chainlint/chain-break-return-exit.test
> create mode 100644 t/chainlint/chain-break-status.expect
> create mode 100644 t/chainlint/chain-break-status.test
> create mode 100644 t/chainlint/chained-block.expect
> create mode 100644 t/chainlint/chained-block.test
> create mode 100644 t/chainlint/chained-subshell.expect
> create mode 100644 t/chainlint/chained-subshell.test
> create mode 100644 t/chainlint/command-substitution-subsubshell.expect
> create mode 100644 t/chainlint/command-substitution-subsubshell.test
> create mode 100644 t/chainlint/double-here-doc.expect
> create mode 100644 t/chainlint/double-here-doc.test
> create mode 100644 t/chainlint/dqstring-line-splice.expect
> create mode 100644 t/chainlint/dqstring-line-splice.test
> create mode 100644 t/chainlint/dqstring-no-interpolate.expect
> create mode 100644 t/chainlint/dqstring-no-interpolate.test
> create mode 100644 t/chainlint/empty-here-doc.expect
> create mode 100644 t/chainlint/empty-here-doc.test
> create mode 100644 t/chainlint/exclamation.expect
> create mode 100644 t/chainlint/exclamation.test
> create mode 100644 t/chainlint/for-loop-abbreviated.expect
> create mode 100644 t/chainlint/for-loop-abbreviated.test
> create mode 100644 t/chainlint/function.expect
> create mode 100644 t/chainlint/function.test
> create mode 100644 t/chainlint/here-doc-indent-operator.expect
> create mode 100644 t/chainlint/here-doc-indent-operator.test
> create mode 100644 t/chainlint/if-condition-split.expect
> create mode 100644 t/chainlint/if-condition-split.test
> create mode 100644 t/chainlint/loop-detect-failure.expect
> create mode 100644 t/chainlint/loop-detect-failure.test
> create mode 100644 t/chainlint/loop-detect-status.expect
> create mode 100644 t/chainlint/loop-detect-status.test
> create mode 100644 t/chainlint/loop-upstream-pipe.expect
> create mode 100644 t/chainlint/loop-upstream-pipe.test
> create mode 100644 t/chainlint/nested-loop-detect-failure.expect
> create mode 100644 t/chainlint/nested-loop-detect-failure.test
> create mode 100644 t/chainlint/one-liner-for-loop.expect
> create mode 100644 t/chainlint/one-liner-for-loop.test
> create mode 100644 t/chainlint/return-loop.expect
> create mode 100644 t/chainlint/return-loop.test
> create mode 100644 t/chainlint/sqstring-in-sqstring.expect
> create mode 100644 t/chainlint/sqstring-in-sqstring.test
> create mode 100644 t/chainlint/token-pasting.expect
> create mode 100644 t/chainlint/token-pasting.test
This looks like it was a lot of work. And that it would be a lot of work
to review, too, and certainly even more work to maintain.
Are we really sure that we want to burden the Git project with this much
stuff that is not actually related to Git's core functionality?
It would be one thing if we could use a well-maintained third-party tool
to do this job. But adding this to our plate? I hope we can avoid that.
Ciao,
Dscho
* Re: several messages
2022-09-02 12:42 ` several messages Johannes Schindelin
@ 2022-09-02 18:16 ` Eric Sunshine
2022-09-02 18:34 ` Jeff King
0 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine @ 2022-09-02 18:16 UTC (permalink / raw)
To: Johannes Schindelin
Cc: Eric Sunshine via GitGitGadget,
Ævar Arnfjörð Bjarmason, Git List, Jeff King,
Elijah Newren, Fabian Stelzer
On Fri, Sep 2, 2022 at 8:42 AM Johannes Schindelin
<Johannes.Schindelin@gmx.de> wrote:
> On Thu, 1 Sep 2022, Eric Sunshine via GitGitGadget wrote:
> > t/chainlint.pl | 730 ++++++++++++++++++
> > t/chainlint.sed | 399 ----------
> > t/chainlint/blank-line-before-esac.expect | 18 +
> > t/chainlint/blank-line-before-esac.test | 19 +
> > ...
>
> This looks like it was a lot of work. And that it would be a lot of work
> to review, too, and certainly even more work to maintain.
>
> Are we really sure that we want to burden the Git project with this much
> stuff that is not actually related to Git's core functionality?
>
> It would be one thing if we could use a well-maintained third-party tool
> to do this job. But adding this to our plate? I hope we can avoid that.
I understand your concerns about review and maintenance burden, and
you're not the first to make such observations; when chainlint.sed was
submitted, it was greeted with similar concerns[1,2], all very
understandable. The key takeaway[3] from that conversation, though,
was that, unlike user-facing features which must be reviewed in detail
and maintained in perpetuity, this is a mere developer aid which can
be easily ejected from the project if it ever becomes a maintenance
burden or shows itself to be unreliable. Potential maintenance burden
aside, a very real benefit of such a tool is that it should help
prevent bugs from slipping into the project going forward[4], which is
indeed the aim of all our developer-focused aids.
In more practical terms, despite initial concerns, in the 4+ years
since its introduction, the maintenance cost of chainlint.sed has been
nearly zero. Very early on, there was a report[5] that chainlint.sed
was showing a false-positive in a `contrib` test script; the developer
quickly responded with a fix[6]. The only other maintenance issues
were a couple dead-simple changes[7,8] to shorten "labels" to support
older versions of `sed`. (As for the chainlint self-tests, the
maintenance cost has been exactly zero). My hope is that chainlint.pl
should have a similar track-record, but it can easily be dropped from
the project if not.
[1]: https://lore.kernel.org/git/xmqqk1q11mkj.fsf@gitster-ct.c.googlers.com/
[2]: https://lore.kernel.org/git/20180712165608.GA10515@sigill.intra.peff.net/
[3]: https://lore.kernel.org/git/CAPig+cRmAkiYqFXwRAkQALDoOo-79r2iAumdEJEZhBnETvL-fw@mail.gmail.com/
[4]: https://lore.kernel.org/git/xmqqin5kw7q3.fsf@gitster-ct.c.googlers.com/
[5]: https://lore.kernel.org/git/20180730181356.GA156463@aiede.svl.corp.google.com/
[6]: https://lore.kernel.org/git/20180807082135.60913-1-sunshine@sunshineco.com/
[7]: https://lore.kernel.org/git/20180824152016.20286-5-avarab@gmail.com/
[8]: https://lore.kernel.org/git/d15ed626de65c51ef2ba31020eeb2111fb8e091f.1596675905.git.gitgitgadget@gmail.com/
* Re: several messages
2022-09-02 18:16 ` Eric Sunshine
@ 2022-09-02 18:34 ` Jeff King
2022-09-02 18:44 ` Junio C Hamano
0 siblings, 1 reply; 131+ messages in thread
From: Jeff King @ 2022-09-02 18:34 UTC (permalink / raw)
To: Eric Sunshine
Cc: Johannes Schindelin, Eric Sunshine via GitGitGadget,
Ævar Arnfjörð Bjarmason, Git List, Elijah Newren,
Fabian Stelzer
On Fri, Sep 02, 2022 at 02:16:21PM -0400, Eric Sunshine wrote:
> > It would be one thing if we could use a well-maintained third-party tool
> > to do this job. But adding this to our plate? I hope we can avoid that.
>
> I understand your concerns about review and maintenance burden, and
> you're not the first to make such observations; when chainlint.sed was
> submitted, it was greeted with similar concerns[1,2], all very
> understandable. The key takeaway[3] from that conversation, though,
> was that, unlike user-facing features which must be reviewed in detail
> and maintained in perpetuity, this is a mere developer aid which can
> be easily ejected from the project if it ever becomes a maintenance
> burden or shows itself to be unreliable. Potential maintenance burden
> aside, a very real benefit of such a tool is that it should help
> prevent bugs from slipping into the project going forward[4], which is
> indeed the aim of all our developer-focused aids.
Thanks for this response and especially the links. My initial gut
response was similar to Dscho's. Which is not surprising, because it
apparently was also my initial response to chainlint.sed back then. ;)
But I do think that chainlint.sed has proven itself to be both useful
and not much of a maintenance burden. My only real complaint was the
additional runtime in a few corner cases, and that is exactly what
you're addressing here.
I'm not excited about carefully reviewing it. At the same time, given
the low stakes, I'm kind of willing to accept that between the tests and
the results of running it on the current code base, the proof is in the
pudding.
-Peff
* Re: several messages
2022-09-02 18:34 ` Jeff King
@ 2022-09-02 18:44 ` Junio C Hamano
0 siblings, 0 replies; 131+ messages in thread
From: Junio C Hamano @ 2022-09-02 18:44 UTC (permalink / raw)
To: Jeff King
Cc: Eric Sunshine, Johannes Schindelin,
Eric Sunshine via GitGitGadget,
Ævar Arnfjörð Bjarmason, Git List, Elijah Newren,
Fabian Stelzer
Jeff King <peff@peff.net> writes:
> Thanks for this response and especially the links. My initial gut
> response was similar to Dscho's. Which is not surprising, because it
> apparently was also my initial response to chainlint.sed back then. ;)
>
> But I do think that chainlint.sed has proven itself to be both useful
> and not much of a maintenance burden. My only real complaint was the
> additional runtime in a few corner cases, and that is exactly what
> you're addressing here.
I have nothing to add to the above ;-) Thanks all (including Dscho
who made us be more explicit in pros-and-cons).
* Re: [PATCH 00/18] make test "linting" more comprehensive
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
` (17 preceding siblings ...)
2022-09-01 0:29 ` [PATCH 18/18] t: retire unused chainlint.sed Eric Sunshine via GitGitGadget
@ 2022-09-11 5:28 ` Jeff King
2022-09-11 7:01 ` Eric Sunshine
18 siblings, 1 reply; 131+ messages in thread
From: Jeff King @ 2022-09-11 5:28 UTC (permalink / raw)
To: Eric Sunshine via GitGitGadget
Cc: git, Elijah Newren, Ævar Arnfjörð Bjarmason,
Fabian Stelzer, Johannes Schindelin, Eric Sunshine
On Thu, Sep 01, 2022 at 12:29:38AM +0000, Eric Sunshine via GitGitGadget wrote:
> A while back, Peff successfully nerd-sniped[1] me into tackling a
> long-brewing idea I had about (possibly) improving "chainlint" performance
Oops, sorry. :)
I gave this a read-through, and it looks sensible overall. I have to
admit that I did not carefully check all of your regexes. Given the
relatively low stakes of the code (as an internal build-time tool only)
and the set of tests accompanying it, I'm willing to assume it's good
enough until we see counter-examples.
I posted some timings and thoughts on the use of threads elsewhere. But
in the end the timings are close enough that I don't care that much
either way.
I'd also note that I got some first-hand experience with the script as I
merged it with all of my other long-brewing topics, and it found a half
dozen spots, mostly LOOP annotations. At least one was a real "oops,
we'd miss a bug in Git here" spot. Several were "we'd probably notice
the problem because the loop output wouldn't be as expected". One was a
"we're on the left-hand of a pipe, so the exit code doesn't matter
anyway" case, but I am more than happy to fix those if it lets us be
linter-clean.
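The "left-hand of a pipe" case can be sketched minimally: a pipeline's exit status is that of its rightmost command, so a failing producer on the left is invisible to the &&-chain regardless of how well-formed the chain is.

```shell
#!/bin/sh
# A pipeline's exit status comes from its rightmost command, so a
# failure on the left-hand side cannot break the &&-chain anyway.
pipe_status() {
	false | cat	# "false" fails, but "cat" succeeds
}
pipe_status && echo "the pipeline hides the producer's failure"
```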
The output took me a minute to adjust to, just because it feels pretty
jumbled when there are several cases. Mostly this is because the
script eats indentation. So it's hard to see the "# chainlint:" comment
starts, let alone the ?! annotations. Here's an example:
-- >8 --
# chainlint: t4070-diff-pairs.sh
# chainlint: split input across multiple diff-pairs
write_script split-raw-diff "$PERL_PATH" <<-EOF &&
git diff-tree -p -M -C -C base new > expect &&
git diff-tree -r -z -M -C -C base new |
./split-raw-diff &&
for i in diff* ; do
git diff-pairs -p < $i ?!LOOP?!
done > actual &&
test_cmp expect actual
# chainlint: perf/p5305-pack-limits.sh
# chainlint: set up delta islands
head=$(git rev-parse HEAD) &&
git for-each-ref --format="delete %(refname)" |
git update-ref --no-deref --stdin &&
n=0 &&
fork=0 &&
git rev-list --first-parent $head |
while read commit ; do
n=$((n+1)) ?!AMP?!
if test "$n" = 100 ; then
echo "create refs/forks/$fork/master $commit" ?!AMP?!
fork=$((fork+1)) ?!AMP?!
n=0
fi ?!LOOP?!
done |
git update-ref --stdin &&
git config pack.island "refs/forks/([0-9]*)/"
-- 8< --
It wasn't too bad once I got the hang of it, but I wonder if a user
writing a single test for the first time may get a bit overwhelmed. I
assume that the indentation is removed as part of the normalization (I
notice extra whitespace around "<", too). That might be hard to address.
I wonder if color output for "# chainlint" and "?!" annotations would
help, too. It looks like that may be tricky, though, because the
annotations are re-parsed internally in some cases.
-Peff
* Re: [PATCH 00/18] make test "linting" more comprehensive
2022-09-11 5:28 ` [PATCH 00/18] make test "linting" more comprehensive Jeff King
@ 2022-09-11 7:01 ` Eric Sunshine
2022-09-11 18:31 ` Jeff King
0 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine @ 2022-09-11 7:01 UTC (permalink / raw)
To: Jeff King
Cc: Eric Sunshine via GitGitGadget, Git List, Elijah Newren,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin
On Sun, Sep 11, 2022 at 1:28 AM Jeff King <peff@peff.net> wrote:
> On Thu, Sep 01, 2022 at 12:29:38AM +0000, Eric Sunshine via GitGitGadget wrote:
> > A while back, Peff successfully nerd-sniped[1] me into tackling a
> > long-brewing idea I had about (possibly) improving "chainlint" performance
>
> I gave this a read-through, and it looks sensible overall. I have to
> admit that I did not carefully check all of your regexes. Given the
> relatively low stakes of the code (as an internal build-time tool only)
> and the set of tests accompanying it, I'm willing to assume it's good
> enough until we see counter-examples.
Thanks for the feedback.
> I posted some timings and thoughts on the use of threads elsewhere. But
> in the end the timings are close enough that I don't care that much
> either way.
I ran my eye over that message quickly and have been meaning to dig
into it and give it a proper response but haven't yet found the time.
> I'd also note that I got some first-hand experience with the script as I
> merged it with all of my other long-brewing topics, and it found a half
> dozen spots, mostly LOOP annotations. At least one was a real "oops,
> we'd miss a bug in Git here" spot. Several were "we'd probably notice
> the problem because the loop output wouldn't be as expected". One was a
> "we're on the left-hand of a pipe, so the exit code doesn't matter
> anyway" case, but I am more than happy to fix those if it lets us be
> linter-clean.
Indeed, I'm not super happy about the linter complaining about cases
which obviously can't have an impact on the test's outcome, but (as
mentioned elsewhere in the thread) I finally convinced myself that the
relatively low number of these was outweighed by the quite large
number of cases caught by the linter which could have let real
problems slip through. Perhaps some day the linter can be made smarter
about these cases.
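One such case the new linter already understands (per the cover letter) is a hypothetical sketch like the following, where failure is signaled explicitly with "false", so the &&-chain leading up to it cannot change the outcome:

```shell
#!/bin/sh
# A chain break that is immaterial: "false" forces failure regardless,
# so the unchained command before it cannot mask anything.
signals_failure() {
	(
		echo "diagnostic output"	# chain break here is harmless...
		false				# ...since "false" fails anyway
	)
}
signals_failure || echo "failure still propagates"
```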
> The output took me a minute to adjust to, just because it feels pretty
> jumbled when there are several cases. Mostly this is because the
> script eats indentation. So it's hard to see the "# chainlint:" comment
> starts, let alone the ?! annotations. Here's an example:
> [...snip...]
> It wasn't too bad once I got the hang of it, but I wonder if a user
> writing a single test for the first time may get a bit overwhelmed. I
> assume that the indentation is removed as part of the normalization (I
> notice extra whitespace around "<", too). That might be hard to address.
The script implements a proper parser and lexer, and the lexer is
tokenizing the input (throwing away whitespace in the process), thus
by the time the parser notices something to complain about with a
"?!FOO?!" annotation, the original whitespace is long gone, and it
just emits the token stream with "?!FOO?!" inserted at the correct
place. In retrospect, the way this perhaps should have been done would
have been for the parser to instruct the lexer to emit a "?!FOO?!"
annotation at the appropriate point in the input stream. But even that
might get a bit hairy since there are cases in which the parser
back-patches by removing some "?!AMP?!" annotations when it has
decided that it doesn't need to complain about &&-chain breakage. I'm
sure it's fixable, but don't know how important it is at this point.
> I wonder if color output for "# chainlint" and "?!" annotations would
> help, too. It looks like that may be tricky, though, because the
> annotations re-parsed internally in some cases.
I had the exact same thought about coloring the "# chainlint:" lines
and "?!FOO?!" annotations, and how helpful that could be to anyone
(not just newcomers). Aside from not having much free time these days,
a big reason I didn't tackle it was because doing so properly probably
means relying upon some third-party Perl module, and I intentionally
wanted to keep the linter independent of add-on modules. Even without
a "coloring" module of some sort, if Perl had a standard `curses`
module (which it doesn't), then it would have been easy enough to ask
`curses` for the proper color codes and apply them as needed. I'm
old-school, so it doesn't appeal to me, but an alternative would be to
assume it's safe to use ANSI color codes, but even that may have to be
done carefully (i.e. checking TERM and accepting only some whitelisted
entries, and worrying about Windows consoles).
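The cautious approach described here could be sketched roughly as follows; the allowlist entries and color choices are illustrative assumptions, not anything from the patch:

```shell
#!/bin/sh
# Emit raw ANSI codes only when TERM matches a small allowlist, so
# unknown or dumb terminals get plain, uncolored text.
ansi_color() {
	case "$TERM" in
	xterm*|screen*|tmux*|vt100)
		printf '\033[%sm' "$1" ;;	# e.g. 31 = red, 0 = reset
	esac
}
TERM=xterm
printf '%s?!AMP?!%s\n' "$(ansi_color 31)" "$(ansi_color 0)"	# colored
TERM=dumb
printf '%s?!AMP?!%s\n' "$(ansi_color 31)" "$(ansi_color 0)"	# plain
```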
* Re: [PATCH 00/18] make test "linting" more comprehensive
2022-09-11 7:01 ` Eric Sunshine
@ 2022-09-11 18:31 ` Jeff King
2022-09-12 23:17 ` Eric Sunshine
0 siblings, 1 reply; 131+ messages in thread
From: Jeff King @ 2022-09-11 18:31 UTC (permalink / raw)
To: Eric Sunshine
Cc: Eric Sunshine via GitGitGadget, Git List, Elijah Newren,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin
On Sun, Sep 11, 2022 at 03:01:41AM -0400, Eric Sunshine wrote:
> > I wonder if color output for "# chainlint" and "?!" annotations would
> > help, too. It looks like that may be tricky, though, because the
> > annotations are re-parsed internally in some cases.
>
> I had the exact same thought about coloring the "# chainlint:" lines
> and "?!FOO?!" annotations, and how helpful that could be to anyone
> (not just newcomers). Aside from not having much free time these days,
> a big reason I didn't tackle it was because doing so properly probably
> means relying upon some third-party Perl module, and I intentionally
> wanted to keep the linter independent of add-on modules. Even without
> a "coloring" module of some sort, if Perl had a standard `curses`
> module (which it doesn't), then it would have been easy enough to ask
> `curses` for the proper color codes and apply them as needed. I'm
> old-school, so it doesn't appeal to me, but an alternative would be to
> assume it's safe to use ANSI color codes, but even that may have to be
> done carefully (i.e. checking TERM and accepting only some whitelisted
> entries, and worrying about Windows consoles).
We're pretty happy to just use ANSI in the rest of Git, but there is a
complication on Windows. See compat/winansi.c where we decode those
internally into SetConsoleTextAttribute() calls.
I think we can live with it as-is for now and see how people react. If
lots of people are getting confused by the output, then that motivates
finding a solution. If not, then it's probably not worth the time.
-Peff
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 00/18] make test "linting" more comprehensive
2022-09-11 18:31 ` Jeff King
@ 2022-09-12 23:17 ` Eric Sunshine
2022-09-13 0:04 ` Jeff King
0 siblings, 1 reply; 131+ messages in thread
From: Eric Sunshine @ 2022-09-12 23:17 UTC (permalink / raw)
To: Jeff King
Cc: Eric Sunshine via GitGitGadget, Git List, Elijah Newren,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin
On Sun, Sep 11, 2022 at 2:31 PM Jeff King <peff@peff.net> wrote:
> On Sun, Sep 11, 2022 at 03:01:41AM -0400, Eric Sunshine wrote:
> > > I wonder if color output for "# chainlint" and "?!" annotations would
> > > help, too. It looks like that may be tricky, though, because the
> > > annotations are re-parsed internally in some cases.
> >
> > I had the exact same thought about coloring the "# chainlint:" lines
> > and "?!FOO?!" annotations, and how helpful that could be to anyone
> > (not just newcomers). Aside from not having much free time these days,
> > a big reason I didn't tackle it was because doing so properly probably
> > means relying upon some third-party Perl module, and I intentionally
> > wanted to keep the linter independent of add-on modules. Even without
> > a "coloring" module of some sort, if Perl had a standard `curses`
> > module (which it doesn't), then it would have been easy enough to ask
> > `curses` for the proper color codes and apply them as needed. I'm
> > old-school, so it doesn't appeal to me, but an alternative would be to
> > assume it's safe to use ANSI color codes, but even that may have to be
> > done carefully (i.e. checking TERM and accepting only some whitelisted
> > entries, and worrying about Windows consoles).
>
> We're pretty happy to just use ANSI in the rest of Git, but there is a
> complication on Windows. See compat/winansi.c where we decode those
> internally into SetConsoleTextAttribute() calls.
>
> I think we can live with it as-is for now and see how people react. If
> lots of people are getting confused by the output, then that motivates
> finding a solution. If not, then it's probably not worth the time.
Well, you nerd-sniped me anyhow. The result is at [1]. Following the
example of t/test-lib.sh, it uses `tput` if available to avoid
hardcoding color codes, and `tput` is invoked lazily, only if it
detects problems in the tests, so a normal (non-problematic) run
doesn't incur the overhead of shelling out to `tput`.
My first attempt just assumed ANSI color codes, but then I discovered
the precedent set by t/test-lib.sh of using `tput`, so I went with
that (since I'm old-school). The ANSI-only version was, of course,
much simpler.
[1]: https://lore.kernel.org/git/pull.1324.git.git.1663023888412.gitgitgadget@gmail.com/
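The lazy-`tput` arrangement described above might look something like the following sketch (the function names are illustrative, not lifted from the actual patch):

```shell
# Ask tput for color codes only the first time a problem is reported,
# so a clean run never shells out to tput at all.
color_init() {
	test -n "$colors_ready" && return
	colors_ready=yes
	if command -v tput >/dev/null 2>&1
	then
		c_red=$(tput setaf 1 2>/dev/null)
		c_reset=$(tput sgr0 2>/dev/null)
	fi
}

report_problem() {
	color_init	# lazy: first problem pays the tput cost
	printf '%s%s%s\n' "$c_red" "$1" "$c_reset"
}

report_problem 'broken &&-chain'
```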
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 00/18] make test "linting" more comprehensive
2022-09-12 23:17 ` Eric Sunshine
@ 2022-09-13 0:04 ` Jeff King
0 siblings, 0 replies; 131+ messages in thread
From: Jeff King @ 2022-09-13 0:04 UTC (permalink / raw)
To: Eric Sunshine
Cc: Eric Sunshine via GitGitGadget, Git List, Elijah Newren,
Ævar Arnfjörð Bjarmason, Fabian Stelzer,
Johannes Schindelin
On Mon, Sep 12, 2022 at 07:17:12PM -0400, Eric Sunshine wrote:
> > I think we can live with it as-is for now and see how people react. If
> > lots of people are getting confused by the output, then that motivates
> > finding a solution. If not, then it's probably not worth the time.
>
> Well, you nerd-sniped me anyhow. The result is at [1]. Following the
It seems we've discovered my true talent. :)
> example of t/test-lib.sh, it uses `tput` if available to avoid
> hardcoding color codes, and `tput` is invoked lazily, only if it
> detects problems in the tests, so a normal (non-problematic) run
> doesn't incur the overhead of shelling out to `tput`.
Ah, of course. I didn't think about the fact that the regular tests
already had to deal with this problem. Following that lead makes perfect
sense.
-Peff
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH mptcp-next] mptcp: drop legacy code.
@ 2023-06-12 16:02 Paolo Abeni
2023-06-13 17:37 ` mptcp: drop legacy code.: Tests Results MPTCP CI
0 siblings, 1 reply; 131+ messages in thread
From: Paolo Abeni @ 2023-06-12 16:02 UTC (permalink / raw)
To: mptcp
Thanks to the previous patch we can finally drop the "temporary hack"
used to detect rx eof.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
"previous patch" should be replace with a proper commit reference
once such patch is merged on -net, unless we target both patches on
the same tree (but this one is really net-next material, while the fix
is for -net)
---
net/mptcp/protocol.c | 49 --------------------------------------------
net/mptcp/protocol.h | 5 +----
2 files changed, 1 insertion(+), 53 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 8f3e50065c13..feaedfd2b3eb 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -898,49 +898,6 @@ bool mptcp_schedule_work(struct sock *sk)
return false;
}
-void mptcp_subflow_eof(struct sock *sk)
-{
- if (!test_and_set_bit(MPTCP_WORK_EOF, &mptcp_sk(sk)->flags))
- mptcp_schedule_work(sk);
-}
-
-static void mptcp_check_for_eof(struct mptcp_sock *msk)
-{
- struct mptcp_subflow_context *subflow;
- struct sock *sk = (struct sock *)msk;
- int receivers = 0;
-
- mptcp_for_each_subflow(msk, subflow)
- receivers += !subflow->rx_eof;
- if (receivers)
- return;
-
- if (!(sk->sk_shutdown & RCV_SHUTDOWN)) {
- /* hopefully temporary hack: propagate shutdown status
- * to msk, when all subflows agree on it
- */
- WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);
-
- smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
- sk->sk_data_ready(sk);
- }
-
- switch (sk->sk_state) {
- case TCP_ESTABLISHED:
- inet_sk_state_store(sk, TCP_CLOSE_WAIT);
- break;
- case TCP_FIN_WAIT1:
- inet_sk_state_store(sk, TCP_CLOSING);
- break;
- case TCP_FIN_WAIT2:
- inet_sk_state_store(sk, TCP_CLOSE);
- break;
- default:
- return;
- }
- mptcp_close_wake_up(sk);
-}
-
static struct sock *mptcp_subflow_recv_lookup(const struct mptcp_sock *msk)
{
struct mptcp_subflow_context *subflow;
@@ -2193,9 +2150,6 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
break;
}
- if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags))
- mptcp_check_for_eof(msk);
-
if (sk->sk_shutdown & RCV_SHUTDOWN) {
/* race breaker: the shutdown could be after the
* previous receive queue check
@@ -2726,9 +2680,6 @@ static void mptcp_worker(struct work_struct *work)
mptcp_pm_nl_work(msk);
- if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags))
- mptcp_check_for_eof(msk);
-
mptcp_check_send_data_fin(sk);
mptcp_check_data_fin_ack(sk);
mptcp_check_data_fin(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index d2e59cf33f57..528586e2ed73 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -113,7 +113,6 @@
/* MPTCP socket atomic flags */
#define MPTCP_NOSPACE 1
#define MPTCP_WORK_RTX 2
-#define MPTCP_WORK_EOF 3
#define MPTCP_FALLBACK_DONE 4
#define MPTCP_WORK_CLOSE_SUBFLOW 5
@@ -481,14 +480,13 @@ struct mptcp_subflow_context {
send_mp_fail : 1,
send_fastclose : 1,
send_infinite_map : 1,
- rx_eof : 1,
remote_key_valid : 1, /* received the peer key from */
disposable : 1, /* ctx can be free at ulp release time */
stale : 1, /* unable to snd/rcv data, do not use for xmit */
local_id_valid : 1, /* local_id is correctly initialized */
valid_csum_seen : 1, /* at least one csum validated */
is_mptfo : 1, /* subflow is doing TFO */
- __unused : 8;
+ __unused : 9;
enum mptcp_data_avail data_avail;
bool scheduled;
u32 remote_nonce;
@@ -744,7 +742,6 @@ static inline u64 mptcp_expand_seq(u64 old_seq, u64 cur_seq, bool use_64bit)
void __mptcp_check_push(struct sock *sk, struct sock *ssk);
void __mptcp_data_acked(struct sock *sk);
void __mptcp_error_report(struct sock *sk);
-void mptcp_subflow_eof(struct sock *sk);
bool mptcp_update_rcv_data_fin(struct mptcp_sock *msk, u64 data_fin_seq, bool use_64bit);
static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
{
--
2.40.1
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: mptcp: drop legacy code.: Tests Results
2023-06-12 16:02 [PATCH mptcp-next] mptcp: drop legacy code Paolo Abeni
@ 2023-06-13 17:37 ` MPTCP CI
2023-06-16 22:54 ` several messages Mat Martineau
0 siblings, 1 reply; 131+ messages in thread
From: MPTCP CI @ 2023-06-13 17:37 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
Hi Paolo,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/6108358801883136
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6108358801883136/summary/summary.txt
- KVM Validation: debug (only selftest_mptcp_join):
- Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/4630615174152192
- Summary: https://api.cirrus-ci.com/v1/artifact/task/4630615174152192/summary/summary.txt
- KVM Validation: normal (only selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/5545408848461824
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5545408848461824/summary/summary.txt
- KVM Validation: debug (except selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/6671308755304448
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6671308755304448/summary/summary.txt
Initiator: Matthieu Baerts
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/bdbf7858d22c
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2023-06-13 17:37 ` mptcp: drop legacy code.: Tests Results MPTCP CI
@ 2023-06-16 22:54 ` Mat Martineau
0 siblings, 0 replies; 131+ messages in thread
From: Mat Martineau @ 2023-06-16 22:54 UTC (permalink / raw)
To: Paolo Abeni, mptcp
[-- Attachment #1: Type: text/plain, Size: 6401 bytes --]
On Mon, 12 Jun 2023, Paolo Abeni wrote:
> Thanks to the previous patch we can finally drop the "temporary hack"
> used to detect rx eof.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> "previous patch" should be replace with a proper commit reference
> once such patch is merged on -net, unless we target both patches on
> the same tree (but this one is really net-next material, while the fix
> is for -net)
Hi Paolo,
To be clear, the "previous patch" is "mptcp: consolidate fallback and non
fallback state machine"?
> ---
> net/mptcp/protocol.c | 49 --------------------------------------------
> net/mptcp/protocol.h | 5 +----
> 2 files changed, 1 insertion(+), 53 deletions(-)
Hooray for deleting code!
Reviewed-by: Mat Martineau <martineau@kernel.org>
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 8f3e50065c13..feaedfd2b3eb 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -898,49 +898,6 @@ bool mptcp_schedule_work(struct sock *sk)
> return false;
> }
>
> -void mptcp_subflow_eof(struct sock *sk)
> -{
> - if (!test_and_set_bit(MPTCP_WORK_EOF, &mptcp_sk(sk)->flags))
> - mptcp_schedule_work(sk);
> -}
> -
> -static void mptcp_check_for_eof(struct mptcp_sock *msk)
> -{
> - struct mptcp_subflow_context *subflow;
> - struct sock *sk = (struct sock *)msk;
> - int receivers = 0;
> -
> - mptcp_for_each_subflow(msk, subflow)
> - receivers += !subflow->rx_eof;
> - if (receivers)
> - return;
> -
> - if (!(sk->sk_shutdown & RCV_SHUTDOWN)) {
> - /* hopefully temporary hack: propagate shutdown status
> - * to msk, when all subflows agree on it
> - */
> - WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);
> -
> - smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
> - sk->sk_data_ready(sk);
> - }
> -
> - switch (sk->sk_state) {
> - case TCP_ESTABLISHED:
> - inet_sk_state_store(sk, TCP_CLOSE_WAIT);
> - break;
> - case TCP_FIN_WAIT1:
> - inet_sk_state_store(sk, TCP_CLOSING);
> - break;
> - case TCP_FIN_WAIT2:
> - inet_sk_state_store(sk, TCP_CLOSE);
> - break;
> - default:
> - return;
> - }
> - mptcp_close_wake_up(sk);
> -}
> -
> static struct sock *mptcp_subflow_recv_lookup(const struct mptcp_sock *msk)
> {
> struct mptcp_subflow_context *subflow;
> @@ -2193,9 +2150,6 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
> break;
> }
>
> - if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags))
> - mptcp_check_for_eof(msk);
> -
> if (sk->sk_shutdown & RCV_SHUTDOWN) {
> /* race breaker: the shutdown could be after the
> * previous receive queue check
> @@ -2726,9 +2680,6 @@ static void mptcp_worker(struct work_struct *work)
>
> mptcp_pm_nl_work(msk);
>
> - if (test_and_clear_bit(MPTCP_WORK_EOF, &msk->flags))
> - mptcp_check_for_eof(msk);
> -
> mptcp_check_send_data_fin(sk);
> mptcp_check_data_fin_ack(sk);
> mptcp_check_data_fin(sk);
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index d2e59cf33f57..528586e2ed73 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -113,7 +113,6 @@
> /* MPTCP socket atomic flags */
> #define MPTCP_NOSPACE 1
> #define MPTCP_WORK_RTX 2
> -#define MPTCP_WORK_EOF 3
> #define MPTCP_FALLBACK_DONE 4
> #define MPTCP_WORK_CLOSE_SUBFLOW 5
>
> @@ -481,14 +480,13 @@ struct mptcp_subflow_context {
> send_mp_fail : 1,
> send_fastclose : 1,
> send_infinite_map : 1,
> - rx_eof : 1,
> remote_key_valid : 1, /* received the peer key from */
> disposable : 1, /* ctx can be free at ulp release time */
> stale : 1, /* unable to snd/rcv data, do not use for xmit */
> local_id_valid : 1, /* local_id is correctly initialized */
> valid_csum_seen : 1, /* at least one csum validated */
> is_mptfo : 1, /* subflow is doing TFO */
> - __unused : 8;
> + __unused : 9;
> enum mptcp_data_avail data_avail;
> bool scheduled;
> u32 remote_nonce;
> @@ -744,7 +742,6 @@ static inline u64 mptcp_expand_seq(u64 old_seq, u64 cur_seq, bool use_64bit)
> void __mptcp_check_push(struct sock *sk, struct sock *ssk);
> void __mptcp_data_acked(struct sock *sk);
> void __mptcp_error_report(struct sock *sk);
> -void mptcp_subflow_eof(struct sock *sk);
> bool mptcp_update_rcv_data_fin(struct mptcp_sock *msk, u64 data_fin_seq, bool use_64bit);
> static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
> {
> --
> 2.40.1
>
>
>
On Tue, 13 Jun 2023, MPTCP CI wrote:
> Hi Paolo,
>
> Thank you for your modifications, that's great!
>
> Our CI did some validations and here is its report:
>
> - KVM Validation: normal (except selftest_mptcp_join):
> - Success! ✅:
> - Task: https://cirrus-ci.com/task/6108358801883136
> - Summary: https://api.cirrus-ci.com/v1/artifact/task/6108358801883136/summary/summary.txt
>
> - KVM Validation: debug (only selftest_mptcp_join):
> - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
> - Task: https://cirrus-ci.com/task/4630615174152192
> - Summary: https://api.cirrus-ci.com/v1/artifact/task/4630615174152192/summary/summary.txt
>
> - KVM Validation: normal (only selftest_mptcp_join):
> - Success! ✅:
> - Task: https://cirrus-ci.com/task/5545408848461824
> - Summary: https://api.cirrus-ci.com/v1/artifact/task/5545408848461824/summary/summary.txt
>
> - KVM Validation: debug (except selftest_mptcp_join):
> - Success! ✅:
> - Task: https://cirrus-ci.com/task/6671308755304448
> - Summary: https://api.cirrus-ci.com/v1/artifact/task/6671308755304448/summary/summary.txt
>
> Initiator: Matthieu Baerts
> Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/bdbf7858d22c
>
>
> If there are some issues, you can reproduce them using the same environment as
> the one used by the CI thanks to a docker image, e.g.:
>
> $ cd [kernel source code]
> $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
> --pull always mptcp/mptcp-upstream-virtme-docker:latest \
> auto-debug
>
> For more details:
>
> https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
>
>
> Please note that despite all the efforts that have been already done to have a
> stable tests suite when executed on a public CI like here, it is possible some
> reported issues are not due to your modifications. Still, do not hesitate to
> help us improve that ;-)
>
> Cheers,
> MPTCP GH Action bot
> Bot operated by Matthieu Baerts (Tessares)
>
>
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH mptcp-next] selftests: mptcp: tweak simult_flows for debug kernels.
@ 2022-06-16 13:55 Paolo Abeni
2022-06-16 15:27 ` selftests: mptcp: tweak simult_flows for debug kernels.: Tests Results MPTCP CI
0 siblings, 1 reply; 131+ messages in thread
From: Paolo Abeni @ 2022-06-16 13:55 UTC (permalink / raw)
To: mptcp
The mentioned test measures the transfer run-time to verify
that the user-space program is able to use the full aggregate B/W.
Even on (virtual) link-speed-bound tests, a debug kernel can slow
down the transfer enough to cause sporadic test failures.
Instead of unconditionally raising the maximum allowed run-time,
raise it only when the running kernel is a debug one, using a
simple/rough heuristic to detect such scenarios.
Note: this intentionally avoids looking for /boot/config-<version> as
the latter file is not always available in our reference CI
environments.
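The detection described above boils down to a couple of lines of shell; the base figure below is an illustrative stand-in for the per-transfer budget computed by run_test():

```shell
# Use the kmemleak debugfs file as a rough "is this a debug kernel?"
# probe and widen the run-time budget when it is present.
base_time=1000		# illustrative per-transfer budget, in ms

slack=50
if [ -f /sys/kernel/debug/kmemleak ]
then
	slack=$((slack + 200))
fi

time_limit=$((base_time + 400 + slack))
echo "$time_limit"
```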
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
tools/testing/selftests/net/mptcp/simult_flows.sh | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
index f441ff7904fc..141fcf0d40d1 100755
--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
+++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
@@ -12,6 +12,7 @@ timeout_test=$((timeout_poll * 2 + 1))
test_cnt=1
ret=0
bail=0
+slack=50
usage() {
echo "Usage: $0 [ -b ] [ -c ] [ -d ]"
@@ -52,6 +53,7 @@ setup()
cout=$(mktemp)
capout=$(mktemp)
size=$((2 * 2048 * 4096))
+
dd if=/dev/zero of=$small bs=4096 count=20 >/dev/null 2>&1
dd if=/dev/zero of=$large bs=4096 count=$((size / 4096)) >/dev/null 2>&1
@@ -104,6 +106,13 @@ setup()
ip -net "$ns3" route add default via dead:beef:3::2
ip netns exec "$ns3" ./pm_nl_ctl limits 1 1
+
+ # debug build can slow down measurably the test program
+ # we use quite tight time limit on the run-time, to ensure
+ # maximum B/W usage.
+ # Use the kmemleak file presence as a rough estimate for this being
+ # a debug kernel and increase the maximum run-time accordingly
+ [ -f /sys/kernel/debug/kmemleak ] && slack=$((slack+200))
}
# $1: ns, $2: port
@@ -241,7 +250,7 @@ run_test()
# mptcp_connect will do some sleeps to allow the mp_join handshake
# completion (see mptcp_connect): 200ms on each side, add some slack
- time=$((time + 450))
+ time=$((time + 400 + $slack))
printf "%-60s" "$msg"
do_transfer $small $large $time
--
2.35.3
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: selftests: mptcp: tweak simult_flows for debug kernels.: Tests Results
2022-06-16 13:55 [PATCH mptcp-next] selftests: mptcp: tweak simult_flows for debug kernels Paolo Abeni
@ 2022-06-16 15:27 ` MPTCP CI
2022-06-17 22:13 ` several messages Mat Martineau
0 siblings, 1 reply; 131+ messages in thread
From: MPTCP CI @ 2022-06-16 15:27 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
Hi Paolo,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal:
- Success! ✅:
- Task: https://cirrus-ci.com/task/5708948505362432
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5708948505362432/summary/summary.txt
- KVM Validation: debug:
- Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_mptcp_join 🔴:
- Task: https://cirrus-ci.com/task/5145998551941120
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5145998551941120/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/727243b29682
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2022-06-16 15:27 ` selftests: mptcp: tweak simult_flows for debug kernels.: Tests Results MPTCP CI
@ 2022-06-17 22:13 ` Mat Martineau
0 siblings, 0 replies; 131+ messages in thread
From: Mat Martineau @ 2022-06-17 22:13 UTC (permalink / raw)
To: Paolo Abeni, mptcp
[-- Attachment #1: Type: text/plain, Size: 3986 bytes --]
On Thu, 16 Jun 2022, Paolo Abeni wrote:
> The mentioned test measures the transfer run-time to verify
> that the user-space program is able to use the full aggregate B/W.
>
> Even on (virtual) link-speed-bound tests, a debug kernel can slow
> down the transfer enough to cause sporadic test failures.
>
> Instead of unconditionally raising the maximum allowed run-time,
> raise it only when the running kernel is a debug one, using a
> simple/rough heuristic to detect such scenarios.
>
> Note: this intentionally avoids looking for /boot/config-<version> as
> the latter file is not always available in our reference CI
> environments.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Looks good, runs fine in my vm with debug kernel config:
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
> ---
> tools/testing/selftests/net/mptcp/simult_flows.sh | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
> index f441ff7904fc..141fcf0d40d1 100755
> --- a/tools/testing/selftests/net/mptcp/simult_flows.sh
> +++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
> @@ -12,6 +12,7 @@ timeout_test=$((timeout_poll * 2 + 1))
> test_cnt=1
> ret=0
> bail=0
> +slack=50
>
> usage() {
> echo "Usage: $0 [ -b ] [ -c ] [ -d ]"
> @@ -52,6 +53,7 @@ setup()
> cout=$(mktemp)
> capout=$(mktemp)
> size=$((2 * 2048 * 4096))
> +
> dd if=/dev/zero of=$small bs=4096 count=20 >/dev/null 2>&1
> dd if=/dev/zero of=$large bs=4096 count=$((size / 4096)) >/dev/null 2>&1
>
> @@ -104,6 +106,13 @@ setup()
> ip -net "$ns3" route add default via dead:beef:3::2
>
> ip netns exec "$ns3" ./pm_nl_ctl limits 1 1
> +
> + # debug build can slow down measurably the test program
> + # we use quite tight time limit on the run-time, to ensure
> + # maximum B/W usage.
> + # Use the kmemleak file presence as a rough estimate for this being
> + # a debug kernel and increase the maximum run-time accordingly
> + [ -f /sys/kernel/debug/kmemleak ] && slack=$((slack+200))
> }
>
> # $1: ns, $2: port
> @@ -241,7 +250,7 @@ run_test()
>
> # mptcp_connect will do some sleeps to allow the mp_join handshake
> # completion (see mptcp_connect): 200ms on each side, add some slack
> - time=$((time + 450))
> + time=$((time + 400 + $slack))
>
> printf "%-60s" "$msg"
> do_transfer $small $large $time
> --
> 2.35.3
>
>
>
On Thu, 16 Jun 2022, MPTCP CI wrote:
> Hi Paolo,
>
> Thank you for your modifications, that's great!
>
> Our CI did some validations and here is its report:
>
> - KVM Validation: normal:
> - Success! ✅:
> - Task: https://cirrus-ci.com/task/5708948505362432
> - Summary: https://api.cirrus-ci.com/v1/artifact/task/5708948505362432/summary/summary.txt
>
> - KVM Validation: debug:
> - Unstable: 3 failed test(s): packetdrill_add_addr selftest_diag selftest_mptcp_join 🔴:
> - Task: https://cirrus-ci.com/task/5145998551941120
> - Summary: https://api.cirrus-ci.com/v1/artifact/task/5145998551941120/summary/summary.txt
>
> Initiator: Patchew Applier
> Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/727243b29682
>
>
> If there are some issues, you can reproduce them using the same environment as
> the one used by the CI thanks to a docker image, e.g.:
>
> $ cd [kernel source code]
> $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
> --pull always mptcp/mptcp-upstream-virtme-docker:latest \
> auto-debug
>
> For more details:
>
> https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
>
>
> Please note that despite all the efforts that have been already done to have a
> stable tests suite when executed on a public CI like here, it is possible some
> reported issues are not due to your modifications. Still, do not hesitate to
> help us improve that ;-)
>
> Cheers,
> MPTCP GH Action bot
> Bot operated by Matthieu Baerts (Tessares)
>
>
--
Mat Martineau
Intel
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH v2 0/3] x86/mm: INVPCID support
@ 2016-01-25 18:37 Andy Lutomirski
2016-01-25 18:57 ` Ingo Molnar
0 siblings, 1 reply; 131+ messages in thread
From: Andy Lutomirski @ 2016-01-25 18:37 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Borislav Petkov, Brian Gerst, Dave Hansen, Linus Torvalds,
Oleg Nesterov, linux-mm, Andrey Ryabinin, Andy Lutomirski
Ingo, before applying this, please apply these two KASAN fixes:
http://lkml.kernel.org/g/1452516679-32040-2-git-send-email-aryabinin@virtuozzo.com
http://lkml.kernel.org/g/1452516679-32040-3-git-send-email-aryabinin@virtuozzo.com
Without those fixes, this series will trigger a KASAN bug.
This is a straightforward speedup on Ivy Bridge and newer, IIRC.
(I tested on Skylake. INVPCID is not available on Sandy Bridge.
I don't have Ivy Bridge, Haswell or Broadwell to test on, so I
could be wrong as to when the feature was introduced.)
I think we should consider these patches separately from the rest
of the PCID stuff -- they barely interact, and this part is much
simpler and is useful on its own.
This is exactly identical to patches 2-4 of the PCID RFC series.
Andy Lutomirski (3):
x86/mm: Add INVPCID helpers
x86/mm: Add a noinvpcid option to turn off INVPCID
x86/mm: If INVPCID is available, use it to flush global mappings
Documentation/kernel-parameters.txt | 2 ++
arch/x86/include/asm/tlbflush.h | 50 +++++++++++++++++++++++++++++++++++++
arch/x86/kernel/cpu/common.c | 16 ++++++++++++
3 files changed, 68 insertions(+)
--
2.5.0
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH v2 0/3] x86/mm: INVPCID support
2016-01-25 18:37 [PATCH v2 0/3] x86/mm: INVPCID support Andy Lutomirski
@ 2016-01-25 18:57 ` Ingo Molnar
2016-01-27 10:09 ` Thomas Gleixner
0 siblings, 1 reply; 131+ messages in thread
From: Ingo Molnar @ 2016-01-25 18:57 UTC (permalink / raw)
To: Andy Lutomirski
Cc: x86, linux-kernel, Borislav Petkov, Brian Gerst, Dave Hansen,
Linus Torvalds, Oleg Nesterov, linux-mm, Andrey Ryabinin
* Andy Lutomirski <luto@kernel.org> wrote:
> Ingo, before applying this, please apply these two KASAN fixes:
>
> http://lkml.kernel.org/g/1452516679-32040-2-git-send-email-aryabinin@virtuozzo.com
> http://lkml.kernel.org/g/1452516679-32040-3-git-send-email-aryabinin@virtuozzo.com
>
> Without those fixes, this series will trigger a KASAN bug.
>
> This is a straightforward speedup on Ivy Bridge and newer, IIRC.
> (I tested on Skylake. INVPCID is not available on Sandy Bridge.
> I don't have Ivy Bridge, Haswell or Broadwell to test on, so I
> could be wrong as to when the feature was introduced.)
>
> I think we should consider these patches separately from the rest
> of the PCID stuff -- they barely interact, and this part is much
> simpler and is useful on its own.
>
> This is exactly identical to patches 2-4 of the PCID RFC series.
>
> Andy Lutomirski (3):
> x86/mm: Add INVPCID helpers
> x86/mm: Add a noinvpcid option to turn off INVPCID
> x86/mm: If INVPCID is available, use it to flush global mappings
>
> Documentation/kernel-parameters.txt | 2 ++
> arch/x86/include/asm/tlbflush.h | 50 +++++++++++++++++++++++++++++++++++++
> arch/x86/kernel/cpu/common.c | 16 ++++++++++++
> 3 files changed, 68 insertions(+)
Ok, I'll pick these up tomorrow unless there are objections.
Thanks,
Ingo
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2016-01-25 18:57 ` Ingo Molnar
@ 2016-01-27 10:09 ` Thomas Gleixner
0 siblings, 0 replies; 131+ messages in thread
From: Thomas Gleixner @ 2016-01-27 10:09 UTC (permalink / raw)
To: Andy Lutomirski, Ingo Molnar
Cc: x86, linux-kernel, Borislav Petkov, Brian Gerst, Dave Hansen,
Linus Torvalds, Oleg Nesterov, linux-mm, Andrey Ryabinin
On Mon, 25 Jan 2016, Andy Lutomirski wrote:
> This is a straightforward speedup on Ivy Bridge and newer, IIRC.
> (I tested on Skylake. INVPCID is not available on Sandy Bridge.
> I don't have Ivy Bridge, Haswell or Broadwell to test on, so I
> could be wrong as to when the feature was introduced.)
Haswell and Broadwell have it. No idea about Ivy Bridge.
Thanks,
tglx
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2016-01-27 10:09 ` Thomas Gleixner
@ 2016-01-29 13:21 ` Borislav Petkov
-1 siblings, 0 replies; 131+ messages in thread
From: Borislav Petkov @ 2016-01-29 13:21 UTC (permalink / raw)
To: Thomas Gleixner
Cc: Andy Lutomirski, Ingo Molnar, x86, linux-kernel, Brian Gerst,
Dave Hansen, Linus Torvalds, Oleg Nesterov, linux-mm,
Andrey Ryabinin
On Wed, Jan 27, 2016 at 11:09:04AM +0100, Thomas Gleixner wrote:
> On Mon, 25 Jan 2016, Andy Lutomirski wrote:
> > This is a straightforward speedup on Ivy Bridge and newer, IIRC.
> > (I tested on Skylake. INVPCID is not available on Sandy Bridge.
> > I don't have Ivy Bridge, Haswell or Broadwell to test on, so I
> > could be wrong as to when the feature was introduced.)
>
> Haswell and Broadwell have it. No idea about ivy bridge.
I have an IVB model 58. It doesn't have it:
CPUID_0x00000007: EAX=0x00000000, EBX=0x00000281, ECX=0x00000000, EDX=0x00000000
INVPCID should be EBX[10].
--
Regards/Gruss,
Boris.
ECO tip #101: Trim your mails when you reply.
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 00/13] Add VT-d Posted-Interrupts support for KVM
@ 2014-11-10 6:26 Feng Wu
2014-11-10 6:26 ` [PATCH 13/13] iommu/vt-d: Add a command line parameter for VT-d posted-interrupts Feng Wu
0 siblings, 1 reply; 131+ messages in thread
From: Feng Wu @ 2014-11-10 6:26 UTC (permalink / raw)
To: gleb, pbonzini, dwmw2, joro, tglx, mingo, hpa, x86
Cc: kvm, iommu, linux-kernel, Feng Wu
VT-d Posted-Interrupts is an enhancement to the CPU-side Posted-Interrupts
feature. With VT-d Posted-Interrupts enabled, external interrupts from
direct-assigned devices can be delivered to guests without VMM
intervention while the guest is running in non-root mode.
You can find the VT-d Posted-Interrupts spec at the following URL:
http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/vt-directed-io-spec.html
Feng Wu (13):
iommu/vt-d: VT-d Posted-Interrupts feature detection
KVM: Initialize VT-d Posted-Interrupts Descriptor
KVM: Add KVM_CAP_PI to detect VT-d Posted-Interrupts
iommu/vt-d: Adjust 'struct irte' to better suit for VT-d
Posted-Interrupts
KVM: Update IRTE according to guest interrupt configuration changes
KVM: Add some helper functions for Posted-Interrupts
x86, irq: Define a global vector for VT-d Posted-Interrupts
KVM: Update Posted-Interrupts descriptor during VCPU scheduling
KVM: Change NDST field after VCPU scheduling
KVM: Add the handler for Wake-up Vector
KVM: Suppress posted-interrupt when 'SN' is set
iommu/vt-d: No need to migrate irq for VT-d Posted-Interrupts
iommu/vt-d: Add a command line parameter for VT-d posted-interrupts
arch/x86/include/asm/entry_arch.h | 2 +
arch/x86/include/asm/hardirq.h | 1 +
arch/x86/include/asm/hw_irq.h | 2 +
arch/x86/include/asm/irq_remapping.h | 7 +
arch/x86/include/asm/irq_vectors.h | 1 +
arch/x86/include/asm/kvm_host.h | 9 ++
arch/x86/kernel/apic/apic.c | 1 +
arch/x86/kernel/entry_64.S | 2 +
arch/x86/kernel/irq.c | 27 ++++
arch/x86/kernel/irqinit.c | 2 +
arch/x86/kvm/vmx.c | 257 +++++++++++++++++++++++++++++++++-
arch/x86/kvm/x86.c | 53 ++++++-
drivers/iommu/amd_iommu.c | 6 +
drivers/iommu/intel_irq_remapping.c | 83 +++++++++--
drivers/iommu/irq_remapping.c | 20 +++
drivers/iommu/irq_remapping.h | 8 +
include/linux/dmar.h | 30 ++++-
include/linux/intel-iommu.h | 1 +
include/linux/kvm_host.h | 25 ++++
include/uapi/linux/kvm.h | 2 +
virt/kvm/assigned-dev.c | 141 +++++++++++++++++++
virt/kvm/irq_comm.c | 4 +-
virt/kvm/irqchip.c | 11 --
virt/kvm/kvm_main.c | 14 ++
24 files changed, 667 insertions(+), 42 deletions(-)
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 13/13] iommu/vt-d: Add a command line parameter for VT-d posted-interrupts
2014-11-10 6:26 [PATCH 00/13] Add VT-d Posted-Interrupts support for KVM Feng Wu
@ 2014-11-10 6:26 ` Feng Wu
2014-11-10 18:15 ` Thomas Gleixner
0 siblings, 1 reply; 131+ messages in thread
From: Feng Wu @ 2014-11-10 6:26 UTC (permalink / raw)
To: gleb, pbonzini, dwmw2, joro, tglx, mingo, hpa, x86
Cc: kvm, iommu, linux-kernel, Feng Wu
Enable VT-d Posted-Interrupts and add a command line
parameter for it.
Signed-off-by: Feng Wu <feng.wu@intel.com>
---
drivers/iommu/irq_remapping.c | 9 ++++++++-
1 files changed, 8 insertions(+), 1 deletions(-)
diff --git a/drivers/iommu/irq_remapping.c b/drivers/iommu/irq_remapping.c
index 0e36860..3cb9429 100644
--- a/drivers/iommu/irq_remapping.c
+++ b/drivers/iommu/irq_remapping.c
@@ -23,7 +23,7 @@ int irq_remap_broken;
int disable_sourceid_checking;
int no_x2apic_optout;
-int disable_irq_post = 1;
+int disable_irq_post = 0;
int irq_post_enabled = 0;
EXPORT_SYMBOL_GPL(irq_post_enabled);
@@ -206,6 +206,13 @@ static __init int setup_irqremap(char *str)
}
early_param("intremap", setup_irqremap);
+static __init int setup_nointpost(char *str)
+{
+ disable_irq_post = 1;
+ return 0;
+}
+early_param("nointpost", setup_nointpost);
+
void __init setup_irq_remapping_ops(void)
{
remap_ops = &intel_irq_remap_ops;
--
1.7.1
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: several messages
@ 2014-11-10 18:15 ` Thomas Gleixner
0 siblings, 0 replies; 131+ messages in thread
From: Thomas Gleixner @ 2014-11-10 18:15 UTC (permalink / raw)
To: Feng Wu
Cc: gleb, pbonzini, David Woodhouse, joro, mingo, H. Peter Anvin,
x86, kvm, iommu, LKML, Jiang Liu
On Mon, 10 Nov 2014, Feng Wu wrote:
> VT-d Posted-Interrupts is an enhancement to CPU side Posted-Interrupt.
> With VT-d Posted-Interrupts enabled, external interrupts from
> direct-assigned devices can be delivered to guests without VMM
> intervention when guest is running in non-root mode.
Can you please talk to Jiang and synchronize your work with his
refactoring of the x86 interrupt handling subsystem.
I want this stuff cleaned up first before we add new stuff to it.
Thanks,
tglx
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2014-11-10 18:15 ` Thomas Gleixner
@ 2014-11-11 2:28 ` Jiang Liu
-1 siblings, 0 replies; 131+ messages in thread
From: Jiang Liu @ 2014-11-11 2:28 UTC (permalink / raw)
To: Thomas Gleixner, Feng Wu
Cc: gleb, pbonzini, David Woodhouse, joro, mingo, H. Peter Anvin,
x86, kvm, iommu, LKML
On 2014/11/11 2:15, Thomas Gleixner wrote:
> On Mon, 10 Nov 2014, Feng Wu wrote:
>
>> VT-d Posted-Interrupts is an enhancement to CPU side Posted-Interrupt.
>> With VT-d Posted-Interrupts enabled, external interrupts from
>> direct-assigned devices can be delivered to guests without VMM
>> intervention when guest is running in non-root mode.
>
> Can you please talk to Jiang and synchronize your work with his
> refactoring of the x86 interrupt handling subsystem.
>
> I want this stuff cleaned up first before we add new stuff to it.
Hi Thomas,
Just talked with Feng; we will focus on the refactoring first and
then add posted-interrupt support.
Regards!
Gerry
>
> Thanks,
>
> tglx
>
^ permalink raw reply [flat|nested] 131+ messages in thread
* RE: several messages
@ 2014-11-11 6:37 ` Wu, Feng
0 siblings, 0 replies; 131+ messages in thread
From: Wu, Feng @ 2014-11-11 6:37 UTC (permalink / raw)
To: Jiang Liu, Thomas Gleixner
Cc: gleb, pbonzini, David Woodhouse, joro, mingo, H. Peter Anvin,
x86, kvm, iommu, LKML
> -----Original Message-----
> From: Jiang Liu [mailto:jiang.liu@linux.intel.com]
> Sent: Tuesday, November 11, 2014 10:29 AM
> To: Thomas Gleixner; Wu, Feng
> Cc: gleb@kernel.org; pbonzini@redhat.com; David Woodhouse;
> joro@8bytes.org; mingo@redhat.com; H. Peter Anvin; x86@kernel.org;
> kvm@vger.kernel.org; iommu@lists.linux-foundation.org; LKML
> Subject: Re: several messages
>
> On 2014/11/11 2:15, Thomas Gleixner wrote:
> > On Mon, 10 Nov 2014, Feng Wu wrote:
> >
> >> VT-d Posted-Interrupts is an enhancement to CPU side Posted-Interrupt.
> >> With VT-d Posted-Interrupts enabled, external interrupts from
> >> direct-assigned devices can be delivered to guests without VMM
> >> intervention when guest is running in non-root mode.
> >
> > Can you please talk to Jiang and synchronize your work with his
> > refactoring of the x86 interrupt handling subsystem.
> >
> > I want this stuff cleaned up first before we add new stuff to it.
> Hi Thomas,
> Just talked with Feng, we will focused on refactor first and
> then add posted interrupt support.
> Regards!
> Gerry
No problem!
Thanks,
Feng
>
> >
> > Thanks,
> >
> > tglx
> >
^ permalink raw reply [flat|nested] 131+ messages in thread
* [RFC PATCH v4] ARM: EXYNOS: Use MCPM call-backs to support S2R on Exynos5420
@ 2014-07-03 5:02 Abhilash Kesavan
2014-07-03 14:46 ` [PATCH v5] " Abhilash Kesavan
0 siblings, 1 reply; 131+ messages in thread
From: Abhilash Kesavan @ 2014-07-03 5:02 UTC (permalink / raw)
To: linux-samsung-soc, linux-arm-kernel, kgene.kim, nicolas.pitre,
lorenzo.pieralisi
Cc: kesavan.abhilash, abrestic, dianders, Abhilash Kesavan
Use the MCPM layer to handle core suspend/resume on Exynos5420.
Also, restore the entry address setup code post-resume.
Signed-off-by: Abhilash Kesavan <a.kesavan@samsung.com>
---
Changes in v2:
- Made use of the MCPM suspend/powered_up call-backs
Changes in v3:
- Used the residency value to indicate the entered state
Changes in v4:
- Checked if MCPM has been enabled to prevent build error
This has been tested on both an SMDK5420 and a Peach Pit Chromebook on
3.16-rc3/next-20140702.
Here are the dependencies (some of these patches did not apply cleanly):
1) Cleanup patches for mach-exynos
http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33772
2) PMU cleanup and refactoring for using DT
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg671625.html
3) Exynos5420 PMU/S2R Series
http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33898
4) MCPM boot CPU CCI enablement patches
www.spinics.net/lists/linux-samsung-soc/msg32923.html
5) Exynos5420 CPUIdle Series which populates MCPM suspend/powered_up
call-backs.
www.gossamer-threads.com/lists/linux/kernel/1945347
https://patchwork.kernel.org/patch/4357461/
6) Exynos5420 MCPM cluster power down support
http://www.spinics.net/lists/arm-kernel/msg339988.html
7) TPM reset mask patch
http://www.spinics.net/lists/arm-kernel/msg341884.html
arch/arm/include/asm/mcpm.h | 6 ++++
arch/arm/mach-exynos/mcpm-exynos.c | 50 ++++++++++++++++++++++++----------
arch/arm/mach-exynos/pm.c | 38 ++++++++++++++++++++++++--
arch/arm/mach-exynos/regs-pmu.h | 1 +
drivers/cpuidle/cpuidle-big_little.c | 2 +-
5 files changed, 79 insertions(+), 18 deletions(-)
diff --git a/arch/arm/include/asm/mcpm.h b/arch/arm/include/asm/mcpm.h
index ff73aff..051fbf1 100644
--- a/arch/arm/include/asm/mcpm.h
+++ b/arch/arm/include/asm/mcpm.h
@@ -272,4 +272,10 @@ void __init mcpm_smp_set_ops(void);
#define MCPM_SYNC_CLUSTER_SIZE \
(MCPM_SYNC_CLUSTER_INBOUND + __CACHE_WRITEBACK_GRANULE)
+/* Definitions for various MCPM scenarios that might need special handling */
+#define MCPM_CPU_IDLE 0x0
+#define MCPM_CPU_SUSPEND 0x1
+#define MCPM_CPU_SWITCH 0x2
+#define MCPM_CPU_HOTPLUG 0x3
+
#endif
diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
index 0315601..9a381f6 100644
--- a/arch/arm/mach-exynos/mcpm-exynos.c
+++ b/arch/arm/mach-exynos/mcpm-exynos.c
@@ -15,6 +15,7 @@
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/of_address.h>
+#include <linux/syscore_ops.h>
#include <asm/cputype.h>
#include <asm/cp15.h>
@@ -30,6 +31,8 @@
#define EXYNOS5420_USE_ARM_CORE_DOWN_STATE BIT(29)
#define EXYNOS5420_USE_L2_COMMON_UP_STATE BIT(30)
+static void __iomem *ns_sram_base_addr;
+
/*
* The common v7_exit_coherency_flush API could not be used because of the
* Erratum 799270 workaround. This macro is the same as the common one (in
@@ -129,7 +132,7 @@ static int exynos_power_up(unsigned int cpu, unsigned int cluster)
* and can only be executed on processors like A15 and A7 that hit the cache
* with the C bit clear in the SCTLR register.
*/
-static void exynos_power_down(void)
+static void exynos_mcpm_power_down(u64 residency)
{
unsigned int mpidr, cpu, cluster;
bool last_man = false, skip_wfi = false;
@@ -150,7 +153,12 @@ static void exynos_power_down(void)
BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
cpu_use_count[cpu][cluster]--;
if (cpu_use_count[cpu][cluster] == 0) {
- exynos_cpu_power_down(cpunr);
+ /*
+ * Bypass power down for CPU0 during suspend. This is
+ * taken care of by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
+ */
+ if ((cpunr != 0) || (residency != MCPM_CPU_SUSPEND))
+ exynos_cpu_power_down(cpunr);
if (exynos_cluster_unused(cluster)) {
exynos_cluster_power_down(cluster);
@@ -209,6 +217,11 @@ static void exynos_power_down(void)
/* Not dead at this point? Let our caller cope. */
}
+static void exynos_power_down(void)
+{
+ exynos_mcpm_power_down(MCPM_CPU_SWITCH | MCPM_CPU_HOTPLUG);
+}
+
static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
{
unsigned int tries = 100;
@@ -250,11 +263,11 @@ static void exynos_suspend(u64 residency)
{
unsigned int mpidr, cpunr;
- exynos_power_down();
+ exynos_mcpm_power_down(residency);
/*
* Execution reaches here only if cpu did not power down.
- * Hence roll back the changes done in exynos_power_down function.
+ * Hence roll back the changes done in exynos_mcpm_power_down function.
*
* CAUTION: "This function requires the stack data to be visible through
* power down and can only be executed on processors like A15 and A7
@@ -319,10 +332,26 @@ static const struct of_device_id exynos_dt_mcpm_match[] = {
{},
};
+static void exynos_mcpm_setup_entry_point(void)
+{
+ /*
+ * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr
+ * as part of secondary_cpu_start(). Let's redirect it to the
+ * mcpm_entry_point(). This is done during both secondary boot-up as
+ * well as system resume.
+ */
+ __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */
+ __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */
+ __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8);
+}
+
+static struct syscore_ops exynos_mcpm_syscore_ops = {
+ .resume = exynos_mcpm_setup_entry_point,
+};
+
static int __init exynos_mcpm_init(void)
{
struct device_node *node;
- void __iomem *ns_sram_base_addr;
unsigned int value, i;
int ret;
@@ -389,16 +418,9 @@ static int __init exynos_mcpm_init(void)
__raw_writel(value, pmu_base_addr + EXYNOS_COMMON_OPTION(i));
}
- /*
- * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr
- * as part of secondary_cpu_start(). Let's redirect it to the
- * mcpm_entry_point().
- */
- __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */
- __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */
- __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8);
+ exynos_mcpm_setup_entry_point();
- iounmap(ns_sram_base_addr);
+ register_syscore_ops(&exynos_mcpm_syscore_ops);
return ret;
}
diff --git a/arch/arm/mach-exynos/pm.c b/arch/arm/mach-exynos/pm.c
index bf8564a..8b425df 100644
--- a/arch/arm/mach-exynos/pm.c
+++ b/arch/arm/mach-exynos/pm.c
@@ -24,6 +24,7 @@
#include <asm/cacheflush.h>
#include <asm/hardware/cache-l2x0.h>
+#include <asm/mcpm.h>
#include <asm/smp_scu.h>
#include <asm/suspend.h>
@@ -191,7 +192,6 @@ int exynos_cluster_power_state(int cluster)
pmu_base_addr + S5P_INFORM1))
#define S5P_CHECK_AFTR 0xFCBA0D10
-#define S5P_CHECK_SLEEP 0x00000BAD
/* Ext-GIC nIRQ/nFIQ is the only wakeup source in AFTR */
static void exynos_set_wakeupmask(long mask)
@@ -318,7 +318,10 @@ static void exynos_pm_prepare(void)
/* ensure at least INFORM0 has the resume address */
- pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0);
+ if (soc_is_exynos5420() && IS_ENABLED(CONFIG_MCPM))
+ pmu_raw_writel(virt_to_phys(mcpm_entry_point), S5P_INFORM0);
+ else
+ pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0);
if (soc_is_exynos5420()) {
tmp = __raw_readl(pmu_base_addr + EXYNOS5_ARM_L2_OPTION);
@@ -490,6 +493,28 @@ static struct syscore_ops exynos_pm_syscore_ops = {
.resume = exynos_pm_resume,
};
+static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
+{
+ /* MCPM works with HW CPU identifiers */
+ unsigned int mpidr = read_cpuid_mpidr();
+ unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+
+ __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
+
+ mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
+
+ /*
+ * Residency value passed to mcpm_cpu_suspend back-end
+ * has to be given clear semantics. Set to 0 as a
+ * temporary value.
+ */
+ mcpm_cpu_suspend(MCPM_CPU_SUSPEND);
+
+ /* return value != 0 means failure */
+ return 1;
+}
+
/*
* Suspend Ops
*/
@@ -517,10 +542,17 @@ static int exynos_suspend_enter(suspend_state_t state)
flush_cache_all();
s3c_pm_check_store();
- ret = cpu_suspend(0, exynos_cpu_suspend);
+ /* Use the MCPM layer to suspend 5420 which is a multi-cluster SoC */
+ if (soc_is_exynos5420() && IS_ENABLED(CONFIG_MCPM))
+ ret = cpu_suspend(0, exynos_mcpm_cpu_suspend);
+ else
+ ret = cpu_suspend(0, exynos_cpu_suspend);
if (ret)
return ret;
+ if (soc_is_exynos5420() && IS_ENABLED(CONFIG_MCPM))
+ mcpm_cpu_powered_up();
+
s3c_pm_restore_uarts();
S3C_PMDBG("%s: wakeup stat: %08x\n", __func__,
diff --git a/arch/arm/mach-exynos/regs-pmu.h b/arch/arm/mach-exynos/regs-pmu.h
index 3cf0454..e8c75db 100644
--- a/arch/arm/mach-exynos/regs-pmu.h
+++ b/arch/arm/mach-exynos/regs-pmu.h
@@ -152,6 +152,7 @@
#define S5P_PAD_RET_EBIB_OPTION 0x31A8
#define S5P_CORE_LOCAL_PWR_EN 0x3
+#define S5P_CHECK_SLEEP 0x00000BAD
/* Only for EXYNOS4210 */
#define S5P_CMU_CLKSTOP_LCD1_LOWPWR 0x1154
diff --git a/drivers/cpuidle/cpuidle-big_little.c b/drivers/cpuidle/cpuidle-big_little.c
index b45fc62..15f077e 100644
--- a/drivers/cpuidle/cpuidle-big_little.c
+++ b/drivers/cpuidle/cpuidle-big_little.c
@@ -108,7 +108,7 @@ static int notrace bl_powerdown_finisher(unsigned long arg)
* has to be given clear semantics. Set to 0 as a
* temporary value.
*/
- mcpm_cpu_suspend(0);
+ mcpm_cpu_suspend(MCPM_CPU_IDLE);
/* return value != 0 means failure */
return 1;
--
1.7.9.5
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCH v5] ARM: EXYNOS: Use MCPM call-backs to support S2R on Exynos5420
2014-07-03 5:02 [RFC PATCH v4] ARM: EXYNOS: Use MCPM call-backs to support S2R on Exynos5420 Abhilash Kesavan
@ 2014-07-03 14:46 ` Abhilash Kesavan
2014-07-03 15:45 ` Nicolas Pitre
0 siblings, 1 reply; 131+ messages in thread
From: Abhilash Kesavan @ 2014-07-03 14:46 UTC (permalink / raw)
To: linux-samsung-soc, linux-arm-kernel, kgene.kim, nicolas.pitre,
lorenzo.pieralisi
Cc: abrestic, dianders, kesavan.abhilash
Use the MCPM layer to handle core suspend/resume on Exynos5420.
Also, restore the entry address setup code post-resume.
Signed-off-by: Abhilash Kesavan <a.kesavan@samsung.com>
---
Changes in v2:
- Made use of the MCPM suspend/powered_up call-backs
Changes in v3:
- Used the residency value to indicate the entered state
Changes in v4:
- Checked if MCPM has been enabled to prevent build error
Changes in v5:
- Removed the MCPM flags and just used a local flag to
indicate that we are suspending.
This has been tested on both an SMDK5420 and a Peach Pit Chromebook on
3.16-rc3/next-20140702.
Here are the dependencies (some of these patches did not apply cleanly):
1) Cleanup patches for mach-exynos
http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33772
2) PMU cleanup and refactoring for using DT
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg671625.html
3) Exynos5420 PMU/S2R Series
http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33898
4) MCPM boot CPU CCI enablement patches
www.spinics.net/lists/linux-samsung-soc/msg32923.html
5) Exynos5420 CPUIdle Series which populates MCPM suspend/powered_up
call-backs.
www.gossamer-threads.com/lists/linux/kernel/1945347
https://patchwork.kernel.org/patch/4357461/
6) Exynos5420 MCPM cluster power down support
http://www.spinics.net/lists/arm-kernel/msg339988.html
7) TPM reset mask patch
http://www.spinics.net/lists/arm-kernel/msg341884.html
arch/arm/mach-exynos/mcpm-exynos.c | 50 +++++++++++++++++++++++++++-----------
arch/arm/mach-exynos/pm.c | 37 +++++++++++++++++++++++++---
arch/arm/mach-exynos/regs-pmu.h | 1 +
3 files changed, 71 insertions(+), 17 deletions(-)
diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
index 2dd51cc..60f84c9 100644
--- a/arch/arm/mach-exynos/mcpm-exynos.c
+++ b/arch/arm/mach-exynos/mcpm-exynos.c
@@ -15,6 +15,7 @@
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/of_address.h>
+#include <linux/syscore_ops.h>
#include <asm/cputype.h>
#include <asm/cp15.h>
@@ -30,6 +31,8 @@
#define EXYNOS5420_USE_ARM_CORE_DOWN_STATE BIT(29)
#define EXYNOS5420_USE_L2_COMMON_UP_STATE BIT(30)
+static void __iomem *ns_sram_base_addr;
+
/*
* The common v7_exit_coherency_flush API could not be used because of the
* Erratum 799270 workaround. This macro is the same as the common one (in
@@ -129,7 +132,7 @@ static int exynos_power_up(unsigned int cpu, unsigned int cluster)
* and can only be executed on processors like A15 and A7 that hit the cache
* with the C bit clear in the SCTLR register.
*/
-static void exynos_power_down(void)
+static void exynos_mcpm_power_down(u64 residency)
{
unsigned int mpidr, cpu, cluster;
bool last_man = false, skip_wfi = false;
@@ -150,7 +153,12 @@ static void exynos_power_down(void)
BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
cpu_use_count[cpu][cluster]--;
if (cpu_use_count[cpu][cluster] == 0) {
- exynos_cpu_power_down(cpunr);
+ /*
+ * Bypass power down for CPU0 during suspend. This is
+ * taken care of by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
+ */
+ if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
+ exynos_cpu_power_down(cpunr);
if (exynos_cluster_unused(cluster)) {
exynos_cluster_power_down(cluster);
@@ -209,6 +217,11 @@ static void exynos_power_down(void)
/* Not dead at this point? Let our caller cope. */
}
+static void exynos_power_down(void)
+{
+ exynos_mcpm_power_down(0);
+}
+
static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
{
unsigned int tries = 100;
@@ -250,11 +263,11 @@ static void exynos_suspend(u64 residency)
{
unsigned int mpidr, cpunr;
- exynos_power_down();
+ exynos_mcpm_power_down(residency);
/*
* Execution reaches here only if cpu did not power down.
- * Hence roll back the changes done in exynos_power_down function.
+ * Hence roll back the changes done in exynos_mcpm_power_down function.
*
* CAUTION: "This function requires the stack data to be visible through
* power down and can only be executed on processors like A15 and A7
@@ -319,10 +332,26 @@ static const struct of_device_id exynos_dt_mcpm_match[] = {
{},
};
+static void exynos_mcpm_setup_entry_point(void)
+{
+ /*
+ * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr
+ * as part of secondary_cpu_start(). Let's redirect it to the
+ * mcpm_entry_point(). This is done during both secondary boot-up as
+ * well as system resume.
+ */
+ __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */
+ __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */
+ __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8);
+}
+
+static struct syscore_ops exynos_mcpm_syscore_ops = {
+ .resume = exynos_mcpm_setup_entry_point,
+};
+
static int __init exynos_mcpm_init(void)
{
struct device_node *node;
- void __iomem *ns_sram_base_addr;
unsigned int value, i;
int ret;
@@ -389,16 +418,9 @@ static int __init exynos_mcpm_init(void)
__raw_writel(value, pmu_base_addr + EXYNOS_COMMON_OPTION(i));
}
- /*
- * U-Boot SPL is hardcoded to jump to the start of ns_sram_base_addr
- * as part of secondary_cpu_start(). Let's redirect it to the
- * mcpm_entry_point().
- */
- __raw_writel(0xe59f0000, ns_sram_base_addr); /* ldr r0, [pc, #0] */
- __raw_writel(0xe12fff10, ns_sram_base_addr + 4); /* bx r0 */
- __raw_writel(virt_to_phys(mcpm_entry_point), ns_sram_base_addr + 8);
+ exynos_mcpm_setup_entry_point();
- iounmap(ns_sram_base_addr);
+ register_syscore_ops(&exynos_mcpm_syscore_ops);
return ret;
}
diff --git a/arch/arm/mach-exynos/pm.c b/arch/arm/mach-exynos/pm.c
index 69cf678..278f204 100644
--- a/arch/arm/mach-exynos/pm.c
+++ b/arch/arm/mach-exynos/pm.c
@@ -24,6 +24,7 @@
#include <asm/cacheflush.h>
#include <asm/hardware/cache-l2x0.h>
+#include <asm/mcpm.h>
#include <asm/smp_scu.h>
#include <asm/suspend.h>
@@ -191,7 +192,6 @@ int exynos_cluster_power_state(int cluster)
pmu_base_addr + S5P_INFORM1))
#define S5P_CHECK_AFTR 0xFCBA0D10
-#define S5P_CHECK_SLEEP 0x00000BAD
/* Ext-GIC nIRQ/nFIQ is the only wakeup source in AFTR */
static void exynos_set_wakeupmask(long mask)
@@ -318,7 +318,10 @@ static void exynos_pm_prepare(void)
/* ensure at least INFORM0 has the resume address */
- pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0);
+ if (soc_is_exynos5420() && IS_ENABLED(CONFIG_MCPM))
+ pmu_raw_writel(virt_to_phys(mcpm_entry_point), S5P_INFORM0);
+ else
+ pmu_raw_writel(virt_to_phys(exynos_cpu_resume), S5P_INFORM0);
if (soc_is_exynos5420()) {
tmp = __raw_readl(pmu_base_addr + EXYNOS5_ARM_L2_OPTION);
@@ -490,6 +493,27 @@ static struct syscore_ops exynos_pm_syscore_ops = {
.resume = exynos_pm_resume,
};
+static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
+{
+ /* MCPM works with HW CPU identifiers */
+ unsigned int mpidr = read_cpuid_mpidr();
+ unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+
+ __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
+
+ mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
+
+ /*
+ * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
+ * we are suspending the system and need to skip CPU0 power down.
+ */
+ mcpm_cpu_suspend(S5P_CHECK_SLEEP);
+
+ /* return value != 0 means failure */
+ return 1;
+}
+
/*
* Suspend Ops
*/
@@ -517,10 +541,17 @@ static int exynos_suspend_enter(suspend_state_t state)
flush_cache_all();
s3c_pm_check_store();
- ret = cpu_suspend(0, exynos_cpu_suspend);
+ /* Use the MCPM layer to suspend 5420 which is a multi-cluster SoC */
+ if (soc_is_exynos5420() && IS_ENABLED(CONFIG_MCPM))
+ ret = cpu_suspend(0, exynos_mcpm_cpu_suspend);
+ else
+ ret = cpu_suspend(0, exynos_cpu_suspend);
if (ret)
return ret;
+ if (soc_is_exynos5420() && IS_ENABLED(CONFIG_MCPM))
+ mcpm_cpu_powered_up();
+
s3c_pm_restore_uarts();
S3C_PMDBG("%s: wakeup stat: %08x\n", __func__,
diff --git a/arch/arm/mach-exynos/regs-pmu.h b/arch/arm/mach-exynos/regs-pmu.h
index 3cf0454..e8c75db 100644
--- a/arch/arm/mach-exynos/regs-pmu.h
+++ b/arch/arm/mach-exynos/regs-pmu.h
@@ -152,6 +152,7 @@
#define S5P_PAD_RET_EBIB_OPTION 0x31A8
#define S5P_CORE_LOCAL_PWR_EN 0x3
+#define S5P_CHECK_SLEEP 0x00000BAD
/* Only for EXYNOS4210 */
#define S5P_CMU_CLKSTOP_LCD1_LOWPWR 0x1154
--
2.0.0
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: several messages
2014-07-03 14:46 ` [PATCH v5] " Abhilash Kesavan
@ 2014-07-03 15:45 ` Nicolas Pitre
0 siblings, 0 replies; 131+ messages in thread
From: Nicolas Pitre @ 2014-07-03 15:45 UTC (permalink / raw)
To: Abhilash Kesavan, Abhilash Kesavan
Cc: linux-samsung-soc, linux-arm-kernel, kgene.kim,
Lorenzo Pieralisi, Andrew Bresticker, Douglas Anderson
On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
> On Thu, Jul 3, 2014 at 6:59 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> > Please, let's avoid going that route. There is no such special handling
> > needed if the API is sufficient. And the provided API allows you to
> > suspend a CPU or shut it down. It shouldn't matter at that level if
> > this is due to a cluster switch or a hotplug event. Do you really need
> > something else?
> No, just one local flag for suspend should be enough for me. Will remove these.
[...]
> Changes in v5:
> - Removed the MCPM flags and just used a local flag to
> indicate that we are suspending.
[...]
> -static void exynos_power_down(void)
> +static void exynos_mcpm_power_down(u64 residency)
> {
> unsigned int mpidr, cpu, cluster;
> bool last_man = false, skip_wfi = false;
> @@ -150,7 +153,12 @@ static void exynos_power_down(void)
> BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
> cpu_use_count[cpu][cluster]--;
> if (cpu_use_count[cpu][cluster] == 0) {
> - exynos_cpu_power_down(cpunr);
> + /*
> + * Bypass power down for CPU0 during suspend. This is being
> + * taken care by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
> + */
> + if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
> + exynos_cpu_power_down(cpunr);
>
> if (exynos_cluster_unused(cluster)) {
> exynos_cluster_power_down(cluster);
> @@ -209,6 +217,11 @@ static void exynos_power_down(void)
> /* Not dead at this point? Let our caller cope. */
> }
>
> +static void exynos_power_down(void)
> +{
> + exynos_mcpm_power_down(0);
> +}
[...]
> +static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
> +{
> + /* MCPM works with HW CPU identifiers */
> + unsigned int mpidr = read_cpuid_mpidr();
> + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
> + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
> +
> + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
> +
> + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
> +
> + /*
> + * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
> + * we are suspending the system and need to skip CPU0 power down.
> + */
> + mcpm_cpu_suspend(S5P_CHECK_SLEEP);
NAK.
When I say "local flag with local meaning", I don't want you to smuggle
that flag through a public interface either. I find it rather inelegant
to have the notion of special handling for CPU0 being spread around like
that.
If CPU0 is special, then it should be dealt with in one place only,
ideally in the MCPM backend, so the rest of the kernel doesn't have to
care.
Again, here's what I mean:
static void exynos_mcpm_down_handler(int flags)
{
[...]
if ((cpunr != 0) || !(flags & SKIP_CPU_POWERDOWN_IF_CPU0))
exynos_cpu_power_down(cpunr);
[...]
}
static void exynos_mcpm_power_down()
{
exynos_mcpm_down_handler(0);
}
static void exynos_mcpm_suspend(u64 residency)
{
/*
* The residency argument is ignored for now.
* However, in the CPU suspend case, we bypass power down for
* CPU0 as this is being taken care of by the SYS_PWR_CFG bit in
* CORE0_SYS_PWR_REG.
*/
exynos_mcpm_down_handler(SKIP_CPU_POWERDOWN_IF_CPU0);
}
And SKIP_CPU_POWERDOWN_IF_CPU0 is defined in and visible to
mcpm-exynos.c only.
Nicolas
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2014-07-03 15:45 ` Nicolas Pitre
@ 2014-07-03 16:19 ` Abhilash Kesavan
0 siblings, 0 replies; 131+ messages in thread
From: Abhilash Kesavan @ 2014-07-03 16:19 UTC (permalink / raw)
To: Nicolas Pitre
Cc: linux-samsung-soc, linux-arm-kernel, Kukjin Kim,
Lorenzo Pieralisi, Andrew Bresticker, Douglas Anderson
Hi Nicolas,
On Thu, Jul 3, 2014 at 9:15 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
>
>> On Thu, Jul 3, 2014 at 6:59 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
>> > Please, let's avoid going that route. There is no such special handling
>> > needed if the API is sufficient. And the provided API allows you to
>> > suspend a CPU or shut it down. It shouldn't matter at that level if
>> > this is due to a cluster switch or a hotplug event. Do you really need
>> > something else?
>> No, just one local flag for suspend should be enough for me. Will remove these.
>
> [...]
>
>> Changes in v5:
>> - Removed the MCPM flags and just used a local flag to
>> indicate that we are suspending.
>
> [...]
>
>> -static void exynos_power_down(void)
>> +static void exynos_mcpm_power_down(u64 residency)
>> {
>> unsigned int mpidr, cpu, cluster;
>> bool last_man = false, skip_wfi = false;
>> @@ -150,7 +153,12 @@ static void exynos_power_down(void)
>> BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
>> cpu_use_count[cpu][cluster]--;
>> if (cpu_use_count[cpu][cluster] == 0) {
>> - exynos_cpu_power_down(cpunr);
>> + /*
>> + * Bypass power down for CPU0 during suspend. This is being
>> + * taken care by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
>> + */
>> + if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
>> + exynos_cpu_power_down(cpunr);
>>
>> if (exynos_cluster_unused(cluster)) {
>> exynos_cluster_power_down(cluster);
>> @@ -209,6 +217,11 @@ static void exynos_power_down(void)
>> /* Not dead at this point? Let our caller cope. */
>> }
>>
>> +static void exynos_power_down(void)
>> +{
>> + exynos_mcpm_power_down(0);
>> +}
>
> [...]
>
>> +static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
>> +{
>> + /* MCPM works with HW CPU identifiers */
>> + unsigned int mpidr = read_cpuid_mpidr();
>> + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>> + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
>> +
>> + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
>> +
>> + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
>> +
>> + /*
>> + * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
>> + * we are suspending the system and need to skip CPU0 power down.
>> + */
>> + mcpm_cpu_suspend(S5P_CHECK_SLEEP);
>
> NAK.
>
> When I say "local flag with local meaning", I don't want you to smuggle
> that flag through a public interface either. I find it rather inelegant
> to have the notion of special handling for CPU0 being spread around like
> that.
>
> If CPU0 is special, then it should be dealt with in one place only,
> ideally in the MCPM backend, so the rest of the kernel doesn't have to
> care.
>
> Again, here's what I mean:
>
> static void exynos_mcpm_down_handler(int flags)
> {
> [...]
> if ((cpunr != 0) || !(flags & SKIP_CPU_POWERDOWN_IF_CPU0))
> exynos_cpu_power_down(cpunr);
> [...]
> }
>
> static void exynos_mcpm_power_down()
> {
> exynos_mcpm_down_handler(0);
> }
>
> static void exynos_mcpm_suspend(u64 residency)
> {
> /*
> * The residency argument is ignored for now.
> * However, in the CPU suspend case, we bypass power down for
> * CPU0 as this is being taken care by the SYS_PWR_CFG bit in
> * CORE0_SYS_PWR_REG.
> */
> exynos_mcpm_down_handler(SKIP_CPU_POWERDOWN_IF_CPU0);
> }
>
> And SKIP_CPU_POWERDOWN_IF_CPU0 is defined in and visible to
> mcpm-exynos.c only.
Sorry if I am being dense, but the exynos_mcpm_suspend function would
get called from both the bL cpuidle driver as well as the exynos pm code.
We want to skip CPU0 only in the S2R case, i.e. the pm code path, and
keep the CPU0 power-down code for all other cases, including CPUIdle.
If I call exynos_mcpm_down_handler with the flag set in
exynos_mcpm_suspend(), CPUIdle will also skip CPU0, won't it?
Regards,
Abhilash
>
>
> Nicolas
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2014-07-03 16:19 ` Abhilash Kesavan
@ 2014-07-03 19:00 ` Nicolas Pitre
0 siblings, 0 replies; 131+ messages in thread
From: Nicolas Pitre @ 2014-07-03 19:00 UTC (permalink / raw)
To: Abhilash Kesavan
Cc: linux-samsung-soc, linux-arm-kernel, Kukjin Kim,
Lorenzo Pieralisi, Andrew Bresticker, Douglas Anderson
On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
> Hi Nicolas,
>
> On Thu, Jul 3, 2014 at 9:15 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> > On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
> >
> >> On Thu, Jul 3, 2014 at 6:59 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> >> > Please, let's avoid going that route. There is no such special handling
> >> > needed if the API is sufficient. And the provided API allows you to
> >> > suspend a CPU or shut it down. It shouldn't matter at that level if
> >> > this is due to a cluster switch or a hotplug event. Do you really need
> >> > something else?
> >> No, just one local flag for suspend should be enough for me. Will remove these.
> >
> > [...]
> >
> >> Changes in v5:
> >> - Removed the MCPM flags and just used a local flag to
> >> indicate that we are suspending.
> >
> > [...]
> >
> >> -static void exynos_power_down(void)
> >> +static void exynos_mcpm_power_down(u64 residency)
> >> {
> >> unsigned int mpidr, cpu, cluster;
> >> bool last_man = false, skip_wfi = false;
> >> @@ -150,7 +153,12 @@ static void exynos_power_down(void)
> >> BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
> >> cpu_use_count[cpu][cluster]--;
> >> if (cpu_use_count[cpu][cluster] == 0) {
> >> - exynos_cpu_power_down(cpunr);
> >> + /*
> >> + * Bypass power down for CPU0 during suspend. This is being
> >> + * taken care by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
> >> + */
> >> + if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
> >> + exynos_cpu_power_down(cpunr);
> >>
> >> if (exynos_cluster_unused(cluster)) {
> >> exynos_cluster_power_down(cluster);
> >> @@ -209,6 +217,11 @@ static void exynos_power_down(void)
> >> /* Not dead at this point? Let our caller cope. */
> >> }
> >>
> >> +static void exynos_power_down(void)
> >> +{
> >> + exynos_mcpm_power_down(0);
> >> +}
> >
> > [...]
> >
> >> +static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
> >> +{
> >> + /* MCPM works with HW CPU identifiers */
> >> + unsigned int mpidr = read_cpuid_mpidr();
> >> + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
> >> + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
> >> +
> >> + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
> >> +
> >> + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
> >> +
> >> + /*
> >> + * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
> >> + * we are suspending the system and need to skip CPU0 power down.
> >> + */
> >> + mcpm_cpu_suspend(S5P_CHECK_SLEEP);
> >
> > NAK.
> >
> > When I say "local flag with local meaning", I don't want you to smuggle
> > that flag through a public interface either. I find it rather inelegant
> > to have the notion of special handling for CPU0 being spread around like
> > that.
> >
> > If CPU0 is special, then it should be dealt with in one place only,
> > ideally in the MCPM backend, so the rest of the kernel doesn't have to
> > care.
> >
> > Again, here's what I mean:
> >
> > static void exynos_mcpm_down_handler(int flags)
> > {
> > [...]
> > if ((cpunr != 0) || !(flags & SKIP_CPU_POWERDOWN_IF_CPU0))
> > exynos_cpu_power_down(cpunr);
> > [...]
> > }
> >
> > static void exynos_mcpm_power_down()
> > {
> > exynos_mcpm_down_handler(0);
> > }
> >
> > static void exynos_mcpm_suspend(u64 residency)
> > {
> > /*
> > * The residency argument is ignored for now.
> > * However, in the CPU suspend case, we bypass power down for
> > * CPU0 as this is being taken care by the SYS_PWR_CFG bit in
> > * CORE0_SYS_PWR_REG.
> > */
> > exynos_mcpm_down_handler(SKIP_CPU_POWERDOWN_IF_CPU0);
> > }
> >
> > And SKIP_CPU_POWERDOWN_IF_CPU0 is defined in and visible to
> > mcpm-exynos.c only.
> Sorry if I am being dense, but the exynos_mcpm_suspend function would
> get called from both the bL cpuidle driver as well the exynos pm code.
What is that exynos pm code actually doing?
> We want to skip CPU0 only in case of the S2R case i.e. the pm code and
> keep the CPU0 power down code for all other cases including CPUIdle.
OK. If so I missed that somehow (hint hint).
> If I call exynos_mcpm_down_handler with the flag in
> exynos_mcpm_suspend(), CPUIdle will also skip CPU0 isn't it ?
As it is, yes. You could logically use an infinite residency time
(something like U64_MAX) to distinguish S2RAM from other types of
suspend.
Yet, why is this SYS_PWR_CFG bit set outside of MCPM? Couldn't the MCPM
backend handle it directly instead of expecting some other entity to do
it?
Nicolas
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2014-07-03 19:00 ` Nicolas Pitre
@ 2014-07-03 20:00 ` Abhilash Kesavan
0 siblings, 0 replies; 131+ messages in thread
From: Abhilash Kesavan @ 2014-07-03 20:00 UTC (permalink / raw)
To: Nicolas Pitre
Cc: linux-samsung-soc, linux-arm-kernel, Kukjin Kim,
Lorenzo Pieralisi, Andrew Bresticker, Douglas Anderson
Hi Nicolas,
On Fri, Jul 4, 2014 at 12:30 AM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
>
>> Hi Nicolas,
>>
>> On Thu, Jul 3, 2014 at 9:15 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
>> > On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
>> >
>> >> On Thu, Jul 3, 2014 at 6:59 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
>> >> > Please, let's avoid going that route. There is no such special handling
>> >> > needed if the API is sufficient. And the provided API allows you to
>> >> > suspend a CPU or shut it down. It shouldn't matter at that level if
>> >> > this is due to a cluster switch or a hotplug event. Do you really need
>> >> > something else?
>> >> No, just one local flag for suspend should be enough for me. Will remove these.
>> >
>> > [...]
>> >
>> >> Changes in v5:
>> >> - Removed the MCPM flags and just used a local flag to
>> >> indicate that we are suspending.
>> >
>> > [...]
>> >
>> >> -static void exynos_power_down(void)
>> >> +static void exynos_mcpm_power_down(u64 residency)
>> >> {
>> >> unsigned int mpidr, cpu, cluster;
>> >> bool last_man = false, skip_wfi = false;
>> >> @@ -150,7 +153,12 @@ static void exynos_power_down(void)
>> >> BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
>> >> cpu_use_count[cpu][cluster]--;
>> >> if (cpu_use_count[cpu][cluster] == 0) {
>> >> - exynos_cpu_power_down(cpunr);
>> >> + /*
>> >> + * Bypass power down for CPU0 during suspend. This is being
>> >> + * taken care by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
>> >> + */
>> >> + if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
>> >> + exynos_cpu_power_down(cpunr);
>> >>
>> >> if (exynos_cluster_unused(cluster)) {
>> >> exynos_cluster_power_down(cluster);
>> >> @@ -209,6 +217,11 @@ static void exynos_power_down(void)
>> >> /* Not dead at this point? Let our caller cope. */
>> >> }
>> >>
>> >> +static void exynos_power_down(void)
>> >> +{
>> >> + exynos_mcpm_power_down(0);
>> >> +}
>> >
>> > [...]
>> >
>> >> +static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
>> >> +{
>> >> + /* MCPM works with HW CPU identifiers */
>> >> + unsigned int mpidr = read_cpuid_mpidr();
>> >> + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>> >> + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
>> >> +
>> >> + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
>> >> +
>> >> + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
>> >> +
>> >> + /*
>> >> + * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
>> >> + * we are suspending the system and need to skip CPU0 power down.
>> >> + */
>> >> + mcpm_cpu_suspend(S5P_CHECK_SLEEP);
>> >
>> > NAK.
>> >
>> > When I say "local flag with local meaning", I don't want you to smuggle
>> > that flag through a public interface either. I find it rather inelegant
>> > to have the notion of special handling for CPU0 being spread around like
>> > that.
>> >
>> > If CPU0 is special, then it should be dealt with in one place only,
>> > ideally in the MCPM backend, so the rest of the kernel doesn't have to
>> > care.
>> >
>> > Again, here's what I mean:
>> >
>> > static void exynos_mcpm_down_handler(int flags)
>> > {
>> > [...]
>> > if ((cpunr != 0) || !(flags & SKIP_CPU_POWERDOWN_IF_CPU0))
>> > exynos_cpu_power_down(cpunr);
>> > [...]
>> > }
>> >
>> > static void exynos_mcpm_power_down()
>> > {
>> > exynos_mcpm_down_handler(0);
>> > }
>> >
>> > static void exynos_mcpm_suspend(u64 residency)
>> > {
>> > /*
>> > * The residency argument is ignored for now.
>> > * However, in the CPU suspend case, we bypass power down for
>> > * CPU0 as this is being taken care by the SYS_PWR_CFG bit in
>> > * CORE0_SYS_PWR_REG.
>> > */
>> > exynos_mcpm_down_handler(SKIP_CPU_POWERDOWN_IF_CPU0);
>> > }
>> >
>> > And SKIP_CPU_POWERDOWN_IF_CPU0 is defined in and visible to
>> > mcpm-exynos.c only.
>> Sorry if I am being dense, but the exynos_mcpm_suspend function would
>> get called from both the bL cpuidle driver as well the exynos pm code.
>
> What is that exynos pm code actually doing?
The exynos pm code is shared across Exynos4 and Exynos5 SoCs. It
primarily performs a series of register configurations required to put
the system to sleep (some of these are SoC-specific, others common). It
also populates the suspend_ops for exynos. In the current patch,
exynos_suspend_enter() is where I have plugged in the mcpm_cpu_suspend
call.
This patch is based on the S2R series for 5420
(http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33898), you
may also have a look at that for a clearer picture.
>
>> We want to skip CPU0 only in case of the S2R case i.e. the pm code and
>> keep the CPU0 power down code for all other cases including CPUIdle.
>
> OK. If so I missed that somehow (hint hint).
>
>> If I call exynos_mcpm_down_handler with the flag in
>> exynos_mcpm_suspend(), CPUIdle will also skip CPU0 isn't it ?
>
> As it is, yes. You could logically use an infinite residency time
> (something like U64_MAX) to distinguish S2RAM from other types of
> suspend.
OK, I will use this rather than the S5P_CHECK_SLEEP macro.
>
> Yet, why is this SYS_PWR_CFG bit set outside of MCPM? Couldn't the MCPM
> backend handle it directly instead of expecting some other entity to do
> it?
Low-power modes such as Sleep, Low Power Audio, and AFTR (ARM Off Top
Running) require a series of register configurations, as specified by
the UM, to enter/exit them. All the exynos SoCs, including the 5420, do
such configurations (including the sys_pwr_reg setup) as part of the
exynos_pm_prepare function in pm.c, so we just need to skip the CPU
power down.
Regards,
Abhilash
>
>
> Nicolas
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2014-07-03 20:00 ` Abhilash Kesavan
@ 2014-07-04 4:13 ` Nicolas Pitre
-1 siblings, 0 replies; 131+ messages in thread
From: Nicolas Pitre @ 2014-07-04 4:13 UTC (permalink / raw)
To: Abhilash Kesavan
Cc: linux-samsung-soc, linux-arm-kernel, Kukjin Kim,
Lorenzo Pieralisi, Andrew Bresticker, Douglas Anderson
On Fri, 4 Jul 2014, Abhilash Kesavan wrote:
> Hi Nicolas,
>
> On Fri, Jul 4, 2014 at 12:30 AM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> > On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
> >
> >> Hi Nicolas,
> >>
> >> On Thu, Jul 3, 2014 at 9:15 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> >> > On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
> >> >
> >> >> On Thu, Jul 3, 2014 at 6:59 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> >> >> > Please, let's avoid going that route. There is no such special handling
> >> >> > needed if the API is sufficient. And the provided API allows you to
> >> >> > suspend a CPU or shut it down. It shouldn't matter at that level if
> >> >> > this is due to a cluster switch or a hotplug event. Do you really need
> >> >> > something else?
> >> >> No, just one local flag for suspend should be enough for me. Will remove these.
> >> >
> >> > [...]
> >> >
> >> >> Changes in v5:
> >> >> - Removed the MCPM flags and just used a local flag to
> >> >> indicate that we are suspending.
> >> >
> >> > [...]
> >> >
> >> >> -static void exynos_power_down(void)
> >> >> +static void exynos_mcpm_power_down(u64 residency)
> >> >> {
> >> >> unsigned int mpidr, cpu, cluster;
> >> >> bool last_man = false, skip_wfi = false;
> >> >> @@ -150,7 +153,12 @@ static void exynos_power_down(void)
> >> >> BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
> >> >> cpu_use_count[cpu][cluster]--;
> >> >> if (cpu_use_count[cpu][cluster] == 0) {
> >> >> - exynos_cpu_power_down(cpunr);
> >> >> + /*
> >> >> + * Bypass power down for CPU0 during suspend. This is being
> >> >> + * taken care by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
> >> >> + */
> >> >> + if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
> >> >> + exynos_cpu_power_down(cpunr);
> >> >>
> >> >> if (exynos_cluster_unused(cluster)) {
> >> >> exynos_cluster_power_down(cluster);
> >> >> @@ -209,6 +217,11 @@ static void exynos_power_down(void)
> >> >> /* Not dead at this point? Let our caller cope. */
> >> >> }
> >> >>
> >> >> +static void exynos_power_down(void)
> >> >> +{
> >> >> + exynos_mcpm_power_down(0);
> >> >> +}
> >> >
> >> > [...]
> >> >
> >> >> +static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
> >> >> +{
> >> >> + /* MCPM works with HW CPU identifiers */
> >> >> + unsigned int mpidr = read_cpuid_mpidr();
> >> >> + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
> >> >> + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
> >> >> +
> >> >> + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
> >> >> +
> >> >> + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
> >> >> +
> >> >> + /*
> >> >> + * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
> >> >> + * we are suspending the system and need to skip CPU0 power down.
> >> >> + */
> >> >> + mcpm_cpu_suspend(S5P_CHECK_SLEEP);
> >> >
> >> > NAK.
> >> >
> >> > When I say "local flag with local meaning", I don't want you to smuggle
> >> > that flag through a public interface either. I find it rather inelegant
> >> > to have the notion of special handling for CPU0 being spread around like
> >> > that.
> >> >
> >> > If CPU0 is special, then it should be dealt with in one place only,
> >> > ideally in the MCPM backend, so the rest of the kernel doesn't have to
> >> > care.
> >> >
> >> > Again, here's what I mean:
> >> >
> >> > static void exynos_mcpm_down_handler(int flags)
> >> > {
> >> > [...]
> >> > if ((cpunr != 0) || !(flags & SKIP_CPU_POWERDOWN_IF_CPU0))
> >> > exynos_cpu_power_down(cpunr);
> >> > [...]
> >> > }
> >> >
> >> > static void exynos_mcpm_power_down()
> >> > {
> >> > exynos_mcpm_down_handler(0);
> >> > }
> >> >
> >> > static void exynos_mcpm_suspend(u64 residency)
> >> > {
> >> > /*
> >> > * The residency argument is ignored for now.
> >> > * However, in the CPU suspend case, we bypass power down for
> >> > * CPU0 as this is being taken care by the SYS_PWR_CFG bit in
> >> > * CORE0_SYS_PWR_REG.
> >> > */
> >> > exynos_mcpm_down_handler(SKIP_CPU_POWERDOWN_IF_CPU0);
> >> > }
> >> >
> >> > And SKIP_CPU_POWERDOWN_IF_CPU0 is defined in and visible to
> >> > mcpm-exynos.c only.
> >> Sorry if I am being dense, but the exynos_mcpm_suspend function would
> >> get called from both the bL cpuidle driver as well as the exynos pm code.
> >
> > What is that exynos pm code actually doing?
> exynos pm code is shared across Exynos4 and 5 SoCs. It primarily does
> a series of register configurations which are required to put the
> system to sleep (some parts of these are SoC specific and others
> common). It also populates the suspend_ops for exynos. In the current
> patch, exynos_suspend_enter() is where I have plugged in the
> mcpm_cpu_suspend call.
>
> This patch is based on the S2R series for 5420
> (http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33898), you
> may also have a look at that for a clearer picture.
> >
> >> We want to skip CPU0 only in case of the S2R case i.e. the pm code and
> >> keep the CPU0 power down code for all other cases including CPUIdle.
> >
> > OK. If so I missed that somehow (hint hint).
> >
> >> If I call exynos_mcpm_down_handler with the flag in
> >> exynos_mcpm_suspend(), CPUIdle will also skip CPU0, won't it?
> >
> > As it is, yes. You could logically use an infinite residency time
> > (something like U64_MAX) to distinguish S2RAM from other types of
> > suspend.
> OK, I will use this rather than the S5P_CHECK_SLEEP macro.
Another suggestion which might possibly be better: why not look for
the SYS_PWR_CFG bit in exynos_cpu_power_down() directly? After all,
exynos_cpu_power_down() is semantically supposed to do what its name
suggests and could simply do nothing if the proper conditions are already
in place.
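[Editor's note: a rough userspace model of that suggestion, for illustration
only. The register variable, bit value, and state array below are stand-ins,
not the real Exynos PMU interface or kernel code.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG; the real
 * bit position and register access are SoC-specific. */
#define SYS_PWR_CFG (1u << 0)

static uint32_t core0_sys_pwr_reg;  /* models CORE0_SYS_PWR_REG */
static bool cpu_is_down[8];         /* models per-CPU power state */

/* The idea: exynos_cpu_power_down() itself notices that the PMU is
 * already configured to power CPU0 down on sleep entry, and becomes
 * a no-op in that case, so callers need no special flag at all. */
static void exynos_cpu_power_down(int cpunr)
{
	if (cpunr == 0 && (core0_sys_pwr_reg & SYS_PWR_CFG))
		return;  /* SYS_PWR_CFG already takes care of CPU0 */
	cpu_is_down[cpunr] = true;
}
```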
Nicolas
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2014-07-04 4:13 ` Nicolas Pitre
@ 2014-07-04 17:45 ` Abhilash Kesavan
-1 siblings, 0 replies; 131+ messages in thread
From: Abhilash Kesavan @ 2014-07-04 17:45 UTC (permalink / raw)
To: Nicolas Pitre
Cc: linux-samsung-soc, linux-arm-kernel, Kukjin Kim,
Lorenzo Pieralisi, Andrew Bresticker, Douglas Anderson
Hi Nicolas,
On Fri, Jul 4, 2014 at 9:43 AM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
> On Fri, 4 Jul 2014, Abhilash Kesavan wrote:
>
>> Hi Nicolas,
>>
>> On Fri, Jul 4, 2014 at 12:30 AM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
>> > On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
>> >
>> >> Hi Nicolas,
>> >>
>> >> On Thu, Jul 3, 2014 at 9:15 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
>> >> > On Thu, 3 Jul 2014, Abhilash Kesavan wrote:
>> >> >
>> >> >> On Thu, Jul 3, 2014 at 6:59 PM, Nicolas Pitre <nicolas.pitre@linaro.org> wrote:
>> >> >> > Please, let's avoid going that route. There is no such special handling
>> >> >> > needed if the API is sufficient. And the provided API allows you to
>> >> >> > suspend a CPU or shut it down. It shouldn't matter at that level if
>> >> >> > this is due to a cluster switch or a hotplug event. Do you really need
>> >> >> > something else?
>> >> >> No, just one local flag for suspend should be enough for me. Will remove these.
>> >> >
>> >> > [...]
>> >> >
>> >> >> Changes in v5:
>> >> >> - Removed the MCPM flags and just used a local flag to
>> >> >> indicate that we are suspending.
>> >> >
>> >> > [...]
>> >> >
>> >> >> -static void exynos_power_down(void)
>> >> >> +static void exynos_mcpm_power_down(u64 residency)
>> >> >> {
>> >> >> unsigned int mpidr, cpu, cluster;
>> >> >> bool last_man = false, skip_wfi = false;
>> >> >> @@ -150,7 +153,12 @@ static void exynos_power_down(void)
>> >> >> BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
>> >> >> cpu_use_count[cpu][cluster]--;
>> >> >> if (cpu_use_count[cpu][cluster] == 0) {
>> >> >> - exynos_cpu_power_down(cpunr);
>> >> >> + /*
>> >> >> + * Bypass power down for CPU0 during suspend. This is being
>> >> >> + * taken care by the SYS_PWR_CFG bit in CORE0_SYS_PWR_REG.
>> >> >> + */
>> >> >> + if ((cpunr != 0) || (residency != S5P_CHECK_SLEEP))
>> >> >> + exynos_cpu_power_down(cpunr);
>> >> >>
>> >> >> if (exynos_cluster_unused(cluster)) {
>> >> >> exynos_cluster_power_down(cluster);
>> >> >> @@ -209,6 +217,11 @@ static void exynos_power_down(void)
>> >> >> /* Not dead at this point? Let our caller cope. */
>> >> >> }
>> >> >>
>> >> >> +static void exynos_power_down(void)
>> >> >> +{
>> >> >> + exynos_mcpm_power_down(0);
>> >> >> +}
>> >> >
>> >> > [...]
>> >> >
>> >> >> +static int notrace exynos_mcpm_cpu_suspend(unsigned long arg)
>> >> >> +{
>> >> >> + /* MCPM works with HW CPU identifiers */
>> >> >> + unsigned int mpidr = read_cpuid_mpidr();
>> >> >> + unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>> >> >> + unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
>> >> >> +
>> >> >> + __raw_writel(0x0, sysram_base_addr + EXYNOS5420_CPU_STATE);
>> >> >> +
>> >> >> + mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume);
>> >> >> +
>> >> >> + /*
>> >> >> + * Pass S5P_CHECK_SLEEP flag to the MCPM back-end to indicate that
>> >> >> + * we are suspending the system and need to skip CPU0 power down.
>> >> >> + */
>> >> >> + mcpm_cpu_suspend(S5P_CHECK_SLEEP);
>> >> >
>> >> > NAK.
>> >> >
>> >> > When I say "local flag with local meaning", I don't want you to smuggle
>> >> > that flag through a public interface either. I find it rather inelegant
>> >> > to have the notion of special handling for CPU0 being spread around like
>> >> > that.
>> >> >
>> >> > If CPU0 is special, then it should be dealt with in one place only,
>> >> > ideally in the MCPM backend, so the rest of the kernel doesn't have to
>> >> > care.
>> >> >
>> >> > Again, here's what I mean:
>> >> >
>> >> > static void exynos_mcpm_down_handler(int flags)
>> >> > {
>> >> > [...]
>> >> > if ((cpunr != 0) || !(flags & SKIP_CPU_POWERDOWN_IF_CPU0))
>> >> > exynos_cpu_power_down(cpunr);
>> >> > [...]
>> >> > }
>> >> >
>> >> > static void exynos_mcpm_power_down()
>> >> > {
>> >> > exynos_mcpm_down_handler(0);
>> >> > }
>> >> >
>> >> > static void exynos_mcpm_suspend(u64 residency)
>> >> > {
>> >> > /*
>> >> > * The residency argument is ignored for now.
>> >> > * However, in the CPU suspend case, we bypass power down for
>> >> > * CPU0 as this is being taken care by the SYS_PWR_CFG bit in
>> >> > * CORE0_SYS_PWR_REG.
>> >> > */
>> >> > exynos_mcpm_down_handler(SKIP_CPU_POWERDOWN_IF_CPU0);
>> >> > }
>> >> >
>> >> > And SKIP_CPU_POWERDOWN_IF_CPU0 is defined in and visible to
>> >> > mcpm-exynos.c only.
>> >> Sorry if I am being dense, but the exynos_mcpm_suspend function would
>> >> get called from both the bL cpuidle driver as well as the exynos pm code.
>> >
>> > What is that exynos pm code actually doing?
>> exynos pm code is shared across Exynos4 and 5 SoCs. It primarily does
>> a series of register configurations which are required to put the
>> system to sleep (some parts of these are SoC specific and others
>> common). It also populates the suspend_ops for exynos. In the current
>> patch, exynos_suspend_enter() is where I have plugged in the
>> mcpm_cpu_suspend call.
>>
>> This patch is based on the S2R series for 5420
>> (http://comments.gmane.org/gmane.linux.kernel.samsung-soc/33898), you
>> may also have a look at that for a clearer picture.
>> >
>> >> We want to skip CPU0 only in case of the S2R case i.e. the pm code and
>> >> keep the CPU0 power down code for all other cases including CPUIdle.
>> >
>> > OK. If so I missed that somehow (hint hint).
>> >
>> >> If I call exynos_mcpm_down_handler with the flag in
>> >> exynos_mcpm_suspend(), CPUIdle will also skip CPU0, won't it?
>> >
>> > As it is, yes. You could logically use an infinite residency time
>> > (something like U64_MAX) to distinguish S2RAM from other types of
>> > suspend.
>> OK, I will use this rather than the S5P_CHECK_SLEEP macro.
>
> Another suggestion which might possibly be better: why not look for
> the SYS_PWR_CFG bit in exynos_cpu_power_down() directly? After all,
> exynos_cpu_power_down() is semantically supposed to do what its name
> suggests and could simply do nothing if the proper conditions are already
> in place.
I have implemented this and it works fine. Patch coming up.
Regards,
Abhilash
>
>
> Nicolas
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCHv2] netfilter: add CHECKSUM target
@ 2010-07-11 15:06 Michael S. Tsirkin
2010-07-11 15:14 ` [PATCHv3] extensions: libxt_CHECKSUM extension Michael S. Tsirkin
0 siblings, 1 reply; 131+ messages in thread
From: Michael S. Tsirkin @ 2010-07-11 15:06 UTC (permalink / raw)
To: Patrick McHardy, Michael S. Tsirkin, David S. Miller,
Jan Engelhardt, Randy Dunlap, netfilter-devel, netfilter,
coreteam, linux-kernel, netdev, kvm, herbert
This adds a `CHECKSUM' target, which can be used in the iptables mangle
table.
You can use this target to compute and fill in the checksum in
a packet that lacks a checksum. This is particularly useful
if you need to work around old applications, such as dhcp clients,
that do not work well with checksum offloads, but do not want to
disable checksum offload in your device.
The problem happens in the field with virtualized applications.
For reference, see Red Hat bz 605555, as well as
http://www.spinics.net/lists/kvm/msg37660.html
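[Editor's note: as background, "filling in the checksum" ultimately means
computing the standard RFC 1071 ones-complement sum; in the kernel the
target delegates this to skb_checksum_help(). The standalone function below
is only an illustrative userspace model, not the code the target uses.]

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum over a byte buffer: sum the data as
 * big-endian 16-bit words, fold the carries back in, and return the
 * ones' complement of the result. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)data[i] << 8 | data[i + 1];
	if (len & 1)
		sum += (uint32_t)data[len - 1] << 8;  /* pad odd trailing byte */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);   /* fold carries */
	return (uint16_t)~sum;
}
```

On the RFC 1071 worked example (bytes 00 01 f2 03 f4 f5 f6 f7) this
yields 0x220d.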
Typical expected use (helps old dhclient binary running in a VM):
iptables -A POSTROUTING -t mangle -p udp --dport bootpc \
-j CHECKSUM --checksum-fill
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
Changes from v1:
moved from ipt to xt
get rid of any ipv4 dependencies
coding style tweaks
include/linux/netfilter/xt_CHECKSUM.h | 18 ++++++++
net/netfilter/Kconfig | 17 +++++++-
net/netfilter/Makefile | 1 +
net/netfilter/xt_CHECKSUM.c | 72 +++++++++++++++++++++++++++++++++
4 files changed, 107 insertions(+), 1 deletions(-)
create mode 100644 include/linux/netfilter/xt_CHECKSUM.h
create mode 100644 net/netfilter/xt_CHECKSUM.c
diff --git a/include/linux/netfilter/xt_CHECKSUM.h b/include/linux/netfilter/xt_CHECKSUM.h
new file mode 100644
index 0000000..56afe57
--- /dev/null
+++ b/include/linux/netfilter/xt_CHECKSUM.h
@@ -0,0 +1,18 @@
+/* Header file for iptables ipt_CHECKSUM target
+ *
+ * (C) 2002 by Harald Welte <laforge@gnumonks.org>
+ * (C) 2010 Red Hat Inc
+ * Author: Michael S. Tsirkin <mst@redhat.com>
+ *
+ * This software is distributed under GNU GPL v2, 1991
+*/
+#ifndef _IPT_CHECKSUM_TARGET_H
+#define _IPT_CHECKSUM_TARGET_H
+
+#define XT_CHECKSUM_OP_FILL 0x01 /* fill in checksum in IP header */
+
+struct xt_CHECKSUM_info {
+ u_int8_t operation; /* bitset of operations */
+};
+
+#endif /* _IPT_CHECKSUM_TARGET_H */
diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
index 8593a77..1cf4852 100644
--- a/net/netfilter/Kconfig
+++ b/net/netfilter/Kconfig
@@ -294,7 +294,7 @@ endif # NF_CONNTRACK
config NETFILTER_TPROXY
tristate "Transparent proxying support (EXPERIMENTAL)"
depends on EXPERIMENTAL
- depends on IP_NF_MANGLE
+ depends on IP_NF_MANGLE || IP6_NF_MANGLE
depends on NETFILTER_ADVANCED
help
This option enables transparent proxying support, that is,
@@ -347,6 +347,21 @@ config NETFILTER_XT_CONNMARK
comment "Xtables targets"
+config NETFILTER_XT_TARGET_CHECKSUM
+ tristate "CHECKSUM target support"
+ depends on NETFILTER_ADVANCED
+ ---help---
+ This option adds a `CHECKSUM' target, which can be used in the iptables mangle
+ table.
+
+ You can use this target to compute and fill in the checksum in
+ a packet that lacks a checksum. This is particularly useful,
+ if you need to work around old applications such as dhcp clients,
+ that do not work well with checksum offloads, but don't want to disable
+ checksum offload in your device.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
config NETFILTER_XT_TARGET_CLASSIFY
tristate '"CLASSIFY" target support'
depends on NETFILTER_ADVANCED
diff --git a/net/netfilter/Makefile b/net/netfilter/Makefile
index 14e3a8f..8eb541d 100644
--- a/net/netfilter/Makefile
+++ b/net/netfilter/Makefile
@@ -45,6 +45,7 @@ obj-$(CONFIG_NETFILTER_XT_MARK) += xt_mark.o
obj-$(CONFIG_NETFILTER_XT_CONNMARK) += xt_connmark.o
# targets
+obj-$(CONFIG_NETFILTER_XT_TARGET_CHECKSUM) += xt_CHECKSUM.o
obj-$(CONFIG_NETFILTER_XT_TARGET_CLASSIFY) += xt_CLASSIFY.o
obj-$(CONFIG_NETFILTER_XT_TARGET_CONNSECMARK) += xt_CONNSECMARK.o
obj-$(CONFIG_NETFILTER_XT_TARGET_CT) += xt_CT.o
diff --git a/net/netfilter/xt_CHECKSUM.c b/net/netfilter/xt_CHECKSUM.c
new file mode 100644
index 0000000..0fee1a7
--- /dev/null
+++ b/net/netfilter/xt_CHECKSUM.c
@@ -0,0 +1,72 @@
+/* iptables module for the packet checksum mangling
+ *
+ * (C) 2002 by Harald Welte <laforge@netfilter.org>
+ * (C) 2010 Red Hat, Inc.
+ *
+ * Author: Michael S. Tsirkin <mst@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/in.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+
+#include <linux/netfilter/x_tables.h>
+#include <linux/netfilter/xt_CHECKSUM.h>
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Michael S. Tsirkin <mst@redhat.com>");
+MODULE_DESCRIPTION("Xtables: checksum modification");
+MODULE_ALIAS("ipt_CHECKSUM");
+MODULE_ALIAS("ip6t_CHECKSUM");
+
+static unsigned int
+checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+{
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ skb_checksum_help(skb);
+
+ return XT_CONTINUE;
+}
+
+static int checksum_tg_check(const struct xt_tgchk_param *par)
+{
+ const struct xt_CHECKSUM_info *einfo = par->targinfo;
+
+ if (einfo->operation & ~XT_CHECKSUM_OP_FILL) {
+ pr_info("unsupported CHECKSUM operation %x\n", einfo->operation);
+ return -EINVAL;
+ }
+ if (!einfo->operation) {
+ pr_info("no CHECKSUM operation enabled\n");
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static struct xt_target checksum_tg_reg __read_mostly = {
+ .name = "CHECKSUM",
+ .family = NFPROTO_UNSPEC,
+ .target = checksum_tg,
+ .targetsize = sizeof(struct xt_CHECKSUM_info),
+ .table = "mangle",
+ .checkentry = checksum_tg_check,
+ .me = THIS_MODULE,
+};
+
+static int __init checksum_tg_init(void)
+{
+ return xt_register_target(&checksum_tg_reg);
+}
+
+static void __exit checksum_tg_exit(void)
+{
+ xt_unregister_target(&checksum_tg_reg);
+}
+
+module_init(checksum_tg_init);
+module_exit(checksum_tg_exit);
--
1.7.2.rc0.14.g41c1c
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCHv3] extensions: libxt_CHECKSUM extension
2010-07-11 15:06 [PATCHv2] netfilter: add CHECKSUM target Michael S. Tsirkin
@ 2010-07-11 15:14 ` Michael S. Tsirkin
2010-07-15 9:39 ` Patrick McHardy
0 siblings, 1 reply; 131+ messages in thread
From: Michael S. Tsirkin @ 2010-07-11 15:14 UTC (permalink / raw)
To: Patrick McHardy, David S. Miller, Jan Engelhardt, Randy Dunlap,
netfilter-devel, netfilter, coreteam, linux-kernel, netdev, kvm,
herbert
This adds a `CHECKSUM' target, which can be used in the iptables mangle
table.
You can use this target to compute and fill in the checksum in
a packet that lacks a checksum. This is particularly useful if you
need to work around old applications, such as DHCP clients, that do
not work well with checksum offloads, when you don't want to disable
checksum offload in your device.
The problem happens in the field with virtualized applications.
For reference, see Red Hat bz 605555, as well as
http://www.spinics.net/lists/kvm/msg37660.html
Typical expected use (helps old dhclient binary running in a VM):
iptables -A POSTROUTING -t mangle -p udp --dport bootpc \
-j CHECKSUM --checksum-fill
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
Correction in the documentation. Sorry about the noise.
Changes from v2:
updated man file
Changes from v1:
switched from ipt to xt
extensions/libxt_CHECKSUM.c | 99 +++++++++++++++++++++++++++++++++++++++++
extensions/libxt_CHECKSUM.man | 8 +++
2 files changed, 107 insertions(+), 0 deletions(-)
create mode 100644 extensions/libxt_CHECKSUM.c
create mode 100644 extensions/libxt_CHECKSUM.man
diff --git a/extensions/libxt_CHECKSUM.c b/extensions/libxt_CHECKSUM.c
new file mode 100644
index 0000000..00fbd8f
--- /dev/null
+++ b/extensions/libxt_CHECKSUM.c
@@ -0,0 +1,99 @@
+/* Shared library add-on to xtables for CHECKSUM
+ *
+ * (C) 2002 by Harald Welte <laforge@gnumonks.org>
+ * (C) 2010 by Red Hat, Inc
+ * Author: Michael S. Tsirkin <mst@redhat.com>
+ *
+ * This program is distributed under the terms of GNU GPL v2, 1991
+ *
+ * libxt_CHECKSUM.c borrowed some bits from libipt_ECN.c
+ *
+ * $Id$
+ */
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <getopt.h>
+
+#include <xtables.h>
+#include <linux/netfilter/xt_CHECKSUM.h>
+
+static void CHECKSUM_help(void)
+{
+ printf(
+"CHECKSUM target options\n"
+" --checksum-fill Fill in packet checksum.\n");
+}
+
+static const struct option CHECKSUM_opts[] = {
+ { "checksum-fill", 0, NULL, 'F' },
+ { .name = NULL }
+};
+
+static int CHECKSUM_parse(int c, char **argv, int invert, unsigned int *flags,
+ const void *entry, struct xt_entry_target **target)
+{
+ struct xt_CHECKSUM_info *einfo
+ = (struct xt_CHECKSUM_info *)(*target)->data;
+
+ switch (c) {
+ case 'F':
+ if (*flags)
+ xtables_error(PARAMETER_PROBLEM,
+ "CHECKSUM target: Only use --checksum-fill ONCE!");
+ einfo->operation = XT_CHECKSUM_OP_FILL;
+ *flags |= XT_CHECKSUM_OP_FILL;
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+static void CHECKSUM_check(unsigned int flags)
+{
+ if (!flags)
+ xtables_error(PARAMETER_PROBLEM,
+ "CHECKSUM target: Parameter --checksum-fill is required");
+}
+
+static void CHECKSUM_print(const void *ip, const struct xt_entry_target *target,
+ int numeric)
+{
+ const struct xt_CHECKSUM_info *einfo =
+ (const struct xt_CHECKSUM_info *)target->data;
+
+ printf("CHECKSUM ");
+
+ if (einfo->operation & XT_CHECKSUM_OP_FILL)
+ printf("fill ");
+}
+
+static void CHECKSUM_save(const void *ip, const struct xt_entry_target *target)
+{
+ const struct xt_CHECKSUM_info *einfo =
+ (const struct xt_CHECKSUM_info *)target->data;
+
+ if (einfo->operation & XT_CHECKSUM_OP_FILL)
+ printf("--checksum-fill ");
+}
+
+static struct xtables_target checksum_tg_reg = {
+ .name = "CHECKSUM",
+ .version = XTABLES_VERSION,
+ .family = NFPROTO_UNSPEC,
+ .size = XT_ALIGN(sizeof(struct xt_CHECKSUM_info)),
+ .userspacesize = XT_ALIGN(sizeof(struct xt_CHECKSUM_info)),
+ .help = CHECKSUM_help,
+ .parse = CHECKSUM_parse,
+ .final_check = CHECKSUM_check,
+ .print = CHECKSUM_print,
+ .save = CHECKSUM_save,
+ .extra_opts = CHECKSUM_opts,
+};
+
+void _init(void)
+{
+ xtables_register_target(&checksum_tg_reg);
+}
diff --git a/extensions/libxt_CHECKSUM.man b/extensions/libxt_CHECKSUM.man
new file mode 100644
index 0000000..92ae700
--- /dev/null
+++ b/extensions/libxt_CHECKSUM.man
@@ -0,0 +1,8 @@
+This target allows to selectively work around broken/old applications.
+It can only be used in the mangle table.
+.TP
+\fB\-\-checksum\-fill\fP
+Compute and fill in the checksum in a packet that lacks a checksum.
+This is particularly useful, if you need to work around old applications
+such as dhcp clients, that do not work well with checksum offloads,
+but don't want to disable checksum offload in your device.
--
1.7.2.rc0.14.g41c1c
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: [PATCHv3] extensions: libxt_CHECKSUM extension
2010-07-11 15:14 ` [PATCHv3] extensions: libxt_CHECKSUM extension Michael S. Tsirkin
@ 2010-07-15 9:39 ` Patrick McHardy
2010-07-15 10:17 ` several messages Jan Engelhardt
0 siblings, 1 reply; 131+ messages in thread
From: Patrick McHardy @ 2010-07-15 9:39 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: David S. Miller, Jan Engelhardt, Randy Dunlap, netfilter-devel,
netfilter, coreteam, linux-kernel, netdev, kvm, herbert
On 11.07.2010 17:14, Michael S. Tsirkin wrote:
> diff --git a/extensions/libxt_CHECKSUM.c b/extensions/libxt_CHECKSUM.c
> new file mode 100644
> index 0000000..00fbd8f
> --- /dev/null
> +++ b/extensions/libxt_CHECKSUM.c
> @@ -0,0 +1,99 @@
> +/* Shared library add-on to xtables for CHECKSUM
> + *
> + * (C) 2002 by Harald Welte <laforge@gnumonks.org>
> + * (C) 2010 by Red Hat, Inc
> + * Author: Michael S. Tsirkin <mst@redhat.com>
> + *
> + * This program is distributed under the terms of GNU GPL v2, 1991
> + *
> + * libxt_CHECKSUM.c borrowed some bits from libipt_ECN.c
> + *
> + * $Id$
Please no CVS IDs.
> + */
> +#include <stdio.h>
> +#include <string.h>
> +#include <stdlib.h>
> +#include <getopt.h>
> +
> +#include <xtables.h>
> +#include <linux/netfilter/xt_CHECKSUM.h>
> +
> +static void CHECKSUM_help(void)
> +{
> + printf(
> +"CHECKSUM target options\n"
> +" --checksum-fill Fill in packet checksum.\n");
> +}
> +
> +static const struct option CHECKSUM_opts[] = {
> + { "checksum-fill", 0, NULL, 'F' },
> + { .name = NULL }
> +};
> +
> +static int CHECKSUM_parse(int c, char **argv, int invert, unsigned int *flags,
> + const void *entry, struct xt_entry_target **target)
> +{
> + struct xt_CHECKSUM_info *einfo
> + = (struct xt_CHECKSUM_info *)(*target)->data;
> +
> + switch (c) {
> + case 'F':
> + if (*flags)
> + xtables_error(PARAMETER_PROBLEM,
> + "CHECKSUM target: Only use --checksum-fill ONCE!");
There is a helper function called xtables_param_act for checking double
arguments and printing a standardized error message.
> + einfo->operation = XT_CHECKSUM_OP_FILL;
> + *flags |= XT_CHECKSUM_OP_FILL;
> + break;
> + default:
> + return 0;
> + }
> +
> + return 1;
> +}
> +
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2010-07-15 9:39 ` Patrick McHardy
@ 2010-07-15 10:17 ` Jan Engelhardt
0 siblings, 0 replies; 131+ messages in thread
From: Jan Engelhardt @ 2010-07-15 10:17 UTC (permalink / raw)
To: Patrick McHardy
Cc: Michael S. Tsirkin, David S. Miller, Randy Dunlap,
netfilter-devel, netfilter, coreteam, linux-kernel, netdev, kvm,
herbert
On Thursday 2010-07-15 11:36, Patrick McHardy wrote:
>> +struct xt_CHECKSUM_info {
>> + u_int8_t operation; /* bitset of operations */
>
>Please use __u8 in public header files.
>
>> +#include <linux/in.h>
>
>Do you really need in.h?
>
>> + * $Id$
>
>Please no CVS IDs.
>
>> + switch (c) {
>> + case 'F':
>> + if (*flags)
>> + xtables_error(PARAMETER_PROBLEM,
>> + "CHECKSUM target: Only use --checksum-fill ONCE!");
>
>There is a helper function called xtables_param_act for checking double
>arguments and printing a standardized error message.
I took care of these for Xt-a.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Layla 3G does not recover from ACPI Suspend
@ 2009-09-06 14:16 Mark Hills
2009-09-08 19:32 ` Giuliano Pochini
0 siblings, 1 reply; 131+ messages in thread
From: Mark Hills @ 2009-09-06 14:16 UTC (permalink / raw)
To: alsa-devel
I have an Echo Layla 3G in my workstation. It works for audio, but does
not recover from ACPI suspend to RAM.
On recovery the system is fine, and the Layla exists in
/proc/asound/cards. But when the Layla is used it prints this message to
dmesg, multiple times:
wait_handshake(): Timeout waiting for DSP
Here are the relevant dmesg lines after awakening:
Echoaudio Echo3G 0000:03:03.0: restoring config space at offset 0xf (was 0x100, writing 0x104)
Echoaudio Echo3G 0000:03:03.0: restoring config space at offset 0x4 (was 0x0, writing 0xe7d00000)
Echoaudio Echo3G 0000:03:03.0: restoring config space at offset 0x3 (was 0x0, writing 0xc010)
Echoaudio Echo3G 0000:03:03.0: restoring config space at offset 0x1 (was 0x2800000, writing 0x2800112)
Echoaudio Echo3G 0000:03:03.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
I couldn't see anything loading the firmware. I thought unloading and
loading snd-echo3g after recovery would help. This shows the firmware
being loaded, but then no ALSA device is shown in /proc/asound/cards. On
loading snd-echo3g:
Echoaudio Echo3G 0000:03:03.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
Echoaudio Echo3G 0000:03:03.0: firmware: requesting ea/echo3g_dsp.fw
Echoaudio Echo3G 0000:03:03.0: firmware: requesting ea/loader_dsp.fw
Echoaudio Echo3G 0000:03:03.0: PCI INT A disabled
Echoaudio Echo3G: probe of 0000:03:03.0 failed with error -5
Is the firmware not being loaded when it should be? Or is there some extra
initialisation not being done (e.g. init_hw() in echo3g_dsp.c)?
All other devices in the system recover fine, but as far as I'm aware no
other PCI devices require firmware. It's a Dell x86 system, kernel
2.6.31-rc7.
Thanks for any help,
--
Mark
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Layla 3G does not recover from ACPI Suspend
2009-09-06 14:16 Layla 3G does not recover from ACPI Suspend Mark Hills
@ 2009-09-08 19:32 ` Giuliano Pochini
2009-09-08 22:56 ` several messages Mark Hills
0 siblings, 1 reply; 131+ messages in thread
From: Giuliano Pochini @ 2009-09-08 19:32 UTC (permalink / raw)
To: Mark Hills; +Cc: alsa-devel
On Sun, 6 Sep 2009 15:16:57 +0100 (BST)
Mark Hills <mark@pogo.org.uk> wrote:
> I have an Echo Layla 3G in my workstation. It works for audio, but does
> not recover from ACPI suspend to RAM.
It does not recover because there is no suspend/resume support. I wrote
nearly complete support a long time ago, but it wasn't merged due to two
unsolved issues: the resurrection procedure (actually a reinit from
scratch) was very unreliable on my Gina24 for unknown reasons, and there
was an atomicity issue in the rawmidi interface.
I can provide you with a patch for testing in a few days.
--
Giuliano.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2009-09-08 19:32 ` Giuliano Pochini
@ 2009-09-08 22:56 ` Mark Hills
0 siblings, 0 replies; 131+ messages in thread
From: Mark Hills @ 2009-09-08 22:56 UTC (permalink / raw)
To: Giuliano Pochini; +Cc: Takashi Iwai, alsa-devel
On Tue, 8 Sep 2009, Giuliano Pochini wrote:
> On Sun, 6 Sep 2009 15:16:57 +0100 (BST)
> Mark Hills <mark@pogo.org.uk> wrote:
>
>> I have an Echo Layla 3G in my workstation. It works for audio, but does
>> not recover from ACPI suspend to RAM.
>
> It does not recover because there is no suspend/resume support. I wrote
> nearly complete support a long time ago, but it wasn't merged due to two
> unsolved issues: the resurrection procedure (actually a reinit from
> scratch) was very unreliable on my Gina24 for unknown reasons, and there
> was an atomicity issue in the rawmidi interface.
> I can provide you with a patch for testing in a few days.
Thanks Giuliano and Takashi for the replies.
The reason for my quietness is that I decided loading/unloading the module
was a hack, so I followed Takashi's documentation, aiming to implement some
kind of suspend/resume.
However, I didn't have a great deal of luck; it's made especially hard by
not having a separate serial console or any way of viewing debug messages
when the machine locks up. It seemed that the amount of de-initialisation
affected the ability to reinitialise the Layla 3G card from scratch. Then I
ran out of time.
It would be great to see the patch, I'd be very happy to help with
testing. Thanks for your work on this.
--
Mark
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH] libxtables: Introduce global params structuring
@ 2009-02-09 20:57 jamal
2009-02-09 21:04 ` several messages Jan Engelhardt
0 siblings, 1 reply; 131+ messages in thread
From: jamal @ 2009-02-09 20:57 UTC (permalink / raw)
To: Patrick McHardy; +Cc: Jan Engelhardt, Pablo Neira Ayuso, netfilter-devel
[-- Attachment #1: Type: text/plain, Size: 41 bytes --]
Here's the basic change.
cheers,
jamal
[-- Attachment #2: iptv2-0 --]
[-- Type: text/plain, Size: 2246 bytes --]
commit bc259a1516e63a38496d568dff2d6135b925d968
Author: Jamal Hadi Salim <hadi@cyberus.ca>
Date: Mon Feb 9 15:20:18 2009 -0500
Introduce a new struct, xtables_globals, so as to
localize the globals used and help with symbol renames.
The applications must invoke xtables_set_params() before starting
to use any iptables APIs.
xtables_set_params() is intended to free xtables from depending
(as it does right now) on the existence of such external definitions
(from iptables/iptables6 etc). At the moment, xtables won't even
compile without the presence of at least one of {iptables/iptables6 etc}
Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
diff --git a/include/xtables.h.in b/include/xtables.h.in
index 02750fb..61dbc76 100644
--- a/include/xtables.h.in
+++ b/include/xtables.h.in
@@ -33,6 +33,14 @@
struct in_addr;
+struct xtables_globals
+{
+ unsigned int option_offset;
+ char *program_version;
+ char *program_name;
+ struct option *opts;
+};
+
/* Include file for additions: new matches and targets. */
struct xtables_match
{
@@ -195,6 +203,7 @@ extern void *xtables_malloc(size_t);
extern int xtables_insmod(const char *, const char *, bool);
extern int xtables_load_ko(const char *, bool);
+int xtables_set_params(struct xtables_globals *xtp);
extern struct xtables_match *xtables_find_match(const char *name,
enum xtables_tryload, struct xtables_rule_match **match);
diff --git a/xtables.c b/xtables.c
index 6c95475..aad5e53 100644
--- a/xtables.c
+++ b/xtables.c
@@ -46,6 +46,28 @@
#define PROC_SYS_MODPROBE "/proc/sys/kernel/modprobe"
#endif
+struct xtables_globals *xt_params;
+/**
+ * xtables_set_params - set the global parameters used by xtables
+ * @xtp: input xtables_globals structure
+ *
+ * The app is expected to pass a valid xtables_globals data-filled
+ * with proper values
+ * @xtp cannot be NULL
+ *
+ * Returns -1 on failure to set and 0 on success
+ */
+int xtables_set_params(struct xtables_globals *xtp)
+{
+ if (!xtp) {
+ fprintf(stderr, "%s: Illegal global params\n",__func__);
+ return -1;
+ }
+
+ xt_params = xtp;
+ return 0;
+}
+
/**
* xtables_afinfo - protocol family dependent information
* @kmod: kernel module basename (e.g. "ip_tables")
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: several messages
2009-02-09 20:57 [PATCH] libxtables: Introduce global params structuring jamal
@ 2009-02-09 21:04 ` Jan Engelhardt
2009-02-09 21:27 ` jamal
0 siblings, 1 reply; 131+ messages in thread
From: Jan Engelhardt @ 2009-02-09 21:04 UTC (permalink / raw)
To: jamal; +Cc: Patrick McHardy, Pablo Neira Ayuso, netfilter-devel
On Monday 2009-02-09 21:45, jamal wrote:
>
>Ok, I just synced with latest git. I will send you a few patches first.
>My path to resolving tc/ipt is to start with being able to take a basic
>useless program like:
>
>----------
>#include <xtables.h>
>int main(int argc, char **argv) {
>
> return 0;
>}
>--------
>
>then compile and link with "gcc useless.c -lxtables -ldl"
>
>As it is right now I have to define in the minimal exit_error()
I do not think a library should call exit() and cause the main program
to terminate; to this end it might be best to add a
void (*exit_error)(..
function pointer to the xtables_global struct you are proposing.
>
>Here's the basic change.
If you could convert iptables.c and friends to also make use of this,
that'd be great.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2009-02-09 21:04 ` several messages Jan Engelhardt
@ 2009-02-09 21:27 ` jamal
2009-02-09 21:44 ` Jan Engelhardt
0 siblings, 1 reply; 131+ messages in thread
From: jamal @ 2009-02-09 21:27 UTC (permalink / raw)
To: Jan Engelhardt; +Cc: Patrick McHardy, Pablo Neira Ayuso, netfilter-devel
On Mon, 2009-02-09 at 22:04 +0100, Jan Engelhardt wrote:
>
> I do not think a library should call exit() and cause the main program
> to terminate;
> to this end it might be best to add a
>
> void (*exit_error)(..
>
> function pointer to the xtables_global struct you are proposing.
>
>
Thanks for the suggestion - it sounds reasonable.
Note, however, grep says there are about 700 references to exit_error()
- so my intent of moving it into xtables.c is for usability more than
anything.
> If you could convert iptables.c and friends to also make use of this,
> that'd be great.
There are only 3 definitions as far as I can see. If I can convert those
to use that global struct then I should be able to compile that basic
program.
cheers,
jamal
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2009-02-09 21:27 ` jamal
@ 2009-02-09 21:44 ` Jan Engelhardt
0 siblings, 0 replies; 131+ messages in thread
From: Jan Engelhardt @ 2009-02-09 21:44 UTC (permalink / raw)
To: jamal; +Cc: Patrick McHardy, Pablo Neira Ayuso, netfilter-devel
On Monday 2009-02-09 22:27, jamal wrote:
>On Mon, 2009-02-09 at 22:04 +0100, Jan Engelhardt wrote:
>>
>> I do not think a library should call exit() and cause the main program
>> to terminate;
>> to this end it might be best to add a
>>
>> void (*exit_error)(..
>>
>> function pointer to the xtables_global struct you are proposing.
>
>Thanks for the suggestion - it sounds reasonable.
>Note, however, grep says there are about 700 references to exit_error()
>- so my intent of moving it into xtables.c is for usability more than
>anything.
Hm, you are right; much of the code assumes that exit_error()
never returns. We need to stick to that for the time being.
>> If you could convert iptables.c and friends to also make use of this,
>> that'd be great.
>
>There are only 3 definitions as far as i can see. If i can convert those
>to use that global struct then I should be able to compile that basic
>program.
Yep.
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 0/1] HID: hid_apple is not used for apple alu wireless keyboards
@ 2008-11-26 14:33 Jan Scholz
2008-11-26 14:33 ` [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices Jan Scholz
0 siblings, 1 reply; 131+ messages in thread
From: Jan Scholz @ 2008-11-26 14:33 UTC (permalink / raw)
To: jkosina; +Cc: jirislaby, linux-kernel, Jan Scholz
Hi Jiri,
While parsing 'hid_blacklist' in hid-core.c, my Apple alu wireless
keyboard is not found. This happens because in the blacklist it
is declared with HID_USB_DEVICE, although these keyboards are really
Bluetooth devices. The same holds for the 'apple_devices' list in
hid-apple.c.
This patch fixes it by changing HID_USB_DEVICE to
HID_BLUETOOTH_DEVICE in those two lists.
Jan Scholz (1):
HID: Apple alu wireless keyboards are bluetooth devices
drivers/hid/hid-apple.c | 6 +++---
drivers/hid/hid-core.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices
2008-11-26 14:33 [PATCH 0/1] HID: hid_apple is not used for apple alu wireless keyboards Jan Scholz
@ 2008-11-26 14:33 ` Jan Scholz
2008-11-26 14:54 ` Jiri Kosina
0 siblings, 1 reply; 131+ messages in thread
From: Jan Scholz @ 2008-11-26 14:33 UTC (permalink / raw)
To: jkosina; +Cc: jirislaby, linux-kernel, Jan Scholz
Changed HID_USB_DEVICE to HID_BLUETOOTH_DEVICE for the apple alu
wireless keyboards
Signed-off-by: Jan Scholz <Scholz@fias.uni-frankfurt.de>
---
drivers/hid/hid-apple.c | 6 +++---
drivers/hid/hid-core.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
index 9b97795..aa28aed 100644
--- a/drivers/hid/hid-apple.c
+++ b/drivers/hid/hid-apple.c
@@ -400,12 +400,12 @@ static const struct hid_device_id apple_devices[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
APPLE_RDESC_JIS },
- { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
- { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
APPLE_ISO_KEYBOARD },
- { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
.driver_data = APPLE_HAS_FN },
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
index 147ec59..98c7e2d 100644
--- a/drivers/hid/hid-core.c
+++ b/drivers/hid/hid-core.c
@@ -1241,9 +1241,9 @@ static const struct hid_device_id hid_blacklist[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ANSI) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ISO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS) },
- { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI) },
- { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO) },
- { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_JIS) },
--
1.6.0.4
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices
2008-11-26 14:33 ` [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices Jan Scholz
@ 2008-11-26 14:54 ` Jiri Kosina
2008-11-26 15:17 ` Jan Scholz
0 siblings, 1 reply; 131+ messages in thread
From: Jiri Kosina @ 2008-11-26 14:54 UTC (permalink / raw)
To: Jan Scholz; +Cc: jirislaby, linux-kernel
On Wed, 26 Nov 2008, Jan Scholz wrote:
> Changed HID_USB_DEVICE to HID_BLUETOOTH_DEVICE for the apple alu
> wireless keyboards
> Signed-off-by: Jan Scholz <Scholz@fias.uni-frankfurt.de>
> ---
> drivers/hid/hid-apple.c | 6 +++---
> drivers/hid/hid-core.c | 6 +++---
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
> index 9b97795..aa28aed 100644
> --- a/drivers/hid/hid-apple.c
> +++ b/drivers/hid/hid-apple.c
> @@ -400,12 +400,12 @@ static const struct hid_device_id apple_devices[] = {
> { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS),
> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
> APPLE_RDESC_JIS },
> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
> APPLE_ISO_KEYBOARD },
> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
> { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
> .driver_data = APPLE_HAS_FN },
Hi Jan,
shouldn't we rather have both USB and Bluetooth variants?
Thanks,
--
Jiri Kosina
SUSE Labs
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices
2008-11-26 14:54 ` Jiri Kosina
@ 2008-11-26 15:17 ` Jan Scholz
2008-11-26 15:33 ` Jiri Kosina
0 siblings, 1 reply; 131+ messages in thread
From: Jan Scholz @ 2008-11-26 15:17 UTC (permalink / raw)
To: Jiri Kosina; +Cc: Jan Scholz, jirislaby, linux-kernel
Jiri Kosina <jkosina@suse.cz> writes:
> On Wed, 26 Nov 2008, Jan Scholz wrote:
>
>> Changed HID_USB_DEVICE to HID_BLUETOOTH_DEVICE for the apple alu
>> wireless keyboards
>> Signed-off-by: Jan Scholz <Scholz@fias.uni-frankfurt.de>
>> ---
>> drivers/hid/hid-apple.c | 6 +++---
>> drivers/hid/hid-core.c | 6 +++---
>> 2 files changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
>> index 9b97795..aa28aed 100644
>> --- a/drivers/hid/hid-apple.c
>> +++ b/drivers/hid/hid-apple.c
>> @@ -400,12 +400,12 @@ static const struct hid_device_id apple_devices[] = {
>> { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS),
>> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
>> APPLE_RDESC_JIS },
>> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
>> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
>> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
>> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
>> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
>> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
>> APPLE_ISO_KEYBOARD },
>> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
>> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
>> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
>> { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
>> .driver_data = APPLE_HAS_FN },
>
> Hi Jan,
>
> shouldn't we rather have both USB and Bluetooth variants?
>
> Thanks,
Hi Jiri,
Hm, I thought the USB_DEVICE_ID_APPLE_ALU_{ANSI,ISO,JIS} were Apple's USB
aluminum keyboards (standard desktop size), while the
USB_DEVICE_ID_APPLE_ALU_WIRELESS_{ANSI,ISO,JIS} ones were the aluminum
Bluetooth keyboards (notebook sized, no numeric keypad, etc.).
The one I own is a USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO, German layout,
with Bluetooth; unfortunately I don't have access to a USB variant.
Jan
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices
2008-11-26 15:17 ` Jan Scholz
@ 2008-11-26 15:33 ` Jiri Kosina
2008-11-26 21:06 ` Tobias Müller
0 siblings, 1 reply; 131+ messages in thread
From: Jiri Kosina @ 2008-11-26 15:33 UTC (permalink / raw)
To: Jan Scholz, Tobias Mueller; +Cc: jirislaby, linux-kernel
On Wed, 26 Nov 2008, Jan Scholz wrote:
> Jiri Kosina <jkosina@suse.cz> writes:
>
> > On Wed, 26 Nov 2008, Jan Scholz wrote:
> >
> >> Changed HID_USB_DEVICE to HID_BLUETOOTH_DEVICE for the apple alu
> >> wireless keyboards
> >> Signed-off-by: Jan Scholz <Scholz@fias.uni-frankfurt.de>
> >> ---
> >> drivers/hid/hid-apple.c | 6 +++---
> >> drivers/hid/hid-core.c | 6 +++---
> >> 2 files changed, 6 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
> >> index 9b97795..aa28aed 100644
> >> --- a/drivers/hid/hid-apple.c
> >> +++ b/drivers/hid/hid-apple.c
> >> @@ -400,12 +400,12 @@ static const struct hid_device_id apple_devices[] = {
> >> { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS),
> >> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
> >> APPLE_RDESC_JIS },
> >> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
> >> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
> >> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
> >> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
> >> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
> >> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
> >> APPLE_ISO_KEYBOARD },
> >> - { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
> >> + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
> >> .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
> >> { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
> >> .driver_data = APPLE_HAS_FN },
> >
> > Hi Jan,
> >
> > shouldn't we rather have both USB and Bluetooth variants?
> >
> > Thanks,
>
> Hi Jiri,
>
> Hm, I thought the USB_DEVICE_ID_APPLE_ALU_{ANSI,ISO,JIS} were Apple's USB
> aluminum keyboards (standard desktop size), while the
> USB_DEVICE_ID_APPLE_ALU_WIRELESS_{ANSI,ISO,JIS} ones were the aluminum
> Bluetooth keyboards (notebook sized, no numeric keypad, etc.).
>
> The one I own is a USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO, German layout,
> with Bluetooth; unfortunately I don't have access to a USB variant.
Tobias Mueller added these device IDs, so presumably he has tested them and
can provide some insight. I don't have the hardware myself, so I have no
idea whether only Bluetooth variants exist or whether USB ones are also
available.
Tobias?
Thanks,
--
Jiri Kosina
SUSE Labs
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices
2008-11-26 15:33 ` Jiri Kosina
@ 2008-11-26 21:06 ` Tobias Müller
2008-11-27 0:57 ` several messages Jiri Kosina
0 siblings, 1 reply; 131+ messages in thread
From: Tobias Müller @ 2008-11-26 21:06 UTC (permalink / raw)
To: Jiri Kosina; +Cc: Jan Scholz, Tobias Mueller, jirislaby, linux-kernel
>> The one I own is a USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO, German layout,
>> with Bluetooth; unfortunately I don't have access to a USB variant.
>
> Tobias Mueller added these device IDs, so presumably he has tested them and
> can provide some insight. I don't have the hardware myself, so I have no
> idea whether only Bluetooth variants exist or whether USB ones are also
> available.
>
> Tobias?
I own the USB variant and these are the right IDs for it. The wireless
IDs came from another patch I merged together with mine. I don't have a
wireless version.
Regards
Tobias
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2008-11-26 21:06 ` Tobias Müller
@ 2008-11-27 0:57 ` Jiri Kosina
0 siblings, 0 replies; 131+ messages in thread
From: Jiri Kosina @ 2008-11-27 0:57 UTC (permalink / raw)
To: J.R. Mauro, Tobias Müller; +Cc: Jan Scholz, jirislaby, linux-kernel
[-- Attachment #1: Type: TEXT/PLAIN, Size: 406 bytes --]
On Wed, 26 Nov 2008, J.R. Mauro wrote:
> There is one bluetooth model and one USB model.
On Wed, 26 Nov 2008, Tobias Müller wrote:
> I own the USB variant and these are the right id for that. The wireless
> IDs were from another patch I merged together with mine. I don't have a
> wireless version.
OK, so therefore you confirm that Jan's patch is OK, right?
Thanks a lot,
--
Jiri Kosina
SUSE Labs
^ permalink raw reply [flat|nested] 131+ messages in thread
* [PATCH 1/2] HID: add hid_type
@ 2008-10-19 14:15 Jiri Slaby
2008-10-19 14:15 ` [PATCH 2/2] HID: fix appletouch regression Jiri Slaby
0 siblings, 1 reply; 131+ messages in thread
From: Jiri Slaby @ 2008-10-19 14:15 UTC (permalink / raw)
To: jkosina
Cc: linux-input, linux-kernel, Steven Noonan, Justin Mattock,
Sven Anders, Marcel Holtmann, linux-bluetooth, Jiri Slaby
Add a type to the hid structure to distinguish which device type
(mouse/kbd) we are talking to. Needed for per-device-type ignore
list support.
Note: this patch leaves the type as unknown for Bluetooth devices;
there is no support for this in the hidp code.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
---
drivers/hid/usbhid/hid-core.c | 8 ++++++++
include/linux/hid.h | 7 +++++++
2 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
index 1d3b8a3..20617d8 100644
--- a/drivers/hid/usbhid/hid-core.c
+++ b/drivers/hid/usbhid/hid-core.c
@@ -972,6 +972,14 @@ static int hid_probe(struct usb_interface *intf, const struct usb_device_id *id)
hid->vendor = le16_to_cpu(dev->descriptor.idVendor);
hid->product = le16_to_cpu(dev->descriptor.idProduct);
hid->name[0] = 0;
+ switch (intf->cur_altsetting->desc.bInterfaceProtocol) {
+ case USB_INTERFACE_PROTOCOL_KEYBOARD:
+ hid->type = HID_TYPE_KEYBOARD;
+ break;
+ case USB_INTERFACE_PROTOCOL_MOUSE:
+ hid->type = HID_TYPE_MOUSE;
+ break;
+ }
if (dev->manufacturer)
strlcpy(hid->name, dev->manufacturer, sizeof(hid->name));
diff --git a/include/linux/hid.h b/include/linux/hid.h
index f13bca2..36a3953 100644
--- a/include/linux/hid.h
+++ b/include/linux/hid.h
@@ -417,6 +417,12 @@ struct hid_input {
struct input_dev *input;
};
+enum hid_type {
+ HID_TYPE_UNKNOWN = 0,
+ HID_TYPE_MOUSE,
+ HID_TYPE_KEYBOARD
+};
+
struct hid_driver;
struct hid_ll_driver;
@@ -431,6 +437,7 @@ struct hid_device { /* device report descriptor */
__u32 vendor; /* Vendor ID */
__u32 product; /* Product ID */
__u32 version; /* HID version */
+ enum hid_type type; /* device type (mouse, kbd, ...) */
unsigned country; /* HID country */
struct hid_report_enum report_enum[HID_REPORT_TYPES];
--
1.6.0.2
^ permalink raw reply related [flat|nested] 131+ messages in thread
* [PATCH 2/2] HID: fix appletouch regression
2008-10-19 14:15 [PATCH 1/2] HID: add hid_type Jiri Slaby
@ 2008-10-19 14:15 ` Jiri Slaby
2008-10-19 19:40 ` several messages Jiri Kosina
0 siblings, 1 reply; 131+ messages in thread
From: Jiri Slaby @ 2008-10-19 14:15 UTC (permalink / raw)
To: jkosina
Cc: linux-input, linux-kernel, Steven Noonan, Justin Mattock,
Sven Anders, Marcel Holtmann, linux-bluetooth, Jiri Slaby
The appletouch mouse devices are grabbed by the hid bus and not
released even if the apple driver says ENODEV (as expected).
Move the ignore handling one level up to prevent the hid layer from
grabbing the device and to return an error to usbhid, which, as a
result, releases the device.
Otherwise input/mouse/appletouch and the like cannot be attached.
Reported-by: Justin Mattock <justinmattock@gmail.com>
Reported-by: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
---
drivers/hid/hid-apple.c | 63 ++++++++++++++++------------------------------
drivers/hid/hid-core.c | 33 ++++++++++++++++++++++++
2 files changed, 55 insertions(+), 41 deletions(-)
diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
index fd7f896..c6ab4ba 100644
--- a/drivers/hid/hid-apple.c
+++ b/drivers/hid/hid-apple.c
@@ -312,13 +312,6 @@ static int apple_probe(struct hid_device *hdev,
unsigned int connect_mask = HID_CONNECT_DEFAULT;
int ret;
- /* return something else or move to hid layer? device will reside
- allocated */
- if (id->bus == BUS_USB && (quirks & APPLE_IGNORE_MOUSE) &&
- to_usb_interface(hdev->dev.parent)->cur_altsetting->
- desc.bInterfaceProtocol == USB_INTERFACE_PROTOCOL_MOUSE)
- return -ENODEV;
-
asc = kzalloc(sizeof(*asc), GFP_KERNEL);
if (asc == NULL) {
dev_err(&hdev->dev, "can't alloc apple descriptor\n");
@@ -367,38 +360,32 @@ static const struct hid_device_id apple_devices[] = {
.driver_data = APPLE_MIGHTYMOUSE | APPLE_INVERT_HWHEEL },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_ANSI),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_ISO),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER_ANSI),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER_ISO),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE | APPLE_ISO_KEYBOARD },
+ APPLE_ISO_KEYBOARD },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER_JIS),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_ANSI),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_ISO),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE | APPLE_ISO_KEYBOARD },
+ APPLE_ISO_KEYBOARD },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_JIS),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE | APPLE_RDESC_JIS },
+ APPLE_RDESC_JIS },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_ANSI),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_ISO),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE | APPLE_ISO_KEYBOARD },
+ APPLE_ISO_KEYBOARD },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_JIS),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE | APPLE_RDESC_JIS},
+ APPLE_RDESC_JIS },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_ANSI),
.driver_data = APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_ISO),
@@ -406,14 +393,12 @@ static const struct hid_device_id apple_devices[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_JIS),
.driver_data = APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ANSI),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ISO),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE | APPLE_RDESC_JIS },
+ APPLE_RDESC_JIS },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_ISO),
@@ -422,25 +407,21 @@ static const struct hid_device_id apple_devices[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_JIS),
.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
- .driver_data = APPLE_HAS_FN | APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO),
- .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_JIS),
- .driver_data = APPLE_HAS_FN | APPLE_IGNORE_MOUSE | APPLE_RDESC_JIS },
+ .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_ANSI),
- .driver_data = APPLE_HAS_FN | APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_ISO),
- .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_JIS),
- .driver_data = APPLE_HAS_FN | APPLE_IGNORE_MOUSE | APPLE_RDESC_JIS },
+ .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY),
- .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN |
- APPLE_IGNORE_MOUSE },
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
/* Apple wireless Mighty Mouse */
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, 0x030c),
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
index 8a7d9db..90bdc6f 100644
--- a/drivers/hid/hid-core.c
+++ b/drivers/hid/hid-core.c
@@ -1539,6 +1539,35 @@ static const struct hid_device_id hid_ignore_list[] = {
{ }
};
+/**
+ * hid_mouse_ignore_list - mouse devices which must not be held by the hid layer
+ */
+static const struct hid_device_id hid_mouse_ignore_list[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER_JIS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER3_JIS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_JIS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER4_HF_JIS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_JIS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_ANSI) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_ISO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING2_JIS) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
+ { }
+};
+
static bool hid_ignore(struct hid_device *hdev)
{
switch (hdev->vendor) {
@@ -1555,6 +1584,10 @@ static bool hid_ignore(struct hid_device *hdev)
break;
}
+ if (hdev->type == HID_TYPE_MOUSE &&
+ hid_match_id(hdev, hid_mouse_ignore_list))
+ return true;
+
return !!hid_match_id(hdev, hid_ignore_list);
}
--
1.6.0.2
^ permalink raw reply related [flat|nested] 131+ messages in thread
* Re: several messages
2008-10-19 14:15 ` [PATCH 2/2] HID: fix appletouch regression Jiri Slaby
@ 2008-10-19 19:40 ` Jiri Kosina
2008-10-19 20:06 ` Justin Mattock
2008-10-19 22:09 ` Jiri Slaby
0 siblings, 2 replies; 131+ messages in thread
From: Jiri Kosina @ 2008-10-19 19:40 UTC (permalink / raw)
To: Jiri Slaby
Cc: linux-input, linux-kernel, Steven Noonan, Justin Mattock,
Sven Anders, Marcel Holtmann, linux-bluetooth
On Sun, 19 Oct 2008, Jiri Slaby wrote:
> +enum hid_type {
> + HID_TYPE_UNKNOWN = 0,
> + HID_TYPE_MOUSE,
> + HID_TYPE_KEYBOARD
> +};
> +
Do we really need HID_TYPE_KEYBOARD at all? It's not used anywhere in
the code. I'd propose to add it when it is actually needed, i.e. have the
enum contain something like HID_TYPE_MOUSE and HID_TYPE_OTHER for now, and
add whatever becomes necessary in the future. What do you think?
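The minimal variant suggested here would amount to something like the
following standalone sketch (illustrative only, not actual kernel code; the
thread below discusses whether the zero value should be "unknown" or "other"):

```c
/* Sketch of the suggested minimal enum: only the type the core needs
 * to act on (mouse) gets its own value; everything else is lumped
 * together.  Keeping the catch-all at 0 means a zero-initialized
 * hid_device already carries a sane default. */
enum hid_type {
	HID_TYPE_OTHER = 0,
	HID_TYPE_MOUSE,
};
```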
On Sun, 19 Oct 2008, Jiri Slaby wrote:
> +/**
> + * hid_mouse_ignore_list - mouse devices which must not be held by the hid layer
> + */
I think a more descriptive comment would be appropriate here. It might not
be obvious at first sight why this needs to be done separately from
the generic hid_blacklist, i.e. something like
/**
* There are composite devices for which we want to ignore only a certain
* interface. This is a list of devices for which only the mouse interface
* will be ignored.
*/
maybe?
Thanks,
--
Jiri Kosina
SUSE Labs
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
@ 2008-10-19 20:06 ` Justin Mattock
0 siblings, 0 replies; 131+ messages in thread
From: Justin Mattock @ 2008-10-19 20:06 UTC (permalink / raw)
To: Jiri Kosina
Cc: Jiri Slaby, linux-input, linux-kernel, Steven Noonan,
Sven Anders, Marcel Holtmann, linux-bluetooth
On Sun, Oct 19, 2008 at 12:40 PM, Jiri Kosina <jkosina@suse.cz> wrote:
> On Sun, 19 Oct 2008, Jiri Slaby wrote:
>
>> +enum hid_type {
>> + HID_TYPE_UNKNOWN = 0,
>> + HID_TYPE_MOUSE,
>> + HID_TYPE_KEYBOARD
>> +};
>> +
>
> Do we really need HID_TYPE_KEYBOARD at all? It's not used anywhere in
> the code. I'd propose to add it when it is actually needed, i.e. have the
> enum contain something like HID_TYPE_MOUSE and HID_TYPE_OTHER for now, and
> add whatever becomes necessary in the future. What do you think?
>
>
> On Sun, 19 Oct 2008, Jiri Slaby wrote:
>
>> +/**
>> + * hid_mouse_ignore_list - mouse devices which must not be held by the hid layer
>> + */
>
> I think a more descriptive comment would be appropriate here. It might not
> be obvious at first sight why this needs to be done separately from
> the generic hid_blacklist, i.e. something like
>
> /**
> * There are composite devices for which we want to ignore only a certain
> * interface. This is a list of devices for which only the mouse interface
> * will be ignored.
> */
>
> maybe?
>
> Thanks,
>
> --
> Jiri Kosina
> SUSE Labs
>
I can agree with that; what's the point of having something
there if it's not being used (just eating up precious space)?
--
Justin P. Mattock
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2008-10-19 19:40 ` several messages Jiri Kosina
2008-10-19 20:06 ` Justin Mattock
@ 2008-10-19 22:09 ` Jiri Slaby
1 sibling, 0 replies; 131+ messages in thread
From: Jiri Slaby @ 2008-10-19 22:09 UTC (permalink / raw)
To: Jiri Kosina
Cc: linux-input, linux-kernel, Steven Noonan, Justin Mattock,
Sven Anders, Marcel Holtmann, linux-bluetooth
Jiri Kosina wrote:
> On Sun, 19 Oct 2008, Jiri Slaby wrote:
>
>> +enum hid_type {
>> + HID_TYPE_UNKNOWN = 0,
>> + HID_TYPE_MOUSE,
>> + HID_TYPE_KEYBOARD
>> +};
>> +
>
> Do we really need HID_TYPE_KEYBOARD at all? It's not used anywhere in
> the code. I'd propose to add it when it is actually needed, i.e. have the
> enum contain something like HID_TYPE_MOUSE and HID_TYPE_OTHER for now, and
> add whatever becomes necessary in the future. What do you think?
I would use unknown rather than other, since over Bluetooth a mouse is
unknown, not other, if you don't mind.
Or did you mean a tristate: unknown, mouse, and other?
Thanks for the review.
^ permalink raw reply [flat|nested] 131+ messages in thread
[parent not found: <9E397A467F4DB34884A1FD0D5D27CF43018903F96E@msxaoa4.twosigma.com>]
* Re: several messages
[not found] <9E397A467F4DB34884A1FD0D5D27CF43018903F96E@msxaoa4.twosigma.com>
@ 2008-06-12 16:54 ` Benjamin L. Shi
0 siblings, 0 replies; 131+ messages in thread
From: Benjamin L. Shi @ 2008-06-12 16:54 UTC (permalink / raw)
To: xfs
Index: fs/xfs/xfs_iomap.c
===================================================================
RCS file: /src/linux/2.6.18/fs/xfs/xfs_iomap.c,v
retrieving revision 1.1.1.1
retrieving revision 1.2
diff -u -r1.1.1.1 -r1.2
--- fs/xfs/xfs_iomap.c 29 Sep 2006 13:45:19 -0000 1.1.1.1
+++ fs/xfs/xfs_iomap.c 12 Jun 2008 15:59:10 -0000 1.2
@@ -706,11 +706,24 @@
* then we must have run out of space - flush delalloc, and retry..
*/
if (nimaps == 0) {
+ if ((mp->m_flags & XFS_MOUNT_FULL) != 0) {
+ if (mp->m_sb.sb_fdblocks < 500) {
+ // printk("full again %llu\n",
+ // mp->m_sb.sb_fdblocks);
+ return XFS_ERROR(ENOSPC);
+ } else {
+ // printk("not full again %llu\n",
+ // mp->m_sb.sb_fdblocks);
+ mp->m_flags &= ~XFS_MOUNT_FULL;
+ }
+ }
xfs_iomap_enter_trace(XFS_IOMAP_WRITE_NOSPACE,
io, offset, count);
- if (xfs_flush_space(ip, &fsynced, &ioflag))
+ if (xfs_flush_space(ip, &fsynced, &ioflag)) {
+ mp->m_flags |= XFS_MOUNT_FULL;
+ //printk("set full %llu\n", mp->m_sb.sb_fdblocks);
return XFS_ERROR(ENOSPC);
-
+ }
error = 0;
goto retry;
}
Index: fs/xfs/xfs_mount.h
===================================================================
RCS file: /src/linux/2.6.18/fs/xfs/xfs_mount.h,v
retrieving revision 1.1.1.1
retrieving revision 1.2
diff -u -r1.1.1.1 -r1.2
--- fs/xfs/xfs_mount.h 29 Sep 2006 13:45:19 -0000 1.1.1.1
+++ fs/xfs/xfs_mount.h 12 Jun 2008 15:59:10 -0000 1.2
@@ -459,6 +459,7 @@
* I/O size in stat() */
#define XFS_MOUNT_NO_PERCPU_SB (1ULL << 23) /* don't use per-cpu
superblock
counters */
+#define XFS_MOUNT_FULL (1ULL << 24)
/*
>
> On Fri, 6 Oct 2006, David Chinner wrote:
>
>>> The backtrace looked like this:
>>>
>>> ... nfsd_write nfsd_vfs_write vfs_writev do_readv_writev
>>> xfs_file_writev
>>> xfs_write generic_file_buffered_write xfs_get_blocks __xfs_get_blocks
>>> xfs_bmap xfs_iomap xfs_iomap_write_delay xfs_flush_space
>>> xfs_flush_device
>>> schedule_timeout_uninterruptible.
>>
>> Ahhh, this gets hit on the ->prepare_write path
>> (xfs_iomap_write_delay()),
>
> Yes.
>
>> not the allocate path (xfs_iomap_write_allocate()). Sorry - I got myself
> >> (and probably everyone else) confused there, which is why I suspected sync
>> writes - they trigger the allocate path in the write call. I don't think
>> 2.6.18 will change anything.
>>
>> FWIW, I don't think we can avoid this sleep when we first hit ENOSPC
>> conditions, but perhaps once we are certain of the ENOSPC status
>> we can tag the filesystem with this state (say an xfs_mount flag)
>> and only clear that tag when something is freed. We could then
>> use the tag to avoid continually trying extremely hard to allocate
>> space when we know there is none available....
>
> Yes! That's what I was trying to suggest. Thank you.
>
> Is that hard to do?
>
^ permalink raw reply [flat|nested] 131+ messages in thread
[parent not found: <200702211929.17203.david-b@pacbell.net>]
* [patch 6/6] rtc suspend()/resume() restores system clock
[not found] <200702211929.17203.david-b@pacbell.net>
@ 2007-02-22 3:50 ` David Brownell
2007-02-22 22:58 ` Guennadi Liakhovetski
0 siblings, 1 reply; 131+ messages in thread
From: David Brownell @ 2007-02-22 3:50 UTC (permalink / raw)
To: rtc-linux
Cc: Linux Kernel list, linux-pm, Greg KH, Andrew Morton,
Alessandro Zummo, john stultz
RTC class suspend/resume support, re-initializing the system clock on resume
from the clock used to initialize it at boot time.
- Inlines the same code used by ARM, which saves and restores the
delta between a selected RTC and the current system wall-clock time.
- Removes calls to that ARM code from the AT91, OMAP, and S3C RTCs.
This goes on top of the patch series removing "struct class_device" usage
from the RTC framework. That makes class suspend()/resume() work.
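The delta bookkeeping this patch inlines can be reduced to a standalone
sketch in plain seconds (the actual code in the patch uses struct timespec
and additionally corrects for the RTC's 1-second sampling error):

```c
#include <time.h>

/* On suspend: remember how far the wall clock is ahead of the RTC. */
static time_t sketch_delta;

static void sketch_suspend(time_t wall_now, time_t rtc_now)
{
	sketch_delta = wall_now - rtc_now;
}

/* On resume: the battery-backed RTC kept ticking through the sleep,
 * so the new wall clock is simply RTC time plus the saved delta. */
static time_t sketch_resume(time_t rtc_now)
{
	return rtc_now + sketch_delta;
}
```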
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
---
drivers/rtc/Kconfig | 24 +++++++++----
drivers/rtc/class.c | 74 +++++++++++++++++++++++++++++++++++++++++++
drivers/rtc/rtc-at91rm9200.c | 30 -----------------
drivers/rtc/rtc-omap.c | 17 ---------
drivers/rtc/rtc-s3c.c | 22 ------------
5 files changed, 91 insertions(+), 76 deletions(-)
Index: at91/drivers/rtc/Kconfig
===================================================================
--- at91.orig/drivers/rtc/Kconfig 2007-02-21 18:47:38.000000000 -0800
+++ at91/drivers/rtc/Kconfig 2007-02-21 18:47:41.000000000 -0800
@@ -21,21 +21,31 @@ config RTC_CLASS
will be called rtc-class.
config RTC_HCTOSYS
- bool "Set system time from RTC on startup"
+ bool "Set system time from RTC on startup and resume"
depends on RTC_CLASS = y
default y
help
- If you say yes here, the system time will be set using
- the value read from the specified RTC device. This is useful
- in order to avoid unnecessary fsck runs.
+ If you say yes here, the system time (wall clock) will be set using
+ the value read from a specified RTC device. This is useful to avoid
+ unnecessary fsck runs at boot time, and to network better.
config RTC_HCTOSYS_DEVICE
- string "The RTC to read the time from"
+ string "RTC used to set the system time"
depends on RTC_HCTOSYS = y
default "rtc0"
help
- The RTC device that will be used as the source for
- the system time, usually rtc0.
+ The RTC device that will be used to (re)initialize the system
+ clock, usually rtc0. Initialization is done when the system
+ starts up, and when it resumes from a low power state.
+
+ This clock should be battery-backed, so that it reads the correct
+ time when the system boots from a power-off state. Otherwise, your
+ system will need an external clock source (like an NTP server).
+
+ If the clock you specify here is not battery backed, it may still
+ be useful to reinitialize system time when resuming from system
+ sleep states. Do not specify an RTC here unless it stays powered
+ during all this system's supported sleep states.
config RTC_DEBUG
bool "RTC debug support"
Index: at91/drivers/rtc/class.c
===================================================================
--- at91.orig/drivers/rtc/class.c 2007-02-21 18:47:39.000000000 -0800
+++ at91/drivers/rtc/class.c 2007-02-21 18:47:41.000000000 -0800
@@ -32,6 +32,78 @@ static void rtc_device_release(struct de
kfree(rtc);
}
+#if defined(CONFIG_PM) && defined(CONFIG_RTC_HCTOSYS_DEVICE)
+
+/*
+ * On suspend(), measure the delta between one RTC and the
+ * system's wall clock; restore it on resume().
+ */
+
+static struct timespec delta;
+static time_t oldtime;
+
+static int rtc_suspend(struct device *dev, pm_message_t mesg)
+{
+ struct rtc_device *rtc = to_rtc_device(dev);
+ struct rtc_time tm;
+
+ if (strncmp(rtc->dev.bus_id,
+ CONFIG_RTC_HCTOSYS_DEVICE,
+ BUS_ID_SIZE) != 0)
+ return 0;
+
+ rtc_read_time(rtc, &tm);
+ rtc_tm_to_time(&tm, &oldtime);
+
+ /* RTC precision is 1 second; adjust delta for avg 1/2 sec err */
+ set_normalized_timespec(&delta,
+ xtime.tv_sec - oldtime,
+ xtime.tv_nsec - (NSEC_PER_SEC >> 1));
+
+ return 0;
+}
+
+static int rtc_resume(struct device *dev)
+{
+ struct rtc_device *rtc = to_rtc_device(dev);
+ struct rtc_time tm;
+ time_t newtime;
+ struct timespec time;
+
+ if (strncmp(rtc->dev.bus_id,
+ CONFIG_RTC_HCTOSYS_DEVICE,
+ BUS_ID_SIZE) != 0)
+ return 0;
+
+ rtc_read_time(rtc, &tm);
+ if (rtc_valid_tm(&tm) != 0) {
+ pr_debug("%s: bogus resume time\n", rtc->dev.bus_id);
+ return 0;
+ }
+ rtc_tm_to_time(&tm, &newtime);
+ if (newtime <= oldtime) {
+ if (newtime < oldtime)
+ pr_debug("%s: time travel!\n", rtc->dev.bus_id);
+ return 0;
+ }
+
+ /* restore wall clock using delta against this RTC;
+ * adjust again for avg 1/2 second RTC sampling error
+ */
+ set_normalized_timespec(&time,
+ newtime + delta.tv_sec,
+ (NSEC_PER_SEC >> 1) + delta.tv_nsec);
+ do_settimeofday(&time);
+
+ return 0;
+}
+
+#else
+#define rtc_suspend NULL
+#define rtc_resume NULL
+#endif
+
+
/**
* rtc_device_register - register w/ RTC class
* @dev: the device to register
@@ -138,6 +210,8 @@ static int __init rtc_init(void)
printk(KERN_ERR "%s: couldn't create class\n", __FILE__);
return PTR_ERR(rtc_class);
}
+ rtc_class->suspend = rtc_suspend;
+ rtc_class->resume = rtc_resume;
rtc_dev_init();
rtc_sysfs_init(rtc_class);
return 0;
Index: at91/drivers/rtc/rtc-at91rm9200.c
===================================================================
--- at91.orig/drivers/rtc/rtc-at91rm9200.c 2007-02-21 18:47:26.000000000 -0800
+++ at91/drivers/rtc/rtc-at91rm9200.c 2007-02-21 18:47:41.000000000 -0800
@@ -348,21 +348,10 @@ static int __exit at91_rtc_remove(struct
/* AT91RM9200 RTC Power management control */
-static struct timespec at91_rtc_delta;
static u32 at91_rtc_imr;
static int at91_rtc_suspend(struct platform_device *pdev, pm_message_t state)
{
- struct rtc_time tm;
- struct timespec time;
-
- time.tv_nsec = 0;
-
- /* calculate time delta for suspend */
- at91_rtc_readtime(&pdev->dev, &tm);
- rtc_tm_to_time(&tm, &time.tv_sec);
- save_time_delta(&at91_rtc_delta, &time);
-
/* this IRQ is shared with DBGU and other hardware which isn't
* necessarily doing PM like we are...
*/
@@ -374,36 +363,17 @@ static int at91_rtc_suspend(struct platf
else
at91_sys_write(AT91_RTC_IDR, at91_rtc_imr);
}
-
- pr_debug("%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __FUNCTION__,
- 1900 + tm.tm_year, tm.tm_mon, tm.tm_mday,
- tm.tm_hour, tm.tm_min, tm.tm_sec);
-
return 0;
}
static int at91_rtc_resume(struct platform_device *pdev)
{
- struct rtc_time tm;
- struct timespec time;
-
- time.tv_nsec = 0;
-
- at91_rtc_readtime(&pdev->dev, &tm);
- rtc_tm_to_time(&tm, &time.tv_sec);
- restore_time_delta(&at91_rtc_delta, &time);
-
if (at91_rtc_imr) {
if (device_may_wakeup(&pdev->dev))
disable_irq_wake(AT91_ID_SYS);
else
at91_sys_write(AT91_RTC_IER, at91_rtc_imr);
}
-
- pr_debug("%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __FUNCTION__,
- 1900 + tm.tm_year, tm.tm_mon, tm.tm_mday,
- tm.tm_hour, tm.tm_min, tm.tm_sec);
-
return 0;
}
#else
Index: at91/drivers/rtc/rtc-omap.c
===================================================================
--- at91.orig/drivers/rtc/rtc-omap.c 2007-02-21 18:47:39.000000000 -0800
+++ at91/drivers/rtc/rtc-omap.c 2007-02-21 18:47:41.000000000 -0800
@@ -488,19 +488,10 @@ static int __devexit omap_rtc_remove(str
#ifdef CONFIG_PM
-static struct timespec rtc_delta;
static u8 irqstat;
static int omap_rtc_suspend(struct platform_device *pdev, pm_message_t state)
{
- struct rtc_time rtc_tm;
- struct timespec time;
-
- time.tv_nsec = 0;
- omap_rtc_read_time(NULL, &rtc_tm);
- rtc_tm_to_time(&rtc_tm, &time.tv_sec);
-
- save_time_delta(&rtc_delta, &time);
irqstat = rtc_read(OMAP_RTC_INTERRUPTS_REG);
/* FIXME the RTC alarm is not currently acting as a wakeup event
@@ -517,14 +508,6 @@ static int omap_rtc_suspend(struct platf
static int omap_rtc_resume(struct platform_device *pdev)
{
- struct rtc_time rtc_tm;
- struct timespec time;
-
- time.tv_nsec = 0;
- omap_rtc_read_time(NULL, &rtc_tm);
- rtc_tm_to_time(&rtc_tm, &time.tv_sec);
-
- restore_time_delta(&rtc_delta, &time);
if (device_may_wakeup(&pdev->dev))
disable_irq_wake(omap_rtc_alarm);
else
Index: at91/drivers/rtc/rtc-s3c.c
===================================================================
--- at91.orig/drivers/rtc/rtc-s3c.c 2007-02-21 18:47:26.000000000 -0800
+++ at91/drivers/rtc/rtc-s3c.c 2007-02-21 18:47:41.000000000 -0800
@@ -548,37 +548,15 @@ static int ticnt_save;
static int s3c_rtc_suspend(struct platform_device *pdev, pm_message_t state)
{
- struct rtc_time tm;
- struct timespec time;
-
- time.tv_nsec = 0;
-
/* save TICNT for anyone using periodic interrupts */
-
ticnt_save = readb(s3c_rtc_base + S3C2410_TICNT);
-
- /* calculate time delta for suspend */
-
- s3c_rtc_gettime(&pdev->dev, &tm);
- rtc_tm_to_time(&tm, &time.tv_sec);
- save_time_delta(&s3c_rtc_delta, &time);
s3c_rtc_enable(pdev, 0);
-
return 0;
}
static int s3c_rtc_resume(struct platform_device *pdev)
{
- struct rtc_time tm;
- struct timespec time;
-
- time.tv_nsec = 0;
-
s3c_rtc_enable(pdev, 1);
- s3c_rtc_gettime(&pdev->dev, &tm);
- rtc_tm_to_time(&tm, &time.tv_sec);
- restore_time_delta(&s3c_rtc_delta, &time);
-
writeb(ticnt_save, s3c_rtc_base + S3C2410_TICNT);
return 0;
}
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2007-02-22 3:50 ` [patch 6/6] rtc suspend()/resume() restores system clock David Brownell
2007-02-22 22:58 ` Guennadi Liakhovetski
@ 2007-02-22 22:58 ` Guennadi Liakhovetski
0 siblings, 0 replies; 131+ messages in thread
From: Guennadi Liakhovetski @ 2007-02-22 22:58 UTC (permalink / raw)
To: Johannes Berg, David Brownell
Cc: linuxppc-dev, rtc-linux, linux-pm, Torrance, Alessandro Zummo,
john stultz, Andrew Morton, Linux Kernel list
of the following 2 patches:
On Mon, 5 Feb 2007, Johannes Berg wrote:
> This patch removes the time suspend/restore code that was done through
> a PMU notifier in arch/platforms/powermac/time.c.
>
> Instead, we introduce arch/powerpc/sysdev/timer.c which creates a sys
> device and handles time of day suspend/resume through that.
>
> Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
> Cc: Andrew Morton <akpm@osdl.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[patch trimmed]
On Wed, 21 Feb 2007, David Brownell wrote:
> RTC class suspend/resume support, re-initializing the system clock on resume
> from the clock used to initialize it at boot time.
>
> - Inlining the same code used by ARM, which saves and restores the
> delta between a selected RTC and the current system wall-clock time.
>
> - Removes calls to that ARM code from AT91, OMAP, and S3C RTCs.
>
> This goes on top of the patch series removing "struct class_device" usage
> from the RTC framework. That makes class suspend()/resume() work.
>
> Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
>
> ---
> drivers/rtc/Kconfig | 24 +++++++++----
> drivers/rtc/class.c | 74 +++++++++++++++++++++++++++++++++++++++++++
> drivers/rtc/rtc-at91rm9200.c | 30 -----------------
> drivers/rtc/rtc-omap.c | 17 ---------
> drivers/rtc/rtc-s3c.c | 22 ------------
> 5 files changed, 91 insertions(+), 76 deletions(-)
[patch trimmed]
I think, we only want 1, right? And the latter seems to be more generic /
platform independent? And as a side-effect, powermac would have to migrate
to generic rtc:-)
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2007-02-22 22:58 ` Guennadi Liakhovetski
(?)
@ 2007-02-23 1:15 ` David Brownell
-1 siblings, 0 replies; 131+ messages in thread
From: David Brownell @ 2007-02-23 1:15 UTC (permalink / raw)
To: Guennadi Liakhovetski
Cc: Alessandro Zummo, Andrew Morton, Johannes Berg, john stultz,
Linux Kernel list, linux-pm, linuxppc-dev, rtc-linux, Torrance
On Thursday 22 February 2007 2:58 pm, Guennadi Liakhovetski wrote:
>
> I think, we only want 1, right? And the latter seems to be more generic /
> platform independent? And as a side-effect, powermac would have to migrate
> to generic rtc:-)
I'd certainly think that restoring the system clock should be, as much
as possible, in platform-agnostic code. Like the generic RTC framework.
And hmm, that powermac/time.c file replicates other RTC code...
Minor obstacle: removing the EXPERIMENTAL label from that code.
- Dave
^ permalink raw reply [flat|nested] 131+ messages in thread
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2007-02-22 22:58 ` Guennadi Liakhovetski
(?)
@ 2007-02-23 11:17 ` Johannes Berg
-1 siblings, 0 replies; 131+ messages in thread
From: Johannes Berg @ 2007-02-23 11:17 UTC (permalink / raw)
To: Guennadi Liakhovetski
Cc: David Brownell, linuxppc-dev, rtc-linux, linux-pm, Torrance,
Alessandro Zummo, john stultz, Andrew Morton, Linux Kernel list
[-- Attachment #1: Type: text/plain, Size: 378 bytes --]
On Thu, 2007-02-22 at 23:58 +0100, Guennadi Liakhovetski wrote:
> I think, we only want 1, right? And the latter seems to be more generic /
> platform independent? And as a side-effect, powermac would have to migrate
> to generic rtc:-)
Can we migrate all of powerpc to genrtc? But yes, I agree. Had enough to
do though already to get suspend working :)
johannes
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 190 bytes --]
^ permalink raw reply [flat|nested] 131+ messages in thread
* Long sleep with i_mutex in xfs_flush_device(), affects NFS service
@ 2006-09-26 18:51 Stephane Doyon
2006-09-27 11:33 ` Shailendra Tripathi
0 siblings, 1 reply; 131+ messages in thread
From: Stephane Doyon @ 2006-09-26 18:51 UTC (permalink / raw)
To: xfs, nfs
Hi,
I'm seeing an unpleasant behavior when an XFS file system becomes full,
particularly when accessed over NFS. Both XFS and the linux NFS client
appear to be contributing to the problem.
When the file system becomes nearly full, we eventually call down to
xfs_flush_device(), which sleeps for 0.5 seconds, waiting for xfssyncd to
do some work.
xfs_flush_space() does
xfs_iunlock(ip, XFS_ILOCK_EXCL);
before calling xfs_flush_device(), but i_mutex is still held, at least
when we're being called from under xfs_write(). It seems like a fairly
long time to hold a mutex. And I wonder whether it's really necessary to
keep going through that again and again for every new request after we've
hit NOSPC.
In particular this can cause a pileup when several threads are writing
concurrently to the same file. Some specialized apps might do that, and
nfsd threads do it all the time.
To reproduce locally, on a full file system:
#!/bin/sh
for i in `seq 30`; do
dd if=/dev/zero of=f bs=1 count=1 &
done
wait
Time that: it takes almost exactly 15 s (30 writers, each serialized
behind the 0.5 s sleep).
The Linux NFS client typically sends batches of 16 requests, so if the
client is writing a single file, some NFS requests are delayed
by up to 8 seconds, which is kind of long for NFS.
What's worse, when my Linux NFS client writes out a file's pages, it does
not react immediately on receiving an ENOSPC error. It will remember and
report the error later on close(), but it still goes ahead and issues write
requests for each page of the file.
i_mutex on the server, the NFS client still waits 0.5s for each 32K
(typically) request. So on an NFS client on a gigabit network, on an
already full filesystem, if I open and write a 10M file and close() it, it
takes 2m40.083s for it to issue all the requests, get an ENOSPC for each,
and finally have my close() call return ENOSPC. That can stretch to
several hours for gigabyte-sized files, which is how I noticed the
problem.
I'm not too familiar with the NFS client code, but would it not be
possible for it to give up when it encounters NOSPC? Or is there some
reason why this wouldn't be desirable?
The rough workaround I have come up with for the problem is to have
xfs_flush_space() skip calling xfs_flush_device() if we are within 2 seconds
of having returned ENOSPC. I have verified that this workaround is
effective, but I imagine there might be a cleaner solution.
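The rate-limiting idea described above can be sketched in a few lines of
userspace C. This is only an illustration of the logic, not the actual XFS
change: the names (should_flush_device, note_enospc) are hypothetical, and
the real patch would live inside xfs_flush_space() and presumably track
time in jiffies rather than with time():

```c
#include <stdbool.h>
#include <time.h>

/* Sketch of the workaround: remember when we last returned ENOSPC and
 * skip the expensive device flush if another allocation fails within
 * 2 seconds of that.  All names here are hypothetical. */

static time_t last_enospc_time;          /* 0 = no recent ENOSPC */

static bool should_flush_device(time_t now)
{
	if (last_enospc_time && now - last_enospc_time < 2)
		return false;            /* flushed recently and still full */
	return true;
}

static void note_enospc(time_t now)
{
	last_enospc_time = now;          /* called whenever we return ENOSPC */
}
```

The discrepancy the author mentions corresponds to the `false` branch:
space freed inside that 2-second window is not seen until the window
expires.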
Thanks
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Long sleep with i_mutex in xfs_flush_device(), affects NFS service
2006-09-26 18:51 Long sleep with i_mutex in xfs_flush_device(), affects NFS service Stephane Doyon
@ 2006-09-27 11:33 ` Shailendra Tripathi
2006-10-02 14:45 ` Stephane Doyon
0 siblings, 1 reply; 131+ messages in thread
From: Shailendra Tripathi @ 2006-09-27 11:33 UTC (permalink / raw)
To: Stephane Doyon; +Cc: xfs, nfs
Hi Stephane,
> When the file system becomes nearly full, we eventually call down to
> xfs_flush_device(), which sleeps for 0.5seconds, waiting for xfssyncd to
> do some work.
> xfs_flush_space() does
> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> before calling xfs_flush_device(), but i_mutex is still held, at least
> when we're being called from under xfs_write().
1. I agree that the delay of 500 ms is not a deterministic wait.
2. xfs_flush_device is a big operation. It has to flush all the dirty
pages possibly in the cache on the device. Depending upon the device, it
might take a significant amount of time. In view of that, 500 ms is not
that unreasonable. Also, you would probably never want more than one
request to be queued for a device flush.
3. The hope is that after one big flush operation, it would be able to
free up resources which are in transient state (over-reservation of
blocks, delalloc, pending removes, ...). The whole operation is intended
to make sure that ENOSPC is not returned unless really required.
4. This wait could be made deterministic by waiting for the syncer
thread to complete when device flush is triggered.
> It seems like a fairly long time to hold a mutex. And I wonder whether it's really
It might not be that good even if it doesn't. This can return a premature
ENOSPC, or it can queue many xfs_flush_device requests (which can make
your system dead slow anyway).
> necessary to keep going through that again and again for every new request after
> we've hit NOSPC.
>
> In particular this can cause a pileup when several threads are writing
> concurrently to the same file. Some specialized apps might do that, and
> nfsd threads do it all the time.
>
> To reproduce locally, on a full file system:
> #!/bin/sh
> for i in `seq 30`; do
> dd if=/dev/zero of=f bs=1 count=1 &
> done
> wait
> time that, it takes nearly exactly 15s.
>
> The linux NFS client typically sends bunches of 16 requests, and so if
> the client is writing a single file, some NFS requests are therefore
> delayed by up to 8seconds, which is kind of long for NFS.
>
> What's worse, when my linux NFS client writes out a file's pages, it
> does not react immediately on receiving a NOSPC error. It will remember
> and report the error later on close(), but it still tries and issues
> write requests for each page of the file. So even if there isn't a
> pileup on the i_mutex on the server, the NFS client still waits 0.5s for
> each 32K (typically) request. So on an NFS client on a gigabit network,
> on an already full filesystem, if I open and write a 10M file and
> close() it, it takes 2m40.083s for it to issue all the requests, get an
> NOSPC for each, and finally have my close() call return ENOSPC. That can
> stretch to several hours for gigabyte-sized files, which is how I
> noticed the problem.
>
> I'm not too familiar with the NFS client code, but would it not be
> possible for it to give up when it encounters NOSPC? Or is there some
> reason why this wouldn't be desirable?
>
> The rough workaround I have come up with for the problem is to have
> xfs_flush_space() skip calling xfs_flush_device() if we are within 2secs
> of having returned ENOSPC. I have verified that this workaround is
> effective, but I imagine there might be a cleaner solution.
The fix would not be a good idea for standalone use of XFS.
if (nimaps == 0) {
	if (xfs_flush_space(ip, &fsynced, &ioflag))
		return XFS_ERROR(ENOSPC);
	error = 0;
	goto retry;
}

xfs_flush_space:
	case 2:
		xfs_iunlock(ip, XFS_ILOCK_EXCL);
		xfs_flush_device(ip);
		xfs_ilock(ip, XFS_ILOCK_EXCL);
		*fsynced = 3;
		return 0;
	}
	return 1;
Let's say that you don't enqueue it for another 2 seconds. Then, on the next
retry it would return 1 and, hence, the outer if condition would return
ENOSPC. Note that for standalone XFS the application or client
mostly doesn't retry and, hence, it might get a premature ENOSPC.
You didn't notice this because, as you said, nfs client will retry in
case of ENOSPC.
Assuming that you don't return *fsynced = 3 (but *fsynced = 2 instead), the
code path will loop (because of the retry) and the CPU would stay busy
doing no useful work.
You might experiment by adding a deterministic wait. When you enqueue a
device flush, set some flag; all others who come in between just get
enqueued as waiters. Once the device flush is over, wake them all up. If the
flush could free enough resources, the threads will proceed ahead and
return. Otherwise, another flush would be enqueued to flush whatever might
have accumulated since the last flush.
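A minimal, single-threaded sketch of that coalescing scheme (all names
hypothetical; a real kernel version would put the waiters to sleep on a
wait queue and wake them from the syncer thread):

```c
#include <stdbool.h>

/* Sketch of flush coalescing: the first caller to find the flag clear
 * issues the flush; later callers just register as waiters and would
 * sleep until flush_done() wakes them.  Names are illustrative only. */

static bool flush_in_progress;
static int  waiters;
static int  flushes_issued;

static void request_flush(void)
{
	if (!flush_in_progress) {
		flush_in_progress = true;
		flushes_issued++;        /* this caller issues the flush */
	} else {
		waiters++;               /* coalesce onto the flush in flight */
	}
}

static void flush_done(void)
{
	flush_in_progress = false;       /* wake all waiters here */
	waiters = 0;
}
```

Any number of concurrent callers results in a single flush; a caller
arriving after flush_done() starts a fresh one, which matches the
"another flush would be enqueued" behaviour described above.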
> Thanks
>
>
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Long sleep with i_mutex in xfs_flush_device(), affects NFS service
2006-09-27 11:33 ` Shailendra Tripathi
@ 2006-10-02 14:45 ` Stephane Doyon
2006-10-02 22:30 ` David Chinner
0 siblings, 1 reply; 131+ messages in thread
From: Stephane Doyon @ 2006-10-02 14:45 UTC (permalink / raw)
To: Shailendra Tripathi; +Cc: xfs
On Wed, 27 Sep 2006, Shailendra Tripathi wrote:
> Hi Stephane,
>> When the file system becomes nearly full, we eventually call down to
>> xfs_flush_device(), which sleeps for 0.5 seconds, waiting for xfssyncd to
>> do some work. xfs_flush_space() does
>> xfs_iunlock(ip, XFS_ILOCK_EXCL);
>> before calling xfs_flush_device(), but i_mutex is still held, at least
>> when we're being called from under xfs_write().
>
> 1. I agree that the delay of 500 ms is not a deterministic wait.
>
> 2. xfs_flush_device is a big operation. It has to flush all the dirty pages
> possibly in the cache on the device. Depending upon the device, it might take
> significant amount of time. Keeping view of it, 500 ms in that unreasonable.
> Also, perhaps you would never want more than one request to be queued for
> device flush.
> 3. The hope is that after one big flush operation, it would be able to free
> up resources which are in transient state (over-reservation of blocks,
> delalloc, pending removes, ...). The whole operation is intended to make sure
> that ENOSPC is not returned unless really required.
Yes I had surmised as much. That last part is still a little vague to
me... But my two points were:
-It's a long time to hold a mutex. The code bothers to drop the
xfs_ilock, so I'm wondering whether the i_mutex had been forgotten?
-Once we've actually hit ENOSPC, do we need to try again? Isn't it
possible to tell when resources have actually been freed?
> 4. This wait could be made deterministic by waiting for the syncer thread to
> complete when device flush is triggered.
I remember that some time ago, there wasn't any xfs_syncd, and the
flushing operation was performed by the task wanting the free space. And
it would cause deadlocks. So I presume we would have to be careful if we
wanted to wait on sync.
>> The rough workaround I have come up with for the problem is to have
>> xfs_flush_space() skip calling xfs_flush_device() if we are within 2secs
>> of having returned ENOSPC. I have verified that this workaround is
>> effective, but I imagine there might be a cleaner solution.
>
> The fix would not be a good idea for standalone use of XFS.
>
> if (nimaps == 0) {
> if (xfs_flush_space(ip, &fsynced, &ioflag))
> return XFS_ERROR(ENOSPC);
>
> error = 0;
> goto retry;
> }
>
> xfs_flush_space:
> case 2:
> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> xfs_flush_device(ip);
> xfs_ilock(ip, XFS_ILOCK_EXCL);
> *fsynced = 3;
> return 0;
> }
> return 1;
>
> lets say that you don't enqueue it for another 2 secs. Then, in next retry it
> would return 1 and, hence, outer if condition would return ENOSPC. Please
> note that for standalone XFS, the application or client mostly don't retry
> and, hence, it might return premature ENOSPC.
>
> You didn't notice this because, as you said, nfs client will retry in case of
> ENOSPC.
I'm not entirely sure I follow your explanation. The *fsynced variable is
local to the xfs_iomap_write_delay() caller, so each call will go through
the three steps in xfs_flush_space(). What my workaround does is, if we've
done the xfs_flush_device() thing and still hit ENOSPC within the last two
seconds, and we've just tried again the first two xfs_flush_space() steps,
then we skip the third step and return ENOSPC. So yes the file system
might not be exactly entirely full anymore, which is why I say it's a
rough workaround, but it seems to me the discrepancy shouldn't be very big
either. Whatever free space might have been missed would have had to be
freed after the last ENOSPC return, and must be such that only another
xfs_flush_device() call will make it available.
It seems to me ENOSPC has never been something very exact anyway: df
(statfs) often still shows a few remaining free blocks even on a full file
system. Apps can't really calculate how many blocks will be needed for
inodes, btrees and directories, so the number of remaining data blocks is
an approximation. I am not entirely sure that what xfs_flush_device_work()
does is quite deterministic, and as you said the wait period is arbitrary.
And I don't particularly care to get every single last byte out of my file
system, as long as there are no flagrant inconsistencies such as rm -fr
not freeing up some space.
> Assuming that you don't return *fsynced = 3 (instead *fsynced = 2), the code
> path will loop (because of retry) and CPU itself would become busy for no
> good job.
Indeed.
> You might experiment by adding deterministic wait. When you enqueue, set
> some flag. All others who come in between just get enqueued. Once, device
> flush is over wake up all. If flush could free enough resources, threads will
> proceed ahead and return. Otherwise, another flush would be enqueued to flush
> what might have come since last flush.
But how do you know whether you need to flush again, or whether your file
system is really full this time? And there's still the issue with the
i_mutex.
Perhaps there's a way to evaluate how much resources are "in transient
state" as you put it. Otherwise, we could set a flag when ENOSPC is
returned, and have that flag cleared at appropriate places in the code
where blocks are actually freed. I keep running into various deadlocks
related to full file systems, so I'm wary of clever solutions :-).
[Dropped nfs@lists.sourceforge.net from Cc, as this discussion is quite
specific to xfs.]
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Long sleep with i_mutex in xfs_flush_device(), affects NFS service
2006-10-02 14:45 ` Stephane Doyon
@ 2006-10-02 22:30 ` David Chinner
2006-10-03 13:39 ` Stephane Doyon
0 siblings, 1 reply; 131+ messages in thread
From: David Chinner @ 2006-10-02 22:30 UTC (permalink / raw)
To: Stephane Doyon; +Cc: Shailendra Tripathi, xfs
On Mon, Oct 02, 2006 at 10:45:12AM -0400, Stephane Doyon wrote:
> On Wed, 27 Sep 2006, Shailendra Tripathi wrote:
>
> >Hi Stephane,
> >> When the file system becomes nearly full, we eventually call down to
> >> xfs_flush_device(), which sleeps for 0.5 seconds, waiting for xfssyncd to
> >> do some work. xfs_flush_space() does
> >> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> >> before calling xfs_flush_device(), but i_mutex is still held, at least
> >> when we're being called from under xfs_write().
> >
> >1. I agree that the delay of 500 ms is not a deterministic wait.
AFAICT, it was never intended to be.
It's not deterministic, and the wait is really only there to ensure
that the synchronous log force catches all the operations that may
have recently occurred so they can be unpinned and flushed.
For example, an extent that has been truncated and freed cannot be
reused until the transaction that it was freed in has actually been
committed to disk.....
> >2. xfs_flush_device is a big operation. It has to flush all the dirty
> >pages possibly in the cache on the device. Depending upon the device, it
> >might take significant amount of time. Keeping view of it, 500 ms in that
> >unreasonable. Also, perhaps you would never want more than one request to
> >be queued for device flush.
> >3. The hope is that after one big flush operation, it would be able to
> >free up resources which are in transient state (over-reservation of
> >blocks, delalloc, pending removes, ...). The whole operation is intended
> >to make sure that ENOSPC is not returned unless really required.
>
> Yes I had surmised as much. That last part is still a little vague to
> me... But my two points were:
>
> -It's a long time to hold a mutex. The code bothers to drop the
> xfs_ilock, so I'm wondering whether the i_mutex had been forgotten?
This deep in the XFS allocation functions, we cannot tell if we hold
the i_mutex or not, and it plays no part in determining if we have
space or not. Hence we don't touch it here.
> -Once we've actually hit ENOSPC, do we need to try again? Isn't it
> possible to tell when resources have actually been freed?
Given that the only way to determine if space was made available is
to query every AG in the exact same way an allocation does, it makes
sense to try the allocation again to determine if space was made
available....
> >4. This wait could be made deterministic by waiting for the syncer thread
> >to complete when device flush is triggered.
>
> I remember that some time ago, there wasn't any xfs_syncd, and the
> flushing operation was performed by the task wanting the free space. And
> it would cause deadlocks. So I presume we would have to be careful if we
> wanted to wait on sync.
*nod*
Last thing we want is more deadlocks. This code is already
convoluted enough without adding yet more special cases to it....
> >> The rough workaround I have come up with for the problem is to have
> >> xfs_flush_space() skip calling xfs_flush_device() if we are within 2secs
> >> of having returned ENOSPC. I have verified that this workaround is
> >> effective, but I imagine there might be a cleaner solution.
> >
> >The fix would not be a good idea for standalone use of XFS.
I doubt it's a good idea for an NFS server, either.
Remember that XFS, like most filesystems, trades off speed for
correctness as we approach ENOSPC. Many parts of XFS slow down as we
approach ENOSPC, and this is just one example of where we need to be
correct, not fast.
> It seems to me ENOSPC has never been something very exact anyway:
> df (statfs) often still shows a few remaining free blocks even on
> a full file system. Apps can't really calculate how many blocks
> will be needed for inodes, btrees and directories, so the number
> of remaining data blocks is an approximation.
It's not supposed to be an approximation - the number reported by df
should be taking all this into account because it's coming directly
from how much space XFS thinks it has available.
> >You might experiment by adding deterministic wait. When you enqueue, set
> >some flag. All others who come in between just get enqueued. Once, device
> >flush is over wake up all. If flush could free enough resources, threads
> >will proceed ahead and return. Otherwise, another flush would be enqueued
> >to flush what might have come since last flush.
>
> But how do you know whether you need to flush again, or whether your file
> system is really full this time? And there's still the issue with the
> i_mutex.
>
> Perhaps there's a way to evaluate how much resources are "in transient
> state" as you put it.
I doubt there's any way of doing this without introducing non-enospc
performance regressions and extra memory usage.
> Otherwise, we could set a flag when ENOSPC is
> returned, and have that flag cleared at appropriate places in the code
> where blocks are actually freed. I keep running into various deadlocks
> related to full file systems, so I'm wary of clever solutions :-).
IMO, this is a non-problem. You're talking about optimising a
relatively rare corner case where correctness is more important than
speed and your test case is highly artificial. AFAIC, if you are
running at ENOSPC then you get what performance is appropriate for
correctness and if you are continually running at ENOSPC, then buy
some more disks.....
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-02 22:30 ` David Chinner
@ 2006-10-03 13:39 ` Stephane Doyon
0 siblings, 0 replies; 131+ messages in thread
From: Stephane Doyon @ 2006-10-03 13:39 UTC (permalink / raw)
To: Trond Myklebust, David Chinner; +Cc: xfs, nfs, Shailendra Tripathi
Sorry for insisting, but it seems to me there's still a problem in need of
fixing: when writing a 5GB file over NFS to an XFS file system and hitting
ENOSPC, it takes on the order of 22 hours before my application gets an
error, whereas it would normally take about 2 minutes if the file system
did not become full.
Perhaps I was being a bit too "constructive" and drowned my point in
explanations and proposed workarounds... You are telling me that neither
NFS nor XFS is doing anything wrong, and I can understand your points of
view, but surely that behavior isn't considered acceptable?
On Tue, 26 Sep 2006, Trond Myklebust wrote:
> On Tue, 2006-09-26 at 16:05 -0400, Stephane Doyon wrote:
>> I suppose it's not technically wrong to try to flush all the pages of the
>> file, but if the server file system is full then it will be at its worse.
>> Also if you happened to be on a slower link and have a big cache to flush,
>> you're waiting around for very little gain.
>
> That all assumes that nobody fixes the problem on the server. If
> somebody notices, and actually removes an unused file, then you may be
> happy that the kernel preserved the last 80% of the apache log file that
> was being written out.
>
> ENOSPC is a transient error: that is why the current behaviour exists.
On Tue, 3 Oct 2006, David Chinner wrote:
> This deep in the XFS allocation functions, we cannot tell if we hold
> the i_mutex or not, and it plays no part in determining if we have
> space or not. Hence we don't touch it here.
> I doubt it's a good idea for an NFS server, either.
[...]
> Remember that XFS, like most filesystems, trades off speed for
> correctness as we approach ENOSPC. Many parts of XFS slow down as we
> approach ENOSPC, and this is just one example of where we need to be
> correct, not fast.
[...]
> IMO, this is a non-problem. You're talking about optimising a
> relatively rare corner case where correctness is more important than
> speed and your test case is highly artificial. AFAIC, if you are
> running at ENOSPC then you get what performance is appropriate for
> correctness and if you are continually running at ENOSPC, then buy
> some more disks.....
My recipe to reproduce the problem locally is admittedly somewhat
artificial, but the problematic usage definitely isn't: simply an app on
an NFS client that happens to fill up a file system. There must be some
way to handle this better.
Thanks
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-03 13:39 ` Stephane Doyon
@ 2006-10-03 16:40 ` Trond Myklebust
0 siblings, 0 replies; 131+ messages in thread
From: Trond Myklebust @ 2006-10-03 16:40 UTC (permalink / raw)
To: Stephane Doyon; +Cc: David Chinner, xfs, nfs, Shailendra Tripathi
On Tue, 2006-10-03 at 09:39 -0400, Stephane Doyon wrote:
> Sorry for insisting, but it seems to me there's still a problem in need of
> fixing: when writing a 5GB file over NFS to an XFS file system and hitting
> ENOSPC, it takes on the order of 22 hours before my application gets an
> error, whereas it would normally take about 2 minutes if the file system
> did not become full.
>
> Perhaps I was being a bit too "constructive" and drowned my point in
> explanations and proposed workarounds... You are telling me that neither
> NFS nor XFS is doing anything wrong, and I can understand your points of
> view, but surely that behavior isn't considered acceptable?
Sure it is. You are allowing the kernel to cache 5GB, and that means you
only get the error message when close() completes.
If you want faster error reporting, there are modes like O_SYNC,
O_DIRECT, that will attempt to flush the data more quickly. In addition,
you can force flushing using fsync(). Finally, you can tweak the VM into
flushing more often using /proc/sys/vm.
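The first two of these suggestions map directly onto standard POSIX calls. A minimal sketch of the pattern, offered as an illustration rather than a drop-in fix (the file path and data are hypothetical):

```python
import os

def write_with_prompt_errors(path, data, chunk=32 * 1024):
    """Write data so a server-side ENOSPC surfaces at write()/fsync()
    time rather than only when the last reference is closed."""
    # O_SYNC: each write() completes only once the data has been pushed
    # through the cache, so a failed flush is reported on that write().
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o644)
    try:
        for off in range(0, len(data), chunk):
            os.write(fd, data[off:off + chunk])
        # Without O_SYNC, an explicit fsync() at chosen points has a
        # similar effect: it forces writeback and reports its errors now.
        os.fsync(fd)
    finally:
        os.close(fd)
```

The /proc/sys/vm knobs (e.g. vm.dirty_ratio), by contrast, only shift how soon writeback starts; they do not change which call the error is delivered on.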
Cheers,
Trond
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-03 16:40 ` Trond Myklebust
@ 2006-10-05 15:39 ` Stephane Doyon
0 siblings, 0 replies; 131+ messages in thread
From: Stephane Doyon @ 2006-10-05 15:39 UTC (permalink / raw)
To: Trond Myklebust; +Cc: David Chinner, xfs, nfs, Shailendra Tripathi
On Tue, 3 Oct 2006, Trond Myklebust wrote:
> On Tue, 2006-10-03 at 09:39 -0400, Stephane Doyon wrote:
>> Sorry for insisting, but it seems to me there's still a problem in need of
>> fixing: when writing a 5GB file over NFS to an XFS file system and hitting
>> ENOSPC, it takes on the order of 22 hours before my application gets an
>> error, whereas it would normally take about 2 minutes if the file system
>> did not become full.
>>
>> Perhaps I was being a bit too "constructive" and drowned my point in
>> explanations and proposed workarounds... You are telling me that neither
>> NFS nor XFS is doing anything wrong, and I can understand your points of
>> view, but surely that behavior isn't considered acceptable?
>
> Sure it is.
If you say so :-).
> You are allowing the kernel to cache 5GB, and that means you
> only get the error message when close() completes.
But it's not actually caching the entire 5GB at once... I guess you're
saying that doesn't matter...?
> If you want faster error reporting, there are modes like O_SYNC,
> O_DIRECT, that will attempt to flush the data more quickly. In addition,
> you can force flushing using fsync().
What if the program is a standard utility like cp?
> Finally, you can tweak the VM into
> flushing more often using /proc/sys/vm.
It doesn't look to me like a question of degrees about how early to flush.
Actually my client can't possibly be caching all of 5GB; it doesn't have
the RAM or swap for that. Tracing it more carefully, it appears dirty data
starts being flushed after a few hundred MBs. No error is returned on the
subsequent writes, only on the final close(). I see some of the write()
calls are delayed, presumably when the machine reaches the dirty
threshold. So I don't see how the vm settings can help in this case.
I hadn't realized that the issue isn't just with the final flush on
close(). It's actually been flushing all along, delaying some of the
subsequent write()s, getting ENOSPC errors but not reporting them until the
end.
I understand that since my application did not request any syncing, the
system cannot guarantee to report errors until cached data has been
flushed. But some data has indeed been flushed with an error; can't this
be reported earlier than on close?
Would it be incorrect for a subsequent write to return the error that
occurred while flushing data from previous writes? Then the app could
decide whether to continue and retry or not. But I guess I can see how
that might get convoluted.
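Given the current behaviour, the portable way for an application (including a standard utility, if it chose to) to bound how much data it can write past an undetected error is to fsync() periodically and check every return value, including close()'s. A sketch under those assumptions (the function name and interval are illustrative):

```python
import os

def copy_bounded_error_lag(src, dst, chunk=1 << 20, fsync_every=64):
    """Copy src to dst, forcing writeback every `fsync_every` chunks so a
    deferred writeback error (e.g. ENOSPC) is raised after at most
    chunk * fsync_every further bytes, instead of only at close()."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        n = 0
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            n += 1
            if n % fsync_every == 0:
                fout.flush()
                os.fsync(fout.fileno())  # surfaces async writeback errors here
        fout.flush()
        os.fsync(fout.fileno())
    # exiting the 'with' block calls close(); its errors propagate too
```

This trades throughput for bounded error latency, which is exactly the trade-off being debated in this thread.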
Thanks for your patience,
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-05 15:39 ` Stephane Doyon
@ 2006-10-06 0:33 ` David Chinner
0 siblings, 0 replies; 131+ messages in thread
From: David Chinner @ 2006-10-06 0:33 UTC (permalink / raw)
To: Stephane Doyon
Cc: Trond Myklebust, David Chinner, xfs, nfs, Shailendra Tripathi
On Thu, Oct 05, 2006 at 11:39:45AM -0400, Stephane Doyon wrote:
>
> I hadn't realized that the issue isn't just with the final flush on
> close(). It's actually been flushing all along, delaying some of the
> subsequent write()s, getting ENOSPC errors but not reporting them until the
> end.
Other NFS clients will report an ENOSPC on the next write() or close()
if the error is reported during async writeback. The clients that typically
do this throw away any unwritten data as well on the basis that the
application was returned an error ASAP and it is now Somebody Else's
Problem (i.e. the application needs to handle it from there).
> I understand that since my application did not request any syncing, the
> system cannot guarantee to report errors until cached data has been
> flushed. But some data has indeed been flushed with an error; can't this
> be reported earlier than on close?
It could, but...
> Would it be incorrect for a subsequent write to return the error that
> occurred while flushing data from previous writes? Then the app could
> decide whether to continue and retry or not. But I guess I can see how
> that might get convoluted.
.... there's many entertaining hoops to jump through to do this
reliably.
FWIW, these are simply two different approaches to handling ENOSPC
(and other server) errors. Mostly it comes down to how the people who
implemented the NFS client think it's best to handle the errors in
the scenarios that they most care about.
For example: when you have large amounts of cached data, expedient
error reporting and tossing unwritten data leads to much faster
error recovery than trying to write every piece of data (hence the
Irix use of this method).
OTOH, when you really want as much of the data to get to the server,
regardless of whether you lose some (e.g. log files) before
reporting an error then you try to write every bit of data before
telling the application.
There's no clear right or wrong approach here - both have their
advantages and disadvantages for different workloads. If it
weren't for the sub-optimal behaviour of XFS in this case, you
probably wouldn't have even cared about this....
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-06 0:33 ` David Chinner
@ 2006-10-06 13:25 ` Stephane Doyon
0 siblings, 0 replies; 131+ messages in thread
From: Stephane Doyon @ 2006-10-06 13:25 UTC (permalink / raw)
To: David Chinner; +Cc: Trond Myklebust, xfs, nfs, Shailendra Tripathi
On Fri, 6 Oct 2006, David Chinner wrote:
> On Thu, Oct 05, 2006 at 11:39:45AM -0400, Stephane Doyon wrote:
>>
>> I hadn't realized that the issue isn't just with the final flush on
>> close(). It's actually been flushing all along, delaying some of the
>> subsequent write()s, getting ENOSPC errors but not reporting them until the
>> end.
>
> Other NFS clients will report an ENOSPC on the next write() or close()
> if the error is reported during async writeback. The clients that typically
> do this throw away any unwritten data as well on the basis that the
> application was returned an error ASAP and it is now Somebody Else's
> Problem (i.e. the application needs to handle it from there).
Well the client wouldn't necessarily have to throw away cached data. It
could conceivably be made to return ENOSPC on some subsequent write. It
would need to throw away the data for that write, but not necessarily
destroy its cache. It could then clear the error condition and allow the
application to keep trying if it wants to...
>> Would it be incorrect for a subsequent write to return the error that
>> occurred while flushing data from previous writes? Then the app could
>> decide whether to continue and retry or not. But I guess I can see how
>> that might get convoluted.
>
> .... there's many entertaining hoops to jump through to do this
> reliably.
I imagine there would be...
> For example: when you have large amounts of cached data, expedient
> error reporting and tossing unwritten data leads to much faster
> error recovery than trying to write every piece of data (hence the
> Irix use of this method).
In my case, I didn't think I was caching that much data though, only a few
hundred MBs, and I wouldn't have minded so much if an error had been
returned after that much. The way it's implemented though, I can write an
unbounded amount of data through that cache and not be told of the problem
until I close or fsync. It may not be technically wrong, but given the
outrageous delay I saw in my particular situation, it felt pretty
suboptimal.
> There's no clear right or wrong approach here - both have their
> advantages and disadvantages for different workloads. If it
> weren't for the sub-optimal behaviour of XFS in this case, you
> probably wouldn't have even cared about this....
Indeed not! In fact, changing the client is not practical for me, what I
need is a fix for the XFS behavior. I just thought it was also worth
reporting what I perceived to be an issue with the NFS client.
Thanks
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-03 13:39 ` Stephane Doyon
@ 2006-10-05 8:30 ` David Chinner
0 siblings, 0 replies; 131+ messages in thread
From: David Chinner @ 2006-10-05 8:30 UTC (permalink / raw)
To: Stephane Doyon
Cc: Trond Myklebust, David Chinner, xfs, nfs, Shailendra Tripathi
On Tue, Oct 03, 2006 at 09:39:55AM -0400, Stephane Doyon wrote:
> Sorry for insisting, but it seems to me there's still a problem in need of
> fixing: when writing a 5GB file over NFS to an XFS file system and hitting
> ENOSPC, it takes on the order of 22 hours before my application gets an
> error, whereas it would normally take about 2 minutes if the file system
> did not become full.
>
> Perhaps I was being a bit too "constructive" and drowned my point in
> explanations and proposed workarounds... You are telling me that neither
> NFS nor XFS is doing anything wrong, and I can understand your points of
> view, but surely that behavior isn't considered acceptable?
I agree that this is a little extreme and I can't recall seeing
anything like this before, but I can see how that may happen if the
NFS client continues to try to write every dirty page after getting
an ENOSPC and each one of those writes has to wait for 500ms.
However, you did not mention what kernel version you are running.
One recent bug (introduced by a fix for deadlocks at ENOSPC) could
allow oversubscription of free space to occur in XFS, resulting in
the write being allowed to proceed (i.e. sufficient space for the
data blocks) but then failing the allocation because there weren't
enough blocks put aside for potential btree splits that occur during
allocation. If the linux client is using sync writes on retry, then
this would trigger a 500ms sleep on every write. That's the right
sort of ballpark for the slowness you were seeing - 5GB / 32k * 0.5s
= ~22 hours....
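That back-of-envelope estimate is arithmetically sound: 5 GB of data retried in 32 KB synchronous writes, each stalled for 500 ms, works out to roughly the observed delay:

```python
total = 5 * 2**30        # 5 GB of data to write back
wsize = 32 * 1024        # 32k per NFS write
stall = 0.5              # 500 ms sleep on each failing synchronous write

hours = (total / wsize) * stall / 3600
print(round(hours, 2))   # 22.76, i.e. ~22 hours
```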
This got fixed in 2.6.18-rc6 - can you retry with a 2.6.18 server
and see if your problem goes away?
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-05 8:30 ` David Chinner
@ 2006-10-05 16:33 ` Stephane Doyon
0 siblings, 0 replies; 131+ messages in thread
From: Stephane Doyon @ 2006-10-05 16:33 UTC (permalink / raw)
To: David Chinner; +Cc: Trond Myklebust, xfs, nfs, Shailendra Tripathi
On Thu, 5 Oct 2006, David Chinner wrote:
> On Tue, Oct 03, 2006 at 09:39:55AM -0400, Stephane Doyon wrote:
>> Sorry for insisting, but it seems to me there's still a problem in need of
>> fixing: when writing a 5GB file over NFS to an XFS file system and hitting
>> ENOSPC, it takes on the order of 22 hours before my application gets an
>> error, whereas it would normally take about 2 minutes if the file system
>> did not become full.
>>
>> Perhaps I was being a bit too "constructive" and drowned my point in
>> explanations and proposed workarounds... You are telling me that neither
>> NFS nor XFS is doing anything wrong, and I can understand your points of
>> view, but surely that behavior isn't considered acceptable?
>
> I agree that this is a little extreme and I can't recall seeing
> anything like this before, but I can see how that may happen if the
> NFS client continues to try to write every dirty page after getting
> an ENOSPC and each one of those writes has to wait for 500ms.
>
> However, you did not mention what kernel version you are running.
> One recent bug (introduced by a fix for deadlocks at ENOSPC) could
> allow oversubscription of free space to occur in XFS, resulting in
I do have that fix in my kernel. (I'm the one who pointed you to the patch
that introduced that particular problem.)
> the write being allowed to proceed (i.e. sufficient space for the
> data blocks) but then failing the allocation because there weren't
> enough blocks put aside for potential btree splits that occur during
> allocation. If the linux client is using sync writes on retry, then
The writes from nfsd shouldn't be sync. Technically it's not even
retrying, just plowing on...
> this would trigger a 500ms sleep on every write. That's the right
> sort of ballpark for the slowness you were seeing - 5GB / 32k * 0.5s
> = ~22 hours....
>
> This got fixed in 2.6.18-rc6 -
You mean commit 4be536debe3f7b0c right? (Actually -rc7 I believe...) I do
have that one in my kernel. My kernel is 2.6.17 plus assorted XFS fixes.
> can you retry with a 2.6.18 server
> and see if your problem goes away?
Unfortunately it will be several days before I have a chance to do that.
The backtrace looked like this:
... nfsd_write nfsd_vfs_write vfs_writev do_readv_writev xfs_file_writev
xfs_write generic_file_buffered_write xfs_get_blocks __xfs_get_blocks
xfs_bmap xfs_iomap xfs_iomap_write_delay xfs_flush_space xfs_flush_device
schedule_timeout_uninterruptible.
with a 500ms sleep in xfs_flush_device().
Thanks
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-05 16:33 ` Stephane Doyon
@ 2006-10-05 23:29 ` David Chinner
0 siblings, 0 replies; 131+ messages in thread
From: David Chinner @ 2006-10-05 23:29 UTC (permalink / raw)
To: Stephane Doyon
Cc: David Chinner, Trond Myklebust, xfs, nfs, Shailendra Tripathi
On Thu, Oct 05, 2006 at 12:33:05PM -0400, Stephane Doyon wrote:
> retrying, just plowing on...
>
> >this would trigger a 500ms sleep on every write. That's the right
> >sort of ballpark for the slowness you were seeing - 5GB / 32k * 0.5s
> >= ~22 hours....
> >
> >This got fixed in 2.6.18-rc6 -
>
> You mean commit 4be536debe3f7b0c right? (Actually -rc7 I believe...) I do
> have that one in my kernel. My kernel is 2.6.17 plus assorted XFS fixes.
>
> >can you retry with a 2.6.18 server
> >and see if your problem goes away?
>
> Unfortunately it will be several days before I have a chance to do that.
>
> The backtrace looked like this:
>
> ... nfsd_write nfsd_vfs_write vfs_writev do_readv_writev xfs_file_writev
> xfs_write generic_file_buffered_write xfs_get_blocks __xfs_get_blocks
> xfs_bmap xfs_iomap xfs_iomap_write_delay xfs_flush_space xfs_flush_device
> schedule_timeout_uninterruptible.
Ahhh, this gets hit on the ->prepare_write path (xfs_iomap_write_delay()),
not the allocate path (xfs_iomap_write_allocate()). Sorry - I got myself
(and probably everyone else) confused there, which is why I suspected sync
writes - they trigger the allocate path in the write call. I don't think
2.6.18 will change anything.
FWIW, I don't think we can avoid this sleep when we first hit ENOSPC
conditions, but perhaps once we are certain of the ENOSPC status
we can tag the filesystem with this state (say an xfs_mount flag)
and only clear that tag when something is freed. We could then
use the tag to avoid continually trying extremely hard to allocate
space when we know there is none available....
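(The tagging idea sketched above could look roughly like the following --
all names here are hypothetical illustrations, not actual XFS code:)

```c
/* Illustrative sketch of the ENOSPC-tagging idea described above:
 * remember that the filesystem is full so later writes can skip the
 * expensive flush-and-sleep retry, and clear the tag only when space
 * is freed. All names are hypothetical, not real XFS identifiers. */

#define SKETCH_MOUNT_NOSPACE 0x1u  /* assumed "filesystem is full" flag */

struct sketch_mount {
    unsigned int m_flags;
};

/* On a confirmed ENOSPC: tag the filesystem instead of retrying hard. */
void sketch_mark_enospc(struct sketch_mount *mp)
{
    mp->m_flags |= SKETCH_MOUNT_NOSPACE;
}

/* When blocks are freed, clear the tag so allocation tries hard again. */
void sketch_clear_enospc(struct sketch_mount *mp)
{
    mp->m_flags &= ~SKETCH_MOUNT_NOSPACE;
}

/* Allocation path: only do the 500ms flush-and-wait dance if we do not
 * already know the filesystem is out of space. */
int sketch_should_try_hard(const struct sketch_mount *mp)
{
    return !(mp->m_flags & SKETCH_MOUNT_NOSPACE);
}
```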
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-10-05 23:29 ` David Chinner
@ 2006-10-06 13:03 ` Stephane Doyon
0 siblings, 0 replies; 131+ messages in thread
From: Stephane Doyon @ 2006-10-06 13:03 UTC (permalink / raw)
To: David Chinner; +Cc: Trond Myklebust, xfs, nfs, Shailendra Tripathi
On Fri, 6 Oct 2006, David Chinner wrote:
>> The backtrace looked like this:
>>
>> ... nfsd_write nfsd_vfs_write vfs_writev do_readv_writev xfs_file_writev
>> xfs_write generic_file_buffered_write xfs_get_blocks __xfs_get_blocks
>> xfs_bmap xfs_iomap xfs_iomap_write_delay xfs_flush_space xfs_flush_device
>> schedule_timeout_uninterruptible.
>
> Ahhh, this gets hit on the ->prepare_write path (xfs_iomap_write_delay()),
Yes.
> not the allocate path (xfs_iomap_write_allocate()). Sorry - I got myself
> (and probably everyone else) confused there, which is why I suspected sync
> writes - they trigger the allocate path in the write call. I don't think
> 2.6.18 will change anything.
>
> FWIW, I don't think we can avoid this sleep when we first hit ENOSPC
> conditions, but perhaps once we are certain of the ENOSPC status
> we can tag the filesystem with this state (say an xfs_mount flag)
> and only clear that tag when something is freed. We could then
> use the tag to avoid continually trying extremely hard to allocate
> space when we know there is none available....
Yes! That's what I was trying to suggest :-). Thank you.
Is that hard to do?
^ permalink raw reply [flat|nested] 131+ messages in thread
* Linux 2.6.16.4
@ 2006-04-11 17:33 Greg KH
2006-04-11 19:04 ` several messages Jan Engelhardt
0 siblings, 1 reply; 131+ messages in thread
From: Greg KH @ 2006-04-11 17:33 UTC (permalink / raw)
To: linux-kernel, stable; +Cc: torvalds
We (the -stable team) are announcing the release of the 2.6.16.4 kernel.
The diffstat and short summary of the fixes are below.
I'll also be replying to this message with a copy of the patch between
2.6.16.3 and 2.6.16.4, as it is small enough to do so.
The updated 2.6.16.y git tree can be found at:
rsync://rsync.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.16.y.git
and can be browsed at the normal kernel.org git web browser:
www.kernel.org/git/
thanks,
greg k-h
--------
Makefile | 2 +-
kernel/signal.c | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
Summary of changes from v2.6.16.3 to v2.6.16.4
==============================================
Greg Kroah-Hartman:
Linux 2.6.16.4
Oleg Nesterov:
RCU signal handling [CVE-2006-1523]
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-04-11 17:33 Linux 2.6.16.4 Greg KH
@ 2006-04-11 19:04 ` Jan Engelhardt
2006-04-11 19:20 ` Boris B. Zhmurov
2006-04-11 20:30 ` Greg KH
0 siblings, 2 replies; 131+ messages in thread
From: Jan Engelhardt @ 2006-04-11 19:04 UTC (permalink / raw)
To: Greg KH; +Cc: linux-kernel, stable, torvalds
>Date: Tue, 11 Apr 2006 09:26:20 -0700
>Subject: Linux 2.6.16.3
>David Howells:
> Keys: Fix oops when adding key to non-keyring [CVE-2006-1522]
>Date: Tue, 11 Apr 2006 10:33:23 -0700
>Subject: Linux 2.6.16.4
>Oleg Nesterov:
> RCU signal handling [CVE-2006-1523]
Now admins will spend another hour today just to upgrade.
These two patches could have been queued until the end of the day. Maybe
another one turns up soon.
Jan Engelhardt
--
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-04-11 19:04 ` several messages Jan Engelhardt
@ 2006-04-11 19:20 ` Boris B. Zhmurov
2006-04-11 20:30 ` Greg KH
1 sibling, 0 replies; 131+ messages in thread
From: Boris B. Zhmurov @ 2006-04-11 19:20 UTC (permalink / raw)
To: Jan Engelhardt; +Cc: Greg KH, linux-kernel, stable, torvalds
Hello, Jan Engelhardt.
On 11.04.2006 23:04 you said the following:
> Now admins will spend another hour today just to upgrade.
It's admin's job, isn't it?
> These two patches could have been queued until the end of the day. Maybe
> another one turns up soon.
> Jan Engelhardt
Hmm... Interesting. Are you blaming security officers for doing their
job? Please, don't! And many many thanks to Greg for giving us security
patches as soon as possible.
--
Boris B. Zhmurov
mailto: bb@kernelpanic.ru
"wget http://kernelpanic.ru/bb_public_key.pgp -O - | gpg --import"
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-04-11 19:04 ` several messages Jan Engelhardt
2006-04-11 19:20 ` Boris B. Zhmurov
@ 2006-04-11 20:30 ` Greg KH
2006-04-11 23:46 ` Jan Engelhardt
2006-04-12 0:36 ` Nix
1 sibling, 2 replies; 131+ messages in thread
From: Greg KH @ 2006-04-11 20:30 UTC (permalink / raw)
To: Jan Engelhardt; +Cc: linux-kernel, stable, torvalds
On Tue, Apr 11, 2006 at 09:04:42PM +0200, Jan Engelhardt wrote:
>
> >Date: Tue, 11 Apr 2006 09:26:20 -0700
> >Subject: Linux 2.6.16.3
> >David Howells:
> > Keys: Fix oops when adding key to non-keyring [CVE-2006-1522]
>
> >Date: Tue, 11 Apr 2006 10:33:23 -0700
> >Subject: Linux 2.6.16.4
> >Oleg Nesterov:
> > RCU signal handling [CVE-2006-1523]
>
> Now admins will spend another hour today just to upgrade.
> These two patches could have been queued until the end of the day. Maybe
> another one turns up soon.
The first one went out last night, as it was a real issue that affected
people and I had already waited longer than I felt comfortable with, due
to travel issues I had (two different talks in two different cities on
two different days).
The second one went out today, because it was reported today. Should I
have waited until tomorrow to see if something else came up?
thanks,
greg k-h
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-04-11 20:30 ` Greg KH
@ 2006-04-11 23:46 ` Jan Engelhardt
2006-04-12 0:36 ` Nix
1 sibling, 0 replies; 131+ messages in thread
From: Jan Engelhardt @ 2006-04-11 23:46 UTC (permalink / raw)
To: Greg KH; +Cc: linux-kernel, stable, torvalds
>
>The first one went out last night, as it was a real issue that affected
>people and I had already waited longer than I felt comfortable with, due
>to travel issues I had (two different talks in two different cities on
>two different days).
>
>The second one went out today, because it was reported today. Should I
>have waited until tomorrow to see if something else came up?
>
No, of course not - I did not know the first one was already long overdue.
[Sigh, pine changed the subject header and I did not notice.]
Jan Engelhardt
--
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2006-04-11 20:30 ` Greg KH
2006-04-11 23:46 ` Jan Engelhardt
@ 2006-04-12 0:36 ` Nix
1 sibling, 0 replies; 131+ messages in thread
From: Nix @ 2006-04-12 0:36 UTC (permalink / raw)
To: Greg KH; +Cc: Jan Engelhardt, linux-kernel, stable, torvalds
On 11 Apr 2006, Greg KH whispered secretively:
> The first one went out last night, as it was a real issue that affected
> people and I had already waited longer than I felt comfortable with, due
> to travel issues I had (two different talks in two different cities on
> two different days).
>
> The second one went out today, because it was reported today. Should I
> have waited until tomorrow to see if something else came up?
Indeed.
On top of that, they're `only' local DoSes, so many admins (i.e. those
without untrusted local users) will probably not have installed .3 yet:
and anyone with untrusted local users probably has someone whose entire
job is handling security anyway.
There's nothing wrong with rapid-fire -stables; either the issue is (in
the judgement of the ones doing the installation) critical, in which
case it should get out as fast as possible, or it isn't, in which case
the local installing admins can put it off for a day or so themselves to
see if another release comes out immediately afterwards.
--
`On a scale of 1-10, X's "brokenness rating" is 1.1, but that's only
because bringing Windows into the picture rescaled "brokenness" by
a factor of 10.' --- Peter da Silva
^ permalink raw reply [flat|nested] 131+ messages in thread
* ata over ethernet question
@ 2005-05-04 17:31 Maciej Soltysiak
2005-05-04 19:48 ` David Hollis
0 siblings, 1 reply; 131+ messages in thread
From: Maciej Soltysiak @ 2005-05-04 17:31 UTC (permalink / raw)
To: linux-kernel
Hi,
AOE is a bit new for me.
Would it be possible to use the AOE driver to
attach one ATA drive in a host over Ethernet to another
host? Or does it support specific hardware devices only?
You know, something like:
# fdisk <device_on_another_host>
# mkfs.ext2 <device_on_another_host/partition1>
# mount <device_on_another_host/partition1> /mnt/part1
--
Maciej
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: ata over ethernet question
2005-05-04 17:31 ata over ethernet question Maciej Soltysiak
@ 2005-05-04 19:48 ` David Hollis
2005-05-04 21:17 ` Re[2]: " Maciej Soltysiak
0 siblings, 1 reply; 131+ messages in thread
From: David Hollis @ 2005-05-04 19:48 UTC (permalink / raw)
To: Maciej Soltysiak; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1121 bytes --]
On Wed, 2005-05-04 at 19:31 +0200, Maciej Soltysiak wrote:
> Hi,
>
> AOE is a bit new for me.
>
> Would it be possible to use the AOE driver to
> attach one ATA drive in a host over Ethernet to another
> host? Or does it support specific hardware devices only?
>
> You know, something like:
> # fdisk <device_on_another_host>
> # mkfs.ext2 <device_on_another_host/partition1>
> # mount <device_on_another_host/partition1> /mnt/part1
>
That seems to be the basic idea but there doesn't seem to be a provider
stack just yet, just a 'client' (though I could be wrong). AOE is
similar in concept to iSCSI with the biggest difference being that AOE
runs over Ethernet and is thus non-routeable. iSCSI operates over IP so
you can do all kinds of fun IP games with it.
> --
> Maciej
>
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
--
David Hollis <dhollis@davehollis.com>
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re[2]: ata over ethernet question
2005-05-04 19:48 ` David Hollis
@ 2005-05-04 21:17 ` Maciej Soltysiak
2005-05-05 15:09 ` David Hollis
0 siblings, 1 reply; 131+ messages in thread
From: Maciej Soltysiak @ 2005-05-04 21:17 UTC (permalink / raw)
To: linux-kernel
Hello David,
Wednesday, May 4, 2005, 9:48:36 PM, you wrote:
> That seems to be the basic idea but there doesn't seem to be a provider
> stack just yet, just a 'client' (though I could be wrong). AOE is
> similar in concept to iSCSI with the biggest difference being that AOE
> runs over Ethernet and is thus non-routeable. iSCSI operates over IP so
> you can do all kinds of fun IP games with it.
Thanks, this is interesting. Do any of the iSCSI implementations out there
have this provider stack?
Regards,
Maciej
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Re[2]: ata over ethernet question
2005-05-04 21:17 ` Re[2]: " Maciej Soltysiak
@ 2005-05-05 15:09 ` David Hollis
2005-05-07 15:05 ` Sander
0 siblings, 1 reply; 131+ messages in thread
From: David Hollis @ 2005-05-05 15:09 UTC (permalink / raw)
To: Maciej Soltysiak; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1018 bytes --]
On Wed, 2005-05-04 at 23:17 +0200, Maciej Soltysiak wrote:
> Hello David,
>
> Wednesday, May 4, 2005, 9:48:36 PM, you wrote:
> > That seems to be the basic idea but there doesn't seem to be a provider
> > stack just yet, just a 'client' (though I could be wrong). AOE is
> > similar in concept to iSCSI with the biggest difference being that AOE
> > runs over Ethernet and is thus non-routeable. iSCSI operates over IP so
> > you can do all kinds of fun IP games with it.
> Thanks, this is interesting. Do any of the iSCSI implementations out there
> have this provider stack?
>
> Regards,
> Maciej
There seem to be a few iSCSI implementations floating around for Linux,
hopefully one will be added to mainline soon. Most of those
implementations are for the client side, though there is at least one
target implementation that allows you to provide local storage to iSCSI
clients. I don't remember the name of it or if it's still maintained or
not.
--
David Hollis <dhollis@davehollis.com>
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Re[2]: ata over ethernet question
2005-05-05 15:09 ` David Hollis
@ 2005-05-07 15:05 ` Sander
2005-05-10 22:00 ` Guennadi Liakhovetski
0 siblings, 1 reply; 131+ messages in thread
From: Sander @ 2005-05-07 15:05 UTC (permalink / raw)
To: David Hollis; +Cc: Maciej Soltysiak, linux-kernel
David Hollis wrote (ao):
> There seem to be a few iSCSI implementations floating around for
> Linux, hopefully one will be added to mainline soon. Most of those
> implementations are for the client side though there is at least one
> target implementation that allows you to provide local storage to
> iSCSI clients. I don't remember the name of it or if it's still
> maintained or not.
Quite active even:
http://sourceforge.net/projects/iscsitarget/
The "Quick Guide to iSCSI on Linux" is a good starting point btw.
Also check out http://www.open-iscsi.org/ (the client, aka 'initiator').
--
Humilis IT Services and Solutions
http://www.humilis.net
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: Re[2]: ata over ethernet question
2005-05-07 15:05 ` Sander
@ 2005-05-10 22:00 ` Guennadi Liakhovetski
2005-05-11 8:56 ` Vladislav Bolkhovitin
0 siblings, 1 reply; 131+ messages in thread
From: Guennadi Liakhovetski @ 2005-05-10 22:00 UTC (permalink / raw)
To: Sander; +Cc: David Hollis, Maciej Soltysiak, linux-kernel, linux-scsi
Hi
On Sat, 7 May 2005, Sander wrote:
> David Hollis wrote (ao):
> > There seem to be a few iSCSI implementations floating around for
> > Linux, hopefully one will be added to mainline soon. Most of those
> > implementations are for the client side though there is at least one
> > target implementation that allows you to provide local storage to
> > iSCSI clients. I don't remember the name of it or if it's still
> > maintained or not.
>
> Quite active even:
>
> http://sourceforge.net/projects/iscsitarget/
>
> The "Quick Guide to iSCSI on Linux" is a good starting point btw.
>
> Also check out http://www.open-iscsi.org/ (the client, aka 'initiator').
A follow-up question - I recently used nbd to access a CD-ROM. It worked
nicely, but I had to read in 7 CDs, so each time I had to replace a CD I
had to stop the client and the server, replace the CD, restart the
server, and restart the client... I thought about extending NBD to (better)
support removable media, but then you start thinking about all those
features that your local block device has that don't get exported over
NBD...
Now, my understanding (sorry, without looking at any docs - yet) is that
iSCSI is (or at least should be) free from these limitations. So, does it
make any sense at all to extend NBD, or to just switch to iSCSI? Should NBD
be kept simple as it is, or would it be completely superseded by iSCSI,
or is there still something that NBD does that iSCSI wouldn't (easily) do?
Or am I completely misunderstanding what the iSCSI target does?
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: ata over ethernet question
2005-05-10 22:00 ` Guennadi Liakhovetski
@ 2005-05-11 8:56 ` Vladislav Bolkhovitin
2005-05-11 21:26 ` several messages Guennadi Liakhovetski
0 siblings, 1 reply; 131+ messages in thread
From: Vladislav Bolkhovitin @ 2005-05-11 8:56 UTC (permalink / raw)
To: Guennadi Liakhovetski
Cc: Sander, David Hollis, Maciej Soltysiak, FUJITA Tomonori,
linux-scsi, linux-kernel
Guennadi Liakhovetski wrote:
> Hi
>
> On Sat, 7 May 2005, Sander wrote:
>
>
>>David Hollis wrote (ao):
>>
>>>There seem to be a few iSCSI implementations floating around for
>>>Linux, hopefully one will be added to mainline soon. Most of those
>>>implementations are for the client side though there is at least one
>>>target implementation that allows you to provide local storage to
>>>iSCSI clients. I don't remember the name of it or if it's still
>>>maintained or not.
>>
>>Quite active even:
>>
>>http://sourceforge.net/projects/iscsitarget/
>>
>>The "Quick Guide to iSCSI on Linux" is a good starting point btw.
>>
>>Also check out http://www.open-iscsi.org/ (the client, aka 'initiator').
>
>
> A follow up question - I recently used nbd to access a CD-ROM. It worked
> nice, but, I had to read in 7 CDs, so, each time I had to replace a CD, I
> had to stop the client, the server, then replace the CD, re-start the
> server, re-start the client... I thought about extending NBD to (better)
> support removable media, but then you start thinking about all those
> features that your local block device has that don't get exported over
> NBD...
>
> Now, my understanding (sorry, without looking at any docs - yet) is, that
> iSCSI is (or at least should be) free from these limitations. So, does it
> make any sense at all extending NBD or just switch to iSCSI? Should NBD be
> just kept simple as it is or would it be completely superseded by iSCSI,
> or is there still something that NBD does that iSCSI wouldn't (easily) do?
>
> Or am I completely misunderstanding what iSCSI target does?
Actually, this is a property not of the iSCSI target itself, but of any
SCSI target. So, we implemented it as part of our SCSI target mid-level
(SCST, http://scst.sourceforge.net), therefore any target driver working
over it will automatically benefit from this feature. Unfortunately, the
only target drivers currently available are for QLogic 2x00 cards and for
the poor UNH iSCSI target (which does not work very reliably, and only
with very specific initiators). The published version supports only real
SCSI CD-ROMs. The CDROM FILEIO module, which allows exporting ISO images
as SCSI CD-ROM devices, is going to be available no later than the end of
May.
Vlad
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
>
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-11 8:56 ` Vladislav Bolkhovitin
@ 2005-05-11 21:26 ` Guennadi Liakhovetski
2005-05-12 2:16 ` Ming Zhang
2005-05-12 10:17 ` Vladislav Bolkhovitin
0 siblings, 2 replies; 131+ messages in thread
From: Guennadi Liakhovetski @ 2005-05-11 21:26 UTC (permalink / raw)
To: FUJITA Tomonori, Vladislav Bolkhovitin
Cc: iscsitarget-devel, linux-scsi, dmitry_yus, Sander, David Hollis,
Maciej Soltysiak, linux-kernel
Hello and thanks for the replies
On Wed, 11 May 2005, FUJITA Tomonori wrote:
> The iSCSI protocol simply encapsulates the SCSI protocol into the
> TCP/IP protocol, and carries packets over IP networks. You can handle
...
On Wed, 11 May 2005, Vladislav Bolkhovitin wrote:
> Actually, this is property not of iSCSI target itself, but of any SCSI target.
> So, we implemented it as part of our SCSI target mid-level (SCST,
> http://scst.sourceforge.net), therefore any target driver working over it will
> automatically benefit from this feature. Unfortunately, currently available
> only target drivers for Qlogic 2x00 cards and for poor UNH iSCSI target (that
> works not too reliable and only with very specific initiators). The published
...
The above basically confirms my understanding, apart from one "minor"
confusion - I thought that, in parallel to the hardware solutions, pure
software implementations were possible / being developed: on the initiator
side, a driver that implements a SCSI LDD API on one side and forwards
packets to an IP stack, say, over an ethernet card; and a counterpart on
the target side. Similar to the USB mass-storage and storage gadget
drivers?
Thanks
Guennadi
---
Guennadi Liakhovetski
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-11 21:26 ` several messages Guennadi Liakhovetski
@ 2005-05-12 2:16 ` Ming Zhang
2005-05-12 18:32 ` Dmitry Yusupov
2005-05-12 10:17 ` Vladislav Bolkhovitin
1 sibling, 1 reply; 131+ messages in thread
From: Ming Zhang @ 2005-05-12 2:16 UTC (permalink / raw)
To: Guennadi Liakhovetski
Cc: FUJITA Tomonori, Vladislav Bolkhovitin, iet-dev, linux-scsi,
Dmitry Yusupov, Sander, David Hollis, Maciej Soltysiak,
linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1703 bytes --]
iscsi is scsi over ip.
usb disk is scsi over usb.
so just a different transport.
u are rite. ;)
ming
On Wed, 2005-05-11 at 23:26 +0200, Guennadi Liakhovetski wrote:
> Hello and thanks for the replies
>
> On Wed, 11 May 2005, FUJITA Tomonori wrote:
> > The iSCSI protocol simply encapsulates the SCSI protocol into the
> > TCP/IP protocol, and carries packets over IP networks. You can handle
> ...
>
> On Wed, 11 May 2005, Vladislav Bolkhovitin wrote:
> > Actually, this is property not of iSCSI target itself, but of any SCSI target.
> > So, we implemented it as part of our SCSI target mid-level (SCST,
> > http://scst.sourceforge.net), therefore any target driver working over it will
> > automatically benefit from this feature. Unfortunately, currently available
> > only target drivers for Qlogic 2x00 cards and for poor UNH iSCSI target (that
> > works not too reliable and only with very specific initiators). The published
> ...
>
> The above confirms basically my understanding apart from one "minor"
> confusion - I thought, that parallel to hardware solutions pure software
> implementations were possible / being developed, like a driver, that
> implements a SCSI LDD API on one side, and forwards packets to an IP
> stack, say, over an ethernet card - on the initiator side. And a counter
> part on the target side. Similarly to the USB mass-storage and storage
> gadget drivers?
>
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski
>
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-12 2:16 ` Ming Zhang
@ 2005-05-12 18:32 ` Dmitry Yusupov
2005-05-13 8:12 ` Christoph Hellwig
0 siblings, 1 reply; 131+ messages in thread
From: Dmitry Yusupov @ 2005-05-12 18:32 UTC (permalink / raw)
To: mingz
Cc: Guennadi Liakhovetski, FUJITA Tomonori, Vladislav Bolkhovitin,
iet-dev, linux-scsi, Sander, David Hollis, Maciej Soltysiak,
linux-kernel
On Wed, 2005-05-11 at 22:16 -0400, Ming Zhang wrote:
> iscsi is scsi over ip.
correction: iSCSI today has RFCs for at least two transports - TCP/IP and
iSER/RDMA (being finalized), with RDMA over InfiniBand or an RNIC. And
I think people have started writing an initial draft for an SCTP/IP
transport...
From this perspective, iSCSI is certainly more advanced and mature
compared to NBD variations.
> usb disk is scsi over usb.
> so just a different transport.
> u are rite. ;)
>
> ming
>
> On Wed, 2005-05-11 at 23:26 +0200, Guennadi Liakhovetski wrote:
> > Hello and thanks for the replies
> >
> > On Wed, 11 May 2005, FUJITA Tomonori wrote:
> > > The iSCSI protocol simply encapsulates the SCSI protocol into the
> > > TCP/IP protocol, and carries packets over IP networks. You can handle
> > ...
> >
> > On Wed, 11 May 2005, Vladislav Bolkhovitin wrote:
> > > Actually, this is property not of iSCSI target itself, but of any SCSI target.
> > > So, we implemented it as part of our SCSI target mid-level (SCST,
> > > http://scst.sourceforge.net), therefore any target driver working over it will
> > > automatically benefit from this feature. Unfortunately, currently available
> > > only target drivers for Qlogic 2x00 cards and for poor UNH iSCSI target (that
> > > works not too reliable and only with very specific initiators). The published
> > ...
> >
> > The above confirms basically my understanding apart from one "minor"
> > confusion - I thought, that parallel to hardware solutions pure software
> > implementations were possible / being developed, like a driver, that
> > implements a SCSI LDD API on one side, and forwards packets to an IP
> > stack, say, over an ethernet card - on the initiator side. And a counter
> > part on the target side. Similarly to the USB mass-storage and storage
> > gadget drivers?
> >
> > Thanks
> > Guennadi
> > ---
> > Guennadi Liakhovetski
> >
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-12 18:32 ` Dmitry Yusupov
@ 2005-05-13 8:12 ` Christoph Hellwig
2005-05-13 15:04 ` Dmitry Yusupov
0 siblings, 1 reply; 131+ messages in thread
From: Christoph Hellwig @ 2005-05-13 8:12 UTC (permalink / raw)
To: Dmitry Yusupov
Cc: mingz, Guennadi Liakhovetski, FUJITA Tomonori,
Vladislav Bolkhovitin, iet-dev, linux-scsi, Sander, David Hollis,
Maciej Soltysiak, linux-kernel
On Thu, May 12, 2005 at 11:32:12AM -0700, Dmitry Yusupov wrote:
> On Wed, 2005-05-11 at 22:16 -0400, Ming Zhang wrote:
> > iscsi is scsi over ip.
>
> correction. iSCSI today has RFC at least for two transports - TCP/IP and
> iSER/RDMA(in finalized progress) with RDMA over Infiniband or RNIC. And
> I think people start writing initial draft for SCTP/IP transport...
>
> >From this perspective, iSCSI certainly more advanced and matured
> comparing to NBD variations.
It's certainly much more complicated (in marketing speak that's usually
called advanced) but far less mature.
If you want network storage to just work, use nbd.
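Christoph's "just use nbd" can be sketched with the userspace nbd tools; the host name, port number, paths, and device node below are placeholders, the nbd commands need the nbd tools and root on the client, and only the backing-file creation is runnable anywhere:

```shell
# Create a 64 MB backing file to export (this part runs anywhere).
dd if=/dev/zero of=/tmp/nbd-disk.img bs=1M count=64 2>/dev/null
echo "backing file: $(wc -c < /tmp/nbd-disk.img) bytes"

# On the server (requires the nbd-server tool; port 2000 is arbitrary):
#   nbd-server 2000 /tmp/nbd-disk.img
# On the client (requires the nbd driver and root; device node may vary):
#   modprobe nbd
#   nbd-client server.example.com 2000 /dev/nbd0
#   mkfs.ext2 /dev/nbd0 && mount /dev/nbd0 /mnt
```

The block device then behaves like a local disk, with the server process shipping reads and writes over TCP.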
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-13 8:12 ` Christoph Hellwig
@ 2005-05-13 15:04 ` Dmitry Yusupov
2005-05-13 15:07 ` Christoph Hellwig
0 siblings, 1 reply; 131+ messages in thread
From: Dmitry Yusupov @ 2005-05-13 15:04 UTC (permalink / raw)
To: Christoph Hellwig
Cc: mingz, Guennadi Liakhovetski, FUJITA Tomonori,
Vladislav Bolkhovitin, iet-dev, linux-scsi, Sander, David Hollis,
Maciej Soltysiak, linux-kernel
On Fri, 2005-05-13 at 09:12 +0100, Christoph Hellwig wrote:
> On Thu, May 12, 2005 at 11:32:12AM -0700, Dmitry Yusupov wrote:
> > On Wed, 2005-05-11 at 22:16 -0400, Ming Zhang wrote:
> > > iscsi is scsi over ip.
> >
> > correction. iSCSI today has RFC at least for two transports - TCP/IP and
> > iSER/RDMA(in finalized progress) with RDMA over Infiniband or RNIC. And
> > I think people start writing initial draft for SCTP/IP transport...
> >
> > >From this perspective, iSCSI certainly more advanced and matured
> > comparing to NBD variations.
>
> It's for certainly much more complicated (in marketing speak that's usually
> called advanced) but far less mature.
>
> If you want network storage to just work use nbd.
You could tell this to a school's computer class teacher... A serious SAN
deployment will always be based either on FC or on iSCSI, for the reasons I
explained before.
I do not disagree; nbd is nice and simple, and for sure has its own
deployment space.
Dmitry
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-13 15:04 ` Dmitry Yusupov
@ 2005-05-13 15:07 ` Christoph Hellwig
2005-05-13 15:38 ` Dmitry Yusupov
0 siblings, 1 reply; 131+ messages in thread
From: Christoph Hellwig @ 2005-05-13 15:07 UTC (permalink / raw)
To: Dmitry Yusupov
Cc: Christoph Hellwig, mingz, Guennadi Liakhovetski, FUJITA Tomonori,
Vladislav Bolkhovitin, iet-dev, linux-scsi, Sander, David Hollis,
Maciej Soltysiak, linux-kernel
On Fri, May 13, 2005 at 08:04:16AM -0700, Dmitry Yusupov wrote:
> You could tell this to school's computer class teacher... Serious SAN
> deployment will always be based either on FC or iSCSI for the reasons I
> explained before.
Just FYI Steeleye ships a very successful clustering product that builds
on nbd.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-13 15:07 ` Christoph Hellwig
@ 2005-05-13 15:38 ` Dmitry Yusupov
0 siblings, 0 replies; 131+ messages in thread
From: Dmitry Yusupov @ 2005-05-13 15:38 UTC (permalink / raw)
To: Christoph Hellwig
Cc: mingz, Guennadi Liakhovetski, FUJITA Tomonori,
Vladislav Bolkhovitin, iet-dev, linux-scsi, Sander, David Hollis,
Maciej Soltysiak, linux-kernel
On Fri, 2005-05-13 at 16:07 +0100, Christoph Hellwig wrote:
> On Fri, May 13, 2005 at 08:04:16AM -0700, Dmitry Yusupov wrote:
> > You could tell this to school's computer class teacher... Serious SAN
> > deployment will always be based either on FC or iSCSI for the reasons I
> > explained before.
>
> Just FYI Steeleye ships a very successful clustering product that builds
> on nbd.
AFAIK, it is used for Data Replication purposes only. Correct me if I'm
wrong...
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2005-05-11 21:26 ` several messages Guennadi Liakhovetski
2005-05-12 2:16 ` Ming Zhang
@ 2005-05-12 10:17 ` Vladislav Bolkhovitin
1 sibling, 0 replies; 131+ messages in thread
From: Vladislav Bolkhovitin @ 2005-05-12 10:17 UTC (permalink / raw)
To: Guennadi Liakhovetski
Cc: FUJITA Tomonori, iscsitarget-devel, linux-scsi, dmitry_yus,
Sander, David Hollis, Maciej Soltysiak, linux-kernel
Guennadi Liakhovetski wrote:
> Hello and thanks for the replies
>
> On Wed, 11 May 2005, FUJITA Tomonori wrote:
>
>>The iSCSI protocol simply encapsulates the SCSI protocol into the
>>TCP/IP protocol, and carries packets over IP networks. You can handle
>
> ...
>
> On Wed, 11 May 2005, Vladislav Bolkhovitin wrote:
>
>>Actually, this is property not of iSCSI target itself, but of any SCSI target.
>>So, we implemented it as part of our SCSI target mid-level (SCST,
>>http://scst.sourceforge.net), therefore any target driver working over it will
>>automatically benefit from this feature. Unfortunately, currently available
>>only target drivers for Qlogic 2x00 cards and for poor UNH iSCSI target (that
>>works not too reliable and only with very specific initiators). The published
>
> ...
>
> The above confirms basically my understanding apart from one "minor"
> confusion - I thought, that parallel to hardware solutions pure software
> implementations were possible / being developed, like a driver, that
> implements a SCSI LDD API on one side, and forwards packets to an IP
> stack, say, over an ethernet card - on the initiator side. And a counter
> part on the target side. Similarly to the USB mass-storage and storage
> gadget drivers?
There is some confusion in the SCSI world between SCSI as a transport
and SCSI as a command set and software communication protocol, which
works above the transport. So, you can implement the SCSI transport in any
software (e.g. iSCSI) or hardware (parallel SCSI, Fibre Channel, SATA,
etc.) way, but if the SCSI message-passing protocol is used, the overall
system remains SCSI, with all protocol obligations such as task management.
So, a pure software SCSI solution is possible. BTW, there are pure
hardware iSCSI implementations as well.
Vlad
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski
>
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: 2.6.6-rc3-mm2 (4KSTACK)
@ 2004-05-11 8:45 Helge Hafting
2004-05-11 17:59 ` several messages Bill Davidsen
0 siblings, 1 reply; 131+ messages in thread
From: Helge Hafting @ 2004-05-11 8:45 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel
Bill Davidsen wrote:
> Arjan van de Ven wrote:
>
>>> It's probably a Bad Idea to push this to Linus before the vendors
>>> that have
>>> significant market-impact issues (again - anybody other than NVidia
>>> here?)
>>> have gotten their stuff cleaned up...
>>
>>
>>
>> Ok I don't want to start a flamewar but... Do we want to hold linux back
>> until all binary only module vendors have caught up ??
>
>
> My questions is, hold it back from what? Having the 4k option is fine,
> it's just eliminating the current default which I think is
> undesirable. I tried 4k stack, I couldn't measure any improvement in
> anything (as in no visible speedup or saving in memory).
The memory saving is usually modest: 4k per thread. It might make a
difference for those with many thousands of threads. I believe this is
unswappable memory, which is much more valuable than ordinary process
memory.
More interesting is that it removes one way for fork() to fail. With 8k
stacks, the new process needs to allocate two consecutive pages for those
8k. That might be impossible due to fragmentation, even if there are
megabytes of free memory. Such a problem usually only shows up after a
long time. Now we only need to allocate a single page, which always works
as long as there is any free memory at all.
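Helge's point about consecutive pages can be checked on a live box: /proc/buddyinfo lists free blocks of each order per zone, and an 8k stack needs a free order-1 (two consecutive pages) block. A small awk sketch, assuming the usual buddyinfo layout where counts start at the fifth field:

```shell
# /proc/buddyinfo lines look like "Node N, zone NAME c0 c1 c2 ...",
# where cK is the count of free order-K blocks (counts start at field 5).
# Sum each order across all zones; order-1 is what an 8k stack needs.
awk '{ for (i = 5; i <= NF; i++) sum[i - 5] += $i }
     END { for (o = 0; o in sum; o++)
               printf "order-%d free blocks: %d\n", o, sum[o] }' \
    /proc/buddyinfo
```

On a long-running, fragmented box the order-1 count can drop near zero even while order-0 pages remain plentiful, which is exactly the fork() failure mode described above.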
> For an embedded system, where space is tight and the code paths well
> known, sure, but I haven't been able to find or generate any objective
> improvement, other than some posts saying smaller is always better.
> Nothing slows a system down like a crash, even if it isn't followed by
> a restore from backup.
Consider the case when your server (web/mail/other) fails to fork, and then
you can't log in because that requires fork() too. 4k stacks remove this
scenario, and are a stability improvement.
Helge Hafting
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2004-05-11 8:45 2.6.6-rc3-mm2 (4KSTACK) Helge Hafting
@ 2004-05-11 17:59 ` Bill Davidsen
0 siblings, 0 replies; 131+ messages in thread
From: Bill Davidsen @ 2004-05-11 17:59 UTC (permalink / raw)
To: Horst von Brand, Andrew Morton, Helge Hafting; +Cc: Linux Kernel Mailing List
Thanks to all of you for this information. This is an interesting way to
overcome the kernel memory fragmentation issue. I would have thought it
more important to address having memory become so fragmented that there
is no 8k chunk left "even with many megabytes free", as someone wrote.
On Mon, 10 May 2004, Horst von Brand wrote:
> Bill Davidsen <davidsen@tmr.com> said:
>
> [...]
>
> > I tried 4k stack, I couldn't measure any improvement in anything (as in
> > no visible speedup or saving in memory).
>
> 4K stacks lets the kernel create new threads/processes as long as there is
> free memory; with 8K stacks it needs two consecutive free page frames in
> physical memory, when memory is fragmented (and large) they are hard to
> come by...
> --
> Dr. Horst H. von Brand User #22616 counter.li.org
> Departamento de Informatica Fono: +56 32 654431
> Universidad Tecnica Federico Santa Maria +56 32 654239
> Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513
>
On Mon, 10 May 2004, Andrew Morton wrote:
> Horst von Brand <vonbrand@inf.utfsm.cl> wrote:
> >
> > Bill Davidsen <davidsen@tmr.com> said:
> >
> > [...]
> >
> > > I tried 4k stack, I couldn't measure any improvement in anything (as in
> > > no visible speedup or saving in memory).
> >
> > 4K stacks lets the kernel create new threads/processes as long as there is
> > free memory; with 8K stacks it needs two consecutive free page frames in
> > physical memory, when memory is fragmented (and large) they are hard to
> > come by...
>
> This is true to a surprising extent. A couple of weeks ago I observed my
> 256MB box freeing over 20MB of pages before it could successfully acquire a
> single 1-order page.
>
> That was during an updatedb run.
>
> And a 1-order GFP_NOFS allocation was actually livelocking, because
> !__GFP_FS allocations aren't allowed to enter dentry reclaim. Which is why
> VFS caches are now forced to use 0-order allocations.
>
>
On Tue, 11 May 2004, Helge Hafting wrote:
> Bill Davidsen wrote:
>
> > Arjan van de Ven wrote:
> >
> >>> It's probably a Bad Idea to push this to Linus before the vendors
> >>> that have
> >>> significant market-impact issues (again - anybody other than NVidia
> >>> here?)
> >>> have gotten their stuff cleaned up...
> >>
> >>
> >>
> >> Ok I don't want to start a flamewar but... Do we want to hold linux back
> >> until all binary only module vendors have caught up ??
> >
> >
> > My questions is, hold it back from what? Having the 4k option is fine,
> > it's just eliminating the current default which I think is
> > undesirable. I tried 4k stack, I couldn't measure any improvement in
> > anything (as in no visible speedup or saving in memory).
>
> The memory saving is usually modest: 4k per thread. It might make a
> difference for
> those with many thousands of threads. I believe this is unswappable
> memory,
> which is much more valuable than ordinary process memory.
>
> More interesting is that it removes one way for fork() to fail. With 8k
> stacks,
> the new process needs to allocate two consecutive pages for those 8k. That
> might be impossible due to fragmentation, even if there are megabytes of
> free
> memory. Such a problem usually only shows up after a long time. Now we
> only need to
> allocate a single page, which always works as long as there is any free
> memory at all.
>
> > For an embedded system, where space is tight and the code paths well
> > known, sure, but I haven't been able to find or generate any objective
> > improvement, other than some posts saying smaller is always better.
> > Nothing slows a system down like a crash, even if it isn't followed by
> > a restore from backup.
>
> Consider the case when your server (web/mail/other) fails to fork, and then
> you can't login because that requires fork() too. 4k stacks remove this
> scenario,
> and is a stability improvement.
>
> Helge Hafting
>
--
bill davidsen <davidsen@tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: [patch] HT scheduler, sched-2.5.68-A9
@ 2003-04-22 10:34 Ingo Molnar
2003-04-22 22:16 ` several messages Bill Davidsen
0 siblings, 1 reply; 131+ messages in thread
From: Ingo Molnar @ 2003-04-22 10:34 UTC (permalink / raw)
To: Rick Lindsley; +Cc: linux-kernel
On Tue, 22 Apr 2003, Rick Lindsley wrote:
> Ingo, several questions.
>
> What makes this statement:
>
> * At this point it's sure that we have a SMT
> * imbalance: this (physical) CPU is idle but
> * another CPU has two (or more) tasks running.
>
> true? Do you mean "this cpu/sibling set are all idle but another
> cpu/sibling set are all non-idle"? [...]
yes, precisely.
> [...] Are we assuming that because both a physical processor and its
> sibling are not idle, that it is better to move a task from the sibling
> to a physical processor? In other words, we are presuming that the case
> where the task on the physical processor and the task(s) on the
> sibling(s) are actually benefitting from the relationship is rare?
yes. This 'un-sharing' of contexts happens unconditionally, whenever we
notice the situation. (ie. whenever a CPU goes completely idle and notices
an overloaded physical CPU.) On the HT system i have, i have measured this
to be a beneficial move even for the most trivial things like infinite
loop-counting.
the more per-logical-CPU cache a given SMT implementation has, the less
beneficial this move becomes - in that case the system should rather be
set up as a NUMA topology and scheduled via the NUMA scheduler.
> * We wake up one of the migration threads (it
> * doesnt matter which one) and let it fix things up:
>
> So although a migration thread normally pulls tasks to it, we've altered
> migration_thread now so that when cpu_active_balance() is set for its
> cpu, it will go find a cpu/sibling set in which all siblings are busy
> (knowing it has a good chance of finding one), decrement nr_running in
> the runqueue it has found, call load_balance() on the queue which is
> idle, and hope that load_balance will again find the busy queue (the
> same queue as the migration thread's) and decide to move a task over?
yes.
> whew. So why are we perverting the migration thread to push rather than
> pull? If active_load_balance() finds a imbalance, why must we use such
> indirection? Why decrement nr_running? Couldn't we put together a
> migration_req_t for the target queue's migration thread?
i'm not sure what you mean by perverting the migration thread to push
rather than to pull, as migration threads always push - it's not different in
this case either. Since the task in question is running in an
un-cooperative way at the moment of active-balancing, that CPU needs to
run the high-prio migration thread, which pushes the task to the proper
CPU after that point. [if the push is still necessary.]
we could use a migration_req_t for this, in theory, but active balancing
is independent of ->cpus_allowed, so some special code would still be
needed. Also, active balancing is non-queued by nature. Is there a big
difference?
> Making the migration thread TASK_UNINTERRUPTIBLE has the nasty side
> effect of artificially raising the load average. Why is this changed?
agreed, this is an oversight, i fixed it in my tree.
Ingo
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2003-04-22 10:34 [patch] HT scheduler, sched-2.5.68-A9 Ingo Molnar
@ 2003-04-22 22:16 ` Bill Davidsen
2003-04-22 23:38 ` Rick Lindsley
0 siblings, 1 reply; 131+ messages in thread
From: Bill Davidsen @ 2003-04-22 22:16 UTC (permalink / raw)
To: Dave Jones, Ingo Molnar; +Cc: Rick Lindsley, linux-kernel
On Tue, 22 Apr 2003, Dave Jones wrote:
> On Mon, Apr 21, 2003 at 03:13:31PM -0400, Ingo Molnar wrote:
>
> > +/*
> > + * Is there a way to do this via Kconfig?
> > + */
> > +#if CONFIG_NR_SIBLINGS_2
> > +# define CONFIG_NR_SIBLINGS 2
> > +#elif CONFIG_NR_SIBLINGS_4
> > +# define CONFIG_NR_SIBLINGS 4
> > +#else
> > +# define CONFIG_NR_SIBLINGS 0
> > +#endif
> > +
>
> Maybe this would be better resolved at runtime ?
> With the above patch, you'd need three seperate kernel images
> to run optimally on a system in each of the cases.
> The 'vendor kernel' scenario here looks ugly to me.
>
> > +#if CONFIG_NR_SIBLINGS
> > +# define CONFIG_SHARE_RUNQUEUE 1
> > +#else
> > +# define CONFIG_SHARE_RUNQUEUE 0
> > +#endif
>
> And why can't this just be a
>
> if (ht_enabled)
> shared_runqueue = 1;
>
> Dumping all this into the config system seems to be the
> wrong direction IMHO. The myriad of runtime knobs in the
> scheduler already is bad enough, without introducing
> compile time ones as well.
May I add my "I don't understand this, either" at this point? It seems
desirable to have this particular value determined at runtime.
On Tue, 22 Apr 2003, Ingo Molnar wrote:
>
> On Tue, 22 Apr 2003, Rick Lindsley wrote:
> > [...] Are we assuming that because both a physical processor and its
> > sibling are not idle, that it is better to move a task from the sibling
> > to a physical processor? In other words, we are presuming that the case
> > where the task on the physical processor and the task(s) on the
> > sibling(s) are actually benefitting from the relationship is rare?
>
> yes. This 'un-sharing' of contexts happens unconditionally, whenever we
> notice the situation. (ie. whenever a CPU goes completely idle and notices
> an overloaded physical CPU.) On the HT system i have i have measure this
> to be a beneficial move even for the most trivial things like infinite
> loop-counting.
>
> the more per-logical-CPU cache a given SMT implementation has, the less
> beneficial this move becomes - in that case the system should rather be
> set up as a NUMA topology and scheduled via the NUMA scheduler.
Have you done any tests with a threaded process running on a single CPU's
siblings? If the threads are sharing data and locks in the same cache, it's
not obvious (to me at least) that it would be faster on two CPUs that both
have to do updates. That's a question, not an implication that it is
significantly better on just one; a threaded program with only two threads
is not as likely to be doing the same thing in both, perhaps.
--
bill davidsen <davidsen@tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2003-04-22 22:16 ` several messages Bill Davidsen
@ 2003-04-22 23:38 ` Rick Lindsley
2003-04-23 9:17 ` Ingo Molnar
0 siblings, 1 reply; 131+ messages in thread
From: Rick Lindsley @ 2003-04-22 23:38 UTC (permalink / raw)
To: Bill Davidsen; +Cc: Dave Jones, Ingo Molnar, linux-kernel
Have you done any tests with a threaded process running on a single CPU in
the siblings? If they are sharing data and locks in the same cache it's
not obvious (to me at least) that it would be faster in two CPUs having to
do updates. That's a question, not an implication that it is significantly
better in just one, a threaded program with only two threads is not as
likely to be doing the same thing in both, perhaps.
True. I have a hunch (and it's only a hunch -- no hard data!) that
two threads that are sharing the same data will do better if they can
be located on a physical/sibling processor group. For workloads where
you really do have two distinct processes, or even threads but which are
operating on wholly different portions of data or code, moving them to
separate physical processors may be warranted. The key is whether the
work of one sibling is destroying the cache of another.
Rick
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2003-04-22 23:38 ` Rick Lindsley
@ 2003-04-23 9:17 ` Ingo Molnar
0 siblings, 0 replies; 131+ messages in thread
From: Ingo Molnar @ 2003-04-23 9:17 UTC (permalink / raw)
To: Rick Lindsley; +Cc: Bill Davidsen, Dave Jones, linux-kernel
On Tue, 22 Apr 2003, Rick Lindsley wrote:
> True. I have a hunch (and it's only a hunch -- no hard data!) that two
> threads that are sharing the same data will do better if they can be
> located on a physical/sibling processor group. For workloads where you
> really do have two distinct processes, or even threads but which are
> operating on wholly different portions of data or code, moving them to
> separate physical processors may be warranted. The key is whether the
> work of one sibling is destroying the cache of another.
If two threads have a workload that wants to be co-scheduled then the SMP
scheduler will do damage to them anyway - independently of any HT
scheduling decisions. One solution for such specific cases is to use the
CPU-binding API to move those threads to the same physical CPU. If there's
some common class of applications where this is the common case, then we
could start thinking about automatic support for them.
Ingo
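The CPU-binding API Ingo mentions is sched_setaffinity(), which util-linux's taskset wraps. A sketch of building the affinity mask for two sibling CPUs; the CPU numbers and program name are assumptions (sibling numbering varies by machine - check "physical id" and siblings in /proc/cpuinfo):

```shell
# Build the affinity bitmask for CPUs 0 and 2 (one bit per logical CPU).
cpus="0 2"
mask=0
for c in $cpus; do mask=$((mask | (1 << c))); done
printf 'affinity mask for CPUs %s: 0x%x\n' "$cpus" "$mask"

# Apply it with taskset so both threads stay on one physical package
# (./threaded-app is a placeholder):
#   taskset 0x5 ./threaded-app
```

Binding both threads of a co-scheduled workload to one physical CPU's siblings is exactly the manual workaround Ingo suggests until automatic support exists.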
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: ANN: LKMB (Linux Kernel Module Builder) version 0.1.16
@ 2003-01-23 0:20 Hal Duston
2003-01-27 16:46 ` several messages Bill Davidsen
0 siblings, 1 reply; 131+ messages in thread
From: Hal Duston @ 2003-01-23 0:20 UTC (permalink / raw)
To: linux-kernel
I use "INSTALL_MOD_PATH=put/the/modules/here/instead/of/lib/modules" in my
.profile or whatever in order to drop the modules into another directory
at "make modules_install" time. Is this one of the things folks are
talking about?
Hal Duston
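The variable Hal describes redirects the destination root of the install step; a minimal sketch, where the staging path is an assumption and the make step is shown commented because it needs a configured kernel source tree:

```shell
# Stage modules under a scratch root instead of the live /lib/modules.
DESTDIR=/tmp/staged-modules
mkdir -p "$DESTDIR"
echo "modules will land under: $DESTDIR/lib/modules/<version>/"

# From a configured kernel source tree:
#   make modules_install INSTALL_MOD_PATH="$DESTDIR"
```

This is handy for cross-builds and for packaging, since the running system's module tree is never touched.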
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2003-01-23 0:20 ANN: LKMB (Linux Kernel Module Builder) version 0.1.16 Hal Duston
@ 2003-01-27 16:46 ` Bill Davidsen
2003-01-27 16:59 ` David Woodhouse
0 siblings, 1 reply; 131+ messages in thread
From: Bill Davidsen @ 2003-01-27 16:46 UTC (permalink / raw)
To: David Woodhouse, Hal Duston; +Cc: linux-kernel, Olaf Titz
On Wed, 22 Jan 2003, David Woodhouse wrote:
>
> davidsen@tmr.com said:
> > `uname -r` is the kernel version of the running kernel. It is NOT by
> > magic the kernel version of the kernel you are building...
>
> Er, yes. And what's your point?
>
> There is _no_ magic that will find the kernel you want to build against
> today without any input from you. Using the build tree for the
> currently-running kernel, if installed in the standard place, is as good a
> default as any. Of course you should be permitted to override that default.
You make my point for me, there is no magic, and when building a module it
should require that the directory be specified by either a command line
option (as noted below) or by being built as part of a source tree. There
*is* no good default in that particular case.
On Wed, 22 Jan 2003, Hal Duston wrote:
> I use "INSTALL_MOD_PATH=put/the/modules/here/instead/of/lib/modules" in my
> .profile or whatever in order to drop the modules into another directory
> at "make modules_install" time. Is this one of the things folks are
> talking about?
Related for sure, the point I was making was that there is no good default
place to put modules built outside a kernel source tree (and probably also
when built for multiple kernels). I was suggesting that the module tree of
the running kernel might be a really poor choice. I don't think I was
clear in my first post, I was not suggesting a better default, I was
suggesting that any default is likely to bite.
I'm not unhappy that Mr. Woodhouse disagrees, I just think he missed my
point the first time and I'm trying to clarify.
--
bill davidsen <davidsen@tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
^ permalink raw reply [flat|nested] 131+ messages in thread
* Re: several messages
2003-01-27 16:46 ` several messages Bill Davidsen
@ 2003-01-27 16:59 ` David Woodhouse
0 siblings, 0 replies; 131+ messages in thread
From: David Woodhouse @ 2003-01-27 16:59 UTC (permalink / raw)
To: Bill Davidsen; +Cc: Hal Duston, linux-kernel, Olaf Titz
davidsen@tmr.com said:
> You make my point for me, there is no magic, and when building a
> module it should require that the directory be specified by either a
> command line option (as noted below) or by being built as part of a
> source tree. There *is* no good default in that particular case.
/lib/modules/`uname -r`/build _is_ a good default for a module to build
against. It is correct in more cases than a simple failure to do anything.
For _installing_, the correct place to install the built objects is surely
/lib/modules/$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) where the
variables are obtained from the top-level Makefile in the kernel sources
you built against.
You _default_ to building and installing for the current kernel, if it's
installed properly. But of course you should be permitted to override that
on the command line.
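The default-with-override pattern David describes is the usual external-module invocation; KDIR below is just a conventional variable name, not anything mandated:

```shell
# Default to the running kernel's build tree, but let the caller override:
#   KDIR=/path/to/other/tree sh build.sh
KDIR=${KDIR:-/lib/modules/$(uname -r)/build}
echo "building against: $KDIR"

# Typical out-of-tree module build/install, run from the module's directory:
#   make -C "$KDIR" M=$PWD modules
#   make -C "$KDIR" M=$PWD modules_install
```

The install step then derives the /lib/modules/<version> directory from the tree being built against, not from `uname -r`.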
> Related for sure, the point I was making was that there is no good
> default place to put modules built outside a kernel source tree (and
> probably also when built for multiple kernels).
I disagree. Modutils will look in only one place -- the /lib/modules/...
directory corresponding to the kernel version for which you built each
module. Each module, therefore, should go into the directory corresponding
to the version of the kernel against which it was built.
Finding the appropriate _installation_ directory is trivial, surely? You
can even find it from the 'kernel_version' stamp _inside_ the object file,
without any other information?
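Reading the version stamp out of the object file can likewise be sketched.
Old-style modules embed a "kernel_version=" string in the object; a tiny
fake "object" file is fabricated below so the sketch is self-contained
(against a real module one would run strings(1) on the .o instead):

```shell
#!/bin/sh
# Fake object file: printable text interleaved with NUL bytes, including
# the kernel_version stamp a real module would carry.
obj=$(mktemp)
printf 'some binary junk\0kernel_version=2.6.16.4\0more junk\n' > "$obj"

# Turn NUL-separated blobs into lines, then keep only the version stamp.
kver=$(tr '\0' '\n' < "$obj" | sed -n 's/^kernel_version=//p' | head -n 1)

# The directory modutils will search for this module.
echo "/lib/modules/$kver"   # prints /lib/modules/2.6.16.4
```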
--
dwmw2
end of thread, other threads:[~2023-06-16 22:54 UTC | newest]
Thread overview: 131+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-01 0:29 [PATCH 00/18] make test "linting" more comprehensive Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 01/18] t: add skeleton chainlint.pl Eric Sunshine via GitGitGadget
2022-09-01 12:27 ` Ævar Arnfjörð Bjarmason
2022-09-02 18:53 ` Eric Sunshine
2022-09-01 0:29 ` [PATCH 02/18] chainlint.pl: add POSIX shell lexical analyzer Eric Sunshine via GitGitGadget
2022-09-01 12:32 ` Ævar Arnfjörð Bjarmason
2022-09-03 6:00 ` Eric Sunshine
2022-09-01 0:29 ` [PATCH 03/18] chainlint.pl: add POSIX shell parser Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 04/18] chainlint.pl: add parser to validate tests Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 05/18] chainlint.pl: add parser to identify test definitions Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 06/18] chainlint.pl: validate test scripts in parallel Eric Sunshine via GitGitGadget
2022-09-01 12:36 ` Ævar Arnfjörð Bjarmason
2022-09-03 7:51 ` Eric Sunshine
2022-09-06 22:35 ` Eric Wong
2022-09-06 22:52 ` Eric Sunshine
2022-09-06 23:26 ` Jeff King
2022-11-21 4:02 ` Eric Sunshine
2022-11-21 13:28 ` Ævar Arnfjörð Bjarmason
2022-11-21 14:07 ` Eric Sunshine
2022-11-21 14:18 ` Ævar Arnfjörð Bjarmason
2022-11-21 14:48 ` Eric Sunshine
2022-11-21 18:04 ` Jeff King
2022-11-21 18:47 ` Eric Sunshine
2022-11-21 18:50 ` Eric Sunshine
2022-11-21 18:52 ` Jeff King
2022-11-21 19:00 ` Eric Sunshine
2022-11-21 19:28 ` Jeff King
2022-11-22 0:11 ` Ævar Arnfjörð Bjarmason
2022-09-01 0:29 ` [PATCH 07/18] chainlint.pl: don't require `return|exit|continue` to end with `&&` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 08/18] t/Makefile: apply chainlint.pl to existing self-tests Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 09/18] chainlint.pl: don't require `&` background command to end with `&&` Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 10/18] chainlint.pl: don't flag broken &&-chain if `$?` handled explicitly Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 11/18] chainlint.pl: don't flag broken &&-chain if failure indicated explicitly Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 12/18] chainlint.pl: complain about loops lacking explicit failure handling Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 13/18] chainlint.pl: allow `|| echo` to signal failure upstream of a pipe Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 14/18] t/chainlint: add more chainlint.pl self-tests Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 15/18] test-lib: retire "lint harder" optimization hack Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 16/18] test-lib: replace chainlint.sed with chainlint.pl Eric Sunshine via GitGitGadget
2022-09-03 5:07 ` Elijah Newren
2022-09-03 5:24 ` Eric Sunshine
2022-09-01 0:29 ` [PATCH 17/18] t/Makefile: teach `make test` and `make prove` to run chainlint.pl Eric Sunshine via GitGitGadget
2022-09-01 0:29 ` [PATCH 18/18] t: retire unused chainlint.sed Eric Sunshine via GitGitGadget
2022-09-02 12:42 ` several messages Johannes Schindelin
2022-09-02 18:16 ` Eric Sunshine
2022-09-02 18:34 ` Jeff King
2022-09-02 18:44 ` Junio C Hamano
2022-09-11 5:28 ` [PATCH 00/18] make test "linting" more comprehensive Jeff King
2022-09-11 7:01 ` Eric Sunshine
2022-09-11 18:31 ` Jeff King
2022-09-12 23:17 ` Eric Sunshine
2022-09-13 0:04 ` Jeff King
-- strict thread matches above, loose matches on Subject: below --
2023-06-12 16:02 [PATCH mptcp-next] mptcp: drop legacy code Paolo Abeni
2023-06-13 17:37 ` mptcp: drop legacy code.: Tests Results MPTCP CI
2023-06-16 22:54 ` several messages Mat Martineau
2022-06-16 13:55 [PATCH mptcp-next] selftests: mptcp: tweak simult_flows for debug kernels Paolo Abeni
2022-06-16 15:27 ` selftests: mptcp: tweak simult_flows for debug kernels.: Tests Results MPTCP CI
2022-06-17 22:13 ` several messages Mat Martineau
2016-01-25 18:37 [PATCH v2 0/3] x86/mm: INVPCID support Andy Lutomirski
2016-01-25 18:57 ` Ingo Molnar
2016-01-27 10:09 ` several messages Thomas Gleixner
2016-01-27 10:09 ` Thomas Gleixner
2016-01-29 13:21 ` Borislav Petkov
2014-11-10 6:26 [PATCH 00/13] Add VT-d Posted-Interrupts support for KVM Feng Wu
2014-11-10 6:26 ` [PATCH 13/13] iommu/vt-d: Add a command line parameter for VT-d posted-interrupts Feng Wu
2014-11-10 18:15 ` several messages Thomas Gleixner
2014-11-10 18:15 ` Thomas Gleixner
2014-11-11 2:28 ` Jiang Liu
2014-11-11 2:28 ` Jiang Liu
2014-11-11 6:37 ` Wu, Feng
2014-11-11 6:37 ` Wu, Feng
2014-07-03 5:02 [RFC PATCH v4] ARM: EXYNOS: Use MCPM call-backs to support S2R on Exynos5420 Abhilash Kesavan
2014-07-03 14:46 ` [PATCH v5] " Abhilash Kesavan
2014-07-03 15:45 ` several messages Nicolas Pitre
2014-07-03 15:45 ` Nicolas Pitre
2014-07-03 16:19 ` Abhilash Kesavan
2014-07-03 16:19 ` Abhilash Kesavan
2014-07-03 19:00 ` Nicolas Pitre
2014-07-03 19:00 ` Nicolas Pitre
2014-07-03 20:00 ` Abhilash Kesavan
2014-07-03 20:00 ` Abhilash Kesavan
2014-07-04 4:13 ` Nicolas Pitre
2014-07-04 4:13 ` Nicolas Pitre
2014-07-04 17:45 ` Abhilash Kesavan
2014-07-04 17:45 ` Abhilash Kesavan
2010-07-11 15:06 [PATCHv2] netfilter: add CHECKSUM target Michael S. Tsirkin
2010-07-11 15:14 ` [PATCHv3] extensions: libxt_CHECKSUM extension Michael S. Tsirkin
2010-07-15 9:39 ` Patrick McHardy
2010-07-15 10:17 ` several messages Jan Engelhardt
2009-09-06 14:16 Layla 3G does not recover from ACPI Suspend Mark Hills
2009-09-08 19:32 ` Giuliano Pochini
2009-09-08 22:56 ` several messages Mark Hills
2009-02-09 20:57 [PATCH] libxtables: Introduce global params structuring jamal
2009-02-09 21:04 ` several messages Jan Engelhardt
2009-02-09 21:27 ` jamal
2009-02-09 21:44 ` Jan Engelhardt
2008-11-26 14:33 [PATCH 0/1] HID: hid_apple is not used for apple alu wireless keyboards Jan Scholz
2008-11-26 14:33 ` [PATCH 1/1] HID: Apple alu wireless keyboards are bluetooth devices Jan Scholz
2008-11-26 14:54 ` Jiri Kosina
2008-11-26 15:17 ` Jan Scholz
2008-11-26 15:33 ` Jiri Kosina
2008-11-26 21:06 ` Tobias Müller
2008-11-27 0:57 ` several messages Jiri Kosina
2008-10-19 14:15 [PATCH 1/2] HID: add hid_type Jiri Slaby
2008-10-19 14:15 ` [PATCH 2/2] HID: fix appletouch regression Jiri Slaby
2008-10-19 19:40 ` several messages Jiri Kosina
2008-10-19 20:06 ` Justin Mattock
2008-10-19 20:06 ` Justin Mattock
2008-10-19 22:09 ` Jiri Slaby
[not found] <9E397A467F4DB34884A1FD0D5D27CF43018903F96E@msxaoa4.twosigma.com>
2008-06-12 16:54 ` Benjamin L. Shi
[not found] <200702211929.17203.david-b@pacbell.net>
2007-02-22 3:50 ` [patch 6/6] rtc suspend()/resume() restores system clock David Brownell
2007-02-22 22:58 ` several messages Guennadi Liakhovetski
2007-02-22 22:58 ` Guennadi Liakhovetski
2007-02-22 22:58 ` Guennadi Liakhovetski
2007-02-23 1:15 ` David Brownell
2007-02-23 1:15 ` David Brownell
2007-02-23 1:15 ` David Brownell
2007-02-23 11:17 ` Johannes Berg
2007-02-23 11:17 ` Johannes Berg
2007-02-23 11:17 ` Johannes Berg
2006-09-26 18:51 Long sleep with i_mutex in xfs_flush_device(), affects NFS service Stephane Doyon
2006-09-27 11:33 ` Shailendra Tripathi
2006-10-02 14:45 ` Stephane Doyon
2006-10-02 22:30 ` David Chinner
2006-10-03 13:39 ` several messages Stephane Doyon
2006-10-03 13:39 ` Stephane Doyon
2006-10-03 16:40 ` Trond Myklebust
2006-10-03 16:40 ` Trond Myklebust
2006-10-05 15:39 ` Stephane Doyon
2006-10-05 15:39 ` Stephane Doyon
2006-10-06 0:33 ` David Chinner
2006-10-06 0:33 ` David Chinner
2006-10-06 13:25 ` Stephane Doyon
2006-10-06 13:25 ` Stephane Doyon
2006-10-05 8:30 ` David Chinner
2006-10-05 8:30 ` David Chinner
2006-10-05 16:33 ` Stephane Doyon
2006-10-05 16:33 ` Stephane Doyon
2006-10-05 23:29 ` David Chinner
2006-10-05 23:29 ` David Chinner
2006-10-06 13:03 ` Stephane Doyon
2006-10-06 13:03 ` Stephane Doyon
2006-04-11 17:33 Linux 2.6.16.4 Greg KH
2006-04-11 19:04 ` several messages Jan Engelhardt
2006-04-11 19:20 ` Boris B. Zhmurov
2006-04-11 20:30 ` Greg KH
2006-04-11 23:46 ` Jan Engelhardt
2006-04-12 0:36 ` Nix
2005-05-04 17:31 ata over ethernet question Maciej Soltysiak
2005-05-04 19:48 ` David Hollis
2005-05-04 21:17 ` Re[2]: " Maciej Soltysiak
2005-05-05 15:09 ` David Hollis
2005-05-07 15:05 ` Sander
2005-05-10 22:00 ` Guennadi Liakhovetski
2005-05-11 8:56 ` Vladislav Bolkhovitin
2005-05-11 21:26 ` several messages Guennadi Liakhovetski
2005-05-12 2:16 ` Ming Zhang
2005-05-12 18:32 ` Dmitry Yusupov
2005-05-13 8:12 ` Christoph Hellwig
2005-05-13 15:04 ` Dmitry Yusupov
2005-05-13 15:07 ` Christoph Hellwig
2005-05-13 15:38 ` Dmitry Yusupov
2005-05-12 10:17 ` Vladislav Bolkhovitin
2004-05-11 8:45 2.6.6-rc3-mm2 (4KSTACK) Helge Hafting
2004-05-11 17:59 ` several messages Bill Davidsen
2003-04-22 10:34 [patch] HT scheduler, sched-2.5.68-A9 Ingo Molnar
2003-04-22 22:16 ` several messages Bill Davidsen
2003-04-22 23:38 ` Rick Lindsley
2003-04-23 9:17 ` Ingo Molnar
2003-01-23 0:20 ANN: LKMB (Linux Kernel Module Builder) version 0.1.16 Hal Duston
2003-01-27 16:46 ` several messages Bill Davidsen
2003-01-27 16:59 ` David Woodhouse