* feature request
@ 2016-10-27 21:55 John Rood
  2016-10-27 22:01 ` Stefan Beller
  2016-10-27 22:30 ` Stefan Beller
  0 siblings, 2 replies; 36+ messages in thread
From: John Rood @ 2016-10-27 21:55 UTC (permalink / raw)
  To: git

Users should be able to configure Git to not send them into a Vim editor.

When users pull commits, and a new commit needs to be created for a
merge, Git's current way of determining a commit message is to send
the user into a Vim window so that they can write a message. There are
2 reasons why this might not be the ideal way to prompt for a commit
message.

1. Many users are used to writing concise one-line commit messages and
would not expect to save a commit message in a multi-line file. Some
users will wonder why they are in a text editor or which file they are
editing. Others may not, in fact, realize at all that a text editor is
what they are in.

2. Many users are not familiar with Vim, and do not understand how to
modify, save, and exit. It is not very considerate to require a user
to learn Vim in order to finish a commit that they are in the middle
of.

The existing behavior should be optional, and there should be two new options:

1. Use a simple inline prompt for a commit message (in the same way
Git might prompt for a username).

2. Automatically assign names for commits in the form of "Merged x into y".
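For reference, something close to the second option exists already: if the editor is skipped, Git records its auto-generated "Merge branch 'x'" message. A minimal sketch with a synthetic repository (all names here are illustrative):

```shell
# Build a tiny repository with two diverged branches, then merge without
# any editor; the auto-generated message is kept as-is.
git init -q -b main merge-demo && cd merge-demo
git config user.email dev@example.com && git config user.name Dev
echo one > a.txt && git add a.txt && git commit -qm "initial"
git checkout -qb feature
echo two > b.txt && git add b.txt && git commit -qm "feature work"
git checkout -q main
echo three > c.txt && git add c.txt && git commit -qm "main work"

# --no-edit skips the editor; GIT_MERGE_AUTOEDIT=no does the same for
# every merge in the session.
git merge --no-edit feature
git log -1 --pretty=%s          # the auto-generated "Merge branch 'feature'" subject
```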

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: feature request
  2016-10-27 21:55 feature request John Rood
@ 2016-10-27 22:01 ` Stefan Beller
  2016-10-27 22:05   ` John Rood
  2016-10-27 22:30 ` Stefan Beller
  1 sibling, 1 reply; 36+ messages in thread
From: Stefan Beller @ 2016-10-27 22:01 UTC (permalink / raw)
  To: John Rood; +Cc: git

On Thu, Oct 27, 2016 at 2:55 PM, John Rood <mr.john.rood@gmail.com> wrote:
> Users should be able to configure Git to not send them into a Vim editor.

See https://git-scm.com/docs/git-var

GIT_EDITOR

Text editor for use by Git commands. The value is meant to be interpreted
by the shell when it is used. Examples: ~/bin/vi, $SOME_ENVIRONMENT_VARIABLE,
"C:\Program Files\Vim\gvim.exe" --nofork. The order of preference is the
$GIT_EDITOR environment variable, then core.editor configuration, then
$VISUAL, then $EDITOR, and then the default chosen at compile time,
which is usually vi.


So maybe

    git config --global core.editor "nano"

helps in your case?
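A quick way to check which editor Git resolves, following the precedence quoted above (the editor names are just examples):

```shell
# "git var GIT_EDITOR" applies the documented precedence: the GIT_EDITOR
# environment variable first, then core.editor, then VISUAL/EDITOR.
unset GIT_EDITOR VISUAL EDITOR              # keep the environment out of the demo
git init -q editor-demo && cd editor-demo   # throwaway repo for a repo-local setting
git config core.editor nano                 # local equivalent of the --global call above
GIT_EDITOR=vim git var GIT_EDITOR           # environment variable wins: prints vim
git var GIT_EDITOR                          # falls back to core.editor: prints nano
```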


* Re: feature request
  2016-10-27 22:01 ` Stefan Beller
@ 2016-10-27 22:05   ` John Rood
  2016-10-27 22:24     ` John Rood
  0 siblings, 1 reply; 36+ messages in thread
From: John Rood @ 2016-10-27 22:05 UTC (permalink / raw)
  To: Stefan Beller; +Cc: git

Unfortunately, in my case I'm on Windows (my company's choice, not mine).

On Thu, Oct 27, 2016 at 5:01 PM, Stefan Beller <sbeller@google.com> wrote:
> On Thu, Oct 27, 2016 at 2:55 PM, John Rood <mr.john.rood@gmail.com> wrote:
>> Users should be able to configure Git to not send them into a Vim editor.
>
> See https://git-scm.com/docs/git-var
>
> GIT_EDITOR
>
> Text editor for use by Git commands. The value is meant to be interpreted
> by the shell when it is used. Examples: ~/bin/vi, $SOME_ENVIRONMENT_VARIABLE,
> "C:\Program Files\Vim\gvim.exe" --nofork. The order of preference is the
> $GIT_EDITOR environment variable, then core.editor configuration, then
> $VISUAL, then $EDITOR, and then the default chosen at compile time,
> which is usually vi.
>
>
> So maybe
>
>     git config --global core.editor "nano"
>
> helps in your case?


* Re: feature request
  2016-10-27 22:05   ` John Rood
@ 2016-10-27 22:24     ` John Rood
  2016-10-27 22:27       ` Junio C Hamano
  0 siblings, 1 reply; 36+ messages in thread
From: John Rood @ 2016-10-27 22:24 UTC (permalink / raw)
  To: Stefan Beller; +Cc: git

I suppose I can do "git config --global core.editor notepad".
However, this really only addresses my second concern.

My first concern is that using a text editor at all seems like
overkill in many scenarios.
For that reason, I still think the other two options I mentioned would
be beneficial.

On Thu, Oct 27, 2016 at 5:05 PM, John Rood <mr.john.rood@gmail.com> wrote:
> Unfortunately, in my case I'm on windows (my company's choice, not mine).
>
> On Thu, Oct 27, 2016 at 5:01 PM, Stefan Beller <sbeller@google.com> wrote:
>> On Thu, Oct 27, 2016 at 2:55 PM, John Rood <mr.john.rood@gmail.com> wrote:
>>> Users should be able to configure Git to not send them into a Vim editor.
>>
>> See https://git-scm.com/docs/git-var
>>
>> GIT_EDITOR
>>
>> Text editor for use by Git commands. The value is meant to be interpreted
>> by the shell when it is used. Examples: ~/bin/vi, $SOME_ENVIRONMENT_VARIABLE,
>> "C:\Program Files\Vim\gvim.exe" --nofork. The order of preference is the
>> $GIT_EDITOR environment variable, then core.editor configuration, then
>> $VISUAL, then $EDITOR, and then the default chosen at compile time,
>> which is usually vi.
>>
>>
>> So maybe
>>
>>     git config --global core.editor "nano"
>>
>> helps in your case?


* Re: feature request
  2016-10-27 22:24     ` John Rood
@ 2016-10-27 22:27       ` Junio C Hamano
  2016-10-27 22:48         ` John Rood
  0 siblings, 1 reply; 36+ messages in thread
From: Junio C Hamano @ 2016-10-27 22:27 UTC (permalink / raw)
  To: John Rood; +Cc: Stefan Beller, git

John Rood <mr.john.rood@gmail.com> writes:

> I suppose I can do git config --global core.editor notepad
> However, this really only addresses my second concern.
>
> My first concern is that using a text editor at all seems like
> overkill in many scenarios.

Nobody stops you from writing a "type whatever you want; I won't let
you edit any mistakes as I am not even a text editor; just hit
RETURN when you are done, as you can only write a single line"
program and set it as your GIT_EDITOR.

I do not know what would happen when you need "git commit --amend",
though.
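Such a program can be a few lines of shell; a sketch (the name "oneline-editor" is made up, and input is piped in here where a user would normally type at the prompt):

```shell
# Git runs the configured editor as: <editor> <path-to-COMMIT_EDITMSG>
# and takes the file's contents as the message once the editor exits.
cat > oneline-editor <<'EOF'
#!/bin/sh
printf 'Commit message: ' >&2        # prompt on stderr, keep stdout clean
IFS= read -r msg                     # a single line; no editing of mistakes
printf '%s\n' "$msg" > "$1"          # write it where Git expects the message
EOF
chmod +x oneline-editor

# Simulate Git invoking it (a real user would type at the prompt):
echo "Merge branch 'topic'" | ./oneline-editor COMMIT_EDITMSG
cat COMMIT_EDITMSG                   # prints: Merge branch 'topic'
```

Hooking it up would then be, e.g., git config --global core.editor "$PWD/oneline-editor".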


* Re: feature request
  2016-10-27 21:55 feature request John Rood
  2016-10-27 22:01 ` Stefan Beller
@ 2016-10-27 22:30 ` Stefan Beller
  2016-10-27 22:44   ` John Rood
  1 sibling, 1 reply; 36+ messages in thread
From: Stefan Beller @ 2016-10-27 22:30 UTC (permalink / raw)
  To: John Rood; +Cc: git

On Thu, Oct 27, 2016 at 2:55 PM, John Rood <mr.john.rood@gmail.com> wrote:
> Users should be able to configure Git to not send them into a Vim editor.
>
> When users pull commits, and a new commit needs to be created for a
> merge, Git's current way of determining a commit message is to send
> the user into a Vim window so that they can write a message. There are
> 2 reasons why this might not be the ideal way to prompt for a commit
> message.
>
> 1. Many users are used to writing concise one-line commit messages and
> would not expect to save a commit message in a multi-line file. Some
> users will wonder why they are in a text editor or which file they are
> editing. Others may not, in fact, realize at all that a text editor is
> what they are in.

Look at the -m option of git commit,

git commit -a -m "look a commit with no editor, and a precise one line message"

I do not advocate this use though, as I think commit messages should be
more wordy.

>
> 2. Many users are not familiar with Vim, and do not understand how to
> modify, save, and exit. It is not very considerate to require a user
> to learn Vim in order to finish a commit that they are in the middle
> of.

That is true, but vi is arguably the most widely available editor, a relic
from ancient times; as you are on Windows, maybe notepad is the best
choice on that platform.

Maybe file a bug/issue at https://github.com/git-for-windows to change
the default?

>
> The existing behavior should be optional, and there should be two new options:
>
> 1. Use a simple inline prompt for a commit message (in the same way
> Git might prompt for a username).
>
> 2. Automatically assign names for commits in the form of "Merged x into y".


* Re: feature request
  2016-10-27 22:30 ` Stefan Beller
@ 2016-10-27 22:44   ` John Rood
  2016-10-27 22:46     ` Junio C Hamano
  2016-10-27 23:24     ` David Lang
  0 siblings, 2 replies; 36+ messages in thread
From: John Rood @ 2016-10-27 22:44 UTC (permalink / raw)
  To: Stefan Beller; +Cc: git

Thanks, I think changing the default for Windows is a good idea.

The -m flag indeed accomplishes one-line messages when you are voluntarily
doing a commit. However, the scenario I mentioned is "When users pull
commits, and a new commit needs to be created for the merge." In this
situation, the user isn't issuing the "git commit" command, and so
he/she doesn't have the opportunity to use the -m flag.

On Thu, Oct 27, 2016 at 5:30 PM, Stefan Beller <sbeller@google.com> wrote:
> On Thu, Oct 27, 2016 at 2:55 PM, John Rood <mr.john.rood@gmail.com> wrote:
>> Users should be able to configure Git to not send them into a Vim editor.
>>
>> When users pull commits, and a new commit needs to be created for a
>> merge, Git's current way of determining a commit message is to send
>> the user into a Vim window so that they can write a message. There are
>> 2 reasons why this might not be the ideal way to prompt for a commit
>> message.
>>
>> 1. Many users are used to writing concise one-line commit messages and
>> would not expect to save a commit message in a multi-line file. Some
>> users will wonder why they are in a text editor or which file they are
>> editing. Others may not, in fact, realize at all that a text editor is
>> what they are in.
>
> Look at the -m option of git commit,
>
> git commit -a -m "look a commit with no editor, and a precise one line message"
>
> I do not advocate this use though, as I think commit messages should be
> more wordy.
>
>>
>> 2. Many users are not familiar with Vim, and do not understand how to
>> modify, save, and exit. It is not very considerate to require a user
>> to learn Vim in order to finish a commit that they are in the middle
>> of.
>
> That is true, but vi is arguably the most widely available editor, a relic
> from ancient times; as you are on Windows, maybe notepad is the best
> choice on that platform.
>
> Maybe file a bug/issue at https://github.com/git-for-windows to change
> the default?
>
>>
>> The existing behavior should be optional, and there should be two new options:
>>
>> 1. Use a simple inline prompt for a commit message (in the same way
>> Git might prompt for a username).
>>
>> 2. Automatically assign names for commits in the form of "Merged x into y".


* Re: feature request
  2016-10-27 22:44   ` John Rood
@ 2016-10-27 22:46     ` Junio C Hamano
  2016-10-27 23:24     ` David Lang
  1 sibling, 0 replies; 36+ messages in thread
From: Junio C Hamano @ 2016-10-27 22:46 UTC (permalink / raw)
  To: John Rood; +Cc: Stefan Beller, git

John Rood <mr.john.rood@gmail.com> writes:

> On Thu, Oct 27, 2016 at 5:30 PM, Stefan Beller <sbeller@google.com> wrote:
>> On Thu, Oct 27, 2016 at 2:55 PM, John Rood <mr.john.rood@gmail.com> wrote:
>>> Users should be able to configure Git to not send them into a Vim editor.
>>>
>>> When users pull commits, and a new commit needs to be created for a
>>> merge, Git's current way of determining a commit message is to send
>>> the user into a Vim window so that they can write a message. There are
>>> 2 reasons why this might not be the ideal way to prompt for a commit
>>> message.
>>>
>>> 1. Many users are used to writing concise one-line commit messages and
>>> would not expect to save a commit message in a multi-line file. Some
>>> users will wonder why they are in a text editor or which file they are
>>> editing. Others may not, in fact, realize at all that a text editor is
>>> what they are in.
>>
>> Look at the -m option of git commit,
>>

[administrivia: do not top post]

> Thanks, I think changing the default for windows is a good idea.
>
> The -m indeed accomplishes one-line messages when you are voluntarily
> doing a commit. However, the scenario I mentioned is "When users pull
> commits, and a new commit needs to be created for the merge"  In this
> situation, the user isn't issuing the "git commit" command, and so
> he/she doesn't have the opportunity to use the -m flag.

There is --no-edit there.
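That is, the pull itself can accept the default merge message; a self-contained demonstration (repository names are synthetic, and --no-rebase is spelled out because newer Git requires a reconciliation choice for divergent pulls):

```shell
# Two synthetic repositories: an "upstream" and a clone that pulls from it.
git init -q -b main upstream-demo && cd upstream-demo
git config user.email dev@example.com && git config user.name Dev
echo one > a.txt && git add a.txt && git commit -qm "initial"
cd .. && git clone -q upstream-demo clone-demo
cd upstream-demo && echo two > b.txt && git add b.txt && git commit -qm "upstream work"
cd ../clone-demo
git config user.email me@example.com && git config user.name Me
echo three > c.txt && git add c.txt && git commit -qm "local work"

# The pull needs a merge commit, but --no-edit accepts the default
# "Merge branch ..." message without opening an editor.
git pull --no-rebase --no-edit origin main
git log -1 --pretty=%s
```

The same flag answers the earlier "git commit --amend" question: "git commit --amend --no-edit" reuses the existing message without opening an editor.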




* Re: feature request
  2016-10-27 22:27       ` Junio C Hamano
@ 2016-10-27 22:48         ` John Rood
  2016-10-27 22:51           ` Junio C Hamano
  0 siblings, 1 reply; 36+ messages in thread
From: John Rood @ 2016-10-27 22:48 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Stefan Beller, git

What I'm really seeking is not a makeshift solution for myself, but
an intuitive solution for the novice user base at large.

On Thu, Oct 27, 2016 at 5:27 PM, Junio C Hamano <gitster@pobox.com> wrote:
> John Rood <mr.john.rood@gmail.com> writes:
>
>> I suppose I can do git config --global core.editor notepad
>> However, this really only addresses my second concern.
>>
>> My first concern is that using a text editor at all seems like
>> overkill in many scenarios.
>
> Nobody stops you from writing a "type whatever you want; I won't let
> you edit any mistakes as I am not even a text editor; just hit
> RETURN when you are done, as you can only write a single line"
> program and set it as your GIT_EDITOR.
>
> I do not know what would happen when you need "git commit --amend",
> though.


* Re: feature request
  2016-10-27 22:48         ` John Rood
@ 2016-10-27 22:51           ` Junio C Hamano
  2016-10-27 23:16             ` John Rood
  0 siblings, 1 reply; 36+ messages in thread
From: Junio C Hamano @ 2016-10-27 22:51 UTC (permalink / raw)
  To: John Rood; +Cc: Stefan Beller, git

John Rood <mr.john.rood@gmail.com> writes:

[administrivia: do not top post]

> What I'm really seeking is not a make-shift solution for myself, but
> an intuitive solution for the novice user-base at large.

Well, there are -m and --no-edit.  Recording commits with a useless
one-liner is a bad habit to get into, and a change that encourages the
novice user base at large to do so is not a good idea.


* Re: feature request
  2016-10-27 22:51           ` Junio C Hamano
@ 2016-10-27 23:16             ` John Rood
  0 siblings, 0 replies; 36+ messages in thread
From: John Rood @ 2016-10-27 23:16 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Stefan Beller, git

Thanks. I wasn't aware of --no-edit, but that is indeed exactly what I
was looking for.

I think your point about encouraging users to make good use of commit
messages is well taken.
My concern, though, is that Vim isn't encouraging users to write good
messages so much as it is scaring them away from leaving messages at
all.


On Thu, Oct 27, 2016 at 5:51 PM, Junio C Hamano <gitster@pobox.com> wrote:
> John Rood <mr.john.rood@gmail.com> writes:
>
> [administrivia: do not top post]
>
>> What I'm really seeking is not a make-shift solution for myself, but
>> an intuitive solution for the novice user-base at large.
>
> Well, there are -m and --no-edit.  Recording commits with a useless
> one-liner is a bad habit to get into, and a change that encourages the
> novice user base at large to do so is not a good idea.


* Re: feature request
  2016-10-27 22:44   ` John Rood
  2016-10-27 22:46     ` Junio C Hamano
@ 2016-10-27 23:24     ` David Lang
  2016-10-28  8:49       ` Johannes Schindelin
  2016-10-28 12:54       ` Philip Oakley
  1 sibling, 2 replies; 36+ messages in thread
From: David Lang @ 2016-10-27 23:24 UTC (permalink / raw)
  To: John Rood; +Cc: Stefan Beller, git

On Thu, 27 Oct 2016, John Rood wrote:

> Thanks, I think changing the default for windows is a good idea.

notepad doesn't work well with Unix line endings; wordpad handles the files much
more cleanly.

David Lang


* Re: feature request
  2016-10-27 23:24     ` David Lang
@ 2016-10-28  8:49       ` Johannes Schindelin
  2016-10-28 12:54       ` Philip Oakley
  1 sibling, 0 replies; 36+ messages in thread
From: Johannes Schindelin @ 2016-10-28  8:49 UTC (permalink / raw)
  To: David Lang; +Cc: John Rood, Stefan Beller, git

Hi,

On Thu, 27 Oct 2016, David Lang wrote:

> On Thu, 27 Oct 2016, John Rood wrote:
> 
> > Thanks, I think changing the default for windows is a good idea.
> 
> notepad doesn't work well with Unix line endings; wordpad handles the files
> much more cleanly.

That is why we have a `notepad` helper in Git for Windows that converts
line endings transparently before and after calling the real notepad.exe.
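The idea can be sketched in a few lines of shell (this is an illustration of the approach, not Git for Windows' actual helper; EDITOR_CMD stands in for notepad.exe, and GNU sed's -i is assumed):

```shell
file=msg.txt
printf 'line one\nline two\n' > "$file"    # a message file with Unix (LF) endings

EDITOR_CMD=${EDITOR_CMD:-true}             # placeholder; the real helper runs notepad.exe
sed -i 's/$/\r/' "$file"                   # LF -> CRLF so the editor renders line breaks
$EDITOR_CMD "$file"                        # the user edits here
sed -i 's/\r$//' "$file"                   # CRLF -> LF again before Git reads the file
```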

Ciao,
Johannes


* Re: feature request
  2016-10-27 23:24     ` David Lang
  2016-10-28  8:49       ` Johannes Schindelin
@ 2016-10-28 12:54       ` Philip Oakley
  1 sibling, 0 replies; 36+ messages in thread
From: Philip Oakley @ 2016-10-28 12:54 UTC (permalink / raw)
  To: David Lang, John Rood; +Cc: Stefan Beller, Git List

From: "David Lang" <david@lang.hm>
> On Thu, 27 Oct 2016, John Rood wrote:
>
>> Thanks, I think changing the default for windows is a good idea.
>
> notepad doesn't work well with Unix line endings; wordpad handles the
> files much more cleanly.
>
> David Lang
>

Notepad++ does work well, but isn't a standard part of Windows.

[core]
    editor = 'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noplugin

... is one of the standard StackOverflow recipes.
--
Philip 



* Re: feature request
  2013-02-18 20:45   ` Jeff King
  2013-02-19  3:26     ` Drew Northup
@ 2013-02-19 22:27     ` Shawn Pearce
  1 sibling, 0 replies; 36+ messages in thread
From: Shawn Pearce @ 2013-02-19 22:27 UTC (permalink / raw)
  To: Jeff King; +Cc: James Nylen, Jay Townsend, git

On Mon, Feb 18, 2013 at 12:45 PM, Jeff King <peff@peff.net> wrote:
>
> The thing that makes 2FA usable in the web browser setting is that you
> authenticate only occasionally, and get a token (i.e., a cookie) from
> the server that lets you have a longer session without re-authenticating.

Right, otherwise you spend all day typing in your credentials and
syncing with the 2nd factor device.

> I suspect a usable 2FA scheme for http pushes would involve a special
> credential helper that did the 2FA auth to receive a cookie on the first
> use, cached the cookie, and then provided it for subsequent auth
> requests. That would not necessarily involve changing git, but it would
> mean writing the appropriate helper (and the server side to match). I
> seem to recall Shawn mentioning that Google does something like this
> internally, but I don't know the details[1].
...
> [1] I don't know if Google's system is based on the Google Authenticator
>     system. But it would be great if there could be an open,
>     standards-based system for doing 2FA+cookie authentication like
>     this. I'd hate to have "the GitHub credential helper" and "the
>     Google credential helper". I'm not well-versed enough in the area to
>     know what's feasible and what the standards are.

Yes, it is based on the Google Authenticator system, but that's not
relevant to how Git works with it. :-)

We have a special "git-remote-sso" helper we install onto corporate
workstations. This allows Git to understand the "sso://" protocol.
git-remote-sso is a small application that:

- reads the URL from the command line,
- makes sure a Netscape style cookies file has a current cookie for
the named host,
   - acquires or updates cookie if necessary
- rewrites the URL to be https://
- execs `git -c http.cookiefile=$cookiefile remote-https $URL`
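As a sketch, the URL-rewriting and hand-off steps look like this (the host name and cookie path are made up, and the cookie-refresh step is elided since it is internal):

```shell
url="sso://git.example.com/project"          # hypothetical sso:// URL from Git
cookiefile="$HOME/.git-sso-cookies"          # assumed Netscape-style cookie jar

# Rewrite the scheme, then delegate to the stock HTTPS transport.
https_url=$(printf '%s\n' "$url" | sed 's|^sso://|https://|')
echo "$https_url"                            # prints: https://git.example.com/project
# The real helper would end with something like:
#   exec git -c "http.cookiefile=$cookiefile" remote-https "$remote" "$https_url"
```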

The way 2FA works is that the user authenticates to a special internal SSO
management point in their web browser once per period (I decline to
say how often, but it's tolerable). Users typically are presented this
SSO page anyway by other applications they visit, or they bookmark the
main entry point. A Chrome or Firefox extension has been installed and
authorized to steal cookies from this host. The extension writes the
user's cookie to a local file on disk. Our git-remote-sso tool uses
this cookie file to set up per-host cookies on demand within the
authentication period.

Horrifically hacky. It would be nice if this was more integrated into
Git itself, where the cookies could be acquired/refreshed through the
credential helper system rather than wrapping git-remote-https with a
magical URL. I am a fan of the way our extension manages to get the
token conveyed automatically for me. Much easier than the OAuth
flows[2], but harder to replicate in the wild. Our IT group makes sure
the extension is installed on workstations as part of the base OS
image.

[2] https://developers.google.com/storage/docs/gsutil_install#authenticate


* Re: feature request
  2013-02-18 20:45   ` Jeff King
@ 2013-02-19  3:26     ` Drew Northup
  2013-02-19 22:27     ` Shawn Pearce
  1 sibling, 0 replies; 36+ messages in thread
From: Drew Northup @ 2013-02-19  3:26 UTC (permalink / raw)
  To: Jeff King; +Cc: James Nylen, Shawn O. Pearce, Jay Townsend, git

On Mon, Feb 18, 2013 at 3:45 PM, Jeff King <peff@peff.net> wrote:
> On Mon, Feb 18, 2013 at 02:54:30PM -0500, James Nylen wrote:
>> > Just would like to request a security feature to help secure peoples github
>> > accounts more by supporting 2 factor authentication like the yubikey more
>> > information can be found from this link www.yubico.com/develop/ and googles
>> > 2 factor authentication. Hope it gets implemented as I think it would make a
>> > great feature
>>
>> I like the idea, and I would probably use it if it were available.
>> Jeff, what do you think?
> [1] I don't know if Google's system is based on the Google Authenticator
>     system. But it would be great if there could be an open,
>     standards-based system for doing 2FA+cookie authentication like
>     this. I'd hate to have "the GitHub credential helper" and "the
>     Google credential helper". I'm not well-versed enough in the area to
>     know what's feasible and what the standards are.

I don't know what specific infrastructure they (Google's
engineers) are using (something written in Python, if I'm not
mistaken), but @$dayjob we've managed to authenticate to Google Apps
using SAML 1.1 and SAML2 wrappers "living" in both CAS and Shibboleth.
SAML is a standard and is supported (in whole or in part) by a lot of
systems and SSOs out there. Given the way that systems like that work,
I don't see Git authenticating that way any time soon (but I've been
surprised before).

-- 
-Drew Northup
--------------------------------------------------------------
"As opposed to vegetable or mineral error?"
-John Pescatore, SANS NewsBites Vol. 12 Num. 59


* Re: feature request
  2013-02-18 19:54 ` James Nylen
@ 2013-02-18 20:45   ` Jeff King
  2013-02-19  3:26     ` Drew Northup
  2013-02-19 22:27     ` Shawn Pearce
  0 siblings, 2 replies; 36+ messages in thread
From: Jeff King @ 2013-02-18 20:45 UTC (permalink / raw)
  To: James Nylen; +Cc: Shawn O. Pearce, Jay Townsend, git

On Mon, Feb 18, 2013 at 02:54:30PM -0500, James Nylen wrote:

> > Just would like to request a security feature to help secure peoples github
> > accounts more by supporting 2 factor authentication like the yubikey more
> > information can be found from this link www.yubico.com/develop/ and googles
> > 2 factor authentication. Hope it gets implemented as I think it would make a
> > great feature
> 
> This would most likely be something that users would set up with their
> SSH client, and GitHub would have to provide support for it on their
> servers as well.  It shouldn't require any changes to git.  Here is an
> example of how this could be done:
> 
> http://www.howtogeek.com/121650/how-to-secure-ssh-with-google-authenticators-two-factor-authentication/
> 
> I like the idea, and I would probably use it if it were available.
> Jeff, what do you think?

When you are talking about something like GitHub, there are a lot of
times and methods to authenticate: logging into the web service, using
an ssh key for git-over-ssh, using a password for git-over-http, tokens
for API access, and probably more that I can't think of right now.

Logging into the web page can add 2-factor auth pretty easily, since
it's a web form.

Git over ssh can also do so without changes to git, because we rely on
ssh to do all of the interactive authentication.  However, I wonder how
many people would be that interested in it, as key auth already provides
some degree of two factor protection, assuming you protect your key with
a passphrase (the threat model is different, of course, because the two
factors are happening on the client, and do not involve the server at
all).

Git over http _would_ need git client support, since it asks the user
for the password directly. Or at the very least some clever encoding
scheme where your password becomes "<real_password>:<2FA_pass>" or
something. But I'm not sure that people want raw two-factor
authentication for pushes. It's a giant pain, and people were recently
happy to move to password-less pushes via credential helpers; this would
move in the opposite direction.

The thing that makes 2FA usable in the web browser setting is that you
authenticate only occasionally, and get a token (i.e., a cookie) from
the server that lets you have a longer session without re-authenticating.
I suspect a usable 2FA scheme for http pushes would involve a special
credential helper that did the 2FA auth to receive a cookie on the first
use, cached the cookie, and then provided it for subsequent auth
requests. That would not necessarily involve changing git, but it would
mean writing the appropriate helper (and the server side to match). I
seem to recall Shawn mentioning that Google does something like this
internally, but I don't know the details[1].
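The shape of such a helper follows Git's credential protocol (helpers are invoked with get/store/erase and speak key=value pairs on stdin/stdout); a toy sketch of the "get" path, where the token and cache location are invented:

```shell
# Toy "get" action: if a cached session token exists, hand it to Git as
# the password. A real helper would first perform the interactive 2FA
# exchange, cache the resulting cookie, and only then answer "get".
cookiejar=./2fa-session-token                  # hypothetical cache location
printf 'tok-abc123' > "$cookiejar"             # pretend the 2FA dance already ran

credential_get() {
    [ -f "$cookiejar" ] && printf 'password=%s\n' "$(cat "$cookiejar")"
}
credential_get        # Git would read this from the helper's stdout
```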

So yes. It's an interesting direction to go, but I think there's a fair
bit of work, and it needs to be broken down into how specific services
will interact with it. The first step would probably be securing the web
login with it, since that is the easiest one to do, and also the most
powerful interface (the other ones just let you push or fetch code; the
web interface lets you delete repos, change passwords, access billing,
etc).

But that first step is something that would happen entirely at GitHub,
with no client support necessary. We don't have schedules or plans, and
we don't promise features. So I can neither confirm nor deny that people
are working on it right now.

-Peff

[1] I don't know if Google's system is based on the Google Authenticator
    system. But it would be great if there could be an open,
    standards-based system for doing 2FA+cookie authentication like
    this. I'd hate to have "the GitHub credential helper" and "the
    Google credential helper". I'm not well-versed enough in the area to
    know what's feasible and what the standards are.


* Re: feature request
  2013-02-18 18:52 Jay Townsend
@ 2013-02-18 19:54 ` James Nylen
  2013-02-18 20:45   ` Jeff King
  0 siblings, 1 reply; 36+ messages in thread
From: James Nylen @ 2013-02-18 19:54 UTC (permalink / raw)
  To: Jay Townsend; +Cc: git, peff

On Mon, Feb 18, 2013 at 1:52 PM, Jay Townsend <townsend891@hotmail.com> wrote:
> Hi everyone,
>
> Just would like to request a security feature to help secure peoples github
> accounts more by supporting 2 factor authentication like the yubikey more
> information can be found from this link www.yubico.com/develop/ and googles
> 2 factor authentication. Hope it gets implemented as I think it would make a
> great feature

This would most likely be something that users would set up with their
SSH client, and GitHub would have to provide support for it on their
servers as well.  It shouldn't require any changes to git.  Here is an
example of how this could be done:

http://www.howtogeek.com/121650/how-to-secure-ssh-with-google-authenticators-two-factor-authentication/

I like the idea, and I would probably use it if it were available.
Jeff, what do you think?


* feature request
@ 2013-02-18 18:52 Jay Townsend
  2013-02-18 19:54 ` James Nylen
  0 siblings, 1 reply; 36+ messages in thread
From: Jay Townsend @ 2013-02-18 18:52 UTC (permalink / raw)
  To: git

Hi everyone,


Just would like to request a security feature to help secure people's
GitHub accounts by supporting two-factor authentication like the
YubiKey (more information can be found at www.yubico.com/develop/) and
Google's two-factor authentication. Hope it gets implemented, as I
think it would make a great feature.


* Re: feature request
  2012-10-16 17:27   ` Angelo Borsotti
  2012-10-16 23:30     ` Sitaram Chamarty
@ 2012-10-17  0:00     ` Andrew Ardill
  1 sibling, 0 replies; 36+ messages in thread
From: Andrew Ardill @ 2012-10-17  0:00 UTC (permalink / raw)
  To: Angelo Borsotti; +Cc: git

On 17 October 2012 04:27, Angelo Borsotti <angelo.borsotti@gmail.com> wrote:
> Hi Andrew,
>
> one nice thing is to warn a developer that wants to modify a source
> file, that there is somebody else changing it beforehand. It is nicer
> than discovering that at push time.
> Take into account that there are changes in files that may be
> incompatible to each other, or that can be amenable to be
> automatically merged producing wrong results. So, knowing it could
> help.
>
> -Angelo

If you simply want to know when a file has been changed by someone
else, git already has this capability, but as you note it only occurs
when you try to push. Unless you force push, you have to merge changes
before committing to upstream. In a distributed situation you can only
inform the user when they touch base with the server, and if that is
what you want, then one of the locking systems others have proposed
might be a good choice.

There are two concerns to deal with here. The first is when any
conflicts at all will cause problems, as there is no way to merge
them. This often happens with binary files, and is a good reason to
use a locking system. The second concern is situations where a merge
happens 'silently' (i.e. no conflicts) thus allowing potential logic
bugs to be introduced even though semantically the merge was fine. For
this situation the best option is to require whoever is merging to
check the merge output for logical errors. This has to happen anyway,
as it is possible for logical errors to be introduced across different
files, although it's probably more common to see logic conflicts
within the one file. To make it easier to discover such changes early
in the process you could write a tool that did some (or all) of the
following:
1. Automatically fetch changes from remote repositories at a regular interval.
2. Compare files changed in the working tree and index to changes
fetched from remote repositories. This would need to find the merge
base of the two and compare files touched since then.
3. Notify the user of the files that have been changed through some fashion.
4. Automatically push changes to a 'wip' branch so that others can see
what you are modifying. Alternatively, automatically publish a list of
changed files for the same purpose, though this seems a lot more hacky
(though both options are surely hacky).
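Steps 1-3 above can be prototyped with plain Git commands; a self-contained sketch (the repository layout and names are synthetic):

```shell
# Synthetic setup: an upstream repo, a clone, and a file changed in both.
git init -q -b main upstream && cd upstream
git config user.email a@example.com && git config user.name A
printf 'v1\n' > shared.txt && git add shared.txt && git commit -qm "initial"
cd .. && git clone -q upstream work
cd upstream && printf 'v2\n' > shared.txt && git commit -qam "upstream change"
cd ../work

# Steps 1-3: fetch, diff both sides against the merge base, report overlap.
printf 'v3\n' > shared.txt                       # local, uncommitted edit
git fetch -q origin                              # step 1
base=$(git merge-base HEAD origin/main)
local_changed=$(git diff --name-only HEAD)
remote_changed=$(git diff --name-only "$base" origin/main)   # step 2
overlap=$(printf '%s\n%s\n' "$local_changed" "$remote_changed" | sort | uniq -d)
echo "changed both locally and upstream: $overlap"           # step 3
```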

Steps 2 and 3 could be combined into a single tool, run whenever the
user wants to check for potential logic changes down the track.
Automating it would allow this information to be communicated a bit
faster, and running it after each fetch would be a nice-to-have.
Pushing the work-in-progress branch is something I am not sure is a
good idea, but it would be the only way to know that someone else is
working on something before they commit and push it manually. Pushing
a single file listing the files being worked on is less invasive, but
would require the other parts of the tool to use it as well (hence
forming a stronger coupling and reducing the usefulness of the other
components as standalone tools).
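Steps 2 and 3 above can be sketched as a small shell helper (a sketch only: it assumes a remote and branch name passed in by the caller, that the remote has already been fetched per step 1, and standard git/coreutils on PATH):

```shell
#!/bin/sh
# Sketch of steps 2 and 3: list the files changed both locally and on
# the remote branch since their common merge base.
overlapping_changes() {
    remote=$1 branch=$2
    base=$(git merge-base HEAD "$remote/$branch") || return 1
    ours=$(mktemp) theirs=$(mktemp)
    # Files touched locally (commits plus working tree) since the base...
    { git diff --name-only "$base" HEAD; git diff --name-only; } | sort -u > "$ours"
    # ...and files touched on the remote branch since the base.
    git diff --name-only "$base" "$remote/$branch" | sort -u > "$theirs"
    comm -12 "$ours" "$theirs"   # the overlap is the list worth warning about
    rm -f "$ours" "$theirs"
}
```

A cron job doing `git fetch` followed by this check would cover steps 1-3; anything it prints is a file someone else has touched since the branches diverged.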


There is no way that I know of to force a merge to stop every time a
file has been changed on both theirs and ours (whether or not it is an
actual conflict). It could potentially be done with a pre-merge hook,
but no such hook exists to my knowledge. If this were implemented,
using it would make merging a potentially tiresome affair; however, I
could see its usefulness for people who are very concerned about
introducing logic errors with merges touching the same file.

The best solution, in my opinion, is to check what is going to be
merged before you merge it, but a tool that warns that someone else is
modifying the same file would have its uses.

Regards,

Andrew Ardill

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: feature request
  2012-10-16 17:27   ` Angelo Borsotti
@ 2012-10-16 23:30     ` Sitaram Chamarty
  2012-10-17  0:00     ` Andrew Ardill
  1 sibling, 0 replies; 36+ messages in thread
From: Sitaram Chamarty @ 2012-10-16 23:30 UTC (permalink / raw)
  To: Angelo Borsotti; +Cc: Andrew Ardill, git

On Tue, Oct 16, 2012 at 10:57 PM, Angelo Borsotti
<angelo.borsotti@gmail.com> wrote:
> Hi Andrew,
>
> one nice thing would be to warn a developer who wants to modify a source
> file that somebody else is already changing it. That is nicer
> than discovering it at push time.

Andrew:

also see http://sitaramc.github.com/gitolite/locking.html for a way to
do file locking (and enforce it) using gitolite.

This does warn, as long as the user remembers to try to acquire a lock
before working on a binary file.  (You can't get around that
requirement on a DVCS, sorry!)

> Take into account that some changes to files may be incompatible
> with each other, or may be merged automatically while producing
> wrong results. So knowing about them early could help.
>
> -Angelo
> --
> To unsubscribe from this list: send the line "unsubscribe git" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Sitaram


* Re: feature request
  2012-10-16 12:15 ` Andrew Ardill
@ 2012-10-16 17:27   ` Angelo Borsotti
  2012-10-16 23:30     ` Sitaram Chamarty
  2012-10-17  0:00     ` Andrew Ardill
  0 siblings, 2 replies; 36+ messages in thread
From: Angelo Borsotti @ 2012-10-16 17:27 UTC (permalink / raw)
  To: Andrew Ardill; +Cc: git

Hi Andrew,

one nice thing would be to warn a developer who wants to modify a source
file that somebody else is already changing it. That is nicer
than discovering it at push time.
Take into account that some changes to files may be incompatible
with each other, or may be merged automatically while producing
wrong results. So knowing about them early could help.

-Angelo


* Re: feature request
  2012-10-16 11:36 Angelo Borsotti
  2012-10-16 12:15 ` Andrew Ardill
@ 2012-10-16 13:34 ` Christian Thaeter
  1 sibling, 0 replies; 36+ messages in thread
From: Christian Thaeter @ 2012-10-16 13:34 UTC (permalink / raw)
  To: Angelo Borsotti; +Cc: git

On Tue, 16 Oct 2012 13:36:04 +0200, Angelo Borsotti
<angelo.borsotti@gmail.com> wrote:

> Hello,
> 
> some VCS, e.g. ClearCase, allow to control the fetching of files so as
> to warn, or
> disallow parallel changes to the same files.
> As of today, there is no way to implement the same kind of workflow
> with git because there are no fetch hooks.
> Would it be a good idea to provide them?


I've actually implemented a 'git lock' command to lock pathnames against
concurrent editing for a customer. Normally one would say this is a
rather ill and ugly feature for git, but there were some reasons to do
it anyway (imagine robots crashing into each other on a production
line because of bad (developer) communication).

The code is GPL and I can distribute it, but I didn't consider it ready
for an open announcement yet. Notably, some problems with msys led to
an ugly workaround (the uniq command doesn't know the -z option
there).

I hope this might be useful to you. I'd also like to get contributions
and fixes if there are any problems I am not aware of.

Short intro; the doc:

 http://git.pipapo.org/?p=git;a=blob_plain;f=Documentation/git-lock.txt;h=dcc7a5c34dea657ab5819e8def54e154d5d97219;hb=25ee09cf35daa03a7c2ef10537561a50db2d17b2

the code is available at

 git://git.pipapo.org/git

in the 'ct/git-lock' branch.

It has fallen a bit behind the current git version; I will update/merge
it sometime soon (to keep it on par with msysgit, which is what is
required here).

	Christian




> 
> -Angelo Borsotti


* Re: feature request
  2012-10-16 11:36 Angelo Borsotti
@ 2012-10-16 12:15 ` Andrew Ardill
  2012-10-16 17:27   ` Angelo Borsotti
  2012-10-16 13:34 ` Christian Thaeter
  1 sibling, 1 reply; 36+ messages in thread
From: Andrew Ardill @ 2012-10-16 12:15 UTC (permalink / raw)
  To: Angelo Borsotti; +Cc: git

On 16 October 2012 22:36, Angelo Borsotti <angelo.borsotti@gmail.com> wrote:
>
> Hello,
>
> some VCS, e.g. ClearCase, allow to control the fetching of files so as
> to warn, or
> disallow parallel changes to the same files.
> As of today, there is no way to implement the same kind of workflow with
> git
> because there are no fetch hooks.
> Would it be a good idea to provide them?
>
> -Angelo Borsotti

It seems like you want to be able to lock a file for editing once
someone has 'checked out' the file. This really only makes sense for
binary files (or files for which there is no straightforward way to
merge changes), as otherwise you are throwing away the usefulness of
git without any gain. Git is designed so that multiple people can work
on a file at the same time. Easy merging means that collating those
changes is an easy task, and so locking a file has no use. If a file
cannot be easily merged, then it does make sense to lock it. Is this
what you need to do, or is there some other reason for wanting a lock?

In any case, locking a file is a hard thing to do right in a
distributed system, and doesn't really make sense (although it may be
useful!). When you clone a git repository you have the entire history
of the repository on your computer. What does it mean to have a locked
file? Does the file get deleted from everyone's repository every time
someone else locks it? That would seem silly. Perhaps everyone simply
can't write to that file once it has been locked - but how do you
impose that restriction in a distributed system (you can't)?

Instead, you can only refuse pushes that change the locked file (which
is normal - you would have to force the push for any non-fast-forward
changes), and you can try to warn users that somebody else has locked
the file. This warning system might be doable in some fashion, by
using hooks to write the locked files to a text file somewhere and
checking that; however, it would be nearly impossible to get right, or
it would hamstring the distributed nature of git by forcing constant
server checks.
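A sketch of what that hook-based refusal could look like (hypothetical: it assumes locked paths are tracked in a locks.txt file at the top of the repository, a convention invented here, not anything git provides; a server-side update hook would call it as `check_locks "$2" "$3"`):

```shell
#!/bin/sh
# Hypothetical core of a server-side "update" hook: refuse an update
# that touches any path listed in locks.txt (one path per line -- an
# assumed convention, not a git feature).
check_locks() {
    oldrev=$1 newrev=$2
    locks=$(git show "$newrev:locks.txt" 2>/dev/null) || return 0  # no locks file
    git diff --name-only "$oldrev" "$newrev" | while read -r path; do
        if printf '%s\n' "$locks" | grep -qxF "$path"; then
            echo "error: '$path' is locked; push refused" >&2
            exit 1   # exits the pipeline subshell; becomes the return status
        fi
    done
}
```

As noted above, this only refuses at push time; it cannot stop anyone from editing the file locally.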

Instead of continuing to make the point here, let me point you towards
some other discussions around file locks, in particular how git
handles binary files. You should be able to find many more, but as a
starter, look at [1] and [2].

In general, I would reconsider why two users shouldn't be able to
change the same file at the same time.

Regards,

Andrew Ardill

[1] http://stackoverflow.com/questions/119444/locking-binary-files-using-git-version-control-system
[2] http://git.661346.n2.nabble.com/Lock-binairy-files-in-Git-td2422894.html


* feature request
@ 2012-10-16 11:36 Angelo Borsotti
  2012-10-16 12:15 ` Andrew Ardill
  2012-10-16 13:34 ` Christian Thaeter
  0 siblings, 2 replies; 36+ messages in thread
From: Angelo Borsotti @ 2012-10-16 11:36 UTC (permalink / raw)
  To: git

Hello,

some VCS, e.g. ClearCase, allow to control the fetching of files so as
to warn, or
disallow parallel changes to the same files.
As of today, there is no way to implement the same kind of workflow with git
because there are no fetch hooks.
Would it be a good idea to provide them?

-Angelo Borsotti


* Re: Feature Request
  2010-02-09 12:28 ` Michael Tokarev
@ 2010-02-09 14:19   ` Stefan Hübner
  0 siblings, 0 replies; 36+ messages in thread
From: Stefan Hübner @ 2010-02-09 14:19 UTC (permalink / raw)
  To: linux-raid

On 09.02.2010 13:28, Michael Tokarev wrote:
> Stefan *St0fF* Huebner wrote:
> []
>> Now imagine any RAID with some kind of redundancy, reading/writing
>> data.  One of the disks finds out "I cannot correctly read/write the
>> requested sector", starts its error correction, hits the respective
>> ERC-timeout and reports back a media error or unrecoverable error.  Now
>> mdraid would drop the disk.
>>
>> But actually the data of the sector can be recreated through the
>> existing redundancy.  Wouldn't it be a smart thing if the mdraid
>> recreates the sector and just tried to write it again?  And after a good
>> amount of failed retries it may well drop the disk.
>
> This is exactly what md layer is doing.  On failed _read_ it tries to
> reconstruct data from other disk drives and writes the reconstructed
> data back to the drive where read failed.  If the _write_ fails md will
> drop the disk.
Hi Mjt,

I hoped so - great it is implemented like that.

Well, then all that's needed is a check at assembly/creation time:
- (is the drive an ATA drive) && (does it support SCT ERC)
-> if it does, set some reasonable timeouts (like the 7 s used by
enterprise-class drives for reading; for writing I would suggest 14 s,
bearing in mind that reallocating too quickly makes the spare sectors
run out fast).

The writing back (I guess this is done with a reasonable amount of
retries) does not make sense if the drive is still in its error recovery
procedure and does not react to any commands until it is done.

P.S.: I have already implemented the checks and setup, but in userspace
using SG_IO.
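For what it's worth, smartmontools can do a comparable userspace setup without hand-rolled SG_IO (a hedged sketch: `smartctl -l scterc,READ,WRITE` takes timeouts in tenths of a second, so 70/140 matches the 7 s / 14 s suggested above; the capability check and device list below are this sketch's own assumptions):

```shell
#!/bin/sh
# Sketch: set SCT ERC timeouts (7 s read / 14 s write) on the given
# drives, skipping any that do not report SCT Error Recovery Control.
# Values are in tenths of a second; the setting is volatile, so this
# must be re-run after every power-on (e.g. from an init script).
set_erc_timeouts() {
    for dev in "$@"; do
        if smartctl -l scterc "$dev" | grep -q 'SCT Error Recovery Control'; then
            smartctl -q errorsonly -l scterc,70,140 "$dev"
        fi
    done
}
```

e.g. `set_erc_timeouts /dev/sd[a-z]` from an init script or udev rule.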

/st0ff
>
> /mjt
> -- 
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



* Re: Feature Request
  2010-02-09  8:43 Feature Request Stefan *St0fF* Huebner
@ 2010-02-09 12:28 ` Michael Tokarev
  2010-02-09 14:19   ` Stefan Hübner
  0 siblings, 1 reply; 36+ messages in thread
From: Michael Tokarev @ 2010-02-09 12:28 UTC (permalink / raw)
  To: st0ff; +Cc: linux-raid

Stefan *St0fF* Huebner wrote:
[]
> Now imagine any RAID with some kind of redundancy, reading/writing
> data.  One of the disks finds out "I cannot correctly read/write the
> requested sector", starts its error correction, hits the respective
> ERC-timeout and reports back a media error or unrecoverable error.  Now
> mdraid would drop the disk.
> 
> But actually the data of the sector can be recreated through the
> existing redundancy.  Wouldn't it be a smart thing if the mdraid
> recreates the sector and just tried to write it again?  And after a good
> amount of failed retries it may well drop the disk.

This is exactly what md layer is doing.  On failed _read_ it tries to
reconstruct data from other disk drives and writes the reconstructed
data back to the drive where read failed.  If the _write_ fails md will
drop the disk.

/mjt


* Feature Request
@ 2010-02-09  8:43 Stefan *St0fF* Huebner
  2010-02-09 12:28 ` Michael Tokarev
  0 siblings, 1 reply; 36+ messages in thread
From: Stefan *St0fF* Huebner @ 2010-02-09  8:43 UTC (permalink / raw)
  To: linux-raid

Hi Everybody,

I would like to propose a few probably hard-to-implement features to mdraid.

Background:
Nowadays hard disk drives (I am only talking about ATA/SATA drives;
SCSI devices are too expensive for me) do their own error correction.
Most of them also have a feature called ERC (Error Recovery Control),
where you can set timeouts for read/write error correction.  Desktop
drives are preset to run their error recovery to its fullest extent,
not reacting while this procedure is active.  RAID-edition/enterprise
disks are normally set to start error recovery, but report back a media
error after 7 seconds of unsuccessful recovery - this is where the
timeout "happens".

Now imagine any RAID with some kind of redundancy, reading/writing
data.  One of the disks finds out "I cannot correctly read/write the
requested sector", starts its error correction, hits the respective
ERC-timeout and reports back a media error or unrecoverable error.  Now
mdraid would drop the disk.

But actually the data of the sector can be recreated through the
existing redundancy.  Wouldn't it be a smart thing if the mdraid
recreates the sector and just tried to write it again?  And after a good
amount of failed retries it may well drop the disk.

Prerequisites:
- upon assembly/creation of the array:
  - mdraid needs to find out whether the underlying devices are (S)ATA
block devices
  - if they are, the ERC timeouts for read/write operations on each
device need to be set, as this feature is volatile (it gets reset to
factory defaults upon power-on reset)
  - if successful, some flag indicating the enabled feature should be set
- error handling needs to be updated with the above-described
"intelligence" for devices that have the ERC feature set

This is a request for comments (and of course this feature).

All the best,
Stefan Hübner


* Feature Request
@ 2008-09-09  9:49 l5ynlwlcyku9kvaqc2jf.j.HadVabVobs
  0 siblings, 0 replies; 36+ messages in thread
From: l5ynlwlcyku9kvaqc2jf.j.HadVabVobs @ 2008-09-09  9:49 UTC (permalink / raw)
  To: linux-kernel

Hello,

I run linux on an AMD/VIA combination that appears to be prone to producing an extremely annoying buzz on the sound card (and also in the image I get from my bt848 TV board) when I run the kernel with "make idle calls to CPU when not busy". The way out is to supply a "no-hlt" append in lilo.conf. Doing so makes the noise go away.

Unfortunately but inevitably, it then also pulls 5-10 watts more from the grid. I thought it might be an idea to make this "no-hlt" flexible via a /proc entry, so that it can be switched off again when the machine is doing something where the services of my sound card are not needed (building the kernel, e.g. ;-) ). In the Windows world there used to be a utility called "waterfall" where one could flick such a switch.

I am using a 2.4 kernel. Please, cc: me since I am not subscribed to the list.

Regards
Andreas





* Re: feature request
  2005-04-14 18:37   ` Leonardo Rodrigues Magalhães
@ 2005-04-14 18:52     ` Taylor, Grant
  0 siblings, 0 replies; 36+ messages in thread
From: Taylor, Grant @ 2005-04-14 18:52 UTC (permalink / raw)
  To: Leonardo Rodrigues Magalhães; +Cc: netfilter

> 
>    Guys, how about using the new comment module for making grepping easy 
> ???? Instead of grepping the rules parameters, you can include an unique 
> ID as a comment in your rule and simply grep for it !!! What do you 
> think ??

I've considered doing that myself for other projects, but seeing as I
did not have any real solution or method for doing so already, I did
not want to propose it yet.  I'm thinking of using it for more of a
"system" that would manage all your rules for you, not unlike SysV init
scripts; you would then go through that interface to work with
iptables.  Whatever I end up coming up with, I'll use some sort of
numeric identifier for the rules to be matched against, so it is easier
to machine-parse.  I'll probably end up using a comment of the form
':<numeric ID>:<free text comment>'.  This way the machine-parseable
identifier, ':<numeric ID>:', is easy to find on the line: the
<numeric ID> sits at the start of the comment, at about the same column
on screen, while still allowing for free text comments (or as free as a
comment allows, just a bit shorter).  That makes it easier to search
for a specific <numeric ID> visually, versus having it at the end of
the comment, where its location would depend on the length of the free
text.  Seeing as comment is a relatively new match extension and not
all systems have it in the kernel, this scheme would be valid for new
and patched kernels only, whereas something that parses the output of
iptables(|-save) would be more backwards compatible.

I personally am EXTENSIVELY using the comment match extension, as well as planning on using TARPIT targets (that is a sticky subject unto itself.  Pun intended.  :P  )



Grant. . . .



* Re: feature request
  2005-04-14 18:18 ` Taylor, Grant
@ 2005-04-14 18:37   ` Leonardo Rodrigues Magalhães
  2005-04-14 18:52     ` Taylor, Grant
  0 siblings, 1 reply; 36+ messages in thread
From: Leonardo Rodrigues Magalhães @ 2005-04-14 18:37 UTC (permalink / raw)
  To: Taylor, Grant; +Cc: netfilter


    Guys, how about using the new comment module to make grepping
easy? Instead of grepping the rule's parameters, you can include a
unique ID as a comment in your rule and simply grep for it! What do
you think?

iptables -I FORWARD -i eth0 -o ppp0 -p tcp -s 12.34.56.78 -d 10.20.30.40 \
  -m state --state NEW,ESTABLISHED \
  -m time --timestart 08:00 --timestop 15:45 --days Mon,Wed,Fri \
  -m comment --comment "my_super_crazy_rule" -j ACCEPT

[root@correio ~]# iptables -nL FORWARD -v | grep my_super_crazy_rule | wc -l
1
[root@correio ~]# iptables -nL FORWARD -v | grep my_nonexistant_super_crazy_rule | wc -l
0
[root@correio ~]#


    Sincerely,
    Leonardo Rodrigues

Taylor, Grant escreveu:

>> more? Why not return failure and say "rule already loaded?" It`s not a
>> critic, i just want to understand why i can need more than 1 same rule
>> for 1 chain.
>
>
> I'm just guessing here but I'd be willing to bet that the actual 
> kernel space of IPTables is more like a database that gets traversed 
> in kernel space.  The iptables command line tool is probably a user 
> land space tool for listing, inserting, updating, and deleting entries 
> in that database.  I'd say that to make things simpler the kernel does 
> not do any checking to make sure that a rule is distinct as there is 
> no harm in having multiple identical rules saver for the fact that it 
> is an additional rule to traverse.  The iptables command line tool was 
> not written to do any checking either as it is not required and this 
> would probably complicate things quite a bit more.
>
>> So, i`d prefer to write something simular to init scripts, when i have
>> to remember state of each loaded rule: is it loaded or not. But here
>> there are other problems: what if i manually add/delete rule? this
>> should not happen if i have 'my super system', but it`s life... so
>> again i have to reinvent wheel.
>
>
> You might try taking a look at iptables-save and iptables-restore 
> respectively.  From the output of iptables-save it looks like all the 
> lines that it generates would go directly after the iptables command.  
> I.e. if you would normally type:
>
> iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
>
> You would see the following in the iptables-save output:
>
> -A FORWARD -i eth0 -o eth1 -j ACCEPT
>
> I'd be willing to bet that it is easier to parse this output than the 
> normal iptables output for what you are doing.  Take a look at it and 
> see if it will work for you.
>
>
>
> Grant. . . .
>
>


* Re: feature request
  2005-04-14 16:50 feature request `VL
@ 2005-04-14 18:18 ` Taylor, Grant
  2005-04-14 18:37   ` Leonardo Rodrigues Magalhães
  0 siblings, 1 reply; 36+ messages in thread
From: Taylor, Grant @ 2005-04-14 18:18 UTC (permalink / raw)
  To: `VL; +Cc: netfilter

> more? Why not return failure and say "rule already loaded?" It`s not a
> critic, i just want to understand why i can need more than 1 same rule
> for 1 chain.

I'm just guessing here, but I'd be willing to bet that the actual kernel space of IPTables is more like a database that gets traversed in kernel space.  The iptables command line tool is probably a userland tool for listing, inserting, updating, and deleting entries in that database.  I'd say that to keep things simpler the kernel does not do any checking to make sure that a rule is distinct, as there is no harm in having multiple identical rules save for the fact that each is an additional rule to traverse.  The iptables command line tool was not written to do any checking either, as it is not required, and doing so would probably complicate things quite a bit.

> So, i`d prefer to write something simular to init scripts, when i have
> to remember state of each loaded rule: is it loaded or not. But here
> there are other problems: what if i manually add/delete rule? this
> should not happen if i have 'my super system', but it`s life... so
> again i have to reinvent wheel.

You might try taking a look at iptables-save and iptables-restore.  From the output of iptables-save it looks like all the lines that it generates would go directly after the iptables command.  I.e. if you would normally type:

iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

You would see the following in the iptables-save output:

-A FORWARD -i eth0 -o eth1 -j ACCEPT

I'd be willing to bet that it is easier to parse this output than the normal iptables output for what you are doing.  Take a look at it and see if it will work for you.



Grant. . . .



* Re: feature request
@ 2005-04-14 16:50 `VL
  2005-04-14 18:18 ` Taylor, Grant
  0 siblings, 1 reply; 36+ messages in thread
From: `VL @ 2005-04-14 16:50 UTC (permalink / raw)
  To: netfilter

On Apr 8, 2005 4:00 PM, Timothy Earl <mehimx@gmail.com> wrote:
> Hi,
>
> I think to solve your problem you could work around it by using a series of
> awk grep and sed commands along with iptables -vL to test if your rule is
> loaded, presently that is how i get my current ip etc..
>
> man awk, man grep, man sed
>
> for example:
>
> EXTIP="`/sbin/ifconfig ppp0 | grep 'inet adr' | awk '{print $2}' | sed -e
> 's/.*$
>
> Regards,
>
> Tim

I do know that I can work around my problem in thousands of ways =).
I was surprised that it is impossible with iptables just to test
whether a rule is loaded; I was sure the option existed. One more
question I have: what is the reason for adding rules that already exist
in a chain again and again? Why not return failure and say "rule
already loaded"? It's not a criticism; I just want to understand why I
would need more than one identical rule in one chain.

Second, grepping and awking through the output of iptables with
certain options doesn't seem 'reliable' to me. I have to compare a
string like:

OUTPUT -o eth0 -p tcp -s 192.168.127.29 -d 192.168.127.30 -j ACCEPT
to:
0     0 ACCEPT     tcp  --  *      eth0    192.168.127.29    192.168.127.30

Not impossible, but not very pleasant. The more complex the rule, the
more pain. Additional parameters, for example MAC addresses or TCP
flags - what will happen to my shell-based rule matching if I add a
couple of new options to my rule?

So I'd prefer to write something similar to init scripts, where I have
to remember the state of each loaded rule: is it loaded or not. But
here there are other problems: what if I manually add or delete a rule?
This should not happen if I have 'my super system', but that's life...
so again I have to reinvent the wheel.

And all of this could be solved by a simple (well, I think so =))
modification: we could add a -test option, or return false when trying
to load a rule that already exists in the chain.
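With the tools of the day, one workaround is to match against iptables-save output, which echoes rules in roughly the form they were written, rather than parsing the formatted listing. A hedged sketch (fragile by design: iptables-save normalizes rules, so a rule written with options in a different order or spelling will not match):

```shell
#!/bin/sh
# Sketch: test whether a rule is loaded by matching the exact rule spec
# against iptables-save output, and only add the rule when it is absent.
rule_loaded() {
    # Arguments are the rule spec exactly as given to iptables, e.g.
    #   rule_loaded -A OUTPUT -o eth0 -p tcp -s 192.168.127.29 -j ACCEPT
    iptables-save | grep -qxF -- "$*"
}
add_rule_once() {
    rule_loaded "$@" || iptables "$@"
}
```

(For what it's worth, later iptables releases added exactly the requested test as `iptables -C` / `--check`, which succeeds only if the rule exists.)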



* Re: Feature request
  2003-08-28 13:50         ` Dominik Brodowski
@ 2003-08-28 16:04           ` Daniel Thor Kristjansson
  0 siblings, 0 replies; 36+ messages in thread
From: Daniel Thor Kristjansson @ 2003-08-28 16:04 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: Viktor Radnai, cpufreq


My laptop has C2, and ACPI throttling still saves an additional 500 mW
and manages to cool the CPU enough to get the fan turned off.

Cpufreq cools the CPU more, but I've yet to get a kernel to run with
both cpufreq and tcp/ip4, so I'm stuck with ACPI throttling for now.

-- Daniel
  << When truth is outlawed; only outlaws will tell the truth. >> - RLiegh

On Thu, 28 Aug 2003, Dominik Brodowski wrote:

]On Thu, Aug 28, 2003 at 03:16:34AM +1000, Viktor Radnai wrote:
]> Dominik Brodowski wrote:
]> >Oh well, so users want to use 150 MHz instead of 1600 MHz now...
]> My laptop is still quite responsive at 150MHz and anything that lets me
]> conserve battery power when the performance isn't needed is worth
]> trying. After all, this is the very purpose of frequency scaling. It
]> doesn't really matter how fast the kernel executes idle loops ;)
]
]Well, does your computer do some real "idling" - e.g. ACPI C-States [C2 and
]above] or APM "halt"? If so, then you don't need any throttling - it saves
]approximately the same amount of energy.
]
]> >worthy of discussion for 2.7.
]> That would be great, too bad that it won't happen sooner. In the
]> meantime, do you think that the method described below is an acceptable
]> way of saving power or do you foresee any potential problems /
]> instability as a result of this?
]>
]> If you think that this is a workable method then I might hack one of the
]> userspace frequency scaling utilities to support this method.
]
]As most systems support either ACPI-C-States or APM "halt" and these methods
]are as "good" for power saving, I don't see a reason to implement this at
]the moment.
]
]	Dominik
]
]_______________________________________________
]Cpufreq mailing list
]Cpufreq@www.linux.org.uk
]http://www.linux.org.uk/mailman/listinfo/cpufreq
]


* Re: Feature request
  2003-08-27 17:16       ` Feature request Viktor Radnai
@ 2003-08-28 13:50         ` Dominik Brodowski
  2003-08-28 16:04           ` Daniel Thor Kristjansson
  0 siblings, 1 reply; 36+ messages in thread
From: Dominik Brodowski @ 2003-08-28 13:50 UTC (permalink / raw)
  To: Viktor Radnai; +Cc: cpufreq

On Thu, Aug 28, 2003 at 03:16:34AM +1000, Viktor Radnai wrote:
> Dominik Brodowski wrote:
> >Oh well, so users want to use 150 MHz instead of 1600 MHz now...
> My laptop is still quite responsive at 150MHz and anything that lets me 
> conserve battery power when the performance isn't needed is worth 
> trying. After all, this is the very purpose of frequency scaling. It 
> doesn't really matter how fast the kernel executes idle loops ;)

Well, does your computer do some real "idling" - e.g. ACPI C-States [C2 and
above] or APM "halt"? If so, then you don't need any throttling - it saves
approximately the same amount of energy.

> >worthy of discussion for 2.7.
> That would be great, too bad that it won't happen sooner. In the 
> meantime, do you think that the method described below is an acceptable 
> way of saving power or do you foresee any potential problems / 
> instability as a result of this?
> 
> If you think that this is a workable method then I might hack one of the 
> userspace frequency scaling utilities to support this method.

As most systems support either ACPI-C-States or APM "halt" and these methods
are as "good" for power saving, I don't see a reason to implement this at
the moment.

	Dominik


* Re: Feature request
  2003-08-26 23:10     ` Dominik Brodowski
@ 2003-08-27 17:16       ` Viktor Radnai
  2003-08-28 13:50         ` Dominik Brodowski
  0 siblings, 1 reply; 36+ messages in thread
From: Viktor Radnai @ 2003-08-27 17:16 UTC (permalink / raw)
  To: Dominik Brodowski; +Cc: cpufreq

Dominik Brodowski wrote:
> Oh well, so users want to use 150 MHz instead of 1600 MHz now...
My laptop is still quite responsive at 150MHz and anything that lets me 
conserve battery power when the performance isn't needed is worth 
trying. After all, this is the very purpose of frequency scaling. It 
doesn't really matter how fast the kernel executes idle loops ;)

> In fact, I have some ideas to allow same-time usage of a 
> 	throttling
> and a
> 	frequency and voltage scaling
> driver. But IMHO NOT for 2.4. and NOT for 2.6. It's something which might be
> worthy of discussion for 2.7.
That would be great, too bad that it won't happen sooner. In the 
meantime, do you think that the method described below is an acceptable 
way of saving power or do you foresee any potential problems / 
instability as a result of this?

If you think that this is a workable method then I might hack one of the 
userspace frequency scaling utilities to support this method.

Cheers,
Vik

> On Sat, Aug 23, 2003 at 08:50:03PM +1000, Viktor Radnai wrote:
> 
>>Hi all,
>>
>>I wonder if it would be possible to modify the cpufreq driver modules so
>>that more than one could be loaded at the same time (speedstep-ich and
>>p4-clockmod are good examples). Perhaps change the location of the
>>cpufreq virtual files from /sys/devices/system/cpu/cpu0/cpufreq/ to
>>/sys/devices/system/cpu/cpu0/cpufreq/<modulename>/ ?
>>
>>Cheers,
>>Vik
>>
>>Viktor Radnai wrote:
>>
>>>Hi Martin,
>>>
>>>I've managed to clock down my 2GHz Pentium 4m to around 150MHz by doing 
>>>the following:
>>>
>>>- compile both speedstep and p4-clockmod as modules
>>>- modprobe speedstep
>>>- set the governor to powersave
>>>- rmmod speedstep
>>>- modprobe p4-clockmod
>>>- set the CPU speed
>>>
>>>Hope this helps,
>>>
>>>Vik
>>>
>>>Martin Klinkigt (multimedia-test) wrote:
>>>
>>>
>>>>Hello,
>>>>I have a Pentium 4m and the 2.4.21 kernel with cpufreq-2.4.21-2. Now I
>>>>can change the frequency only between 1.2 and 1.6 GHz.
>>>>Under Windows, however, I already had it down at 400 MHz.
>>>>
>>>>Can I somehow change it so that it goes lower than 1.2 GHz?
>>>>Thanks
>>>>Martin
> 
> 
> 


end of thread, other threads:[~2016-10-28 16:32 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-27 21:55 feature request John Rood
2016-10-27 22:01 ` Stefan Beller
2016-10-27 22:05   ` John Rood
2016-10-27 22:24     ` John Rood
2016-10-27 22:27       ` Junio C Hamano
2016-10-27 22:48         ` John Rood
2016-10-27 22:51           ` Junio C Hamano
2016-10-27 23:16             ` John Rood
2016-10-27 22:30 ` Stefan Beller
2016-10-27 22:44   ` John Rood
2016-10-27 22:46     ` Junio C Hamano
2016-10-27 23:24     ` David Lang
2016-10-28  8:49       ` Johannes Schindelin
2016-10-28 12:54       ` Philip Oakley
  -- strict thread matches above, loose matches on Subject: below --
2013-02-18 18:52 Jay Townsend
2013-02-18 19:54 ` James Nylen
2013-02-18 20:45   ` Jeff King
2013-02-19  3:26     ` Drew Northup
2013-02-19 22:27     ` Shawn Pearce
2012-10-16 11:36 Angelo Borsotti
2012-10-16 12:15 ` Andrew Ardill
2012-10-16 17:27   ` Angelo Borsotti
2012-10-16 23:30     ` Sitaram Chamarty
2012-10-17  0:00     ` Andrew Ardill
2012-10-16 13:34 ` Christian Thaeter
2010-02-09  8:43 Feature Request Stefan *St0fF* Huebner
2010-02-09 12:28 ` Michael Tokarev
2010-02-09 14:19   ` Stefan Hübner
2008-09-09  9:49 l5ynlwlcyku9kvaqc2jf.j.HadVabVobs
2005-04-14 16:50 feature request `VL
2005-04-14 18:18 ` Taylor, Grant
2005-04-14 18:37   ` Leonardo Rodrigues Magalhães
2005-04-14 18:52     ` Taylor, Grant
2003-08-23  7:51 Pentium 4m kernel 2.4.21 Martin Klinkigt (multimedia-test)
2003-08-23  9:49 ` Viktor Radnai
2003-08-23 10:50   ` Feature request (was: Pentium 4m kernel 2.4.21) Viktor Radnai
2003-08-26 23:10     ` Dominik Brodowski
2003-08-27 17:16       ` Feature request Viktor Radnai
2003-08-28 13:50         ` Dominik Brodowski
2003-08-28 16:04           ` Daniel Thor Kristjansson
