* ssh passwords
@ 2013-01-22 18:24 Gandalf Corvotempesta
  2013-01-22 18:45 ` Gregory Farnum
  2013-01-22 18:46 ` Xing Lin
  0 siblings, 2 replies; 13+ messages in thread
From: Gandalf Corvotempesta @ 2013-01-22 18:24 UTC (permalink / raw)
  To: ceph-devel

Hi all,
I'm trying my very first Ceph installation, following the five-minute
quick start:
http://ceph.com/docs/master/start/quick-start/#install-debian-ubuntu

Just one question: why is Ceph asking me for an SSH password? Is Ceph
trying to connect to itself via SSH?

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 18:24 ssh passwords Gandalf Corvotempesta
@ 2013-01-22 18:45 ` Gregory Farnum
  2013-01-22 20:44   ` Gandalf Corvotempesta
  2013-01-22 18:46 ` Xing Lin
  1 sibling, 1 reply; 13+ messages in thread
From: Gregory Farnum @ 2013-01-22 18:45 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: ceph-devel

On Tuesday, January 22, 2013 at 10:24 AM, Gandalf Corvotempesta wrote:
> Hi all,
> i'm trying my very first ceph installation following the 5-minutes quickstart:
> http://ceph.com/docs/master/start/quick-start/#install-debian-ubuntu
> 
> just a question: why ceph is asking me for SSH password? Is ceph
> trying to connect to itself via SSH?
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org (mailto:majordomo@vger.kernel.org)
> More majordomo info at http://vger.kernel.org/majordomo-info.html

If you're using mkcephfs to set it up, or asking /etc/init.d/ceph to start up daemons on each node, it uses ssh to go in and do that. I believe the quick-start guide uses both of those. :)
-Greg


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 18:24 ssh passwords Gandalf Corvotempesta
  2013-01-22 18:45 ` Gregory Farnum
@ 2013-01-22 18:46 ` Xing Lin
  2013-01-22 19:35   ` Neil Levine
  1 sibling, 1 reply; 13+ messages in thread
From: Xing Lin @ 2013-01-22 18:46 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: ceph-devel

If it is the command 'mkcephfs' that asked you for the ssh password, 
that is probably because the script needs to push some files (e.g., 
ceph.conf) to the other hosts. If we open the script, we can see that 
it uses 'scp' to send some files. If I remember correctly, for every 
osd on another host, it will prompt for the ssh password seven times. 
So, we'd better set up public-key authentication first. :)
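
A one-time public-key setup could look like this (a sketch; the host
names are placeholders for your cluster nodes):

```shell
# Create a key pair if none exists (no passphrase here, so ssh/scp
# will not prompt), then install the public key on each remote host
# that mkcephfs will touch.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
# Push the public key to every other host named in ceph.conf:
# for host in osd-host1 osd-host2; do ssh-copy-id "$host"; done
```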

Xing

On 01/22/2013 11:24 AM, Gandalf Corvotempesta wrote:
> Hi all,
> i'm trying my very first ceph installation following the 5-minutes quickstart:
> http://ceph.com/docs/master/start/quick-start/#install-debian-ubuntu
>
> just a question: why ceph is asking me  for SSH password? Is ceph
> trying to connect to itself via SSH?


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 18:46 ` Xing Lin
@ 2013-01-22 19:35   ` Neil Levine
  2013-01-22 21:01     ` Xing Lin
  2013-01-22 23:14     ` Sage Weil
  0 siblings, 2 replies; 13+ messages in thread
From: Neil Levine @ 2013-01-22 19:35 UTC (permalink / raw)
  To: Xing Lin; +Cc: Gandalf Corvotempesta, ceph-devel

Out of interest, would people prefer that the Ceph deployment script
didn't try to handle server-to-server file copies and just did the
local setup only, or is it useful that it tries to be a mini
config-management tool at the same time?

Neil

On Tue, Jan 22, 2013 at 10:46 AM, Xing Lin <xinglin@cs.utah.edu> wrote:
> If it is the command 'mkcephfs' that asked you for ssh password, then that
> is probably because that script needs to push some files (ceph.conf, e.g) to
> other hosts. If we open that script, we can see that it uses 'scp' to send
> some files. If I remember correctly, for every osd at other hosts, it will
> ask us ssh password seven times. So, we'd better set up public key first. :)
>
> Xing
>
>
> On 01/22/2013 11:24 AM, Gandalf Corvotempesta wrote:
>>
>> Hi all,
>> i'm trying my very first ceph installation following the 5-minutes
>> quickstart:
>> http://ceph.com/docs/master/start/quick-start/#install-debian-ubuntu
>>
>> just a question: why ceph is asking me  for SSH password? Is ceph
>> trying to connect to itself via SSH?

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 18:45 ` Gregory Farnum
@ 2013-01-22 20:44   ` Gandalf Corvotempesta
  0 siblings, 0 replies; 13+ messages in thread
From: Gandalf Corvotempesta @ 2013-01-22 20:44 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

2013/1/22 Gregory Farnum <greg@inktank.com>:
> If you're using mkcephfs to set it up, or asking /etc/init.d/ceph to start up daemons on each node, it uses ssh to go in and do that. I believe the Quick-start guide is using both of those. :)

I'm using mkcephfs, but I actually have just one host; mkcephfs is
trying to connect to itself via scp.

I think an additional check should be made: if the target host is the
same as "hostname -f", a plain cp should be used in place of scp.
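
A sketch of that check (a hypothetical helper, not code from mkcephfs
itself):

```shell
# Copy a file to a target host, falling back to plain 'cp' when the
# target is the local machine, so no ssh password prompt is triggered.
push_file() {
    src=$1 host=$2 dest=$3
    if [ "$host" = "$(hostname -f)" ] || [ "$host" = "localhost" ]; then
        cp "$src" "$dest"          # local copy: no ssh involved
    else
        scp "$src" "$host:$dest"   # remote copy: scp as before
    fi
}
```

mkcephfs would then call push_file for ceph.conf and the keyrings
instead of invoking scp unconditionally.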

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 19:35   ` Neil Levine
@ 2013-01-22 21:01     ` Xing Lin
  2013-01-22 21:11       ` Dan Mick
  2013-01-22 23:14     ` Sage Weil
  1 sibling, 1 reply; 13+ messages in thread
From: Xing Lin @ 2013-01-22 21:01 UTC (permalink / raw)
  To: Neil Levine; +Cc: Gandalf Corvotempesta, ceph-devel

I like the current approach. I think it is more convenient to run 
commands once on one host to do all the setup work. The first time 
I deployed a Ceph cluster with 4 hosts, I thought 'service ceph start' 
would start the whole cluster. As it turns out, it only starts the 
local osd and mon processes. So currently I am using polysh to run the 
same commands on all hosts (mostly to restart the ceph service before 
every measurement). Thanks.

Xing

On 01/22/2013 12:35 PM, Neil Levine wrote:
> Out of interest, would people prefer that the Ceph deployment script
> didn't try to handle server-server file copy and just did the local
> setup only, or is it useful that it tries to be a mini-config
> management tool at the same time?
>
> Neil


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 21:01     ` Xing Lin
@ 2013-01-22 21:11       ` Dan Mick
  2013-01-22 21:16         ` Xing Lin
  0 siblings, 1 reply; 13+ messages in thread
From: Dan Mick @ 2013-01-22 21:11 UTC (permalink / raw)
  To: Xing Lin; +Cc: Neil Levine, Gandalf Corvotempesta, ceph-devel

The '-a/--allhosts' parameter spreads the command across the 
cluster; that is, 'service ceph -a start' will start the daemons on every host.

On 01/22/2013 01:01 PM, Xing Lin wrote:
> I like the current approach. I think it is more convenient to run
> commands once at one host to do all the setup work. When the first time
> I deployed a ceph cluster with 4 hosts, I thought 'service ceph start'
> would start the whole ceph cluster. But as it turns out, it only starts
> local osd, mon processes. So, currently, I am using polysh to run the
> same commands at all hosts (mostly, to restart ceph service before every
> measurement.). Thanks.
>
> Xing
>
> On 01/22/2013 12:35 PM, Neil Levine wrote:
>> Out of interest, would people prefer that the Ceph deployment script
>> didn't try to handle server-server file copy and just did the local
>> setup only, or is it useful that it tries to be a mini-config
>> management tool at the same time?
>>
>> Neil
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 21:11       ` Dan Mick
@ 2013-01-22 21:16         ` Xing Lin
  0 siblings, 0 replies; 13+ messages in thread
From: Xing Lin @ 2013-01-22 21:16 UTC (permalink / raw)
  To: Dan Mick; +Cc: Neil Levine, Gandalf Corvotempesta, ceph-devel

I did not notice that there exists such a parameter. Thanks, Dan!

Xing

On 01/22/2013 02:11 PM, Dan Mick wrote:
> The '-a/--allhosts' parameter is to spread the command across the 
> cluster...that is, service ceph -a start will start across the cluster.
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 19:35   ` Neil Levine
  2013-01-22 21:01     ` Xing Lin
@ 2013-01-22 23:14     ` Sage Weil
  2013-01-22 23:57       ` Travis Rhoden
  1 sibling, 1 reply; 13+ messages in thread
From: Sage Weil @ 2013-01-22 23:14 UTC (permalink / raw)
  To: Neil Levine; +Cc: Xing Lin, Gandalf Corvotempesta, ceph-devel

On Tue, 22 Jan 2013, Neil Levine wrote:
> Out of interest, would people prefer that the Ceph deployment script
> didn't try to handle server-server file copy and just did the local
> setup only, or is it useful that it tries to be a mini-config
> management tool at the same time?

BTW, you can also run mkcephfs that way; the man page shows how to run 
the individual steps and do the remote-execution parts yourself.
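
The all-in-one form, for comparison, looks roughly like this (flags
recalled from the docs of this era, so treat them as an assumption):

```shell
# -a tells mkcephfs to perform the remote steps itself over ssh/scp;
# without it, the individual steps from the man page can be run per host.
mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
```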

But I'm also curious what people think of the 'normal' usage... anyone?

sage


> 
> Neil
> 
> On Tue, Jan 22, 2013 at 10:46 AM, Xing Lin <xinglin@cs.utah.edu> wrote:
> > If it is the command 'mkcephfs' that asked you for ssh password, then that
> > is probably because that script needs to push some files (ceph.conf, e.g) to
> > other hosts. If we open that script, we can see that it uses 'scp' to send
> > some files. If I remember correctly, for every osd at other hosts, it will
> > ask us ssh password seven times. So, we'd better set up public key first. :)
> >
> > Xing
> >
> >
> > On 01/22/2013 11:24 AM, Gandalf Corvotempesta wrote:
> >>
> >> Hi all,
> >> i'm trying my very first ceph installation following the 5-minutes
> >> quickstart:
> >> http://ceph.com/docs/master/start/quick-start/#install-debian-ubuntu
> >>
> >> just a question: why ceph is asking me  for SSH password? Is ceph
> >> trying to connect to itself via SSH?

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 23:14     ` Sage Weil
@ 2013-01-22 23:57       ` Travis Rhoden
  2013-01-23  0:09         ` Neil Levine
  0 siblings, 1 reply; 13+ messages in thread
From: Travis Rhoden @ 2013-01-22 23:57 UTC (permalink / raw)
  To: Sage Weil; +Cc: Neil Levine, Xing Lin, Gandalf Corvotempesta, ceph-devel

On Tue, Jan 22, 2013 at 6:14 PM, Sage Weil <sage@inktank.com> wrote:
> On Tue, 22 Jan 2013, Neil Levine wrote:
>> Out of interest, would people prefer that the Ceph deployment script
>> didn't try to handle server-server file copy and just did the local
>> setup only, or is it useful that it tries to be a mini-config
>> management tool at the same time?
>
> BTW, you can also run mkcephfs that way; the man page will let you run
> individual steps and do the remote execution parts yourself.
>
> But I'm also curious what people think of the 'normal' usage... anyone?
>
> sage
>

While I am interested to see where ceph-deploy goes, I do think
mkcephfs in its current form is quite useful.  It lets you stand up
decent-sized clusters with relative ease and is fairly fast.  It has
also come quite a way since the pre-argonaut form -- the recent --mkfs
additions, coupled with the auto-mounting in /etc/init.d/ceph, are
pretty slick.  It was a nice discovery for me last week, as I hadn't
created a cluster from scratch since 0.50 or so.

 - Travis

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-22 23:57       ` Travis Rhoden
@ 2013-01-23  0:09         ` Neil Levine
  2013-01-23  3:43           ` Travis Rhoden
  0 siblings, 1 reply; 13+ messages in thread
From: Neil Levine @ 2013-01-23  0:09 UTC (permalink / raw)
  To: Travis Rhoden; +Cc: Sage Weil, Xing Lin, Gandalf Corvotempesta, ceph-devel

We're having a chat about ceph-deploy tomorrow. We need to strike a
balance between its being a useful tool for standing up a quick
cluster and its ignoring the UNIX philosophy by trying to do too much.

My assumption is that for most production operations, or at the point
where people decide to invest in Ceph, users will already have
selected a parallel-execution and/or configuration-management tool.
Serving new or early PoC adopters, who perhaps don't want to wade
into wider operational-framework issues, is probably where the
tool is best focused.

Neil

On Tue, Jan 22, 2013 at 3:57 PM, Travis Rhoden <trhoden@gmail.com> wrote:
> On Tue, Jan 22, 2013 at 6:14 PM, Sage Weil <sage@inktank.com> wrote:
>> On Tue, 22 Jan 2013, Neil Levine wrote:
>>> Out of interest, would people prefer that the Ceph deployment script
>>> didn't try to handle server-server file copy and just did the local
>>> setup only, or is it useful that it tries to be a mini-config
>>> management tool at the same time?
>>
>> BTW, you can also run mkcephfs that way; the man page will let you run
>> individual steps and do the remote execution parts yourself.
>>
>> But I'm also curious what people think of the 'normal' usage... anyone?
>>
>> sage
>>
>
> While I am interested to see where ceph-deploy goes, I do think
> mkcephfs in its current form is quite useful.  It does allow you to
> stand up decent size clusters with relative ease and is fairly fast.
> It has also come quite a ways since the pre-argonaut form -- the
> recent --mkfs additions coupled with the auto-mounting in
> /etc/init.d/ceph is pretty slick.  It was a nice discovery for me last
> week, as I hadn't created a cluster from scratch since 0.50 or so.
>
>  - Travis

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-23  0:09         ` Neil Levine
@ 2013-01-23  3:43           ` Travis Rhoden
  2013-01-23  4:06             ` Neil Levine
  0 siblings, 1 reply; 13+ messages in thread
From: Travis Rhoden @ 2013-01-23  3:43 UTC (permalink / raw)
  To: Neil Levine; +Cc: Sage Weil, Xing Lin, Gandalf Corvotempesta, ceph-devel

Since you are chatting about ceph-deploy tomorrow, I'll chime in with
a bit more.

I'm interested in ceph-deploy since it can be a lightweight,
production-appropriate installer.  The docs repeatedly warn that
mkcephfs is not intended for production clusters, and Neil reminds us
that the expectation is that production clusters will likely use a
config-management tool.  It seems that is most likely Chef, but I
know there are others.

I've always wondered "why" mkcephfs isn't suitable.  Looking at the
Chef recipes and ceph-deploy, my best explanation is that these other
tools use ceph-disk-prepare to take advantage of GPT and to label the
disks for Ceph's use.  Then you can use the Upstart scripts to
auto-recognize prepared disks and automatically add them to the
cluster.  This scales a lot better than having to list each disk
(assigned to a node) in ceph.conf and using /etc/init.d/ceph to
stop/start the cluster.  It also makes it quite a lot easier to add
new OSDs to the cluster.  Is that about right?

If that's on the right track, I'm interested in ceph-deploy to achieve
these goals because at the moment we're not interested in deploying
Chef (or Puppet), great tools that they are.  Down the road, sure, but
only once we have some in-house experience/expertise, which we
currently do not.  Having a standalone tool that is just "simple"
Python seems like a nice alternative.

Those are my thoughts!

 - Travis

On Tue, Jan 22, 2013 at 7:09 PM, Neil Levine <neil.levine@inktank.com> wrote:
> We're having a chat about ceph-deploy tomorrow. We need to strike a
> balance between its being a useful tool for standing up a quick
> cluster and its ignoring the UNIX philosophy and trying to do too much.
>
> My assumption is that for most production operations, or at the point
> where people decide to invest in Ceph, users will already have
> selected a parallel execution and/or configuration management tool.
> Ensuring new or early PoC adopters, who perhaps don't want to wade
> into the wider-operational frameworks issues, is probably where the
> tool is best focused.
>
> Neil
>
> On Tue, Jan 22, 2013 at 3:57 PM, Travis Rhoden <trhoden@gmail.com> wrote:
>> On Tue, Jan 22, 2013 at 6:14 PM, Sage Weil <sage@inktank.com> wrote:
>>> On Tue, 22 Jan 2013, Neil Levine wrote:
>>>> Out of interest, would people prefer that the Ceph deployment script
>>>> didn't try to handle server-server file copy and just did the local
>>>> setup only, or is it useful that it tries to be a mini-config
>>>> management tool at the same time?
>>>
>>> BTW, you can also run mkcephfs that way; the man page will let you run
>>> individual steps and do the remote execution parts yourself.
>>>
>>> But I'm also curious what people think of the 'normal' usage... anyone?
>>>
>>> sage
>>>
>>
>> While I am interested to see where ceph-deploy goes, I do think
>> mkcephfs in its current form is quite useful.  It does allow you to
>> stand up decent size clusters with relative ease and is fairly fast.
>> It has also come quite a ways since the pre-argonaut form -- the
>> recent --mkfs additions coupled with the auto-mounting in
>> /etc/init.d/ceph is pretty slick.  It was a nice discovery for me last
>> week, as I hadn't created a cluster from scratch since 0.50 or so.
>>
>>  - Travis

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: ssh passwords
  2013-01-23  3:43           ` Travis Rhoden
@ 2013-01-23  4:06             ` Neil Levine
  0 siblings, 0 replies; 13+ messages in thread
From: Neil Levine @ 2013-01-23  4:06 UTC (permalink / raw)
  To: Travis Rhoden; +Cc: Sage Weil, Xing Lin, Gandalf Corvotempesta, ceph-devel

From my perspective, I want to ensure that we have a script that helps
users get Ceph up and running as quickly as possible so they can play,
explore, and evaluate it. With this goal in mind, I would prefer to
lean towards the KISS principle to reduce the potential failure
scenarios that a) deter casual users and b) generate lots of support
overhead.

This might be achieved in a number of ways: by disaggregating one-time
provisioning from ongoing configuration, with hooks that let users
insert their own tools; or by offering a more prescribed environment
that still avoids unnecessarily forcing things like a security model
(password-less SSH) on users.

I think the production-ready aspect of the script should come more
from how many use cases it meets than from its robustness, but right
now I don't think the quality of the script is good enough for our
main use case: the first-time evaluator of Ceph.

I should have kicked off this conversation further in advance of our
chat tomorrow, but I am still interested to hear from other users what
they like/dislike about the script, or what other tools they
eventually adopted for their Ceph provisioning or configuration
management.

Neil

On Tue, Jan 22, 2013 at 7:43 PM, Travis Rhoden <trhoden@gmail.com> wrote:
> Since you are chatting about ceph-deploy tomorrow, I'll chime in with
> a bit more.
>
> I'm interested in ceph-deploy since it can be a light-weight
> production appropriate installer.  The docs repeatedly warn that
> mkcephfs is not intended for production clusters, and Neil reminds us
> that the expectation is that production clusters will likely use a
> config management tool.  Seems like that is most likely Chef, but I
> know there are others.
>
> I've always wondered "why" mkcephfs isn't suitable.  Looking at the
> Chef recipes and ceph-deploy, my best explanation is that these other
> tools use ceph-disk-prepare to take advantage of GPT and to label the
> disks for Ceph's use.  Then you can use the Upstart scripts to auto
> recognize prepared disks and automatically add them to the cluster.
> This scales a lot better than having to add each disk (assigned to a
> node) in ceph.conf and using /etc/init.d/ceph to stop/start the
> cluster.   It also makes it quite a lot easier to add new OSDs to the
> cluster.  Is that about right?
>
> If that's on the right track, I'm interested in ceph-deploy to achieve
> these goals because at the moment we're not interested in deploying
> Chef (or Puppet), great tools that they are.  Down the road, sure, but
> only once we have some in-house experience/expertise, which we
> currently do not.  Having a standalone tool that is just "simple"
> Python seems like a nice alternative.
>
> Those are my thoughts!
>
>  - Travis
>
> On Tue, Jan 22, 2013 at 7:09 PM, Neil Levine <neil.levine@inktank.com> wrote:
>> We're having a chat about ceph-deploy tomorrow. We need to strike a
>> balance between its being a useful tool for standing up a quick
>> cluster and its ignoring the UNIX philosophy and trying to do too much.
>>
>> My assumption is that for most production operations, or at the point
>> where people decide to invest in Ceph, users will already have
>> selected a parallel execution and/or configuration management tool.
>> Ensuring new or early PoC adopters, who perhaps don't want to wade
>> into the wider-operational frameworks issues, is probably where the
>> tool is best focused.
>>
>> Neil
>>
>> On Tue, Jan 22, 2013 at 3:57 PM, Travis Rhoden <trhoden@gmail.com> wrote:
>>> On Tue, Jan 22, 2013 at 6:14 PM, Sage Weil <sage@inktank.com> wrote:
>>>> On Tue, 22 Jan 2013, Neil Levine wrote:
>>>>> Out of interest, would people prefer that the Ceph deployment script
>>>>> didn't try to handle server-server file copy and just did the local
>>>>> setup only, or is it useful that it tries to be a mini-config
>>>>> management tool at the same time?
>>>>
>>>> BTW, you can also run mkcephfs that way; the man page will let you run
>>>> individual steps and do the remote execution parts yourself.
>>>>
>>>> But I'm also curious what people think of the 'normal' usage... anyone?
>>>>
>>>> sage
>>>>
>>>
>>> While I am interested to see where ceph-deploy goes, I do think
>>> mkcephfs in its current form is quite useful.  It does allow you to
>>> stand up decent size clusters with relative ease and is fairly fast.
>>> It has also come quite a ways since the pre-argonaut form -- the
>>> recent --mkfs additions coupled with the auto-mounting in
>>> /etc/init.d/ceph is pretty slick.  It was a nice discovery for me last
>>> week, as I hadn't created a cluster from scratch since 0.50 or so.
>>>
>>>  - Travis

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2013-01-23  4:06 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-22 18:24 ssh passwords Gandalf Corvotempesta
2013-01-22 18:45 ` Gregory Farnum
2013-01-22 20:44   ` Gandalf Corvotempesta
2013-01-22 18:46 ` Xing Lin
2013-01-22 19:35   ` Neil Levine
2013-01-22 21:01     ` Xing Lin
2013-01-22 21:11       ` Dan Mick
2013-01-22 21:16         ` Xing Lin
2013-01-22 23:14     ` Sage Weil
2013-01-22 23:57       ` Travis Rhoden
2013-01-23  0:09         ` Neil Levine
2013-01-23  3:43           ` Travis Rhoden
2013-01-23  4:06             ` Neil Levine
