* Weekly Ceph Performance Meeting Invitation
@ 2014-09-30 19:27 Mark Nelson
  2014-10-01  7:00 ` Haomai Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Mark Nelson @ 2014-09-30 19:27 UTC (permalink / raw)
  To: ceph-devel

Hi All,

I put together a bluejeans meeting for the Ceph performance meeting 
tomorrow at 8AM PST.  Hope to see you there!

To join the Meeting:
https://bluejeans.com/268261044

To join via Browser:
https://bluejeans.com/268261044/browser

To join with Lync:
https://bluejeans.com/268261044/lync


To join via Room System:
Video Conferencing System: bjn.vc -or- 199.48.152.152
Meeting ID: 268261044

To join via Phone:
1) Dial:
           +1 408 740 7256
           +1 888 240 2560(US Toll Free)
           +1 408 317 9253(Alternate Number)
           (see all numbers - http://bluejeans.com/numbers)
2) Enter Conference ID: 268261044

Mark

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Weekly Ceph Performance Meeting Invitation
  2014-09-30 19:27 Weekly Ceph Performance Meeting Invitation Mark Nelson
@ 2014-10-01  7:00 ` Haomai Wang
  2014-10-01 16:17 ` Sage Weil
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Haomai Wang @ 2014-10-01  7:00 UTC (permalink / raw)
  To: Mark Nelson; +Cc: ceph-devel

Thanks, Mark!

It's a pity that I can't join since I'll be on a flight. I hope I'll
have time to watch the recording (if one exists?).

As a reminder, AsyncMessenger (https://github.com/yuyuyu101/ceph/tree/msg-event-worker-mode)
is ready for developers to test. For io depth 1 (4k randwrite),
AsyncMessenger cuts latency by 10% compared to SimpleMessenger, but I
still haven't had time to deploy a large cluster for performance
testing.
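
A minimal sketch (illustrative only, not from the branch) of the kind of
queue-depth-1 4k write latency probe behind that comparison, using the
librados C++ API. The pool name, object name, and iteration count are
assumptions, and error checking is omitted; running it once against a
SimpleMessenger build and once against the AsyncMessenger branch on the
same cluster layout gives a rough per-op comparison:

    #include <rados/librados.hpp>
    #include <chrono>
    #include <iostream>
    #include <string>

    int main() {
      librados::Rados cluster;
      cluster.init("admin");               // connect as client.admin
      cluster.conf_read_file(NULL);        // default ceph.conf search path
      cluster.connect();

      librados::IoCtx ioctx;
      cluster.ioctx_create("rbd", ioctx);  // assumes a pool named "rbd"

      librados::bufferlist bl;
      bl.append(std::string(4096, 'x'));   // 4k payload

      const int iters = 1000;
      auto t0 = std::chrono::steady_clock::now();
      for (int i = 0; i < iters; ++i) {
        // queue depth 1: each synchronous write completes before the next
        ioctx.write("latency-probe", bl, bl.length(), (i % 1024) * 4096);
      }
      auto t1 = std::chrono::steady_clock::now();
      std::cout << "avg 4k write latency: "
                << std::chrono::duration<double, std::micro>(t1 - t0).count() / iters
                << " us" << std::endl;

      ioctx.close();
      cluster.shutdown();
      return 0;
    }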

On Wed, Oct 1, 2014 at 3:27 AM, Mark Nelson <mark.nelson@inktank.com> wrote:
> Hi All,
>
> I put together a bluejeans meeting for the Ceph performance meeting tomorrow
> at 8AM PST.  Hope to see you there!
>
> To join the Meeting:
> https://bluejeans.com/268261044
>
> To join via Browser:
> https://bluejeans.com/268261044/browser
>
> To join with Lync:
> https://bluejeans.com/268261044/lync
>
>
> To join via Room System:
> Video Conferencing System: bjn.vc -or- 199.48.152.152
> Meeting ID: 268261044
>
> To join via Phone:
> 1) Dial:
>           +1 408 740 7256
>           +1 888 240 2560(US Toll Free)
>           +1 408 317 9253(Alternate Number)
>           (see all numbers - http://bluejeans.com/numbers)
> 2) Enter Conference ID: 268261044
>
> Mark
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Weekly Ceph Performance Meeting Invitation
  2014-09-30 19:27 Weekly Ceph Performance Meeting Invitation Mark Nelson
  2014-10-01  7:00 ` Haomai Wang
@ 2014-10-01 16:17 ` Sage Weil
  2014-10-01 16:23   ` Mark Nelson
  2014-10-01 17:33   ` Matt W. Benjamin
       [not found] ` <20141001183717.GA31430@oder.mch.fsc.net>
  2016-02-24 16:05 ` Somnath Roy
  3 siblings, 2 replies; 11+ messages in thread
From: Sage Weil @ 2014-10-01 16:17 UTC (permalink / raw)
  To: ceph-devel

Thanks, everyone, for joining!  I took some notes during the session in 
the etherpad:

	http://pad.ceph.com/p/performance_weekly

The session was also recorded, although I'm not sure how we get to that. 
:)

I think we covered most of the notes that people had added leading up to 
the meeting.  Hopefully everyone has a better view of what work is in 
progress and who is contributing, and we can follow up with more detailed 
discussions on ceph-devel.

Let's plan on the same time slot next week?

Thanks!
sage

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Weekly Ceph Performance Meeting Invitation
  2014-10-01 16:17 ` Sage Weil
@ 2014-10-01 16:23   ` Mark Nelson
  2014-10-01 17:33   ` Matt W. Benjamin
  1 sibling, 0 replies; 11+ messages in thread
From: Mark Nelson @ 2014-10-01 16:23 UTC (permalink / raw)
  To: Sage Weil, ceph-devel

On 10/01/2014 11:17 AM, Sage Weil wrote:
> Thanks, everyone, for joining!  I took some notes during the session in
> the etherpad:
>
> 	http://pad.ceph.com/p/performance_weekly
>
> The session was also recorded, although I'm not sure how we get to that.
> :)

Just verified that the recording worked; I'll send it out in a separate 
email to make it easy to find.

>
> I think we covered most of the notes that people had added leading up to
> the meeting.  Hopefully everyone has a better view of what work is in
> progress and who is contributing, and we can follow up with more detailed
> discussions on ceph-devel.
>
> Let's plan on the same time slot next week?
>
> Thanks!
> sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Weekly Ceph Performance Meeting Invitation
  2014-10-01 16:17 ` Sage Weil
  2014-10-01 16:23   ` Mark Nelson
@ 2014-10-01 17:33   ` Matt W. Benjamin
  1 sibling, 0 replies; 11+ messages in thread
From: Matt W. Benjamin @ 2014-10-01 17:33 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

Sorry we missed this one; we had a combination of a meeting conflict and a technical problem.
We'll join next week.

Matt

----- "Sage Weil" <sweil@redhat.com> wrote:

> Thanks, everyone, for joining!  I took some notes during the session
> in 
> the etherpad:
> 
> 	http://pad.ceph.com/p/performance_weekly
> 

-- 
Matt Benjamin
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI  48104

http://linuxbox.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* FW: Weekly Ceph Performance Meeting Invitation
       [not found]           ` <alpine.DEB.2.00.1410011553220.31126@cobra.newdream.net>
@ 2014-10-02 17:39             ` Somnath Roy
  2014-10-03  5:28               ` Haomai Wang
  0 siblings, 1 reply; 11+ messages in thread
From: Somnath Roy @ 2014-10-02 17:39 UTC (permalink / raw)
  To: ceph-devel

Please share your opinion on this.

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com] 
Sent: Wednesday, October 01, 2014 3:57 PM
To: Somnath Roy
Cc: Mark Nelson; Kasper Dieter; Andreas Bluemle; Paul Von-Stamwitz
Subject: RE: Weekly Ceph Performance Meeting Invitation

On Wed, 1 Oct 2014, Somnath Roy wrote:
> Yes Sage, it's all reads. Each call to lfn_open() will incur this
> lookup on an FDCache miss (which happens in 99% of cases).
> The following patch will certainly help the write path (which is
> exciting!) but not reads, since reads don't go through the transaction
> path. My understanding is that in the read path only two calls go to
> the filestore per io: one xattr ("_") lookup followed by a read of the
> same object. If we could somehow club these two requests together,
> reads would benefit. I did a prototype earlier that passed the fd (and
> path) to the replicated PG during the getattr call and reused the same
> fd/path for the following read. This improved performance as well as
> cpu usage, but it goes against the objectstore interface logic :-(
> Basically, the sole purpose of FDCache is to serve this kind of
> scenario, but since it is now sharded by object hash (and FDCache
> itself is cpu intensive) it is not helping much. Maybe sharding based
> on PG (coll_id) could help here?

I suspect a more fruitful approach would be to make a read-side handle-based API for objectstore... so you can 'open' an object, keep that handle to the ObjectContext, and then do subsequent read operations against that.
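
As a purely hypothetical illustration of that direction (the names and
signatures below are invented, not the real ObjectStore interface): open
the object once, keep the handle (e.g. hang it off the ObjectContext),
and let the getattr + read pair reuse it instead of repeating the
LFNIndex path lookup per call:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/xattr.h>
    #include <cstdint>
    #include <memory>
    #include <string>
    #include <vector>

    struct ObjectReadHandle {
      int fd = -1;
      ~ObjectReadHandle() { if (fd >= 0) ::close(fd); }
    };

    class ReadHandleStore {
    public:
      // one path resolution + open() per object, not per op
      std::shared_ptr<ObjectReadHandle> open_object(const std::string& path) {
        auto h = std::make_shared<ObjectReadHandle>();
        h->fd = ::open(path.c_str(), O_RDONLY);
        return h->fd >= 0 ? h : nullptr;
      }
      ssize_t read(ObjectReadHandle& h, uint64_t off, size_t len, char* buf) {
        return ::pread(h.fd, buf, len, off);                     // no lookup here
      }
      ssize_t getattr(ObjectReadHandle& h, const char* name,
                      std::vector<char>& out) {
        out.resize(4096);
        return ::fgetxattr(h.fd, name, out.data(), out.size());  // nor here
      }
    };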

Sharding the FDCache per PG would help with lock contention, yes, but is that the limiter or are we burning CPU?

> Also, I don't think the ceph io path is very memory intensive, so we
> can leverage some memory for caching. For example, if we had an
> object_context cache at the ReplicatedPG level (the cache is there
> today, but the contexts are not persisted), performance (and cpu
> usage) would improve dramatically. I know there can be a lot of PGs,
> so memory usage can be a challenge, but we can certainly control that
> by limiting the per-PG cache size and so on. The size of an
> object_context instance shouldn't be much, I guess. I did some
> prototyping on that too and got a significant improvement. This would
> eliminate the getattr path on a cache hit.

Can you propose this on ceph-devel?  I think this is promising.  And probably quite easy to implement.
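
A rough sketch of what such a bounded, per-PG object context cache could
look like (the type and member names here are invented stand-ins, not
Ceph's): a read that hits the cache skips the getattr("_") trip to the
filestore entirely:

    #include <cstddef>
    #include <list>
    #include <memory>
    #include <string>
    #include <unordered_map>

    struct ObjectContextLite {            // stand-in for the real ObjectContext
      std::string oid;
      std::string object_info_attr;       // cached "_" xattr payload
    };

    class PGObjectContextCache {
      std::size_t max_entries_;
      std::list<std::shared_ptr<ObjectContextLite>> lru_;  // front = most recent
      std::unordered_map<std::string,
          std::list<std::shared_ptr<ObjectContextLite>>::iterator> index_;
    public:
      explicit PGObjectContextCache(std::size_t max) : max_entries_(max) {}

      std::shared_ptr<ObjectContextLite> lookup(const std::string& oid) {
        auto it = index_.find(oid);
        if (it == index_.end()) return nullptr;       // miss: caller does getattr
        lru_.splice(lru_.begin(), lru_, it->second);  // hit: bump to front
        return *it->second;
      }

      void insert(std::shared_ptr<ObjectContextLite> ctx) {  // call after a miss
        lru_.push_front(ctx);
        index_[ctx->oid] = lru_.begin();
        if (lru_.size() > max_entries_) {             // bound the per-PG footprint
          index_.erase(lru_.back()->oid);
          lru_.pop_back();
        }
      }
    };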

> Another challenge for reads (and probably for writes too) is
> sequential io with rbd. With the Linux default read_ahead, sequential
> read performance with the latest code is significantly lower than
> random read for an io_size of, say, 64K. The obvious reason is that
> with rbd's default 4MB object size, lots of sequential 64K reads land
> on the same PG and get bottlenecked there. Increasing the read_ahead
> size improves performance, but that affects random workloads. I think
> a PG-level cache should help here. Striped images from librbd probably
> won't face this problem, but krbd does not support striping, so it is
> definitely a problem there.

I still think the key here is a comprehensive set of IO hints.  Then it's a problem of making sure we are using them effectively...
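
For concreteness, a back-of-the-envelope sketch of why the stream
serializes (assuming the default 4 MB object size and no striping, as
with krbd; the numbers below are illustrative):

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t object_size = 4ull << 20;   // default 4 MB rbd object
      const uint64_t io_size     = 64ull << 10;  // 64 KB sequential reads
      // Each read lands on object floor(offset / object_size); with no
      // striping that object maps to exactly one PG, so this many reads
      // in a row queue up behind the same PG before the stream moves on.
      std::printf("consecutive reads per object/PG: %llu\n",
                  (unsigned long long)(object_size / io_size));  // prints 64
      return 0;
    }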

> We can discuss these at the next meeting if this sounds interesting.

Yeah, but let's discuss on list first, no reason to wait!

s

> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, October 01, 2014 1:14 PM
> To: Somnath Roy
> Cc: Mark Nelson; Kasper Dieter; Andreas Bluemle; Paul Von-Stamwitz
> Subject: RE: Weekly Ceph Performance Meeting Invitation
> 
> On Wed, 1 Oct 2014, Somnath Roy wrote:
> > CPU-wise, the following are still hurting us in Giant. A lot of
> > fixes, like the IndexManager work, went into Giant that helped cpu
> > consumption as well.
> >
> > 1. LFNIndex lookup logic. I have a fix that will save around one
> > cpu core on that path. I have yet to address the comments Greg/Sam
> > made on it, but a lot of improvement can happen here.
> 
> Have you looked at
> 
> https://github.com/ceph/ceph/commit/74b1cf8bf1a7a160e6ce14603df63a46b22d8b98
> 
> The patch is incomplete, but with that change we should be able to drop to a single path lookup per ObjectStore::Transaction (as opposed to one for each op in the transaction that touches the given object).  I'm not sure whether you were looking at ops that had a lot of those or at simple single-io type operations?  That would only help on the write path; I think you said you've been focusing on reads.
> 
> > 2. The buffer class is very cpu intensive. Fixing that will help
> > every ceph component.
> 
> +1
> 
> sage
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: FW: Weekly Ceph Performance Meeting Invitation
  2014-10-02 17:39             ` FW: " Somnath Roy
@ 2014-10-03  5:28               ` Haomai Wang
  2014-10-03 22:17                 ` Somnath Roy
  0 siblings, 1 reply; 11+ messages in thread
From: Haomai Wang @ 2014-10-03  5:28 UTC (permalink / raw)
  To: Somnath Roy; +Cc: ceph-devel

On Fri, Oct 3, 2014 at 1:39 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Please share your opinion on this..
>
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, October 01, 2014 3:57 PM
> To: Somnath Roy
> Cc: Mark Nelson; Kasper Dieter; Andreas Bluemle; Paul Von-Stamwitz
> Subject: RE: Weekly Ceph Performance Meeting Invitation
>
> On Wed, 1 Oct 2014, Somnath Roy wrote:
>> Yes Sage, it's all read..Each call to lfn_open() will incur this
>> lookup in case of FDCache miss (which will be in 99% of cases).
>> The following patch will certainly help the write path (which is
>> exciting!)  but not read as read is not through the transaction path.
>> My understanding is in the read path per io only two calls are going
>> to filestore , one xattr ("_") and followed by read to the same
>> object. If somehow, we can club (or something) this two requests,
>> reads will be benefitted. I did some prototype earlier by passing the
>> fd (and path) to the replicated pg during getattr call and pass the
>> same fd/path during next read. This improving performance as well as
>> cpu usage. But, this is against the objectstore interface logic :-(
>> Basically, sole purpose of FDCache for serving this kind of scenario
>> but since it is sharded based on object hash now (and FDCache itself
>> is cpu
>> intensive) it is not helping much. May be sharding based on PG
>> (Col_id) could help here ?
>
> I suspect a more fruitful approach would be to make a read-side handle-based API for objectstore... so you can 'open' an object, keep that handle to the ObjectContext, and then do subsequent read operations against that.
>
> Sharding the FDCache per PG would help with lock contention, yes, but is that the limiter or are we burning CPU?
>
>> Also, I don't think ceph io path is very memory intensive and we can
>> leverage some memory for cache usage. For example, if we can have a
>> object_context cache at Replicated PG level (now the cache is there
>> but the contexts are not persisted), the performance (and cpu
>> usage)will be improved dramatically. I know that there can be lot of
>> PGs and thus memory usage can be a challenge. But, certainly we can
>> control that by limiting per cache size and what not. What could be
>> the size of an object_context instance, shouldn't be much I guess. I
>> did some prototyping on that too and got significant improvement. This
>> will eliminate the getattr path in case of cache hit.
>
> Can you propose this on ceph-devel?  I think this is promising.  And probably quite easy to implement.

Yes, we have done this implementation, and it can reduce latency by
nearly 100us per IO on a cache hit. We will make a pull request next
week. :-)

>
>> Another challenge for read ( and for write too probably)  is the
>> sequential io in case of rbd . With the Linux default read_ahead ,
>> performance of sequential read is significantly less than random read
>> with latest code in case of io_size say 64K. The obvious reason is
>> that with rbd, the default object size being 4MB, lot of sequential
>> 64K reads are coming to same PG and getting bottlenecked there.
>> Increasing read_ahead size improving performance but that will have an
>> effect in random workload. I think PG level cache should help here.
>> Striped images from librbd will not be facing this problem I guess but
>> krbd is not supporting striping and it is definitely a problem there.
>
> I still think the key here is a comprehensive set of IO hints.  Then it's a problem of making sure we are using them effectively...
>
>> We can discuss these in next meeting if this sounds interesting.
>
> Yeah, but let's discuss on list first, no reason to wait!
>
> s
>
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Sage Weil [mailto:sweil@redhat.com]
>> Sent: Wednesday, October 01, 2014 1:14 PM
>> To: Somnath Roy
>> Cc: Mark Nelson; Kasper Dieter; Andreas Bluemle; Paul Von-Stamwitz
>> Subject: RE: Weekly Ceph Performance Meeting Invitation
>>
>> On Wed, 1 Oct 2014, Somnath Roy wrote:
>> > CPU wise the following are still hurting us in Giant. Lot of fixes
>> > like IndexManager stuff went in Giant that helped cpu consumption
>> > wise as well.
>> >
>> > 1. LFNIndex lookup logic . I have a fix that will save around one
>> > cpu core on that path. I am yet to address comments made by Greg/Sam
>> > on that. But, lot of improvement can happen here.
>>
>> Have you looked at
>>
>>
>> https://github.com/ceph/ceph/commit/74b1cf8bf1a7a160e6ce14603df63a46b22d8b98
>>
>> The patch is incomplete, but with that change we should be able to drop to a single path lookup per ObjectStore::Transaction (as opposed to one for each op in the transaction that touches the given object).  I'm not sure if you were looking at ops that had a lot of those or they were simple single-io type operations?  That would only help on the write path; I think you said you've been focusing in reads.
>>
>> > 2. Buffer class is very cpu intensive. Fixing that part will be
>> > helping every ceph components.
>>
>> +1
>>
>> sage
>>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: FW: Weekly Ceph Performance Meeting Invitation
  2014-10-03  5:28               ` Haomai Wang
@ 2014-10-03 22:17                 ` Somnath Roy
  0 siblings, 0 replies; 11+ messages in thread
From: Somnath Roy @ 2014-10-03 22:17 UTC (permalink / raw)
  To: Haomai Wang; +Cc: ceph-devel

That's great, Haomai, looking forward to this pull request.

Thanks & Regards
Somnath

-----Original Message-----
From: Haomai Wang [mailto:haomaiwang@gmail.com] 
Sent: Thursday, October 02, 2014 10:28 PM
To: Somnath Roy
Cc: ceph-devel
Subject: Re: FW: Weekly Ceph Performance Meeting Invitation

On Fri, Oct 3, 2014 at 1:39 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Please share your opinion on this..
>
> -----Original Message-----
> From: Sage Weil [mailto:sweil@redhat.com]
> Sent: Wednesday, October 01, 2014 3:57 PM
> To: Somnath Roy
> Cc: Mark Nelson; Kasper Dieter; Andreas Bluemle; Paul Von-Stamwitz
> Subject: RE: Weekly Ceph Performance Meeting Invitation
>
> On Wed, 1 Oct 2014, Somnath Roy wrote:
>> Yes Sage, it's all read..Each call to lfn_open() will incur this 
>> lookup in case of FDCache miss (which will be in 99% of cases).
>> The following patch will certainly help the write path (which is
>> exciting!)  but not read as read is not through the transaction path.
>> My understanding is in the read path per io only two calls are going 
>> to filestore , one xattr ("_") and followed by read to the same 
>> object. If somehow, we can club (or something) this two requests, 
>> reads will be benefitted. I did some prototype earlier by passing the 
>> fd (and path) to the replicated pg during getattr call and pass the 
>> same fd/path during next read. This improving performance as well as 
>> cpu usage. But, this is against the objectstore interface logic :-( 
>> Basically, sole purpose of FDCache for serving this kind of scenario 
>> but since it is sharded based on object hash now (and FDCache itself 
>> is cpu
>> intensive) it is not helping much. May be sharding based on PG
>> (Col_id) could help here ?
>
> I suspect a more fruitful approach would be to make a read-side handle-based API for objectstore... so you can 'open' an object, keep that handle to the ObjectContext, and then do subsequent read operations against that.
>
> Sharding the FDCache per PG would help with lock contention, yes, but is that the limiter or are we burning CPU?
>
>> Also, I don't think ceph io path is very memory intensive and we can 
>> leverage some memory for cache usage. For example, if we can have a 
>> object_context cache at Replicated PG level (now the cache is there 
>> but the contexts are not persisted), the performance (and cpu 
>> usage)will be improved dramatically. I know that there can be lot of 
>> PGs and thus memory usage can be a challenge. But, certainly we can 
>> control that by limiting per cache size and what not. What could be 
>> the size of an object_context instance, shouldn't be much I guess. I 
>> did some prototyping on that too and got significant improvement. 
>> This will eliminate the getattr path in case of cache hit.
>
> Can you propose this on ceph-devel?  I think this is promising.  And probably quite easy to implement.

Yes, we have done this impl and it can help reduce nearly 100us for each IO if hit cache. We will make a pull request next week. :-)

>
>> Another challenge for read ( and for write too probably)  is the 
>> sequential io in case of rbd . With the Linux default read_ahead , 
>> performance of sequential read is significantly less than random read 
>> with latest code in case of io_size say 64K. The obvious reason is 
>> that with rbd, the default object size being 4MB, lot of sequential 
>> 64K reads are coming to same PG and getting bottlenecked there.
>> Increasing read_ahead size improving performance but that will have 
>> an effect in random workload. I think PG level cache should help here.
>> Striped images from librbd will not be facing this problem I guess 
>> but krbd is not supporting striping and it is definitely a problem there.
>
> I still think the key here is a comprehensive set of IO hints.  Then it's a problem of making sure we are using them effectively...
>
>> We can discuss these in next meeting if this sounds interesting.
>
> Yeah, but let's discuss on list first, no reason to wait!
>
> s
>
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Sage Weil [mailto:sweil@redhat.com]
>> Sent: Wednesday, October 01, 2014 1:14 PM
>> To: Somnath Roy
>> Cc: Mark Nelson; Kasper Dieter; Andreas Bluemle; Paul Von-Stamwitz
>> Subject: RE: Weekly Ceph Performance Meeting Invitation
>>
>> On Wed, 1 Oct 2014, Somnath Roy wrote:
>> > CPU wise the following are still hurting us in Giant. Lot of fixes 
>> > like IndexManager stuff went in Giant that helped cpu consumption 
>> > wise as well.
>> >
>> > 1. LFNIndex lookup logic . I have a fix that will save around one 
>> > cpu core on that path. I am yet to address comments made by 
>> > Greg/Sam on that. But, lot of improvement can happen here.
>>
>> Have you looked at
>>
>>
>> https://github.com/ceph/ceph/commit/74b1cf8bf1a7a160e6ce14603df63a46b22d8b98
>>
>> The patch is incomplete, but with that change we should be able to drop to a single path lookup per ObjectStore::Transaction (as opposed to one for each op in the transaction that touches the given object).  I'm not sure if you were looking at ops that had a lot of those or they were simple single-io type operations?  That would only help on the write path; I think you said you've been focusing in reads.
>>
>> > 2. Buffer class is very cpu intensive. Fixing that part will be 
>> > helping every ceph components.
>>
>> +1
>>
>> sage
>>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at  http://vger.kernel.org/majordomo-info.html



--
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: Weekly Ceph Performance Meeting Invitation
  2014-09-30 19:27 Weekly Ceph Performance Meeting Invitation Mark Nelson
                   ` (2 preceding siblings ...)
       [not found] ` <20141001183717.GA31430@oder.mch.fsc.net>
@ 2016-02-24 16:05 ` Somnath Roy
  2016-02-25  1:08   ` Mark Nelson
  3 siblings, 1 reply; 11+ messages in thread
From: Somnath Roy @ 2016-02-24 16:05 UTC (permalink / raw)
  To: Mark Nelson, ceph-devel

Mark,
Do we have a meeting today?

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, September 30, 2014 12:28 PM
To: ceph-devel@vger.kernel.org
Subject: Weekly Ceph Performance Meeting Invitation

Hi All,

I put together a bluejeans meeting for the Ceph performance meeting tomorrow at 8AM PST.  Hope to see you there!

To join the Meeting:
https://bluejeans.com/268261044

To join via Browser:
https://bluejeans.com/268261044/browser

To join with Lync:
https://bluejeans.com/268261044/lync


To join via Room System:
Video Conferencing System: bjn.vc -or- 199.48.152.152 Meeting ID: 268261044

To join via Phone:
1) Dial:
           +1 408 740 7256
           +1 888 240 2560(US Toll Free)
           +1 408 317 9253(Alternate Number)
           (see all numbers - http://bluejeans.com/numbers)
2) Enter Conference ID: 268261044

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Weekly Ceph Performance Meeting Invitation
  2016-02-24 16:05 ` Somnath Roy
@ 2016-02-25  1:08   ` Mark Nelson
  2016-02-25  1:09     ` Somnath Roy
  0 siblings, 1 reply; 11+ messages in thread
From: Mark Nelson @ 2016-02-25  1:08 UTC (permalink / raw)
  To: Somnath Roy, Mark Nelson, ceph-devel

Hi Somnath,

Several of the Red Hat folks (including me) are out at the FAST USENIX 
conference today.  I mentioned it at the meeting last week but forgot to 
send out an email.  Sorry about that!

We'll be reconvening next week, though, with two full weeks of pull 
request updates to enjoy. ;)

Thanks,
Mark

On 02/24/2016 10:05 AM, Somnath Roy wrote:
> Mark,
> Do we have meeting today ?
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Mark Nelson
> Sent: Tuesday, September 30, 2014 12:28 PM
> To: ceph-devel@vger.kernel.org
> Subject: Weekly Ceph Performance Meeting Invitation
>
> Hi All,
>
> I put together a bluejeans meeting for the Ceph performance meeting tomorrow at 8AM PST.  Hope to see you there!
>
> To join the Meeting:
> https://bluejeans.com/268261044
>
> To join via Browser:
> https://bluejeans.com/268261044/browser
>
> To join with Lync:
> https://bluejeans.com/268261044/lync
>
>
> To join via Room System:
> Video Conferencing System: bjn.vc -or- 199.48.152.152 Meeting ID: 268261044
>
> To join via Phone:
> 1) Dial:
>             +1 408 740 7256
>             +1 888 240 2560(US Toll Free)
>             +1 408 317 9253(Alternate Number)
>             (see all numbers - http://bluejeans.com/numbers)
> 2) Enter Conference ID: 268261044
>
> Mark
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: Weekly Ceph Performance Meeting Invitation
  2016-02-25  1:08   ` Mark Nelson
@ 2016-02-25  1:09     ` Somnath Roy
  0 siblings, 0 replies; 11+ messages in thread
From: Somnath Roy @ 2016-02-25  1:09 UTC (permalink / raw)
  To: Mark Nelson, Mark Nelson, ceph-devel

Yeah, I remembered after sending the mail... :-)

Thanks & Regards
Somnath

-----Original Message-----
From: Mark Nelson [mailto:mnelson@redhat.com] 
Sent: Wednesday, February 24, 2016 5:08 PM
To: Somnath Roy; Mark Nelson; ceph-devel@vger.kernel.org
Subject: Re: Weekly Ceph Performance Meeting Invitation

Hi Somnath,

Several of the Red Hat folks (including me) are out at the FAST USENIX conference today.  I mentioned it at the meeting last week but forgot to send out an email.  Sorry about that!

We'll be reconvening again next week though with 2 full weeks of pull request updates to enjoy. ;)

Thanks,
Mark

On 02/24/2016 10:05 AM, Somnath Roy wrote:
> Mark,
> Do we have meeting today ?
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org 
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Mark Nelson
> Sent: Tuesday, September 30, 2014 12:28 PM
> To: ceph-devel@vger.kernel.org
> Subject: Weekly Ceph Performance Meeting Invitation
>
> Hi All,
>
> I put together a bluejeans meeting for the Ceph performance meeting tomorrow at 8AM PST.  Hope to see you there!
>
> To join the Meeting:
> https://bluejeans.com/268261044
>
> To join via Browser:
> https://bluejeans.com/268261044/browser
>
> To join with Lync:
> https://bluejeans.com/268261044/lync
>
>
> To join via Room System:
> Video Conferencing System: bjn.vc -or- 199.48.152.152 Meeting ID: 
> 268261044
>
> To join via Phone:
> 1) Dial:
>             +1 408 740 7256
>             +1 888 240 2560(US Toll Free)
>             +1 408 317 9253(Alternate Number)
>             (see all numbers - http://bluejeans.com/numbers)
> 2) Enter Conference ID: 268261044
>
> Mark
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" 
> in the body of a message to majordomo@vger.kernel.org More majordomo 
> info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2016-02-25  1:09 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-30 19:27 Weekly Ceph Performance Meeting Invitation Mark Nelson
2014-10-01  7:00 ` Haomai Wang
2014-10-01 16:17 ` Sage Weil
2014-10-01 16:23   ` Mark Nelson
2014-10-01 17:33   ` Matt W. Benjamin
     [not found] ` <20141001183717.GA31430@oder.mch.fsc.net>
     [not found]   ` <542C4E18.8080201@redhat.com>
     [not found]     ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2784F027@SACMBXIP02.sdcorp.global.sandisk.com>
     [not found]       ` <alpine.DEB.2.00.1410011311330.31126@cobra.newdream.net>
     [not found]         ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2784F0C3@SACMBXIP02.sdcorp.global.sandisk.com>
     [not found]           ` <alpine.DEB.2.00.1410011553220.31126@cobra.newdream.net>
2014-10-02 17:39             ` FW: " Somnath Roy
2014-10-03  5:28               ` Haomai Wang
2014-10-03 22:17                 ` Somnath Roy
2016-02-24 16:05 ` Somnath Roy
2016-02-25  1:08   ` Mark Nelson
2016-02-25  1:09     ` Somnath Roy
