* Dev Meeting followup on compression
From: Igor Fedotov @ 2016-02-08 16:34 UTC (permalink / raw)
To: ceph-devel
Guys,
let me summarize what we decided regarding compression support in Ceph
during the Dev Meeting last week.
Below are possible implementation options, their pros/cons and the
conclusion.
1) Add compression support to RGW.
Pros/Cons:
+ Simple
+ Reduced inter-component traffic
- Limited to specific clients
- Will conflict with partial reads/writes, should those ever appear
Alyona Kiseleva from Mirantis (akyseleva@mirantis.com) will start
implementing this promptly. You can ask her additional questions via
e-mail or during the daily RGW standups she is planning to attend regularly.
2) Add basic compression support to BlueStore. Basic = "append only"
functionality only. A dedicated "append only" hint/flag needs to be
introduced in the object creation interface.
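As an illustration of the proposed hint, a minimal sketch follows; the
flag and function names are hypothetical placeholders, not actual Ceph
interfaces, and the eventual design may look quite different:

```python
from enum import Flag, auto

class CreateFlags(Flag):
    """Hypothetical object-creation flags for the store interface."""
    NONE = 0
    APPEND_ONLY = auto()  # caller promises never to overwrite in place

def create_object(name, flags=CreateFlags.NONE):
    # The store enables inline compression only for append-only objects,
    # where partial overwrites of compressed extents cannot occur.
    return {"name": name,
            "compress": bool(flags & CreateFlags.APPEND_ONLY)}

assert create_object("obj.1", CreateFlags.APPEND_ONLY)["compress"] is True
assert create_object("obj.2")["compress"] is False
```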
Pros/Cons:
+ Moderate complexity
+ Suits any client/PG backend
+ Good isolation from other Ceph components
- Limited applicability
- Additional 50-200% CPU load for the cluster, since we compress each
replica/EC shard independently
- No inter-component traffic savings
- Recovery procedure requires a decompress/recompress sequence
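For reference, the 50-200% range follows directly from how many bytes
get compressed independently; a quick back-of-the-envelope, assuming
compression CPU scales with bytes compressed:

```python
def extra_compression_cpu(data_multiple):
    """Extra compression CPU (%) vs. compressing the object once,
    given the total data compressed as a multiple of the object size."""
    return (data_multiple - 1.0) * 100

# 3x replication: each of the 3 replicas compresses a full copy.
assert extra_compression_cpu(3) == 200.0   # +200%

# EC 4+2: six shards, each 1/4 of the object, i.e. 1.5x the data total.
assert extra_compression_cpu(6 * 0.25) == 50.0   # +50%
```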
Mirantis (me, specifically) will start blueprint/POC creation for this
promptly. I'm planning to attend the daily RBD syncup regularly to
report on progress.
3) Add full compression support to BlueStore. This includes 2) plus
support for random object writes.
Pros/Cons:
- Severe complexity
+ Suits any client/PG backend
+ Good isolation from other Ceph components
- Additional 50-200% CPU load for the cluster, since we compress each
replica/EC shard independently
- No inter-component traffic savings
- Recovery procedure requires a decompress/recompress sequence
On hold for now. Would require insert/delete-data-range support in the store.
4) Add basic (append only) compression support at the OSD level using an
interim PGBackend.
Pros/Cons:
+ Moderate complexity
+ Suits any client/PG backend
+ Data is compressed before replication/"EC inflation"
+ Inter-component traffic savings
+ Recovery procedure doesn't need a decompress/recompress sequence
- Too tight (hence risky) integration with Ceph internals
- Limited applicability
- Bad experience with the "append only" notion for EC, and a desire to
avoid that for compression
Rejected. (Personally I'd prefer this option, with a subsequent evolution
to full RW support; the rationale is the reduced CPU load compared to
options 2 & 3.)
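To make the CPU argument for option 4 concrete, here is a toy sketch
(not Ceph code; function names are made up for illustration) contrasting
compress-per-store (options 2/3) with compress-once-before-fan-out
(option 4) under plain replication:

```python
import zlib

def store_level_write(data, num_replicas):
    # Options 2/3: each OSD's object store compresses its copy
    # independently, so compression runs once per replica.
    compress_calls = 0
    stored = []
    for _ in range(num_replicas):
        stored.append(zlib.compress(data))
        compress_calls += 1
    return stored, compress_calls

def pgbackend_level_write(data, num_replicas):
    # Option 4: compress once in the PGBackend, then replicate the
    # already-compressed payload; replicas store it verbatim.
    compressed = zlib.compress(data)
    return [compressed] * num_replicas, 1

data = b"sample payload " * 256
assert store_level_write(data, 3)[1] == 3      # three compressions
assert pgbackend_level_write(data, 3)[1] == 1  # one compression
```

The same single-compression property is what makes the traffic savings
and the cheap recovery path in the pros list possible.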
Any additional comments/suggestions are welcome.
Thanks,
Igor
* Re: Dev Meeting followup on compression
From: Ilya Dryomov @ 2016-02-08 19:23 UTC (permalink / raw)
To: Igor Fedotov; +Cc: ceph-devel
On Mon, Feb 8, 2016 at 5:34 PM, Igor Fedotov <ifedotov@mirantis.com> wrote:
> [...]
> 2) Add basic compression support to BlueStore. Basic = "append only"
> functionality only. A dedicated "append only" hint/flag needs to be
> introduced in the object creation interface.
>
> Pros/Cons:
> + Moderate complexity
> + Suits any client/PG backend
> + Good isolation from other Ceph components
> - Limited applicability
> - Additional 50-200% CPU load for the cluster, since we compress each
> replica/EC shard independently
> - No inter-component traffic savings
> - Recovery procedure requires a decompress/recompress sequence
This is for EC pools only, right? Can you elaborate on this bullet?
>
> Mirantis (me, specifically) will start blueprint/POC creation for this
> promptly. I'm planning to attend the daily RBD syncup regularly to
> report on progress.
Core standup and/or the new EC-overwrite meeting Sam is planning on
holding is probably a better place for this.
Thanks,
Ilya
* Re: Dev Meeting followup on compression
From: Igor Fedotov @ 2016-02-09 13:24 UTC (permalink / raw)
To: Ilya Dryomov; +Cc: ceph-devel
Ilya,
please find my comments inline.
On 08.02.2016 22:23, Ilya Dryomov wrote:
> On Mon, Feb 8, 2016 at 5:34 PM, Igor Fedotov <ifedotov@mirantis.com> wrote:
>> [...]
>> 2) Add basic compression support to BlueStore. Basic = "append only"
>> functionality only. A dedicated "append only" hint/flag needs to be
>> introduced in the object creation interface.
>>
>> Pros/Cons:
>> + Moderate complexity
>> + Suits any client/PG backend
>> + Good isolation from other Ceph components
>> - Limited applicability
>> - Additional 50-200% CPU load for the cluster, since we compress each
>> replica/EC shard independently
>> - No inter-component traffic savings
>> - Recovery procedure requires a decompress/recompress sequence
> This is for EC pools only, right? Can you elaborate on this bullet?
That's for any pool type. When compression takes place at the object
store level and recovery is performed at the OSD (PGBackend instance),
you need to retrieve an object replica from the store (and hence
decompress it) and subsequently compress it back when saving it to a
new store. In contrast, having compress/decompress at the PGBackend
level bypasses that decompress/compress overhead.
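As a toy illustration of the two recovery paths (not Ceph code; zlib
stands in for whatever compressor the store would use):

```python
import zlib

def recover_store_level(src_store_blob):
    # Compression lives below the OSD: recovery reads the object through
    # the source store (implicit decompress) and writes it through the
    # destination store (implicit recompress).
    plain = zlib.decompress(src_store_blob)  # decompress on read
    return zlib.compress(plain)              # recompress on write

def recover_pgbackend_level(src_blob):
    # Compression lives in the PGBackend: replicas already hold
    # compressed data, so recovery ships the blob verbatim.
    return src_blob

obj = zlib.compress(b"object payload " * 100)
assert recover_store_level(obj) == obj      # round-trips, but burns CPU
assert recover_pgbackend_level(obj) is obj  # straight copy, no codec work
```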
>> Mirantis (me, specifically) will start blueprint/POC creation for this
>> promptly. I'm planning to attend the daily RBD syncup regularly to
>> report on progress.
> Core standup and/or the new EC-overwrite meeting Sam is planning on
> holding is probably a better place for this.
Where can I find a schedule for these meetings?
> Thanks,
>
> Ilya
Regards,
Igor
* Re: Dev Meeting followup on compression
From: Samuel Just @ 2016-02-09 15:34 UTC (permalink / raw)
To: Igor Fedotov; +Cc: Ilya Dryomov, ceph-devel
Core standup would be best. I'll send you an invite.
-Sam
On Tue, Feb 9, 2016 at 5:24 AM, Igor Fedotov <ifedotov@mirantis.com> wrote:
> [...]
>
>> Core standup and/or the new EC-overwrite meeting Sam is planning on
>> holding is probably a better place for this.
>
> Where can I find a schedule for these meetings?