* increasingly large packages and longer build times
@ 2017-08-02 13:39 Alfredo Deza
  2017-08-07 14:58 ` Ken Dreyer
  0 siblings, 1 reply; 31+ messages in thread
From: Alfredo Deza @ 2017-08-02 13:39 UTC (permalink / raw)
  To: ceph-devel

The ceph-debuginfo package has continued to increase in size on almost
every release, reaching 1.5GB for the latest luminous RC (12.1.2).

To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.

Having packages that large is problematic on a few fronts:

* Building development packages takes longer
* Building the repositories takes longer too
* Storage gets heavily impacted on machines that host packages
* Cutting releases continues to be a long, tedious process, even with
current automation

The current release build takes about 2 hours. Building the
repositories for the release adds another hour, and then these need to
be signed and synced, which takes another hour. That is a 4-hour
process that keeps getting longer because these packages keep getting
larger.

What are the guidelines to address what gets into a package like ceph-debuginfo?

Can a process be implemented to periodically review this in case there
are things in there that aren't really needed?

Every dependency, and everything else that keeps getting added to the
source tree, is also a concern (for all the same reasons). I am just
mentioning ceph-debuginfo because it is the easiest heavyweight to
point fingers at.

If, for example, we decided that we wanted to have another dashboard
with 200K lines of CSS+JS+HTML, and that it needs to live in ceph.git,
that doesn't help any of the current issues.

Here are some ideas that could help; I look forward to anything else
that can be done here too:

* Identify packages that don't change often and could easily live in a
separate repository (ceph-deploy is a good example here)
* Implement guidelines as to what goes into packages like
ceph-debuginfo, and what needs to be trimmed out
* Include package and release maintainers in discussions that mean
adding more packages to ceph.git (or even embedding them too from
forks or submodules)


Thanks!

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: increasingly large packages and longer build times
  2017-08-02 13:39 increasingly large packages and longer build times Alfredo Deza
@ 2017-08-07 14:58 ` Ken Dreyer
  2017-08-07 15:30   ` Willem Jan Withagen
  2017-08-16 21:44   ` Gregory Farnum
  0 siblings, 2 replies; 31+ messages in thread
From: Ken Dreyer @ 2017-08-07 14:58 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-devel

On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
> The ceph-debuginfo package has continued to increase in size on almost
> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>
> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>
> Having packages that large is problematic on a few fronts:

I agree, Alfredo. Here's a similar issue I am experiencing with the source sizes:

Jewel sizes:
  14M ceph-10.2.7.tar.gz
  82M ceph-10.2.7 uncompressed

Luminous sizes:
  142M ceph-12.1.2.tar.gz
  709M ceph-12.1.2  uncompressed

This adds minutes onto the build times when we must shuffle these
large artifacts around:

- Upstream we're transferring the artifacts between Jenkins slaves and chacra
  and download.ceph.com.

- Downstream in Fedora/RHEL land we're uploading these source tars to
  dist-git's lookaside cache, and it takes a while just to upload/download.

- Downstream in Debian and Ubuntu (AFAICT) they upload the source tars to Git
  with git-buildpackage, and this increases the time it takes to even "git
  clone" these repos.

The bundled Boost alone is 474MB unpacked in 12.1.2. If we could
build Boost as a separate package (and not bundle it into ceph) it
would make it easier to manage builds upstream and downstream.

We could build a boost package in the jenkins.ceph.com infrastructure,
or the CentOS Storage SIG (for RHEL-based distros), and then start
depending on that system instead of EPEL. For Debian/Ubuntu, we could
use jenkins.ceph.com/chacra or something else - any suggestions from
Debian/Ubuntu folks?
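A rough sketch of what that switch could look like at configure time; the option name here is an assumption (Ceph's CMake has exposed a system-Boost toggle, but check the tree for the exact spelling):

```shell
# Sketch: configure Ceph against a prebuilt system/SIG Boost instead of
# the bundled copy. WITH_SYSTEM_BOOST is assumed here; verify the option
# name in the tree's CMakeLists.txt before relying on it.
mkdir -p build && cd build
cmake -DWITH_SYSTEM_BOOST=ON ..
```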

- Ken


* Re: increasingly large packages and longer build times
  2017-08-07 14:58 ` Ken Dreyer
@ 2017-08-07 15:30   ` Willem Jan Withagen
  2017-08-08  6:59     ` Fabian Grünbichler
  2017-08-16 21:44   ` Gregory Farnum
  1 sibling, 1 reply; 31+ messages in thread
From: Willem Jan Withagen @ 2017-08-07 15:30 UTC (permalink / raw)
  To: Ken Dreyer, Alfredo Deza; +Cc: ceph-devel

On 7-8-2017 16:58, Ken Dreyer wrote:
> On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
>> The ceph-debuginfo package has continued to increase in size on almost
>> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>>
>> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>>
>> Having packages that large is problematic on a few fronts:
> 
> I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
> 
> Jewel sizes:
>   14M ceph-10.2.7.tar.gz
>   82M ceph-10.2.7 uncompressed
> 
> Luminous sizes:
>   142M ceph-12.1.2.tar.gz
>   709M ceph-12.1.2  uncompressed

I'm on that same page.

This is one reason not to use the source at download.ceph.com for
FreeBSD package building. It is too big and bulky, and most of the time
the package builders already have most of the required pieces for other
packages.

The only reason for downloading the tar would be the version strings in
the package. But getting 150M just for that is rather wasteful.

My uncompressed sources to build a package are 332M, of which:
 - 172M is boost
   (which I do not actually use; I'm using the ported/packaged one)
 - 2x 25M is .tox dirs (but that is after a build/test run)
 - 18M is dpdk
   (not used either)
 - 13M is civetweb
All in all, that is about 250M of overhead, of which 190M is really not used.
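A breakdown like the one above can be reproduced mechanically; here is a small sketch (the subdirectory paths are illustrative and may not match the actual tarball layout):

```shell
# List the biggest subtrees of an unpacked source tree, largest first,
# in megabytes. Adjust the paths to whatever the tarball actually contains.
du -sm src/boost src/dpdk src/civetweb 2>/dev/null | sort -rn
```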

--WjW




* Re: increasingly large packages and longer build times
  2017-08-07 15:30   ` Willem Jan Withagen
@ 2017-08-08  6:59     ` Fabian Grünbichler
  2017-08-08  7:29       ` Willem Jan Withagen
  0 siblings, 1 reply; 31+ messages in thread
From: Fabian Grünbichler @ 2017-08-08  6:59 UTC (permalink / raw)
  To: Willem Jan Withagen; +Cc: Ken Dreyer, Alfredo Deza, ceph-devel

On Mon, Aug 07, 2017 at 05:30:06PM +0200, Willem Jan Withagen wrote:
> On 7-8-2017 16:58, Ken Dreyer wrote:
> > On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
> >> The ceph-debuginfo package has continued to increase in size on almost
> >> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
> >>
> >> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
> >>
> >> Having packages that large is problematic on a few fronts:
> > 
> > I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
> > 
> > Jewel sizes:
> >   14M ceph-10.2.7.tar.gz
> >   82M ceph-10.2.7 uncompressed
> > 
> > Luminous sizes:
> >   142M ceph-12.1.2.tar.gz
> >   709M ceph-12.1.2  uncompressed
> 
> I'm on that same page.
> 

+1 (Proxmox VE, also building downstream packages)

Boost seems to be by far the biggest culprit - we initially built our
packages using the system boost option to save some space, but this
broke one time too often and we are now also shuffling tons of data
around for every version bump.



* Re: increasingly large packages and longer build times
  2017-08-08  6:59     ` Fabian Grünbichler
@ 2017-08-08  7:29       ` Willem Jan Withagen
  0 siblings, 0 replies; 31+ messages in thread
From: Willem Jan Withagen @ 2017-08-08  7:29 UTC (permalink / raw)
  To: Fabian Grünbichler; +Cc: Ken Dreyer, Alfredo Deza, ceph-devel

On 8-8-2017 08:59, Fabian Grünbichler wrote:
> On Mon, Aug 07, 2017 at 05:30:06PM +0200, Willem Jan Withagen wrote:
>> On 7-8-2017 16:58, Ken Dreyer wrote:
>>> On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
>>>> The ceph-debuginfo package has continued to increase in size on almost
>>>> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>>>>
>>>> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>>>>
>>>> Having packages that large is problematic on a few fronts:
>>>
>>> I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
>>>
>>> Jewel sizes:
>>>   14M ceph-10.2.7.tar.gz
>>>   82M ceph-10.2.7 uncompressed
>>>
>>> Luminous sizes:
>>>   142M ceph-12.1.2.tar.gz
>>>   709M ceph-12.1.2  uncompressed
>>
>> I'm on that same page.
>>
> 
> +1 (Proxmox VE, also building downstream packages)
> 
> Boost seems to be by far the biggest culprit - we initially built our
> packages using the system boost option to save some space, but this
> broke one time too often and we are now also shuffling tons of data
> around for every version bump.

I have not (yet) run into that problem.
Perhaps that is because the package maintainer for FreeBSD does a good job.

The next question would be the install location for the libboost files,
because they need to stay out of the way of the regular boost files. But
then linking any Ceph code would again require extra TLC to pick up the
correct libs.
Until now I did not really think of it, but rocksdb is statically linked,
which is why it does not cause this kind of challenge. But static
linking only bloats the code size even more.

--WjW


* Re: increasingly large packages and longer build times
  2017-08-07 14:58 ` Ken Dreyer
  2017-08-07 15:30   ` Willem Jan Withagen
@ 2017-08-16 21:44   ` Gregory Farnum
  2017-08-16 22:30     ` John Spray
  2017-08-22  7:01     ` kefu chai
  1 sibling, 2 replies; 31+ messages in thread
From: Gregory Farnum @ 2017-08-16 21:44 UTC (permalink / raw)
  To: ceph-devel; +Cc: Alfredo Deza, Ken Dreyer

On Mon, Aug 7, 2017 at 7:58 AM, Ken Dreyer <kdreyer@redhat.com> wrote:
> On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
>> The ceph-debuginfo package has continued to increase in size on almost
>> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>>
>> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>>
>> Having packages that large is problematic on a few fronts:
>
> I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
>
> Jewel sizes:
>   14M ceph-10.2.7.tar.gz
>   82M ceph-10.2.7 uncompressed
>
> Luminous sizes:
>   142M ceph-12.1.2.tar.gz
>   709M ceph-12.1.2  uncompressed
>
> This adds minutes onto the build times when we must shuffle these
> large artifacts around:
>
> - Upstream we're transferring the artifacts between Jenkins slaves and chacra
>   and download.ceph.com.
>
> - Downstream in Fedora/RHEL land we're uploading these source tars to
>   dist-git's lookaside cache, and it takes a while just to upload/download.
>
> - Downstream in Debian and Ubuntu (AFAICT) they upload the source tars to Git
>   with git-buildpackage, and this increases the time it takes to even "git
>   clone" these repos.
>
> The bundled Boost alone is 474MB unpacked in 12.1.2. If we could
> build Boost as a separate package (and not bundle it into ceph) it
> would make it easier to manage builds upstream and downstream.
>
> We could build a boost package in the jenkins.ceph.com infrastructure,
> or the CentOS Storage SIG (for RHEL-based distros), and then start
> depending on that system instead of EPEL. For Debian/Ubuntu, we could
> use jenkins.ceph.com/chacra or something else - any suggestions from
> Debian/Ubuntu folks?

I spent some time talking to Ken and Alfredo today to try and work
their concerns into something understandable by happily
package-building-unaware developers like myself. I've tried to distill
that conversation into the points below:

1) They would *love* it if we started relying more on "external"
packages and less on in-tree source, even if our packaging team is
responsible for maintaining them.

2) The actual size of a full source checkout is a real problem when
building 600 packages a day (as our systems are). If we can cut it
down, we can get dev packages built more quickly!
The biggest contributors anybody isolated are boost and inclusions
like the web dev stuff for ceph-mgr. (I'm making no promises for him,
but it sounded like Ken was going to investigate/push against the
boost wall a bit more.)

3) ceph-debuginfo (and the .deb equivalents) are ginormous (so much so
that they require special configuration of our package-serving
infrastructure)

Don't have much to say about (1) in isolation.

As far as (2) goes, it's really convenient from a dev perspective to
have one git checkout and its submodules to deal with, instead of
needing to install a bunch of packages. But we already have our
install-deps and we don't seem to update many of the dependencies that
often. How much would it hurt to split out stuff into separate
ceph-dev-* repos and packages we rely on? (We could probably even do
separate ones for each Ceph release stream?) We do sometimes update
the submodule and add an interface jump concurrent with that, but I
don't think it's really often. Is it feasible from both sides to
instead change what package version we depend on, and to start
building a new package?

On (3), there are a few causes. One is that we just have a lot of
code. But a far bigger impact seems to come from all the ceph_test_*
binaries and other things which we have statically linked with
ceph-common et al. There are two approaches we can take there: we can
figure out how to dynamically link them (which I haven't been involved
in but recall being difficult — though static linking has also caused
us other issues over the years that it would be good to resolve);
separately, we can be more picky about what debug info we actually put into
ceph-debuginfo. We have a giant ceph-tests package that mixes up both
the test binaries and very disaster-recovery-helpful stuff like
ceph-objectstore-tool. If we could better segregate those, we can at
least avoid distributing them to users. (We would probably still want
debuginfo for the ceph-tests packages because we run them in
teuthology. But I assume just splitting it would still do some good.)

Hopefully that helps other people understand some of what we're all
dealing with. :)
-Greg


* Re: increasingly large packages and longer build times
  2017-08-16 21:44   ` Gregory Farnum
@ 2017-08-16 22:30     ` John Spray
  2017-08-21 13:28       ` Alfredo Deza
  2017-08-22  7:01     ` kefu chai
  1 sibling, 1 reply; 31+ messages in thread
From: John Spray @ 2017-08-16 22:30 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel, Alfredo Deza, Ken Dreyer

On Wed, Aug 16, 2017 at 10:44 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
> On Mon, Aug 7, 2017 at 7:58 AM, Ken Dreyer <kdreyer@redhat.com> wrote:
>> On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
>>> The ceph-debuginfo package has continued to increase in size on almost
>>> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>>>
>>> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>>>
>>> Having packages that large is problematic on a few fronts:
>>
>> I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
>>
>> Jewel sizes:
>>   14M ceph-10.2.7.tar.gz
>>   82M ceph-10.2.7 uncompressed
>>
>> Luminous sizes:
>>   142M ceph-12.1.2.tar.gz
>>   709M ceph-12.1.2  uncompressed
>>
>> This adds minutes onto the build times when we must shuffle these
>> large artifacts around:
>>
>> - Upstream we're transferring the artifacts between Jenkins slaves and chacra
>>   and download.ceph.com.
>>
>> - Downstream in Fedora/RHEL land we're uploading these source tars to
>>   dist-git's lookaside cache, and it takes a while just to upload/download.
>>
>> - Downstream in Debian and Ubuntu (AFAICT) they upload the source tars to Git
>>   with git-buildpackage, and this increases the time it takes to even "git
>>   clone" these repos.
>>
>> The bundled Boost alone is 474MB unpacked in 12.1.2. If we could
>> build Boost as a separate package (and not bundle it into ceph) it
>> would make it easier to manage builds upstream and downstream.
>>
>> We could build a boost package in the jenkins.ceph.com infrastructure,
>> or the CentOS Storage SIG (for RHEL-based distros), and then start
>> depending on that system instead of EPEL. For Debian/Ubuntu, we could
>> use jenkins.ceph.com/chacra or something else - any suggestions from
>> Debian/Ubuntu folks?
>
> I spent some time talking to Ken and Alfredo today to try and work
> their concerns into something understandable by happily
> package-building-unaware developers like myself. I've tried to distill
> that conversation into the points below:
>
> 1) They would *love* it if we started relying more on "external"
> packages and less on in-tree source, even if our packaging team is
> responsible for maintaining them.
>
> 2) The actual size of a full source checkout is an actual problem when
> building 600 packages a day (our systems are). If we can cut it down,
> we can get dev packages built more quickly!
> The biggest contributors anybody isolated are boost and inclusions
> like the web dev stuff for ceph-mgr. (I'm making no promises for him,
> but it sounded like Ken was going to investigate/push against the
> boost wall a bit more.)

I don't want to divert too much from the main points about boost (I'd
also like it if we didn't build our own; it slows down dev builds too)
and debuginfo packages (probably no silver bullet there, but worth
investigating whether there are tweaks), but the dashboard has been
brought up twice now, so I feel the need to defend it a bit.

I looked into this briefly after the original email that started this
thread, and the dashboard/static part was 24MB in total (less after
https://github.com/ceph/ceph/pull/16762).  It's pretty tiny compared
with the overall weight of the C++ binaries.  For comparison, those
dashboard files are less than 10% the size of just the ceph-mds
executable when built with debug symbols (based on a quick look at my
locally built binaries).

Are these files seriously causing build problems, or is the dashboard
being brought up as more of a "slippery slope" type of point about
including new functionality in the ceph repository?

John

>
> 3) ceph-debuginfo (and the .deb equivalents) are ginormous enough (so
> much so that it requires special configuration of our package serving
> infrastructure)
>
> Don't have much to say about (1) in isolation.
>
> As far as (2) goes, it's really convenient from a dev perspective to
> have one git checkout and its submodules to deal with, instead of
> needing to install a bunch of packages. But we already have our
> install-deps and we don't seem to update many of the dependencies that
> often. How much would it hurt to split out stuff into separate
> ceph-dev-* repos and packages we rely on? (We could probably even do
> separate ones for each Ceph release stream?) We do sometimes update
> the submodule and add an interface jump concurrent with that, but I
> don't think it's really often. Is it feasible from both sides to
> instead change what package version we depend on, and to start
> building a new package?
>
> On (3), there are a few causes. One is that we just have a lot of
> code. But a far bigger impact seems to come from all the ceph_test_*
> binaries and other things which we have statically linked with
> ceph-common et al. There are two approaches we can take there: we can
> figure out how to dynamically link them (which I haven't been involved
> in but recall being difficult — but also have caused other issues to
> us over the years that it would be good to resolve); separately we can
> be more picky about what debug info we actually put into
> ceph-debuginfo. We have a giant ceph-tests package that mixes up both
> the test binaries and very disaster-recovery-helpful stuff like
> ceph-objectstore-tool. If we could better segregate those, we can at
> least avoid distributing them to users. (We would probably still want
> debuginfo for the ceph-tests packages because we run them in
> teuthology. But I assume just splitting it would still do some good.)
>
> Hopefully that helps other people understand some of what we're all
> dealing with. :)
> -Greg
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: increasingly large packages and longer build times
  2017-08-16 22:30     ` John Spray
@ 2017-08-21 13:28       ` Alfredo Deza
  0 siblings, 0 replies; 31+ messages in thread
From: Alfredo Deza @ 2017-08-21 13:28 UTC (permalink / raw)
  To: John Spray; +Cc: Gregory Farnum, ceph-devel, Ken Dreyer

On Wed, Aug 16, 2017 at 6:30 PM, John Spray <jspray@redhat.com> wrote:
> On Wed, Aug 16, 2017 at 10:44 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>> On Mon, Aug 7, 2017 at 7:58 AM, Ken Dreyer <kdreyer@redhat.com> wrote:
>>> On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
>>>> The ceph-debuginfo package has continued to increase in size on almost
>>>> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>>>>
>>>> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>>>>
>>>> Having packages that large is problematic on a few fronts:
>>>
>>> I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
>>>
>>> Jewel sizes:
>>>   14M ceph-10.2.7.tar.gz
>>>   82M ceph-10.2.7 uncompressed
>>>
>>> Luminous sizes:
>>>   142M ceph-12.1.2.tar.gz
>>>   709M ceph-12.1.2  uncompressed
>>>
>>> This adds minutes onto the build times when we must shuffle these
>>> large artifacts around:
>>>
>>> - Upstream we're transferring the artifacts between Jenkins slaves and chacra
>>>   and download.ceph.com.
>>>
>>> - Downstream in Fedora/RHEL land we're uploading these source tars to
>>>   dist-git's lookaside cache, and it takes a while just to upload/download.
>>>
>>> - Downstream in Debian and Ubuntu (AFAICT) they upload the source tars to Git
>>>   with git-buildpackage, and this increases the time it takes to even "git
>>>   clone" these repos.
>>>
>>> The bundled Boost alone is 474MB unpacked in 12.1.2. If we could
>>> build Boost as a separate package (and not bundle it into ceph) it
>>> would make it easier to manage builds upstream and downstream.
>>>
>>> We could build a boost package in the jenkins.ceph.com infrastructure,
>>> or the CentOS Storage SIG (for RHEL-based distros), and then start
>>> depending on that system instead of EPEL. For Debian/Ubuntu, we could
>>> use jenkins.ceph.com/chacra or something else - any suggestions from
>>> Debian/Ubuntu folks?
>>
>> I spent some time talking to Ken and Alfredo today to try and work
>> their concerns into something understandable by happily
>> package-building-unaware developers like myself. I've tried to distill
>> that conversation into the points below:
>>
>> 1) They would *love* it if we started relying more on "external"
>> packages and less on in-tree source, even if our packaging team is
>> responsible for maintaining them.
>>
>> 2) The actual size of a full source checkout is an actual problem when
>> building 600 packages a day (our systems are). If we can cut it down,
>> we can get dev packages built more quickly!
>> The biggest contributors anybody isolated are boost and inclusions
>> like the web dev stuff for ceph-mgr. (I'm making no promises for him,
>> but it sounded like Ken was going to investigate/push against the
>> boost wall a bit more.)
>
> I don't want to divert too much from the main points about boost (I'd
> also like if we didn't build our own, it slows down dev builds too)
> and debuginfo packages (probably no silver bullet but worth
> investigating if there are tweaks), but the dashboard has been brought
> up twice now so I feel the need to defend it a bit.
>
> I looked into this briefly after the original email that started this
> thread, and the dashboard/static part was 24MB in total (less after
> https://github.com/ceph/ceph/pull/16762).  It's pretty tiny compared
> with the overall weight of the C++ binaries.  For comparison, those
> dashboard files are less than 10% the size of just the ceph-mds
> executable when built with debug symbols (based on a quick look at my
> locally built binaries).
>
> Are these files seriously causing build problems, or is the dashboard
> being brought up as more of a "slippery slope" type of point about
> including new functionality in the ceph repository?
>

As a general packaging rule, one just doesn't embed libraries like this.
Most (all?) distributions frown upon doing it, and explicitly ask that
dependencies be declared upfront. For example, with jQuery embedded, who
is going to make sure that the included version gets updated when a
security vulnerability is found?

A few have been found for JQuery in the past, including one earlier
this year: https://www.cvedetails.com/vulnerability-list/vendor_id-6538/Jquery.html

There is no need to embed something like jQuery when it could very
well be a package. This is a general packaging best practice.
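Concretely, the alternative to embedding is declaring the dependency in the packaging metadata. A hypothetical debian/control fragment (Debian ships jQuery as the libjs-jquery package):

```
Depends: libjs-jquery, ${misc:Depends}
```

The RPM side would be an analogous Requires: on the distro's jQuery package; either way, security updates then come from the distribution rather than from ceph.git.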

You mentioned that "it's pretty tiny", but if we try to follow best
practices across the board (like not embedding/vendoring in this case),
it improves the overall situation with the other, larger libraries as
well, makes Ceph packaging friendlier, and allows build, test, and
deployment systems to be faster and as granular as they need to be.

I am excited to see new functionality in Ceph, just not with the idea
of everything having to co-exist in ceph.git.

> John
>
>>
>> 3) ceph-debuginfo (and the .deb equivalents) are ginormous enough (so
>> much so that it requires special configuration of our package serving
>> infrastructure)
>>
>> Don't have much to say about (1) in isolation.
>>
>> As far as (2) goes, it's really convenient from a dev perspective to
>> have one git checkout and its submodules to deal with, instead of
>> needing to install a bunch of packages. But we already have our
>> install-deps and we don't seem to update many of the dependencies that
>> often. How much would it hurt to split out stuff into separate
>> ceph-dev-* repos and packages we rely on? (We could probably even do
>> separate ones for each Ceph release stream?) We do sometimes update
>> the submodule and add an interface jump concurrent with that, but I
>> don't think it's really often. Is it feasible from both sides to
>> instead change what package version we depend on, and to start
>> building a new package?
>>
>> On (3), there are a few causes. One is that we just have a lot of
>> code. But a far bigger impact seems to come from all the ceph_test_*
>> binaries and other things which we have statically linked with
>> ceph-common et al. There are two approaches we can take there: we can
>> figure out how to dynamically link them (which I haven't been involved
>> in but recall being difficult — but also have caused other issues to
>> us over the years that it would be good to resolve); separately we can
>> be more picky about what debug info we actually put into
>> ceph-debuginfo. We have a giant ceph-tests package that mixes up both
>> the test binaries and very disaster-recovery-helpful stuff like
>> ceph-objectstore-tool. If we could better segregate those, we can at
>> least avoid distributing them to users. (We would probably still want
>> debuginfo for the ceph-tests packages because we run them in
>> teuthology. But I assume just splitting it would still do some good.)
>>
>> Hopefully that helps other people understand some of what we're all
>> dealing with. :)
>> -Greg


* Re: increasingly large packages and longer build times
  2017-08-16 21:44   ` Gregory Farnum
  2017-08-16 22:30     ` John Spray
@ 2017-08-22  7:01     ` kefu chai
  2017-08-22  8:27       ` Nathan Cutler
                         ` (2 more replies)
  1 sibling, 3 replies; 31+ messages in thread
From: kefu chai @ 2017-08-22  7:01 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel, Alfredo Deza, Ken Dreyer

On Thu, Aug 17, 2017 at 5:44 AM, Gregory Farnum <gfarnum@redhat.com> wrote:
> On Mon, Aug 7, 2017 at 7:58 AM, Ken Dreyer <kdreyer@redhat.com> wrote:
>> On Wed, Aug 2, 2017 at 7:39 AM, Alfredo Deza <adeza@redhat.com> wrote:
>>> The ceph-debuginfo package has continued to increase in size on almost
>>> every release, reaching 1.5GB for the latest luminous RC (12.1.2).
>>>
>>> To contrast that, the latest ceph-debuginfo in Hammer was about 0.73GB.
>>>
>>> Having packages that large is problematic on a few fronts:
>>
>> I agree Alfredo. Here's a similar issue I am experiencing with the source sizes:
>>
>> Jewel sizes:
>>   14M ceph-10.2.7.tar.gz
>>   82M ceph-10.2.7 uncompressed
>>
>> Luminous sizes:
>>   142M ceph-12.1.2.tar.gz
>>   709M ceph-12.1.2  uncompressed
>>
>> This adds minutes onto the build times when we must shuffle these
>> large artifacts around:
>>
>> - Upstream we're transferring the artifacts between Jenkins slaves and chacra
>>   and download.ceph.com.
>>
>> - Downstream in Fedora/RHEL land we're uploading these source tars to
>>   dist-git's lookaside cache, and it takes a while just to upload/download.
>>
>> - Downstream in Debian and Ubuntu (AFAICT) they upload the source tars to Git
>>   with git-buildpackage, and this increases the time it takes to even "git
>>   clone" these repos.
>>
>> The bundled Boost alone is 474MB unpacked in 12.1.2. If we could
>> build Boost as a separate package (and not bundle it into ceph) it
>> would make it easier to manage builds upstream and downstream.
>>
>> We could build a boost package in the jenkins.ceph.com infrastructure,
>> or the CentOS Storage SIG (for RHEL-based distros), and then start
>> depending on that system instead of EPEL. For Debian/Ubuntu, we could
>> use jenkins.ceph.com/chacra or something else - any suggestions from
>> Debian/Ubuntu folks?

We will need to build the full set of boost packages, including static
boost libraries and devel packages for all supported distros, and link
against them in our own CI and release process. Or we could build the
dynamic boost libraries, install them into $prefix/lib/ceph, and link
all executables against them at run time.

My main concern would be downstream: how shall we accommodate
downstream packaging? For example, what if the boost package
maintainers of SUSE/Fedora/Debian/Ubuntu are not ready to package the
boost version we want to use in the future?

But as long as we don't require a newer boost to build, we are safe on
Debian and Ubuntu at the moment, since boost 1.61 is required for
building ceph, and both Debian unstable and Ubuntu artful package
boost v1.62.

>
> I spent some time talking to Ken and Alfredo today to try and work
> their concerns into something understandable by happily
> package-building-unaware developers like myself. I've tried to distill
> that conversation into the points below:
>
> 1) They would *love* it if we started relying more on "external"
> packages and less on in-tree source, even if our packaging team is
> responsible for maintaining them.
>
> 2) The sheer size of a full source checkout is a real problem when our
> systems are building 600 packages a day. If we can cut it down, we can
> get dev packages built more quickly!
> The biggest contributors anybody isolated are boost and inclusions
> like the web dev stuff for ceph-mgr. (I'm making no promises for him,
> but it sounded like Ken was going to investigate/push against the
> boost wall a bit more.)
>
> 3) ceph-debuginfo (and the .deb equivalents) are ginormous (so much
> so that they require special configuration of our package-serving
> infrastructure).
>
> Don't have much to say about (1) in isolation.
>
> As far as (2) goes, it's really convenient from a dev perspective to
> have one git checkout and its submodules to deal with, instead of
> needing to install a bunch of packages. But we already have our
> install-deps and we don't seem to update many of the dependencies that
> often. How much would it hurt to split out stuff into separate
> ceph-dev-* repos and packages we rely on? (We could probably even do
> separate ones for each Ceph release stream?) We do sometimes update
> the submodule and add an interface bump concurrent with that, but I
> don't think that happens very often. Is it feasible from both sides to
> instead change what package version we depend on, and to start
> building a new package?
>
> On (3), there are a few causes. One is that we just have a lot of
> code. But a far bigger impact seems to come from all the ceph_test_*
> binaries and other things which we have statically linked with
> ceph-common et al. There are two approaches we can take there: we can
> figure out how to dynamically link them (which I haven't been involved
> in but recall being difficult — but also have caused other issues to

Actually, almost all of the ceph_test_* binaries link against
libceph-common dynamically; it is libglobal and libos that they link
statically.
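That split is easy to confirm with standard binutils tools; a small sketch (the Ceph binary path is an assumption):

```shell
# A library that is linked dynamically shows up in the loader's list of
# shared-object dependencies; statically linked code does not, because
# it was copied into the binary (and into its debuginfo) at link time.
ldd /bin/sh        # lists the shared libraries /bin/sh will load

# For a Ceph build tree, the equivalent check would be something like
# (path assumed):
#   ldd build/bin/ceph_test_objectstore | grep ceph-common
```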

> us over the years that it would be good to resolve); separately we can
> be more picky about what debug info we actually put into
> ceph-debuginfo. We have a giant ceph-tests package that mixes up both
> the test binaries and very disaster-recovery-helpful stuff like
> ceph-objectstore-tool. If we could better segregate those, we can at
> least avoid distributing them to users. (We would probably still want
> debuginfo for the ceph-tests packages because we run them in
> teuthology. But I assume just splitting it would still do some good.)
>
> Hopefully that helps other people understand some of what we're all
> dealing with. :)
> -Greg
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Regards
Kefu Chai

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: increasingly large packages and longer build times
  2017-08-22  7:01     ` kefu chai
@ 2017-08-22  8:27       ` Nathan Cutler
  2017-08-22 13:35         ` kefu chai
  2017-08-23 14:53       ` Ken Dreyer
  2017-10-27  3:21       ` kefu chai
  2 siblings, 1 reply; 31+ messages in thread
From: Nathan Cutler @ 2017-08-22  8:27 UTC (permalink / raw)
  To: kefu chai, Gregory Farnum; +Cc: ceph-devel, Alfredo Deza, Ken Dreyer

> my main concern would be the downstream. how shall we accommodate the
> packaging of downstream? for example, what if the boost package
> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
> boost version we want to use in future?
> 
> but as long as we don't require newer boost to build, we are safe on
> debian and ubuntu at this moment. as boost 1.61 is required for
> building ceph, and both debian unstable and ubuntu artful package
> boost v1.62.

The very latest cutting-edge versions of the distros may ship boost >= 
1.61 but the stable versions most likely do not.

For luminous we (SUSE) need to support the latest stable versions of 
openSUSE and SLE, i.e. Leap 42.3 and SLE-12-SP3. Both of these come with 
boost 1.54.

Nathan


* Re: increasingly large packages and longer build times
  2017-08-22  8:27       ` Nathan Cutler
@ 2017-08-22 13:35         ` kefu chai
  2017-08-22 13:52           ` Matt Benjamin
  2017-08-22 18:58           ` Alfredo Deza
  0 siblings, 2 replies; 31+ messages in thread
From: kefu chai @ 2017-08-22 13:35 UTC (permalink / raw)
  To: Nathan Cutler; +Cc: Gregory Farnum, ceph-devel, Alfredo Deza, Ken Dreyer

On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>> my main concern would be the downstream. how shall we accommodate the
>> packaging of downstream? for example, what if the boost package
>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>> boost version we want to use in future?
>>
>> but as long as we don't require newer boost to build, we are safe on
>> debian and ubuntu at this moment. as boost 1.61 is required for
>> building ceph, and both debian unstable and ubuntu artful package
>> boost v1.62.
>
>
> The very latest cutting-edge versions of the distros may ship boost >= 1.61
> but the stable versions most likely do not.

Yeah, but IIRC Debian stable does not accept new packages unless they
contain critical bug fixes; new packages go to the unstable or
experimental distribution first. So, presumably, Debian will be fine. I
guess Ubuntu uses a similar strategy for including packages in its LTS
distros.

>
> For luminous we (SUSE) need to support the latest stable versions of
> openSUSE and SLE, i.e. Leap 42.3 and SLE-12-SP3. Both of these come with
> boost 1.54.

yeah, then we need to package libboost in one way or another.

>
> Nathan



-- 
Regards
Kefu Chai


* Re: increasingly large packages and longer build times
  2017-08-22 13:35         ` kefu chai
@ 2017-08-22 13:52           ` Matt Benjamin
  2017-08-22 14:09             ` Willem Jan Withagen
  2017-08-22 18:58           ` Alfredo Deza
  1 sibling, 1 reply; 31+ messages in thread
From: Matt Benjamin @ 2017-08-22 13:52 UTC (permalink / raw)
  To: kefu chai
  Cc: Nathan Cutler, Gregory Farnum, ceph-devel, Alfredo Deza, Ken Dreyer

++kefu

Matt

On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>> my main concern would be the downstream. how shall we accommodate the
>>> packaging of downstream? for example, what if the boost package
>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>> boost version we want to use in future?
>>>
>>> but as long as we don't require newer boost to build, we are safe on
>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>> building ceph, and both debian unstable and ubuntu artful package
>>> boost v1.62.
>>
>>
>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>> but the stable versions most likely do not.
>
> yeah. but, IIRC, debian stable does not accepts new packages unless
> they contain critical bug fixes. the new packages will go to the
> unstable or experimental distribution first. so, presumably, debian
> will be fine. guess ubuntu is using similar strategy for including
> packages in its LTS distros.
>
>>
>> For luminous we (SUSE) need to support the latest stable versions of
>> openSUSE and SLE, i.e. Leap 42.3 and SLE-12-SP3. Both of these come with
>> boost 1.54.
>
> yeah, then we need to package libboost in one way or another.
>
>>
>> Nathan
>
>
>
> --
> Regards
> Kefu Chai



-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309


* Re: increasingly large packages and longer build times
  2017-08-22 13:52           ` Matt Benjamin
@ 2017-08-22 14:09             ` Willem Jan Withagen
  2017-08-22 15:26               ` kefu chai
  0 siblings, 1 reply; 31+ messages in thread
From: Willem Jan Withagen @ 2017-08-22 14:09 UTC (permalink / raw)
  To: Matt Benjamin, kefu chai
  Cc: Nathan Cutler, Gregory Farnum, ceph-devel, Alfredo Deza, Ken Dreyer

On 22-8-2017 15:52, Matt Benjamin wrote:
> ++kefu
> 
> Matt
> 
> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>>> my main concern would be the downstream. how shall we accommodate the
>>>> packaging of downstream? for example, what if the boost package
>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>>> boost version we want to use in future?
>>>>
>>>> but as long as we don't require newer boost to build, we are safe on
>>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>>> building ceph, and both debian unstable and ubuntu artful package
>>>> boost v1.62.
>>>
>>>
>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>>> but the stable versions most likely do not.

For FreeBSD I use the ports version = 1.64.
Thus far no problems.

And since we are talking about boost....

How do I prevent `git submodule` from downloading all this Boost stuff
that I'm not going to use anyway? Downloading this massive git tree over
and over is a waste of time and bandwidth.
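One way to get that, as a sketch (the submodule name below is an assumption about where the Boost submodule lives in ceph.git): git lets you set a submodule's update strategy to "none" in the local clone, which makes `git submodule update` skip it without any change to .gitmodules.

```shell
# Mark the submodule so updates skip it (name/path assumed):
git config submodule."src/boost".update none

# Subsequent updates leave it alone; everything else is still fetched:
git submodule update --init --recursive
```

If the submodule was already checked out, `git submodule deinit src/boost` would be an alternative way to drop it from the working tree.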

thanx,
--WjW


* Re: increasingly large packages and longer build times
  2017-08-22 14:09             ` Willem Jan Withagen
@ 2017-08-22 15:26               ` kefu chai
  2017-08-22 15:43                 ` Willem Jan Withagen
  0 siblings, 1 reply; 31+ messages in thread
From: kefu chai @ 2017-08-22 15:26 UTC (permalink / raw)
  To: Willem Jan Withagen
  Cc: Matt Benjamin, Nathan Cutler, Gregory Farnum, ceph-devel,
	Alfredo Deza, Ken Dreyer

On Tue, Aug 22, 2017 at 10:09 PM, Willem Jan Withagen <wjw@digiware.nl> wrote:
> On 22-8-2017 15:52, Matt Benjamin wrote:
>> ++kefu
>>
>> Matt
>>
>> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
>>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>>>> my main concern would be the downstream. how shall we accommodate the
>>>>> packaging of downstream? for example, what if the boost package
>>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>>>> boost version we want to use in future?
>>>>>
>>>>> but as long as we don't require newer boost to build, we are safe on
>>>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>>>> building ceph, and both debian unstable and ubuntu artful package
>>>>> boost v1.62.
>>>>
>>>>
>>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>>>> but the stable versions most likely do not.
>
> For FreeBSD I use the ports version = 1.64.
> Thus far no problems.
>
> And since we are talking about boost....
>
> How do I prevent `git submodule ` from downloading all this Boost stuff
> that I'm not going to use it anyways?
> So downloading this massive git-tree over and over, is a waste of time
> and bandwidth.

Willem, perhaps you could carry a local change that removes the submodule?

Also, FYI, I was trying to build Boost from its release tarball in
PR#15376 [0], so we wouldn't need to pull its full git repo. But I
didn't have enough bandwidth to update the "make-dist" script to
accommodate the change, so the work stopped at building Boost as an
external project.

But until we have a plan for fixing the Boost build process, I am not
sure it's advisable to continue working on removing the Boost submodule,
updating the "make-dist" script, etc.

---
[0] https://github.com/ceph/ceph/pull/15376


-- 
Regards
Kefu Chai


* Re: increasingly large packages and longer build times
  2017-08-22 15:26               ` kefu chai
@ 2017-08-22 15:43                 ` Willem Jan Withagen
  0 siblings, 0 replies; 31+ messages in thread
From: Willem Jan Withagen @ 2017-08-22 15:43 UTC (permalink / raw)
  To: kefu chai
  Cc: Matt Benjamin, Nathan Cutler, Gregory Farnum, ceph-devel,
	Alfredo Deza, Ken Dreyer

On 22-8-2017 17:26, kefu chai wrote:
> On Tue, Aug 22, 2017 at 10:09 PM, Willem Jan Withagen <wjw@digiware.nl> wrote:
>> On 22-8-2017 15:52, Matt Benjamin wrote:
>>> ++kefu
>>>
>>> Matt
>>>
>>> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
>>>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>>>>> my main concern would be the downstream. how shall we accommodate the
>>>>>> packaging of downstream? for example, what if the boost package
>>>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>>>>> boost version we want to use in future?
>>>>>>
>>>>>> but as long as we don't require newer boost to build, we are safe on
>>>>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>>>>> building ceph, and both debian unstable and ubuntu artful package
>>>>>> boost v1.62.
>>>>>
>>>>>
>>>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>>>>> but the stable versions most likely do not.
>>
>> For FreeBSD I use the ports version = 1.64.
>> Thus far no problems.
>>
>> And since we are talking about boost....
>>
>> How do I prevent `git submodule ` from downloading all this Boost stuff
>> that I'm not going to use it anyways?
>> So downloading this massive git-tree over and over, is a waste of time
>> and bandwidth.
> 
> Willem, probably you could have a local change to remove that submodule?
> 
> also, FYI, i was trying to build the boost using its release tarball
> in PR#15376 [0], so we don't need to pull the full repo of it. but i
> didn't get enough bandwidth on updating the "make-dist" script to
> accommodate the change. so the change stopped at building boost as an
> external project.
> 
> but before we have a plan for fixing the boost building process , i am
> not sure it's advisable to continue working on removing the boost
> submodule and updating the "make-dist" script, etc.

Hi Kefu,

Actually I'm not using make-dist...

I have do_freebsd.sh to get my things going.
And:
        -D WITH_SYSTEM_BOOST=ON \
works perfectly for me.

So I'd like to delete the submodule dependency, one way or another, before

    git submodule update --force --init --recursive

starts. Would removing it from .gitmodules be enough to stop the fetching?

Perhaps I could keep a .gitmodules_freebsd that I move over the original
before doing the update?

--WjW


* Re: increasingly large packages and longer build times
  2017-08-22 13:35         ` kefu chai
  2017-08-22 13:52           ` Matt Benjamin
@ 2017-08-22 18:58           ` Alfredo Deza
  2017-08-22 19:01             ` Nathan Cutler
  2017-08-24  8:41             ` kefu chai
  1 sibling, 2 replies; 31+ messages in thread
From: Alfredo Deza @ 2017-08-22 18:58 UTC (permalink / raw)
  To: kefu chai; +Cc: Nathan Cutler, Gregory Farnum, ceph-devel, Ken Dreyer

On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>> my main concern would be the downstream. how shall we accommodate the
>>> packaging of downstream? for example, what if the boost package
>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>> boost version we want to use in future?
>>>
>>> but as long as we don't require newer boost to build, we are safe on
>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>> building ceph, and both debian unstable and ubuntu artful package
>>> boost v1.62.
>>
>>
>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>> but the stable versions most likely do not.
>
> yeah. but, IIRC, debian stable does not accepts new packages unless
> they contain critical bug fixes. the new packages will go to the
> unstable or experimental distribution first. so, presumably, debian
> will be fine. guess ubuntu is using similar strategy for including
> packages in its LTS distros.

Why are you concerned with distros and whether they ship a package at
the version we need?

We publish our own repos, where we can have whatever Boost version we
need. Distro package maintainers have to decide what they can or can't
do; for us it shouldn't matter.


>
>>
>> For luminous we (SUSE) need to support the latest stable versions of
>> openSUSE and SLE, i.e. Leap 42.3 and SLE-12-SP3. Both of these come with
>> boost 1.54.
>
> yeah, then we need to package libboost in one way or another.
>
>>
>> Nathan
>
>
>
> --
> Regards
> Kefu Chai


* Re: increasingly large packages and longer build times
  2017-08-22 18:58           ` Alfredo Deza
@ 2017-08-22 19:01             ` Nathan Cutler
  2017-08-24  8:41             ` kefu chai
  1 sibling, 0 replies; 31+ messages in thread
From: Nathan Cutler @ 2017-08-22 19:01 UTC (permalink / raw)
  To: Alfredo Deza, kefu chai; +Cc: Gregory Farnum, ceph-devel, Ken Dreyer

> Why are you concerned with distros and the availability to have a
> package at the version that we need?

Clients?

Nathan


* Re: increasingly large packages and longer build times
  2017-08-22  7:01     ` kefu chai
  2017-08-22  8:27       ` Nathan Cutler
@ 2017-08-23 14:53       ` Ken Dreyer
  2017-08-24  8:30         ` kefu chai
  2017-10-27  3:21       ` kefu chai
  2 siblings, 1 reply; 31+ messages in thread
From: Ken Dreyer @ 2017-08-23 14:53 UTC (permalink / raw)
  To: kefu chai; +Cc: Gregory Farnum, ceph-devel, Alfredo Deza

On Tue, Aug 22, 2017 at 1:01 AM, kefu chai <tchaikov@gmail.com> wrote:
> my main concern would be the downstream. how shall we accommodate the
> packaging of downstream? for example, what if the boost package
> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
> boost version we want to use in future?

Fedora has had Boost 1.63.0 for a while, and f27 and f28 will have 1.64
or newer. I think we'll be OK there, because Jonathan Wakely tends to
keep it very up-to-date.

This is just brainstorming, I have nothing concrete here: For the
CentOS side, and for the RH Ceph Storage downstream layered product, I
am thinking about building an entirely separate boost RPM and just let
it override the system version that ships in RHEL 7. I've tested
rebuilding boost-1.64.0-0.8.fc28.src.rpm on RHEL in mock and it
worked. (I had to remove the fiber, numpy, and python3 pieces).
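Ken's experiment could be reproduced along these lines. The SRPM name comes from his mail, but the mock chroot name is an assumption, and the --without switches only help if the spec file exposes matching bcond conditionals; otherwise the fiber/numpy/python3 pieces have to be cut out of the spec by hand, as he describes.

```shell
# Rebuild the Fedora boost SRPM in an EL7 chroot (chroot name and
# bcond names are assumptions, not verified against the actual spec):
mock -r epel-7-x86_64 --rebuild boost-1.64.0-0.8.fc28.src.rpm \
     --without fiber --without numpy --without python3
```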

For Debian and older Ubuntu LTSs, we could do something similar:
rebuild the boost source package from unstable/artful and let those
override the OS versions.

> actually almost all ceph_test_* are linking against libceph-common.
> but they are linking against libglobal and libos statically.

Is that the root issue with the debuginfo size?

- Ken


* Re: increasingly large packages and longer build times
  2017-08-23 14:53       ` Ken Dreyer
@ 2017-08-24  8:30         ` kefu chai
  0 siblings, 0 replies; 31+ messages in thread
From: kefu chai @ 2017-08-24  8:30 UTC (permalink / raw)
  To: Ken Dreyer; +Cc: Gregory Farnum, ceph-devel, Alfredo Deza

On Wed, Aug 23, 2017 at 10:53 PM, Ken Dreyer <kdreyer@redhat.com> wrote:
> On Tue, Aug 22, 2017 at 1:01 AM, kefu chai <tchaikov@gmail.com> wrote:
>> my main concern would be the downstream. how shall we accommodate the
>> packaging of downstream? for example, what if the boost package
>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>> boost version we want to use in future?
>
> Fedora has Boost 1.63.0 for a while, and f27 and f28 will have 1.64 or
> newer. I think we'll be ok there, because Jonathan Wakely tends to
> keep that very up-to-date.
>
> This is just brainstorming, I have nothing concrete here: For the
> CentOS side, and for the RH Ceph Storage downstream layered product, I
> am thinking about building an entirely separate boost RPM and just let
> it override the system version that ships in RHEL 7. I've tested
> rebuilding boost-1.64.0-0.8.fc28.src.rpm on RHEL in mock and it
> worked. (I had to remove the fiber, numpy, and python3 pieces).
>
> For Debian and older Ubuntu LTSs, we could do something similar:
> rebuild the boost source package from unstable/artful and let those
> override the OS versions.
>
>> actually almost all ceph_test_* are linking against libceph-common.
>> but they are linking against libglobal and libos statically.
>
> Is that the root issue with the debuginfo size?

Not sure about this =(. I have not tried to root-cause the debuginfo
size issue yet.

>
> - Ken



-- 
Regards
Kefu Chai


* Re: increasingly large packages and longer build times
  2017-08-22 18:58           ` Alfredo Deza
  2017-08-22 19:01             ` Nathan Cutler
@ 2017-08-24  8:41             ` kefu chai
  2017-08-24 11:35               ` Alfredo Deza
  1 sibling, 1 reply; 31+ messages in thread
From: kefu chai @ 2017-08-24  8:41 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: Nathan Cutler, Gregory Farnum, ceph-devel, Ken Dreyer

On Wed, Aug 23, 2017 at 2:58 AM, Alfredo Deza <adeza@redhat.com> wrote:
> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>>> my main concern would be the downstream. how shall we accommodate the
>>>> packaging of downstream? for example, what if the boost package
>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>>> boost version we want to use in future?
>>>>
>>>> but as long as we don't require newer boost to build, we are safe on
>>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>>> building ceph, and both debian unstable and ubuntu artful package
>>>> boost v1.62.
>>>
>>>
>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>>> but the stable versions most likely do not.
>>
>> yeah. but, IIRC, debian stable does not accepts new packages unless
>> they contain critical bug fixes. the new packages will go to the
>> unstable or experimental distribution first. so, presumably, debian
>> will be fine. guess ubuntu is using similar strategy for including
>> packages in its LTS distros.
>
> Why are you concerned with distros and the availability to have a
> package at the version that we need?
>
> We publish our own repos, where we could have whatever boost version
> we need. Distro package maintainers have to
> decide what they can or can't do. For us it shouldn't matter.
>

I think it matters, and I believe it'd be desirable for Ceph to be
easier for downstream to package. If downstream finds Ceph too difficult
to package and gives up, that's surely not the end of the world, but it
could decrease Ceph's popularity and, in the long term, hurt Ceph.

-- 
Regards
Kefu Chai


* Re: increasingly large packages and longer build times
  2017-08-24  8:41             ` kefu chai
@ 2017-08-24 11:35               ` Alfredo Deza
  2017-08-24 13:36                 ` Sage Weil
  0 siblings, 1 reply; 31+ messages in thread
From: Alfredo Deza @ 2017-08-24 11:35 UTC (permalink / raw)
  To: kefu chai; +Cc: Nathan Cutler, Gregory Farnum, ceph-devel, Ken Dreyer

On Thu, Aug 24, 2017 at 4:41 AM, kefu chai <tchaikov@gmail.com> wrote:
> On Wed, Aug 23, 2017 at 2:58 AM, Alfredo Deza <adeza@redhat.com> wrote:
>> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
>>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>>>>> my main concern would be the downstream. how shall we accommodate the
>>>>> packaging of downstream? for example, what if the boost package
>>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>>>>> boost version we want to use in future?
>>>>>
>>>>> but as long as we don't require newer boost to build, we are safe on
>>>>> debian and ubuntu at this moment. as boost 1.61 is required for
>>>>> building ceph, and both debian unstable and ubuntu artful package
>>>>> boost v1.62.
>>>>
>>>>
>>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>>>> but the stable versions most likely do not.
>>>
>>> yeah. but, IIRC, debian stable does not accepts new packages unless
>>> they contain critical bug fixes. the new packages will go to the
>>> unstable or experimental distribution first. so, presumably, debian
>>> will be fine. guess ubuntu is using similar strategy for including
>>> packages in its LTS distros.
>>
>> Why are you concerned with distros and the availability to have a
>> package at the version that we need?
>>
>> We publish our own repos, where we could have whatever boost version
>> we need. Distro package maintainers have to
>> decide what they can or can't do. For us it shouldn't matter.
>>
>
> i think it matters. and i believe it'd be desirable if Ceph can be
> easier for downstream to package. if the downstream finds that Ceph is
> too difficult to package, and give up, that's surely not the end of
> the world. but it could decrease the popularity level of ceph, and in
> long term, it might hurt Ceph.

I fully agree here, Kefu, but we don't make it easier by embedding
libraries! Those need to be removed in distros. Most distros will *not*
allow embedding dependencies the way we do. That means that someone
(probably a package maintainer) will have to remove them, figure out
what versions are needed, and attempt to make it work with whatever
version that distro provides.

>
> --
> Regards
> Kefu Chai


* Re: increasingly large packages and longer build times
  2017-08-24 11:35               ` Alfredo Deza
@ 2017-08-24 13:36                 ` Sage Weil
  2017-08-27 22:30                   ` Brad Hubbard
  0 siblings, 1 reply; 31+ messages in thread
From: Sage Weil @ 2017-08-24 13:36 UTC (permalink / raw)
  To: Alfredo Deza
  Cc: kefu chai, Nathan Cutler, Gregory Farnum, ceph-devel, Ken Dreyer

On Thu, 24 Aug 2017, Alfredo Deza wrote:
> On Thu, Aug 24, 2017 at 4:41 AM, kefu chai <tchaikov@gmail.com> wrote:
> > On Wed, Aug 23, 2017 at 2:58 AM, Alfredo Deza <adeza@redhat.com> wrote:
> >> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
> >>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
> >>>>> my main concern would be the downstream. how shall we accommodate the
> >>>>> packaging of downstream? for example, what if the boost package
> >>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
> >>>>> boost version we want to use in future?
> >>>>>
> >>>>> but as long as we don't require newer boost to build, we are safe on
> >>>>> debian and ubuntu at this moment. as boost 1.61 is required for
> >>>>> building ceph, and both debian unstable and ubuntu artful package
> >>>>> boost v1.62.
> >>>>
> >>>>
> >>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
> >>>> but the stable versions most likely do not.
> >>>
> >>> yeah. but, IIRC, debian stable does not accepts new packages unless
> >>> they contain critical bug fixes. the new packages will go to the
> >>> unstable or experimental distribution first. so, presumably, debian
> >>> will be fine. guess ubuntu is using similar strategy for including
> >>> packages in its LTS distros.
> >>
> >> Why are you concerned with distros and the availability to have a
> >> package at the version that we need?
> >>
> >> We publish our own repos, where we could have whatever boost version
> >> we need. Distro package maintainers have to
> >> decide what they can or can't do. For us it shouldn't matter.
> >>
> >
> > i think it matters. and i believe it'd be desirable if Ceph can be
> > easier for downstream to package. if the downstream finds that Ceph is
> > too difficult to package, and give up, that's surely not the end of
> > the world. but it could decrease the popularity level of ceph, and in
> > long term, it might hurt Ceph.
> 
> I fully agree here Kefu, but we don't make it easier by embedding
> libraries! Those need to
> get removed in distros. Most distros will *not* allow embedding
> dependencies like we do. That means that someone (probably
> a package maintainer) will have to remove these, figure out what
> versions are needed, and attempt to make it
> work with whatever that distro version will provide.

There are several categories here:

Yes:
- A bunch of stuff we embed can definitely be separated out; we
embedded it only because distros didn't have packages yet (e.g., zstd).
As soon as they are packaged, the submodules can be removed.

Maybe:
- The big one, though, is Boost, which is mostly headers... only a
*tiny* bit of code can be dynamically loaded, so there is minimal
benefit to using the distro package, except that 'git clone' and the
build are slightly faster. We could use a more up-to-date distro
package, but we won't be able to put it in el7. I worry that cases like
this will be problematic: even if we build the updated package, it may
be undesirable to require users to install a newer version on their
system.

No:
- Then there is stuff that's fast-moving and new and unlikely to be 
helpful if separated out (spdk, dpdk).  The distro packages will never be 
up to date.  I'm not sure they even have any dynamically-linkable code... 
would have to check.

- And then rocksdb and civetweb are truly embedded. We sometimes have to 
carry modifications so it is a bad idea to use a distro package (our 
modifications may be incompatible with other users on the system).

Note that although distros complain about static linking, they have never 
actually taken a stand on the issue with Ceph.  *shrug*

sage


* Re: increasingly large packages and longer build times
  2017-08-24 13:36                 ` Sage Weil
@ 2017-08-27 22:30                   ` Brad Hubbard
  2017-08-30 17:17                     ` Ken Dreyer
  0 siblings, 1 reply; 31+ messages in thread
From: Brad Hubbard @ 2017-08-27 22:30 UTC (permalink / raw)
  To: Sage Weil
  Cc: Alfredo Deza, kefu chai, Nathan Cutler, Gregory Farnum,
	ceph-devel, Ken Dreyer

On Thu, Aug 24, 2017 at 11:36 PM, Sage Weil <sage@newdream.net> wrote:
> On Thu, 24 Aug 2017, Alfredo Deza wrote:
>> On Thu, Aug 24, 2017 at 4:41 AM, kefu chai <tchaikov@gmail.com> wrote:
>> > On Wed, Aug 23, 2017 at 2:58 AM, Alfredo Deza <adeza@redhat.com> wrote:
>> >> On Tue, Aug 22, 2017 at 9:35 AM, kefu chai <tchaikov@gmail.com> wrote:
>> >>> On Tue, Aug 22, 2017 at 4:27 PM, Nathan Cutler <ncutler@suse.cz> wrote:
>> >>>>> my main concern would be the downstream. how shall we accommodate the
>> >>>>> packaging of downstream? for example, what if the boost package
>> >>>>> maintainers of SuSE/fedora/debian/ubuntu are not ready to package the
>> >>>>> boost version we want to use in future?
>> >>>>>
>> >>>>> but as long as we don't require newer boost to build, we are safe on
>> >>>>> debian and ubuntu at this moment. as boost 1.61 is required for
>> >>>>> building ceph, and both debian unstable and ubuntu artful package
>> >>>>> boost v1.62.
>> >>>>
>> >>>>
>> >>>> The very latest cutting-edge versions of the distros may ship boost >= 1.61
>> >>>> but the stable versions most likely do not.
>> >>>
>> >>> yeah. but, IIRC, debian stable does not accept new packages unless
>> >>> they contain critical bug fixes. new packages go to the unstable or
>> >>> experimental distribution first. so, presumably, debian will be
>> >>> fine. i guess ubuntu uses a similar strategy for including packages
>> >>> in its LTS distros.
>> >>
>> >> Why are you concerned with distros and the availability of a
>> >> package at the version that we need?
>> >>
>> >> We publish our own repos, where we could have whatever boost version
>> >> we need. Distro package maintainers have to
>> >> decide what they can or can't do. For us it shouldn't matter.
>> >>
>> >
>> > i think it matters, and i believe it'd be desirable if Ceph were
>> > easier for downstream to package. if a downstream finds that Ceph is
>> > too difficult to package and gives up, that's surely not the end of
>> > the world, but it could decrease the popularity of Ceph and, in the
>> > long term, hurt it.
>>
>> I fully agree here Kefu, but we don't make it easier by embedding
>> libraries! Those need to get removed in distros. Most distros will
>> *not* allow embedding dependencies like we do. That means that
>> someone (probably a package maintainer) will have to remove these,
>> figure out what versions are needed, and attempt to make it work
>> with whatever that distro version will provide.
>
> There are several categories here:
>
> Yes:
> - A bunch of stuff we embed can definitely be separated out; we embedded
> only because distros didn't have packages yet (e.g., zstd).  As soon as
> they are packaged, the submodules can be removed.
>
> Maybe:
> - The big one, though, is boost, which is mostly headers... only a *tiny*
> bit of code can be dynamically loaded, so there is minimal benefit to
> using the distro package... except that 'git clone' and the build are
> slightly faster. We could use a more up to date distro package, but we
> won't be able to put it in el7.  I worry that cases like this will be
> problematic: even if we build the updated package, it may be undesirable
> to require users to install a newer version on their system.

Should we ship a ceph-boost, etc. package then that does not overwrite
distro packages (installs in a different location)?
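
One rough sketch of what that could look like on the RPM side — the
package name, version, and install path here are all hypothetical, and a
real spec would of course need %build, dependencies, and so on:

```
# Hypothetical spec fragment: install our boost runtime libs under
# /opt/ceph/lib64 so they never shadow the distro's /usr/lib64 copies.
Name:           ceph-boost
Version:        1.61.0
Release:        1%{?dist}
Summary:        Boost runtime libraries for Ceph, installed out of the way

%install
mkdir -p %{buildroot}/opt/ceph/lib64
cp -a stage/lib/libboost_*.so.* %{buildroot}/opt/ceph/lib64/

%files
/opt/ceph/lib64/libboost_*.so.*
```

The ceph binaries would then need to be linked with
-Wl,-rpath,/opt/ceph/lib64 (or an ld.so.conf.d drop-in shipped by the
package) so the loader picks up these copies without any LD_LIBRARY_PATH
games.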

>
> No:
> - Then there is stuff that's fast-moving and new and unlikely to be
> helpful if separated out (spdk, dpdk).  The distro packages will never be
> up to date.  I'm not sure they even have any dynamically-linkable code...
> would have to check.
>
> - And then rocksdb and civetweb are truly embedded. We sometimes have to
> carry modifications so it is a bad idea to use a distro package (our
> modifications may be incompatible with other users on the system).
>
> Note that although distros complain about static linking, they have never
> actually taken a stand on the issue with Ceph.  *shrug*
>
> sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Cheers,
Brad


* Re: increasingly large packages and longer build times
  2017-08-27 22:30                   ` Brad Hubbard
@ 2017-08-30 17:17                     ` Ken Dreyer
  2017-08-30 17:53                       ` John Spray
  0 siblings, 1 reply; 31+ messages in thread
From: Ken Dreyer @ 2017-08-30 17:17 UTC (permalink / raw)
  To: Brad Hubbard
  Cc: Sage Weil, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Sun, Aug 27, 2017 at 4:30 PM, Brad Hubbard <bhubbard@redhat.com> wrote:
> Should we ship a ceph-boost, etc. package then that does not overwrite
> distro packages (installs in a different location)?

On the RPM side, ceph depends on the libboost_* sonames directly. So
our ceph-boost package would need to provide those, and Yum may still
prefer it over the standard system boost.

What is the specific risk of overriding RHEL's and Ubuntu's system
boost? (Are there some common packages that users typically install on
ceph nodes that would also depend on the old system boost?)
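
To make that concrete, queries like the ones below would answer it — an
RPM-based system is assumed, and the exact package and soname strings
are illustrative:

```
# Which boost sonames does ceph actually require?
$ rpm -q --requires ceph-osd | grep '^libboost'

# Which other installed packages would break if that soname were replaced?
$ rpm -q --whatrequires 'libboost_system.so.1.53.0()(64bit)'
```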

- Ken


* Re: increasingly large packages and longer build times
  2017-08-30 17:17                     ` Ken Dreyer
@ 2017-08-30 17:53                       ` John Spray
  2017-08-30 21:59                         ` Brad Hubbard
  2017-08-30 22:07                         ` Ken Dreyer
  0 siblings, 2 replies; 31+ messages in thread
From: John Spray @ 2017-08-30 17:53 UTC (permalink / raw)
  To: Ken Dreyer
  Cc: Brad Hubbard, Sage Weil, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Wed, Aug 30, 2017 at 6:17 PM, Ken Dreyer <kdreyer@redhat.com> wrote:
> On Sun, Aug 27, 2017 at 4:30 PM, Brad Hubbard <bhubbard@redhat.com> wrote:
>> Should we ship a ceph-boost, etc. package then that does not overwrite
>> distro packages (installs in a different location)?
>
> On the RPM side, ceph depends on the libboost_* sonames directly. So
> our ceph-boost package would need to provide those, and Yum may still
> prefer it over the standard system boost.
>
> What is the specific risk of overriding RHEL's and Ubuntu's system
> boost? (Are there some common packages that users typically install on
> ceph nodes that would also depend on the old system boost?)

The thing is, our boost could easily end up being the "old" one, if
the distro is shipping security updates to theirs.  Our
higher-numbered boost packages would potentially block the distro's
updates to their lower-numbered boost packages.  If we ship our own
separate boost, then maybe Ceph is stuck with an un-patched boost, but
other applications on the system are not.

It's not necessarily intolerable, but if the goal is packaging hygiene
then it's a bit self-defeating.

John

>
> - Ken


* Re: increasingly large packages and longer build times
  2017-08-30 17:53                       ` John Spray
@ 2017-08-30 21:59                         ` Brad Hubbard
  2017-08-30 22:07                         ` Ken Dreyer
  1 sibling, 0 replies; 31+ messages in thread
From: Brad Hubbard @ 2017-08-30 21:59 UTC (permalink / raw)
  To: John Spray
  Cc: Ken Dreyer, Sage Weil, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Thu, Aug 31, 2017 at 3:53 AM, John Spray <jspray@redhat.com> wrote:
> On Wed, Aug 30, 2017 at 6:17 PM, Ken Dreyer <kdreyer@redhat.com> wrote:
>> On Sun, Aug 27, 2017 at 4:30 PM, Brad Hubbard <bhubbard@redhat.com> wrote:
>>> Should we ship a ceph-boost, etc. package then that does not overwrite
>>> distro packages (installs in a different location)?
>>
>> On the RPM side, ceph depends on the libboost_* sonames directly. So
>> our ceph-boost package would need to provide those, and Yum may still
>> prefer it over the standard system boost.
>>
>> What is the specific risk of overriding RHEL's and Ubuntu's system
>> boost? (Are there some common packages that users typically install on
>> ceph nodes that would also depend on the old system boost?)
>
> The thing is, our boost could easily end up being the "old" one, if
> the distro is shipping security updates to theirs.  Our
> higher-numbered boost packages would potentially block the distro's
> updates to their lower-numbered boost packages.  If we ship our own
> separate boost, then maybe Ceph is stuck with an un-patched boost, but
> other applications on the system are not.
>
> It's not necessarily intolerable, but if the goal is packaging hygiene
> then it's a bit self-defeating.

I'm looking into this, as well as other options to reduce the size
(hopefully). Open to any further ideas of course :)

>
> John
>
>>
>> - Ken



-- 
Cheers,
Brad


* Re: increasingly large packages and longer build times
  2017-08-30 17:53                       ` John Spray
  2017-08-30 21:59                         ` Brad Hubbard
@ 2017-08-30 22:07                         ` Ken Dreyer
  2017-08-30 23:00                           ` Brad Hubbard
  2017-08-30 23:09                           ` John Spray
  1 sibling, 2 replies; 31+ messages in thread
From: Ken Dreyer @ 2017-08-30 22:07 UTC (permalink / raw)
  To: John Spray
  Cc: Brad Hubbard, Sage Weil, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Wed, Aug 30, 2017 at 11:53 AM, John Spray <jspray@redhat.com> wrote:
> The thing is, our boost could easily end up being the "old" one, if
> the distro is shipping security updates to theirs.  Our
> higher-numbered boost packages would potentially block the distro's
> updates to their lower-numbered boost packages.  If we ship our own
> separate boost, then maybe Ceph is stuck with an un-patched boost, but
> other applications on the system are not.

That scenario is theoretically possible, and it's good that you bring
it up for consideration. I'm trying to understand the likelihood of
the effort/disruption there. Do you have specific applications in mind
that would benefit in the way you describe? Ones that require boost
and are often co-installed on Ceph nodes?

- Ken


* Re: increasingly large packages and longer build times
  2017-08-30 22:07                         ` Ken Dreyer
@ 2017-08-30 23:00                           ` Brad Hubbard
  2017-08-30 23:09                           ` John Spray
  1 sibling, 0 replies; 31+ messages in thread
From: Brad Hubbard @ 2017-08-30 23:00 UTC (permalink / raw)
  To: Ken Dreyer
  Cc: John Spray, Sage Weil, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Thu, Aug 31, 2017 at 8:07 AM, Ken Dreyer <kdreyer@redhat.com> wrote:
> On Wed, Aug 30, 2017 at 11:53 AM, John Spray <jspray@redhat.com> wrote:
>> The thing is, our boost could easily end up being the "old" one, if
>> the distro is shipping security updates to theirs.  Our
>> higher-numbered boost packages would potentially block the distro's
>> updates to their lower-numbered boost packages.  If we ship our own
>> separate boost, then maybe Ceph is stuck with an un-patched boost, but
>> other applications on the system are not.
>
> That scenario is theoretically possible, and it's good that you bring
> it up for consideration. I'm trying to understand the likelihood of
> the effort/disruption there. Do you have specific applications in mind
> that would benefit in the way you describe? Ones that require boost
> and are often co-installed on Ceph nodes?

Any solution would need to protect against this.

-- 
Cheers,
Brad


* Re: increasingly large packages and longer build times
  2017-08-30 22:07                         ` Ken Dreyer
  2017-08-30 23:00                           ` Brad Hubbard
@ 2017-08-30 23:09                           ` John Spray
  2017-08-31  2:25                             ` Sage Weil
  1 sibling, 1 reply; 31+ messages in thread
From: John Spray @ 2017-08-30 23:09 UTC (permalink / raw)
  To: Ken Dreyer
  Cc: Brad Hubbard, Sage Weil, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Wed, Aug 30, 2017 at 11:07 PM, Ken Dreyer <kdreyer@redhat.com> wrote:
> On Wed, Aug 30, 2017 at 11:53 AM, John Spray <jspray@redhat.com> wrote:
>> The thing is, our boost could easily end up being the "old" one, if
>> the distro is shipping security updates to theirs.  Our
>> higher-numbered boost packages would potentially block the distro's
>> updates to their lower-numbered boost packages.  If we ship our own
>> separate boost, then maybe Ceph is stuck with an un-patched boost, but
>> other applications on the system are not.
>
> That scenario is theoretically possible, and it's good that you bring
> it up for consideration. I'm trying to understand the likelihood of
> the effort/disruption there. Do you have specific applications in mind
> that would benefit in the way you describe? Ones that require boost
> and are often co-installed on Ceph nodes?

Lots of things depend on boost.  Naturally I don't know what
specifically people run on their Ceph servers apart from Ceph.  It's
risky to blow away distro packages in favour of our own, precisely
because of that lack of knowledge about what else is going on on the
servers.

I'm really just pointing out that there's a degree of risk that our
users would be taking on, in exchange for the (not inconsiderable)
benefit of knocking 500MB out of a fully checked out tree.

John


* Re: increasingly large packages and longer build times
  2017-08-30 23:09                           ` John Spray
@ 2017-08-31  2:25                             ` Sage Weil
  0 siblings, 0 replies; 31+ messages in thread
From: Sage Weil @ 2017-08-31  2:25 UTC (permalink / raw)
  To: John Spray
  Cc: Ken Dreyer, Brad Hubbard, Alfredo Deza, kefu chai, Nathan Cutler,
	Gregory Farnum, ceph-devel

On Thu, 31 Aug 2017, John Spray wrote:
> On Wed, Aug 30, 2017 at 11:07 PM, Ken Dreyer <kdreyer@redhat.com> wrote:
> > On Wed, Aug 30, 2017 at 11:53 AM, John Spray <jspray@redhat.com> wrote:
> >> The thing is, our boost could easily end up being the "old" one, if
> >> the distro is shipping security updates to theirs.  Our
> >> higher-numbered boost packages would potentially block the distro's
> >> updates to their lower-numbered boost packages.  If we ship our own
> >> separate boost, then maybe Ceph is stuck with an un-patched boost, but
> >> other applications on the system are not.
> >
> > That scenario is theoretically possible, and it's good that you bring
> > it up for consideration. I'm trying to understand the likelihood of
> > the effort/disruption there. Do you have specific applications in mind
> > that would benefit in the way you describe? Ones that require boost
> > and are often co-installed on Ceph nodes?
> 
> Lots of things depend on boost.  Naturally I don't know what
> specifically people run on their Ceph servers apart from Ceph.  It's
> risky to blow away distro packages in favour of our own, precisely
> because of that lack of knowledge about what else is going on on the
> servers.
> 
> I'm really just pointing out that there's a degree of risk that our
> users would be taking on, in exchange for the (not inconsiderable)
> benefit of knocking 500MB out of a fully checked out tree.

We should also keep in mind that boost isn't a very compelling 
demonstration of the advantages of shared libraries because it's 99% 
headers, with only a tiny bit of code that gets dynamically linked.  The 
main impacts of moving to a packaged boost will be (1) faster git clone 
times, (2) faster shaman builds, and (3) more annoying build dependencies 
(install-deps.sh would probably have to pull boost from a new repo source 
or something, instead of relying on distro packages like it does now?).

Are we sure that ccache is working properly?  Maybe we can improve 
turnaround times elsewhere...
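
For what it's worth, checking ccache on a build host is quick — the
cache size below is just a guess at what a tree this size needs:

```
$ ccache -s      # hit/miss counters; a near-zero hit rate means it isn't helping
$ ccache -M 25G  # the default cache size is likely too small for repeated full ceph builds
```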

sage


* Re: increasingly large packages and longer build times
  2017-08-22  7:01     ` kefu chai
  2017-08-22  8:27       ` Nathan Cutler
  2017-08-23 14:53       ` Ken Dreyer
@ 2017-10-27  3:21       ` kefu chai
  2 siblings, 0 replies; 31+ messages in thread
From: kefu chai @ 2017-10-27  3:21 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel, Alfredo Deza, Ken Dreyer

On Tue, Aug 22, 2017 at 3:01 PM, kefu chai <tchaikov@gmail.com> wrote:
> On Thu, Aug 17, 2017 at 5:44 AM, Gregory Farnum <gfarnum@redhat.com> wrote:
>> On Mon, Aug 7, 2017 at 7:58 AM, Ken Dreyer <kdreyer@redhat.com> wrote:
>>
>> On (3), there are a few causes. One is that we just have a lot of
>> code. But a far bigger impact seems to come from all the ceph_test_*
>> binaries and other things which we have statically linked with
>> ceph-common et al. There are two approaches we can take there: we can
>> figure out how to dynamically link them (which I haven't been involved
>> in but recall being difficult — but also have caused other issues to
>
> actually almost all ceph_test_* binaries link against libceph-common
> dynamically; it is libglobal and libos that they link statically.
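
(The statically linked libglobal/libos above could in principle be
flipped to shared linkage in CMake. A rough sketch — the target and
variable names here are hypothetical, and the real tree may organize
them differently:

```cmake
# Build libos as a shared library too, then point the test binaries at
# the shared variants so each binary stops embedding its own copy.
add_library(os-shared SHARED ${libos_srcs})
target_link_libraries(ceph_test_objectstore
  os-shared      # instead of the static libos
  global-shared  # instead of the static libglobal
  ceph-common)   # already linked dynamically
```
)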

hey guys,

i just realized that librados.a is linked by quite a few test
binaries. so, i am posting https://github.com/ceph/ceph/pull/18576.
it could probably help a little with reducing the size of the
debuginfo package.
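
for anyone who wants to check the effect of a change like that locally,
something along these lines works (paths and package names are
illustrative):

```
# Which files contribute most to the installed debuginfo package?
$ rpm -ql ceph-debuginfo | xargs du -h 2>/dev/null | sort -rh | head

# Does a test binary still carry its own copy of librados symbols?
$ nm -C build/bin/ceph_test_rados_api_io | grep -c 'librados::'
```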

-- 
Regards
Kefu Chai


end of thread, other threads:[~2017-10-27  3:21 UTC | newest]

Thread overview: 31+ messages
2017-08-02 13:39 increasingly large packages and longer build times Alfredo Deza
2017-08-07 14:58 ` Ken Dreyer
2017-08-07 15:30   ` Willem Jan Withagen
2017-08-08  6:59     ` Fabian Grünbichler
2017-08-08  7:29       ` Willem Jan Withagen
2017-08-16 21:44   ` Gregory Farnum
2017-08-16 22:30     ` John Spray
2017-08-21 13:28       ` Alfredo Deza
2017-08-22  7:01     ` kefu chai
2017-08-22  8:27       ` Nathan Cutler
2017-08-22 13:35         ` kefu chai
2017-08-22 13:52           ` Matt Benjamin
2017-08-22 14:09             ` Willem Jan Withagen
2017-08-22 15:26               ` kefu chai
2017-08-22 15:43                 ` Willem Jan Withagen
2017-08-22 18:58           ` Alfredo Deza
2017-08-22 19:01             ` Nathan Cutler
2017-08-24  8:41             ` kefu chai
2017-08-24 11:35               ` Alfredo Deza
2017-08-24 13:36                 ` Sage Weil
2017-08-27 22:30                   ` Brad Hubbard
2017-08-30 17:17                     ` Ken Dreyer
2017-08-30 17:53                       ` John Spray
2017-08-30 21:59                         ` Brad Hubbard
2017-08-30 22:07                         ` Ken Dreyer
2017-08-30 23:00                           ` Brad Hubbard
2017-08-30 23:09                           ` John Spray
2017-08-31  2:25                             ` Sage Weil
2017-08-23 14:53       ` Ken Dreyer
2017-08-24  8:30         ` kefu chai
2017-10-27  3:21       ` kefu chai
