* FatELF patches...
From: Ryan C. Gordon @ 2009-10-30  2:19 UTC
  To: linux-kernel


Having heard a bunch of commentary, and made a bunch of changes based on 
some really good feedback, here are my hopefully-final FatELF patches. I'm 
pretty happy with the final results. The only change since the last 
posting is that I cleaned up all the checkpatch.pl complaints (whitespace 
etc.).
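
For anyone who hasn't read the earlier postings: a FatELF file is just a
small index glued onto the front of one or more ordinary ELF images. As a
rough sketch -- the field names and layout below are illustrative guesses,
not the exact on-disk format defined by the patches:

    /*
     * Illustrative sketch of a FatELF-style container header.  The
     * field names, widths, and ordering are assumptions for exposition;
     * the posted patches define the real on-disk format.
     */
    #include <stdint.h>

    struct fatelf_record {
        uint16_t machine;       /* ELF e_machine of this embedded image */
        uint8_t  osabi;         /* ELF OSABI */
        uint8_t  osabi_version;
        uint8_t  word_size;     /* 32 or 64 */
        uint8_t  byte_order;    /* ELFDATA2LSB or ELFDATA2MSB */
        uint16_t reserved;
        uint64_t offset;        /* file offset of the embedded ELF image */
        uint64_t size;          /* size in bytes of the embedded image */
    };

    struct fatelf_header {
        uint32_t magic;         /* identifies a FatELF container */
        uint16_t version;       /* format revision */
        uint16_t num_records;   /* number of embedded ELF images */
        /* followed by num_records struct fatelf_record entries */
    };

The loader reads the index, picks the record that matches the host, and
then treats the ELF image at that record's offset as if it were the
whole file.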

What's the best way to get this moving towards the mainline? It's not 
clear to me who the binfmt_elf maintainer would be. Is this something that 
should go to Andrew Morton for the -mm tree?

--ryan.



* Re: FatELF patches...
From: Rayson Ho @ 2009-10-30  5:42 UTC
  To: Ryan C. Gordon; +Cc: linux-kernel

On Thu, Oct 29, 2009 at 9:19 PM, Ryan C. Gordon <icculus@icculus.org> wrote:
> What's the best way to get this moving towards the mainline? It's not
> clear to me who the binfmt_elf maintainer would be. Is this something that
> should go to Andrew Morton for the -mm tree?

Can we first find out whether it is safe from a legal point of view?
After the SCO v. IBM lawsuit, we should be way more careful.

Like it or not, Apple invented universal binaries in 1993, and so far
we have not been able to find any prior art...  If we integrate something
that infringes an Apple patent, then Apple could block all Linux
distributions and devices from shipping.

Rayson





* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-10-30 14:54 UTC
  To: Rayson Ho; +Cc: linux-kernel


> Can we first find out whether it is safe from a legal point of view?
> After the SCO v. IBM lawsuit, we should be way more careful.

Does anyone have a spare patent lawyer? I'm not against changing my patch 
to work around a patent, but not knowing _how_ to change it, or if it 
needs changing at all? That's maddening.

--ryan.



* Re: FatELF patches...
From: David Hagood @ 2009-11-01 19:20 UTC
  To: Ryan C. Gordon; +Cc: linux-kernel

On Thu, 2009-10-29 at 22:19 -0400, Ryan C. Gordon wrote:
> Having heard a bunch of commentary, and made a bunch of changes based on 
> some really good feedback, here are my hopefully-final FatELF patches.

I hope it's not too late for a request for consideration: if we start
having fat binaries, could one of the "binaries" be one of the "not
quite compiled code" formats like Architecture Neutral Distribution
Format (ANDF), such that, given a fat binary which does NOT support a
given CPU, you could at least in theory process the ANDF section to
create the needed target binary? Bonus points for being able to then
append the newly created section to the file.

That way you could have a binary that supported some "common" subset of
CPUs (e.g. x86,x86-64,PPC,ARM) but still run on the "not common"
processors (Alpha, MIPS, Sparc) - it would just take a bit more time to
start.

As an embedded systems guy who is looking to have to support multiple
CPU types, this is really very interesting to me.




* Re: FatELF patches...
From: Måns Rullgård @ 2009-11-01 20:28 UTC
  To: linux-kernel

David Hagood <david.hagood@gmail.com> writes:

> On Thu, 2009-10-29 at 22:19 -0400, Ryan C. Gordon wrote:
>> Having heard a bunch of commentary, and made a bunch of changes based on 
>> some really good feedback, here are my hopefully-final FatELF patches.
>
> I hope it's not too late for a request for consideration: if we start
> having fat binaries, could one of the "binaries" be one of the "not
> quite compiled code" formats like Architecture Neutral Distribution
> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

Am I the only one who sees this as nothing but bloat for its own sake?
Did I miss a massive drop in the intelligence of Linux users, causing them
to no longer be capable of picking the correct file themselves?

> As an embedded systems guy who is looking to have to support multiple
> CPU types, this is really very interesting to me.

As an embedded systems guy, I'm more concerned about precious flash
space going to waste than about some hypothetical convenience.

-- 
Måns Rullgård
mans@mansr.com



* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-01 20:40 UTC
  To: David Hagood; +Cc: linux-kernel


> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

It's not a goal of mine, but I suppose you could have an ELF OSABI for it.

I don't think it changes the FatELF kernel patch at all. I don't know much 
about ANDF, but you'd probably just want to set the ELF "interpreter" to 
something other than ld.so and do this all in userspace, and maybe add a 
change to elf_check_arch() to approve ANDF binaries...or something.
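
(For context: elf_check_arch() is the per-architecture check binfmt_elf
uses to accept or reject a binary's machine type. The sketch below shows
roughly the kind of widening meant here; ELFOSABI_ANDF is an invented
value, and the actual ANDF handling would live in the userspace
interpreter:)

    /*
     * Sketch: widening an elf_check_arch()-style predicate so that a
     * hypothetical ANDF OSABI gets handed through to a userspace
     * interpreter.  ELFOSABI_ANDF is invented for illustration.
     */
    #include <elf.h>

    #define ELFOSABI_ANDF 0x41  /* hypothetical, not a registered OSABI */

    static int andf_elf_check_arch(const Elf64_Ehdr *hdr)
    {
        if (hdr->e_machine == EM_X86_64)              /* native code */
            return 1;
        if (hdr->e_ident[EI_OSABI] == ELFOSABI_ANDF)  /* defer to userspace */
            return 1;
        return 0;
    }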

To me, ANDF is interesting in an academic sense, but not enough to spend 
effort on it. YMMV.  :)

--ryan.




* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-01 20:59 UTC
  To: Måns Rullgård; +Cc: linux-kernel


> Am I the only one who sees this as nothing but bloat for its own sake?

I posted a fairly large list of benefits here:  http://icculus.org/fatelf/

Some are more far-fetched than others, I will grant. Also, I suspect most 
people will find one benefit and ten things they don't care about, but 
that benefit is different for different people. I'm confident that the 
benefits far outweigh the size of the kernel patch.

> Did I miss a massive drop in the intelligence of Linux users, causing them
> to no longer be capable of picking the correct file themselves?

Also known as "market saturation."   :)

(But really, there are benefits beyond helping dumb people, even if 
helping dumb people wasn't a worthwhile goal in itself.)

> As an embedded systems guy, I'm more concerned about precious flash
> space going to waste than about some hypothetical convenience.

I wouldn't imagine this is the target audience for FatELF. For embedded 
devices, just use the same ELF files you've always used.

--ryan.



* Re: FatELF patches...
From: Måns Rullgård @ 2009-11-01 21:15 UTC
  To: Ryan C. Gordon; +Cc: linux-kernel

"Ryan C. Gordon" <icculus@icculus.org> writes:

>> Am I the only one who sees this as nothing but bloat for its own sake?
>
> I posted a fairly large list of benefits here:  http://icculus.org/fatelf/

I've read the list, and I can't find anything I agree with.  Honestly.

> Some are more far-fetched than others, I will grant. Also, I suspect most 
> people will find one benefit and ten things they don't care about, but 
> that benefit is different for different people. I'm confident that the 
> benefits far outweigh the size of the kernel patch.

It's not the size of the kernel patch I'm worried about.  What worries
me is the disk space needed when *all* my executables and libraries
are suddenly 3, 4, or 5 times the size they need to be.

There is also the issue of speed to launch these things.  It *has* to
be slower than executing a native file directly.

>> Did I miss a massive drop in the intelligence of Linux users, causing them
>> to no longer be capable of picking the correct file themselves?
>
> Also known as "market saturation."   :)
>
> (But really, there are benefits beyond helping dumb people, even if 
> helping dumb people wasn't a worthwhile goal in itself.)

It's far too easy to use computers already.  That's the reason for the
spam problem.

Besides, clueless users would be installing a distro, which could
easily download the correct packages automatically.  In fact, that is
what they already do.  The bootable installation media would still
need to be distributed separately, since the boot formats differ
vastly between architectures.  It is not possible to create a CD/DVD
that is bootable on multiple system types (with a few exceptions).

>> As an embedded systems guy, I'm more concerned about precious flash
>> space going to waste than about some hypothetical convenience.
>
> I wouldn't imagine this is the target audience for FatELF. For embedded 
> devices, just use the same ELF files you've always used.

Of course I will.  The question is, will everybody else?  I'm seeing
enough bloat in the embedded world as it is without handing out new
ways to make it even easier.

-- 
Måns Rullgård
mans@mansr.com


* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-01 21:35 UTC
  To: Måns Rullgård; +Cc: linux-kernel


> It's not the size of the kernel patch I'm worried about.  What worries
> me is the disk space needed when *all* my executables and libraries
> are suddenly 3, 4, or 5 times the size they need to be.

Then don't make FatELF files with 5 binaries in them. Or don't make FatELF 
files at all.

I glued two full Ubuntu installs together as a proof of concept, but I 
think if Ubuntu did this as a distribution-wide policy, then people would 
probably choose a different distribution.

Then again, I hope Ubuntu uses FatELF on a handful of binaries, and 
removes the /lib64 and /lib32 directories.

> There is also the issue of speed to launch these things.  It *has* to
> be slower than executing a native file directly.

In that there will be one extra read of 128 bytes, yes, but I'm not sure 
that's a measurable performance hit. For regular ELF files, the overhead 
is approximately one extra branch instruction. Considering that most files 
won't be FatELF, that seems like an acceptable cost.
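
To make that cost concrete, the exec-time dispatch amounts to something
like the sketch below (a userspace illustration; the magic value is made
up here, and the patch does this inside the kernel's binfmt handling,
where the first BINPRM_BUF_SIZE bytes of the file are already in hand,
rather than with pread):

    /*
     * Sketch of the extra exec-time work being described: one small
     * read, one compare on the magic.  FATELF_MAGIC is an illustrative
     * value, not necessarily the one the patches use.
     */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define FATELF_MAGIC 0x1F0E70FAu  /* illustrative */

    /* 1 = FatELF container, 0 = plain ELF (one extra compare), -1 = error */
    static int is_fatelf(int fd)
    {
        unsigned char buf[128];
        uint32_t magic;

        if (pread(fd, buf, sizeof(buf), 0) < (ssize_t)sizeof(magic))
            return -1;
        memcpy(&magic, buf, sizeof(magic));
        return magic == FATELF_MAGIC;
    }

A plain ELF file pays only the failed compare and falls through to the
normal loader path.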

> It's far too easy to use computers already.  That's the reason for the
> spam problem.

Clearly that's going to remain as a philosophical difference between us, 
so I won't waste your time trying to dissuade you.

--ryan.




* Re: FatELF patches...
From: Rayson Ho @ 2009-11-01 22:08 UTC
  To: Måns Rullgård, Ryan C. Gordon, linux-kernel

2009/11/1 Måns Rullgård <mans@mansr.com>:
> I've read the list, and I can't find anything I agree with.  Honestly.

+1.

Adding code that might bring lawsuits to Linux developers,
distributors, and users is a BIG disadvantage.

And besides the legal issues, the first point is simply not right:

"Given enough disc space, there's no reason you couldn't have one DVD
.iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system"

The boot loader is different on different systems, and in fact
differs with different firmware. A single DVD that can boot on
different hardware platforms would not be an easy thing to build.

Also, why not build the logic for picking which binary to install into
the installer? That way, users don't have half of their disk space
wasted by this FatELF thing.

IMO, the biggest problem users face is not which hardware binary
to download, but the incompatibility of different Linux kernels and glibc
(the API/ABI).

Rayson


* Re: FatELF patches...
From: Alan Cox @ 2009-11-02  0:01 UTC
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

Let's go down the list of "benefits":

- Separate downloads
	- Doesn't work. The network usage would increase dramatically
	  pulling all sorts of unneeded crap.
	- Already solved by having a packaging system (in fact FatELF is
	  basically obsoleted by packaging tools)

- Separate lib, lib32, lib64
	- So you have one file with 3 files in it rather than three files
	  with one file in them. Directories were invented for a reason
	- Makes updates bigger
	- Stops users only having 32bit libs for some packages

- Third party packagers no longer have to publish multiple rpm/deb etc
	- By vastly increasing download size
	- By making updates vastly bigger
	- Assumes data files are not dependent on binary (often not true)
	- And is irrelevant really because 90% or more of the cost is
	  testing

- You no longer need to use shell scripts and flakey logic to pick the
  right binary ...
	- Since the 1990s we've used package managers to do that instead.
	  I just type "yum install bzflag", the rest is done for me.

- The ELF OSABI for your system changes someday?
	- We already handle that

- Ship a single shared library that provides bindings for a scripting
  language and not have to worry about whether the scripting language
  itself is built for the same architecture as your bindings. 
	- Except if they don't overlap it won't run

- Ship web browser plugins that work out of the box with multiple
  platforms.
	- yum install just works, and there is a search path in firefox
	  etc

- Ship kernel drivers for multiple processors in one file.
	- Not useful see separate downloads

- Transition to a new architecture in incremental steps. 
	- IFF the CPU supports both old and new
	- and we can already do that

- Support 64-bit and 32-bit compatibility binaries in one file. 
	- Not useful as we've already seen

- No more ia32 compatibility libraries! Even if your distro
  doesn't make a complete set of FatELF binaries available, they can
  still provide it for the handful of packages you need for 99% of 32-bit
  apps you want to run on a 64-bit system. 

	- Argument against FatELF - why waste the disk space if it's rare?

- Have a CPU that can handle different byte orders? Ship one binary that
  satisfies all configurations!

	- Variant of the distribution "advantage" - same problem - its
	  better to have two files, its all about testing anyway

- Ship one file that works across Linux and FreeBSD (without a platform
  compatibility layer on either of them). 

	- Ditto

- One hard drive partition can be booted on different machines with
  different CPU architectures, for development and experimentation. Same
  root file system, different kernel and CPU architecture. 

	- Now we are getting desperate.

- Prepare your app on a USB stick for sneakernet, know it'll work on
  whatever Linux box you are likely to plug it into.

	- No I don't, because of the dependencies, architecture ordering
	  of data files, lack of testing on each platform, and the fact that
	  architecture isn't sufficient to define a platform

- Prepare your app on a network share, know it will work with all
  the workstations on your LAN. 

	- Variant of the distribution idea, again better to have multiple
	  files for updating and management, need to deal with
	  dependencies etc. Waste of storage space.
	- We have search paths, multiple mount points etc.

So why exactly do we want FatELF? It was obsoleted in the early 1990s
when architecture handling was introduced into package managers.



* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-02  1:17 UTC
  To: Rayson Ho; +Cc: Måns Rullgård, linux-kernel


> Adding code that might bring lawsuits to Linux developers,
> distributors, and users is a BIG disadvantage.

I'm tracking down a lawyer to discuss the issue. I'm surprised there 
aren't a few hanging around here, honestly. I sent a request in to the 
SFLC, and if that doesn't pan out, I'll dig for coins in my car seat to 
pay a lawyer for a few hours of her time.

If it's a big deal, we'll figure out what to do from there. But let's not 
talk about the sky falling until we get to that point, please.

> "Given enough disc space, there's no reason you couldn't have one DVD
> .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system"

I've had about a million people point out the boot loader thing. There's 
an x86/amd64 forest if you can see past the MIPS trees.

Still, I said there were different points that were more compelling for 
different individuals. I don't think this is the most compelling argument 
on that page, and I think there's a value in talking about theoretical 
benefits in addition to practical ones. Theoretical ones become practical 
the moment someone decides to roll out a company-internal distribution 
that works on all the workstations inside IBM or Google or whatever...even 
if Fedora would turn their nose up at the idea for a general-purpose 
release.

> IMO, the biggest problem users face is not which hardware binary
> to download, but the incompatibility of different Linux kernels and glibc
> (the API/ABI).

These are concerns, too, but the kernel has been, in my experience, very 
good at binary compatibility with user space back as far as I can 
remember. glibc has had some painful progress, but since NPTL stabilized a 
long time ago, even this hasn't been bad at all.

Certainly one has to be careful--I would even use the word diligent--to 
maintain binary compatibility, but this was much more of a hurting for 
application developers a decade ago.

At least, that's been my experience.

--ryan.




* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-02  2:21 UTC
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel


> So why exactly do we want FatELF? It was obsoleted in the early 1990s
> when architecture handling was introduced into package managers.

I'm not minimizing your other points by trimming down to one quote. Some 
of it I already covered, but mostly I suspect I'm talking way too much, so 
I'll spare everyone a little. I'm happy to address your other points if 
you like, though, even the one where you said I was being desperate.  :)

Most of your points are "package managers solve this problem" but they 
simply do not solve all of them.

Package managers are a _fantastic_ invention. They are a killer feature 
over other operating systems, including ones people pay way too much money 
to use. That being said, there are lots of places where using a package 
manager doesn't make sense: experimental software that might have an 
audience but isn't ready for wide adoption, software that isn't 
appropriate for an apt/yum repository, software that distros refuse to 
package but is still perfectly useful, closed-source software, and 
software that wants to work between distros that don't have 
otherwise-compatible rpm/debs (or perhaps no package manager at all).

I'm certain I'm about to get a flood of replies that say "you can make a 
cross-distro-compatible RPM if you just follow these steps" but that 
completely misses the point. Not all software comes from yum, or even from 
an .rpm, even if most of it _should_. This isn't about replacing or 
competing with apt-get or yum.

I'm certain if we made a Venn diagram, there would be an overlap. But 
FatELF solves different problems than package managers, and in the case of 
ia32 compatibility packages, it helps the package manager solve its 
problems better.

--ryan.



* Re: FatELF patches...
From: Rayson Ho @ 2009-11-02  3:27 UTC
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

On Sun, Nov 1, 2009 at 8:17 PM, Ryan C. Gordon <icculus@icculus.org> wrote:
> I'm tracking down a lawyer to discuss the issue. I'm surprised there
> aren't a few hanging around here, honestly. I sent a request in to the
> SFLC, and if that doesn't pan out, I'll dig for coins in my car seat to
> pay a lawyer for a few hours of her time.

Good! And thanks. :)

And is the lawyer specialized in patent law?


> I've had about a million people point out the boot loader thing. There's
> an x86/amd64 forest if you can see past the MIPS trees.

If it's x86 vs. AMD64, then the installer can already do most of the
work, and it can ask the user to insert the right 2nd/3rd/etc CD/DVD.


> Theoretical ones become practical
> the moment someone decides to roll out a company-internal distribution
> that works on all the workstations inside IBM or Google or whatever...even
> if Fedora would turn their nose up at the idea for a general-purpose
> release.

Don't you think that taking a CD/DVD to each workstation and starting the
installation or upgrade is so old school?

Software updates inside those companies are done over the internal
network, and it does not matter whether the DVD can handle all the
architectures or not.

And the idea of a general-purpose release might not work. As 90% of
the users are using a single architecture (I count AMD64 as x86 with
"some" extensions...), we won't get enough benefit to justify the extra
code in the kernel and in userspace. Most of the shipped commercial
binaries will be x86 anyway -- and as Alan stated, the packaging
system is already doing most of the work for us (I don't
recall providing anything except the package name when I do apt-get).

For embedded systems, people want to strip away all the fat, not ship
a single fat app.


> These are concerns, too, but the kernel has been, in my experience, very
> good at binary compatibility with user space back as far as I can
> remember. glibc has had some painful progress, but since NPTL stabilized a
> long time ago, even this hasn't been bad at all.
>
> Certainly one has to be careful--I would even use the word diligent--to
> maintain binary compatibility, but this was much more of a hurting for
> application developers a decade ago.

The kernel part refers to kernel modules.

But yes, binary compatibility was a real pain when I "really" started
using Linux in 1997 (I had played with it in 1995, but didn't much like
it at the time). However, I think the installer/package manager took out
most of the burden.

Rayson





* Re: FatELF patches...
From: Valdis.Kletnieks @ 2009-11-02  4:58 UTC
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel


On Sun, 01 Nov 2009 16:35:05 EST, "Ryan C. Gordon" said:

> I glued two full Ubuntu installs together as a proof of concept, but I 
> think if Ubuntu did this as a distribution-wide policy, then people would 
> probably choose a different distribution.

Hmm.. so let's see - people compiling stuff for themselves won't use this
feature.  And if a distro uses it, users would probably go to a different
distro.

That's a bad sign right there...

> Then again, I hope Ubuntu uses FatELF on a handful of binaries, and 
> removes the /lib64 and /lib32 directories.

Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
are using FatELF - as long as there are any binaries doing things The Old Way,
you need to keep the supporting binaries around.

> > There is also the issue of speed to launch these things.  It *has* to
> > be slower than executing a native file directly.

> In that there will be one extra read of 128 bytes, yes, but I'm not sure 
> that's a measurable performance hit. For regular ELF files, the overhead 
> is approximately one extra branch instruction. Considering that most files 
> won't be FatELF, that seems like an acceptable cost.

Don't forget you take that hit once for each shared library involved.  Plus
I'm not sure if there are hidden gotchas lurking in there (is there code that
assumes that if executable code is mmap'ed, it's only done so in one arch?
Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
in both 32 and 64 bit modes?)



* Re: FatELF patches...
From: Julien BLACHE @ 2009-11-02  6:17 UTC
  To: Ryan C. Gordon; +Cc: linux-kernel

"Ryan C. Gordon" <icculus@icculus.org> wrote:

Hi,

With my Debian Developer hat on...

> Package managers are a _fantastic_ invention. They are a killer
> feature over other operating systems, including ones people pay way
> too much money to use. That being said, there are lots of places where
> using a package manager doesn't make sense:

> experimental software that might have an audience but isn't ready for
> wide adoption

That usually ships as sources or prebuilt binaries in a tarball - target
/opt and voila! For a bigger audience you'll see a lot of experimental
stuff that gets packaged (even in quick'n'dirty mode).

> software that isn't appropriate for an apt/yum repository

Just create a repository for the damn thing if you want to distribute it
that way. There's no "appropriate / not appropriate" that applies here.

> software that distros refuse to package but is still perfectly useful

Look at what happens today. A lot of that gets packaged by third
parties, and more often than not they involve distribution
maintainers. (See debian-multimedia, PLF for Mandriva, ...)

> closed-source software

Why do we even care? Besides, commercial companies can just stop sitting
on their hands and start distributing real packages. It's no different
from rolling out a Windows Installer or Innosetup. It's packaging.

> and software that wants to work between distros that don't have 
> otherwise-compatible rpm/debs (or perhaps no package manager at all).

Tarball, /opt, static build.


And, about the /lib, /lib32, /lib64 situation on Debian and Debian-derived
systems, the solution to that is multiarch, and it's being worked
on. It's a lot better and cleaner than the fat binary kludge.

JB.

-- 
Julien BLACHE                                   <http://www.jblache.org> 
<jb@jblache.org>                                  GPG KeyID 0xF5D65169


* Re: FatELF patches...
From: David Miller @ 2009-11-02  6:27 UTC
  To: icculus; +Cc: alan, mans, linux-kernel

From: "Ryan C. Gordon" <icculus@icculus.org>
Date: Sun, 1 Nov 2009 21:21:47 -0500 (EST)

> That being said, there are lots of places where using a package 
> manager doesn't make sense:

Yeah like maybe, just maybe, in an embedded system where increasing
space costs like FatELF does makes even less sense.

I think Alan's arguments against FatELF were the most comprehensive
and detailed, and I haven't seen them refuted very well, if at all.


* Re: FatELF patches...
From: Alan Cox @ 2009-11-02  9:16 UTC
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

> I'm certain if we made a Venn diagram, there would be an overlap. But 
> FatELF solves different problems than package managers, and in the case of 
> ia32 compatibility packages, it helps the package manager solve its 
> problems better.

Not really - as I said it drives disk usage up, it drives network
bandwidth up (which is a big issue for a distro vendor) and the package
manager and file system exist to avoid this kind of mess being needed.

You can ask the same question as FatELF the other way around and it
becomes even more obvious that it's a bad idea.

Imagine you did it by name not by architecture. So you had a single
"FatDirectory" file for /bin, /sbin and /usr/bin. It means you don't have
to worry about people having different sets of binaries, it means they
are always compatible. And like FatELF it's not a very good idea.

Welcome to the invention of the directory.

Alan


* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-02 15:14 UTC
  To: Valdis.Kletnieks; +Cc: Måns Rullgård, linux-kernel


> > think if Ubuntu did this as a distribution-wide policy, then people would 
> > probably choose a different distribution.
> 
> Hmm.. so let's see - people compiling stuff for themselves won't use this
> feature.  And if a distro uses it, users would probably go to a different
> distro.

I probably wasn't clear when I said "distribution-wide policy" followed by 
a "then again." I meant there would be backlash if the distribution glued 
the whole system together, instead of just the binaries where it made 
sense to do it.

And, again, there's a third use-case besides compiling your programs and 
getting them from the package manager, and FatELF is meant to address 
that.

> Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
> are using FatELF - as long as there are any binaries doing things The Old Way,
> you need to keep the supporting binaries around.

Binaries don't refer directly to /libXX, they count on ld.so to tapdance 
on their behalf. My virtual machine example left the dirs there as 
symlinks to /lib, but they could probably just go away directly.

> Don't forget you take that hit once for each shared library involved.  Plus

That happens in user space in ld.so, so it's not a kernel problem in any 
case, but still...we're talking about, what? Twenty more branch 
instructions per process?

> I'm not sure if there are hidden gotchas lurking in there (is there code that
> assumes that if executable code is mmap'ed, it's only done so in one arch?

The current code sets up file mappings based on the offset of the desired 
ELF binary.
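
(In other words, once a record is chosen, every later file mapping just
adds that record's base offset. A simplified userspace illustration of
the offset arithmetic -- the real patch adjusts the kernel's ELF mapping
paths, and page-alignment fixups are omitted here:)

    /*
     * Sketch: mapping one PT_LOAD segment of an ELF image embedded at
     * base_off inside a FatELF container.  Userspace illustration of
     * the offset shift only; alignment handling is omitted.
     */
    #include <stdint.h>
    #include <sys/mman.h>
    #include <elf.h>

    static void *map_segment(int fd, uint64_t base_off, const Elf64_Phdr *ph)
    {
        return mmap((void *)(uintptr_t)ph->p_vaddr, ph->p_filesz,
                    PROT_READ, MAP_PRIVATE, fd,
                    (off_t)(base_off + ph->p_offset));
    }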

> Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
> in both 32 and 64 bit modes?)

Whose refcounts would this screw up? If there's a possible bug, I'd like 
to make sure it gets resolved, of course.

--ryan.



* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-02 15:32 UTC
  To: David Miller; +Cc: alan, mans, linux-kernel


> > That being said, there are lots of places where using a package 
> > manager doesn't make sense:
> 
> Yeah like maybe, just maybe, in an embedded system where increasing
> space costs like FatELF does makes even less sense.

I listed several examples. Embedded systems weren't one of them.

> I think Alan's arguments against FatELF were the most comprehensive
> and detailed, and I haven't seen them refuted very well, if at all.

I said I was trying to avoid talking everyone to death.  :)

I'll respond to them, then.

--ryan.



* Re: FatELF patches...
From: Diego Calleja @ 2009-11-02 15:40 UTC
  To: Ryan C. Gordon; +Cc: Alan Cox, Måns Rullgård, linux-kernel

On Monday 02 November 2009 03:21:47 Ryan C. Gordon wrote:
> FatELF solves different problems than package managers, and in the case of 
> ia32 compatibility packages, it helps the package manager solve its 
> problems better.

Package managers can be modified to allow embedding a package inside
another package. That could allow shipping support for multiple architectures
in a single package, and it could even do things that FatELF can't, like
in the case of experimental packages that need other experimental
dependencies: all of them could be packed in a single package, even with
support for multiple architectures. Heck, it could even be a new kind of
container that would allow packing .rpms and .debs for multiple distros
together. And it wouldn't touch a single line of kernel code.

So I don't think that FatELF is solving the problems of package managers;
it's quite the opposite.


* Re: FatELF patches...
From: Chris Adams @ 2009-11-02 16:11 UTC
  To: linux-kernel

Once upon a time, Ryan C. Gordon <icculus@icculus.org> said:
>I wouldn't imagine this is the target audience for FatELF. For embedded 
>devices, just use the same ELF files you've always used.

What _is_ the target audience?

As I see it, there are three main groups of Linux consumers:

- embedded: No interest in this; adds significant bloat, generally
  embedded systems don't allow random binaries anyway

- enterprise distributions (e.g. Red Hat, SuSE): They have specific
  supported architectures, with partner programs to support those archs.
  If something is supported, they can support all archs with
  arch-specific binaries.

- community distributions (e.g. Ubuntu, Fedora, Debian): This would
  greatly increase build infrastructure complexity, mirror disk space,
  and download bandwidth, and (from a user perspective) slow down update
  downloads significantly.

If you don't have buy-in from at least a large majority of one of these
segments, this is a big waste.  If none of the above support it, it will
not be used by any binary-only software distributors.

Is any major distribution (enterprise or community) going to use this?
If not, kill it now.

-- 
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.


* Re: FatELF patches...
From: david @ 2009-11-02 17:39 UTC
  To: Alan Cox; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

On Mon, 2 Nov 2009, Alan Cox wrote:

>> I'm certain if we made a Venn diagram, there would be an overlap. But
>> FatELF solves different problems than package managers, and in the case of
>> ia32 compatibility packages, it helps the package manager solve its
>> problems better.
>
> Not really - as I said it drives disk usage up, it drives network
> bandwidth up (which is a big issue for a distro vendor) and the package
> manager and file system exist to avoid this kind of mess being needed.

I think this depends on the particular package.

How much of the package is binary executables (which get multiplied) vs 
how much is data or scripts (which do not)?

For any individual user it will always be a larger download, but if you 
have to support more than one architecture (even 32 bit vs 64 bit x86) 
it may be smaller to have one fat package than to have two 'normal' 
packages.

Yes, the package manager could handle this by splitting the package up 
into more pieces, with some of the pieces being arch-independent, but that 
also adds complexity.

David Lang

> You can ask the same question as FatELF the other way around and it
> becomes even more obvious that it's a bad idea.
>
> Imagine you did it by name not by architecture. So you had a single
> "FatDirectory" file for /bin, /sbin and /usr/bin. It means you don't have
> to worry about people having different sets of binaries, it means they
> are always compatible. And like FatELF it's not a very good idea.
>
> Welcome to the invention of the directory.
>
> Alan


* Re: FatELF patches...
From: Alan Cox @ 2009-11-02 17:44 UTC
  To: david; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

> How much of the package is binary executables (which get multiplied) vs
> how much is data or scripts (which do not)?

IFF the data is not in platform-dependent formats.

> For any individual user it will always be a larger download, but if you
> have to support more than one architecture (even 32 bit vs 64 bit x86) 
> it may be smaller to have one fat package than to have two 'normal' 
> packages.

Nope. The data files for non-arch-specific material get packaged
accordingly. Have done for years.

> 
> Yes, the package manager could handle this by splitting the package up
> into more pieces, with some of the pieces being arch-independent, but that
> also adds complexity.

Which was implemented years ago and turns out to be vital because only
some data is not arch specific.


* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-02 17:52 UTC
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel, davem


(As requested by davem.)

On Mon, 2 Nov 2009, Alan Cox wrote:
> Let's go down the list of "benefits":
> 
> - Separate downloads
> 	- Doesn't work. The network usage would increase dramatically
> 	  pulling all sorts of unneeded crap.

Sure, this doesn't work for everyone, but this list isn't meant to be a 
massive pile of silver bullets. Some of the items are "that's a cool 
trick" and some are "that would help solve an annoyance." I can see a 
use-case for the one-iso-multiple-arch example, but it's not going to be 
Ubuntu.

> 	- Already solved by having a packaging system (in fact FatELF is
> 	  basically obsoleted by packaging tools)

I think I've probably talked this to death, and will again when I reply to 
Julien, but: packaging tools are a different thing entirely. They solve 
some of the same issues, they cause other issues. The fact that Debian is 
now talking about "multiarch" shows that they've experienced some of these 
problems, too, despite having a world-class package manager.

> - Separate lib, lib32, lib64
> 	- So you have one file with 3 files in it rather than three files
> 	  with one file in them. Directories were invented for a reason

We covered this when talking about shell scripts.

> 	- Makes updates bigger

I'm sure, but I'm not sure the increase is a staggering amount. We're not 
talking about making all packages into FatELF binaries.

> 	- Stops users only having 32bit libs for some packages

Is that a serious concern?

> - Third party packagers no longer have to publish multiple rpm/deb etc
> 	- By vastly increasing download size
> 	- By making updates vastly bigger

It's true that /bin/ls would double in size (although I'm sure at least 
the download saves some of this in compression). But how much of, say, 
Gnome or OpenOffice or Doom 3 is executable code? These things would be 
nowhere near "vastly" bigger.

> 	- Assumes data files are not dependent on binary (often not true)

Turns out that /usr/sbin/hald's cache file was. That would need to be 
fixed, which is trivial, but in my virtual machine test I had it delete 
and regenerate the file on each boot as a fast workaround.

The rest of the Ubuntu install boots and runs. This is millions of lines 
of code that does not depend on the byte order, alignment, and word size 
for its data files.

I don't claim to be an expert on the inner workings of every package you 
would find on a Linux system, but like you, I expected there would be a 
lot of things to fix. It turns out that "often not true" was actually 
_not_ true at all.

> 	- And is irrelevant really because 90% or more of the cost is
> 	  testing

Testing doesn't really change with what I'm describing. If you want to 
ship a program for PowerPC and x86, you still need to test it on PowerPC 
and x86, no matter how you distribute or launch it.

> - You no longer need to use shell scripts and flakey logic to pick the
>   right binary ...
> 	- Since the 1990s we've used package managers to do that instead.
> 	  I just type "yum install bzflag", the rest is done for me.

Yes, that is true for software shipped via yum, which does not encompass 
all the software you may want to run on your system. I'm not arguing 
against package management.

> - The ELF OSABI for your system changes someday?
> 	- We already handle that

Do we? I grepped for OSABI in the 2.6.31 sources, and can't find anywhere, 
outside of my FatELF patches, where we check an ELF file's OSABI or OSABI 
version at all.

The kernel blindly loads ELF binaries without checking the ABI, and glibc 
checks the ABI for shared libraries--and flatly rejects files that don't 
match what it expects.

Where do we handle an ABI change gracefully? Am I misunderstanding the 
code?
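
(The check in question lives in the e_ident bytes at the start of the ELF
header. In spirit, the gate looks like the sketch below -- modeled on the
behavior described above, not copied from glibc's actual code:)

    /*
     * Sketch of an OSABI gate of the kind described: accept System V
     * or GNU/Linux, flatly reject everything else.
     */
    #include <elf.h>

    static int osabi_acceptable(const unsigned char e_ident[EI_NIDENT])
    {
        unsigned char osabi = e_ident[EI_OSABI];
        return osabi == ELFOSABI_SYSV || osabi == ELFOSABI_LINUX;
    }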

> - Ship a single shared library that provides bindings for a scripting
>   language and not have to worry about whether the scripting language
>   itself is built for the same architecture as your bindings. 
> 	- Except if they don't overlap it won't run

True. If I try to run a PowerPC binary on a Sparc, it fails in any 
circumstance. I recognize the goal of this post was to shoot down every 
single point, but you can't see a scenario where this adds a benefit? Even 
in a world that's still running 32-bit web browsers on _every major 
operating system_ because some crucial plugins aren't 64-bit yet?

> - Ship web browser plugins that work out of the box with multiple
>   platforms.
> 	- yum install just works, and there is a search path in firefox
> 	  etc

So it's better to have a thousand little unique solutions to the same 
problem? Everything has a search path (except things that don't), and all 
of those search paths are set up in the same way (except things that 
aren't). Do we really need to have every single program screwing around 
with its own personal spiritual successor to the CLASSPATH environment 
variable?

> - Ship kernel drivers for multiple processors in one file.
> 	- Not useful see separate downloads

Pain in the butt; see "which installer is right for me?"   :)

I don't want to get into a holy war about out-of-tree kernel drivers, 
because I'm totally on board with getting drivers into the mainline. But 
it doesn't change the fact that I downloaded the wrong nvidia drivers the 
other day because I accidentally grabbed the ia32 package instead of the 
amd64 one. So much for saving bandwidth.

I wasn't paying attention. But lots of people wouldn't know which to pick 
even if they were. Nvidia, etc, could certainly put everything in one 
shell script and choose for you, but now we're back at square one again.

This discussion applies to applications, not just kernel modules. 
The applications are more important here, in my opinion.

> - Transition to a new architecture in incremental steps. 
> 	- IFF the CPU supports both old and new

A lateral move would be painful (although Apple just did this very thing 
with a FatELF-style solution, albeit with the help of an emulator), but if 
we're talking about the most common case at the moment, x86 to amd64, it's 
not a serious concern.

> 	- and we can already do that

Not really. compat_binfmt_elf will run legacy binaries on new systems, but 
not vice versa. The goal is having something that will let it work on both 
without having to go through a package manager infrastructure.

> - Support 64-bit and 32-bit compatibility binaries in one file. 
> 	- Not useful as we've already seen

Where did we see that? There are certainly tradeoffs, pros and cons, but 
this is very dismissive despite several counter-examples.

> - No more ia32 compatibility libraries! Even if your distro
>   doesn't make a complete set of FatELF binaries available, they can
>   still provide it for the handful of packages you need for 99% of 32-bit
>   apps you want to run on a 64-bit system. 
> 
> 	- Argument against FatELF - why waste the disk space if it's rare?

This is _not_ an argument against FatELF.

Why install Gimp by default if I'm not an artist? Because disk space is 
cheap in the configurations I'm talking about and it's better to have it 
just in case, for the 1% of users that will want it. A desktop, laptop or 
server can swallow a few megabytes to clean up some awkward design 
decisions, like the /lib64 thing.

A few more megabytes installed may cut down on the support load for 
distributions when some old 32 bit program refuses to start at all.

In a world where terabyte hard drives are cheap consumer-level 
commodities, the tradeoff seems like a complete no-brainer to me.

> - Have a CPU that can handle different byte orders? Ship one binary that
>   satisfies all configurations!
> 
> 	- Variant of the distribution "advantage" - same problem - it's
> 	  better to have two files, it's all about testing anyway
> 
> - Ship one file that works across Linux and FreeBSD (without a platform
>   compatibility layer on either of them). 
> 
> 	- Ditto

And ditto from me, too: testing is still testing, no matter how you 
package and ship it. It's just simply not related to FatELF. This problem 
exists in shipping binaries via apt and yum, too.

> - One hard drive partition can be booted on different machines with
>   different CPU architectures, for development and experimentation. Same
>   root file system, different kernel and CPU architecture. 
> 
> 	- Now we are getting desperate.

It's not like this is unheard of. Apple is selling this very thing for 129 
bucks a copy.

> - Prepare your app on a USB stick for sneakernet, know it'll work on
>   whatever Linux box you are likely to plug it into.
> 
> 	- No I don't, because of the dependencies, architecture ordering
> 	  of data files, lack of testing on each platform, and the fact that
> 	  architecture isn't sufficient to define a platform

Yes, it's not a silver bullet. Fedora will not be promising binaries that 
run on every Unix box on the planet.

But the guy with the USB stick? He probably knows the details of every 
machine he wants to plug it into...
 
> - Prepare your app on a network share, know it will work with all
>   the workstations on your LAN. 

...and so does the LAN's administrator.

It's possible to ship binaries that don't depend on a specific 
distribution, or preinstalled dependencies, beyond the existence of a 
glibc that was built in the last five years or so. I do it every day. It's 
not unreasonable, if you aren't part of the package management network, to 
make something that will run, generically on "Linux."

> 	- We have search paths, multiple mount points etc.

I'm proposing a unified, clean, elegant way to solve the problem.

> So why exactly do we want FatELF? It was obsoleted in the early 1990s
> when architecture handling was introduced into package managers.

I can't speak for anyone but myself, but I can see lots of places where it 
would personally help me as a developer that isn't always inside the 
packaging system.

There are programs I support that I just simply won't bother moving to 
amd64 because it just complicates things for the end user, for example.

Goofy one-off example: a game that I ported named Lugaru ( 
http://www.wolfire.com/lugaru ) is being updated for Intel Mac OS X. The 
build on my hard drive will run natively as a PowerPC, x86, and amd64 
process, and Mac OS X just does the right thing on whatever hardware tries 
to launch it. On Linux...well, I'm not updating it. You can enjoy the x86 
version. It's easier on me, I have other projects to work on, and too bad 
for you. Granted, the x86_64 version _works_ on Linux, but shipping it is 
a serious pain, so it just won't ship.

That is anecdotal, and I apologize for that. But I'm not the only 
developer that's not in an apt repository, and all of these rebuttals are 
anecdotal: "I just use yum [...because I don't personally care about 
Debian users]."

The "third-party" is important. If your answer is "you should have 
petitioned Fedora, Ubuntu, Gentoo, CentOS, Slackware and every other 
distro to package it, or packaged it for all of those yourself, or open 
sourced someone else's software on their behalf and let the community 
figure it out" then I just don't think we're talking about the same 
reality at all, and I can't resolve that issue for you.

And, since I'm about to get a flood of "closed source is evil" emails: 
this applies to Free Software too. Take something bleeding edge but open 
source, like, say, Songbird, and you are going to find yourself working 
outside of apt-get to get a modern build, or perhaps a build at all.

In short: I'm glad yum works great for your users, but they aren't all the 
users, and it sure doesn't work well for all developers.

--ryan.



* Re: FatELF patches...
From: Ryan C. Gordon @ 2009-11-02 18:18 UTC
  To: Julien BLACHE; +Cc: linux-kernel


> With my Debian Developer hat on...

I'm repeating myself now, so I'm sorry if this is getting tedious for 
anyone. FatELF isn't meant to replace the package managers.

tl;dr: If all you have is an apt-get hammer, everything looks like a .deb nail.

> That usually ships as sources or prebuilt binaries in a tarball - target
> /opt and voila! For a bigger audience you'll see a lot of experimental
> stuff that gets packaged (even in quick'n'dirty mode).

"A lot" is hard to quantify. We can certainly see thousands of forum posts 
for help with software that hadn't been packaged yet.

> > software that isn't appropriate for an apt/yum repository
> 
> Just create a repository for the damn thing if you want to distribute it
> that way. There's no "appropriate / not appropriate" that applies here.

I can't imagine most people are interested in building repositories and 
telling their users how to add it to their package manager, period, but 
even less so when you have to build different repositories for different 
sets of users, and not know what to build for whatever is the next popular 
distribution. For things like Gentoo, which for years didn't have a way to 
extend portage, what was the solution?

(har har, don't run Gentoo is the solution, let's get the joke out of our 
systems here.)

> > software that distros refuse to package but is still perfectly useful
> 
> Look at what happens today. A lot of that gets packaged by third
> parties, and more often than not they involve distribution
> maintainers. (See debian-multimedia, PLF for Mandriva, ...)

I'm hearing a lot of "a lot" ... what actually happens today is that you 
depend on the kindness of strangers to package your software or you make a 
bunch of incompatible packages for different distributions.

> > closed-source software
> 
> Why do we even care?

Maybe you don't care, but that doesn't mean no one cares.

I am on Team Stallman. I'll take a crappy free software solution over a 
high quality closed-source one, and strive to improve the free software 
one until it is indisputably better. Most of my free time goes towards 
this very endeavor.

But still, let's not be jerks about it.

> Tarball,

Ugh.

> /opt,

Ugh.

> static build.

Ugh!

I think we can do better than that when we're outside of the package 
managers, but it's a rant for another time.

> And, about the /lib, /lib32, /lib64 situation Debian and Debian-derived
> systems, the solution to that is multiarch and it's being worked
> on. It's a lot better and cleaner than the fat binary kludge.

Having read the multiarch wiki briefly, I'm pleased to see other people 
find the current system "unwieldy," but it seems like FatELF "kludge" 
solves several of the points in the "unresolved issues" section.

YMMV, I guess.

--ryan.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 17:52         ` FatELF patches Ryan C. Gordon
@ 2009-11-02 18:53           ` Alan Cox
  2009-11-02 20:13             ` Ryan C. Gordon
  2009-11-10 11:27           ` Enrico Weigelt
  1 sibling, 1 reply; 69+ messages in thread
From: Alan Cox @ 2009-11-02 18:53 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel, davem

> Sure, this doesn't work for everyone, but this list isn't meant to be a 

You've not shown a single meaningful use case yet.

> some of the same issues, they cause other issues. The fact that Debian is 
> now talking about "multiarch" shows that they've experienced some of these 
> problems, too, despite having a world-class package manager.

No it means that Debian is finally catching up with rpm on this issue,
where it has been solved for years.

> 
> > - Separate lib, lib32, lib64
> > 	- So you have one file with 3 files in it rather than three files
> > 	  with one file in them. Directories were invented for a reason
> 
> We covered this when talking about shell scripts.

Without providing a justification

> I'm sure, but I'm not sure the increase is a staggering amount. We're not 
> talking about making all packages into FatELF binaries.

How will you handle cross-package dependencies?

> > 	- Stops users only having 32bit libs for some packages
> 
> Is that a serious concern?

Yes from a space perspective and a minimising updates perspective.

> > - Third party packagers no longer have to publish multiple rpm/deb etc
> > 	- By vastly increasing download size
> > 	- By making updates vastly bigger
> 
> It's true that /bin/ls would double in size (although I'm sure at least 
> the download saves some of this in compression). But how much of, say, 
> Gnome or OpenOffice or Doom 3 is executable code? These things would be 
> nowhere near "vastly" bigger.

Guess what: all the data files for Doom and OpenOffice are already
packaged separately, as are many of the GNOME ones, or automagically
shared by the two rpm packages.

> 
> > 	- Assumes data files are not dependant on binary (often not true)
> 
> Turns out that /usr/sbin/hald's cache file was. That would need to be 
> fixed, which is trivial, but in my virtual machine test I had it delete 
> and regenerate the file on each boot as a fast workaround.
> 
> The rest of the Ubuntu install boots and runs. This is millions of lines 
> of code that does not depend on the byte order, alignment, and word size 
> for its data files.

That you've noticed. But you've not done any formal testing with tens of
thousands of users, so you've not done more than the "hey mummy it boots"
test (which is about one point over the Linus 'it might compile' stage)
 
> I don't claim to be an expert on the inner workings of every package you 
> would find on a Linux system, but like you, I expected there would be a 
> lot of things to fix. It turns out that "often not true" just turned out 
> to actually _not_ be true at all.

You need an expert on the inner workings of each package to review and
test them. Fortunately that work is already done, by the rpm packagers
for all the distros.

> > - The ELF OSABI for your system changes someday?
> > 	- We already handle that
> 
> Do we? I grepped for OSABI in the 2.6.31 sources, and can't find anywhere, 
> outside of my FatELF patches, where we check an ELF file's OSABI or OSABI 
> version at all.

ARM has migrated ABI at least once.

> Where do we handle an ABI change gracefully? Am I misunderstanding the 
> code?

You add code for the migration as needed, in the distro

> single point, but you can't see a scenario where this adds a benefit? Even 
> in a world that's still running 32-bit web browsers on _every major 
> operating system_ because some crucial plugins aren't 64-bit yet?

Your distro must be out of date or a bit backward. Good ones thunk those
or run them in a different process (which is a very good idea for quality
reasons as well as security)

> 
> > - Ship web browser plugins that work out of the box with multiple
> >   platforms.
> > 	- yum install just works, and there is a search path in firefox
> > 	  etc
> 
> So it's better to have a thousand little unique solutions to the same 
> problem? 

We have one solution - package management. You want to add an extra one.

> it doesn't change the fact that I downloaded the wrong nvidia drivers the 
> other day because I accidentally grabbed the ia32 package instead of the 
> amd64 one. So much for saving bandwidth.

You mean your package manager didn't do it for you? Anyway, kernel
drivers are dependent on about 1500 variables, and 1500! is a very, very
large FatELF binary, so it won't work.

> Not really. compat_binfmt_elf will run legacy binaries on new systems, but 
> not vice versa. The goal is having something that will let it work on both 
> without having to go through a package manager infrastructure.

See binfmt_misc. In fact you can probably do your ELF hacks in userspace
that way if you really must.
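
(A minimal sketch of that userspace route, assuming binfmt_misc is mounted; 
the magic bytes and the loader path below are made up for illustration:

% mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
% echo ':fat:M::\xfa\x70\x0e\x1f::/usr/local/bin/fatelf-loader:' \
    > /proc/sys/fs/binfmt_misc/register

After that, exec() of any file starting with those four magic bytes gets 
handed to the loader, which could pick the matching embedded ELF image and 
re-exec it -- no kernel patch required.)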

> In a world where terabyte hard drives are cheap consumer-level 
> commodities, the tradeoff seems like a complete no-brainer to me.

Except that
- we are moving away from rotating storage for primary media
- flash still costs rather more
- virtual machines mean that disk space is now a real cost again as is RAM

> version. It's easier on me, I have other projects to work on, and too bad 
> for you. Granted, the x86_64 version _works_ on Linux, but shipping it is 
> a serious pain, so it just won't ship.

Distro problem; in the open source world someone will package it.

> That is anecdotal, and I apologize for that. But I'm not the only 
> developer that's not in an apt repository, and all of these rebuttals are 
> anecdotal: "I just use yum [...because I don't personally care about 
> Debian users]."

No. See, yum/rpm demonstrates that it can be done right. Debian has fallen
a bit behind on that issue. We know it can be done right, and that tells
us that the Debian tools will eventually catch up and also do it right.

You have a solution (quite a nicely programmed one) in search of a
problem, and with patent concerns. That's a complete non-flier for the
kernel. It's not a dumping ground for neat toys and it would be several
gigabytes of code if it were.

You are also ignoring the other inconvenient detail. The architecture
selection used even by package managers is far more complex than i386 v
x86_64. Some distros build i686, some with i686 optimisation but without
cmov, some i386, some install i386 or i686, others optimise for newer
processors only, and so on.

Alan

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 18:18             ` Ryan C. Gordon
@ 2009-11-02 18:59               ` Julien BLACHE
  2009-11-02 19:08               ` Jesús Guerrero
  1 sibling, 0 replies; 69+ messages in thread
From: Julien BLACHE @ 2009-11-02 18:59 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

"Ryan C. Gordon" <icculus@icculus.org> wrote:

Hi,

> "A lot" is hard to quantify. We can certainly see thousands of forum
> posts for help with software that hadn't been packaged yet.

"A lot" certainly doesn't mean "all of it", sure, but that's already a
clear improvement over the situation 10 years ago.

> I can't imagine most people are interested in building repositories and 
> telling their users how to add it to their package manager, period, but 
> even less so when you have to build different repositories for different 
> sets of users, and not know what to build for whatever is the next popular 
> distribution. For things like Gentoo, which for years didn't have a way to 
> extend portage, what was the solution?

You need to decide if and how you want to distribute your software,
define your target audience and work from there. Yes, it takes some
effort. Yes, it's not something that's very valued by today's
standards. So what?

You can just as well decide that your software is so good that packagers from
everywhere will package it for you. Except sometimes your software
actually isn't that good and nobody gives a damn.

As it stands, it really looks like your main problem is that it's too
hard to distribute software for Linux, but you're making it a lot
more difficult than it actually is.

Basically, these days, if you can ship a generic RPM and a clean .deb,
you've got most of your users covered. Oh, that's per-architecture, so
with i386 and amd64, that makes 4 packages. And the accompanying source
packages, because that can't hurt.

Anyone that can't use those packages either knows how to build stuff on
her distro of choice or needs to upgrade.

> I'm hearing a lot of "a lot" ... what actually happens today is that you 
> depend on the kindness of strangers to package your software or you make a 
> bunch of incompatible packages for different distributions.

Err. Excuse me, but if you "depend on the kindness of strangers" it's
because you made that choice in the first place. There is nothing that
prevents you from producing packages yourself. You might even learn a
thing or ten in the process!

When software doesn't get packaged properly after some time, it's
usually because nobody knows about it or because it's not that good and
nobody bothered. As the author, you can fix both issues.

>> > closed-source software
>> 
>> Why do we even care?
>
> Maybe you don't care, but that doesn't mean no one cares.

The ones who care have the resources to produce proper packages. They
just don't do it.

> I am on Team Stallman. I'll take a crappy free software solution over a 
> high quality closed-source one, and strive to improve the free software 

I don't think FatELF improves anything at all in the Free Software
world.

[static builds distributed as tarballs]
> I think we can do better than that when we're outside of the package 
> managers, but it's a rant for another time.

Actually, no, you can't, because too many people out there writing
software don't have a clue about shared libraries. If you want things to
work everywhere, static is the way to go.

> Having read the multiarch wiki briefly, I'm pleased to see other people 
> find the current system "unwieldy," but it seems like FatELF "kludge" 
> solves several of the points in the "unresolved issues" section.

Err, the unresolved issues are all packaging issues, to which the
solutions have not been decided yet. I don't see what FatELF can fix
here.

Now, to put it in a nutshell, you are coming forward with a technical
solution to a problem that *isn't*:
 - "my software, Zorglub++ isn't packaged anywhere!"
   Did you package it? No? Why not? Besides, maybe nobody knows about
   it, maybe nobody needs it, maybe it's just crap. Whatever. Find out
   and act from there.

 - "proprietary Blahblah7 is not packaged!"
   Yeah, well, WeDoProprietaryStuff, Inc. decided not to package it
   for whatever reason. What about contacting them, finding out the
   reason and then working from there?

JB.

-- 
Julien BLACHE                                   <http://www.jblache.org> 
<jb@jblache.org>                                  GPG KeyID 0xF5D65169

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 18:18             ` Ryan C. Gordon
  2009-11-02 18:59               ` Julien BLACHE
@ 2009-11-02 19:08               ` Jesús Guerrero
  1 sibling, 0 replies; 69+ messages in thread
From: Jesús Guerrero @ 2009-11-02 19:08 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

On Mon, 2 Nov 2009 13:18:41 -0500 (EST), "Ryan C. Gordon"
<icculus@icculus.org> wrote:
>> > software that isn't appropriate for an apt/yum repository
>> 
>> Just create a repository for the damn thing if you want to distribute it
>> that way. There's no "appropriate / not appropriate" that applies here.
> 
> I can't imagine most people are interested in building repositories and 
> telling their users how to add them to their package manager, period, but 
> even less so when you have to build different repositories for different 
> sets of users, and not know what to build for whatever is the next popular 
> distribution. For things like Gentoo, which for years didn't have a way to 
> extend portage, what was the solution?

I am not going into the FatELF thing. I am just following the debate
because it's interesting :)

However, for the sake of correctness about Gentoo:

1)
Gentoo has had support for "overlays" *for ages*. I am sure they were
there when I joined in 2004. So I am not sure why you say that portage
can't be extended. I can't be sure when overlays came onto the scene, and I
have no idea if they were there from the beginning, but even at that stage,
if nothing else, you could still use the "ebuild" tool directly on an
ebuild stored at any arbitrary place, not necessarily in the portage tree.
Nowadays there's a great number of well-known overlays, where several
Gentoo devs are involved. Some of these are the testbed for trees that are
later incorporated into the official portage tree. A well-known example is
sunrise, because it's big and of great quality, but there are many more.

2)
Gentoo is probably the last distro that would benefit from FatELF, since
it's a distro where each user slims the system down to his/her needs.
Gentoo is not about making things generic. That's what compiling for your
architecture, USE flags, etc. are all about. If there's a distro out there
where FatELF doesn't make any sense at all, that's Gentoo for sure (as a
representative of source distros, I guess the same could apply to LFS,
sourcemage, etc.).

3)
Besides that, the average Gentoo user has no problem rolling his own
ebuilds if needed and putting them into a local overlay. And even if they
lack the skill there's always the forum and bugzilla for that. This is a
last resort; as said, there are *lots* of well-known and maintained
overlays out there.

Again, these are not arguments for or against FatELF; as said, I am
staying out of the discussion, just offering some clarifications for things
that I thought were not correct. :)
-- 
Jesús Guerrero

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 17:39             ` david
  2009-11-02 17:44               ` Alan Cox
@ 2009-11-02 19:56               ` Krzysztof Halasa
  2009-11-02 20:11                 ` david
  1 sibling, 1 reply; 69+ messages in thread
From: Krzysztof Halasa @ 2009-11-02 19:56 UTC (permalink / raw)
  To: david; +Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

david@lang.hm writes:

> for any individual user it will always be a larger download, but if you
> have to support more than one architecture (even 32 bit vs 64 bit x86)
> it may be smaller to have one fat package than to have two 'normal'
> packages.

In terms of disk space on distro FTP servers only. You'll need to
transfer more, both from the user's and the distro's POV (obviously). This one
simple fact alone is more than enough to forget FatELF.

Disk space on FTP servers is cheap (though maybe not so on 32 GB SSDs
and certainly not on 16 MB NOR flash chips). Bandwidth is expensive. And
that doesn't seem likely to change.

FatELF means you have to compile for many archs. Do you even have the
necessary compilers? Extra time and disk space used for what, to solve
a non-problem?

> yes, the package manager could handle this by splitting the package up
> into more pieces, with some of the pieces being arch-independent, but
> that also adds complexity.

Even without splitting, separate per-arch packages are a clear win.

I'm surprised this idea made it here. It certainly has merit for an
installation medium, but there it's called a directory tree and/or .tar or
.zip.
-- 
Krzysztof Halasa

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 19:56               ` Krzysztof Halasa
@ 2009-11-02 20:11                 ` david
  2009-11-02 20:33                   ` Krzysztof Halasa
  2009-11-03  1:35                   ` Mikael Pettersson
  0 siblings, 2 replies; 69+ messages in thread
From: david @ 2009-11-02 20:11 UTC (permalink / raw)
  To: Krzysztof Halasa
  Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

On Mon, 2 Nov 2009, Krzysztof Halasa wrote:

> david@lang.hm writes:
>
>> for any individual user it will always be a larger download, but if you
>> have to support more than one architecture (even 32 bit vs 64 bit x86)
>> it may be smaller to have one fat package than to have two 'normal'
>> packages.
>
> In terms of disk space on distro FTP servers only. You'll need to
> transfer more, both from the user's and the distro's POV (obviously). This one
> simple fact alone is more than enough to forget FatELF.

it depends on whether there is only one arch being downloaded or not.

it could be considerably cheaper for mirroring bandwidth. Even if Alan is 
correct and distros have re-packaged everything so that the arch-independent 
stuff is really in separate packages, most 
mirroring/repository systems keep each distro release/arch in a separate 
directory tree, so each of these arch-independent things gets copied 
multiple times.

> Disk space on FTP servers is cheap (though maybe not so on 32 GB SSDs
> and certainly not on 16 MB NOR flash chips). Bandwidth is expensive. And
> it doesn't seem to be going to change.
>
> FatELF means you have to compile for many archs. Do you even have the
> necessary compilers? Extra time and disk space used for what, to solve
> a non-problem?

you don't have to compile multiple arches any more than you have to provide 
any other support for that arch. FatELF is a way to bundle the binaries 
that you were already creating, not something to force you to support an 
arch you otherwise wouldn't (although if it did make it easy enough for 
you to do so that you started to support additional arches, that would be 
a good thing)
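
(if memory serves, the tooling side is a single command -- something 
roughly like

% fatelf-glue myapp myapp-i686 myapp-x86_64

taking the per-arch ELF files you already built and emitting one fat file; 
treat the exact usage as approximate.)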

>> yes, the package manager could handle this by splitting the package up
>> into more pieces, with some of the pieces being arch-independent, but
>> that also adds complexity.
>
> Even without splitting, separate per-arch packages are a clear win.
>
> I'm surprised this idea made it here. It certainly has merit for
> installation medium, but it's called directory tree and/or .tar or .zip
> there.

if you have a 1M binary with 500M data, repeated for 5 arches, it is not a 
win vs a single 505M FatELF package in all cases.
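
(concretely: five separate packages cost the mirror about 
5 x (1M + 500M) = 2505M, while one fat package costs about 
5 x 1M + 500M = 505M -- the win depends entirely on the 
data-to-binary ratio.)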

David Lang

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 18:53           ` Alan Cox
@ 2009-11-02 20:13             ` Ryan C. Gordon
  2009-11-04  1:09               ` Ryan C. Gordon
  0 siblings, 1 reply; 69+ messages in thread
From: Ryan C. Gordon @ 2009-11-02 20:13 UTC (permalink / raw)
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel, davem


> You've not shown a single meaningful use case yet.

I feel like we're at the point where we're each making points of various 
quality and the other person is going "nuh-uh."

You mentioned the patent thing and I don't have an answer at all yet from 
a lawyer. Let's table this for a while until I have more information about 
that. If there's going to be a patent problem, it's not worth wasting 
everyone's time any further.

If it turns out to be no big deal, we can decide to revisit this.

--ryan.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 20:11                 ` david
@ 2009-11-02 20:33                   ` Krzysztof Halasa
  2009-11-03  1:35                   ` Mikael Pettersson
  1 sibling, 0 replies; 69+ messages in thread
From: Krzysztof Halasa @ 2009-11-02 20:33 UTC (permalink / raw)
  To: david; +Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

david@lang.hm writes:

>> In terms on disk space on distro TFTP servers only. You'll need to
>> transfer more, both from user's and distro's POV (obviously). This one
>> simple fact alone is more than enough to forget the FatELF.
>
> it depends on whether there is only one arch being downloaded or not.

Well, from the user's POV it may get close if the user downloads maybe 5
different archs out of all supported by the distro. Not very typical,
I guess.

> it could be considerably cheaper for mirroring bandwidth.

Maybe (though it can be solved with the existing techniques).
What counts more now - bandwidth consumed by users or by mirrors?

> Even if Alan
> is correct and distros have re-packaged everything so that the arch
> independent stuff is really in separate packages, most
> mirroring/repository systems keep each distro release/arch in a
> separate directory tree, so each of these arch-independent things gets
> copied multiple times.

If it were a (serious) problem (I think it's not), it could be easily
solved. Think rsync, sha1/sha256-based mirroring, etc.

> you don't have to compile multiple arches any more than you have to
> provide any other support for that arch. FatELF is a way to bundle the
> binaries that you were already creating, not something to force you to
> support an arch you otherwise wouldn't (although if it did make it
> easy enough for you to do so that you started to support additional
> arches, that would be a good thing)

Not sure - longer compile times, longer downloads, no testing.

> if you have a 1M binary with 500M data, repeated for 5 arches it is
> not a win vs a single 505M FatELF package in all cases.

A real example of such a binary, maybe?
-- 
Krzysztof Halasa

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 20:11                 ` david
  2009-11-02 20:33                   ` Krzysztof Halasa
@ 2009-11-03  1:35                   ` Mikael Pettersson
  1 sibling, 0 replies; 69+ messages in thread
From: Mikael Pettersson @ 2009-11-03  1:35 UTC (permalink / raw)
  To: david
  Cc: Krzysztof Halasa, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

david@lang.hm writes:
 > > FatELF means you have to compile for many archs. Do you even have the
 > > necessary compilers? Extra time and disk space used for what, to solve
 > > a non-problem?
 > 
 > you don't have to compile multiple arches any more than you have to provide 
 > any other support for that arch. FatELF is a way to bundle the binaries 
 > that you were already creating, not something to force you to support an 
 > arch you otherwise wouldn't (although if it did make it easy enough for 
 > you to do so that you started to support additional arches, that would be 
 > a good thing)

'bundle' by gluing .o files together rather than using what we already have:
directories, search paths, $VARIABLES in search paths, and ELF interpreters
and .so loaders that know to look in $ARCH subdirectories first (I used that
feature to perform an incremental upgrade from OABI to EABI on my ARM/Linux
systems last winter).
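
(A concrete flavour of that: glibc's ld.so expands dynamic string tokens 
such as $ORIGIN and $LIB in rpaths, so a single install tree can carry 
per-arch library subdirectories today -- a sketch, with made-up names:

% gcc -o myapp myapp.c -Wl,-rpath,'$ORIGIN/$LIB'

A 32-bit binary then resolves $LIB to lib and a 64-bit one to lib64 on 
typical biarch systems, so you ship both subdirectories in one tree.)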

Someone, somewhere, has to inspect $ARCH and make a decision. Moving that
decision from user-space to kernel-space for ELF file loading is neither
necessary nor sufficient. Consider .a and .h files for instance.

 > > I'm surprised this idea made it here. It certainly has merit for
 > > installation medium, but it's called directory tree and/or .tar or .zip
 > > there.
 > 
 > if you have a 1M binary with 500M data, repeated for 5 arches, it is not a 
 > win vs a single 505M FatELF package in all cases.

If I have a 1M binary with 500M non-arch data I'll split the package because
I'm not a complete moron.

IMNSHO FatELF is a technology pretending to be a solution to "problems"
that don't exist or have user-space solutions. Either way, it doesn't
belong in the Linux kernel.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 15:14             ` Ryan C. Gordon
@ 2009-11-03 14:54               ` Valdis.Kletnieks
  2009-11-03 18:30                 ` Matt Thrailkill
  0 siblings, 1 reply; 69+ messages in thread
From: Valdis.Kletnieks @ 2009-11-03 14:54 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

On Mon, 02 Nov 2009 10:14:15 EST, "Ryan C. Gordon" said:

> I probably wasn't clear when I said "distribution-wide policy" followed by 
> a "then again." I meant there would be backlash if the distribution glued 
> the whole system together, instead of just binaries that made sense to do 
> it to.

OK.. I'll bite - which binaries does it make sense to do this for?  Remember in
your answer to address the very valid point that any binaries you *don't*
do this for will still need equivalent hand-holding by the package manager.
So if you're not doing all of them, you need to address the additional
maintenance overhead of "which way is this package supposed to be built?"
and all the derivative headaches.

It might be instructive to not do a merge of *everything* in Ubuntu as you
did, but only select a random 20% or so of the packages and convert them
to FatELF, and see what breaks. (If our experience with 'make randconfig'
in the kernel is any indication, you'll hit a *lot* of corner cases and
pre-reqs you didn't know about...)

> > Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
> > are using FatELF - as long as there's any binaries doing things The Old Way,
> > you need to keep the supporting binaries around.
> 
Binaries don't refer directly to /libXX; they count on ld.so to tapdance 
> on their behalf. My virtual machine example left the dirs there as 
> symlinks to /lib, but they could probably just go away directly.

Only if all your shared libs (which are binaries too) have migrated to FatELF.

On my box, I have:

% ls -l /usr/lib{,64}/libX11.so.6.3.0
-rwxr-xr-x 1 root root 1274156 2009-10-06 13:49 /usr/lib/libX11.so.6.3.0
-rwxr-xr-x 1 root root 1308600 2009-10-06 13:49 /usr/lib64/libX11.so.6.3.0

You can't dump them both into /usr/lib without making it a FatElf or doing
some name mangling. You probably didn't notice because you merged *all* of
an Ubuntu distro into FatELF.

> > Don't forget you take that hit once for each shared library involved.  Plus
> 
> That happens in user space in ld.so, so it's not a kernel problem in any 
> case, but still...we're talking about, what? Twenty more branch 
> instructions per-process?

No, a lot more than that - you already identified an extra 128-byte read
as needing to happen.  Plus syscall overhead.

> > Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
> > in both 32 and 64 bit modes?
> 
> Whose refcounts would this screw up? If there's a possible bug, I'd like 
> to make sure it gets resolved, of course.

That's the point - nobody's done an audit for such things.  Does the kernel
DTRT when counting mapped pages (probably close-to-right, if you got it to boot)?
Where are the corresponding patches, if any, for tools like perf and oprofile?
Does lsof DTRT? /proc/<pid>/pagemap?  Any other tools that may break because
they make an assumption that executable files are mapped as 32-bit or 64-bit,
but not both (most likely choking if they see a 64-bit address someplace
after they've decided the binary is 32-bit)?


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-03 14:54               ` Valdis.Kletnieks
@ 2009-11-03 18:30                 ` Matt Thrailkill
  0 siblings, 0 replies; 69+ messages in thread
From: Matt Thrailkill @ 2009-11-03 18:30 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

On Tue, Nov 3, 2009 at 6:54 AM,  <Valdis.Kletnieks@vt.edu> wrote:
> On Mon, 02 Nov 2009 10:14:15 EST, "Ryan C. Gordon" said:
>
>> I probably wasn't clear when I said "distribution-wide policy" followed by
>> a "then again." I meant there would be backlash if the distribution glued
>> the whole system together, instead of just binaries that made sense to do
>> it to.
>
> OK.. I'll bite - which binaries does it make sense to do so?  Remember in
> your answer to address the very valid point that any binaries you *don't*
> do this for will still need equivalent hand-holding by the package manager.
> So if you're not doing all of them, you need to address the additional
> maintenance overhead of "which way is this package supposed to be built?"
> and all the derivative headaches.
>
> It might be instructive to not do a merge of *everything* in Ubuntu as you
> did, but only select a random 20% or so of the packages and convert them
> to FatELF, and see what breaks. (If our experience with 'make randconfig'
> in the kernel is any indication, you'll hit a *lot* of corner cases and
> pre-reqs you didn't know about...)

I think he is thinking of only having FatELF binaries for binaries and
libraries that overlap between 32- and 64-bit in a distro install.  Perhaps
everything that is sitting in /lib32, for example, could instead be a FatELF
binary in /lib, alongside the 64-bit binary.

A thought I had, that I don't think has come up in this thread:
could it be practical or worthwhile for distros to use FatELF to ship multiple
executables with different compiler optimizations?  i586, i686, etc.
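
(For shared libraries, glibc's ld.so already searches optimization-specific 
hwcap subdirectories -- e.g. trying /lib/i686/ before /lib/ on capable 
hardware. You can watch the search order with:

% LD_DEBUG=libs /bin/true 2>&1 | less

Executables have no equivalent mechanism, which is presumably where FatELF 
would come in.)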

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 20:13             ` Ryan C. Gordon
@ 2009-11-04  1:09               ` Ryan C. Gordon
  0 siblings, 0 replies; 69+ messages in thread
From: Ryan C. Gordon @ 2009-11-04  1:09 UTC (permalink / raw)
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel, davem


> You mentioned the patent thing and I don't have an answer at all yet from 
> a lawyer. Let's table this for a while until I have more information about 
> that. If there's going to be a patent problem, it's not worth wasting 
> everyone's time any further.
> 
> If it turns out to be no big deal, we can decide to revisit this.

The Software Freedom Law Center replied with this...

"I refer you to our Legal Guide section on dealing with patents available 
from our website.  I also refer you to our amici brief in Bilski, where we 
argue that patents on pure software are invalid.  If a patent is invalid, 
there's no reason to consider whether it is infringed."

...which may be promising some day, but doesn't resolve current concerns. 
Also: "I read a FAQ" doesn't hold up in court.  :)

Based on feedback from this list, the patent concern that I'm not 
qualified to resolve myself, and belief that I'll be on the losing end of 
the same argument with the glibc maintainers after this, I'm withdrawing 
my FatELF patch. If anyone wants it, I'll leave the project page and 
patches in place at http://icculus.org/fatelf/ ...

Thank you everyone for your time and feedback.

--ryan.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* package managers [was: FatELF patches...]
  2009-11-02  2:21         ` Ryan C. Gordon
                             ` (3 preceding siblings ...)
  2009-11-02 15:40           ` Diego Calleja
@ 2009-11-04 16:40           ` Mikulas Patocka
  2009-11-04 16:54             ` Alan Cox
                               ` (2 more replies)
  4 siblings, 3 replies; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 16:40 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Alan Cox, Måns Rullgård, linux-kernel

> Package managers are a _fantastic_ invention. They are a killer feature 
> over other operating systems, including ones people pay way too much money 
> to use.

No, package managers are an evil feature that suppresses third-party software 
and kills Linux's success on the desktop.

Package managers are super-easy to use --- but only as long as the package 
exists. No developer can make a package for all versions of all 
distributions. No distribution can make a package for all versions of all 
Linux software. So, inevitably, there are holes in the
[distribution X software] matrix, where the package isn't available.

- With Windows installers (next - next - next - finish), even a 
  technically unskilled person can select which version of a given piece of 
  software he wants to use. If the software doesn't work, he can simply 
  uninstall it and try another version.

- With Linux package managers, the user is stuck with the software and 
  version shipped by the distribution. If he wants to install anything 
  newer or older, it turns into black magic and the typical desktop user 
  (non-hacker) can't do it.

- For a non-technical user who can't compile, getting newer software for 
  Linux means reinstalling the whole distribution to a newer version. So, 
  "upgrade one program" translates into "upgrade all programs" (that will 
  bring many changes that the user didn't want and new bugs)


Let me say that instead of making a single binary for multiple 
architectures, you should concentrate on developing a method to make a 
single binary that works on all installations on one architecture :)

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 16:40           ` package managers [was: FatELF patches...] Mikulas Patocka
@ 2009-11-04 16:54             ` Alan Cox
  2009-11-04 17:25               ` Mikulas Patocka
  2009-11-04 17:36             ` Valdis.Kletnieks
  2009-11-04 20:28             ` Ryan C. Gordon
  2 siblings, 1 reply; 69+ messages in thread
From: Alan Cox @ 2009-11-04 16:54 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

> - With Linux package managers, the user is stuck with the software and 
>   version shipped by the distribution. If he wants to install anything 
>   newer or older, it turns into black magic and the typical desktop user 
>   (non-hacker) can't do it.

In the rpm/yum world that would be "yum downgrade" and "yum upgrade" for
packages, or whatever button on whatever GUI wrapper you happen to have.

And of course yum supports third-party repositories, so you can also deal
with the updating problem, which Windows tends not to handle well for
third-party software.

Installing it is the easy bit; keeping it current and secure is the fun
bit.

All pretty routine stuff and a lot of users add other repositories
themselves: generally by having a package that adds the repository, so
you just have one package to click on in a web browser and open, then off
it all goes.
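
(From the user's side the pattern is two commands -- names and URLs 
invented for illustration:

% rpm -ivh http://example.com/myrepo-release-1-0.noarch.rpm
% yum install some-app

where the release package does nothing but drop a .repo file into 
/etc/yum.repos.d/ and a signing key the system can check packages against.)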


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 16:54             ` Alan Cox
@ 2009-11-04 17:25               ` Mikulas Patocka
  2009-11-04 17:48                 ` Martin Nybo Andersen
  0 siblings, 1 reply; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 17:25 UTC (permalink / raw)
  To: Alan Cox; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

On Wed, 4 Nov 2009, Alan Cox wrote:

> > - With Linux package managers, the user is stuck with the software and 
> >   version shipped by the distribution. If he wants to install anything 
> >   newer or older, it turns into black magic and the typical desktop user 
> >   (non-hacker) can't do it.
> 
> In the rpm/yumworld that would be "yum downgrade" and "yum upgrade" for
> packages or whatever button on whatever gui wrapper you happen to have.

And what if there isn't a package? The upgrade option doesn't solve the need 
for a [ distributions X software ] matrix of packages.

> And of course yum supports third party repositories so you can also deal
> with the updating problem which Windows tends not to do well for third
> party software.

A practical example --- when I wanted to get Wine on RHEL 5, all I found 
was a package for 1.0.1. Nothing newer.

I managed to compile the current version of Wine (it wasn't straightforward 
and took a few days to solve all the problems) and it ran the program I 
wanted. But I can imagine that a typical business user or home gamer will 
just say "that Linux sux".

You can say that I should delete RHEL-5 and install Fedora, but that is 
just that "upgrade one program" => "upgrade all programs" problem.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 16:40           ` package managers [was: FatELF patches...] Mikulas Patocka
  2009-11-04 16:54             ` Alan Cox
@ 2009-11-04 17:36             ` Valdis.Kletnieks
  2009-11-04 20:28             ` Ryan C. Gordon
  2 siblings, 0 replies; 69+ messages in thread
From: Valdis.Kletnieks @ 2009-11-04 17:36 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Ryan C. Gordon, Alan Cox, Måns Rullgård, linux-kernel

On Wed, 04 Nov 2009 17:40:02 +0100, Mikulas Patocka said:

> - With Windows installers (next - next - next - finish), even a 
>   technically unskilled person can select which version of a given 
>   software he wants to use. If the software doesn't work, he can simply 
>   uninstall it and try another version.

Theoretically.  There's this little detail called "DLL Hell" though...

(And one could reasonably argue that it requires *more* clue to resolve a
DLL Hell issue than it does to fix the equivalent dependency issue on Linux...)


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 17:25               ` Mikulas Patocka
@ 2009-11-04 17:48                 ` Martin Nybo Andersen
  2009-11-04 18:46                   ` Mikulas Patocka
  0 siblings, 1 reply; 69+ messages in thread
From: Martin Nybo Andersen @ 2009-11-04 17:48 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

On Wednesday 04 November 2009 18:25:07 Mikulas Patocka wrote:
> On Wed, 4 Nov 2009, Alan Cox wrote:
> > > - With Linux package managers, the user is stuck with the software and
> > >   version shipped by the distribution. If he wants to install anything
> > >   newer or older, it turns into black magic and the typical desktop
> > > user (non-hacker) can't do it.
> >
> > In the rpm/yumworld that would be "yum downgrade" and "yum upgrade" for
> > packages or whatever button on whatever gui wrapper you happen to have.
> 
> And what if there isn't a package? Upgrade option doesn't solve the need
> for [ distributions X software ] matrix of packages.
> 
> > And of course yum supports third party repositories so you can also deal
> > with the updating problem which Windows tends not to do well for third
> > party software.
> 
> A practical example --- when I wanted to get Wine on RHEL 5, all I found
> was a package for 1.0.1. Nothing newer.
> 
> I managed to compile the current version of Wine (it wasn't straightforward
> and took a few days to solve all the problems) and it ran the program I
> wanted. But I can imagine that a typical business user or home gamer will
> just say "that Linux sux".
> 
> You can say that I should delete RHEL-5 and install Fedora, but that is
> just that "upgrade one program" => "upgrade all programs" problem.

Have you ever tried upgrading Windows because some program is incompatible 
with the current installation? ... That is indeed an 'upgrade all' procedure 
... _If_ you're lucky enough to be able to reinstall your software.

Being able to upgrade at least Debian -- and others as well -- without the 
need to attend the computer is IMHO one of Linux's biggest wins.

BTW: Wine has, like many others, the newest version of their software 
prepackaged for RHEL 4 & 5 among others at their site: 
http://www.winehq.org/download/

If all else fails, the developers could go for statically compiled binaries in 
an executable tarball, which then handles the installation to /usr/local.
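
(A minimal sketch of such an executable tarball, makeself-style -- the 
marker name and install prefix are arbitrary:

  #!/bin/sh
  # everything after the __ARCHIVE__ marker line is a gzipped tarball
  skip=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
  tail -n +"$skip" "$0" | tar xzf - -C /usr/local
  exit 0
  __ARCHIVE__

with the tarball bytes concatenated after the marker line.)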

-Martin


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 17:48                 ` Martin Nybo Andersen
@ 2009-11-04 18:46                   ` Mikulas Patocka
  2009-11-04 19:46                     ` Alan Cox
                                       ` (2 more replies)
  0 siblings, 3 replies; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 18:46 UTC (permalink / raw)
  To: Martin Nybo Andersen
  Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

> > You can say that I should delete RHEL-5 and install Fedora, but that is
> > just that "upgrade one program" => "upgrade all programs" problem.
> 
> Have you ever tried upgrading Windows because some program is incompatible 
> with the current installation? ... That is indeed an 'upgrade all' procedure 
> ... _If_ you're lucky enough to be able to reinstall your software.

Some Windows programs force an upgrade, but not in yearly cycles like Linux 
programs. The majority of programs still work on XP, shipped in 2001.

> Being able to upgrade at least Debian -- and others as well -- without the 
> need to attend the computer is IMHO one of Linux' biggest wins.

When I did it (from Etch to Lenny), two programs that I had compiled 
manually ("vim" and "links") stopped working because Etch and Lenny have 
binary-incompatible libgpm.

If some library cannot keep binary compatibility, it should be linked 
statically; the dynamic version shouldn't even exist on the system --- so that 
no one can create incompatible binaries.

> BTW: Wine has, like many others, the newest version of their software 
> prepackaged for RHEL 4 & 5 among others at their site: 
> http://www.winehq.org/download/

This is exactly the link that I followed and the last version for "RHEL 5" 
is "wine-1.0.1-1.el5.i386.rpm".

> If all else fail the developers could go for statically compiled binaries in 
> an executable tarball, which then handles the installation to /usr/local
> 
> -Martin

Static linking doesn't work for any program that needs plug-ins (i.e. 
you'd have one glibc statically linked into the program and another glibc 
dynamically linked in with a plug-in, and these two glibcs will fight each 
other).
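
(Schematically, the broken combination looks like this:

% file ./editor
./editor: ELF 32-bit LSB executable, ... statically linked ...
% ldd ./plugin.so
        libc.so.6 => /lib/libc.so.6 (0x...)

dlopen()ing that plugin drags a second, shared glibc into a process that 
already contains a private static copy -- two allocators and two sets of 
internal state in one address space.)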

---

I mean this --- the distributions should agree on a common set of 
libraries and their versions (call this for example "Linux-2010 
standard"). This standard should include libraries that are used 
frequently, that have low occurence of bugs and security holes and that 
have never had an ABI change.

A distribution that claims compatibility with the standard must ship 
libraries that are compatible with the libraries in the standard (not 
necessarily the same version, it may ship higher version for security or 
so).

Software developers that claim compatibility with the standard will link 
standard libraries dynamically and must use static linking for all 
libraries not included in the standard. Or they can use dynamic linking 
and ship the non-standard library with the application in its private 
directory (so that nothing but that application links against it).

Then, software developers could make a release for "Linux-2010" and it 
would work on all distributions.

You'd no longer need a [ distributions X programs ] matrix of binaries 
and packages.

In five years, you could revisit the standard as "Linux-2015" with newer 
versions of libraries and force users into upgrades only every five years, not 
yearly upgrades as it is now. "Linux-2015" should be backward compatible 
with "Linux-2010", so a user doing the upgrade would only need to overwrite 
his /lib and /usr/lib; he wouldn't even need to change the programs.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 18:46                   ` Mikulas Patocka
@ 2009-11-04 19:46                     ` Alan Cox
  2009-11-04 20:04                       ` Mikulas Patocka
  2009-11-04 20:02                     ` Valdis.Kletnieks
  2009-11-10 11:57                     ` Enrico Weigelt
  2 siblings, 1 reply; 69+ messages in thread
From: Alan Cox @ 2009-11-04 19:46 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Martin Nybo Andersen, Ryan C. Gordon, Måns Rullgård,
	linux-kernel

> > BTW: Wine has, like many others, the newest version of their software 
> > prepackaged for RHEL 4 & 5 among others at their site: 
> > http://www.winehq.org/download/
> 
> This is exactly the link that I followed and the last version for "RHEL 5" 
> is "wine-1.0.1-1.el5.i386.rpm".

So you have a supplier issue. A random Windows user wouldn't cope with
that either. Try installing a Windows Vista-only app on XP ;)

> A distribution that claims compatibility with the standard must ship 
> libraries that are compatible with the libraries in the standard (not 
> necessarily the same version, it may ship higher version for security or 
> so).

Welcome to the Linux Standard Base. It's been done and it exists.
Generally speaking, open source projects don't seem to care to build to it
but prefer to build to each distro.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 18:46                   ` Mikulas Patocka
  2009-11-04 19:46                     ` Alan Cox
@ 2009-11-04 20:02                     ` Valdis.Kletnieks
  2009-11-04 20:08                       ` Mikulas Patocka
  2009-11-10 11:57                     ` Enrico Weigelt
  2 siblings, 1 reply; 69+ messages in thread
From: Valdis.Kletnieks @ 2009-11-04 20:02 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

On Wed, 04 Nov 2009 19:46:44 +0100, Mikulas Patocka said:

> When I did it (from Etch to Lenny), two programs that I had compiled 
> manually ("vim" and "links") stopped working because Etch and Lenny have 
> binary-incompatible libgpm.
> 
> If some library cannot keep binary compatibility, it should be linked 
> > statically; the dynamic version shouldn't even exist on the system --- so that 
> no one can create incompatible binaries.

No, all they need to do is bump the .so version number.
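
(Mechanically that's the soname the library is linked with; bump it on an 
ABI break and both generations coexist -- a sketch:

% gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c
% gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo.c
% ldconfig -n .     # creates the libfoo.so.1 and libfoo.so.2 symlinks

Old binaries keep resolving libfoo.so.1, new ones get libfoo.so.2.)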

I have a creeping-horror binary that was linked against an older audit shared
library. Fedora shipped a newer one.  The fix?  Upgraded the lib, then
snarfed the old version off backups (you *do* make backups, right?)

% ls -l /lib64/libaudit*
lrwxrwxrwx 1 root root     17 2009-09-26 16:47 /lib64/libaudit.so.0 -> libaudit.so.0.0.0
-rwxr-xr-x 1 root root 107304 2009-04-03 15:47 /lib64/libaudit.so.0.0.0
lrwxrwxrwx 1 root root     17 2009-09-30 11:09 /lib64/libaudit.so.1 -> libaudit.so.1.0.0
-rwxr-xr-x 1 root root 103208 2009-09-28 16:00 /lib64/libaudit.so.1.0.0

They happily co-exist. My creeping horror references libaudit.so.0, the rest
of the system references libaudit.so.1 and everybody is happy.

And some distros even pre-package the previous set of libraries for some packages:

% yum list 'compat*'
Loaded plugins: dellsysidplugin2, downloadonly, refresh-packagekit, refresh-updatesd
Installed Packages
compat-expat1.x86_64                                          1.95.8-6                                    @rawhide
compat-readline5.i686                                         5.2-17.fc12                                 @rawhide
compat-readline5.x86_64                                       5.2-17.fc12                                 @rawhide
Available Packages
compat-db.x86_64                                              4.6.21-5.fc10                               rawhide 
compat-db45.x86_64                                            4.5.20-5.fc10                               rawhide 
compat-db46.x86_64                                            4.6.21-5.fc10                               rawhide 
compat-erlang.x86_64                                          R10B-15.12.fc12                             rawhide 
compat-expat1.i686                                            1.95.8-6                                    rawhide 
compat-flex.x86_64                                            2.5.4a-6.fc12                               rawhide 
compat-gcc-34.x86_64                                          3.4.6-18                                    rawhide 
compat-gcc-34-c++.x86_64                                      3.4.6-18                                    rawhide 
compat-gcc-34-g77.x86_64                                      3.4.6-18                                    rawhide 
compat-guichan05.i686                                         0.5.0-10.fc12                               rawhide 
compat-guichan05.x86_64                                       0.5.0-10.fc12                               rawhide 
compat-guichan05-devel.i686                                   0.5.0-10.fc12                               rawhide 
compat-guichan05-devel.x86_64                                 0.5.0-10.fc12                               rawhide 
compat-libf2c-34.i686                                         3.4.6-18                                    rawhide 
compat-libf2c-34.x86_64                                       3.4.6-18                                    rawhide 
compat-libgda.i686                                            3.1.2-3.fc12                                rawhide 
compat-libgda.x86_64                                          3.1.2-3.fc12                                rawhide 
compat-libgda-devel.i686                                      3.1.2-3.fc12                                rawhide 
compat-libgda-devel.x86_64                                    3.1.2-3.fc12                                rawhide 
compat-libgda-sqlite.x86_64                                   3.1.2-3.fc12                                rawhide 
compat-libgda-sqlite-devel.i686                               3.1.2-3.fc12                                rawhide 
compat-libgda-sqlite-devel.x86_64                             3.1.2-3.fc12                                rawhide 
compat-libgdamm.i686                                          3.0.1-4.fc12                                rawhide 
compat-libgdamm.x86_64                                        3.0.1-4.fc12                                rawhide 
compat-libgdamm-devel.i686                                    3.0.1-4.fc12                                rawhide 
compat-libgdamm-devel.x86_64                                  3.0.1-4.fc12                                rawhide 
compat-libgfortran-41.i686                                    4.1.2-38                                    rawhide 
compat-libgfortran-41.x86_64                                  4.1.2-38                                    rawhide 
compat-libstdc++-296.i686                                     2.96-143                                    rawhide 
compat-libstdc++-33.i686                                      3.2.3-68                                    rawhide 
compat-libstdc++-33.x86_64                                    3.2.3-68                                    rawhide 
compat-readline5-devel.i686                                   5.2-17.fc12                                 rawhide 
compat-readline5-devel.x86_64                                 5.2-17.fc12                                 rawhide 
compat-readline5-static.x86_64                                5.2-17.fc12                                 rawhide 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 19:46                     ` Alan Cox
@ 2009-11-04 20:04                       ` Mikulas Patocka
  2009-11-04 20:27                         ` david
  0 siblings, 1 reply; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 20:04 UTC (permalink / raw)
  To: Alan Cox
  Cc: Martin Nybo Andersen, Ryan C. Gordon, Måns Rullgård,
	linux-kernel

> Welcome to the Linux Standard Base. It's been done and it exists.
> Generally speaking open source projects don't seem to care to build to it
> but prefer to build to each distro.

Why?

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 20:02                     ` Valdis.Kletnieks
@ 2009-11-04 20:08                       ` Mikulas Patocka
  2009-11-04 20:41                         ` Valdis.Kletnieks
  0 siblings, 1 reply; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 20:08 UTC (permalink / raw)
  To: Valdis.Kletnieks
  Cc: Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

On Wed, 4 Nov 2009, Valdis.Kletnieks@vt.edu wrote:

> On Wed, 04 Nov 2009 19:46:44 +0100, Mikulas Patocka said:
> 
> > When I did it (from Etch to Lenny), two programs that I had compiled 
> > manually ("vim" and "links") stopped working because Etch and Lenny have 
> > binary-incompatible libgpm.
> > 
> > If some library cannot keep binary compatibility, it should be linked 
> > statically; the dynamic version shouldn't even exist on the system --- so that 
> > no one can create incompatible binaries.
> 
> No, all they need to do is bump the .so version number.

That's what Debian did. Obviously, I can extract the old library from the 
old package. But a non-technical desktop user can't.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 20:04                       ` Mikulas Patocka
@ 2009-11-04 20:27                         ` david
  0 siblings, 0 replies; 69+ messages in thread
From: david @ 2009-11-04 20:27 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Alan Cox, Martin Nybo Andersen, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

On Wed, 4 Nov 2009, Mikulas Patocka wrote:

>> Welcome to the Linux Standard Base. It's been done and it exists.
>> Generally speaking open source projects don't seem to care to build to it
>> but prefer to build to each distro.
>
> Why?

also note that commercial products generally don't use LSB either.

David Lang

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 16:40           ` package managers [was: FatELF patches...] Mikulas Patocka
  2009-11-04 16:54             ` Alan Cox
  2009-11-04 17:36             ` Valdis.Kletnieks
@ 2009-11-04 20:28             ` Ryan C. Gordon
  2 siblings, 0 replies; 69+ messages in thread
From: Ryan C. Gordon @ 2009-11-04 20:28 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Alan Cox, Måns Rullgård, linux-kernel


> No, package managers are an evil feature that suppresses third-party software 
> and kills Linux's success on the desktop.

There are merits and flaws, of course, but I'm going to take this moment 
to encourage everyone to not descend into a conversation about this on 
linux-kernel. My point with FatELF wasn't to start a conversation about 
package management at all, let alone on this mailing list.

--ryan.


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 20:08                       ` Mikulas Patocka
@ 2009-11-04 20:41                         ` Valdis.Kletnieks
  2009-11-04 21:11                           ` Mikulas Patocka
  0 siblings, 1 reply; 69+ messages in thread
From: Valdis.Kletnieks @ 2009-11-04 20:41 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

On Wed, 04 Nov 2009 21:08:01 +0100, Mikulas Patocka said:
> On Wed, 4 Nov 2009, Valdis.Kletnieks@vt.edu wrote:
> 
> > On Wed, 04 Nov 2009 19:46:44 +0100, Mikulas Patocka said:
> > 
> > > When I did it (from Etch to Lenny), two programs that I had compiled 
> > > manually ("vim" and "links") stopped working because Etch and Lenny have 
> > > binary-incompatible libgpm.
> > > 
> > > If some library cannot keep binary compatibility, it should be linked 
> > > statically; the dynamic version shouldn't even exist on the system --- so that 
> > > no one can create incompatible binaries.
> > 
> > No, all they need to do is bump the .so version number.
> 
> That's what Debian did. Obviously, I can extract the old library from the 
> old package. But a non-technical desktop user can't.

But the non-technical user probably wouldn't have hand-compiled vim and links
either, so how would they get into that situation?

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 20:41                         ` Valdis.Kletnieks
@ 2009-11-04 21:11                           ` Mikulas Patocka
  2009-11-04 21:32                             ` kevin granade
  2009-11-04 23:11                             ` Valdis.Kletnieks
  0 siblings, 2 replies; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 21:11 UTC (permalink / raw)
  To: Valdis.Kletnieks
  Cc: Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

> > > No, all they need to do is bump the .so version number.
> > 
> > That's what Debian did. Obviously, I can extract the old library from the 
> > old package. But a non-technical desktop user can't.
> 
> But the non-technical user probably wouldn't have hand-compiled vim and links
> either, so how would they get into that situation?

Non-technical users won't hand-compile, but they do want third-party 
software that doesn't come from the distribution. And the package 
management system hates that. Truly. It is written on the assumption that 
everything installed is registered in the package database.

Another example: I needed a newer binutils because it had some bugs fixed 
over the standard Debian binutils. So I downloaded the .tar.gz from 
ftp.gnu.org, compiled it, issued a command to remove the old package 
(passing a flag to ignore broken dependencies) and then typed make install 
to install the new binaries. --- guess what --- on every further 
invocation of dselect it complained that there were broken dependencies 
(the compiler needs binutils) and tried to reinstall the old binutils 
package!

Why is the package management so stupid? Why can't it check $PATH for "ld" 
and, if there is one, not try to install it again?

After a few hours, I resolved the issue by creating an empty "binutils" 
package and stuffing it into the database.
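
(Roughly like this, with plain dpkg-deb --- the field values below are 
illustrative, and the epoch in Version only exists to outrank the repo:

    mkdir -p binutils-dummy/DEBIAN
    cat > binutils-dummy/DEBIAN/control <<'EOF'
    Package: binutils
    Version: 9:99-local1
    Architecture: all
    Maintainer: local admin <root@localhost>
    Description: dummy package; the real binutils lives in /usr/local
    EOF
    dpkg-deb --build binutils-dummy
    dpkg -i binutils-dummy.deb

Debian's equivs tool, mentioned later in this thread, automates the same 
trick.)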

Now, if I were not a programmer ... if I were an artist who needs the 
latest version of graphics software, if I were a musician who needs the 
latest version of audio software, if I were a gamer who needs the latest 
version of wine ... I'd be f'cked. That's why I think that package 
management is an evil feature that hurts desktop users. As a technical 
user, I can somehow solve these quirks and install what I want; as a 
non-technical user, I wouldn't have a chance.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 21:11                           ` Mikulas Patocka
@ 2009-11-04 21:32                             ` kevin granade
  2009-11-04 22:05                               ` Mikulas Patocka
  2009-11-04 23:11                             ` Valdis.Kletnieks
  1 sibling, 1 reply; 69+ messages in thread
From: kevin granade @ 2009-11-04 21:32 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Valdis.Kletnieks, Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

On Wed, Nov 4, 2009 at 3:11 PM, Mikulas Patocka
<mikulas@artax.karlin.mff.cuni.cz> wrote:
>> > > No, all they need to do is bump the .so version number.
>> >
>> > That's what Debian did. Obviously, I can extract the old library from the
>> > old package. But a non-technical desktop user can't.
>>
>> But the non-technical user probably wouldn't have hand-compiled vim and links
>> either, so how would they get into that situation?
>
> Non-technical users won't hand-compile, but they do want third-party
> software that doesn't come from the distribution. And the package
> management system hates that. Truly. It is written on the assumption
> that everything installed is registered in the package database.
>
> Another example: I needed a newer binutils because it had some bugs fixed
> over the standard Debian binutils. So I downloaded the .tar.gz from
> ftp.gnu.org, compiled it, issued a command to remove the old package
> (passing a flag to ignore broken dependencies) and then typed make install
> to install the new binaries. --- guess what --- on every further
> invocation of dselect it complained that there were broken dependencies
> (the compiler needs binutils) and tried to reinstall the old binutils
> package!
>
> Why is the package management so stupid? Why can't it check $PATH for "ld"
> and, if there is one, not try to install it again?
>
> After a few hours, I resolved the issue by creating an empty "binutils"
> package and stuffing it into the database.
>
> Now, if I were not a programmer ... if I were an artist who needs the
> latest version of graphics software, if I were a musician who needs the
> latest version of audio software, if I were a gamer who needs the latest
> version of wine ... I'd be f'cked. That's why I think that package
> management is an evil feature that hurts desktop users. As a technical
> user, I can somehow solve these quirks and install what I want; as a
> non-technical user, I wouldn't have a chance.

I think the important question here is what exactly it is that the
package manager *did* to break the app you are talking about?  Did it
keep the person who released the software from including the required
libraries?  Did it keep them from compiling it statically?  Did it
interfere with them building against the LSB?  No, it didn't do any of
these things; all it did was not be as up to date as you wanted it to
be, and not magically discern that you've replaced one of the most core
packages in the system (which, by the way, is most definitely not
something that 99.999% of users are going to try).

I'm of the opinion that the package manager IS the "killer app" for
Linux, and the main thing that makes it usable at all for the
less-technical users you seem to think it is driving off.  Is it
perfect?  Of course not, particularly if you want to strike off on
your own and install things manually.  But the pain you're running
into when you do that isn't caused by the package manager; it's what
is left if you take the package manager away.

>
> Mikulas
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 21:32                             ` kevin granade
@ 2009-11-04 22:05                               ` Mikulas Patocka
  2009-11-04 22:19                                 ` Marcin Letyns
  2009-11-04 22:43                                 ` Martin Nybo Andersen
  0 siblings, 2 replies; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 22:05 UTC (permalink / raw)
  To: kevin granade
  Cc: Valdis.Kletnieks, Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

> I think the important question here is what exactly it is that the
> package manager *did* to break the app you are talking about?

It interfered with my wish to install the version of the software that I 
want.

> be, and not magically discern that you've replaced one of
> the most core packages in the system (which, by the way, is most
> definitely not something that 99.999% of users are going to try)

If you need a new 3D driver because of better gaming performance ... if 
you need a new lame because it encodes mp3 better ... if you need a new 
libsane because it supports the new scanner that you have ... you are 
going to face the same problems as I did when I needed a new binutils. 
But the big problem is that the people needing these things usually don't 
have the skills to install the software on their own and then fight with 
the package management system.

On Windows, the user can just download the EXE, run it, click 
next-next-next-finish and have it installed. There is no package 
management that would try to overwrite what you have just installed.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 22:05                               ` Mikulas Patocka
@ 2009-11-04 22:19                                 ` Marcin Letyns
  2009-11-04 22:28                                   ` david
  2009-11-04 22:43                                 ` Martin Nybo Andersen
  1 sibling, 1 reply; 69+ messages in thread
From: Marcin Letyns @ 2009-11-04 22:19 UTC (permalink / raw)
  To: Mikulas Patocka, linux-kernel

2009/11/4 Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>:
>
> It interfered with my wish to install the version of the software that I
> want.

You did it in a very idiotic way...

>
> If you need a new 3D driver because of better gaming performance ... if you
> need a new lame because it encodes mp3 better ... if you need a new libsane
> because it supports the new scanner that you have ... you are going to
> face the same problems as I did when I needed a new binutils. But the big
> problem is that the people needing these things usually don't have the
> skills to install the software on their own and then fight with the
> package management system.

You use a rolling distro or add a proper repository with newer
packages. Nope, I have never faced such problems, but then I'm smart
enough to install software in the proper way. I consider package managers
a killer feature that you can only dream about as a Windows user.

> On Windows, the user can just download the EXE, run it, click
> next-next-next-finish and have it installed. There is no package
> management that would try to overwrite what you have just installed.

On Windows, the user cannot upgrade the entire system as easily (he
can't even install a single thing as easily) as Linux distros let you
do. I recommend you stop writing such bull. It was you who wanted to
overwrite what you had just installed. Stop trolling.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 22:19                                 ` Marcin Letyns
@ 2009-11-04 22:28                                   ` david
  0 siblings, 0 replies; 69+ messages in thread
From: david @ 2009-11-04 22:28 UTC (permalink / raw)
  To: Marcin Letyns; +Cc: Mikulas Patocka, linux-kernel

On Wed, 4 Nov 2009, Marcin Letyns wrote:

> 2009/11/4 Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>:
>>
>> It interfered with my wish to install the version of the software that I
>> want.
>
> You did it in a very idiotic way...

He's not alone in trying to do this.

Package managers are wonderful when they work and the package you need is 
in there. They are a pain to work around when the package you want isn't 
in the repository. If the package just isn't in there, it's not a big 
deal to handle it yourself; the problem comes when you want a package 
that's different from one that _is_ in the repository.

How easy or hard it is to work around the package manager depends in 
large part on whether you know the tricks for that particular package 
manager.

And no, a rolling-update distro doesn't solve the problem. One issue is 
that trying to upgrade one package may trigger a pull of many others, but 
the bigger problem shows up when you need to compile a package with 
different options and really need to tell the package manager "hands off, 
I'll do this manually". They all have a way to do this, but most of the 
time it means learning enough about how packages work on that system to 
be able to create a dummy package to trick the package manager.

I think both sides here are overstating it.

Package managers are neither the solution to all possible problems, nor 
are they the root of all evil.

David Lang

>>
>> If you need a new 3D driver because of better gaming performance ... if you
>> need a new lame because it encodes mp3 better ... if you need a new libsane
>> because it supports the new scanner that you have ... you are going to
>> face the same problems as I did when I needed a new binutils. But the big
>> problem is that the people needing these things usually don't have the
>> skills to install the software on their own and then fight with the
>> package management system.
>
> You use a rolling distro or add a proper repository with newer
> packages. Nope, I have never faced such problems, but then I'm smart
> enough to install software in the proper way. I consider package managers
> a killer feature that you can only dream about as a Windows user.
>
>> On Windows, the user can just download the EXE, run it, click
>> next-next-next-finish and have it installed. There is no package
>> management that would try to overwrite what you have just installed.
>
> On Windows, the user cannot upgrade the entire system as easily (he
> can't even install a single thing as easily) as Linux distros let you
> do. I recommend you stop writing such bull. It was you who wanted to
> overwrite what you had just installed. Stop trolling.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 22:05                               ` Mikulas Patocka
  2009-11-04 22:19                                 ` Marcin Letyns
@ 2009-11-04 22:43                                 ` Martin Nybo Andersen
  2009-11-04 23:55                                   ` Mikulas Patocka
  1 sibling, 1 reply; 69+ messages in thread
From: Martin Nybo Andersen @ 2009-11-04 22:43 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: kevin granade, Valdis.Kletnieks, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

On Wednesday 04 November 2009 23:05:17 Mikulas Patocka wrote:
> > I think the important question here is what exactly it is that the
> > package manager *did* to break the app you are talking about?
> 
> It interfered with my wish to install the version of the software that I
> want.
> 
> > be, and not magically discern that you've replaced one of
> > the most core packages in the system (which, by the way, is most
> > definitely not something that 99.999% of users are going to try)
> 
> If you need a new 3D driver because of better gaming performance ... if you
> need a new lame because it encodes mp3 better ... if you need a new libsane
> because it supports the new scanner that you have ... you are going to
> face the same problems as I did when I needed a new binutils. But the big
> problem is that the people needing these things usually don't have the
> skills to install the software on their own and then fight with the
> package management system.
> 
> On Windows, the user can just download the EXE, run it, click
> next-next-next-finish and have it installed. There is no package
> management that would try to overwrite what you have just installed.

Exactly. There is nothing to keep you from installing incompatible 
software (i.e. libraries). If your next-next-next-finish installer 
overwrites a crucial library, you're screwed. The package manager, on the 
other hand, knows about all your installed files and their dependencies 
and conflicts.

If you really want to fiddle with your own software versions, dependencies, 
and conflicts, then the equivs package is a perfect helper, which lets you 
create virtual Debian packages (empty packages with dependencies and such; 
see the sketch below). For instance, I compile mplayer directly from the 
subversion repository - however, I still have some packages installed 
which depend on mplayer. Here the virtual mplayer package keeps apt and 
friends from complaining.
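
(A minimal sketch of that workflow; the file name, version and 
description below are made up for illustration:

    equivs-control mplayer-dummy.ctl   # writes a template control file
    # edit at least these fields in mplayer-dummy.ctl:
    #   Package: mplayer
    #   Version: 2:99-local1
    #   Description: placeholder for a hand-built mplayer
    equivs-build mplayer-dummy.ctl     # produces an empty .deb
    dpkg -i mplayer_*.deb

From then on dpkg believes mplayer is installed, and dependency checks 
pass.)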

My home-brewed mplayer will still fail to work when a needed library is 
gone, but now I have only about a dozen apps that can break this way (all 
nicely installed under /usr/local/stow, btw).

Without the package manager, it would have been all of them.

Another nice thing about apt: it's an installer that frees you from the 
next-next-next steps. ;-)

-Martin

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 21:11                           ` Mikulas Patocka
  2009-11-04 21:32                             ` kevin granade
@ 2009-11-04 23:11                             ` Valdis.Kletnieks
  2009-11-05  0:05                               ` Mikulas Patocka
  1 sibling, 1 reply; 69+ messages in thread
From: Valdis.Kletnieks @ 2009-11-04 23:11 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1659 bytes --]

On Wed, 04 Nov 2009 22:11:47 +0100, Mikulas Patocka said:

> Another example: I needed a newer binutils because it had some bugs fixed 
> over the standard Debian binutils. So I downloaded the .tar.gz from 
> ftp.gnu.org, compiled it, issued a command to remove the old package 
> (passing a flag to ignore broken dependencies) and then typed make install 
> to install the new binaries. --- guess what --- on every further 
> invocation of dselect it complained that there were broken dependencies 
> (the compiler needs binutils) and tried to reinstall the old binutils 
> package!

> Why is the package management so stupid? Why can't it check $PATH for "ld" 
> and, if there is one, not try to install it again?

Because it has no way to tell what version of /usr/bin/foobar you installed
behind its back, whether it's GNU Foobar or some other foobar, what its
flags are, whether it's bug-compatible with the foobar other things on the
system are expecting, and so on. (And go look at the scripts/ver_linux file
in the Linux source tree before you suggest the package manager run the
program to find out its version. That's only 10-15 binaries, and you'd need
something like that for *every single thing* in /usr/bin.) And it can't
blindly assume you installed a newer version - you may have intentionally
installed a *backlevel* binary, because you found a showstopper bug in the
shipped version. So the only sane thing it can do is try to re-install what
it thinks is current.

Walking $PATH is even worse - if it finds a /usr/local/bin/ld, it's a pretty
damned good guess that it's there *because* it's not the /bin/ld that the
system shipped with.  So why should it use it?
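
(Concretely: the package database can map any path it installed back to 
its owning package --- output abbreviated here ---

    dpkg -S /usr/bin/ld    # Debian: "binutils: /usr/bin/ld"
    rpm -qf /usr/bin/ld    # Fedora: "binutils-<version>"

but a binary installed behind its back maps to no package at all, so 
there is nothing the manager can safely infer about it.)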

[-- Attachment #2: Type: application/pgp-signature, Size: 227 bytes --]

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 22:43                                 ` Martin Nybo Andersen
@ 2009-11-04 23:55                                   ` Mikulas Patocka
  2009-11-05  2:24                                     ` Valdis.Kletnieks
  0 siblings, 1 reply; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-04 23:55 UTC (permalink / raw)
  To: Martin Nybo Andersen
  Cc: kevin granade, Valdis.Kletnieks, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

> > On Windows, the user can just download the EXE, run it, click
> > next-next-next-finish and have it installed. There is no package
> > management that would try to overwrite what you have just installed.
> 
> Exactly. There is nothing to help you from installing incompatible software 
> (ie libraries). If your next-next-next-finish installer overwrites a crucial 
> library, you're screwed. The package manager, on the other hand, knows about 
> all your installed files and their dependencies and conflicts.

The package manager can make the system unbootable too --- because of bugs 
in it or in packages.

In some situations, the package manager is even more dangerous than a 
manual install. For example, if you are manually installing a new 
alpha-quality version of mplayer, and it is buggy, you end up with a 
working system with a broken mplayer. If you install an alpha-quality 
version from some package repository, it may need an experimental version 
of libfoo, which needs an experimental version of libfee, which needs an 
experimental version of glibc that contains a bug --- and you won't boot 
(are rescue CDs smart enough to revert such an upgrade?).

> If you really want to fiddle with your own software versions, 
> dependencies, and conflicts, then the equivs package is a perfect 
> helper, which lets you create virtual Debian packages (empty packages 
> with dependencies and such). For instance, I compile mplayer directly 
> from the subversion repository - however, I still have some packages 
> installed which depend on mplayer. Here the virtual mplayer package 
> keeps apt and friends from complaining.

Nice description ... the problem is that for desktop users it is still too 
complicated a task.

I think the ultimate installer should work somehow like this (a rough 
sketch in shell follows the list):
- extract the configure file from the *.tar.gz package.
- parse the options from configure (or configure.in/.ac) and present them 
to the user as checkboxes / input fields.
- let him click through the options with next-next-next-finish buttons :)
- try to run configure with the user's options.
- if it fails, try to parse its output and install the missing 
dependencies. This can't work in 100% of cases, but it can work in >90% 
of cases --- if we fail to parse the output, present it to the user and 
let him act on it.
- compile the program in the background (or the foreground, for geeks :)
- run "make install" into a temporary directory.
- record what it tried to install (for possible undo) and then copy the 
files to the real places.
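
A rough sketch of those last steps, assuming a conventional autotools 
tarball (the file names and the $USER_OPTS variable are illustrative):

    tar xzf prog.tar.gz && cd prog-*
    ./configure --help                 # option list to turn into checkboxes
    ./configure $USER_OPTS && make
    make install DESTDIR=/tmp/stage    # stage instead of touching /
    find /tmp/stage -type f > manifest.txt   # record for a later undo
    (cd /tmp/stage && cp -a . /)             # then copy into place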

At least this would allow Linux users to use a lot of the available free 
software without relying on what the distribution does or doesn't pack. 
The user would work just like on Windows: download the program from the 
developer's webpage and install it. He could upgrade or downgrade to any 
available version released by the developer.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 23:11                             ` Valdis.Kletnieks
@ 2009-11-05  0:05                               ` Mikulas Patocka
  0 siblings, 0 replies; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-05  0:05 UTC (permalink / raw)
  To: Valdis.Kletnieks
  Cc: Martin Nybo Andersen, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

> Walking $PATH is even worse - if it finds a /usr/local/bin/ld, it's a pretty
> damned good guess that it's there *because* it's not the /bin/ld that the
> system shipped with.  So why should it use it?

If it finds /usr/local/bin/ld it's because the admin installed it there 
--- and he installed it there because he wants it to be used. So it's OK 
to use it.

Anyway, if you have both /usr/bin/ld and /usr/local/bin/ld, you are in a 
pretty unpleasant situation, because different programs search for them in 
different orders and you never know which one will be used. (E.g. what if 
a ./configure script prepends /usr/local/bin to $PATH ... or prepends 
/usr/bin? Who knows --- no one can check all ./configure scripts. These 
scripts do crazy things just because something once worked around some 
flaw on some ancient Unix system.)

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 23:55                                   ` Mikulas Patocka
@ 2009-11-05  2:24                                     ` Valdis.Kletnieks
  2009-11-05  2:52                                       ` Mikulas Patocka
  0 siblings, 1 reply; 69+ messages in thread
From: Valdis.Kletnieks @ 2009-11-05  2:24 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Martin Nybo Andersen, kevin granade, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1919 bytes --]

On Thu, 05 Nov 2009 00:55:53 +0100, Mikulas Patocka said:

> In some situations, the package manager is even more dangerous than a 
> manual install. For example, if you are manually installing a new 
> alpha-quality version of mplayer, and it is buggy, you end up with a 
> working system with a broken mplayer. If you install an alpha-quality 
> version from some package repository, it may need an experimental version 
> of libfoo, which needs an experimental version of libfee, which needs an 
> experimental version of glibc that contains a bug 

Total bullshit.  You know *damned* well that if you were installing that alpha
version of mplayer by hand, and it needed an experimental libfoo, you'd go and
build libfoo by hand, and then build the experimental libfee by hand, and then
shoehorn in that glibc by hand, and brick your system anyhow.

Or if you're arguing "you'd give up after seeing it needed an experimental
libfoo", I'll counter "you'd hopefully think twice if yum said it was
installing an experimental mplayer, and dragging in a whole chain of pre-reqs".

And any *sane* package manager won't even *try* to install an experimental one
unless you specifically *tell* it that the vendor-testing repository is
fair game.  You install Fedora, it looks in Releases and Updates.  If you want
it to look for testing versions in Rawhide, you have to enable that by hand.
I'm positive Debian and Ubuntu and Suse are similar.

Plus, building by hand you're *more* likely to produce a brickable library,
because you didn't specify the same './configure --enable-foobar' flags that
the rest of your system was expecting. (Been there, done that - reported a
Fedora Rawhide bug that an X11 upgrade borked the keyboard mapping, so the
keysym reported for 'uparrow' was 'Katakana', among other things.  Actual root
cause - running a -mm kernel that didn't have CONFIG_INPUT_EVDEV defined.
The previous X didn't care; the updated one did. Whoops.)


[-- Attachment #2: Type: application/pgp-signature, Size: 227 bytes --]

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-05  2:24                                     ` Valdis.Kletnieks
@ 2009-11-05  2:52                                       ` Mikulas Patocka
       [not found]                                         ` <f42384a10911050134t37a0a812hd85ff5541423dc9f@mail.gmail.com>
  2009-11-10 11:40                                         ` Enrico Weigelt
  0 siblings, 2 replies; 69+ messages in thread
From: Mikulas Patocka @ 2009-11-05  2:52 UTC (permalink / raw)
  To: Valdis.Kletnieks
  Cc: Martin Nybo Andersen, kevin granade, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel



On Wed, 4 Nov 2009, Valdis.Kletnieks@vt.edu wrote:

> On Thu, 05 Nov 2009 00:55:53 +0100, Mikulas Patocka said:
> 
> > In some situations, the package manager is even more dangerous than a 
> > manual install. For example, if you are manually installing a new 
> > alpha-quality version of mplayer, and it is buggy, you end up with a 
> > working system with a broken mplayer. If you install an alpha-quality 
> > version from some package repository, it may need an experimental version 
> > of libfoo, which needs an experimental version of libfee, which needs an 
> > experimental version of glibc that contains a bug 
> 
> Total bullshit.  You know *damned* well that if you were installing that 
> alpha version of mplayer by hand, and it needed an experimental libfoo, 
> you'd go and build libfoo by hand, and then build the experimental 
> libfee by hand, and then shoehorn in that glibc by hand, and brick 
> your system anyhow.

No, if I compile an alpha version of mplayer by hand, it compiles and links 
against whatever libraries I have on my system. If I pull it out of some 
"testing" repository, it is already compiled and linked against libraries 
in the same "testing" repository and it will load the system with crap.

That is the unfortunate reality of not having a binary standard :-(

> Or if you're arguing "you'd give up after seeing it needed an experimental
> libfoo", I'll counter "you'd hopefully think twice if yum said it was
> installing an experimental mplayer, and dragging in a whole chain of pre-reqs".

... or use --disable-libfoo if it insists on a newer version and I don't 
want to upgrade it. Or maybe the configure script detects on its own that 
the library is too old and compiles without the new features. Or it uses 
the libfoo shipped with the sources.

But if the binary in the package is compiled with --enable-libfoo, there 
is no other way. It forces a libfoo upgrade.

Mikulas

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Fwd: package managers [was: FatELF patches...]
       [not found]                                         ` <f42384a10911050134t37a0a812hd85ff5541423dc9f@mail.gmail.com>
@ 2009-11-05  9:35                                           ` Marcin Letyns
  0 siblings, 0 replies; 69+ messages in thread
From: Marcin Letyns @ 2009-11-05  9:35 UTC (permalink / raw)
  To: linux-kernel

---------- Forwarded message ----------
From: Marcin Letyns <mletyns@gmail.com>
Date: 2009/11/5
Subject: Re: package managers [was: FatELF patches...]
To: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>


2009/11/5 Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>:
>
>
> On Wed, 4 Nov 2009, Valdis.Kletnieks@vt.edu wrote:
>

> No, if I compile an alpha version of mplayer by hand, it compiles and links
> against whatever libraries I have on my system. If I pull it out of some
> "testing" repository, it is already compiled and linked against libraries
> in the same "testing" repository and it will load the system with crap.
>
> That is the unfortunate reality of not having a binary standard :-(

Another load of bull: it's the reality of not having a precompiled package
for your distro. Try installing this on Windows (not by running an
executable - consider there's no executable in this case :>)! Precompiled
packages are smarter equivalents of Windows EXEs. If there's no precompiled
package, or no EXE, you've got to compile, and in your situation you mess
everything up. I don't think anyone needs a binary standard etc. What you
need is just precompiled packages for your distro. If someone doesn't want
to give them to you, it's his fault. For example, you go to skype.com or
wherever and download a single .deb or .rpm, and if you use (K)ubuntu you
just click on this package and skype or whatever will install
automagically.

Btw, I don't think such a discussion should take place on lkml, so
maybe you could continue writing about this at linux.com or somewhere else?

Regards

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-01 19:20 ` David Hagood
  2009-11-01 20:28   ` Måns Rullgård
  2009-11-01 20:40   ` Ryan C. Gordon
@ 2009-11-10 10:04   ` Enrico Weigelt
  2 siblings, 0 replies; 69+ messages in thread
From: Enrico Weigelt @ 2009-11-10 10:04 UTC (permalink / raw)
  To: linux-kernel

* David Hagood <david.hagood@gmail.com> wrote:

Hi,

> I hope it's not too late for a request for consideration: if we start
> having fat binaries, could one of the "binaries" be one of the "not
> quite compiled code" formats like Architecture Neutral Distribution
> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

If you really want arch-independent binaries, you need some sort 
of virtual processor. Java, LLVM, etc. The idea is far from new; IMHO it 
originally came from the old Burroughs mainframes, which ran an 
Algol-tailored bytecode driven by an interpreter in microcode.
(I'm currently designing a new VP with similar concepts, just in case 
anybody's interested.)

BTW: this does not need additional kernel support - binfmt_misc 
is your friend ;-P
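
(For the record, a hypothetical bytecode format could be wired up entirely 
from userland like this --- the name, magic bytes and interpreter path are 
all made up:

    mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc  # if needed
    echo ':myvp:M::\x7fVPC::/usr/local/bin/myvp-run:' \
        > /proc/sys/fs/binfmt_misc/register

After that, execve() on any file starting with those magic bytes hands it 
to the registered interpreter.)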

> As an embedded systems guy who is looking to have to support multiple
> CPU types, this is really very interesting to me.

Just for the record: you want to have FatELF on an embedded system?


cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-02 17:52         ` FatELF patches Ryan C. Gordon
  2009-11-02 18:53           ` Alan Cox
@ 2009-11-10 11:27           ` Enrico Weigelt
  2009-11-10 12:40             ` Bernd Petrovitsch
  1 sibling, 1 reply; 69+ messages in thread
From: Enrico Weigelt @ 2009-11-10 11:27 UTC (permalink / raw)
  To: linux-kernel

* Ryan C. Gordon <icculus@icculus.org> wrote:

> It's true that /bin/ls would double in size (although I'm sure at least 
> the download saves some of this in compression). But how much of, say, 
> Gnome or OpenOffice or Doom 3 is executable code? These things would be 
> nowhere near "vastly" bigger.

OO takes about 140 MB of binaries on my system. Now just multiply that by 
the number of targets you'd like to support.

Gnome stuff also tends to be quite fat.

> > 	- Assumes data files are not dependant on binary (often not true)
> 
> Turns out that /usr/sbin/hald's cache file was. That would need to be 
> fixed, which is trivial, but in my virtual machine test I had it delete 
> and regenerate the file on each boot as a fast workaround.

Well, hald (and the dbus stuff generally) is a misdesign, so we shouldn't
count it here ;-P
 
> Testing doesn't really change with what I'm describing. If you want to 
> ship a program for PowerPC and x86, you still need to test it on PowerPC 
> and x86, no matter how you distribute or launch it.

BUT: you have to test the whole combination on dozens of targets.
And it in no way relieves you from testing dozens of different distros.

If you want one binary package for many different targets, go for 
autopackage, LSM, etc.

> Yes, that is true for software shipped via yum, which does not encompass 
> all the software you may want to run on your system. I'm not arguing 
> against package management.

Why not fix the package?

> True. If I try to run a PowerPC binary on a Sparc, it fails in any 
> circumstance. I recognize the goal of this post was to shoot down every 
> single point, but you can't see a scenario where this adds a benefit? Even 
> in a world that's still running 32-bit web browsers on _every major 
> operating system_ because some crucial plugins aren't 64-bit yet?

The root of evil is plugins - even worse: binary-only plugins.

Let's just take browsers: is there any damn good reason for not putting
those things into their own process (9P provides a fine IPC for that),
besides the stupidity and laziness of certain devs (yes, this explicitly
includes the mozilla guys)?
 
> > - Ship web browser plugins that work out of the box with multiple
> >   platforms.
> > 	- yum install just works, and there is a search path in firefox
> > 	  etc
> 
> So it's better to have a thousand little unique solutions to the same 
> problem? Everything has a search path (except things that don't), and all 
> of those search paths are set up in the same way (except things that 
> aren't). Do we really need to have every single program screwing around 
> with their own personal spiritual successor to the CLASSPATH environment 
> variable?

You don't like $PATH? Use a unionfs and let an installer / package manager
handle proper setups.

Yes, on Linux (contrary to Plan 9) this (AFAIK) still requires root 
privileges, but there are ways around it.

> > - Ship kernel drivers for multiple processors in one file.
> > 	- Not useful see separate downloads
> 
> Pain in the butt see "which installer is right for me?"   :)

It gets even worse: you need different modules for different kernel
versions *and* kernel configs. The kernel image and modules strictly 
belong together - it's in fact *one* kernel that just happens to be 
split into several files so parts of it can be loaded on demand.
 
> I don't want to get into a holy war about out-of-tree kernel drivers, 
> because I'm totally on board with getting drivers into the mainline. But 
> it doesn't change the fact that I downloaded the wrong nvidia drivers the 
> other day because I accidentally grabbed the ia32 package instead of the 
> amd64 one. So much for saving bandwidth.

NVidia is a bad reference here. These folks simply can't get their
stuff stable, instead playing around w/ ugly code obfuscation.
No mercy for those jerks.

I'm strongly in favour of prohibiting proprietary kernel drivers.
 
> I wasn't paying attention. But lots of people wouldn't know which to pick 
> even if they were. Nvidia, etc, could certainly put everything in one 
> shell script and choose for you, but now we're back at square one again.

If NV wants to stick with their binary crap, they'll have to bite the 
bullet of maintaining proper packaging. The fault is on their side,
not on Linux's.

> > - Transition to a new architecture in incremental steps. 
> > 	- IFF the CPU supports both old and new
> 
> A lateral move would be painful (although Apple just did this very thing 
> with a FatELF-style solution, albeit with the help of an emulator), but if 
> we're talking about the most common case at the moment, x86 to amd64, it's 
> not a serious concern.

This is a specific case, which could be handled easily in userland, IMHO.

> Why install Gimp by default if I'm not an artist? Because disk space is 
> cheap in the configurations I'm talking about and it's better to have it 
> just in case, for the 1% of users that will want it. A desktop, laptop or 
> server can swallow a few megabytes to clean up some awkward design 
> decisions, like the /lib64 thing.

What's so especially bad about the multilib approach?

> A few more megabytes installed may cut down on the support load for 
> distributions when some old 32 bit program refuses to start at all.

The distro could simply provide a few compat packages.
It could even use a hooked-up ld.so that does appropriate checks
and notifies the package manager if some 32-bit libs are missing.

> > - One hard drive partition can be booted on different machines with
> >   different CPU architectures, for development and experimentation. Same
> >   root file system, different kernel and CPU architecture. 
> > 
> > 	- Now we are getting desperate.
> 
> It's not like this is unheard of. Apple is selling this very thing for 129 
> bucks a copy.

Distro issue.
You need to have all packages installed for each supported arch *and*
all applications must be capable of handling different bytesex or
typesizes in their data.

> > - Prepare your app on a USB stick for sneakernet, know it'll work on
> >   whatever Linux box you are likely to plug it into.
> > 
> > 	- No I don't because of the dependancies, architecture ordering
> > 	  of data files, lack of testing on each platform and the fact
> > 	  architecture isn't sufficient to define a platform
> 
> Yes, it's not a silver bullet. Fedora will not be promising binaries that 
> run on every Unix box on the planet.
> 
> But the guy with the USB stick? He probably knows the details of every 
> machine he wants to plug it into...

Then he's most likely capable of maintaining a multiarch distro.
Leaving out binary application data (see above), it's not such a big
deal - just work-intensive. Using FatELF most likely increases that work.

> It's possible to ship binaries that don't depend on a specific 
> distribution, or preinstalled dependencies, beyond the existance of a 
> glibc that was built in the last five years or so. I do it every day. It's 
> not unreasonable, if you aren't part of the package management network, to 
> make something that will run, generically on "Linux."

Good; why do you need FatELF then?

> There are programs I support that I just simply won't bother moving to 
> amd64 because it just complicates things for the end user, for example.

Why don't you just solve that in userland?

> That is anecdotal, and I apologize for that. But I'm not the only 
> developer that's not in an apt repository, and all of these rebuttals are 
> anecdotal: "I just use yum [...because I don't personally care about 
> Debian users]."

Can't you just make up your own repo? Is it so hard?
I can only speak for Gentoo - overlays are quite convenient here.
 

cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-05  2:52                                       ` Mikulas Patocka
       [not found]                                         ` <f42384a10911050134t37a0a812hd85ff5541423dc9f@mail.gmail.com>
@ 2009-11-10 11:40                                         ` Enrico Weigelt
  1 sibling, 0 replies; 69+ messages in thread
From: Enrico Weigelt @ 2009-11-10 11:40 UTC (permalink / raw)
  To: linux-kernel

* Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz> wrote:

> No, if I compile an alpha version of mplayer by hand, it compiles and links 
> against whatever libraries I have on my system. If I pull it out of some 
> "testing" repository, it is already compiled and linked against libraries 
> in the same "testing" repository and it will load the system with crap.

You picked the wrong repo. Use one which contains only the wanted
package, not tons of other stuff. If there is none, create it.
 
> > Or if you're arguing "you'd give up after seeing it needed an experimental
> > libfoo", I'll counter "you'd hopefully think twice if yum said it was
> > installing an experimental mplayer, and dragging in a whole chain of pre-reqs".
> 
> ... or use --disable-libfoo if it insists on a newer version and I don't 
> want to upgrade it. 

Either forgo the feature requiring libfoo or statically link the
new version. Either way, FatELF won't help here.

> Or maybe the configure script detects on its own that the library is 
> too old and compiles without the new features. Or it uses the libfoo 
> shipped with the sources.

Blame the mplayer folks for their crappy configure script. Automatically
switching features on the presence of some libs (even *against* explicit
options), or - even worse - hard-coded system lib paths (!), is simply
insane. FatELF can't delete ignorance from jerks like Rich Felker ;-O


cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: package managers [was: FatELF patches...]
  2009-11-04 18:46                   ` Mikulas Patocka
  2009-11-04 19:46                     ` Alan Cox
  2009-11-04 20:02                     ` Valdis.Kletnieks
@ 2009-11-10 11:57                     ` Enrico Weigelt
  2 siblings, 0 replies; 69+ messages in thread
From: Enrico Weigelt @ 2009-11-10 11:57 UTC (permalink / raw)
  To: linux-kernel

* Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz> wrote:

> Some Windows programs force upgrades, but not in yearly cycles like Linux 
> programs. The majority of programs still work on XP, shipped in 2001.

You really use old, outdated software on production systems?

> > Being able to upgrade at least Debian -- and others as well -- without the 
> > need to attend the computer is IMHO one of Linux' biggest wins.
> 
> When I did it (from Etch to Lenny), two programs that I have compiled 
> manually ("vim" and "links") stopped working because Etch and Lenny have 
> binary-incompatible libgpm.

Distro issue. If the ABI changes, the binary package has to get a 
different name.
 
> Static linking doesn't work for any program that needs plug-ins (i.e. 
> you'd have one glibc statically linked into the program and another glibc 
> dynamically linked with a plug-in and these two glibcs will beat each 
> other).

Plugins are crap by design. Same situation as with kernel modules:
you need them compiled against the right version of the main program;
in fact, in binary packaging they are *part* of the main program and
just happen to be loaded on demand. If you want to split them up into
several packages, you'll end up in a dependency nightmare.

> I mean this --- the distributions should agree on a common set of 
> libraries and their versions (call this for example "Linux-2010 
> standard"). This standard should include libraries that are used 
> frequently, that have low occurence of bugs and security holes and that 
> have never had an ABI change.

See the discussion on stable kernel module ABI.

> Software developers that claim compatibility with the standard will link 
> standard libraries dynamically and must use static linking for all 
> libraries not included in the standard. Or they can use dynamic linking 
> and ship the non-standard library with the application in its private 
> directory (so that nothing but that application links against it).

Yeah, ending up in Windows-world maintenance hell. Dozens of packages
will ship dozens of their own library copies, making their own private 
changes, not keeping up with upstream, and so carrying around ancient bugs.

Wonderful idea.


cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-10 11:27           ` Enrico Weigelt
@ 2009-11-10 12:40             ` Bernd Petrovitsch
  2009-11-10 13:00               ` Enrico Weigelt
  0 siblings, 1 reply; 69+ messages in thread
From: Bernd Petrovitsch @ 2009-11-10 12:40 UTC (permalink / raw)
  To: weigelt; +Cc: linux-kernel

On Tue, 2009-11-10 at 12:27 +0100, Enrico Weigelt wrote:
> * Ryan C. Gordon <icculus@icculus.org> wrote:
[...] 
> > True. If I try to run a PowerPC binary on a Sparc, it fails in any 
> > circumstance. I recognize the goal of this post was to shoot down every 
If tools like qemu support PowerPC or Sparc (as they already do some
dialects of ARM), you can run it through that (on any hardware where qemu
itself runs[0]).
And if you have binfmt_misc, you can start it like any other "native"
program.

> > single point, but you can't see a scenario where this adds a benefit? Even 
> > in a world that's still running 32-bit web browsers on _every major 
> > operating system_ because some crucial plugins aren't 64-bit yet?
>
> The root of evil is plugins - even worse: binary-only plugins.
> 
> Let's just take browsers: is there any damn good reason for not putting
> those things into their own process (9P provides a fine IPC for that),
> besides the stupidity and laziness of certain devs (yes, this explicitly
> includes the mozilla guys)?
Or implement running 32bit plugins from a 64bit browser.

[...]  
> > > - Prepare your app on a USB stick for sneakernet, know it'll work on
> > >   whatever Linux box you are likely to plug it into.
A Trojan-horse deployer's paradise, BTW.

[....]
> > It's possible to ship binaries that don't depend on a specific 
> > distribution, or preinstalled dependencies, beyond the existance of a 
> > glibc that was built in the last five years or so. I do it every day. It's 
ACK, just link it statically and be done (but then you have other
problems, e.g. "$LIB has an exploit and I have to rebuild and redeploy
$BINARY").

[...]
> > That is anecdotal, and I apologize for that. But I'm not the only 
> > developer that's not in an apt repository, and all of these rebuttals are 
> > anecdotal: "I just use yum [...because I don't personally care about 
> > Debian users]."
It's not that the other way around is much different :-(
And if there is some really interested Debian user, he can package it
for Debian.
IMHO it's better to have no package for $DISTRIBUTION than only bad (and 
old) ones, because some packager (who is not necessarily a core 
programmer) has only very little personal interest in the .deb version.

> Can't you just make up your own repo? Is it so hard?
> I can only speak for Gentoo - overlays are quite convenient here.
And it's not that hard to write .spec files for RPM (for average
packages - e.g. the kernel and gcc are somewhat different). Just take a
small one (e.g. the one from "trace") and start from there; a skeleton 
follows.
	Bernd

[0]: I never tried to cascade qemu, though.]
-- 
Firmix Software GmbH                   http://www.firmix.at/
mobil: +43 664 4416156                 fax: +43 1 7890849-55
          Embedded Linux Development and Services



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-10 12:40             ` Bernd Petrovitsch
@ 2009-11-10 13:00               ` Enrico Weigelt
  2009-11-10 13:19                 ` Alan Cox
  0 siblings, 1 reply; 69+ messages in thread
From: Enrico Weigelt @ 2009-11-10 13:00 UTC (permalink / raw)
  To: linux-kernel

* Bernd Petrovitsch <bernd@firmix.at> wrote:

> > The root of evil is plugins - even worse: binary-only plugins.
> > 
> > Let's just take browsers: is there any damn good reason for not putting
> > those things into their own process (9P provides a fine IPC for that),
> > besides the stupidity and laziness of certain devs (yes, this explicitly
> > includes the mozilla guys)?
> Or implement running 32bit plugins from a 64bit browser.

And land in a nightmare: you have to create a kind of in-process jail 
so that all 32-bit library references get properly emulated.

Better to drop the whole idea of plugins altogether.


cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: FatELF patches...
  2009-11-10 13:00               ` Enrico Weigelt
@ 2009-11-10 13:19                 ` Alan Cox
  0 siblings, 0 replies; 69+ messages in thread
From: Alan Cox @ 2009-11-10 13:19 UTC (permalink / raw)
  To: weigelt; +Cc: linux-kernel

> > Or implement running 32bit plugins from a 64bit browser.
> 
> And land in a nightmare: you have to create a kind of in-process jail 
> so that all 32-bit library references get properly emulated.

You instead want them out of process. Something that most distributions
seem to have managed.

http://gwenole.beauchesne.info//en/projects/nspluginwrapper

Alan

^ permalink raw reply	[flat|nested] 69+ messages in thread

end of thread, other threads:[~2009-11-10 13:17 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-10-30  2:19 FatELF patches Ryan C. Gordon
2009-10-30  5:42 ` Rayson Ho
2009-10-30 14:54   ` Ryan C. Gordon
2009-11-01 19:20 ` David Hagood
2009-11-01 20:28   ` Måns Rullgård
2009-11-01 20:59     ` Ryan C. Gordon
2009-11-01 21:15       ` Måns Rullgård
2009-11-01 21:35         ` Ryan C. Gordon
2009-11-02  4:58           ` Valdis.Kletnieks
2009-11-02 15:14             ` Ryan C. Gordon
2009-11-03 14:54               ` Valdis.Kletnieks
2009-11-03 18:30                 ` Matt Thrailkill
2009-11-01 22:08         ` Rayson Ho
2009-11-02  1:17           ` Ryan C. Gordon
2009-11-02  3:27             ` Rayson Ho
2009-11-02  0:01       ` Alan Cox
2009-11-02  2:21         ` Ryan C. Gordon
2009-11-02  6:17           ` Julien BLACHE
2009-11-02 18:18             ` Ryan C. Gordon
2009-11-02 18:59               ` Julien BLACHE
2009-11-02 19:08               ` Jesús Guerrero
2009-11-02  6:27           ` David Miller
2009-11-02 15:32             ` Ryan C. Gordon
2009-11-02  9:16           ` Alan Cox
2009-11-02 17:39             ` david
2009-11-02 17:44               ` Alan Cox
2009-11-02 19:56               ` Krzysztof Halasa
2009-11-02 20:11                 ` david
2009-11-02 20:33                   ` Krzysztof Halasa
2009-11-03  1:35                   ` Mikael Pettersson
2009-11-02 15:40           ` Diego Calleja
2009-11-04 16:40           ` package managers [was: FatELF patches...] Mikulas Patocka
2009-11-04 16:54             ` Alan Cox
2009-11-04 17:25               ` Mikulas Patocka
2009-11-04 17:48                 ` Martin Nybo Andersen
2009-11-04 18:46                   ` Mikulas Patocka
2009-11-04 19:46                     ` Alan Cox
2009-11-04 20:04                       ` Mikulas Patocka
2009-11-04 20:27                         ` david
2009-11-04 20:02                     ` Valdis.Kletnieks
2009-11-04 20:08                       ` Mikulas Patocka
2009-11-04 20:41                         ` Valdis.Kletnieks
2009-11-04 21:11                           ` Mikulas Patocka
2009-11-04 21:32                             ` kevin granade
2009-11-04 22:05                               ` Mikulas Patocka
2009-11-04 22:19                                 ` Marcin Letyns
2009-11-04 22:28                                   ` david
2009-11-04 22:43                                 ` Martin Nybo Andersen
2009-11-04 23:55                                   ` Mikulas Patocka
2009-11-05  2:24                                     ` Valdis.Kletnieks
2009-11-05  2:52                                       ` Mikulas Patocka
     [not found]                                         ` <f42384a10911050134t37a0a812hd85ff5541423dc9f@mail.gmail.com>
2009-11-05  9:35                                           ` Fwd: " Marcin Letyns
2009-11-10 11:40                                         ` Enrico Weigelt
2009-11-04 23:11                             ` Valdis.Kletnieks
2009-11-05  0:05                               ` Mikulas Patocka
2009-11-10 11:57                     ` Enrico Weigelt
2009-11-04 17:36             ` Valdis.Kletnieks
2009-11-04 20:28             ` Ryan C. Gordon
2009-11-02 17:52         ` FatELF patches Ryan C. Gordon
2009-11-02 18:53           ` Alan Cox
2009-11-02 20:13             ` Ryan C. Gordon
2009-11-04  1:09               ` Ryan C. Gordon
2009-11-10 11:27           ` Enrico Weigelt
2009-11-10 12:40             ` Bernd Petrovitsch
2009-11-10 13:00               ` Enrico Weigelt
2009-11-10 13:19                 ` Alan Cox
2009-11-02 16:11       ` Chris Adams
2009-11-01 20:40   ` Ryan C. Gordon
2009-11-10 10:04   ` Enrico Weigelt
     [not found] <dAPfP-5R6-1@gated-at.bofh.it>
     [not found] ` <dBOhH-uY-9@gated-at.bofh.it>
