* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
@ 2015-05-15 11:01 Masahiro Yamada
  2015-05-15 19:52 ` Joe Hershberger
  2015-05-19 15:28 ` Joe Hershberger
  0 siblings, 2 replies; 10+ messages in thread
From: Masahiro Yamada @ 2015-05-15 11:01 UTC (permalink / raw)
  To: u-boot

When we send patches, we are supposed to test them with build utilities
such as MAKEALL or buildman.  When we want to test global changes, the
first hurdle is, I think, collecting toolchains for all the architectures.

We have some documents about the build utilities, but I have not seen any
official information about how to get suitable cross-tools.
Of course, it is possible to build them from source, but that is not
always feasible.

Fortunately, the kernel.org site provides pre-built toolchains, but
some architectures are missing.  Also, some boards fail to build with
the kernel.org tools.  We sometimes see "where can I get the compiler
for this architecture?" questions on the mailing list.  We should be able
to prepare cross-compilers more easily.

It is true that buildman provides a --fetch-arch option for downloading
kernel.org toolchains, but it has no access to others.  And what we
really want to know is most likely how to get compilers for the minor
architectures that kernel.org does not provide.

This tool aims at a more generic design that does not hard-code such
kernel.org specifics.

To achieve that, this tool consists of two files:
a Python script (this file) and a database file containing the URLs of
the tarballs.

We only need to update the latter when new compiler versions are released
(or better compilers are found).  The database is in RFC 822 form for
easier editing.

The script relies only on Python standard libraries, not on external
programs, although it displays a wget-like log while downloading
tarballs.  :-)

This is an RFC because I think it can be brushed up further.
If the basic idea is OK, I will improve the code and add more comments.

Note that this script is written in Python 3 and only works on Python 3.3
or later.  I do not think that is too severe a limitation, but some
still-supported distributions ship an older version; for example,
Ubuntu 12.04 LTS appears to ship with Python 3.2.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 tools/get-toolchains | 400 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/toolchains.cfg |  24 ++++
 2 files changed, 424 insertions(+)
 create mode 100755 tools/get-toolchains
 create mode 100644 tools/toolchains.cfg

diff --git a/tools/get-toolchains b/tools/get-toolchains
new file mode 100755
index 0000000..7cb4d5c
--- /dev/null
+++ b/tools/get-toolchains
@@ -0,0 +1,400 @@
+#!/usr/bin/env python3
+#
+# Author: Masahiro Yamada <yamada.masahiro@socionext.com>
+#
+# SPDX-License-Identifier:	GPL-2.0+
+#
+
+"""
+Get toolchains for U-Boot.
+
+When we send patches, we are supposed to test them with build utilities
+such as MAKEALL or buildman.  When we want to test global changes, the
+first hurdle is, I think, collecting toolchains for all the architectures.
+
+We have some documents about the build utilities, but I have not seen any
+official information about how to get suitable cross-tools.
+Of course, it is possible to build them from source, but that is not
+always feasible.
+
+Fortunately, the kernel.org site provides pre-built toolchains, but
+some architectures are missing.  Also, some boards fail to build with
+the kernel.org tools.  We sometimes see "where can I get the compiler
+for this architecture?" questions on the mailing list.  We should be able
+to prepare cross-compilers more easily.
+
+It is true that buildman provides a --fetch-arch option for downloading
+kernel.org toolchains, but it has no access to others.  And what we
+really want to know is most likely how to get compilers for the minor
+architectures that kernel.org does not provide.
+
+This tool aims at a more generic design that does not hard-code such
+kernel.org specifics.
+
+To achieve that, this tool consists of two files:
+a Python script (this file) and a database file containing the URLs of
+the tarballs.
+
+We only need to update the latter when new compiler versions are released
+(or better compilers are found).  The database is in RFC 822 form for
+easier editing.
+
+The script relies only on Python standard libraries, not on external
+programs, although it displays a wget-like log while downloading
+tarballs.  :-)
+
+Usage
+-----
+
+Just run
+
+  $ tools/get-toolchains
+
+Tarballs will be downloaded, extracted, and installed for all the
+architectures.  Finally, settings that might be useful for shell
+and buildman will be displayed.
+
+You can pass architectures as arguments if you only want to obtain
+particular toolchains.  For example, to get the ARM and AArch64 tools:
+
+  $ tools/get-toolchains arm aarch64
+
+Options
+-------
+
+ -c, --config
+   Specify a custom database file.  If not specified, DEFAULT_CONFIG
+   (toolchains.cfg) is used.
+
+ -d, --destdir
+   Specify where to install the toolchains.  Tools are installed into
+   DEFAULT_DESTDIR (~/.u-boot-toolchains) by default, but you may wish
+   to install them under /opt, /usr/local/, or somewhere else.
+
+ -k, --keep-tarballs
+   Keep downloaded tarballs, which might be useful when you want to
+   re-install the tools (you should then add the -r (--reuse) option
+   on the next run).  If this option is not given, all the tarballs are
+   deleted after installation.
+
+ -r, --reuse
+   Allow the use of local tarballs.  If a file with the same name is
+   found in the tarball directory, the tool skips the download and uses
+   the local copy.  If disabled, tarballs are always downloaded from
+   their URLs.
+
+To see the complete list of supported options, run
+
+  $ tools/get-toolchains -h
+"""
+
+import configparser
+import errno
+import optparse
+import os
+import platform
+import shutil
+import sys
+import tarfile
+import tempfile
+import time
+import urllib.request
+
+DEFAULT_CONFIG = 'toolchains.cfg'
+DEFAULT_DESTDIR = '~/.u-boot-toolchains'
+
+assert sys.version_info >= (3, 3, 0), \
+       'This script only works on Python 3.3 or later.'
+
+def rmfile(file):
+    """Remove a file ignoring 'No such file or directory' error."""
+    try:
+        os.remove(file)
+    except OSError as exception:
+        # Ignore 'No such file or directory' error
+        if exception.errno != errno.ENOENT:
+            raise
+
+def mkdir(dir):
+    """Make a directory ignoring 'File exists' error."""
+    try:
+        os.makedirs(dir)
+    except OSError as exception:
+        # throw errors other than 'File exists'
+        if exception.errno != errno.EEXIST:
+            raise
+
+def format_size(size):
+    size = float(size)
+    if size > 1024 * 1024 * 1024:
+        size /= 1024 * 1024 * 1024
+        unit = 'G'
+    elif size > 1024 * 1024:
+        size /= 1024 * 1024
+        unit = 'M'
+    elif size > 1024:
+        size /= 1024
+        unit = 'K'
+    else:
+        # avoid an undefined 'unit' for sizes of 1 KiB or less
+        unit = ''
+
+    return '%.2f%s' % (size, unit)
+
+def format_time(sec):
+    sec = int(sec)
+    if sec > 99:
+        min = sec // 60
+        sec = sec % 60
+        return '%dm %2ds' % (min, sec)
+    else:
+        return '%2ds' % sec
+
+def parse_config(config_file):
+    """Parse config file and return dictionary of URLs.
+    """
+    config = configparser.SafeConfigParser()
+    read_files = config.read(config_file)
+
+    if not read_files:
+        sys.exit('%s: config file not found' % config_file)
+
+    host_arch = platform.machine()
+
+    if not host_arch:
+        sys.exit('failed to get host architecture')
+
+    section = 'host "%s"' % host_arch
+
+    if not config.has_section(section):
+        sys.exit('%s: unsupported host architecture' % host_arch)
+
+    urls = {}
+
+    for arch, url in config.items(section):
+        if not arch.startswith('alias_'):
+            urls[arch] = url
+
+    return urls
+
+class Downloader:
+    """Tarball downloader."""
+    def __init__(self, arch_urls, tarball_dir, reuse):
+        self.arch_urls = arch_urls
+        self.tarball_dir = tarball_dir
+        self.reuse = reuse
+        mkdir(tarball_dir)
+
+    def __del__(self):
+        if hasattr(self, 'tempfile'):
+            rmfile(self.tempfile)
+
+    def download_one_url(self, url, dest):
+        """Download one tarball.
+
+        Arguments:
+          url: URL of tarball to be downloaded
+          dest: downloaded file is saved into this path
+        """
+        chunk = 256 * 1024
+
+        print('Download %s' % url)
+
+        if self.reuse and os.path.exists(dest):
+            print('local file found at %s.  skip downloading.' % dest)
+            return
+
+        print('Connecting ... ', end=' ', flush=True)
+        response = urllib.request.urlopen(url)
+        print('connected')
+
+        file_size = response.headers.get('content-length')
+
+        if file_size:
+            file_size = int(file_size)
+        else:
+            file_size = 0
+
+        if file_size:
+            print('Length: %d' % file_size)
+        else:
+            print('Length: Unknown')
+
+        done = 0
+
+        (fd, self.tempfile) = tempfile.mkstemp()
+
+        start_time = time.time()
+
+        self.show_progress(done, file_size, 0)
+
+        with os.fdopen(fd, 'wb') as f:
+            while True:
+                data = response.read(chunk)
+                if not data:
+                    break
+                f.write(data)
+                done += len(data)
+                self.show_progress(done, file_size, time.time() - start_time)
+
+        print()
+
+        shutil.move(self.tempfile, dest)
+
+    def download_archs(self, archs):
+        """Download tarballs for given architectures.
+
+        Arguments:
+          archs: List of architectures.  If empty, download all the
+                 available tarballs.
+        """
+        # if not specified, download all the archs we know
+        if len(archs) == 0:
+            archs = self.arch_urls.keys()
+
+        arch_tarballs = {}
+
+        for arch in archs:
+            if arch not in self.arch_urls:
+                print('%s: URL not defined for this architecture. skip.' % arch,
+                      file=sys.stderr)
+                continue
+            url = self.arch_urls[arch]
+            dest = os.path.join(self.tarball_dir, os.path.basename(url))
+            self.download_one_url(url, dest)
+            arch_tarballs[arch] = dest
+
+        return arch_tarballs
+
+    def show_progress(self, done, file_size, time):
+        """Display wget-like log.
+
+        done: downloaded size
+        file_size: total size
+        time: elapsed time
+        """
+        # just init and return for the first call; also bail out when the
+        # total size is unknown (zero) so we never divide by it below
+        if done == 0 or file_size == 0:
+            self.prev_done = done
+            self.prev_time = time
+            return
+
+        width = shutil.get_terminal_size().columns
+        width -= 35
+        width -= len('%d' % file_size)
+
+        percent = 100 * done // file_size
+        arrow_length = width * done // file_size
+        arrow_length = max((arrow_length, 1))
+        speed = (done - self.prev_done) / max(time - self.prev_time, 1e-6)
+        eta = time * (file_size - done) / done
+
+        msg = ('%2d%% ' % percent)[:4]
+        msg += '[' + '=' * (arrow_length - 1) + '>' + ' ' * (width - arrow_length) + ']'
+        msg += ' %-9d' % done
+        msg += ' %7s/s ' % format_size(speed)
+        if done == file_size:
+            msg += '   in %-6s ' % format_time(time)
+        else:
+            msg += '  eta %-6s ' % format_time(eta)
+
+        print('\r' + msg, end='', flush=True)
+
+        # remember the last done and time
+        self.prev_done = done
+        self.prev_time = time
+
+def get_bin_path(names):
+    """Guess the bin/ directory from the archive member names.
+
+    os.path.commonprefix() works character-wise, so keep dropping the
+    shortest name (the bare directory entry) until the common prefix
+    of the remaining names ends with '/'.
+    """
+    while True:
+        stem = os.path.commonprefix(names)
+        if stem[-1] == '/':
+            return stem + 'bin'
+        names.remove(stem)
+
+def unpack_one_tarball(path, dest):
+    # caution: only Python 3.3 or later can handle .xz
+    with tarfile.open(path) as tar:
+        tar.extractall(dest)
+        bin_path = get_bin_path(tar.getnames())
+
+    return os.path.join(dest, bin_path)
+
+def unpack_tarballs(tarballs, destdir):
+    """
+    Arguments:
+      tarballs: Dictionary of tarball paths
+      destdir: Destination directory for installation
+    """
+    print()
+    paths = {}
+
+    for arch, tarball in tarballs.items():
+        dest = os.path.join(destdir, arch)
+        print('Unpacking %s into %s ... ' % (os.path.basename(tarball), dest),
+              end='', flush=True)
+        paths[arch] = os.path.realpath(unpack_one_tarball(tarball, dest))
+        print('done')
+
+    return paths
+
+def print_tool_settings(paths):
+    """Print settings for bash and buildman.
+
+    Arguments:
+      paths: Dictionary of toolchains paths.
+    """
+    print('\n\nAdd the following to your ~/.(bash_)profile if necessary\n')
+    for (arch, path) in sorted(paths.items()):
+        print('PATH=%s:$PATH' % path)
+
+    print('\n\nAdd the following to your ~/.buildman if necessary\n')
+    print('[toolchain]')
+    for (arch, path) in sorted(paths.items()):
+        print('%s: %s' % (arch, path))
+
+def get_toolchains(options, args):
+    """
+    Arguments:
+      options: Option flags.
+      args: List of architectures.  If empty, download and install
+            all the available toolchains.
+    """
+    if options.config:
+        config_file = options.config
+    else:
+        config_file = os.path.join(os.path.dirname(__file__), DEFAULT_CONFIG)
+
+    if options.destdir:
+        destdir = options.destdir
+    else:
+        destdir = DEFAULT_DESTDIR
+
+    destdir = os.path.expanduser(destdir)
+    tarball_dir = os.path.join(destdir, 'Tarballs')
+
+    urls = parse_config(config_file)
+
+    downloader = Downloader(urls, tarball_dir, options.reuse)
+
+    tarballs = downloader.download_archs(args)
+
+    paths = unpack_tarballs(tarballs, destdir)
+
+    print_tool_settings(paths)
+
+    if not options.keep_tarballs:
+        shutil.rmtree(tarball_dir)
+
+def main():
+    parser = optparse.OptionParser()
+    # Add options here
+
+    parser.add_option('-c', '--config', type='string',
+                      help='custom config file')
+    parser.add_option('-d', '--destdir', type='string',
+                      help='custom destination directory')
+    parser.add_option('-k', '--keep-tarballs', action='store_true',
+                       default=False,
+                       help='keep downloaded tarballs after installation')
+    parser.add_option('-r', '--reuse', action='store_true', default=False,
+                       help='use locally existing tarballs if available')
+    (options, args) = parser.parse_args()
+
+    get_toolchains(options, args)
+
+if __name__ == '__main__':
+    main()
diff --git a/tools/toolchains.cfg b/tools/toolchains.cfg
new file mode 100644
index 0000000..e7135b0
--- /dev/null
+++ b/tools/toolchains.cfg
@@ -0,0 +1,24 @@
+[host "x86_64"]
+arc: %(alias_synopsys)s/arc-2014.12/arc_gnu_2014.12_prebuilt_uclibc_le_arc700_linux_install.tar.gz
+arceb: %(alias_synopsys)s/arc-2014.12/arc_gnu_2014.12_prebuilt_uclibc_be_arc700_linux_install.tar.gz
+aarch64: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_aarch64-linux.tar.xz
+avr32: %(alias_kernel_org)s/x86_64/4.2.4/x86_64-gcc-4.2.4-nolibc_avr32-linux.tar.xz
+arm: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_arm-unknown-linux-gnueabi.tar.xz
+blackfin: http://sourceforge.net/projects/adi-toolchain/files/2014R1/2014R1_45-RC2/x86_64/blackfin-toolchain-elf-gcc-4.5-2014R1_45-RC2.x86_64.tar.bz2
+m68k: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_m68k-linux.tar.xz
+microblaze: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_microblaze-linux.tar.xz
+mips: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_mips-linux.tar.gz
+nds32: http://osdk.andestech.com/packages/nds32le-linux-glibc-v1.tgz
+nios2: https://sourcery.mentor.com/GNUToolchain/package13742/public/nios2-elf/sourceryg++-2015.05-12-nios2-elf-i686-pc-linux-gnu.tar.bz2
+openrisc: %(alias_kernel_org)s/x86_64/4.5.1/x86_64-gcc-4.5.1-nolibc_or32-linux.tar.gz
+powerpc: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_powerpc-linux.tar.gz
+sh: http://sourcery.mentor.com/public/gnu_toolchain/sh-linux-gnu/renesas-2012.09-61-sh-linux-gnu-i686-pc-linux-gnu.tar.bz2
+sparc: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_sparc-linux.tar.gz
+x86: %(alias_kernel_org)s/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_i386-linux.tar.gz
+
+[host "i386"]
+; TODO  add i386 host toolchains
+
+[DEFAULT]
+alias_kernel_org: https://www.kernel.org/pub/tools/crosstool/files/bin
+alias_synopsys: https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/releases/download
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-15 11:01 [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures Masahiro Yamada
@ 2015-05-15 19:52 ` Joe Hershberger
  2015-05-16  4:58   ` Masahiro Yamada
  2015-05-19 15:28 ` Joe Hershberger
  1 sibling, 1 reply; 10+ messages in thread
From: Joe Hershberger @ 2015-05-15 19:52 UTC (permalink / raw)
  To: u-boot

Hi Masahiro-san,

On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
<yamada.masahiro@socionext.com> wrote:
> When we send patches, we are supposed to test them by build utilities
> such as MAKEALL, buildman.  When we want to test global changes, the
> first hurdle is, I think, to collect toolchains for all the architectures.
>
> We have some documents about build utilities, but I have not seen any
> official information about how to get the suitable cross-tools.
> Of course, it is possible to build them from sources, but it is not
> necessarily feasible.
>
> Fortunately, the kernel.org site provides us pre-built toolchains, but
> some architectures are missing.  Also, some boards fail to build with
> the kernel.org tools.  We sometimes see, "where can I get the compiler
> for this architecture?" things on the ML.  We should be able to prepare
> cross-compilers more easily.
>
> It is true that buildman provides --fetch-arch option for downloading
> kernel.org toolchains, but it does not have access to others.  And what
> we really want to know is most likely how to get compilers for such minor
> architectures as kernel.org does not provide.

Maybe just integrate this into buildman? Or remove it from buildman?
Having it in buildman has the benefit that it updates buildman's config
so it knows how to find the compiler.

> This tool intends to be more generic design without hard-coding such
> kernel.org things.
>
> To achieve that, this tool consists of two files:
> Python script (this file) and the database file containing URLs of tarballs.
>
> We just need to update the latter when new version compilers are released
> (or better compilers are found.)  The file is in the form of RFC 822 for
> easier editing.

Any reason not to just maintain this list on the wiki? It seems this is
the primary issue for everyone... not figuring out how to download or
extract the toolchain.

> The script only uses Python libraries, not relies on external programs
> although it displays wget-like log when downloading tarballs.  :-)

It seems like using wget would be more appropriate. Why reinvent the wheel?

> This is RFC because I am thinking it can be more brushed up.
> If the basis idea is OK, I will improve code, add more comments.
>
> Note this script is written in Python 3 and only works on Python 3.3
> or later.  I do not think it is too much limitation, but some popular
> distributions under support might include older version.  For example,
> looks like Ubuntu 12.04 LTS is shipped with Python 3.2.

Why not write it in something that exists everywhere? If it's just for
downloading toolchains, it seems it should be easy to make it Python 2.6
compatible.

> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

Cheers,
-Joe


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-15 19:52 ` Joe Hershberger
@ 2015-05-16  4:58   ` Masahiro Yamada
  2015-05-17 17:50     ` Simon Glass
  0 siblings, 1 reply; 10+ messages in thread
From: Masahiro Yamada @ 2015-05-16  4:58 UTC (permalink / raw)
  To: u-boot

Hi Joe,
(added Simon)

2015-05-16 4:52 GMT+09:00 Joe Hershberger <joe.hershberger@gmail.com>:
> Hi Masahiro-san,
>
> On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
> <yamada.masahiro@socionext.com> wrote:
>> When we send patches, we are supposed to test them by build utilities
>> such as MAKEALL, buildman.  When we want to test global changes, the
>> first hurdle is, I think, to collect toolchains for all the architectures.
>>
>> We have some documents about build utilities, but I have not seen any
>> official information about how to get the suitable cross-tools.
>> Of course, it is possible to build them from sources, but it is not
>> necessarily feasible.
>>
>> Fortunately, the kernel.org site provides us pre-built toolchains, but
>> some architectures are missing.  Also, some boards fail to build with
>> the kernel.org tools.  We sometimes see, "where can I get the compiler
>> for this architecture?" things on the ML.  We should be able to prepare
>> cross-compilers more easily.
>>
>> It is true that buildman provides --fetch-arch option for downloading
>> kernel.org toolchains, but it does not have access to others.  And what
>> we really want to know is most likely how to get compilers for such minor
>> architectures as kernel.org does not provide.
>
> Maybe just integrate this into buildman? Or remove it from buildman?
> In buildman has the benefit that it updates buildman's config to know
> how to find the compiler.

I wanted to add more options to provide better flexibility.

For example, I wanted a --destdir option
because I think installing tools under /opt/ or /usr/local/ is a generic demand.

That's why I implemented this tool as a separate script.
I also want to hear Simon's opinion.


>> This tool intends to be more generic design without hard-coding such
>> kernel.org things.
>>
>> To achieve that, this tool consists of two files:
>> Python script (this file) and the database file containing URLs of tarballs.
>>
>> We just need to update the latter when new version compilers are released
>> (or better compilers are found.)  The file is in the form of RFC 822 for
>> easier editing.
>
> Any reason not to just maintain this list on the wiki. It seem this is
> the primary issue for everyone... not figuring out how to download or
> extract the toolchain.

I could just note the URLs down in the README or wiki.

Of course, everyone knows how to download a tarball and extract it, but
isn't it more convenient to prepare a utility that can do everything for you?


>> The script only uses Python libraries, not relies on external programs
>> although it displays wget-like log when downloading tarballs.  :-)
>
> It seems like using wget would be more appropriate. Why reinvent the wheel?


My intention was not to depend on particular external programs like wget or curl.

But, you are right, we should not reinvent the wheel.

I will replace my implementation with a call to wget.


>> This is RFC because I am thinking it can be more brushed up.
>> If the basis idea is OK, I will improve code, add more comments.
>>
>> Note this script is written in Python 3 and only works on Python 3.3
>> or later.  I do not think it is too much limitation, but some popular
>> distributions under support might include older version.  For example,
>> looks like Ubuntu 12.04 LTS is shipped with Python 3.2.
>
> Why not write it in something that exists everywhere? If it's just for
> downloading tool-chains, seems it should be easy to make it python 2.6
> compatible.

The reason for the Python 3.3 dependency is that I wanted to unpack tarballs
with a Python library.

The tarfile library only supports the .xz format in version 3.3 or later.

We could just invoke the tar program instead of the Python library, so this
version dependency would go away.

But using Python 3 seems the right way in the long run, I think.

Python 3 is already widespread, isn't it?


-- 
Best Regards
Masahiro Yamada


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-16  4:58   ` Masahiro Yamada
@ 2015-05-17 17:50     ` Simon Glass
  2015-05-19  5:04       ` Masahiro Yamada
  0 siblings, 1 reply; 10+ messages in thread
From: Simon Glass @ 2015-05-17 17:50 UTC (permalink / raw)
  To: u-boot

Hi Masahiro,

On 15 May 2015 at 22:58, Masahiro Yamada <yamada.masahiro@socionext.com>
wrote:
> Hi Joe,
> (added Simon)
>
> 2015-05-16 4:52 GMT+09:00 Joe Hershberger <joe.hershberger@gmail.com>:
>> Hi Masahiro-san,
>>
>> On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
>> <yamada.masahiro@socionext.com> wrote:
>>> When we send patches, we are supposed to test them by build utilities
>>> such as MAKEALL, buildman. When we want to test global changes, the
>>> first hurdle is, I think, to collect toolchains for all the architectures.
>>>
>>> We have some documents about build utilities, but I have not seen any
>>> official information about how to get the suitable cross-tools.
>>> Of course, it is possible to build them from sources, but it is not
>>> necessarily feasible.
>>>
>>> Fortunately, the kernel.org site provides us pre-built toolchains, but
>>> some architectures are missing. Also, some boards fail to build with
>>> the kernel.org tools. We sometimes see, "where can I get the compiler
>>> for this architecture?" things on the ML. We should be able to prepare
>>> cross-compilers more easily.
>>>
>>> It is true that buildman provides --fetch-arch option for downloading
>>> kernel.org toolchains, but it does not have access to others. And what
>>> we really want to know is most likely how to get compilers for such minor
>>> architectures as kernel.org does not provide.
>>
>> Maybe just integrate this into buildman? Or remove it from buildman?
>> In buildman has the benefit that it updates buildman's config to know
>> how to find the compiler.
>
> I wanted to add more options to provide better flexibility.
>
> For example, I wanted --destdir option
> because I think installing tools under /opt/ or /usr/local/ is a generic demand.
>
> That's why I implemented this tool as a separate script.
> I also want to hear Simon's opinion.

I think a separate script is fine - it helps if we can reduce the
functionality in buildman. But buildman should call this script. Also I
think your script should use urllib2 and IMO the simple progress update
that buildman provides is plenty.

Re the destination, buildman could provide its own destination for its
download operation. But I suggest we use the same default one for both.
Perhaps your default makes more sense than buildman's? After all, buildman
doesn't really care where it is.

>
>
>>> This tool intends to be more generic design without hard-coding such
>>> kernel.org things.
>>>
>>> To achieve that, this tool consists of two files:
>>> Python script (this file) and the database file containing URLs of tarballs.
>>>
>>> We just need to update the latter when new version compilers are released
>>> (or better compilers are found.) The file is in the form of RFC 822 for
>>> easier editing.
>>
>> Any reason not to just maintain this list on the wiki. It seem this is
>> the primary issue for everyone... not figuring out how to download or
>> extract the toolchain.
>
> I can just note URLs down in README or wiki.
>
> Of course, everyone knows how to download a tarball and extract it, but
> isn't it more convenient to prepare a utility that can do everything for you?
>
>
>>> The script only uses Python libraries, not relies on external programs
>>> although it displays wget-like log when downloading tarballs. :-)
>>
>> It seems like using wget would be more appropriate. Why reinvent the wheel?
>
>
> My intention was to not depend on particular external programs like wget, curl.
>
> But, you are right, we should not reinvent the wheel.
>
> I will replace my implementation with a caller of wget.

I think urllib2 is a better solution.

>
>
>>> This is RFC because I am thinking it can be more brushed up.
>>> If the basis idea is OK, I will improve code, add more comments.
>>>
>>> Note this script is written in Python 3 and only works on Python 3.3
>>> or later. I do not think it is too much limitation, but some popular
>>> distributions under support might include older version. For example,
>>> looks like Ubuntu 12.04 LTS is shipped with Python 3.2.
>>
>> Why not write it in something that exists everywhere? If it's just for
>> downloading tool-chains, seems it should be easy to make it python 2.6
>> compatible.
>
> The reason of Python 3.3 dependency is that I wanted to unpack tarballs
> with a Python library.
>
> The tarfile library only supports .xz format on version 3.3 or later.
>
> We can just invoke tar program rather than Python library, so this version
> dependency will go away.

Yes :-)

def Unpack(self, fname, dest):
    """Unpack a tar file

    Args:
        fname: Filename to unpack
        dest: Destination directory
    Returns:
        Directory name of the first entry in the archive, without the
        trailing /
    """
    stdout = command.Output('tar', 'xvfJ', fname, '-C', dest)
    return stdout.splitlines()[0][:-1]

>
> But, using Python 3 seems a right way in a long run, I think.
>
> Python 3k is already widespread, isn't it?

Getting that way, but why require it?

When I run the tool with no args it seems to start downloading everything.
I think it would be better if it printed help. Also it could use slightly
more extensive help (maybe a -H option like buildman which prints the
README?).

One more question - how do we handle multiple toolchain versions? We may as
well figure that out now. How about adding another level in the directory
hierarchy with the toolchain source or type? Then we could build with both
eldk and kernel.org, for example. Tom has talked about how we might add
this feature to buildman too (i.e. build the same arch with multiple
toolchains).

Regards,
Simon


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-17 17:50     ` Simon Glass
@ 2015-05-19  5:04       ` Masahiro Yamada
  2015-05-19 17:16         ` Simon Glass
  0 siblings, 1 reply; 10+ messages in thread
From: Masahiro Yamada @ 2015-05-19  5:04 UTC (permalink / raw)
  To: u-boot

Hi Simon,


2015-05-18 2:50 GMT+09:00 Simon Glass <sjg@chromium.org>:
> Hi Masahiro,
>
> On 15 May 2015 at 22:58, Masahiro Yamada <yamada.masahiro@socionext.com>
> wrote:
>> Hi Joe,
>> (added Simon)
>>
>> 2015-05-16 4:52 GMT+09:00 Joe Hershberger <joe.hershberger@gmail.com>:
>>> Hi Masahiro-san,
>>>
>>> On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
>>> <yamada.masahiro@socionext.com> wrote:
>>>> When we send patches, we are supposed to test them by build utilities
>>>> such as MAKEALL, buildman. When we want to test global changes, the
>>>> first hurdle is, I think, to collect toolchains for all the architectures.
>>>>
>>>> We have some documents about build utilities, but I have not seen any
>>>> official information about how to get the suitable cross-tools.
>>>> Of course, it is possible to build them from sources, but it is not
>>>> necessarily feasible.
>>>>
>>>> Fortunately, the kernel.org site provides us pre-built toolchains, but
>>>> some architectures are missing. Also, some boards fail to build with
>>>> the kernel.org tools. We sometimes see, "where can I get the compiler
>>>> for this architecture?" things on the ML. We should be able to prepare
>>>> cross-compilers more easily.
>>>>
>>>> It is true that buildman provides --fetch-arch option for downloading
>>>> kernel.org toolchains, but it does not have access to others. And what
>>>> we really want to know is most likely how to get compilers for such
>>>> minor
>>>> architectures as kernel.org does not provide.
>>>
>>> Maybe just integrate this into buildman? Or remove it from buildman?
>>> Having it in buildman has the benefit that it updates buildman's config to
>>> know how to find the compiler.
>>> how to find the compiler.
>>
>> I wanted to add more options to provide better flexibility.
>>
>> For example, I wanted --destdir option
>> because I think installing tools under /opt/ or /usr/local/ is a generic
>> demand.
>>
>> That's why I implemented this tool as a separate script.
>> I also want to hear Simon's opinion.
>
> I think a separate script is fine - it helps if we can reduce the
> functionality in buildman. But buildman should call this script.

We cannot mix Python 2 and Python 3 in one script.

If buildman is to call this script directly, it must be rewritten in Python 2.



> Also I
> think your script should use urllib2 and IMO the simple progress update
> that buildman provides is plenty.

In Python 2, there are two libraries, urllib and urllib2, that do similar things.

In Python 3, the former was discontinued and the latter was renamed to
"urllib".

So the "urllib" I am using in my Python 3 script
is equivalent to what you call "urllib2" in Python 2.
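The renaming described above is why a script that must run on both interpreters usually hedges its import. A minimal sketch (the `fetch` helper is my illustration, not part of the patch):

```python
# The urllib/urllib2 split described above: Python 3 moved urlopen into
# urllib.request, so a script meant to run on either interpreter can try
# the Python 3 names first and fall back to the Python 2 ones.
try:
    from urllib.request import urlopen   # Python 3
    from urllib.error import URLError
except ImportError:
    from urllib2 import urlopen, URLError  # Python 2

def fetch(url):
    """Return the raw bytes at url, or None on a network error."""
    try:
        return urlopen(url).read()
    except URLError:
        return None
```

With a guard like this the same source runs unchanged under both interpreters, which sidesteps the 2-vs-3 question for the download path at least.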


> Re the destination, buildman could provide its own destination for its
> download operation. But I suggest we use the same default one for both.

OK, I can do this.

> Perhaps your default makes more sense than buildman's? After all, buildman
> doesn't really care where it is.
>
>>
>>
>>>> This tool intends to be more generic design without hard-coding such
>>>> kernel.org things.
>>>>
>>>> To achieve that, this tool consists of two files:
>>>> Python script (this file) and the database file containing URLs of
>>>> tarballs.
>>>>
>>>> We just need to update the latter when new version compilers are
>>>> released
>>>> (or better compilers are found.) The file is in the form of RFC 822 for
>>>> easier editing.
>>>
>>> Any reason not to just maintain this list on the wiki? It seems this is
>>> the primary issue for everyone... not figuring out how to download or
>>> extract the toolchain.
>>
>> I can just note URLs down in README or wiki.
>>
>> Of course, everyone knows how to download a tarball and extract it, but
>> isn't it more convenient to prepare a utility that can do everything for
>> you?
>>
>>
>>>> The script only uses Python libraries, not relies on external programs
>>>> although it displays wget-like log when downloading tarballs. :-)
>>>
>>> It seems like using wget would be more appropriate. Why reinvent the
>>> wheel?
>>
>>
>> My intention was to not depend on particular external programs like wget,
>> curl.
>>
>> But, you are right, we should not reinvent the wheel.
>>
>> I will replace my implementation with a caller of wget.
>
> I think urllib2 is a better solution.

Now I understand we must depend on "tar" anyway.

So my original goal of "no external program dependency" seems impossible
(at least on Python 2).

I do not mind depending on wget, and it seems easier.
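Given that tar is unavoidable anyway, the unpack step might look like the sketch below (the function name and use of subprocess are my illustration; the real patch may differ):

```python
import subprocess

def unpack(tarball, dest):
    """Extract tarball into dest with the system tar.

    Plain 'xvf' lets modern GNU tar autodetect gzip/bzip2/xz from the
    file itself, so one call handles the various toolchain tarball
    formats.  Returns the archive's top-level directory name, as
    buildman's Unpack() does, without the trailing '/'.
    """
    out = subprocess.check_output(['tar', 'xvf', tarball, '-C', dest])
    return out.decode().splitlines()[0].rstrip('/')
```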



>>
>>
>>>> This is RFC because I am thinking it can be more brushed up.
>>>> If the basis idea is OK, I will improve code, add more comments.
>>>>
>>>> Note this script is written in Python 3 and only works on Python 3.3
>>>> or later. I do not think it is too much limitation, but some popular
>>>> distributions under support might include older version. For example,
>>>> looks like Ubuntu 12.04 LTS is shipped with Python 3.2.
>>>
>>> Why not write it in something that exists everywhere? If it's just for
>>> downloading tool-chains, seems it should be easy to make it python 2.6
>>> compatible.
>>
>> The reason for the Python 3.3 dependency is that I wanted to unpack tarballs
>> with a Python library.
>>
>> The tarfile library only supports .xz format on version 3.3 or later.
>>
>> We can just invoke tar program rather than Python library, so this version
>> dependency will go away.
>
> Yes :-)
>
> def Unpack(self, fname, dest):
>     """Unpack a tar file
>
>     Args:
>         fname: Filename to unpack
>         dest: Destination directory
>     Returns:
>         Directory name of the first entry in the archive, without the
>         trailing /
>     """
>     stdout = command.Output('tar', 'xvfJ', fname, '-C', dest)
>     return stdout.splitlines()[0][:-1]
>
>>
>> But using Python 3 seems the right way in the long run, I think.
>>
>> Python 3k is already widespread, isn't it?
>
> Getting that way, but why require it?

I imagine the main interest of Python developers
has already moved to Python 3.

I think Python 2 will also be maintained long enough,
but our scripts should move to Python 3 for better maintainability in the long run.



> When I run the tool with no args it seems to start downloading everything.

Would requiring an explicit "tools/get-crosstools all" be better?


> I think it would be better if it printed help. Also it could use slightly
> more extensive help (maybe a -H option like buildman which prints the
> README?).
>
> One more question - how do we handle multiple toolchain versions? We may as
> well figure that out now. How about adding another level in the directory
> hierarchy with the toolchain source or type? Then we could build with both
> eldk and kernel.org, for example. Tom has talked about how we might add
> this feature to buildman too (i.e. build the same arch with multiple
> toolchains).

Seems a good idea, but it feels like too much work here...


-- 
Best Regards
Masahiro Yamada


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-15 11:01 [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures Masahiro Yamada
  2015-05-15 19:52 ` Joe Hershberger
@ 2015-05-19 15:28 ` Joe Hershberger
  1 sibling, 0 replies; 10+ messages in thread
From: Joe Hershberger @ 2015-05-19 15:28 UTC (permalink / raw)
  To: u-boot

On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
<yamada.masahiro@socionext.com> wrote:
> When we send patches, we are supposed to test them by build utilities
> such as MAKEALL, buildman.  When we want to test global changes, the
> first hurdle is, I think, to collect toolchains for all the architectures.
>
> We have some documents about build utilities, but I have not seen any
> official information about how to get the suitable cross-tools.
> Of course, it is possible to build them from sources, but it is not
> necessarily feasible.
>
> Fortunately, the kernel.org site provides us pre-built toolchains, but
> some architectures are missing.  Also, some boards fail to build with
> the kernel.org tools.  We sometimes see, "where can I get the compiler
> for this architecture?" things on the ML.  We should be able to prepare
> cross-compilers more easily.
>
> It is true that buildman provides --fetch-arch option for downloading
> kernel.org toolchains, but it does not have access to others.  And what
> we really want to know is most likely how to get compilers for such minor
> architectures as kernel.org does not provide.
>
> This tool intends to be more generic design without hard-coding such
> kernel.org things.
>
> To achieve that, this tool consists of two files:
> Python script (this file) and the database file containing URLs of tarballs.
>
> We just need to update the latter when new version compilers are released
> (or better compilers are found.)  The file is in the form of RFC 822 for
> easier editing.
>
> The script only uses Python libraries, not relies on external programs
> although it displays wget-like log when downloading tarballs.  :-)
>
> This is RFC because I am thinking it can be more brushed up.
> If the basis idea is OK, I will improve code, add more comments.
>
> Note this script is written in Python 3 and only works on Python 3.3
> or later.  I do not think it is too much limitation, but some popular
> distributions under support might include older version.  For example,
> looks like Ubuntu 12.04 LTS is shipped with Python 3.2.
>
> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
> ---

Also, in case you didn't notice, you have a typo in the subject of the
patch: "toolchais".

-Joe


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-19  5:04       ` Masahiro Yamada
@ 2015-05-19 17:16         ` Simon Glass
  2015-05-19 18:13           ` Joe Hershberger
  0 siblings, 1 reply; 10+ messages in thread
From: Simon Glass @ 2015-05-19 17:16 UTC (permalink / raw)
  To: u-boot

Hi Masahiro,

On 18 May 2015 at 23:04, Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
> Hi Simon,
>
>
> 2015-05-18 2:50 GMT+09:00 Simon Glass <sjg@chromium.org>:
>> Hi Masahiro,
>>
>> On 15 May 2015 at 22:58, Masahiro Yamada <yamada.masahiro@socionext.com>
>> wrote:
>>> Hi Joe,
>>> (added Simon)
>>>
>>> 2015-05-16 4:52 GMT+09:00 Joe Hershberger <joe.hershberger@gmail.com>:
>>>> Hi Masahiro-san,
>>>>
>>>> On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
>>>> <yamada.masahiro@socionext.com> wrote:
>>>>> When we send patches, we are supposed to test them by build utilities
>>>>> such as MAKEALL, buildman. When we want to test global changes, the
>>>>> first hurdle is, I think, to collect toolchains for all the
>>>>> architectures.
>>>>>
>>>>> We have some documents about build utilities, but I have not seen any
>>>>> official information about how to get the suitable cross-tools.
>>>>> Of course, it is possible to build them from sources, but it is not
>>>>> necessarily feasible.
>>>>>
>>>>> Fortunately, the kernel.org site provides us pre-built toolchains, but
>>>>> some architectures are missing. Also, some boards fail to build with
>>>>> the kernel.org tools. We sometimes see, "where can I get the compiler
>>>>> for this architecture?" things on the ML. We should be able to prepare
>>>>> cross-compilers more easily.
>>>>>
>>>>> It is true that buildman provides --fetch-arch option for downloading
>>>>> kernel.org toolchains, but it does not have access to others. And what
>>>>> we really want to know is most likely how to get compilers for such
>>>>> minor
>>>>> architectures as kernel.org does not provide.
>>>>
>>>> Maybe just integrate this into buildman? Or remove it from buildman?
>>>> Having it in buildman has the benefit that it updates buildman's config to
>>>> know how to find the compiler.
>>>> how to find the compiler.
>>>
>>> I wanted to add more options to provide better flexibility.
>>>
>>> For example, I wanted --destdir option
>>> because I think installing tools under /opt/ or /usr/local/ is a generic
>>> demand.
>>>
>>> That's why I implemented this tool as a separate script.
>>> I also want to hear Simon's opinion.
>>
>> I think a separate script is fine - it helps if we can reduce the
>> functionality in buildman. But buildman should call this script.
>
> We cannot mix Python 2 and Python 3 in one script.
>
> If buildman is to call this script directly, it must be rewritten in Python 2.
>

Could it not call it at the command line instead of importing it? Then
it would not matter.

What benefit do we get with Python 3?
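For reference, the command-line route needs no import at all: a Python 2 buildman could spawn the Python 3 tool as a child process. A sketch, with a hypothetical argument convention since the script's real interface is still in flux:

```python
# Sketch of the "call it at the command line" idea: run the downloader
# as a subprocess so the two interpreter versions never mix.  The
# script path and the --destdir/arch arguments are illustrative, not
# the real tool's interface.
import subprocess

def fetch_toolchain(script, arch, destdir, interpreter='python3'):
    """Invoke the (hypothetical) downloader script for one architecture.

    Returns True if the child process exited successfully."""
    ret = subprocess.call([interpreter, script, '--destdir', destdir, arch])
    return ret == 0
```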

>
>
>> Also I
>> think your script should use urllib2 and IMO the simple progress update
>> that buildman provides is plenty.
>
> In Python 2, there are two libraries, urllib and urllib2, that do similar things.
>
> In Python 3, the former was discontinued and the latter was renamed to
> "urllib".
>
> So, the "urllib" I am using in my python 3 script
> is equivalent to what you call "urllib2" in python 2.
>
>
>> Re the destination, buildman could provide its own destination for its
>> download operation. But I suggest we use the same default one for both.
>
> OK, I can do this.
>
>> Perhaps your default makes more sense than buildman's? After all, buildman
>> doesn't really care where it is.
>>
>>>
>>>
>>>>> This tool intends to be more generic design without hard-coding such
>>>>> kernel.org things.
>>>>>
>>>>> To achieve that, this tool consists of two files:
>>>>> Python script (this file) and the database file containing URLs of
>>>>> tarballs.
>>>>>
>>>>> We just need to update the latter when new version compilers are
>>>>> released
>>>>> (or better compilers are found.) The file is in the form of RFC 822 for
>>>>> easier editing.
>>>>
>>>> Any reason not to just maintain this list on the wiki? It seems this is
>>>> the primary issue for everyone... not figuring out how to download or
>>>> extract the toolchain.
>>>
>>> I can just note URLs down in README or wiki.
>>>
>>> Of course, everyone knows how to download a tarball and extract it, but
>>> isn't it more convenient to prepare a utility that can do everything for
>>> you?
>>>
>>>
>>>>> The script only uses Python libraries, not relies on external programs
>>>>> although it displays wget-like log when downloading tarballs. :-)
>>>>
>>>> It seems like using wget would be more appropriate. Why reinvent the
>>>> wheel?
>>>
>>>
>>> My intention was to not depend on particular external programs like wget,
>>> curl.
>>>
>>> But, you are right, we should not reinvent the wheel.
>>>
>>> I will replace my implementation with a caller of wget.
>>
>> I think urllib2 is a better solution.
>
> Now I understand we must depend on "tar" anyway.
>
> So my first intention "no external program dependency"  seems impossible
> (at least on Python 2).
>
> I do not mind depending on wget, and it seems easier.

Is wget always installed? Maybe urllib is better just in case.
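A download helper along the lines being discussed, using only urllib2 (or its Python 3 successor) plus a simple byte counter in place of wget's progress bar; the function name is illustrative:

```python
# Downloading without wget: urllib2 under Python 2, urllib.request
# under Python 3, reading in chunks and reporting progress ourselves.
import sys
try:
    from urllib2 import urlopen          # Python 2
except ImportError:
    from urllib.request import urlopen   # Python 3

def download(url, dest, chunk_size=64 * 1024):
    """Copy url to the local file dest, printing bytes received.

    Returns the total number of bytes written."""
    response = urlopen(url)
    received = 0
    with open(dest, 'wb') as out:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
            received += len(chunk)
            sys.stderr.write('\rfetched %d bytes' % received)
    sys.stderr.write('\n')
    return received
```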

>
>
>
>>>
>>>
>>>>> This is RFC because I am thinking it can be more brushed up.
>>>>> If the basis idea is OK, I will improve code, add more comments.
>>>>>
>>>>> Note this script is written in Python 3 and only works on Python 3.3
>>>>> or later. I do not think it is too much limitation, but some popular
>>>>> distributions under support might include older version. For example,
>>>>> looks like Ubuntu 12.04 LTS is shipped with Python 3.2.
>>>>
>>>> Why not write it in something that exists everywhere? If it's just for
>>>> downloading tool-chains, seems it should be easy to make it python 2.6
>>>> compatible.
>>>
>>> The reason for the Python 3.3 dependency is that I wanted to unpack tarballs
>>> with a Python library.
>>>
>>> The tarfile library only supports .xz format on version 3.3 or later.
>>>
>>> We can just invoke tar program rather than Python library, so this version
>>> dependency will go away.
>>
>> Yes :-)
>>
>> def Unpack(self, fname, dest):
>>     """Unpack a tar file
>>
>>     Args:
>>         fname: Filename to unpack
>>         dest: Destination directory
>>     Returns:
>>         Directory name of the first entry in the archive, without the
>>         trailing /
>>     """
>>     stdout = command.Output('tar', 'xvfJ', fname, '-C', dest)
>>     return stdout.splitlines()[0][:-1]
>>
>>>
>>> But using Python 3 seems the right way in the long run, I think.
>>>
>>> Python 3k is already widespread, isn't it?
>>
>> Getting that way, but why require it?
>
> I imagine the main interest of Python developers
> has already moved to Python 3.
>
> I think Python 2 will also be maintained long enough,
> but our scripts should move to Python 3 for better maintainability in the long run.

OK.

>
>
>
>> When I run the tool with no args it seems to start downloading everything.
>
> Would requiring an explicit "tools/get-crosstools all" be better?

Yes I think so.

>
>
>> I think it would be better if it printed help. Also it could use slightly
>> more extensive help (maybe a -H option like buildman which prints the
>> README?).
>>
>> One more question - how do we handle multiple toolchain versions? We may as
>> well figure that out now. How about adding another level in the directory
>> hierarchy with the toolchain source or type? Then we could build with both
>> eldk and kernel.org, for example. Tom has talked about how we might add
>> this feature to buildman too (i.e. build the same arch with multiple
>> toolchains).
>
> Seems a good idea, but it feels like too much work here...

It should just be an extra mkdir. The only pain is adding it to the path.
But in the case where we only have kernel.org toolchains perhaps that
does not matter?

If you do this bit and patch it in so that buildman uses it, I'll
update buildman to support multiple toolchains for the same arch.
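The extra directory level Simon describes could be as small as the sketch below (the base path and source names are examples, not an agreed layout):

```python
# Sketch of the proposed hierarchy: one extra level naming the
# toolchain source, so eldk and kernel.org builds of the same arch can
# coexist, e.g. <base>/kernel.org/arm and <base>/eldk/arm.
import os

def toolchain_dir(base, source, arch):
    """Return (and create if needed) the per-source install path."""
    path = os.path.join(base, source, arch)
    if not os.path.isdir(path):
        os.makedirs(path)
    return path
```

The remaining work would indeed just be putting the chosen directory on the PATH when that toolchain is selected.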

Regards,
Simon


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-19 17:16         ` Simon Glass
@ 2015-05-19 18:13           ` Joe Hershberger
  2015-05-19 18:17             ` Simon Glass
  0 siblings, 1 reply; 10+ messages in thread
From: Joe Hershberger @ 2015-05-19 18:13 UTC (permalink / raw)
  To: u-boot

Hi Simon,

On Tue, May 19, 2015 at 12:16 PM, Simon Glass <sjg@chromium.org> wrote:
> Hi Masahiro,
>
> On 18 May 2015 at 23:04, Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
>> Hi Simon,
>>
>>
>> 2015-05-18 2:50 GMT+09:00 Simon Glass <sjg@chromium.org>:
>>> Hi Masahiro,
>>>
>>> On 15 May 2015 at 22:58, Masahiro Yamada <yamada.masahiro@socionext.com>
>>> wrote:
>>>> Hi Joe,
>>>> (added Simon)
>>>>
>>>> 2015-05-16 4:52 GMT+09:00 Joe Hershberger <joe.hershberger@gmail.com>:
>>>>> Hi Masahiro-san,
>>>>>
>>>>> On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
>>>>> <yamada.masahiro@socionext.com> wrote:

8< snip >8

>>>>>> This tool intends to be more generic design without hard-coding such
>>>>>> kernel.org things.
>>>>>>
>>>>>> To achieve that, this tool consists of two files:
>>>>>> Python script (this file) and the database file containing URLs of
>>>>>> tarballs.
>>>>>>
>>>>>> We just need to update the latter when new version compilers are
>>>>>> released
>>>>>> (or better compilers are found.) The file is in the form of RFC 822 for
>>>>>> easier editing.
>>>>>
>>>>> Any reason not to just maintain this list on the wiki? It seems this is
>>>>> the primary issue for everyone... not figuring out how to download or
>>>>> extract the toolchain.
>>>>
>>>> I can just note URLs down in README or wiki.
>>>>
>>>> Of course, everyone knows how to download a tarball and extract it, but
>>>> isn't it more convenient to prepare a utility that can do everything for
>>>> you?
>>>>
>>>>
>>>>>> The script only uses Python libraries, not relies on external programs
>>>>>> although it displays wget-like log when downloading tarballs. :-)
>>>>>
>>>>> It seems like using wget would be more appropriate. Why reinvent the
>>>>> wheel?
>>>>
>>>>
>>>> My intention was to not depend on particular external programs like wget,
>>>> curl.
>>>>
>>>> But, you are right, we should not reinvent the wheel.
>>>>
>>>> I will replace my implementation with a caller of wget.
>>>
>>> I think urllib2 is a better solution.
>>
>> Now I understand we must depend on "tar" anyway.
>>
>> So my first intention "no external program dependency"  seems impossible
>> (at least on Python 2).
>>
>> I do not mind depending on wget, and it seems easier.
>
> Is wget always installed? Maybe urllib is better just in case.

In my case I do some work on an old distro and on that machine I have
wget, but not python 3.

8< snip >8

Cheers
-Joe


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-19 18:13           ` Joe Hershberger
@ 2015-05-19 18:17             ` Simon Glass
  2015-05-20  4:50               ` Masahiro Yamada
  0 siblings, 1 reply; 10+ messages in thread
From: Simon Glass @ 2015-05-19 18:17 UTC (permalink / raw)
  To: u-boot

Hi Joe,

On 19 May 2015 at 12:13, Joe Hershberger <joe.hershberger@gmail.com> wrote:
>
> Hi Simon,
>
> On Tue, May 19, 2015 at 12:16 PM, Simon Glass <sjg@chromium.org> wrote:
> > Hi Masahiro,
> >
> > On 18 May 2015 at 23:04, Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
> >> Hi Simon,
> >>
> >>
> >> 2015-05-18 2:50 GMT+09:00 Simon Glass <sjg@chromium.org>:
> >>> Hi Masahiro,
> >>>
> >>> On 15 May 2015 at 22:58, Masahiro Yamada <yamada.masahiro@socionext.com>
> >>> wrote:
> >>>> Hi Joe,
> >>>> (added Simon)
> >>>>
> >>>> 2015-05-16 4:52 GMT+09:00 Joe Hershberger <joe.hershberger@gmail.com>:
> >>>>> Hi Masahiro-san,
> >>>>>
> >>>>> On Fri, May 15, 2015 at 6:01 AM, Masahiro Yamada
> >>>>> <yamada.masahiro@socionext.com> wrote:
>
> 8< snip >8
>
> >>>>>> This tool intends to be more generic design without hard-coding such
> >>>>>> kernel.org things.
> >>>>>>
> >>>>>> To achieve that, this tool consists of two files:
> >>>>>> Python script (this file) and the database file containing URLs of
> >>>>>> tarballs.
> >>>>>>
> >>>>>> We just need to update the latter when new version compilers are
> >>>>>> released
> >>>>>> (or better compilers are found.) The file is in the form of RFC 822 for
> >>>>>> easier editing.
> >>>>>
> >>>>> Any reason not to just maintain this list on the wiki? It seems this is
> >>>>> the primary issue for everyone... not figuring out how to download or
> >>>>> extract the toolchain.
> >>>>
> >>>> I can just note URLs down in README or wiki.
> >>>>
> >>>> Of course, everyone knows how to download a tarball and extract it, but
> >>>> isn't it more convenient to prepare a utility that can do everything for
> >>>> you?
> >>>>
> >>>>
> >>>>>> The script only uses Python libraries, not relies on external programs
> >>>>>> although it displays wget-like log when downloading tarballs. :-)
> >>>>>
> >>>>> It seems like using wget would be more appropriate. Why reinvent the
> >>>>> wheel?
> >>>>
> >>>>
> >>>> My intention was to not depend on particular external programs like wget,
> >>>> curl.
> >>>>
> >>>> But, you are right, we should not reinvent the wheel.
> >>>>
> >>>> I will replace my implementation with a caller of wget.
> >>>
> >>> I think urllib2 is a better solution.
> >>
> >> Now I understand we must depend on "tar" anyway.
> >>
> >> So my first intention "no external program dependency"  seems impossible
> >> (at least on Python 2).
> >>
> >> I do not mind depending on wget, and it seems easier.
> >
> > Is wget always installed? Maybe urllib is better just in case.
>
> In my case I do some work on an old distro and on that machine I have
> wget, but not python 3.
>
> 8< snip >8

One option there might be to use Python 2 and urllib2, like buildman does.
In general it is nice to support older platforms if we can, as it reduces
friction.

Regards,
Simon


* [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures
  2015-05-19 18:17             ` Simon Glass
@ 2015-05-20  4:50               ` Masahiro Yamada
  0 siblings, 0 replies; 10+ messages in thread
From: Masahiro Yamada @ 2015-05-20  4:50 UTC (permalink / raw)
  To: u-boot

2015-05-20 3:17 GMT+09:00 Simon Glass <sjg@chromium.org>:

>> In my case I do some work on an old distro and on that machine I have
>> wget, but not python 3.
>>
>> 8< snip >8
>
> One option there might be Python 2 and urllib2 like buildman? In
> general it is nice to support older platforms if we can as it reduces
> friction.

That looks like the only choice to me.




-- 
Best Regards
Masahiro Yamada


end of thread, other threads:[~2015-05-20  4:50 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-15 11:01 [U-Boot] [RFC PATCH] tools: get-toolchais: a tool to get cross-tools for all architectures Masahiro Yamada
2015-05-15 19:52 ` Joe Hershberger
2015-05-16  4:58   ` Masahiro Yamada
2015-05-17 17:50     ` Simon Glass
2015-05-19  5:04       ` Masahiro Yamada
2015-05-19 17:16         ` Simon Glass
2015-05-19 18:13           ` Joe Hershberger
2015-05-19 18:17             ` Simon Glass
2015-05-20  4:50               ` Masahiro Yamada
2015-05-19 15:28 ` Joe Hershberger
