* Review request 0/13: Contribute meta-tensorflow to Yocto
@ 2019-02-21 11:37 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 01/13] initial Hongxu Jia
` (15 more replies)
0 siblings, 16 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Hi RP and Yocto folks,
AI on the IoT edge is becoming more and more popular, but there is no
machine learning framework in Yocto/OE. With the support of Eric
<Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com>
and Randy <randy.macleod@windriver.com>, and after two months of effort,
I've integrated TensorFlow into Yocto.
I am now contributing the patches to Yocto for review, and applying to
create a layer named `meta-tensorflow' on Yocto.
For testing convenience, there is a fork on GitHub:
https://github.com/hongxu-jia/meta-tensorflow
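For reviewers who want to try the fork locally, adding the layer to an
existing build's conf/bblayers.conf might look like the fragment below.
This is my sketch, not part of the patches: the paths are placeholders,
and the dependency layers are inferred from LAYERDEPENDS in patch 01.

```
# Hypothetical bblayers.conf additions; adjust paths to your checkout.
BBLAYERS += " \
  /path/to/meta-openembedded/meta-oe \
  /path/to/meta-openembedded/meta-python \
  /path/to/meta-java \
  /path/to/meta-tensorflow \
"
```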
By the way, I have contributed 11 other fundamental recipes to
meta-openembedded, and all of them have been merged to the master branch.
Please do not hesitate to share your suggestions.
//Hongxu
Testing Commands:
-----------------
See README
Testing, Expected Results:
--------------------------
See README
^ permalink raw reply [flat|nested] 30+ messages in thread
* [meta-tensorflow][PATCH 01/13] initial
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
conf/layer.conf | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
create mode 100644 conf/layer.conf
diff --git a/conf/layer.conf b/conf/layer.conf
new file mode 100644
index 0000000..352c2bc
--- /dev/null
+++ b/conf/layer.conf
@@ -0,0 +1,23 @@
+# We have a conf and classes directory, add to BBPATH
+BBPATH =. "${LAYERDIR}:"
+
+# We have a packages directory, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+ ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+BBFILE_COLLECTIONS += "meta-tensorflow"
+BBFILE_PATTERN_meta-tensorflow = "^${LAYERDIR}/"
+BBFILE_PRIORITY_meta-tensorflow = "10"
+
+LAYERVERSION_meta-tensorflow = "1"
+
+LAYERSERIES_COMPAT_meta-tensorflow = "thud"
+
+LAYERDEPENDS_meta-tensorflow = " \
+ core \
+ meta-java \
+ meta-python \
+ openembedded-layer \
+"
+
+LAYER_PATH_meta-tensorflow = "${LAYERDIR}"
--
2.8.1
* [meta-tensorflow][PATCH 02/13] bazel-native: add version 0.21.0
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Bazel is the build system of TensorFlow.
The build steps follow:
https://docs.bazel.build/versions/master/install-compile-source.html
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
recipes-devtools/bazel/bazel-native_0.21.0.bb | 33 +++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
create mode 100644 recipes-devtools/bazel/bazel-native_0.21.0.bb
diff --git a/recipes-devtools/bazel/bazel-native_0.21.0.bb b/recipes-devtools/bazel/bazel-native_0.21.0.bb
new file mode 100644
index 0000000..122e507
--- /dev/null
+++ b/recipes-devtools/bazel/bazel-native_0.21.0.bb
@@ -0,0 +1,33 @@
+DESCRIPTION = "Bazel build and test tool"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
+
+SRC_URI[md5sum] = "8c8240b178a35c0f3c1bc03017550270"
+SRC_URI[sha256sum] = "6ccb831e683179e0cfb351cb11ea297b4db48f9eab987601c038aa0f83037db4"
+
+SRC_URI = "https://github.com/bazelbuild/bazel/releases/download/${PV}/bazel-${PV}-dist.zip"
+
+inherit native
+
+INHIBIT_SYSROOT_STRIP = "1"
+
+DEPENDS = "coreutils-native \
+ zip-native \
+ openjdk-8-native \
+ "
+
+S="${WORKDIR}"
+
+do_compile () {
+ export JAVA_HOME="${RECIPE_SYSROOT_NATIVE}/usr/lib/jvm/openjdk-8-native"
+ TMPDIR="${TOPDIR}/bazel" \
+ VERBOSE=yes \
+ EXTRA_BAZEL_ARGS="--distdir=${DL_DIR}" \
+ ./compile.sh
+}
+
+do_install () {
+ install -d ${D}${bindir}
+ install -m 0755 ${S}/output/bazel ${D}${bindir}
+ create_cmdline_wrapper ${D}/${bindir}/bazel \$BAZEL_ARGS
+}
--
2.8.1
* [meta-tensorflow][PATCH 03/13] create classes/bazel.bbclass
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Inherit this bbclass to build tensorflow-native, tensorflow,
tensorboard and tensorflow-estimator with Bazel.
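The class translates the usual OE toolchain variables into Bazel
command-line options. The core of that mapping can be sketched in plain
Python; this is my simplified model of the bazel_get_flags() helper in
the patch (the variable values are illustrative, not real build output):

```python
# Sketch of how OE compiler variables map onto Bazel --*opt flags,
# modeled on bazel_get_flags() in this patch (simplified).

def bazel_flags(cc, cflags, cxxflags, ldflags):
    """Turn OE-style toolchain variables into Bazel flag strings."""
    flags = []
    # Everything after the compiler binary in $CC becomes both a C-only
    # and a C++ option.
    for opt in cc.split()[1:]:
        flags += ["--conlyopt=%s" % opt, "--cxxopt=%s" % opt]
    # Debug flags are dropped, matching the class's skip of "-g".
    flags += ["--conlyopt=%s" % f for f in cflags.split() if f != "-g"]
    flags += ["--cxxopt=%s" % f for f in cxxflags.split() if f != "-g"]
    flags += ["--linkopt=%s" % f for f in ldflags.split()]
    return flags

# Example values only; a real build would read these from the datastore.
flags = bazel_flags("arm-poky-linux-gnueabi-gcc -march=armv7-a",
                    "-O2 -g", "-O2 -g", "-Wl,-O1")
print(" ".join(flags))
```

The same pattern extends to the BUILD_* variables, which become
--host_conlyopt/--host_cxxopt/--host_linkopt in the class.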
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
classes/bazel.bbclass | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
create mode 100644 classes/bazel.bbclass
diff --git a/classes/bazel.bbclass b/classes/bazel.bbclass
new file mode 100644
index 0000000..3bda2c8
--- /dev/null
+++ b/classes/bazel.bbclass
@@ -0,0 +1,80 @@
+DEPENDS += "bazel-native \
+ openjdk-8-native \
+ "
+
+BAZEL_OUTPUTBASE_DIR ?= "${WORKDIR}/bazel/output_base"
+export BAZEL_ARGS="--output_user_root=${WORKDIR}/bazel/user_root \
+ --output_base=${BAZEL_OUTPUTBASE_DIR} \
+ --bazelrc=${S}/bazelrc \
+ "
+
+export JAVA_HOME="${RECIPE_SYSROOT_NATIVE}/usr/lib/jvm/openjdk-8-native"
+
+def bazel_get_flags(d):
+ flags = ""
+ for i in d.getVar("CC").split()[1:]:
+ flags += "--conlyopt=%s --cxxopt=%s " % (i, i)
+
+ for i in d.getVar("CFLAGS").split():
+ if i == "-g":
+ continue
+ flags += "--conlyopt=%s " % i
+
+ for i in d.getVar("BUILD_CFLAGS").split():
+ flags += "--host_conlyopt=%s " % i
+
+ for i in d.getVar("CXXFLAGS").split():
+ if i == "-g":
+ continue
+ flags += "--cxxopt=%s " % i
+
+ for i in d.getVar("BUILD_CXXFLAGS").split():
+ flags += "--host_cxxopt=%s " % i
+
+ for i in d.getVar("CPPFLAGS").split():
+ if i == "-g":
+ continue
+ flags += "--conlyopt=%s --cxxopt=%s " % (i, i)
+
+ for i in d.getVar("BUILD_CPPFLAGS").split():
+ flags += "--host_conlyopt=%s --host_cxxopt=%s " % (i, i)
+
+ for i in d.getVar("LDFLAGS").split():
+ flags += "--linkopt=%s " % i
+
+ for i in d.getVar("BUILD_LDFLAGS").split():
+ flags += "--host_linkopt=%s " % i
+
+ for i in d.getVar("TOOLCHAIN_OPTIONS").split():
+ flags += "--linkopt=%s " % i
+
+ return flags
+
+bazel_do_configure () {
+ cat > "${S}/bazelrc" <<-EOF
+build --verbose_failures
+build --spawn_strategy=standalone --genrule_strategy=standalone
+build --jobs=${@oe.utils.cpu_count()}
+test --verbose_failures --verbose_test_summary
+test --spawn_strategy=standalone --genrule_strategy=standalone
+
+build --linkopt=-Wl,-latomic
+build --strip=never
+
+fetch --distdir=${DL_DIR}
+build --distdir=${DL_DIR}
+
+EOF
+
+}
+
+bazel_do_configure_append_class-target () {
+ cat >> "${S}/bazelrc" <<-EOF
+# FLAGS
+build ${@bazel_get_flags(d)}
+EOF
+
+ sed -i "s:${WORKDIR}:${BAZEL_OUTPUTBASE_DIR}/external/yocto_compiler:g" ${S}/bazelrc
+}
+
+EXPORT_FUNCTIONS do_configure
--
2.8.1
* [meta-tensorflow][PATCH 04/13] tensorflow-native: add version 1.13.0
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
It is required by tensorflow-estimator.
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
.../tensorflow/tensorflow-native_1.13.0.bb | 60 ++++++++++++++++++++++
1 file changed, 60 insertions(+)
create mode 100644 recipes-framework/tensorflow/tensorflow-native_1.13.0.bb
diff --git a/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb b/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb
new file mode 100644
index 0000000..bb979ab
--- /dev/null
+++ b/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb
@@ -0,0 +1,60 @@
+DESCRIPTION = "TensorFlow C/C++ Libraries"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=01e86893010a1b87e69a213faa753ebd"
+
+DEPENDS = "bazel-native protobuf-native util-linux-native protobuf"
+SRCREV = "c8875cbb1341f6ca14dd0ec908f1dde7d67f7808"
+SRC_URI = "git://github.com/tensorflow/tensorflow.git;branch=r1.13 \
+ "
+S = "${WORKDIR}/git"
+
+DEPENDS += " \
+ python3 \
+ python3-numpy-native \
+ python3-keras-applications-native \
+ python3-keras-preprocessing-native \
+ python3-pip-native \
+ python3-wheel-native \
+"
+
+inherit python3native bazel native
+
+export PYTHON_BIN_PATH="${PYTHON}"
+export PYTHON_LIB_PATH="${PYTHON_SITEPACKAGES_DIR}"
+
+do_configure_append () {
+ TF_NEED_CUDA=0 \
+ TF_NEED_OPENCL_SYCL=0 \
+ TF_NEED_OPENCL=0 \
+ TF_CUDA_CLANG=0 \
+ TF_DOWNLOAD_CLANG=0 \
+ TF_ENABLE_XLA=0 \
+ TF_NEED_MPI=0 \
+ TF_SET_ANDROID_WORKSPACE=0 \
+ ./configure
+}
+
+do_compile () {
+ unset CC
+ ${STAGING_BINDIR_NATIVE}/bazel build \
+ -c opt \
+ --subcommands --explain=${T}/explain.log \
+ --verbose_explanations --verbose_failures \
+ --verbose_failures \
+ //tensorflow/tools/pip_package:build_pip_package
+
+ ${STAGING_BINDIR_NATIVE}/bazel shutdown
+}
+
+do_install() {
+ export TMPDIR="${WORKDIR}"
+ echo "Generating pip package"
+ BDIST_OPTS="--universal" \
+ ${S}/bazel-bin/tensorflow/tools/pip_package/build_pip_package ${WORKDIR}
+
+ echo "Installing pip package"
+ install -d ${D}/${PYTHON_SITEPACKAGES_DIR}
+ ${STAGING_BINDIR_NATIVE}/pip3 install --disable-pip-version-check -v --no-deps \
+ -t ${D}/${PYTHON_SITEPACKAGES_DIR} --no-cache-dir ${WORKDIR}/tensorflow*.whl
+
+}
--
2.8.1
* [meta-tensorflow][PATCH 05/13] tensorflow-native: add Python 3.7 compatibility
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Fix a SyntaxError around the async keyword on Python 3.7.
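For context on why the backport is needed: from Python 3.7 onward,
async is a reserved keyword, so astor's old
visit_FunctionDef(self, node, async=False) signature no longer even
parses. A minimal reproduction (my own illustration, not code from the
patch):

```python
# On Python 3.7+, "async" is a hard keyword, so using it as a parameter
# name fails at compile time. The astor backport below renames the
# parameter to "is_async".
old_style = "def visit_FunctionDef(node, async=False):\n    pass\n"
new_style = "def visit_FunctionDef(node, is_async=False):\n    pass\n"

try:
    compile(old_style, "<astor>", "exec")
    broken = False
except SyntaxError:
    broken = True   # True on Python 3.7 and later

compile(new_style, "<astor>", "exec")  # the renamed form parses fine
print(broken)
```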
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
...xError-around-async-keyword-on-Python-3.7.patch | 116 +++++++++++++++++++++
.../tensorflow/tensorflow-native_1.13.0.bb | 1 +
2 files changed, 117 insertions(+)
create mode 100644 recipes-framework/tensorflow/files/0001-SyntaxError-around-async-keyword-on-Python-3.7.patch
diff --git a/recipes-framework/tensorflow/files/0001-SyntaxError-around-async-keyword-on-Python-3.7.patch b/recipes-framework/tensorflow/files/0001-SyntaxError-around-async-keyword-on-Python-3.7.patch
new file mode 100644
index 0000000..75cb572
--- /dev/null
+++ b/recipes-framework/tensorflow/files/0001-SyntaxError-around-async-keyword-on-Python-3.7.patch
@@ -0,0 +1,116 @@
+From 8abbdce7a7ec7428b7f657e313ee0b6642c1de76 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 14 Feb 2019 10:45:55 +0800
+Subject: [PATCH] SyntaxError around async keyword on Python 3.7
+
+Backport a fix from upstream astor to fix the error
+
+Upstream-Status: Pending
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ tensorflow/workspace.bzl | 1 +
+ ...-Don-t-use-async-as-a-keyword-argument-94.patch | 79 ++++++++++++++++++++++
+ 2 files changed, 80 insertions(+)
+ create mode 100644 third_party/systemlibs/0001-Don-t-use-async-as-a-keyword-argument-94.patch
+
+diff --git a/tensorflow/workspace.bzl b/tensorflow/workspace.bzl
+index aefab03..a281803 100755
+--- a/tensorflow/workspace.bzl
++++ b/tensorflow/workspace.bzl
+@@ -278,6 +278,7 @@ def tf_workspace(path_prefix = "", tf_repo_name = ""):
+ tf_http_archive(
+ name = "astor_archive",
+ build_file = clean_dep("//third_party:astor.BUILD"),
++ patch_file = clean_dep("//third_party/systemlibs:0001-Don-t-use-async-as-a-keyword-argument-94.patch"),
+ sha256 = "ff6d2e2962d834acb125cc4dcc80c54a8c17c253f4cc9d9c43b5102a560bb75d",
+ strip_prefix = "astor-0.6.2",
+ system_build_file = clean_dep("//third_party/systemlibs:astor.BUILD"),
+diff --git a/third_party/systemlibs/0001-Don-t-use-async-as-a-keyword-argument-94.patch b/third_party/systemlibs/0001-Don-t-use-async-as-a-keyword-argument-94.patch
+new file mode 100644
+index 0000000..aafb172
+--- /dev/null
++++ b/third_party/systemlibs/0001-Don-t-use-async-as-a-keyword-argument-94.patch
+@@ -0,0 +1,79 @@
++From fe1ef7f9d746847c157197e4cb2ab6505fe19faf Mon Sep 17 00:00:00 2001
++From: Berker Peksag <berker.peksag@gmail.com>
++Date: Fri, 23 Mar 2018 16:50:21 +0300
++Subject: [PATCH] Don't use 'async' as a keyword argument (#94)
++
++Fixes #86
++
++Upstream-Status: Backport[https://github.com/berkerpeksag/astor.git]
++Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
++---
++ astor/code_gen.py | 18 +++++++++---------
++ 1 file changed, 9 insertions(+), 9 deletions(-)
++
++diff --git a/astor/code_gen.py b/astor/code_gen.py
++index 7c27f70..47d6acc 100644
++--- a/astor/code_gen.py
+++++ b/astor/code_gen.py
++@@ -308,8 +308,8 @@ class SourceGenerator(ExplicitNodeVisitor):
++ self.statement(node)
++ self.generic_visit(node)
++
++- def visit_FunctionDef(self, node, async=False):
++- prefix = 'async ' if async else ''
+++ def visit_FunctionDef(self, node, is_async=False):
+++ prefix = 'async ' if is_async else ''
++ self.decorators(node, 1 if self.indentation else 2)
++ self.statement(node, '%sdef %s' % (prefix, node.name), '(')
++ self.visit_arguments(node.args)
++@@ -322,7 +322,7 @@ class SourceGenerator(ExplicitNodeVisitor):
++
++ # introduced in Python 3.5
++ def visit_AsyncFunctionDef(self, node):
++- self.visit_FunctionDef(node, async=True)
+++ self.visit_FunctionDef(node, is_async=True)
++
++ def visit_ClassDef(self, node):
++ have_args = []
++@@ -364,24 +364,24 @@ class SourceGenerator(ExplicitNodeVisitor):
++ self.else_body(else_)
++ break
++
++- def visit_For(self, node, async=False):
+++ def visit_For(self, node, is_async=False):
++ set_precedence(node, node.target)
++- prefix = 'async ' if async else ''
+++ prefix = 'async ' if is_async else ''
++ self.statement(node, '%sfor ' % prefix,
++ node.target, ' in ', node.iter, ':')
++ self.body_or_else(node)
++
++ # introduced in Python 3.5
++ def visit_AsyncFor(self, node):
++- self.visit_For(node, async=True)
+++ self.visit_For(node, is_async=True)
++
++ def visit_While(self, node):
++ set_precedence(node, node.test)
++ self.statement(node, 'while ', node.test, ':')
++ self.body_or_else(node)
++
++- def visit_With(self, node, async=False):
++- prefix = 'async ' if async else ''
+++ def visit_With(self, node, is_async=False):
+++ prefix = 'async ' if is_async else ''
++ self.statement(node, '%swith ' % prefix)
++ if hasattr(node, "context_expr"): # Python < 3.3
++ self.visit_withitem(node)
++@@ -392,7 +392,7 @@ class SourceGenerator(ExplicitNodeVisitor):
++
++ # new for Python 3.5
++ def visit_AsyncWith(self, node):
++- self.visit_With(node, async=True)
+++ self.visit_With(node, is_async=True)
++
++ # new for Python 3.3
++ def visit_withitem(self, node):
++--
++2.7.4
++
+--
+2.7.4
+
diff --git a/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb b/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb
index bb979ab..e747670 100644
--- a/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb
+++ b/recipes-framework/tensorflow/tensorflow-native_1.13.0.bb
@@ -5,6 +5,7 @@ LIC_FILES_CHKSUM = "file://LICENSE;md5=01e86893010a1b87e69a213faa753ebd"
DEPENDS = "bazel-native protobuf-native util-linux-native protobuf"
SRCREV = "c8875cbb1341f6ca14dd0ec908f1dde7d67f7808"
SRC_URI = "git://github.com/tensorflow/tensorflow.git;branch=r1.13 \
+ file://0001-SyntaxError-around-async-keyword-on-Python-3.7.patch \
"
S = "${WORKDIR}/git"
--
2.8.1
* [meta-tensorflow][PATCH 06/13] tensorflow-estimator: add version 1.13
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
The build steps follow the README of https://github.com/tensorflow/estimator
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
.../tensorflow/tensorflow-estimator_1.13.bb | 50 ++++++++++++++++++++++
1 file changed, 50 insertions(+)
create mode 100644 recipes-framework/tensorflow/tensorflow-estimator_1.13.bb
diff --git a/recipes-framework/tensorflow/tensorflow-estimator_1.13.bb b/recipes-framework/tensorflow/tensorflow-estimator_1.13.bb
new file mode 100644
index 0000000..5400888
--- /dev/null
+++ b/recipes-framework/tensorflow/tensorflow-estimator_1.13.bb
@@ -0,0 +1,50 @@
+DESCRIPTION = "A high-level TensorFlow API that greatly simplifies machine \
+learning programming."
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=01e86893010a1b87e69a213faa753ebd"
+
+SRC_URI = "git://github.com/tensorflow/estimator.git;branch=r1.13 \
+ "
+SRCREV = "340703eed78ba4854862f749ed94f91598826e79"
+S = "${WORKDIR}/git"
+
+inherit python3native bazel
+
+DEPENDS += " \
+ python3-pip-native \
+ python3-wheel-native \
+ python3-six-native \
+ python3-protobuf-native \
+ python3-absl-native \
+ python3-astor-native \
+ python3-gast-native \
+ python3-termcolor-native \
+ tensorflow-native \
+"
+
+do_compile () {
+ unset CC
+ export TMPDIR="${WORKDIR}"
+ ${STAGING_BINDIR_NATIVE}/bazel build \
+ --subcommands --explain=${T}/explain.log \
+ --verbose_explanations --verbose_failures \
+ --verbose_failures \
+ --python_path="${PYTHON}" \
+ //tensorflow_estimator/tools/pip_package:build_pip_package
+
+ ${STAGING_BINDIR_NATIVE}/bazel shutdown
+
+ PYTHON_BIN_PATH="${PYTHON}" \
+ ${S}/bazel-bin/tensorflow_estimator/tools/pip_package/build_pip_package \
+ ${WORKDIR}/estimator_pip
+}
+
+do_install () {
+ echo "Installing pip package"
+ install -d ${D}${PYTHON_SITEPACKAGES_DIR}
+ ${STAGING_BINDIR_NATIVE}/pip3 install --disable-pip-version-check -v --no-deps \
+ -t ${D}/${PYTHON_SITEPACKAGES_DIR} --no-cache-dir ${WORKDIR}/estimator_pip/*.whl
+
+}
+
+FILES_${PN} += "${libdir}/*"
--
2.8.1
* [meta-tensorflow][PATCH 07/13] Customize Yocto toolchain for cross compiling
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
The idea comes from the upstream Arm compiler support used by `Build
from source for the Raspberry Pi':
$ ls <TensorFlow source code>/third_party/toolchains/cpus/arm/
arm_compiler_configure.bzl BUILD CROSSTOOL.tpl
https://www.tensorflow.org/install/source_rpi
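The template files below carry %%CT_NAME%% and %%YOCTO_COMPILER_PATH%%
placeholders that are expected to be substituted at configure time. This
sketch only illustrates the substitution mechanism; the toolchain
triplet and path are made-up example values, not actual build output:

```python
# Illustrative placeholder expansion for the CROSSTOOL.tpl-style
# templates below. The real substitution is done by the recipe (e.g.
# with sed over the generated files); the values here are examples.
template = ('tool_path { name: "gcc" path: '
            '"%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/'
            '%%CT_NAME%%/%%CT_NAME%%-gcc" }')

subst = {
    "%%CT_NAME%%": "arm-poky-linux-gnueabi",        # example triplet
    "%%YOCTO_COMPILER_PATH%%": "/path/to/workdir",  # example path
}
for key, val in subst.items():
    template = template.replace(key, val)
print(template)
```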
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
recipes-framework/tensorflow/files/BUILD | 56 +++++
.../tensorflow/files/BUILD.yocto_compiler | 82 ++++++++
recipes-framework/tensorflow/files/CROSSTOOL.tpl | 229 +++++++++++++++++++++
.../tensorflow/files/yocto_compiler_configure.bzl | 24 +++
4 files changed, 391 insertions(+)
create mode 100644 recipes-framework/tensorflow/files/BUILD
create mode 100644 recipes-framework/tensorflow/files/BUILD.yocto_compiler
create mode 100644 recipes-framework/tensorflow/files/CROSSTOOL.tpl
create mode 100644 recipes-framework/tensorflow/files/yocto_compiler_configure.bzl
diff --git a/recipes-framework/tensorflow/files/BUILD b/recipes-framework/tensorflow/files/BUILD
new file mode 100644
index 0000000..fd1f99a
--- /dev/null
+++ b/recipes-framework/tensorflow/files/BUILD
@@ -0,0 +1,56 @@
+package(default_visibility = ["//visibility:public"])
+
+cc_toolchain_suite(
+ name = "toolchain",
+ toolchains = {
+ "armeabi|compiler": ":cc-compiler-armeabi",
+ "local|compiler": ":cc-compiler-local",
+ "armeabi": ":cc-compiler-armeabi",
+ "k8": ":cc-compiler-local",
+ "piii": ":cc-compiler-local",
+ "arm": ":cc-compiler-local",
+ "s390x": ":cc-compiler-local",
+ },
+)
+
+filegroup(
+ name = "empty",
+ srcs = [],
+)
+
+filegroup(
+ name = "arm_linux_all_files",
+ srcs = [
+ "@yocto_compiler//:compiler_pieces",
+ ],
+)
+
+cc_toolchain(
+ name = "cc-compiler-local",
+ all_files = ":empty",
+ compiler_files = ":empty",
+ cpu = "local",
+ dwp_files = ":empty",
+ dynamic_runtime_libs = [":empty"],
+ linker_files = ":empty",
+ objcopy_files = ":empty",
+ static_runtime_libs = [":empty"],
+ strip_files = ":empty",
+ supports_param_files = 1,
+ toolchain_identifier = "local_linux",
+)
+
+cc_toolchain(
+ name = "cc-compiler-armeabi",
+ all_files = ":arm_linux_all_files",
+ compiler_files = ":arm_linux_all_files",
+ cpu = "armeabi",
+ dwp_files = ":empty",
+ dynamic_runtime_libs = [":empty"],
+ linker_files = ":arm_linux_all_files",
+ objcopy_files = "arm_linux_all_files",
+ static_runtime_libs = [":empty"],
+ strip_files = "arm_linux_all_files",
+ supports_param_files = 1,
+ toolchain_identifier = "yocto-linux-gnueabihf",
+)
diff --git a/recipes-framework/tensorflow/files/BUILD.yocto_compiler b/recipes-framework/tensorflow/files/BUILD.yocto_compiler
new file mode 100644
index 0000000..0dd84d3
--- /dev/null
+++ b/recipes-framework/tensorflow/files/BUILD.yocto_compiler
@@ -0,0 +1,82 @@
+package(default_visibility = ['//visibility:public'])
+
+filegroup(
+ name = 'gcc',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-gcc',
+ ],
+)
+
+filegroup(
+ name = 'ar',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-ar',
+ ],
+)
+
+filegroup(
+ name = 'ld',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-ld',
+ ],
+)
+
+filegroup(
+ name = 'nm',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-nm',
+ ],
+)
+
+filegroup(
+ name = 'objcopy',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-objcopy',
+ ],
+)
+
+filegroup(
+ name = 'objdump',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-objdump',
+ ],
+)
+
+filegroup(
+ name = 'strip',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-strip',
+ ],
+)
+
+filegroup(
+ name = 'as',
+ srcs = [
+ 'recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-as',
+ ],
+)
+
+filegroup(
+ name = 'compiler_pieces',
+ srcs = glob([
+ 'recipe-sysroot-native/usr/include/**',
+ 'recipe-sysroot-native/usr/lib/%%CT_NAME%%/**',
+ 'recipe-sysroot-native/usr/lib/%%CT_NAME%%/gcc/**',
+ 'recipe-sysroot-native/usr/libexec/%%CT_NAME%%/**',
+ 'recipe-sysroot/usr/include/**',
+ ]),
+)
+
+filegroup(
+ name = 'compiler_components',
+ srcs = [
+ ':gcc',
+ ':ar',
+ ':ld',
+ ':nm',
+ ':objcopy',
+ ':objdump',
+ ':strip',
+ ':as',
+ ],
+)
diff --git a/recipes-framework/tensorflow/files/CROSSTOOL.tpl b/recipes-framework/tensorflow/files/CROSSTOOL.tpl
new file mode 100644
index 0000000..296d6a6
--- /dev/null
+++ b/recipes-framework/tensorflow/files/CROSSTOOL.tpl
@@ -0,0 +1,229 @@
+major_version: "local"
+minor_version: ""
+default_target_cpu: "same_as_host"
+
+toolchain {
+ abi_version: "armeabi"
+ abi_libc_version: "armeabi"
+ builtin_sysroot: ""
+ compiler: "compiler"
+ host_system_name: "armeabi"
+ needsPic: true
+ supports_gold_linker: false
+ supports_incremental_linker: false
+ supports_fission: false
+ supports_interface_shared_objects: false
+ supports_normalizing_ar: false
+ supports_start_end_lib: false
+ target_libc: "armeabi"
+ target_cpu: "armeabi"
+ target_system_name: "armeabi"
+ toolchain_identifier: "yocto-linux-gnueabihf"
+
+ tool_path { name: "ar" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-ar" }
+ tool_path { name: "compat-ld" path: "/bin/false" }
+ tool_path { name: "cpp" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-cpp" }
+ tool_path { name: "dwp" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-dwp" }
+ tool_path { name: "gcc" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-gcc" }
+ tool_path { name: "gcov" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-gcov" }
+ tool_path { name: "ld" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-ld" }
+
+ tool_path { name: "nm" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-nm" }
+ tool_path { name: "objcopy" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-objcopy" }
+ tool_path { name: "objdump" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-objdump" }
+ tool_path { name: "strip" path: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/bin/%%CT_NAME%%/%%CT_NAME%%-strip" }
+
+
+ cxx_builtin_include_directory: "%%YOCTO_COMPILER_PATH%%"
+
+ compiler_flag: "--sysroot=%%YOCTO_COMPILER_PATH%%/recipe-sysroot"
+
+ # The path below must match the one used in
+ # tensorflow/tools/ci_build/pi/build_raspberry_pi.sh.
+ cxx_builtin_include_directory: "/tmp/openblas_install/include/"
+ cxx_flag: "-std=c++11"
+ # The cxx_builtin_include_directory directives don't seem to be adding these, so
+ # explicitly set them as flags. There's a query to the Bazel team outstanding about
+ # why this is necessary.
+ linker_flag: "-lstdc++"
+
+ unfiltered_cxx_flag: "-Wno-builtin-macro-redefined"
+ unfiltered_cxx_flag: "-D__DATE__=\"redacted\""
+ unfiltered_cxx_flag: "-D__TIMESTAMP__=\"redacted\""
+ unfiltered_cxx_flag: "-D__TIME__=\"redacted\""
+
+ unfiltered_cxx_flag: "-no-canonical-prefixes"
+ unfiltered_cxx_flag: "-fno-canonical-system-headers"
+
+ # Include target pyconfig.h
+ compiler_flag: "-D_PYTHON_INCLUDE_TARGET"
+
+ compiler_flag: "-U_FORTIFY_SOURCE"
+ compiler_flag: "-D_FORTIFY_SOURCE=1"
+ compiler_flag: "-fstack-protector"
+ compiler_flag: "-DRASPBERRY_PI" # To differentiate from mobile builds.
+ linker_flag: "-Wl,-z,relro,-z,now"
+
+ linker_flag: "-no-canonical-prefixes"
+ linker_flag: "-pass-exit-codes"
+
+ linker_flag: "-Wl,--build-id=md5"
+
+ compilation_mode_flags {
+ mode: DBG
+ # Enable debug symbols.
+ compiler_flag: "-g"
+ }
+ compilation_mode_flags {
+ mode: OPT
+
+ # No debug symbols.
+ # Maybe we should enable https://gcc.gnu.org/wiki/DebugFission for opt or
+ # even generally? However, that can't happen here, as it requires special
+ # handling in Bazel.
+ compiler_flag: "-g0"
+
+ # Conservative choice for -O
+ # -O3 can increase binary size and even slow down the resulting binaries.
+ # Profile first and / or use FDO if you need better performance than this.
+ compiler_flag: "-O2"
+
+ # Disable assertions
+ compiler_flag: "-DNDEBUG"
+
+ # Removal of unused code and data at link time (can this increase binary size in some cases?).
+ compiler_flag: "-ffunction-sections"
+ compiler_flag: "-fdata-sections"
+ linker_flag: "-Wl,--gc-sections"
+ }
+ linking_mode_flags { mode: DYNAMIC }
+
+}
+
+toolchain {
+ abi_version: "local"
+ abi_libc_version: "local"
+ builtin_sysroot: ""
+ compiler: "compiler"
+ host_system_name: "local"
+ needsPic: true
+ supports_gold_linker: false
+ supports_incremental_linker: false
+ supports_fission: false
+ supports_interface_shared_objects: false
+ supports_normalizing_ar: false
+ supports_start_end_lib: false
+ target_libc: "local"
+ target_cpu: "local"
+ target_system_name: "local"
+ toolchain_identifier: "local_linux"
+
+ tool_path { name: "ar" path: "/usr/bin/ar" }
+ tool_path { name: "compat-ld" path: "/usr/bin/ld" }
+ tool_path { name: "cpp" path: "/usr/bin/cpp" }
+ tool_path { name: "dwp" path: "/usr/bin/dwp" }
+ tool_path { name: "gcc" path: "/usr/bin/gcc" }
+ cxx_flag: "-std=c++0x"
+ linker_flag: "-lstdc++"
+ linker_flag: "-B/usr/bin/"
+
+ # TODO(bazel-team): In theory, the path here ought to exactly match the path
+ # used by gcc. That works because bazel currently doesn't track files at
+ # absolute locations and has no remote execution, yet. However, this will need
+ # to be fixed, maybe with auto-detection?
+ cxx_builtin_include_directory: "/usr/lib/gcc/"
+ cxx_builtin_include_directory: "/usr/local/include"
+ cxx_builtin_include_directory: "/usr/include"
+ cxx_builtin_include_directory: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/include"
+
+ tool_path { name: "gcov" path: "/usr/bin/gcov" }
+
+ # C(++) compiles invoke the compiler (as that is the one knowing where
+ # to find libraries), but we provide LD so other rules can invoke the linker.
+ tool_path { name: "ld" path: "/usr/bin/ld" }
+
+ tool_path { name: "nm" path: "/usr/bin/nm" }
+ tool_path { name: "objcopy" path: "/usr/bin/objcopy" }
+ objcopy_embed_flag: "-I"
+ objcopy_embed_flag: "binary"
+ tool_path { name: "objdump" path: "/usr/bin/objdump" }
+ tool_path { name: "strip" path: "/usr/bin/strip" }
+
+ # Anticipated future default.
+ unfiltered_cxx_flag: "-no-canonical-prefixes"
+ unfiltered_cxx_flag: "-fno-canonical-system-headers"
+
+ # Make C++ compilation deterministic. Use linkstamping instead of these
+ # compiler symbols.
+ unfiltered_cxx_flag: "-Wno-builtin-macro-redefined"
+ unfiltered_cxx_flag: "-D__DATE__=\"redacted\""
+ unfiltered_cxx_flag: "-D__TIMESTAMP__=\"redacted\""
+ unfiltered_cxx_flag: "-D__TIME__=\"redacted\""
+
+ # Security hardening on by default.
+ # Conservative choice; -D_FORTIFY_SOURCE=2 may be unsafe in some cases.
+ # We need to undef it before redefining it as some distributions now have
+ # it enabled by default.
+ compiler_flag: "-U_FORTIFY_SOURCE"
+ compiler_flag: "-D_FORTIFY_SOURCE=1"
+ compiler_flag: "-fstack-protector"
+ linker_flag: "-Wl,-z,relro,-z,now"
+
+ # Include native pyconfig.h
+ compiler_flag: "-D_PYTHON_INCLUDE_NATIVE"
+
+ # Enable coloring even if there's no attached terminal. Bazel removes the
+ # escape sequences if --nocolor is specified. This isn't supported by gcc
+ # on Ubuntu 14.04.
+ # compiler_flag: "-fcolor-diagnostics"
+
+ # All warnings are enabled. Maybe enable -Werror as well?
+ compiler_flag: "-Wall"
+ # Enable a few more warnings that aren't part of -Wall.
+ compiler_flag: "-Wunused-but-set-parameter"
+ # But disable some that are problematic.
+ compiler_flag: "-Wno-free-nonheap-object" # has false positives
+
+ # Keep stack frames for debugging, even in opt mode.
+ compiler_flag: "-fno-omit-frame-pointer"
+
+ # Anticipated future default.
+ linker_flag: "-no-canonical-prefixes"
+ # Have gcc return the exit code from ld.
+ linker_flag: "-pass-exit-codes"
+ # Stamp the binary with a unique identifier.
+ linker_flag: "-Wl,--build-id=md5"
+ linker_flag: "-Wl,--hash-style=gnu"
+ # Gold linker only? Can we enable this by default?
+ # linker_flag: "-Wl,--warn-execstack"
+ # linker_flag: "-Wl,--detect-odr-violations"
+
+ compilation_mode_flags {
+ mode: DBG
+ # Enable debug symbols.
+ compiler_flag: "-g"
+ }
+ compilation_mode_flags {
+ mode: OPT
+
+ # No debug symbols.
+ # Maybe we should enable https://gcc.gnu.org/wiki/DebugFission for opt or
+ # even generally? However, that can't happen here, as it requires special
+ # handling in Bazel.
+ compiler_flag: "-g0"
+
+ # Conservative choice for -O
+ # -O3 can increase binary size and even slow down the resulting binaries.
+ # Profile first and / or use FDO if you need better performance than this.
+ compiler_flag: "-O2"
+
+ # Disable assertions
+ compiler_flag: "-DNDEBUG"
+
+ # Removal of unused code and data at link time (can this increase binary size in some cases?).
+ compiler_flag: "-ffunction-sections"
+ compiler_flag: "-fdata-sections"
+ linker_flag: "-Wl,--gc-sections"
+ }
+ linking_mode_flags { mode: DYNAMIC }
+}
diff --git a/recipes-framework/tensorflow/files/yocto_compiler_configure.bzl b/recipes-framework/tensorflow/files/yocto_compiler_configure.bzl
new file mode 100644
index 0000000..19c7cd1
--- /dev/null
+++ b/recipes-framework/tensorflow/files/yocto_compiler_configure.bzl
@@ -0,0 +1,24 @@
+# -*- Python -*-
+"""Yocto rule for yocto compiler autoconfiguration."""
+
+def _tpl(repository_ctx, tpl, substitutions={}, out=None):
+ if not out:
+ out = tpl
+ repository_ctx.template(
+ out,
+ Label("//third_party/toolchains/yocto:%s.tpl" % tpl),
+ substitutions)
+
+
+def _yocto_compiler_configure_impl(repository_ctx):
+ _tpl(repository_ctx, "CROSSTOOL")
+ repository_ctx.symlink(repository_ctx.attr.build_file, "BUILD")
+
+
+yocto_compiler_configure = repository_rule(
+ implementation = _yocto_compiler_configure_impl,
+ attrs = {
+ "remote_config_repo": attr.string(mandatory = False, default =""),
+ "build_file": attr.label(),
+ },
+)
--
2.8.1
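The `_tpl` helper in `yocto_compiler_configure.bzl` above expands `CROSSTOOL.tpl` through Bazel's `repository_ctx.template`, which substitutes placeholder keys with concrete values. A minimal Python sketch of that placeholder expansion (the template text and substitution values here are illustrative, not taken from the layer):

```python
def expand_template(template_text, substitutions):
    """Mimic repository_ctx.template: replace each placeholder key
    in the template text with its substitution value."""
    out = template_text
    for key, value in substitutions.items():
        out = out.replace(key, value)
    return out

# Illustrative fragment in the style of CROSSTOOL.tpl
crosstool = 'cxx_builtin_include_directory: "%%YOCTO_COMPILER_PATH%%/recipe-sysroot-native/usr/include"'
expanded = expand_template(
    crosstool,
    {"%%YOCTO_COMPILER_PATH%%": "/build/tmp/output-base/external/yocto_compiler"},
)
print(expanded)
```

The real rule performs this expansion at Bazel repository-setup time, writing the result out as `CROSSTOOL` inside the external repository.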
* [meta-tensorflow][PATCH 08/13] tensorboard: add version 1.12.2
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (6 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 07/13] Customize Yocto toolchain for cross compiling Hongxu Jia
@ 2019-02-21 11:37 ` Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 09/13] tensorflow: add version 1.13.0 Hongxu Jia
` (7 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
.../tensorboard/0001-customize-for-Yocto.patch | 128 +++++++++++++++++++++
recipes-framework/tensorflow/tensorboard_1.12.2.bb | 62 ++++++++++
2 files changed, 190 insertions(+)
create mode 100644 recipes-framework/tensorflow/tensorboard/0001-customize-for-Yocto.patch
create mode 100644 recipes-framework/tensorflow/tensorboard_1.12.2.bb
diff --git a/recipes-framework/tensorflow/tensorboard/0001-customize-for-Yocto.patch b/recipes-framework/tensorflow/tensorboard/0001-customize-for-Yocto.patch
new file mode 100644
index 0000000..1f0b309
--- /dev/null
+++ b/recipes-framework/tensorflow/tensorboard/0001-customize-for-Yocto.patch
@@ -0,0 +1,128 @@
+From 3834b8ecb55ebf2527aaa2502d9030460882931c Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 31 Jan 2019 22:24:54 +0800
+Subject: [PATCH] customize for Yocto
+
+- Remove virtualenv/pip/bdist_wheel calling which Yocto does not support
+
+- Add Yocto toolchain to support cross compiling
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ WORKSPACE | 6 ++++++
+ tensorboard/pip_package/build_pip_package.sh | 27 +++------------------------
+ third_party/workspace.bzl | 8 ++++++++
+ 3 files changed, 17 insertions(+), 24 deletions(-)
+
+diff --git a/WORKSPACE b/WORKSPACE
+index 8ab70cc..0c18f6f 100644
+--- a/WORKSPACE
++++ b/WORKSPACE
+@@ -1,5 +1,11 @@
+ workspace(name = "org_tensorflow_tensorboard")
+
++new_local_repository(
++ name = "yocto_compiler",
++ path = "%%WORKDIR%%",
++ build_file = "BUILD.yocto_compiler",
++)
++
+ load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
+
+ # Needed as a transitive dependency of rules_webtesting below.
+diff --git a/tensorboard/pip_package/build_pip_package.sh b/tensorboard/pip_package/build_pip_package.sh
+index 754fa83..e473f51 100755
+--- a/tensorboard/pip_package/build_pip_package.sh
++++ b/tensorboard/pip_package/build_pip_package.sh
+@@ -23,7 +23,7 @@ else
+ sedi="sed -i"
+ fi
+
+-run_smoke_test=1
++run_smoke_test=0
+ while [ "$#" -gt 0 ]; do
+ case "$1" in
+ "--no-smoke")
+@@ -75,27 +75,20 @@ command -v curl >/dev/null
+ command -v perl >/dev/null
+ command -v python2 >/dev/null
+ command -v python3 >/dev/null
+-command -v virtualenv >/dev/null
+ [ -d "${RUNFILES}" ]
+
+-dest=/tmp/tensorboard
++dest=${DESTDIR}
+ if [ ! -e $dest ]; then
+- mkdir $dest
++ mkdir -p $dest
+ else
+ dest="$(mktemp -d -p /tmp -t tensorboard-pip.XXXXXXXXXX)"
+ fi
+ cd "${dest}"
+
+ cp -LR "${RUNFILES}/org_tensorflow_tensorboard/tensorboard" .
+-mv -f "tensorboard/pip_package/LICENSE" .
+-mv -f "tensorboard/pip_package/MANIFEST.in" .
+-mv -f "tensorboard/pip_package/README.rst" .
+-mv -f "tensorboard/pip_package/setup.cfg" .
+-mv -f "tensorboard/pip_package/setup.py" .
+ rm -rf tensorboard/pip_package
+
+ rm -f tensorboard/tensorboard # bazel py_binary sh wrapper
+-chmod -x LICENSE # bazel symlinks confuse cp
+ find . -name __init__.py | xargs chmod -x # which goes for all genfiles
+
+ mkdir -p tensorboard/_vendor
+@@ -117,21 +110,7 @@ find tensorboard -name \*.py |
+ s/from tensorflow_serving/from tensorboard._vendor.tensorflow_serving/
+ '
+
+-virtualenv venv
+-export VIRTUAL_ENV=venv
+-export PATH="$PWD/venv/bin:${PATH}"
+-unset PYTHON_HOME
+-
+-# Require wheel for bdist_wheel command, and setuptools 36.2.0+ so that
+-# env markers are handled (https://github.com/pypa/setuptools/pull/1081)
+-pip install -qU wheel 'setuptools>=36.2.0'
+-
+-python setup.py bdist_wheel --python-tag py2 >/dev/null
+-python setup.py bdist_wheel --python-tag py3 >/dev/null
+-
+ if [ "$run_smoke_test" = 1 ]; then
+ smoke 2
+ smoke 3
+ fi
+-
+-ls -hal "$PWD/dist"
+diff --git a/third_party/workspace.bzl b/third_party/workspace.bzl
+index 083c441..24786f8 100644
+--- a/third_party/workspace.bzl
++++ b/third_party/workspace.bzl
+@@ -24,6 +24,7 @@ load("//third_party:polymer.bzl", "tensorboard_polymer_workspace")
+ load("//third_party:python.bzl", "tensorboard_python_workspace")
+ load("//third_party:js.bzl", "tensorboard_js_workspace")
+ load("//third_party:typings.bzl", "tensorboard_typings_workspace")
++load("//third_party/toolchains/yocto:yocto_compiler_configure.bzl", "yocto_compiler_configure")
+
+ def tensorboard_workspace():
+ tensorboard_fonts_workspace()
+@@ -32,6 +33,13 @@ def tensorboard_workspace():
+ tensorboard_typings_workspace()
+ tensorboard_js_workspace()
+
++ # Point //external/local_config_yocto_compiler to //external/yocto_compiler
++ yocto_compiler_configure(
++ name = "local_config_yocto_compiler",
++ build_file = str(Label("//third_party/toolchains/yocto:BUILD")),
++ remote_config_repo = "../yocto_compiler",
++ )
++
+ http_archive(
+ name = "com_google_protobuf_js",
+ strip_prefix = "protobuf-3.6.0/js",
+--
+2.7.4
+
diff --git a/recipes-framework/tensorflow/tensorboard_1.12.2.bb b/recipes-framework/tensorflow/tensorboard_1.12.2.bb
new file mode 100644
index 0000000..bb15b27
--- /dev/null
+++ b/recipes-framework/tensorflow/tensorboard_1.12.2.bb
@@ -0,0 +1,62 @@
+DESCRIPTION = "A suite of web applications for inspecting and understanding \
+your TensorFlow runs and graphs."
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=e74df23890b9521cc481e3348863e45d"
+
+SRC_URI = "git://github.com/tensorflow/tensorboard.git; \
+ file://0001-customize-for-Yocto.patch \
+ file://BUILD \
+ file://BUILD.yocto_compiler \
+ file://CROSSTOOL.tpl \
+ file://yocto_compiler_configure.bzl \
+ "
+SRCREV = "7194c7486a0c4d107322ffad102c1ca0fcc0fc24"
+S = "${WORKDIR}/git"
+
+RDEPENDS_${PN} += "python3 \
+ python3-numpy \
+ python3-protobuf \
+ python3-grpcio \
+ python3-werkzeug \
+ python3-six \
+ python3-markdown \
+"
+inherit python3native bazel
+
+do_configure_append () {
+ mkdir -p ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/BUILD ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/CROSSTOOL.tpl ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/yocto_compiler_configure.bzl ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/BUILD.yocto_compiler ${S}
+
+ CT_NAME=$(echo ${HOST_PREFIX} | rev | cut -c 2- | rev)
+ SED_COMMAND="s#%%CT_NAME%%#${CT_NAME}#g"
+ SED_COMMAND="${SED_COMMAND}; s#%%WORKDIR%%#${WORKDIR}#g"
+ SED_COMMAND="${SED_COMMAND}; s#%%YOCTO_COMPILER_PATH%%#${BAZEL_OUTPUTBASE_DIR}/external/yocto_compiler#g"
+
+ sed -i "${SED_COMMAND}" ${S}/BUILD.yocto_compiler \
+ ${S}/third_party/toolchains/yocto/CROSSTOOL.tpl \
+ ${S}/WORKSPACE
+}
+
+do_compile () {
+ unset CC
+ DESTDIR=${WORKDIR}/python-tensorboard \
+ ${STAGING_BINDIR_NATIVE}/bazel run \
+ --cpu=armeabi \
+ --subcommands --explain=${T}/explain.log \
+ --verbose_explanations --verbose_failures \
+ --crosstool_top=@local_config_yocto_compiler//:toolchain \
+ --verbose_failures \
+ //tensorboard/pip_package:build_pip_package
+
+ ${STAGING_BINDIR_NATIVE}/bazel shutdown
+}
+
+do_install () {
+ install -d ${D}${PYTHON_SITEPACKAGES_DIR}
+ cp -rf ${WORKDIR}/python-tensorboard/* ${D}${PYTHON_SITEPACKAGES_DIR}
+}
+
+FILES_${PN} += "${libdir}/*"
--
2.8.1
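In `do_configure_append` above, `CT_NAME` is derived from `${HOST_PREFIX}` by stripping the trailing dash (that is what `rev | cut -c 2- | rev` does), and the `%%...%%` placeholders are then replaced by the `sed` command chain. A Python equivalent of that logic, with illustrative values:

```python
def ct_name(host_prefix):
    """Equivalent of: echo ${HOST_PREFIX} | rev | cut -c 2- | rev
    i.e. drop the last character, the trailing dash of the triplet prefix."""
    return host_prefix[:-1]

def apply_substitutions(text, workdir, host_prefix):
    """Mirror of the SED_COMMAND chain applied to BUILD.yocto_compiler,
    CROSSTOOL.tpl and WORKSPACE in the recipe."""
    return (text
            .replace("%%CT_NAME%%", ct_name(host_prefix))
            .replace("%%WORKDIR%%", workdir))

template = 'path = "%%WORKDIR%%"  # toolchain prefix: %%CT_NAME%%'
print(apply_substitutions(template, "/build/work/tensorboard", "arm-poky-linux-gnueabi-"))
```

After substitution, Bazel's `new_local_repository` points at the recipe's `${WORKDIR}` and the CROSSTOOL names the cross tools with the correct triplet prefix.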
* [meta-tensorflow][PATCH 09/13] tensorflow: add version 1.13.0
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (7 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 08/13] tensorboard: add version 1.12.2 Hongxu Jia
@ 2019-02-21 11:37 ` Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 10/13] tensorflow: fix gcc internal compile error on qemuarm64 Hongxu Jia
` (6 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
...octo-toolchain-to-support-cross-compiling.patch | 108 +++++++++++++++
recipes-framework/tensorflow/tensorflow_1.13.0.bb | 154 +++++++++++++++++++++
2 files changed, 262 insertions(+)
create mode 100644 recipes-framework/tensorflow/files/0001-add-yocto-toolchain-to-support-cross-compiling.patch
create mode 100644 recipes-framework/tensorflow/tensorflow_1.13.0.bb
diff --git a/recipes-framework/tensorflow/files/0001-add-yocto-toolchain-to-support-cross-compiling.patch b/recipes-framework/tensorflow/files/0001-add-yocto-toolchain-to-support-cross-compiling.patch
new file mode 100644
index 0000000..5fa5f91
--- /dev/null
+++ b/recipes-framework/tensorflow/files/0001-add-yocto-toolchain-to-support-cross-compiling.patch
@@ -0,0 +1,108 @@
+From dd303f745d159a2359c81922a2171a409998a71d Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 31 Jan 2019 20:37:26 +0800
+Subject: [PATCH] add yocto toolchain to support cross compiling
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ WORKSPACE | 6 ++++++
+ tensorflow/BUILD | 9 +++++++++
+ tensorflow/workspace.bzl | 8 ++++++++
+ third_party/aws/BUILD.bazel | 3 +++
+ third_party/repo.bzl | 1 +
+ 5 files changed, 27 insertions(+)
+
+diff --git a/WORKSPACE b/WORKSPACE
+index 7057d3f..869c180 100644
+--- a/WORKSPACE
++++ b/WORKSPACE
+@@ -53,6 +53,12 @@ android_configure(name="local_config_android")
+ load("@local_config_android//:android.bzl", "android_workspace")
+ android_workspace()
+
++new_local_repository(
++ name = "yocto_compiler",
++ path = "%%WORKDIR%%",
++ build_file = "//:BUILD.yocto_compiler",
++)
++
+ # Please add all new TensorFlow dependencies in workspace.bzl.
+ tf_workspace()
+
+diff --git a/tensorflow/BUILD b/tensorflow/BUILD
+index 823ad8f..6270301 100644
+--- a/tensorflow/BUILD
++++ b/tensorflow/BUILD
+@@ -100,6 +100,15 @@ config_setting(
+ )
+
+ config_setting(
++ name = "yocto_armeabi",
++ values = {
++ "crosstool_top": "@local_config_yocto_compiler//:toolchain",
++ "cpu": "armeabi",
++ },
++ visibility = ["//visibility:public"],
++)
++
++config_setting(
+ name = "android_arm",
+ values = {
+ "crosstool_top": "//external:android/crosstool",
+diff --git a/tensorflow/workspace.bzl b/tensorflow/workspace.bzl
+index aefab03..12c6fab 100755
+--- a/tensorflow/workspace.bzl
++++ b/tensorflow/workspace.bzl
+@@ -12,6 +12,7 @@ load("//third_party/sycl:sycl_configure.bzl", "sycl_configure")
+ load("//third_party/systemlibs:syslibs_configure.bzl", "syslibs_configure")
+ load("//third_party/toolchains/clang6:repo.bzl", "clang6_configure")
+ load("//third_party/toolchains/cpus/arm:arm_compiler_configure.bzl", "arm_compiler_configure")
++load("//third_party/toolchains/yocto:yocto_compiler_configure.bzl", "yocto_compiler_configure")
+ load("//third_party:repo.bzl", "tf_http_archive")
+ load("//third_party/clang_toolchain:cc_configure_clang.bzl", "cc_download_clang_toolchain")
+ load("@io_bazel_rules_closure//closure/private:java_import_external.bzl", "java_import_external")
+@@ -76,6 +77,13 @@ def tf_workspace(path_prefix = "", tf_repo_name = ""):
+ remote_config_repo = "../arm_compiler",
+ )
+
++ # Point //external/local_config_yocto_compiler to //external/yocto_compiler
++ yocto_compiler_configure(
++ name = "local_config_yocto_compiler",
++ build_file = clean_dep("//third_party/toolchains/yocto:BUILD"),
++ remote_config_repo = "../yocto_compiler",
++ )
++
+ mkl_repository(
+ name = "mkl_linux",
+ build_file = clean_dep("//third_party/mkl:mkl.BUILD"),
+diff --git a/third_party/aws/BUILD.bazel b/third_party/aws/BUILD.bazel
+index 5426f79..b106b12 100644
+--- a/third_party/aws/BUILD.bazel
++++ b/third_party/aws/BUILD.bazel
+@@ -24,6 +24,9 @@ cc_library(
+ "@org_tensorflow//tensorflow:raspberry_pi_armeabi": glob([
+ "aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
+ ]),
++ "@org_tensorflow//tensorflow:yocto_armeabi": glob([
++ "aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
++ ]),
+ "//conditions:default": [],
+ }) + glob([
+ "aws-cpp-sdk-core/include/**/*.h",
+diff --git a/third_party/repo.bzl b/third_party/repo.bzl
+index bad6d20..9823cab 100644
+--- a/third_party/repo.bzl
++++ b/third_party/repo.bzl
+@@ -16,6 +16,7 @@
+
+ _SINGLE_URL_WHITELIST = depset([
+ "arm_compiler",
++ "yocto_compiler",
+ ])
+
+ def _is_windows(ctx):
+--
+2.7.4
+
diff --git a/recipes-framework/tensorflow/tensorflow_1.13.0.bb b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
new file mode 100644
index 0000000..33649ea
--- /dev/null
+++ b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
@@ -0,0 +1,154 @@
+DESCRIPTION = "TensorFlow C/C++ Libraries"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=01e86893010a1b87e69a213faa753ebd"
+
+DEPENDS = "bazel-native protobuf-native util-linux-native protobuf"
+SRCREV = "c8875cbb1341f6ca14dd0ec908f1dde7d67f7808"
+SRC_URI = "git://github.com/tensorflow/tensorflow.git;branch=r1.13 \
+ file://0001-add-yocto-toolchain-to-support-cross-compiling.patch \
+ file://0001-SyntaxError-around-async-keyword-on-Python-3.7.patch \
+ file://BUILD \
+ file://BUILD.yocto_compiler \
+ file://CROSSTOOL.tpl \
+ file://yocto_compiler_configure.bzl \
+ "
+S = "${WORKDIR}/git"
+
+DEPENDS += " \
+ python3 \
+ python3-numpy-native \
+ python3-keras-applications-native \
+ python3-keras-preprocessing-native \
+ python3-pip-native \
+ python3-wheel-native \
+"
+
+RDEPENDS_${PN} += " \
+ python3 \
+ python3-numpy \
+ python3-keras-applications \
+ python3-keras-preprocessing \
+ python3-protobuf \
+ python3-grpcio \
+ python3-absl \
+ python3-astor \
+ python3-gast \
+ python3-termcolor \
+ tensorboard \
+ tensorflow-estimator \
+"
+
+inherit python3native bazel
+
+export PYTHON_BIN_PATH="${PYTHON}"
+export PYTHON_LIB_PATH="${STAGING_DIR_NATIVE}${PYTHON_SITEPACKAGES_DIR}"
+
+do_configure_append () {
+ CROSSTOOL_PYTHON_INCLUDE_PATH="${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}"
+ install -d ${CROSSTOOL_PYTHON_INCLUDE_PATH}
+ mv ${CROSSTOOL_PYTHON_INCLUDE_PATH}/pyconfig.h ${CROSSTOOL_PYTHON_INCLUDE_PATH}/pyconfig-target.h
+
+ install -m 644 ${STAGING_INCDIR_NATIVE}/python${PYTHON_BASEVERSION}${PYTHON_ABI}/pyconfig.h \
+ ${CROSSTOOL_PYTHON_INCLUDE_PATH}/pyconfig-native.h
+
+ cat > ${CROSSTOOL_PYTHON_INCLUDE_PATH}/pyconfig.h <<ENDOF
+#if defined (_PYTHON_INCLUDE_TARGET)
+#include "pyconfig-target.h"
+#elif defined (_PYTHON_INCLUDE_NATIVE)
+#include "pyconfig-native.h"
+#else
+#error "_PYTHON_INCLUDE_TARGET or _PYTHON_INCLUDE_NATIVE is not defined"
+#endif // End of #if defined (_PYTHON_INCLUDE_TARGET)
+
+ENDOF
+
+ mkdir -p ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/BUILD ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/CROSSTOOL.tpl ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/yocto_compiler_configure.bzl ${S}/third_party/toolchains/yocto/
+ install -m 644 ${WORKDIR}/BUILD.yocto_compiler ${S}
+
+ CT_NAME=$(echo ${HOST_PREFIX} | rev | cut -c 2- | rev)
+ SED_COMMAND="s#%%CT_NAME%%#${CT_NAME}#g"
+ SED_COMMAND="${SED_COMMAND}; s#%%WORKDIR%%#${WORKDIR}#g"
+ SED_COMMAND="${SED_COMMAND}; s#%%YOCTO_COMPILER_PATH%%#${BAZEL_OUTPUTBASE_DIR}/external/yocto_compiler#g"
+
+ sed -i "${SED_COMMAND}" ${S}/BUILD.yocto_compiler \
+ ${S}/third_party/toolchains/yocto/CROSSTOOL.tpl \
+ ${S}/WORKSPACE
+
+ TF_NEED_CUDA=0 \
+ TF_NEED_OPENCL_SYCL=0 \
+ TF_NEED_OPENCL=0 \
+ TF_CUDA_CLANG=0 \
+ TF_DOWNLOAD_CLANG=0 \
+ TF_ENABLE_XLA=0 \
+ TF_NEED_MPI=0 \
+ TF_SET_ANDROID_WORKSPACE=0 \
+ ./configure
+}
+
+do_compile () {
+ unset CC
+ ${STAGING_BINDIR_NATIVE}/bazel build \
+ --config=monolithic \
+ -c opt \
+ --cpu=armeabi \
+ --subcommands --explain=${T}/explain.log \
+ --verbose_explanations --verbose_failures \
+ --crosstool_top=@local_config_yocto_compiler//:toolchain \
+ --verbose_failures \
+ //tensorflow:libtensorflow.so \
+ //tensorflow:libtensorflow_cc.so \
+ //tensorflow:libtensorflow_framework.so \
+ //tensorflow/tools/benchmark:benchmark_model \
+ //tensorflow/tools/pip_package:build_pip_package
+
+ ${STAGING_BINDIR_NATIVE}/bazel shutdown
+}
+
+do_install() {
+ install -d ${D}${libdir}
+ install -m 644 ${S}/bazel-bin/tensorflow/libtensorflow.so \
+ ${D}${libdir}
+ install -m 644 ${S}/bazel-bin/tensorflow/libtensorflow_cc.so \
+ ${D}${libdir}
+ install -m 644 ${S}/bazel-bin/tensorflow/libtensorflow_framework.so \
+ ${D}${libdir}
+
+ install -d ${D}${sbindir}
+ install -m 755 ${S}/bazel-bin/tensorflow/tools/benchmark/benchmark_model \
+ ${D}${sbindir}
+
+ export TMPDIR="${WORKDIR}"
+ echo "Generating pip package"
+ BDIST_OPTS="--universal" \
+ ${S}/bazel-bin/tensorflow/tools/pip_package/build_pip_package ${WORKDIR}
+
+ echo "Installing pip package"
+ install -d ${D}/${PYTHON_SITEPACKAGES_DIR}
+ ${STAGING_BINDIR_NATIVE}/pip3 install --disable-pip-version-check -v \
+ -t ${D}/${PYTHON_SITEPACKAGES_DIR} --no-cache-dir --no-deps \
+ ${WORKDIR}/tensorflow*.whl
+
+ (
+ cd ${D}${PYTHON_SITEPACKAGES_DIR}/bin;
+ for app in `ls`; do
+ sed -i "s:^'''exec' ${PYTHON} :'''exec' /usr/bin/python3 :g" $app
+ mv $app ${D}${sbindir}
+ done
+
+ )
+}
+
+FILES_${PN}-dev = ""
+INSANE_SKIP_${PN} += "dev-so \
+ "
+FILES_${PN} += "${libdir}/*"
+
+UNSUPPORTED_TARGET_ARCH = "powerpc"
+python __anonymous() {
+ target_arch = d.getVar("TARGET_ARCH")
+ if target_arch in d.getVar("UNSUPPORTED_TARGET_ARCH").split():
+ raise bb.parse.SkipPackage("TensorFlow does not support Target Arch '%s'" % target_arch)
+}
--
2.8.1
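The `pyconfig.h` generated in `do_configure_append` above dispatches between the native and target Python headers based on which macro the compiler flags define (`-D_PYTHON_INCLUDE_NATIVE` is injected via `CROSSTOOL.tpl`, so host-run Bazel tools see the native config). A small Python model of that preprocessor `#if`/`#elif`/`#error` chain:

```python
def select_pyconfig(defines):
    """Model the dispatch in the generated pyconfig.h wrapper:
    pick the target or native header, or fail like #error does."""
    if "_PYTHON_INCLUDE_TARGET" in defines:
        return "pyconfig-target.h"
    elif "_PYTHON_INCLUDE_NATIVE" in defines:
        return "pyconfig-native.h"
    raise ValueError(
        "_PYTHON_INCLUDE_TARGET or _PYTHON_INCLUDE_NATIVE is not defined")

print(select_pyconfig({"_PYTHON_INCLUDE_NATIVE"}))  # host-side compile
```

Failing loudly when neither macro is defined catches build rules that were not taught which Python config they should compile against.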
* [meta-tensorflow][PATCH 10/13] tensorflow: fix gcc internal compile error on qemuarm64
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (8 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 09/13] tensorflow: add version 1.13.0 Hongxu Jia
@ 2019-02-21 11:37 ` Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 11/13] tensorflow: support musl Hongxu Jia
` (5 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
...x-gcc-internal-compile-error-on-qemuarm64.patch | 64 ++++++++++++++++++++++
recipes-framework/tensorflow/tensorflow_1.13.0.bb | 1 +
2 files changed, 65 insertions(+)
create mode 100644 recipes-framework/tensorflow/files/0001-fix-gcc-internal-compile-error-on-qemuarm64.patch
diff --git a/recipes-framework/tensorflow/files/0001-fix-gcc-internal-compile-error-on-qemuarm64.patch b/recipes-framework/tensorflow/files/0001-fix-gcc-internal-compile-error-on-qemuarm64.patch
new file mode 100644
index 0000000..aca3de4
--- /dev/null
+++ b/recipes-framework/tensorflow/files/0001-fix-gcc-internal-compile-error-on-qemuarm64.patch
@@ -0,0 +1,64 @@
+From e9871369eee1d98652eaf1c7dcc6adaf72733f55 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 13 Feb 2019 20:58:17 -0500
+Subject: [PATCH] fix gcc internal compile error on qemuarm64
+
+Backport a fix from eigen upstream to fix the error.
+
+Upstream-Status: Inappropriate [oe specific]
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ tensorflow/workspace.bzl | 1 +
+ ...ling-workaround-on-architectures-with-SSE.patch | 28 ++++++++++++++++++++++
+ 2 files changed, 29 insertions(+)
+ create mode 100644 third_party/0001-enable-spilling-workaround-on-architectures-with-SSE.patch
+
+diff --git a/tensorflow/workspace.bzl b/tensorflow/workspace.bzl
+index 12c6fab..aa49190 100755
+--- a/tensorflow/workspace.bzl
++++ b/tensorflow/workspace.bzl
+@@ -144,6 +144,7 @@ def tf_workspace(path_prefix = "", tf_repo_name = ""):
+ tf_http_archive(
+ name = "eigen_archive",
+ build_file = clean_dep("//third_party:eigen.BUILD"),
++ patch_file = clean_dep("//third_party:0001-enable-spilling-workaround-on-architectures-with-SSE.patch"),
+ sha256 = "753fbb58d0a49b6bcbcfb126ebfa2e21fc97f7471529ba835a096008ce588d8a",
+ strip_prefix = "eigen-eigen-9f48e814419e",
+ urls = [
+diff --git a/third_party/0001-enable-spilling-workaround-on-architectures-with-SSE.patch b/third_party/0001-enable-spilling-workaround-on-architectures-with-SSE.patch
+new file mode 100644
+index 0000000..e3848bd
+--- /dev/null
++++ b/third_party/0001-enable-spilling-workaround-on-architectures-with-SSE.patch
+@@ -0,0 +1,28 @@
++From c1b4d0195674d4196683d4988d774e74e3cc291a Mon Sep 17 00:00:00 2001
++From: Gael Guennebaud <g.gael@free.fr>
++Date: Mon, 10 Dec 2018 23:22:44 +0100
++Subject: [PATCH] enable spilling workaround on architectures with SSE/AVX
++
++Upstream-Status: Backport [https://github.com/eigenteam/eigen-git-mirror.git]
++Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
++
++---
++ Eigen/src/Core/products/GeneralBlockPanelKernel.h | 2 +-
++ 1 file changed, 1 insertion(+), 1 deletion(-)
++
++diff --git a/Eigen/src/Core/products/GeneralBlockPanelKernel.h b/Eigen/src/Core/products/GeneralBlockPanelKernel.h
++index 61521e2..b1e98b6 100644
++--- a/Eigen/src/Core/products/GeneralBlockPanelKernel.h
+++++ b/Eigen/src/Core/products/GeneralBlockPanelKernel.h
++@@ -1391,7 +1391,7 @@ void gebp_kernel<LhsScalar,RhsScalar,Index,DataMapper,mr,nr,ConjugateLhs,Conjuga
++
++ // NOTE: the begin/end asm comments below work around bug 935!
++ // but they are not enough for gcc>=6 without FMA (bug 1637)
++- #if EIGEN_GNUC_AT_LEAST(6,0)
+++ #if EIGEN_GNUC_AT_LEAST(6,0) && defined(EIGEN_VECTORIZE_SSE)
++ #define EIGEN_GEBP_2PX4_SPILLING_WORKAROUND __asm__ ("" : [a0] "+rm" (A0),[a1] "+rm" (A1));
++ #else
++ #define EIGEN_GEBP_2PX4_SPILLING_WORKAROUND
++--
++2.8.1
++
+--
+2.8.1
+
diff --git a/recipes-framework/tensorflow/tensorflow_1.13.0.bb b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
index 33649ea..9e493dc 100644
--- a/recipes-framework/tensorflow/tensorflow_1.13.0.bb
+++ b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
@@ -6,6 +6,7 @@ DEPENDS = "bazel-native protobuf-native util-linux-native protobuf"
SRCREV = "c8875cbb1341f6ca14dd0ec908f1dde7d67f7808"
SRC_URI = "git://github.com/tensorflow/tensorflow.git;branch=r1.13 \
file://0001-add-yocto-toolchain-to-support-cross-compiling.patch \
+ file://0001-fix-gcc-internal-compile-error-on-qemuarm64.patch \
file://0001-SyntaxError-around-async-keyword-on-Python-3.7.patch \
file://BUILD \
file://BUILD.yocto_compiler \
--
2.8.1
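The backported eigen fix narrows the register-spilling workaround to toolchains that actually vectorize with SSE, which is what avoids the gcc internal compiler error on aarch64. The amended guard condition can be modelled as a simple predicate (a sketch of the preprocessor test, not eigen code):

```python
def spilling_workaround_enabled(gcc_version, vectorize_sse):
    """Mirror of: #if EIGEN_GNUC_AT_LEAST(6,0) && defined(EIGEN_VECTORIZE_SSE)
    gcc_version is a (major, minor) tuple; vectorize_sse models
    whether EIGEN_VECTORIZE_SSE is defined for the target."""
    return gcc_version >= (6, 0) and vectorize_sse

print(spilling_workaround_enabled((8, 2), True))   # x86-64 host: workaround on
print(spilling_workaround_enabled((8, 2), False))  # qemuarm64 target: off
```

Before the fix, the inline-asm workaround was enabled for any gcc >= 6, including ARM targets where it triggered the ICE.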
* [meta-tensorflow][PATCH 11/13] tensorflow: support musl
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (9 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 10/13] tensorflow: fix gcc internal compile error on qemuarm64 Hongxu Jia
@ 2019-02-21 11:37 ` Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 12/13] build tensorflow-native and tensorflow in order Hongxu Jia
` (4 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Build fails looking for `execinfo.h` when building against musl
|In file included from ./tensorflow/core/platform/stacktrace.h:26,
| from tensorflow/core/platform/stacktrace_handler.cc:34:
|./tensorflow/core/platform/default/stacktrace.h:27:10: fatal error:
execinfo.h: No such file or directory
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
.../tensorflow/files/0001-support-musl.patch | 49 ++++++++++++++++++++++
recipes-framework/tensorflow/tensorflow_1.13.0.bb | 1 +
2 files changed, 50 insertions(+)
create mode 100644 recipes-framework/tensorflow/files/0001-support-musl.patch
diff --git a/recipes-framework/tensorflow/files/0001-support-musl.patch b/recipes-framework/tensorflow/files/0001-support-musl.patch
new file mode 100644
index 0000000..f76041b
--- /dev/null
+++ b/recipes-framework/tensorflow/files/0001-support-musl.patch
@@ -0,0 +1,49 @@
+From 02e58aa624aa6c330984474b9119c6b29a1ed77d Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 14 Feb 2019 10:26:27 -0500
+Subject: [PATCH] support musl
+
+Build fails looking for `execinfo.h` when building against musl
+|In file included from ./tensorflow/core/platform/stacktrace.h:26,
+| from tensorflow/core/platform/stacktrace_handler.cc:34:
+|./tensorflow/core/platform/default/stacktrace.h:27:10: fatal error:
+execinfo.h: No such file or directory
+
+Upstream-Status: Pending
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ tensorflow/core/platform/default/stacktrace.h | 3 ++-
+ tensorflow/core/platform/stacktrace_handler.cc | 3 ++-
+ 2 files changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/tensorflow/core/platform/default/stacktrace.h b/tensorflow/core/platform/default/stacktrace.h
+index c8e297f..8fecf05 100644
+--- a/tensorflow/core/platform/default/stacktrace.h
++++ b/tensorflow/core/platform/default/stacktrace.h
+@@ -18,7 +18,8 @@ limitations under the License.
+
+ #include "tensorflow/core/platform/platform.h"
+ #if !defined(IS_MOBILE_PLATFORM) && defined(PLATFORM_POSIX) && \
+- (defined(__clang__) || defined(__GNUC__))
++ (defined(__clang__) || defined(__GNUC__)) && \
++ defined(__GLIBC__)
+ #define TF_GENERATE_BACKTRACE
+ #endif
+
+diff --git a/tensorflow/core/platform/stacktrace_handler.cc b/tensorflow/core/platform/stacktrace_handler.cc
+index ff31c97..41d62f7 100644
+--- a/tensorflow/core/platform/stacktrace_handler.cc
++++ b/tensorflow/core/platform/stacktrace_handler.cc
+@@ -16,7 +16,8 @@ limitations under the License.
+ #include "tensorflow/core/platform/platform.h"
+
+ #if !defined(PLATFORM_GOOGLE) && !defined(IS_MOBILE_PLATFORM) && \
+- defined(PLATFORM_POSIX) && (defined(__clang__) || defined(__GNUC__))
++ defined(PLATFORM_POSIX) && (defined(__clang__) || defined(__GNUC__)) && \
++ defined(__GLIBC__)
+ #define TF_GENERATE_STACKTRACE
+ #endif
+
+--
+2.8.1
+
diff --git a/recipes-framework/tensorflow/tensorflow_1.13.0.bb b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
index 9e493dc..24986f5 100644
--- a/recipes-framework/tensorflow/tensorflow_1.13.0.bb
+++ b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
@@ -8,6 +8,7 @@ SRC_URI = "git://github.com/tensorflow/tensorflow.git;branch=r1.13 \
file://0001-add-yocto-toolchain-to-support-cross-compiling.patch \
file://0001-fix-gcc-internal-compile-error-on-qemuarm64.patch \
file://0001-SyntaxError-around-async-keyword-on-Python-3.7.patch \
+ file://0001-support-musl.patch \
file://BUILD \
file://BUILD.yocto_compiler \
file://CROSSTOOL.tpl \
--
2.8.1
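The musl patch keys stacktrace support on `__GLIBC__`, since `execinfo.h` (with `backtrace`/`backtrace_symbols`) is a glibc extension that musl does not provide. The amended guard can be modelled in Python (a sketch of the preprocessor condition, not TensorFlow code):

```python
def tf_generate_backtrace(defines):
    """Mirror of the guard in tensorflow/core/platform/default/stacktrace.h:
    POSIX, non-mobile, gcc or clang, and (after the patch) glibc only."""
    return bool("IS_MOBILE_PLATFORM" not in defines
                and "PLATFORM_POSIX" in defines
                and ({"__clang__", "__GNUC__"} & defines)
                and "__GLIBC__" in defines)

glibc_target = {"PLATFORM_POSIX", "__GNUC__", "__GLIBC__"}
musl_target = {"PLATFORM_POSIX", "__GNUC__"}
print(tf_generate_backtrace(glibc_target), tf_generate_backtrace(musl_target))
```

On musl the guard is now false, so the `execinfo.h` include is never reached and the build proceeds without backtrace support.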
* [meta-tensorflow][PATCH 12/13] build tensorflow-native and tensorflow in order
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (10 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 11/13] tensorflow: support musl Hongxu Jia
@ 2019-02-21 11:37 ` Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 13/13] add README Hongxu Jia
` (3 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Add tensorflow-native to tensorflow's DEPENDS. TensorFlow does not
actually require tensorflow-native at build time; the dependency only
keeps the two do_compile tasks from running at the same time, since
the Bazel build system copes poorly with parallel invocations (builds
become very slow).
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
recipes-framework/tensorflow/tensorflow_1.13.0.bb | 1 +
1 file changed, 1 insertion(+)
diff --git a/recipes-framework/tensorflow/tensorflow_1.13.0.bb b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
index 24986f5..5d41f5a 100644
--- a/recipes-framework/tensorflow/tensorflow_1.13.0.bb
+++ b/recipes-framework/tensorflow/tensorflow_1.13.0.bb
@@ -23,6 +23,7 @@ DEPENDS += " \
python3-keras-preprocessing-native \
python3-pip-native \
python3-wheel-native \
+ tensorflow-native \
"
RDEPENDS_${PN} += " \
--
2.8.1
* [meta-tensorflow][PATCH 13/13] add README
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (11 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 12/13] build tensorflow-native and tensorflow in order Hongxu Jia
@ 2019-02-21 11:37 ` Hongxu Jia
2019-02-21 12:27 ` Review request 0/13: Contribute meta-tensorflow to Yocto Richard Purdie
` (2 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-21 11:37 UTC (permalink / raw)
To: richard.purdie, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
---
README | 170 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 170 insertions(+)
create mode 100644 README
diff --git a/README b/README
new file mode 100644
index 0000000..3da4e76
--- /dev/null
+++ b/README
@@ -0,0 +1,170 @@
+Introduction
+-----------
+TensorFlow is an open source software library for high performance numerical
+computation primarily used in machine learning. Its flexible architecture
+allows easy deployment of computation across a variety of types of platforms
+(CPUs, GPUs, TPUs), and a range of systems from single desktops to clusters
+of servers to mobile and edge devices.
+(https://www.tensorflow.org/)
+
+The build system of TensorFlow is Bazel (https://bazel.build/).
+
+This layer integrates TensorFlow into the OE/Yocto platform:
+- Integrate Google's Bazel into Yocto.
+- Add a Yocto toolchain for Bazel to support cross compiling.
+- Replace the Python package system (pip/wheel) with the Yocto package system (rpm/deb/ipk).
+
+Dependencies
+------------
+URI: git://github.com/openembedded/openembedded-core.git
+branch: master
+revision: HEAD
+
+URI: git://github.com/openembedded/bitbake.git
+branch: master
+revision: HEAD
+
+URI: git://github.com/openembedded/meta-openembedded.git
+layers: meta-python, meta-oe
+branch: master
+revision: HEAD
+
+URI: git://git.yoctoproject.org/meta-java
+branch: master
+revision: HEAD
+
+Source code
+-----------
+git://git.yoctoproject.org/meta-tensorflow (TODO, github first?)
+
+Maintenance
+-----------
+Maintainers: Hongxu Jia <jiahongxujia@163.com> | <hongxu.jia@windriver.com>
+
+Contributing
+-----------
+Contributions and patches can be sent to the Yocto Project mailing
+list: yocto@yoctoproject.org
+
+When sending patches please take a look at the contribution guide available
+here: https://wiki.yoctoproject.org/wiki/Contribution_Guidelines
+
+example:
+git send-email -1 -M --to yocto@yoctoproject.org --subject-prefix=meta-tensorflow][PATCH
+
+Limitation
+-----------
+- Bazel builds take a lot of time, since Bazel, like bitbake, has its own rules
+  and builds everything from scratch. Currently Bazel cannot reuse Yocto
+  DEPENDS/RDEPENDS.
+
+- Offline builds are not supported, since the Bazel build system fetches
+  archive tarballs over the network.
+
+- Although TensorFlow builds successfully on qemuarm, qemuarm64, qemumips,
+  qemumips64, qemux86 and qemux86-64, only qemux86-64 with kvm is used for
+  runtime testing, in order to run the TensorFlow cases in a reasonable time.
+
+- 32-bit PowerPC (qemuppc) is not supported, since BoringSSL does not support it.
+  (BoringSSL is a fork of OpenSSL used to implement cryptography and TLS across
+  most of Google's products.)
+
+Future plan
+-----------
+- Support offline builds, in which the Bazel build system fetches archive
+  tarballs from the Yocto download mirror.
+
+- Support more BSPs, such as Atom, BeagleBoard and Raspberry Pi.
+
+- Introduce more machine learning cases to meta-tensorflow.
+
+- Recipe maintenance and upgrades.
+
+Build and run
+-----------
+1. Clone the repositories
+$ mkdir <ts-project>
+$ cd <ts-project>
+$ git clone git://git.yoctoproject.org/meta-tensorflow
+$ git clone git://git.yoctoproject.org/meta-java
+$ git clone git://git.openembedded.org/meta-openembedded
+$ git clone git://git.openembedded.org/openembedded-core oe-core
+$ cd oe-core
+$ git clone git://git.openembedded.org/bitbake
+
+2. Prepare build
+$ . <ts-project>/oe-core/oe-init-build-env <build>
+
+# Build for qemux86-64, for which runqemu supports kvm.
+$ echo 'MACHINE = "qemux86-64"' >> conf/local.conf
+
+$ echo 'IMAGE_INSTALL_append = " tensorflow"' >> conf/local.conf
+
+Edit conf/bblayers.conf to include other layers
+BBLAYERS ?= " \
+ <ts-project>/oe-core/meta \
+ <ts-project>/meta-openembedded/meta-python \
+ <ts-project>/meta-openembedded/meta-oe \
+ <ts-project>/meta-java \
+ <ts-project>/meta-tensorflow \
+"
+
+
+3. Build image in <build>.
+$ bitbake core-image-minimal
+
+4. Start qemu with slirp + kvm + 5GB memory:
+$ runqemu qemux86-64 core-image-minimal slirp kvm qemuparams="-m 5120"
+
+5. Verify the install
+root@qemux86-64:~# python3 -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
+tf.Tensor(-604.65454, shape=(), dtype=float32)
+
+6. Run tutorial case
+https://www.tensorflow.org/tutorials
+
+root@qemux86-64:~# cat >code.py <<ENDOF
+import tensorflow as tf
+mnist = tf.keras.datasets.mnist
+
+(x_train, y_train),(x_test, y_test) = mnist.load_data()
+x_train, x_test = x_train / 255.0, x_test / 255.0
+
+model = tf.keras.models.Sequential([
+ tf.keras.layers.Flatten(input_shape=(28, 28)),
+ tf.keras.layers.Dense(512, activation=tf.nn.relu),
+ tf.keras.layers.Dropout(0.2),
+ tf.keras.layers.Dense(10, activation=tf.nn.softmax)
+])
+model.compile(optimizer='adam',
+ loss='sparse_categorical_crossentropy',
+ metrics=['accuracy'])
+
+model.fit(x_train, y_train, epochs=5)
+model.evaluate(x_test, y_test)
+ENDOF
+
+root@qemux86-64:~# python3 ./code.py
+Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
+11493376/11490434 [==============================] - 7s 1us/step
+Instructions for updating:
+Colocations handled automatically by placer.
+Instructions for updating:
+Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
+Epoch 1/5
+60000/60000 [==============================] - 27s 449us/sample - loss: 0.2211 - acc: 0.9346
+Epoch 2/5
+60000/60000 [==============================] - 24s 408us/sample - loss: 0.0969 - acc: 0.9702
+Epoch 3/5
+60000/60000 [==============================] - 26s 439us/sample - loss: 0.0694 - acc: 0.9780
+Epoch 4/5
+60000/60000 [==============================] - 23s 390us/sample - loss: 0.0540 - acc: 0.9832
+Epoch 5/5
+60000/60000 [==============================] - 24s 399us/sample - loss: 0.0447 - acc: 0.9851
+10000/10000 [==============================] - 1s 91us/sample - loss: 0.0700 - acc: 0.9782
+
+License
+-------
+
+All metadata is MIT licensed unless otherwise stated. Source code included
+in tree for individual recipes is under the LICENSE stated in each recipe
+(.bb file) unless otherwise stated.
--
2.8.1
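[Editorial aside on the README's verification step 5 above: the one-liner
sums a 1000x1000 tensor of standard-normal samples, so the printed value is
itself a draw from roughly N(0, 1000), and a magnitude like -604.65 is
entirely expected. A plain-Python sketch of the same computation follows;
it is illustrative only, requires no TensorFlow, and the function name
`reduce_sum_normal` is made up here.]

```python
import random

def reduce_sum_normal(rows, cols, seed=0):
    # Mimics tf.reduce_sum(tf.random_normal([rows, cols])) using the
    # stdlib RNG: sum rows*cols independent standard-normal samples.
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(rows * cols))

total = reduce_sum_normal(1000, 1000)
# The sum of 10**6 N(0, 1) samples is distributed N(0, 1000), so
# |total| should almost surely stay below 10 sigma (10000).
print(total)
```

[Any value in the low hundreds, positive or negative, indicates a healthy
random-normal source, which is all the README's check is meant to show.]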
^ permalink raw reply related [flat|nested] 30+ messages in thread
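[Editorial note on reading the tutorial log in the README above: the `acc`
column is sparse categorical accuracy, i.e. the fraction of samples whose
argmax over the class probabilities equals the integer label. A minimal
hand-rolled sketch of that metric follows; it is plain Python with
hypothetical toy data (3 samples, 3 classes), not TensorFlow's own
implementation.]

```python
def sparse_categorical_accuracy(y_true, y_prob):
    # Fraction of samples where argmax(probabilities) == integer label.
    hits = 0
    for label, probs in zip(y_true, y_prob):
        predicted = max(range(len(probs)), key=lambda i: probs[i])
        if predicted == label:
            hits += 1
    return hits / len(y_true)

# Hypothetical toy data: two of the three predictions are correct.
labels = [0, 1, 2]
probabilities = [
    [0.8, 0.1, 0.1],  # argmax 0 -> matches label 0
    [0.2, 0.7, 0.1],  # argmax 1 -> matches label 1
    [0.5, 0.3, 0.2],  # argmax 0 -> misses label 2
]
acc = sparse_categorical_accuracy(labels, probabilities)
print(acc)  # 2 correct out of 3
```

[In the MNIST log, the same computation runs over 60000 training or 10000
test samples with 10 classes, giving figures such as acc: 0.9782.]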
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (12 preceding siblings ...)
2019-02-21 11:37 ` [meta-tensorflow][PATCH 13/13] add README Hongxu Jia
@ 2019-02-21 12:27 ` Richard Purdie
2019-02-21 18:39 ` Tim Orling
2019-02-22 16:51 ` Stephen Lawrence
2019-11-20 14:37 ` Mauro Ziliani
15 siblings, 1 reply; 30+ messages in thread
From: Richard Purdie @ 2019-02-21 12:27 UTC (permalink / raw)
To: Hongxu Jia, mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Hi Hongxu,
On Thu, 2019-02-21 at 06:37 -0500, Hongxu Jia wrote:
> Currently AI on IoT edge becomes more and more popular, but there is
> no
> machine learning framework in Yocto/OE. With the support of Eric
> <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com>
> and Randy <randy.macleod@windriver.com>, after two months effort,
> I've
> integrated TensorFlow to Yocto.
>
> Now, I contribute the patches to Yocto for review, and apply for
> creating
> a layer named `meta-tensorflow' on Yocto.
>
> For test convenient, there is a fork on github:
> https://github.com/hongxu-jia/meta-tensorflow
>
> BTW, I have contributed other 11 fundamental recipes to meta-
> openembedded
> and all of them have been merged to master branch.
>
> Please no hesitate to share your suggestion.
I like this a lot, thanks for working on and sharing it!
I had a quick glance through the patches and I didn't see anything that
concerned me. I don't have much knowledge about TensorFlow. I think
this would make a great addition to git.yoctoproject.org and would love
to see it there.
Thanks again, I'm pleased to see something like this!
Cheers,
Richard
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-21 12:27 ` Review request 0/13: Contribute meta-tensorflow to Yocto Richard Purdie
@ 2019-02-21 18:39 ` Tim Orling
0 siblings, 0 replies; 30+ messages in thread
From: Tim Orling @ 2019-02-21 18:39 UTC (permalink / raw)
To: Hongxu Jia; +Cc: lpd-cdc-core-dev, Zhangle.Yang, paul.eggleton, yocto
On Thu, Feb 21, 2019 at 4:27 AM Richard Purdie <
richard.purdie@linuxfoundation.org> wrote:
> Hi Hongxu,
>
> On Thu, 2019-02-21 at 06:37 -0500, Hongxu Jia wrote:
> > Currently AI on IoT edge becomes more and more popular, but there is
> > no
> > machine learning framework in Yocto/OE. With the support of Eric
> > <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com>
> > and Randy <randy.macleod@windriver.com>, after two months effort,
> > I've
> > integrated TensorFlow to Yocto.
> >
> > Now, I contribute the patches to Yocto for review, and apply for
> > creating
> > a layer named `meta-tensorflow' on Yocto.
> >
> > For test convenient, there is a fork on github:
> > https://github.com/hongxu-jia/meta-tensorflow
> >
> > BTW, I have contributed other 11 fundamental recipes to meta-
> > openembedded
> > and all of them have been merged to master branch.
> >
> > Please no hesitate to share your suggestion.
>
> I like this a lot, thanks for working on and sharing it!
>
> I had a quick glance through the patches and I didn't see anything that
> concerned me. I don't have much knowledge about tensor flow. I think
> this would make a great addition to git.yoctoproject.org and would love
> to see it there.
>
> Thanks again, I'm pleased to see something like this!
>
This is great. I had started writing a bazel recipe around the time of the
last ELC/YP DevDay, but got blocked. I’ll take a look and give it a spin.
Thank you for working on this.
—Tim
>
> Cheers,
>
> Richard
>
> --
> _______________________________________________
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (13 preceding siblings ...)
2019-02-21 12:27 ` Review request 0/13: Contribute meta-tensorflow to Yocto Richard Purdie
@ 2019-02-22 16:51 ` Stephen Lawrence
2019-02-22 17:22 ` Hongxu Jia
` (2 more replies)
2019-11-20 14:37 ` Mauro Ziliani
15 siblings, 3 replies; 30+ messages in thread
From: Stephen Lawrence @ 2019-02-22 16:51 UTC (permalink / raw)
To: Hongxu Jia, richard.purdie, mhalstead, ross.burton, raj.khem,
paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Hi Hongxu,
> -----Original Message-----
> From: yocto-bounces@yoctoproject.org <yocto-bounces@yoctoproject.org> On Behalf
> Of Hongxu Jia
> Sent: 21 February 2019 11:37
> To: richard.purdie@linuxfoundation.org; mhalstead@linuxfoundation.org;
> ross.burton@intel.com; raj.khem@gmail.com; paul.eggleton@linux.intel.com;
> yocto@yoctoproject.org
> Cc: lpd-cdc-core-dev@windriver.com; Zhangle.Yang@windriver.com
> Subject: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto
>
> Hi RP and Yocto folks,
>
> Currently AI on IoT edge becomes more and more popular, but there is no
> machine learning framework in Yocto/OE. With the support of Eric
> <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com>
> and Randy <randy.macleod@windriver.com>, after two months effort, I've
> integrated TensorFlow to Yocto.
Good work.
You might be interested in the yocto layers for tensorflow, tensorflow-lite and caffe2
on github here [1]. I'm not part of the team that developed that work but I forwarded
your announcement to them. Perhaps there is the opportunity for some collaboration
on the platform independent parts. The maintainer details are in the readme.
[1] https://github.com/renesas-rz/meta-renesas-ai
The layers were developed for the industrial focused Renesas RZ/G1 platforms.
Regards
Steve
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-22 16:51 ` Stephen Lawrence
@ 2019-02-22 17:22 ` Hongxu Jia
2019-02-22 17:32 ` Khem Raj
2019-02-22 20:49 ` Manjukumar Harthikote Matha
2 siblings, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-22 17:22 UTC (permalink / raw)
To: Stephen Lawrence, richard.purdie, mhalstead, ross.burton,
raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
On 2019/2/23 12:51 AM, Stephen Lawrence wrote:
> Good work.
>
> You might be interested in the yocto layers for tensorflow, tensorflow-lite and caffe2
> on github here [1]. I'm not part of the team that developed that work but I forwarded
> your announcement to them. Perhaps there is the opportunity for some collaboration
> on the platform independent parts. The maintainer details are in the readme.
Yes, I know meta-renesas-ai; my first attempt at building TensorFlow was
based on it, but it failed. I am afraid its version is old and unmaintained.
So I chose to follow the upstream cross-compile instructions, `Build from
source for the Raspberry Pi':
https://www.tensorflow.org/install/source_rpi
For tensorflow-lite, I am afraid the support in meta-renesas-ai is not
complete. tensorflow-lite is on my TODO list, but currently I am not sure
what to build (maybe the C/C++ framework) and how to use it (use cases).
For caffe2, that is another story (`pytorch'); I am afraid I don't have the
resources (time) to focus on it.
//Hongxu
> [1]https://github.com/renesas-rz/meta-renesas-ai
>
> The layers were developed for the industrial focused Renesas RZ/G1 platforms.
>
> Regards
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-22 16:51 ` Stephen Lawrence
2019-02-22 17:22 ` Hongxu Jia
@ 2019-02-22 17:32 ` Khem Raj
2019-02-22 20:49 ` Manjukumar Harthikote Matha
2 siblings, 0 replies; 30+ messages in thread
From: Khem Raj @ 2019-02-22 17:32 UTC (permalink / raw)
To: Stephen Lawrence; +Cc: lpd-cdc-core-dev, Zhangle.Yang, paul.eggleton, yocto
On Fri, Feb 22, 2019 at 8:51 AM Stephen Lawrence
<stephen.lawrence@renesas.com> wrote:
>
> Hi Hongxu,
>
> > -----Original Message-----
> > From: yocto-bounces@yoctoproject.org <yocto-bounces@yoctoproject.org> On Behalf
> > Of Hongxu Jia
> > Sent: 21 February 2019 11:37
> > To: richard.purdie@linuxfoundation.org; mhalstead@linuxfoundation.org;
> > ross.burton@intel.com; raj.khem@gmail.com; paul.eggleton@linux.intel.com;
> > yocto@yoctoproject.org
> > Cc: lpd-cdc-core-dev@windriver.com; Zhangle.Yang@windriver.com
> > Subject: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto
> >
> > Hi RP and Yocto folks,
> >
> > Currently AI on IoT edge becomes more and more popular, but there is no
> > machine learning framework in Yocto/OE. With the support of Eric
> > <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com>
> > and Randy <randy.macleod@windriver.com>, after two months effort, I've
> > integrated TensorFlow to Yocto.
>
> Good work.
>
> You might be interested in the yocto layers for tensorflow, tensorflow-lite and caffe2
> on github here [1]. I'm not part of the team that developed that work but I forwarded
> your announcement to them. Perhaps there is the opportunity for some collaboration
> on the platform independent parts. The maintainer details are in the readme.
>
> [1] https://github.com/renesas-rz/meta-renesas-ai
>
> The layers were developed for the industrial focused Renesas RZ/G1 platforms.
>
It would be great to cherry-pick goodies from these layers and maintain a
single layer which can be sustained and supports a wide variety of
platforms and distributions.
> Regards
>
> Steve
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-22 16:51 ` Stephen Lawrence
2019-02-22 17:22 ` Hongxu Jia
2019-02-22 17:32 ` Khem Raj
@ 2019-02-22 20:49 ` Manjukumar Harthikote Matha
2019-02-22 23:57 ` Hongxu Jia
2019-02-23 15:29 ` Richard Purdie
2 siblings, 2 replies; 30+ messages in thread
From: Manjukumar Harthikote Matha @ 2019-02-22 20:49 UTC (permalink / raw)
To: Stephen Lawrence, Hongxu Jia, richard.purdie, mhalstead,
ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
Hi Hongxu,
> -----Original Message-----
> From: yocto-bounces@yoctoproject.org [mailto:yocto-bounces@yoctoproject.org]
> On Behalf Of Stephen Lawrence
> Sent: Friday, February 22, 2019 8:52 AM
> To: Hongxu Jia <hongxu.jia@windriver.com>; richard.purdie@linuxfoundation.org;
> mhalstead@linuxfoundation.org; ross.burton@intel.com; raj.khem@gmail.com;
> paul.eggleton@linux.intel.com; yocto@yoctoproject.org
> Cc: lpd-cdc-core-dev@windriver.com; Zhangle.Yang@windriver.com
> Subject: Re: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto
>
> Hi Hongxu,
>
> > -----Original Message-----
> > From: yocto-bounces@yoctoproject.org <yocto-bounces@yoctoproject.org>
> > On Behalf Of Hongxu Jia
> > Sent: 21 February 2019 11:37
> > To: richard.purdie@linuxfoundation.org; mhalstead@linuxfoundation.org;
> > ross.burton@intel.com; raj.khem@gmail.com;
> > paul.eggleton@linux.intel.com; yocto@yoctoproject.org
> > Cc: lpd-cdc-core-dev@windriver.com; Zhangle.Yang@windriver.com
> > Subject: [yocto] Review request 0/13: Contribute meta-tensorflow to
> > Yocto
> >
> > Hi RP and Yocto folks,
> >
> > Currently AI on IoT edge becomes more and more popular, but there is
> > no machine learning framework in Yocto/OE. With the support of Eric
> > <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com> and
> > Randy <randy.macleod@windriver.com>, after two months effort, I've
> > integrated TensorFlow to Yocto.
>
> Good work.
>
> You might be interested in the yocto layers for tensorflow, tensorflow-lite and
> caffe2 on github here [1]. I'm not part of the team that developed that work but I
> forwarded your announcement to them. Perhaps there is the opportunity for some
> collaboration on the platform independent parts. The maintainer details are in the
> readme.
>
Thanks for the layer Hongxu. I agree with Steve, it would be good if you could collaborate with meta-renesas-ai and introduce the layer as meta-ai under meta-openembedded.
Thanks,
Manju
> [1] https://github.com/renesas-rz/meta-renesas-ai
>
> The layers were developed for the industrial focused Renesas RZ/G1 platforms.
>
> Regards
>
> Steve
> --
> _______________________________________________
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-22 20:49 ` Manjukumar Harthikote Matha
@ 2019-02-22 23:57 ` Hongxu Jia
2019-02-23 15:29 ` Richard Purdie
1 sibling, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-22 23:57 UTC (permalink / raw)
To: Manjukumar Harthikote Matha, Stephen Lawrence, richard.purdie,
mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
On 2019/2/23 4:49 AM, Manjukumar Harthikote Matha wrote:
> Hi Hongxu,
>
>> -----Original Message-----
>> From: yocto-bounces@yoctoproject.org [mailto:yocto-bounces@yoctoproject.org]
>> On Behalf Of Stephen Lawrence
>> Sent: Friday, February 22, 2019 8:52 AM
>> To: Hongxu Jia <hongxu.jia@windriver.com>; richard.purdie@linuxfoundation.org;
>> mhalstead@linuxfoundation.org; ross.burton@intel.com; raj.khem@gmail.com;
>> paul.eggleton@linux.intel.com; yocto@yoctoproject.org
>> Cc: lpd-cdc-core-dev@windriver.com; Zhangle.Yang@windriver.com
>> Subject: Re: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto
>>
>> Hi Hongxu,
>>
>>> -----Original Message-----
>>> From: yocto-bounces@yoctoproject.org <yocto-bounces@yoctoproject.org>
>>> On Behalf Of Hongxu Jia
>>> Sent: 21 February 2019 11:37
>>> To: richard.purdie@linuxfoundation.org; mhalstead@linuxfoundation.org;
>>> ross.burton@intel.com; raj.khem@gmail.com;
>>> paul.eggleton@linux.intel.com; yocto@yoctoproject.org
>>> Cc: lpd-cdc-core-dev@windriver.com; Zhangle.Yang@windriver.com
>>> Subject: [yocto] Review request 0/13: Contribute meta-tensorflow to
>>> Yocto
>>>
>>> Hi RP and Yocto folks,
>>>
>>> Currently AI on IoT edge becomes more and more popular, but there is
>>> no machine learning framework in Yocto/OE. With the support of Eric
>>> <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com> and
>>> Randy <randy.macleod@windriver.com>, after two months effort, I've
>>> integrated TensorFlow to Yocto.
>> Good work.
>>
>> You might be interested in the yocto layers for tensorflow, tensorflow-lite and
>> caffe2 on github here [1]. I'm not part of the team that developed that work but I
>> forwarded your announcement to them. Perhaps there is the opportunity for some
>> collaboration on the platform independent parts. The maintainer details are in the
>> readme.
>>
> Thanks for the layer Hongxu. I agree with Steve, it would be good if you could collaborate with meta-renesas-ai and introduce the layer as meta-ai under meta-openembedded.
Agreed, I will add it to my TODO list; more AI and machine learning
frameworks should be integrated into Yocto.
//Hongxu
> Thanks,
> Manju
>
>> [1] https://github.com/renesas-rz/meta-renesas-ai
>>
>> The layers were developed for the industrial focused Renesas RZ/G1 platforms.
>>
>> Regards
>>
>> Steve
>> --
>> _______________________________________________
>> yocto mailing list
>> yocto@yoctoproject.org
>> https://lists.yoctoproject.org/listinfo/yocto
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-22 20:49 ` Manjukumar Harthikote Matha
2019-02-22 23:57 ` Hongxu Jia
@ 2019-02-23 15:29 ` Richard Purdie
2019-02-23 17:04 ` Khem Raj
2019-02-24 5:40 ` Hongxu Jia
1 sibling, 2 replies; 30+ messages in thread
From: Richard Purdie @ 2019-02-23 15:29 UTC (permalink / raw)
To: Manjukumar Harthikote Matha, Stephen Lawrence, Hongxu Jia,
mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
On Fri, 2019-02-22 at 20:49 +0000, Manjukumar Harthikote Matha wrote:
> >
> > You might be interested in the yocto layers for tensorflow,
> > tensorflow-lite and
> > caffe2 on github here [1]. I'm not part of the team that developed
> > that work but I
> > forwarded your announcement to them. Perhaps there is the
> > opportunity for some
> > collaboration on the platform independent parts. The maintainer
> > details are in the
> > readme.
> >
>
> Thanks for the layer Hongxu. I agree with Steve, it would be good if
> you could collaborate with meta-renesas-ai and introduce the layer as
> meta-ai under meta-openembedded.
Please don't do the meta-openembedded part!
I believe that meta-oe is too large to be maintainable and that we need
a larger number of smaller layers.
Having tensorflow in its own layer, which has a specific purpose and its
own maintainers who understand it, is in my view much more desirable and
sustainable.
Cheers,
Richard
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-23 15:29 ` Richard Purdie
@ 2019-02-23 17:04 ` Khem Raj
2019-02-24 6:33 ` Hongxu Jia
2019-02-25 15:10 ` Stephen Lawrence
2019-02-24 5:40 ` Hongxu Jia
1 sibling, 2 replies; 30+ messages in thread
From: Khem Raj @ 2019-02-23 17:04 UTC (permalink / raw)
To: Richard Purdie
Cc: Manjukumar Harthikote Matha, lpd-cdc-core-dev, Zhangle.Yang,
paul.eggleton, yocto
On Sat, Feb 23, 2019 at 7:29 AM Richard Purdie
<richard.purdie@linuxfoundation.org> wrote:
>
> On Fri, 2019-02-22 at 20:49 +0000, Manjukumar Harthikote Matha wrote:
> > >
> > > You might be interested in the yocto layers for tensorflow,
> > > tensorflow-lite and
> > > caffe2 on github here [1]. I'm not part of the team that developed
> > > that work but I
> > > forwarded your announcement to them. Perhaps there is the
> > > opportunity for some
> > > collaboration on the platform independent parts. The maintainer
> > > details are in the
> > > readme.
> > >
> >
> > Thanks for the layer Hongxu. I agree with Steve, it would be good if
> > you could collaborate with meta-renesas-ai and introduce the layer as
> > meta-ai under meta-openembedded.
>
> Please don't do the meta-openembedded part!
>
I would agree not to make it a sub-layer under meta-openembedded, but it
can be hosted on the openembedded git infrastructure; I don't see much
problem with that, if that's the case.
> I believe that meta-oe is too large to be maintainable and that we need
> a larger number of smaller layers.
>
There is a fine balance to be had, as I have come to realize over the years,
but AI is large enough and segmented enough to have a layer of its own.
> Having tensorflow in its own layer which as a specific purpose and its
> specific maintainers who understand it is in my view much more
> desirable and sustainable.
I think it's a good idea to have the various AI infrastructures in one
layer, including tensorflow, unless we have a large enough dev community to
maintain each of them, so I like meta-ai conceptually.
>
> Cheers,
>
> Richard
>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-23 15:29 ` Richard Purdie
2019-02-23 17:04 ` Khem Raj
@ 2019-02-24 5:40 ` Hongxu Jia
1 sibling, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-24 5:40 UTC (permalink / raw)
To: Richard Purdie, Manjukumar Harthikote Matha, Stephen Lawrence,
mhalstead, ross.burton, raj.khem, paul.eggleton, yocto
Cc: lpd-cdc-core-dev, Zhangle.Yang
On 2019/2/23 11:29 PM, Richard Purdie wrote:
> Please don't do the meta-openembedded part!
OK, I can't agree more. For tensorflow, if we moved it to
meta-openembedded/meta-ai, we would have to move the dependent layer
`meta-java' into meta-openembedded as well, even though it is already a
standalone layer; otherwise meta-openembedded would depend on an outer
layer.
//Hongxu
> I believe that meta-oe is too large to be maintainable and that we need
> a larger number of smaller layers.
>
> Having tensorflow in its own layer which as a specific purpose and its
> specific maintainers who understand it is in my view much more
> desirable and sustainable.
>
> Cheers,
>
> Richard
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-23 17:04 ` Khem Raj
@ 2019-02-24 6:33 ` Hongxu Jia
2019-02-25 15:10 ` Stephen Lawrence
1 sibling, 0 replies; 30+ messages in thread
From: Hongxu Jia @ 2019-02-24 6:33 UTC (permalink / raw)
To: Khem Raj, Richard Purdie
Cc: Manjukumar Harthikote Matha, lpd-cdc-core-dev, Zhangle.Yang,
paul.eggleton, yocto
On 2019/2/24 1:04 AM, Khem Raj wrote:
> On Sat, Feb 23, 2019 at 7:29 AM Richard Purdie
> <richard.purdie@linuxfoundation.org> wrote:
>> On Fri, 2019-02-22 at 20:49 +0000, Manjukumar Harthikote Matha wrote:
>>>> You might be interested in the yocto layers for tensorflow,
>>>> tensorflow-lite and
>>>> caffe2 on github here [1]. I'm not part of the team that developed
>>>> that work but I
>>>> forwarded your announcement to them. Perhaps there is the
>>>> opportunity for some
>>>> collaboration on the platform independent parts. The maintainer
>>>> details are in the
>>>> readme.
>>>>
>>> Thanks for the layer Hongxu. I agree with Steve, it would be good if
>>> you could collaborate with meta-renesas-ai and introduce the layer as
>>> meta-ai under meta-openembedded.
>> Please don't do the meta-openembedded part!
>>
> I would agree to not make it a sub layer under meta-openembedded, but it can
> be hosted on openembedded git infrastructure, I dont see much problem with that
> if thats the case
>
>> I believe that meta-oe is too large to be maintainable and that we need
>> a larger number of smaller layers.
>>
> There is a fine balance to be had, that I have come to realize over years now
> but AI is large enough and segmented enough to have a layer of its own.
>
>> Having tensorflow in its own layer which as a specific purpose and its
>> specific maintainers who understand it is in my view much more
>> desirable and sustainable.
> I think its a good idea to have various AI infras in one layer
> including tensorflow
> unless we have large enough dev community to maintain each of them so I like
> meta-ai conceptually.
I know that creating a standalone meta-ai rather than meta-tensorflow would
be more reasonable (that was my initial layer naming), but:
- It would dramatically increase the maintainer burden, so I limited the
  scope to this specific framework. There are lots of TODOs in tensorflow
  and I am afraid I do not have spare attention for other AI frameworks
  at the moment.
- TensorFlow is standalone enough: its build system is Google's `bazel',
  which, like bitbake, has special rules to build everything from scratch.
  (I have already sent the recipes that do not need Bazel to
  meta-openembedded.)
- Bazel is built with Java, so unless we create sub-layers in meta-ai (such
  as meta-ai/meta-tensorflow), the number of layers meta-ai depends on will
  keep growing as other AI frameworks are added. For users of other AI
  frameworks, depending on unused layers is not a good idea.
- For future AI framework integration: if the framework is huge like
  TensorFlow (another well-known one is Facebook's PyTorch), we could create
  a standalone layer and appoint a dedicated maintainer; if the framework is
  small and light, or consists of fundamental algorithm packages used by
  multiple frameworks, we could create a meta-ai collection, or directly add
  them to meta-openembedded. (For the TensorFlow integration, I added 11
  fundamental recipes to meta-openembedded.)
//Hongxu
>> Cheers,
>>
>> Richard
>>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-23 17:04 ` Khem Raj
2019-02-24 6:33 ` Hongxu Jia
@ 2019-02-25 15:10 ` Stephen Lawrence
1 sibling, 0 replies; 30+ messages in thread
From: Stephen Lawrence @ 2019-02-25 15:10 UTC (permalink / raw)
To: Khem Raj, Richard Purdie
Cc: Manjukumar Harthikote Matha, lpd-cdc-core-dev, Zhangle.Yang,
paul.eggleton, yocto
> -----Original Message-----
> From: Khem Raj <raj.khem@gmail.com>
> Sent: 23 February 2019 17:05
> To: Richard Purdie <richard.purdie@linuxfoundation.org>
> Cc: Manjukumar Harthikote Matha <MANJUKUM@xilinx.com>; Stephen Lawrence
> <stephen.lawrence@renesas.com>; Hongxu Jia <hongxu.jia@windriver.com>;
> mhalstead@linuxfoundation.org; ross.burton@intel.com;
> paul.eggleton@linux.intel.com; yocto@yoctoproject.org; lpd-cdc-core-
> dev@windriver.com; Zhangle.Yang@windriver.com
> Subject: Re: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto
>
[snip]
> > I believe that meta-oe is too large to be maintainable and that we need
> > a larger number of smaller layers.
> >
>
> There is a fine balance to be had, that I have come to realize over years now
> but AI is large enough and segmented enough to have a layer of its own.
>
> > Having tensorflow in its own layer which as a specific purpose and its
> > specific maintainers who understand it is in my view much more
> > desirable and sustainable.
>
> I think its a good idea to have various AI infras in one layer
> including tensorflow
> unless we have large enough dev community to maintain each of them so I like
> meta-ai conceptually.
From a brief discussion with the team here, one issue is the low backwards
compatibility between models from different TensorFlow versions. I don't
know how fast upstream is moving, but there may be demand to support more
than one version per YP release. That's not insurmountable of course, but I
thought I would mention it.
meta-renesas-ai has an MIT license, by the way, if that helps in creating
something more generic and shared.
Regards
Steve
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
` (14 preceding siblings ...)
2019-02-22 16:51 ` Stephen Lawrence
@ 2019-11-20 14:37 ` Mauro Ziliani
2019-11-20 15:40 ` Mauro Ziliani
15 siblings, 1 reply; 30+ messages in thread
From: Mauro Ziliani @ 2019-11-20 14:37 UTC (permalink / raw)
To: yocto
Hi all.
Is there a port of meta-tensorflow for Krogoth or Sumo?
Maybe I need to use it on one of those distributions.
Thanks,
M
On 21/02/19 12:37, Hongxu Jia wrote:
> Hi RP and Yocto folks,
>
> Currently, AI on the IoT edge is becoming more and more popular, but there is no
> machine learning framework in Yocto/OE. With the support of Eric
> <Zhangle.Yang@windriver.com>, Robert <liezhi.yang@windriver.com>
> and Randy <randy.macleod@windriver.com>, after two months' effort, I've
> integrated TensorFlow to Yocto.
>
> Now I am contributing the patches to Yocto for review, and applying to
> create a layer named `meta-tensorflow' on Yocto.
>
> For testing convenience, there is a fork on GitHub:
> https://github.com/hongxu-jia/meta-tensorflow
>
> BTW, I have contributed 11 other fundamental recipes to meta-openembedded,
> and all of them have been merged to the master branch.
>
> Please don't hesitate to share your suggestions.
>
> //Hongxu
>
> Testing Commands:
> -----------------
> See README
>
> Testing, Expected Results:
> --------------------------
> See README
>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-11-20 14:37 ` Mauro Ziliani
@ 2019-11-20 15:40 ` Mauro Ziliani
2019-11-20 16:56 ` Mauro Ziliani
0 siblings, 1 reply; 30+ messages in thread
From: Mauro Ziliani @ 2019-11-20 15:40 UTC (permalink / raw)
To: yocto
I forked the repository and I'm trying to port the layer to Krogoth.
M
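For anyone trying the fork against an existing build, wiring it in is the
usual add-layer step; a sketch of a bblayers.conf fragment (the checkout path
is illustrative):

```conf
# bblayers.conf fragment: register the meta-tensorflow checkout.
# Replace /path/to with wherever the fork was cloned.
BBLAYERS += "/path/to/meta-tensorflow"
```

The layer's own layer.conf compatibility settings would still need adjusting
for a release as old as Krogoth.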
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-11-20 15:40 ` Mauro Ziliani
@ 2019-11-20 16:56 ` Mauro Ziliani
2019-11-20 20:20 ` Randy MacLeod
0 siblings, 1 reply; 30+ messages in thread
From: Mauro Ziliani @ 2019-11-20 16:56 UTC (permalink / raw)
To: yocto
Is it possible to compile TensorFlow against Python 2.7?
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: Review request 0/13: Contribute meta-tensorflow to Yocto
2019-11-20 16:56 ` Mauro Ziliani
@ 2019-11-20 20:20 ` Randy MacLeod
0 siblings, 0 replies; 30+ messages in thread
From: Randy MacLeod @ 2019-11-20 20:20 UTC (permalink / raw)
To: Mauro Ziliani, yocto, Jia, Hongxu
On 11/20/19 11:56 AM, Mauro Ziliani wrote:
>
> Is it possible to compile tensorflow against python2.7?
>
I doubt that it's easy or supported, but Hongxu, who lives in China, will
reply later today to explain.
Btw, Krogoth has python3; why not use it?
../Randy
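One lightweight way to keep such a port honest is to make scripts that assume
the python3-based recipes fail fast under Python 2.7; a minimal illustrative
helper (the function name is made up for this example, not part of any recipe):

```python
import sys

def require_python3():
    # Fail fast under Python 2.7 instead of hitting errors deep inside
    # python3-only code paths (e.g. a python3-built TensorFlow stack).
    if sys.version_info[0] < 3:
        raise RuntimeError(
            "Python 3 required, running %d.%d" % sys.version_info[:2])
    return sys.version_info[:2]

major, minor = require_python3()
print("running under Python %d.%d" % (major, minor))
```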
--
# Randy MacLeod
# Wind River Linux
^ permalink raw reply [flat|nested] 30+ messages in thread
end of thread, other threads:[~2019-11-20 20:20 UTC | newest]
Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-21 11:37 Review request 0/13: Contribute meta-tensorflow to Yocto Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 01/13] initial Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 02/13] bazel-native: add version 0.21.0 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 03/13] create classes/bazel.bbclass Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 04/13] tensorflow-native: add version 1.13.0 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 05/13] tensorflow-native: add Python 3.7 compatibility Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 06/13] tensorflow-estimator: add version 1.13 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 07/13] Customize Yocto toolchain for cross compiling Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 08/13] tensorboard: add version 1.12.2 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 09/13] tensorflow: add version 1.13.0 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 10/13] tensorflow: fix gcc internal compile error on qemuarm64 Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 11/13] tensorflow: support musl Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 12/13] build tensorflow-native and tensorflow in order Hongxu Jia
2019-02-21 11:37 ` [meta-tensorflow][PATCH 13/13] add README Hongxu Jia
2019-02-21 12:27 ` Review request 0/13: Contribute meta-tensorflow to Yocto Richard Purdie
2019-02-21 18:39 ` Tim Orling
2019-02-22 16:51 ` Stephen Lawrence
2019-02-22 17:22 ` Hongxu Jia
2019-02-22 17:32 ` Khem Raj
2019-02-22 20:49 ` Manjukumar Harthikote Matha
2019-02-22 23:57 ` Hongxu Jia
2019-02-23 15:29 ` Richard Purdie
2019-02-23 17:04 ` Khem Raj
2019-02-24 6:33 ` Hongxu Jia
2019-02-25 15:10 ` Stephen Lawrence
2019-02-24 5:40 ` Hongxu Jia
2019-11-20 14:37 ` Mauro Ziliani
2019-11-20 15:40 ` Mauro Ziliani
2019-11-20 16:56 ` Mauro Ziliani
2019-11-20 20:20 ` Randy MacLeod